<<

SKELLEY, CHELSEA ATKINS, Ph.D. Interfaces and Interfacings: Posthuman Ecologies, Bodies and Identities. (2016) Directed by Dr. Stephen Yarbrough, 241 pp.

This dissertation posits a posthuman theory for a technologically driven, ubiquitous computing (ubicomp) world, specifically theorizing cognition, intentionality

and interface. The larger aim of this project is to open up discussions about human and

technological relations and how these relations shape our understanding of what it means

to be human. Situating my argument within posthuman and rhetorical theories, I discuss

the metaphorical cyborg as a site of resistance, the everyday cyborg and its relations to technology through technogenesis and technology extension theories, and lastly the posthuman cyborg resulting from advances in biotechnology. I argue that this posthuman cyborg is an enmeshed network of biological and informatic code with neither having primacy. Building upon Anthony Miccoli, I see the interface (the space in between) as a

functional myth, as humans are mutually constituted by material, biological,

technological and social substrates of a networked ecology. I then reconfigure Kenneth Burke's identification theory for the technological age and argue that the posthuman subject consubstantiates with the substrates (or substances) to continuously invent a fluid intersubjectivity in a networked ecology.

This project, then, explores both metaphorical and technological interfaces to better understand each. I argue that interfacing is a more thorough term to understand how humans, technologies, objects, spaces, language and code interact and thus constitute what we conceptualize as "human" and "reality." This framework dismantles the interface as a space in between in favor of a networked ecology of dynamic relations. Then, I examine technological interfaces and their development as they have moved from

the desktop to touchscreens to spaces wherein the body becomes a literal interface and site of interaction. These developments require rhetoric and composition scholars to interrogate not only the discourse of technologies but the interfaces themselves if we are

to fully understand how human users come to identify with technologies that shape not

only our communication but also our sense of subjectivity, autonomy, agency and

intentionality.

To make my claims clearer, I analyze fictional representations of interfaces to chart more accessible means through which to understand the larger philosophical arcs in posthuman theory, intentionality, and artificial intelligence. Using the films, then, this work seeks to elucidate the complexities of relations in the networked ecologies that define how we understand ourselves and the world in which we live.

INTERFACES AND INTERFACINGS: POSTHUMAN ECOLOGIES, BODIES AND

IDENTITIES

by

Chelsea Atkins Skelley

A Dissertation Submitted to the Faculty of The Graduate School at The University of North Carolina at Greensboro in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

Greensboro 2016

Approved by

______Committee Chair

© 2016 Chelsea Atkins Skelley

APPROVAL PAGE

This dissertation, written by Chelsea Atkins Skelley, has been approved by the following committee of the Faculty of The Graduate School at The University of North

Carolina at Greensboro.

Committee Chair ______

Committee Members ______

______

______

______Date of Acceptance by Committee

______Date of Final Oral Examination


TABLE OF CONTENTS

Page

LIST OF FIGURES ...... v

CHAPTER

I. THE AGE AND ANXIETIES OF UBIQUITOUS COMPUTING ...... 1

Tensions and Anxieties Arise ...... 10
Unpacking the Binaries ...... 16
Virtual vs. "Real" ...... 16
The Corporeal Turn: Materiality Matters ...... 19
Moving Forward ...... 23

II. CYBORGS, BODIES AND INTERFACINGS ...... 28

Why Posthumanism? ...... 29
Challenging Cartesian Dualism ...... 33
Branches of Posthumanism ...... 36
The Posthuman Cyborg ...... 41
Metaphorical Cyborg as Resistance ...... 43
"Everyday" Cyborgs ...... 44
Technology and Cyborg Relations ...... 46
Technogenesis ...... 47
Technology as Extension of the Body and Mind ...... 48
Problems with Extension Theories ...... 53
Posthuman Cyborg as Interface ...... 54
Code as Material ...... 55
Material Cyborgs ...... 58
Posthuman Invention: Identification and Consubstantiality ...... 64

III. INTERROGATING “INTERFACE” IN A POSTHUMAN WORLD ...... 67

Interfaces Defined ...... 69
Human-Computer Interaction Design Goals ...... 72
Natural User Interfaces ...... 76
Body Movement and Gesture in Interface Interaction ...... 80
Gestural Interfaces and ...... 83
The Interface in Rhetoric and Composition ...... 88
Troubling the Discourse of HCI ...... 91


The Myth of Objectivity and Neutrality ...... 93
Transparency as Sleepwalking ...... 95
Interrogating the "Natural" ...... 97
Posthuman Reconsiderations of the Interface and Body ...... 102
Redefining "Interface" ...... 103

IV. FROM SEPARATION TO AGGREGATION: TRACING THE POSTHUMAN ARC IN SCIENCE FICTION ...... 110

Why Study Science Fiction Interfaces? ...... 113
Comparing Filmic Interfaces and Implications ...... 122
Minority Report – Dystopian Ubicomp ...... 123
Iron Man – The Beginning of a Cyborg Subjectivity ...... 133
Iron Man 2 – Stark's Extended Cognition ...... 137
Iron Man 3 – Beyond Extension to Aggregated Cognition ...... 143
Avengers: Age of Ultron – Fully Bodied AI ...... 153
Posthuman Visions ...... 162

V. RETHINKING INTENTIONALITY FOR POSTHUMANS AND ARTIFICIAL INTELLIGENCE ...... 167

Machine Intelligence and Intentionality ...... 178
Artificial Intelligence and Identification ...... 183
Spike Jonze's Her: AI's Bildungsroman ...... 188

BIBLIOGRAPHY ...... 199

NOTES ...... 239


LIST OF FIGURES

Page

Figure 1. Tweet between Jon Favreau and Elon Musk Shows Sci-fi’s Influence ...... 118

Figure 2. The Precogs in the “Temple” with BCIs Linking their Cognition ...... 123

Figure 3. The Pre-Crime Scrubber Interface ...... 125

Figure 4. Anderton “Corrects” the Interface ...... 127

Figure 5. Wearable Brain Interface that Imprisons the Wearer ...... 130

Figure 6. Tony Stark Physically Touches and Tests Designs via Hologram ...... 136

Figure 7. Stark Places his Design into a Holographic Chest Cavity ...... 138

Figure 8. Stark Grasps the Holographic Nucleus ...... 140

Figure 9. Stark Expands His Arms and the Hologram Rendering a 360° View ...... 141

Figure 10. JARVIS’ Digital Representation of the Crime Scene ...... 144

Figure 11. Stark Pulls out a Digital Cross Section of the Hologram ...... 145

Figure 12. Stark’s Suit Responds to his Mind’s “Call” Attaching to his Body ...... 148

Figure 13. Stark’s XLII Suit, Coded to his DNA, Activates while Stark Sleeps ...... 152

Figure 14. JARVIS and Ultron’s Holographic Renderings ...... 157

Figure 15. Ultron’s First “Body.” ...... 158


CHAPTER I

THE AGE AND ANXIETIES OF UBIQUITOUS COMPUTING

Imagine three scenarios, three scenes in time. In the first, a man clad in black stands directly in front of a wall-sized curved, clear screen. On his hands, he wears black gloves that cover his thumbs, index and middle fingers, bluish lights emanating from the tips of his fingers. Raising his arms like a conductor, he sweeps his arms, twists his hands, reaches out and moves images hovering on the transparent, curved screen, without actually touching the screen in front of him. His hands rewind and fast-forward video

images, rotate, zoom in and out, as his actions control the user interface windows floating in front of him. Later, this same man walks into a shopping mall wearing everyday clothing, engaging in a normal, seemingly banal activity. Upon his entrance into this retail space, advertisements using eye-scanning technology identify him and call his name, offering products and sales based on his individual metadata. So pervasive are the scanners, sensors, and screens, the man would have to physically alter his material body to avoid such constant interfacing and identification.

In our second scenario, a man in a black shirt sits at a typical computer desk while using his hands to move a three-dimensional (3D) wireframe, a skeletal representation made of lines and points, of a rocket engine displayed on a standard computer monitor. He reaches out and "grabs" the object without touching the screen, using only hand gestures

to move and manipulate it. He zooms in and out, spins and catches the model, moving it on the computer screen in front of him, his hands never touching the screen itself. He can manipulate and shape the model at will. Then, this engineer uses another interface to interact with a full 3D Computer Aided Design (CAD) model of the engine displayed on a large black screen. Again, he uses his hands to alter the engine's design without having to figure out how the computer software can achieve the changes he wants.

Rather, his hands simply move the model and make the desired alterations as needed. In his lab, another engineer puts on a pair of 3D goggles, and she further manipulates a 3D projection of this mechanical part, the object floating in midair in front of a screen.

Using hand gestures alone, she builds, examines, and adjusts the model, shaping the engine to the required specifications. With yet another interface, the original engineer uses a freestanding holographic projection of the 3D wireframe engine that floats above a glass structure. Finally, another engineer dons a virtual reality headset that tracks his head and body movements, allowing him to move around the 3D model in an immersive environment, the 3D projection right in front of his body, there to further move, manipulate, create, compose. Utilizing these various technological interfaces, these engineers are able to take an imagined idea and translate it into actual objects, building their rocket engine components much more intuitively and efficiently than if they used traditional computer software and interfaces that rely on less intuitive interactions to achieve these designs and alterations.

For the final scenario, a woman wearing glasses walks down a sidewalk in a foreign country. She casually tries to find her way to her supposedly nearby destination:

an art museum. She glances up at the sky, focuses on the graying clouds, and with a few utterances she sees an instant weather report for her location on her glasses lens as she walks. Along this unknown street, she then spots a sign written in Dutch, a language she can neither read nor understand. With a few more quick phrases spoken aloud, the woman knows the meaning of the sign and has turn-by-turn directions to the museum. A few more words and a wink and she photographs the historical landmark that the street sign notes and gets in-depth information about the site and its history.

These three anecdotes suggest a world teeming with interfaces, biometrics, wearables, 3D projections, holographs and many, many more, a world of interfacing happening between spaces, bodies and objects. The first example is the fictional character John Anderton, protagonist of Steven Spielberg's Minority Report, set in 2054, using what is considered a canonical gestural user interface. This fictional interface is but one example of a spatial, natural user interface (NUI) wherein the user can directly interact with multimedia content using hand gestures in existing three-dimensional (3D) space.

His body exists as yet another interface, at once controlling data while simultaneously brimming with data to be scanned and traced; in fact, he is data. Science fiction writers and filmmakers have historically imagined technologies, and these often dystopian, fictional representations explore the dynamics and imagined potentialities between humans, technologies, and culture. The second scene describes South African engineer and entrepreneur Elon Musk and his engineers in his laboratory in 2013, specifically the

various interactive gestural interfaces his team uses to design rockets for SpaceX. The last scene is a hypothetical example of a woman wearing Google Glass using some of the features of this recent wearable technology. The two "real world" anecdotes look similar to the fictional depictions of science fiction interfaces and signal the arrival, or at least the burgeoning, of pervasive and ubiquitous computing (ubicomp).

In 1991 Mark Weiser envisioned "a new way of thinking about computers, one that takes into account the human world and allows the computers themselves to vanish into the background" (94). Weiser does not refer to personal computing (the trend at the time of publication) or mobile computing (which dominates our current moment); both of these instantiations of technological development still focus on a single device. And unlike virtual reality that relies upon a computer-generated environment, ubiquitous computing (ubicomp) concerns "invisibly enhancing the world that already exists" (94).

Ubiquitous computing means not only “everywhere” but “in every thing” (Greenfield

11). Computers1 shrink into small microprocessors embedded into the built world, and people “interact with these systems fluently and naturally, barely noticing the powerful informatics” at work (Greenfield 11). Everyday objects, surfaces, and spaces then have new characteristics and become loci of information processing.

The concept of ubicomp has expanded since Weiser first theorized it, and there is no single, absolute definition.2 Weiser understood it as distributed computing that is contextually aware of the environment with implicit human-computer interaction.

Similarly, ambient intelligence, another form of ubicomp, is embedded, personalized, adaptive, context-aware, and anticipatory (Aarts and Marzano 16). Others see ubicomp as


primarily concerned with augmented reality and distributed mobile intelligence, like the

third hypothetical (Endres, Butz and MacWilliams). Most broadly, ubiquitous computing

references computing without computers; information processing is everywhere,

invisible, yet available, and situated in the material environment.3 Pervasive and

ubiquitous computing together constitute "a third wave in computing"

following distributed and mobile computing (Zhao and Wang 594). I use the term

ubicomp to describe the overarching model of human computer interaction (HCI)

wherein information processing is moved from the desktop to the environment as

Information and Communication Technology (ICT) systems are embedded everywhere, available for interaction, yet invisible (Greenfield 2). Technologies, specifically interfaces, are transparent, blending unobtrusively into the material environment (including bodies) as sites of interaction. In other words, devices and functionalities are hidden in larger, interacting systems offering seemingly natural user interactions. Explicit interactions in complex systems become too distracting, intrusive, or overwhelming, so ubicomp, according to technology designers and theorists, supports more implicit human-computer interaction (iHCI) with hidden, invisible interfaces (Poslad 11). Put simply, the

computer disappears, fading into the periphery.

As hardware and software companies rapidly develop digital technologies, these technologies proliferate through consumer adoption and are embedded in the world around us (and in ourselves). With this progress, "things" become crucial. This "Internet/Cloud of Things"

(Pew Research Center, “The Internet” 6) can be understood as a “global, immersive, invisible, ambient networked computing environment” constituted through an ecology of


globally-connected sensors, software, cameras, etc. embedded in objects, spaces and even

our bodies (Pew Research Center, "The Internet" 1). Users wear sensors (such as Fitbits

and Apple Watches) to monitor themselves or others, as they move in and out of spaces

and environments also containing sensors—for example, sensors that monitor rates of

traffic, parking spaces, climate conditions, water usage and more. Built and natural

environments—prime sites for technology such as cars, homes, oceans, forests, etc.—

become sites of sensors monitoring the states and interactions taking place, whether monitoring temperature controls in your home or pollution levels in the oceans or air, or controlling safety functions in a vehicle. J. P. Rangaswami predicts, "'Everything' will become nodes on a network" [emphasis added] (qtd. in Pew Research Center, "The

Internet” 6). Further, Patrick Tucker explains,

Here are the easy facts: In 2008, the number of Internet-connected devices first outnumbered the human population, and they have been growing far faster than have we. There were 13 billion Internet-connected devices in 2013, according to Cisco, and there will be 50 billion in 2020. These will include phones, chips, sensors, implants, and devices of which we have not yet conceived. (qtd. in Pew Research Center, “The Internet” 6)

These types of predictions show a move toward a networked world wherein everything we do creates data. With vast networks of sensors and devices all producing and processing data, human users become part of this network. Calleja directly states, “Our interaction with the network turns us into part of the network itself” (7). Whether bodies are nodes through wearable sensors, tracked via metadata, operating an interface, or sustained by medical implants, the body is a material, biological, and technological

component of ubicomp networks. As human bodies become loci for processing and interaction, the lines between the body and technology blur.

A world abounding with interfaces thus shapes how we understand what it means to be human, as humans converge with technologies in varied ways. When technologies and humans merge into something else, like the cyborg, technologies can be a source of anxiety for users, particularly when they feel out of control or that technologies are

“doing” things to them. These anxieties can be seen in the ways that users, tech writers, and cultural critics talk about technologies in popular media and the subsequent debates that ensue between the writers and their online audiences. The anxieties are also seen in films and literature, particularly in science fiction, which offers the ideal platform for playing out the potentialities and pitfalls of technology on human life.

Technology affects how society understands itself in terms of "technoculture" (the dynamics between technology, politics, and culture) and how we understand ourselves (Penley and Ross). Therefore, analyzing technology develops a deeper understanding not only of technology itself but also of our engagements with it. Dorothy

Winsor states that “technology has an enormous shaping force on our lives…As rhetoricians, we can contribute to an understanding of this crucial human activity”

(“Guest Editor’s” 287).

Rhetoric of technology scholars study technological discourse in texts to parse how people speak and write about technology. Charles Bazerman asserts that this field analyzes “the rhetorical productions that surround a material technology” (“The

Production” 381). Some take a classical approach to technology, focusing on how the


appeals, topoi, kairos, etc. work in technical genres and praxes (Miller; Walzer and

Gross; Miller and Selzer). Carolyn Miller, in fact, argues that “no conceptual vocabulary other than that of classical rhetoric makes it possible to attend to the suasory nature of discourse” in all of its complex contexts and communities including technological ones

("Opportunity" 93). Others take an epistemic approach to technology focusing on how meaning is socially constructed and reconstructed through discourse generated from various actors; these scholars specifically examine how discourse, within and outside expert communities, shapes technology and vice versa (Winsor; Warnick; Doheny-Farina; Ornatowski; Medway; Bazerman; Van Nostrand). For instance, Bazerman's book

The Languages of Edison’s Light analyzes the social interactions between the public and technology manufacturers, identifying the ways that Thomas Edison used analogy as

"symbolic engineering" to facilitate the 1870s public's acceptance of a new electric power source (Languages 335). Carol Berkenkotter analyzes "everyday texts," meaning "the mundane bureaucratic, institutional, and organizational documents" like official records, websites, proposals, memos, etc., to better understand how technologies are created and adopted (50). Regardless of the perspective, these scholars focus on language about technologies or the rhetorical effects of technology on communication.

Technology-infused environments raise broader social and cultural questions for a diverse array of scholars including those in rhetoric, cultural theory, women's studies, media studies, communications, philosophy, literature, cognitive science, and technology studies. Some of these queries concern whether human beings are transforming into a different type of being, whether human physiology is changing, and how we might come to think or


act. It is these posthuman theorists to whom this project speaks. Using a posthuman framework, posthumanists explore these larger queries about what it means to be human.

Vivian Sobchack argues that “transformation through technology has . . . detached itself

from visions of rationality and [social] progress and attached itself (with some anxiety) to

more subjective states of technological being” (157). Posthumanism explores how conceptions of the human can be re-imagined and redefined given the coalescence of

human and non-human. Various branches of posthuman theory seek to destabilize the

autonomous self and the mind as well as the binaries between human/object, body/mind,

and human/technology. This project in particular builds upon this interdisciplinary

scholarship that addresses these broader concerns of posthuman thought.

I seek to outline a posthuman reconfiguration of human cognition, one that disrupts the closed, autonomous, rational subject in favor of a distributed posthuman figure with constitutive relations in a networked ecology that includes material, biological, technological, and social substrates. I build upon the work from posthuman,

post-phenomenologist scholar Anthony Miccoli. However, I aim to bring rhetoric to bear on his larger claim, deepening his argument about cognition, intentionality, and the multiple functions of interfaces, which I discuss in subsequent

chapters.

This initial chapter introduces ubicomp and argues that pervasive networks of

sensors and devices are embedded in the environment and our bodies. These technologies

are proliferating through fast-paced technological development and consumer adoption.

And human bodies become part of ubiquitous networks as key sites of both interaction


and information collection and processing as users carry mobile devices, wear sensors,

and/or use interfaces that engage the body and space. With this merging with the

network, and the ways that users “become one with” the technologies, the lines between

the human and technology blur. This instability often brings about tension and anxieties,

as users feel overwhelmed by the ubiquity of technology and the role it plays in many

aspects of life.

Tensions and Anxieties Arise

A cursory look at some of the debates taking place in popular media during the last

decade highlights the tensions of which I speak. I examine the set of texts below

specifically because they are widely read and frequently shared, indicative of a

broader public conversation about technology. They have generated further debate in op-eds, TED Talks, and online conversations. Technology scholar (and internet advocate)

Zeynep Tufekci points out how quickly media outlets published opinions or narratives

about technology’s negative consequences such as deleterious effects on users’ thinking

processes as well as the fabric of human social relationships. But these conversations

spark larger debates about technology by highlighting binary points of view, forcing the

reader to pick a side. One strategy adopted by rhetoric of technology scholars, as noted

above, is to examine these types of texts to see how technological concerns get distilled

for mass audiences and how this discourse drives broader discussions about technology

users as a whole.

Each of the writers discussed raises questions about technologies’ powerful

effects on our physiology and behavior. And these narratives illuminate commonplace


thinking about the presumed static boundaries between the virtual and real as well as

technology and the body. The authors’ apprehensions are apparent in the ways that the

narratives construct digital culture and grapple with the angst that society at large has

long held about technological impact on humanity. For instance, there has been an

ongoing conversation in popular media about what specific technologies are “doing” to

us, whether the debate concerns technology's effects on our brains, intelligence,

social relationships, literacies, social skills, or attention spans.

This largely pessimistic camp exhibits dystopian attitudes, ranging from cautious

suspicion to apocalyptic predictions of irreversible effects. For instance, in 2007

technology writer Nicholas Carr published his extremely popular, much-discussed article in The Atlantic, "Is Google Making Us Stupid?," speculating that internet usage has

affected users’ ability to concentrate and read in-depth. Web searches, he argues, splinter

our attention and thus rewire the brain, making it less adept at focused, deep thought

(Carr). Furthering his point in a subsequent research report he writes that the web

diminishes deeper intelligence in favor of “what might be called a utilitarian

intelligence…The price of zipping among lots of bits of information is a loss of depth in our thinking" ("The Internet" 6). His follow-up book The Shallows: What the Internet Is Doing to Our Brains4 develops his claims about the brain's neuroplasticity and outlines how shallow habits and neurological changes sacrifice focused attention for a more fragmented way of thinking shored up by the convenient, yet frantic nature of the

Internet. Carr says that for him the Internet in particular is “not just technological progress but a form of human regress” (Shallows). Tech writer and journalist Maggie


Jackson voices a similar concern that the Internet fosters distraction: rather than gaining the intelligence promised by access to copious amounts of information and data, we are becoming more ignorant (16). These authors raise questions about the direct relationship between technology and cognition, a debate that scholars have taken up at length, as I'll show in subsequent chapters.

Other strands of this broader conversation debate technologies' effects on communication practices and literacy, illuminating the virtual/"real" binary, in which many authors privilege the real. For instance, Stephen Marche's widely read "Is Facebook Making Us Lonely?" in The Atlantic questions the shift from "real" relationships to more superficial ones online and the effects of this shift on users' mental health. The New York Times has run multiple, similar op-eds over the last decade, all asserting a deterioration of human relationships due to the prevalence of social media and mobile devices (Cohen; Egan; Foer; Fredrickson; Franzen). Sherry Turkle's popular work

Alone Together raises similar concerns about the “real world” connections lost in favor of virtual ones. These texts5 rely upon a basic binary between the real and the virtual and argue that the virtual is less valuable.

Further, this strand of popular discourse demonstrates the notion that technology either deterministically changes humanness or that virtual interactions are altogether disembodying. Some see the supposed disembodiment as worrisome; others, liberating; regardless, the divides between real and virtual, human and technology surface in each text in various ways, and these ideas are the source of the anxieties.


These popular texts do, however, raise important theoretical questions that spark conversations in the public sphere, even if they do not delve into the scholarly theories that undergird them. Does technology "do" things to us? Our brains? What constitutes a divide between the "real" and the "virtual?" Does technology disembody the user? Does it render the human into data? Does it engage the mind, leaving the body behind? Or does the divide between body and mind even exist? What, ultimately, is the relationship between reality, our bodies, and technology? These are some of the same questions that lead posthumanists to offer more nuanced theories in response.

These questions and anxieties, however, are not new. For instance, in Plato’s

Phaedrus, Socrates tells the story of Theuth and Thamus, and an exchange about the merits of writing ensues. Theuth believes that writing will “make the Egyptians wiser and will give them better memories” (274b). Thamus disagrees arguing,

… this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. (274b-275)

Much like Carr’s concern about attention span, here in Phaedrus lies disquietude about the deleterious effects of writing on the memory; if one has the luxury of writing, she

does not need to commit ideas to memory but will rather "trust" the words on a page over those in the mind.

Seneca raised concerns similar to Carr's about books: "the abundance of books is a distraction" (qtd. in Blair 15). Later, with the rise of the printing press, critiques like Seneca's echoed. Immanuel Kant, for instance, argued that increasing the number of books available certainly fostered reading, but reading "superficially" (30). Chad Wellmon points out that, like the anxieties contemporary critics have about search engines oversimplifying information and fostering surface-level thought, similar concerns were raised about "Enlightenment reading technologies" that managed the massive amounts of texts produced by the printing press, such as encyclopedias, bibliographies, marginalia, commonplace books and more (69). The point is that with the emergence of new technologies come these types of tensions and questions about their effects on those who use them.

More importantly, the "newness" ascribed to technologies recurs throughout history. Lisa Gitelman posits "the truism that all media were once new" (1). All media and technology experience "novelty years, transitional states, and identity crises," and studying these characteristics develops a fuller picture of media history, with its values and assumptions, and how communication is historically and continuously shaped (Gitelman 1). In other words, Gitelman understands that all technologies are initially lauded and then bring about periods of change. She does not, though, say that all are the same. In addition, she notes a tendency for people to ascribe a deterministic agency to media and technologies, something seen in the mass media texts discussed above.

Further, when debates about technologies offer only “dichotomous perceptions,” this ignores the richer ecological dynamics that technological environments have (Selfe,

Technology and Literacy 36). Cynthia Selfe sees such perspectives as too limited and simplistic as they force readers to choose a side, “either beneficial or detrimental”

(Technology and Literacy 36).

The charge Selfe levels at such binary responses also implies another issue with these types of arguments: the binary frameworks they assume are problematic. Whether utopian views that claim technology will free humans from their bodily form or dystopian narratives that worry about the deleterious effects of technology on bodies, brain processes, and human relations, both poles focus on the causal effects of technology on humans. This focus is one of technological determinism, regardless of the attitude, and it simplifies not only how humans and technologies form each other, but more importantly, the extremely complex dynamics between bodies, objects, spaces, interfaces and code. In essence, these typical popular narratives make humans appear as causal results or, worse, victims. Such debates, as Wellmon notes, obfuscate "the ways in which we actually engage it [technology] and the world of which it and we are a part. All of this…tends not only to marginalize human persons, but also render technology just as abstract"

(Wellmon 67). But technologies are not abstract; they are quite material, and users design, use, and adapt technologies to their purposes in very material ways.


Unpacking the Binaries

Binary constructions such as human/technology, human/object, body/mind, and virtual/real inform these tension-driven narratives, but they problematically oversimplify complex dynamics and produce troublesome effects that reinforce the presumed divides, divides which are untenable. These assumptions must be reconsidered in favor of a more nuanced, ecological framework that encompasses the materiality of objects, bodies, and spaces, one that can make more sense of the posthuman world we inhabit.

Virtual vs. “Real”

The first assumption to dispel in brief before moving to the larger, more sweeping ones is the seemingly stark divide between the virtual and the material or “real.” Despite evidence to the contrary, the idea of the virtual typically evokes images of disembodied engagement like virtual reality, often conceptualized as static users simply staring at screens (perhaps wearing headsets), while their mobile minds are exploring “cyberspace.”

William Gibson’s designation “cyberspace” in Neuromancer conceptualized a mind/body, virtual/real split, defining cyberspace as

A consensual hallucination experienced daily by billions of legitimate operators, in every nation…. A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the non-space of the mind, clusters and constellations of data. Like city lights, receding…. [emphasis added] (51)

By no means is Gibson a technology naysayer, far from it; however, his characterization of cyberspace as a “hallucination” and “non-space of the mind” established some lasting tenets: that cyberspace isn’t real and that it is not a space of the body but rather of the

mind. Gibson understood this space as liberating, the physical body left behind no longer bearing its own weight, as the mind wanders disconnected from material reality into a purely virtual realm cut off from the body's physicality.

This vision of the virtual as something other than real, something illusory and immaterial, persisted in critical writings about cyberspace. In their critical essay

"Cyberspace and the American Dream: A Magna Carta for the Knowledge Age," Esther Dyson et al. argue that the material "has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things." The world, in essence, found value in bytes instead of matter. John Barlow's similar essay "A Declaration of the Independence of Cyberspace" concurs, "Ours is a world that is both everywhere and nowhere, but it is not where bodies live." He reiterates his point in another essay, observing that "in the years to come, most human exchange will be virtual rather than physical" (Barlow, "The Economy of Ideas"). Media theorist Allucquere Rosanne Stone's The War of Desire and Technology at the Close of the Mechanical Age forecasts pervasive computing, in essence saying that technology will be invisible and natural in this "technosocial" age wherein identity, agency, and bodies emerge in fragmented, unstable ways, fundamentally disconnected from materiality (36-59). Some rhetorical scholarship likewise relies upon the divide between virtual and real, specifically with terminology concerning on- and offline identities (Zappen; Grabill and Pigg; Warnick and Heineman). James Zappen, for instance, outlines a digital rhetoric theory, one element of which includes "complex negotiations between various versions of our online and our real selves" [emphasis added] (323). These texts maintain the real/virtual divide or, even when acknowledging it, still use terminology that bifurcates on- and offline activities and identities, failing to see how intertwined the two are, each affecting the other.

However, numerous scholars have demonstrated that the virtual is not

disembodying or immaterial (Baym; Liestol; Munster; Bakardjieva; Jenkins) and that

materiality and things do indeed matter. In Becoming Virtual: Reality in the Digital Age,

Pierre Lévy introduces an emergent logic, critiquing the common opposition between

virtuality and reality: “[T]he word ‘virtual’ is often meant to signify the absence of

existence, whereas ‘reality’ implies a material embodiment, a tangible presence” (23).

Lévy sees actuality and virtuality as simply "two different ways of being" (23). Adriana

de Souza e Silva notes that cyberspace “has been traditionally considered an immaterial

space, a place for the mind, contrasting to physical reality, inhabited by the physical

body” even though scholarly work has moved away from the gap between virtual and

real, on- and offline spaces (209-10). Theorists now explore the dynamics between the

various technological, material and social spaces, and how these make up a diversity of

everyday spaces (Berker et al.; Berry, Kim, and Spigel; Cabañes and Acedera; Madianou

and Miller, 2011).6 It has become more and more evident that the so-called “real” and the

virtual are not separate, but interconnected, collectively constituting material reality.

"Cyberspace," once understood as separate from the physical world and isolated in the mind, is in fact intricately connected to the physical world, with some positing a "hybrid reality" that only gets more complex in ubicomp environments (de Souza e Silva 214).

This hybrid or "mixed reality," as Mark Hansen calls it, sees the physical and virtual not as separate entities, with one being more real than the other (Bodies in Code 13). Monika


Fleischmann and Wolfgang Strauss concur and describe a shift in media studies scholarship as “turning the theory on its head that man is losing his body to technology”

(qtd. in Grau 219). For them, interactive media does not disconnect the body from the virtual but rather extends it into new spaces of action.

Rhetorical scholarship has followed suit to rethink the separation of the virtual and the real, particularly in reconciling on- and offline identities. Jenny Davis argues that the structure of social media platforms converges "real" and virtual identities through the use of real names and other features. Helen Kennedy further asserts that on- and offline identities are "continuous…not reconfigured versions of subjectivities in real life"

(861). Megan Boler also notes that the construction of online identities brings embodied identities with them. All of these scholars gesture to the fact that the virtual world is not simply a technologically generated, self-contained and immersive simulacrum; rather, the combination of material and virtual perceptual realities creates realms of embodied, situated perception and engagement, with the body playing a key role in this engagement rather than being disconnected from it (Varela 96-102).

The Corporeal Turn: Materiality Matters

Far from the material being relegated to the presumed purely virtual world, materiality matters. Kevin Ashton's 1999 presentation "The Internet of Things" reminds us that discussing materiality when theorizing technology and digital culture is not exactly new. Ashton, contrasting Gibson and Barlow, posits, "We're physical, and so is our environment. Our economy, society and survival aren't based on ideas or information—they're based on things" (qtd. in Greengard 20). In Design, Mediation and the Posthuman, Weiss, Propen, and Reid note that since the 1990s the assumption that

“computers have introduced us into a virtual era in which the material world is becoming

irrelevant” has long persisted, but the millennium gave way to a post-millennium “turn

from the virtual age to the age of the thing…matter continues to matter and things still do

things” (17). Hardt and Negri refer to this scholarly direction as a “corporeal

transformation” [emphasis added] (200).

Often “corporeal” conjures images specifically associated with human bodies as

1990s corporeal scholarship addressing embodiment demonstrates (Butler; Sheets-Johnstone; Grosz). However, the OED shows that the term historically referred to things

that are “tangible, associated with material objects” (“corporeal”). The word did not

necessarily designate a (human) body, but rather something, any thing7, human or not,

that is material. So, the corporeal shift that we are seeing is a return to that original meaning of the term.

This turn towards materiality among scholars focuses on re-conceptualizing

nature, as well as humans’ and man-made objects’ relationship to it. Hardt and Negri

acknowledge that “human nature is in no way separate from nature as a whole…nature

itself is an artificial terrain open to ever new mutations, mixture, and hybridizations”

(215). Nature is not a static concept excluding material objects, built spaces (including

virtual), and technology; rather, it includes these along with the components often thought of as "natural" (humans, animals, and natural spaces) as well as non-humans, virtual spaces, objects, and technology. All of these elements constitute nature as a dynamic, hybrid concept.


So our devices, technologies and the corresponding material and digital spaces are

part of nature, and for Hawk, Rieder and Oviedo, the material “small tech” like mobile

devices, tablets, digital cameras and more are key points for analysis. These devices

disrupt what many scholars previously understood as the line between virtual and

material, highlighting the ways in which “the virtual Internet, the material spaces, the

physical body and handheld culture” intersect (xi). Weiss, Propen and Reid agree, seeing

mobile devices as having “made us increasingly aware of our bodies’ interactions with

machines” (12) and blurring the lines between “software vs. hardware, production vs.

consumption…MIND vs. THING” (12). In 2016 mobile technologies constitute a nomadic, hybrid space with perpetually mobile users carrying handheld devices, the

virtual resting in the palm of their hands. Unlike virtual reality wherein the user moves

through a virtual space, with pervasive mobile computing, users literally transport digital

spaces through physical space with each movement of the body. Users interact with a rich

array of integrated interfaces and computing capabilities as they move through physical

spaces. Far from some immaterial notion of a virtual world, a world that is less “real” and

incorporeal, the material matters given the ubiquity of mobile technologies. Small tech

like that which Hawk, Rieder and Oviedo describe demonstrates a shift wherein “users

interact with numerous physical interfaces embedded in material ecologies rather than

virtual ones” [emphasis added] (xv). Hawk, Rieder and Oviedo state, “Small tech

highlights the complexity of the threshold between the material world of big, physical

things and the virtual worlds of conceptual, affective communication and calls for the

attention of humanities scholarship” (xiv). Material relations, then, cannot be ignored.


Numerous cultural theorists and scholars have taken up this charge such as N.

Katherine Hayles, Alphonso Lingis, Pierre Lévy, Brian Massumi, as well as Gilles

Deleuze and Félix Guattari, as they examine how the seemingly immaterial, weightless virtual and the material are not opposed but rather complementary. Despite small devices and other technologies’ apparent invisibility or recession into networks, they still have power in material environments, power to give rise to material ground for the potentialities of human action, communication, and connections in these complicated networks. So our current moment is a catalyst for study because of the ways that it integrates human bodies and material ecologies at both local and global levels (Hawk,

Rieder and Oviedo xiv).

Weiss, Propen and Reid note, “the corporeal transformation demarcates a new kind of fascination with the human as constituted by the world: the materiality that

impacts us, and that makes us up” (18). Scholarly interests in particular show a move

beyond human embodiment to study “the forms of inorganic matter into which humans

project and extend themselves” (Weiss, Propen, and Reid 17). Many theorists now

understand that humans are neither separated from objects, spaces, and technology nor

merely situated in material environments; rather, all of these are co-constitutive in a

complex interdynamic posthuman ecology.

And contemporary posthuman studies disrupt the human-centric and language-centric focus of rhetoric by focusing on materiality, relationality, aesthetics, affectivity,

and transference (Whitson and Poulakos; Condit; Bost and Greene; Hawhee; Rice;

Stormer; Diane Davis). Scholars also look towards troubling how we understand agency


as structured by human-discourse relationships (Rickert; Edbauer). Contemporary posthuman criticism seeks to bridge the gaps between the body, the mind, and the environment. This project, like those which interrogate the dynamics between human and technology as material and discursive, sees the constitution and re-constitution of the body as part of an intricate, aggregate interplay between numerous substrates.

Moving Forward

The larger aims of this project are to work towards a few common goals amongst posthuman theorists. The first goal involves rethinking what it means to be “human” in a posthuman world. To do so I draw on posthuman theory to articulate an intervention into how cognition and technological relations are understood, arguing that humans are constituted by aggregated, distributed relations in a networked ecology made up of bodies, objects, technologies, spaces, language and code. Second, I employ this theory to redefine human cognition as distributed and networked as opposed to the idea of an

autonomous, rational human being with an internally bound cognition separate from the

external world.

Chapter I has outlined the age of ubicomp and how the proliferation of numerous

technological devices, interfaces, and sensors has wrought tensions about seemingly

static boundaries. Further, I've worked to disrupt the binary between the material "real" and the virtual.

This leads into Chapter II, which situates this project within posthuman and

rhetorical theories. After discussing the ways that posthuman theories seek to disrupt and

de-center the autonomous self, I discuss two posthuman cyborgs in rhetorical


scholarship—the metaphorical cyborg as a site of resistance and the everyday cyborg intricately tied to technology—as well as their relations to technology through

technogenesis and technology extension theories. I then discuss the third cyborg that

results from advances in biotechnology, genetic engineering and nanotechnology, arguing

that this posthuman becomes something different from what we've understood as a human, an enmeshed network of biological and informatic code with neither having primacy. Further, I build upon Miccoli's assertion that the interface as a space in between

functions as a myth so that humans have an interior, autonomous sense of self, as opposed

to the aggregated cognition that constitutes human thought processes. Miccoli proposes

that human cognition is distributed across a topology made up of material, technological,

and biological substrates. I further his argument by reworking it as an ecology and adding an

additional substrate: the social. Like Miccoli, I see the interface, meaning the spaces in between, as a functional myth, as humans are mutually constituted by material, biological, technological and social means. Specifically, I reconfigure Kenneth Burke's identification theory for the technological age and argue that the posthuman subject

consubstantiates with the substrates, or "substances," along with other humans to continuously invent the self in a dynamic ecology.

Moving away from the mythic interface, the third chapter analyzes technological interfaces and their development as they have moved from the desktop to touchscreens to spaces where the body becomes a literal interface and site of interaction. This chapter

calls for rhetoric and composition scholars to interrogate not only the discourse of

technologies but the interfaces themselves. This critical analysis is essential because as


interfaces incorporate bodies, designers are no longer creating interfaces; they are creating users. Having a critical understanding of interfaces, then, can give the user a stronger sense of how they work, an interface literacy.

To develop this literacy, I interrogate the concept of interface more thoroughly.

First, I discuss the definitions of interface and interface design principles as well as how

rhetoric and composition scholars have historically understood and theorized

technological objects. I argue that scholars across fields can trouble the discourse of the

interface specifically addressing the myths of objectivity, neutrality, transparency, and

naturalness. Then, I argue for a more critical inquiry of technological interfaces, the

hardware and software that translate code into visual and haptic forms for humans to

interact with and engage. If the boundaries between human, technology, object and

environment are not stable, but fluid and co-constitutive, then neither is the concept of

“interface” operating as the conduit in between. As bodies become interfaces with

innovative natural user interfaces (NUIs), the text-based or graphic user interface

disappears, no longer serving as a visual conduit or mediator; the informatic function is

still there, but there is no visual mediator for the translation of action. And as

biotechnological integration further dissolves the spaces "between," there is no interface

between the technology and the body, but rather an amalgamation. As interfaces

continually “disappear,” becoming transparent in their design with the body as a primary

operating mechanism, software and hardware engineers are no longer designing

interfaces; they are designing users, which means that both engineers and rhetoricians

need new ways of theorizing these active objects. Thus, building on Miccoli, I argue for redefining the interface as an object and a posthuman action.

To elucidate my discussion of posthuman cognition and to make the relations between humans, technology, objects, and environments clearer, Chapter IV uses popular science fiction films to demonstrate my key points. I first outline the ways that science fiction can and often does shape actual interfaces and how the genre works as a fictional ground to play out the anxieties brought about by emerging technologies. Then, I discuss conceptions of the posthuman age in Steven Spielberg's Minority Report to show a world where humans seemingly have power, implied through the depictions of the film's canonical user interface, the PreCrime Scrubber. However, I show that ultimately humans are fodder for interfaces, separate from and susceptible to them in this dystopian narrative that takes technological anxieties like those mentioned in this chapter to full fruition and beyond. Technology does not simply "do" things to us or change us; it controls us in

Spielberg's narrative. From there I move to the Iron Man trilogy to show Tony Stark's transition from a clunky cyborg to a fully integrated posthuman cyborg whose mind is simultaneously part of his material biology as well as his technology by the third film. I then turn to Avengers: Age of Ultron to analyze the evolution of Stark's AI system JARVIS into a fully bodied entity. By charting this course via filmic narratives, I show these representations of various strains of posthuman thought and the anxieties that arise from imagining such futures. But more importantly, I use the films to offer a more accessible means through which to understand the larger philosophical arcs in posthuman theory.


Lastly, in Chapter V I turn to the issue of intentionality and argue that we humans

tend to privilege our own ways of thinking and acting in the world. I discuss various types of intentionality and how there might be intentionality that we do not or cannot understand because it is not our own, such as the intentionality of artificial intelligence

(AI). I use one last film, Spike Jonze’s Her, and analyze Jonze’s representation of a fictional artificial intelligence. This chapter essentially points to the fact that technology, specifically artificial intelligence, will have a type of intentionality decidedly different from our own. The fact that it is different does not negate it. I reiterate that subjectivity, intentionality, agency, and cognition stem from aggregate, distributed ecologies constituted by the dynamics of material, biological, technological and social substrates.

Further, I posit that humans have never been defined by what we are, as rational, thinking autonomous selves, but by the relations that we have as part of a complex, dynamic ecological network; it is all of the relations we form amongst the constituent parts of this ecology that constitute the self and how we know the world. Beyond our anthropocentric cognition, other thinking beings and objects like AI have different substrates and constituents in their own ecologies and, thus, different means of intentionality.

Studying technology from a posthuman and rhetorical framework can help us to understand technology as one aspect, one part of a broader ecology that constitutes how we understand humanity, cognition and intentionality by fostering the idea that technology is not deterministic but one productive site for action and/or resistance, whether human or non-human.


CHAPTER II

CYBORGS, BODIES, AND INTERFACINGS

The ubiquity of technologies, whether in the current age of mobile computing or the seemingly inevitable future of pervasive ubicomp, is becoming our new norm. Our

navigations through technologically informed and embedded realities are so common that

we often do not marvel at these interactions. Families and friends connect via video-chats

and texting. Students take and teachers teach online courses without ever having to set

foot in a classroom. Some of us pay for coffee, program DVRs, or lock our doors using

smartphones or tablets. And oddly enough, these events are becoming commonplace.

In fact, the times when humans notice technology the most are typically when it is

absent or it falters, such as the incongruous feeling when reaching for a device to access

the Internet to find information or connection, but service is not available.8 Users have a

moment of disconnection, feeling cut off from part of their technological selves that

informs actions and thought processes so intricately, yet so routinely. Moments like this are demonstrative of the fact that our subjectivity is linked to actions, actions often integrated with technology.

We live in a world teeming with technological interfaces, devices, bodies, and environments all interacting with each other, and in these ubicomp spaces the divide between human and technology (or human/object) is disrupted. This posthuman landscape where the boundaries between these elements are fluid requires a

reconsideration of how we understand and define these presumed static borders and their ever-increasing fluidity. This project seeks to reconsider how we understand the complex dynamics of these fluid relationships and what it means to be a thinking human.

Why Posthumanism?

Posthumanism is a "technical-cultural concept" and a material reality, so it offers a promising means to explore the complexities of technology and humans as posthuman theory reconfigures nature/culture, human/technology, and body/mind (Hayles, How We Became 22). The "post" seeks to rethink notions intrinsic to much philosophical and rhetorical thought, namely the autonomous subject. The posthumanism I am concerned with explores how the concept of the "human" can be re-theorized given the coalescing of human and technology into the figure of the posthuman cyborg.

N. Katherine Hayles posits that “a historically specific construction called the human is giving way” (Hayles, How We Became 2). She specifically challenges the ways that even cybernetics historically held up the Cartesian notion of the body and mind, citing the Moravec test. This test sought to download consciousness, with information treated as the mind, “as if it were fully commensurate with the complexities of human thought,” and thus as though the body could be erased (Hayles, How We Became 54).

Hayles takes issue with this idea because it sees the mind as somehow separate from the body, capable of living on in computer systems. The Turing test, foundational to the philosophy behind the design of artificial intelligence and discussed in Chapter V, likewise asks whether a machine might be created that could think and communicate so well that one would not be able to differentiate between it and a human being. The legacy of liberal humanism's concept of subjectivity, the "notorious universality" of the autonomous self, made the Moravec and Turing tests possible (Hayles, How We Became 4). This mind/body dualism holds that the mind or soul can live on, downloaded into a computer after being excised from the body. In response to such tests, Hayles queries, "How could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment?" (How We Became 1).

How We Became Posthuman charges cybernetics with divorcing and subsequently marginalizing information from its materiality and establishing processes, whether biological or technological, as the same; thus, cybernetics has sustained the devaluing of humans' material embodiment and the shoring up of the coherent, autonomous self (Hayles, How We Became 85-6). However, the body is crucial to posthuman and human thought.

Hayles does not wish to realize a disembodied ideal, to discard the body and become a machine. Rather, she calls for a “posthuman realism,” which she describes as

a version of the posthuman that embraces the possibilities of information technologies without being seduced by fantasies of unlimited power and disembodied immortality, that recognizes and celebrates finitude as a condition of human being, and that understands human life is embodied in a material world of great complexity, one on which we depend for our continued survival. (How We Became 5)

Her vision shows a fluid interplay between embodiment and technologies, understanding that humanity is inextricably linked to materiality, not aiming to abandon it. She summarizes posthumanism as a process in which


emergence replaces teleology; reflexive epistemology replaces objectivism; distributed cognition replaces autonomous will; embodiment replaces a body seen as a support system for the mind; and a dynamic partnership between humans and intelligent machines replaces the liberal humanist subject's manifest destiny to dominate and control nature. (Hayles, How We Became 288)

Thus, Hayles argues for a “posthuman collectivity,” a subjectivity that offers a more productive framework for understanding distributed agency (How We Became 35).

Hayles and I share a similar goal; however, subjectivity in my framework is a posthuman cyborg co-constituted by numerous entities via a web of ecological relations.

Further, we understand subjectivity through our own anthropocentric lens of these relations, which helps us position ourselves in the world in a way that we can understand. This perspective upholds our sense of being in the world. However, this focus on subjectivity overlooks the relations that we have as part of an always-fluid aggregate ecology. By using the concept of subjectivity, defined as "[t]he quality or condition of being based on subjective consciousness, experience, etc.; the fact of existing in the mind only," we can avoid looking at those relations ("subjectivity"). I do not argue that humans do not believe in some sort of subjectivity; we do. However, the idea of unique subjectivity that is "in the mind," created by our interior self, elides the constitutive power of the ecological relations that actually form the human being.

The subjectivity I propose is a posthuman cyborg that, like Hayles's, is embodied and informed by an aggregated, distributed ecological network of humans, non-humans, objects, spaces, technologies, language and code, all informing, influencing, and constituting each other. I use the term ecology not because it implies biology, but rather because the term speaks to "the interrelationship between any system and its environment; the product of this"; further, ecological refers to the affordances that ecologies offer ("ecology"). So, human beings interact with the components of the ecologies they have access to, which shapes what they are while simultaneously shaping the ecology. To understand ourselves in human-centered ways, we have generated concepts like autonomy, agency, and self-containment as driven by some transcendent interior, which reinforces the idea of human exceptionalism. However, we cannot be de-situated from the material, biological, technological and social substrates of our ecologies that co-constitute what we are. We are not separate from these ecologies; we constitute them and they us. And the lines "in between," in the age of technologies like bio- and nanotechnologies, no longer apply, if they ever did. The ecological networks are what we are as humans.

In this chapter, I consider the various types of cyborgs, followed by a discussion of the technogenesis that undergirds posthuman cyborgs' relation to technology. Then, I discuss another dominant thread of posthumanism, the concept of technology as an extension of the mind and/or body, and follow this with a critique explaining why these theories are inadequate for understanding the posthuman cyborg being I posit. I then describe the posthuman body as a type of interface, meaning a site of action or resistance, and discuss how biotechnology creates a material cyborg that is both informatic and material simultaneously. I conclude with a reiteration of my theory grounded in Kenneth Burke's concept of identification. It is through a posthuman reconfiguring of identification that the posthuman comes to know her/himself in the world.


Challenging Cartesian Dualism

When it comes to knowing the world, humans, being human, make assumptions and pursue ontological explanations about what human nature is and what the human condition entails, basically what constitutes our species and separates us from anything non-human. This framework shapes not only the way that we understand ourselves but also how we understand the world. Nietzsche writes, rather dismally, "Once upon a time, in some out of the way corner of that universe which is dispersed into numberless twinkling solar systems, there was a star upon which clever beasts invented knowing. That was the most arrogant and mendacious minute of 'world history'" (888). What he implies is that humans privilege themselves as exceptional, as being cleverer than other beings. Nietzsche disagrees with this exceptionalism and sees humanity as possessing an aggrandized sense of importance. Humanity believes, in his words, "as though the world's axis turned" based on human intelligence, rather than understanding that humans are fallible beings that anthropomorphize environments and resources as an organizing principle and a means for understanding the world, a habit that blinds us to the significance of other phenomena and ways of being (Nietzsche 888). As harsh as Nietzsche's perspective is, it reminds us to think about the systems of truth that have guided philosophical and intellectual inquiry, which often see humanity as distinct from non-humans in its autonomy and possession of rationality.

Humans, like other species, differentiate sensory data. We recognize differences in phenomena via our sensory perception and categorize those differences. Other creatures do this as well; some species can sense magnetic fields or pressure, information that humans often do not perceive without technological intervention. In our process of differentiation, humans anthropomorphize our understanding of the world based on our sensory processes as we make scientific claims of truth, claims that often privilege our way of knowing. We process and study change, documenting phenomena, parsing what we perceive and what we see as sensible, logical and real. This process is taken as a given because it is how humans come to understand the world around us, and it is ingrained in our physiology. But humans' tendency to take this sensory process as the norm neglects the idea that other processes of becoming and understanding take place, overlooking them in favor of those truth and knowledge processes that stem from our human physiology. What we experience as humans becomes "the" experience of reality, as human phenomenological difference constitutes our understanding of the world.

However, this assumption rests upon a Cartesian dualism that positions human beings as coming to an external world of objects.9 Human beings, as autonomous selves, take in this existing external world via sensory and bodily perception and then represent it internally in the mind. The mind reflects upon the outside world, giving a seemingly meaningless outer world meaning in the interior mind. So, according to this dualism, there exists a distinct separation from the external world, and thinking and meaning-making via language (properties that separate man from animal) happen inside the mind.

One of posthumanism's key tenets is to respond to this view. Stefan Herbrechter notes, "Cartesianism with its humanist idea of 'man' as animal rationale has become the main battlefield for the discussion about posthumanism" (46). Instead of formulating humans in this way, posthumanists challenge the idea of solely internal cognition, favoring cognitive processes that incorporate bodies, objects, and environments as co-constituents of thought. Humanism clearly delineates the human from the object or the machine; however, in an increasingly technological world this delineation is destabilized as people and machines integrate. Posthumanists see human beings and technological objects not as wholly separate, but as merged into a cyborg subject. And many theorists argue that artificially intelligent machines also reason and process language, undermining the two key factors that supposedly separate humans from machines.

The posthumanist perspective is bound to cause anxiety for humanists subscribing to Descartes' dualist philosophical framework given that it destabilizes many presumed conceits. For instance, if humans are not "the sole masters and possessors of reason or consciousness, 'we' might also be no longer unique in our use of symbolic language, in the anatomy of our hand, in our awareness of our own mortality and so on" (Herbrechter 47). So the concepts that have been historically constructed to define "humanity" are in flux. The human boundaries, "which are always portrayed as absolute, inviolable and universally valid for all times are in fact concealing a perfect permeability – a permeability that becomes visible" (Herbrechter 47). This visibility reminds us that the fundamental tenets of humanist claims may not be as reliable, reasonable, or fixed as previously thought.

In this posthuman world that makes the permeability of the human body visible, humanist approaches to technology cannot hold. The integration of human and machine, discussed in detail below, can take various forms, such as constantly having a smartphone (in essence, a microcomputer) on our person, wearing knowledge-creation technologies like Google Glass that integrate the web with the user's material life, or a cardiac patient depending on a pacemaker or defibrillator to sustain life. In light of such technological developments, Ihab Hassan eloquently states the larger issue at hand for the humanities: "The human form—including human desire and all its external representations—may be changing radically….[F]ive hundred years of humanism may be coming to an end" (Hassan 205).

Branches of Posthumanism

If humanism is in theory "coming to an end," to be replaced by posthumanism, then what is posthumanism? The theory has many roots, its definition is often nebulous, and its scholarship has generated numerous themes. Some, as Rickert notes, are "not directly related or even in conversation with each other" (293). Posthumanism as a theoretical framework can be quite broad or narrow depending on the theorists using the term. There are "types" or branches of posthuman thought that vary in the attitudes and ideologies they posit. There is, however, a common thread that runs among them: that advancements in technology impact what we know to count as "human." But the implications for what this actually means differ depending on the scholarly approach. Further, each type of posthumanism has some relation or ongoing conversation with humanism, which upholds the autonomous, self-contained subject that is separate from the external world.

Tamar Sharon offers a cartography of posthuman thought that is useful for situating this project because it breaks down posthumanism into four categories: dystopic, liberal, radical, and methodological (5). Dystopic posthumanism objects to using technology to transform or alter human beings beyond normative cultural standards, and it fears losing the autonomous human subject. Kass, Fukuyama, Sandel, and Annas's "bioconservative" works advance dystopian perspectives (Sharon 5). So does Habermas who, in response to the developments of biotechnology, champions the merits of humanism. These scholars lament the destabilization of Cartesian ideas and worry about the cultural and political ramifications of technological change on our idea of the human.

Transhumanists typically fall into Sharon's liberal posthumanism category, including Nick Bostrom, Ray Kurzweil, Hans Moravec, and Savulescu, as well as scholars like Buchanan, Glover, and Harris. All address technological enhancement from a political perspective, seeing it as an individual right, an act of personal freedom to be chosen, as long as it does not impinge upon or harm the rights of others. Further, they see the restriction of technological enhancement as a violation of rights. Liberal posthumanists, rather than rejecting humanism, seek to extend or modify the concept given the technological age. Transhumanism is "arguably the best-known inheritor of the 'cyborg' prompted by Haraway" (Wolfe xiii). Its larger concern, and that of liberal posthumanism, is transcending the limitations of the human body to perfect intellectual, biological, and emotional capabilities, as well as to extend and improve life to further human progress. In essence, they seek to escape the limitations of the human.10 Coenen argues that "[o]ne crucial element of this new concept of human self-assertion is the expectation that human corporeality will be improved, or even superseded, by a new form of corporeality" (40).


Radical posthumanism, in some ways, carries postmodern and poststructuralist ideologies forward as it seeks to deconstruct the divisions between nature and humans and to overcome the notion of the autonomous human subject. This framework cuts across numerous disciplines, including science and technology studies, feminism, theory, and more, and it includes scholars like Gray, Donna Haraway, N. Katherine Hayles, Badmington, Graham, Braidotti and others. Radical posthumanism also explores technogenesis, the co-evolution of technologies and humans, and sees this co-evolution as freeing humans from the human/nature or human/technology divisions. This is a particularly necessary deconstruction for radical posthumanism, given the age of ubiquitous technological and scientific innovation, and radical posthumanists seek to radically revise our perspective from a human-centered one to a non-anthropocentric ontology.

Lastly, methodological posthumanist scholars like Latour, Pickering, Verbeek, and Ihde consider intersections between humans and non-humans, often through concepts of networks or symmetry (Latour), relationality (Ihde), or "manglings" (Pickering). They also, like the radical posthumanists, seek to overcome the humanist construction of a closed, autonomous interior notion of the subject/self. This branch of posthumanism has two very distinct points of analysis: mediation and materiality. Technology in this framework is not neutral, but actively mediates, meaning that technologies shape human and spatial relations. Typically these scholars do not aim to fully revamp the human as some newly defined posthuman, throwing out everything we have come to know, but rather seek new methodologies for understanding how technologies shape humans and environments and vice versa.


The posthumanist framework I posit incorporates ideas from both radical and methodological posthumanism as I rethink commonly held binaries like subject/object, human/technology, and mind/body and build a framework using both Latour and Miccoli. Like most posthuman theorists, I oppose the idea of the rational, atomistic, autonomous, and disembodied self. Rather, I see human beings as constituted in relation to objects, technologies, humans, spaces, language, and code. I critique transhumanism and its idea of perfecting the human body because I question whose notion of progress drives the theory. Further, transhumanism is grounded in an objectivism that suppresses other ways of knowing as it privileges human epistemologies (Ramazanoglu and Holland; Vivian, "The Threshold"). Transhumanism seeks objective truth at the expense of subjective knowledge and dismisses the idea that humans, given our sensory perceptions, may not be able to fully understand the means by which other species experience life. More importantly, the idea of a universal, ideal, objective body constructs regimes of truth that often exclude and mask political and socio-cultural values (Fausto-Sterling; Grosz). In essence, for many posthuman theorists the transhumanist cyborg is problematic because its idea of ideal bodies and human superiority runs the risk of reinforcing anthropocentrism. The posthuman cyborg, on the other hand, focuses on Haraway's conception as understood "in terms of complex, structurally embedded semiosis with many 'generators of diversity' within a counter-rationalist (not irrationalist) or hermeneutic/situationist/constructivist discourse" ("A Cyborg Manifesto" 213).

Like material posthumanists (and even some radicals like Hayles, whose works cross over), I link technologies, media, users, and interfaces directly to the material. Brooke notes, for instance, that "As our technologies tempt us with the possibility of absolute (patterned) knowledge via the purified technologies of mediation (absence), a posthuman rhetoric would require us to temper that possibility with the materially situated emergence (presence) of opportunities (randomness)" ("Forgetting" 791-2). In other words, a posthumanist approach to technology offers a method of understanding the interrelations of bodies, material objects like our devices, software interfaces, and spaces, not hierarchically but on equal terms, without the tendency to privilege the human body or the supposedly separate function of the human mind. The posthumanist theories I subscribe to see that

[a]s more and more of our lives are mediated by artifacts, things, and technologies, as we become aware of the multiple ways in which human beings are incorporated into networks or complex assemblages, and as our things take on agency and intentionality, perhaps it is time to take a final turn…and leave behind all of human privilege, autonomy and distinctiveness. (Weiss, Propen and Reid 37)

This project extends these pursuits specifically by arguing for a networked ecology of bodies, technologies, objects, language, code and space where the lines between them, if they exist at all, dissolve. There are many facets of radical and methodological posthumanisms with which I agree, and some I question. My goal, like that of Anthony Miccoli, whom I discuss at length, is to rethink the idea of what constitutes "human" and "interfaces" to understand cognition in new ways. Especially now in our digital age, I do not think that one can discuss what it is to be human without accounting for the technologies that make us up, particularly given the advances in biotechnology that are not extensions of our bodies, but part of them. Further, the objects and spaces that are in our networks are not simply tools for us to use or places where we exist, but are part of that which defines us as human, as well as part of our cognitive processes. We are beings defined not by our thoughts or our existence in an external world but rather by the potential relations we have among the available, dynamic ecologies and their substrates.

The Posthuman Cyborg

Since humans and our cognitive processes are the results of dynamic ecological interactions, they are not at all separate from some existing world "out there." Further, technological innovations have made visible the ways that machine and human merge to constitute something other than previous definitions of the human. So, the posthuman can be defined as "not only a coupling with intelligent machines but a coupling so intense and multifaceted that it is no longer possible to distinguish meaningfully between the biological organism and the informational circuits in which the organism is enmeshed" (Hayles, How We Became 35). Posthumanism embraces the merging of human bodies and machines by challenging the notions that "(1) there can be an identifiable separation between subject and technology and (2) the humanist idea of subject provides intellectual value" (Dobrin 72). Human subjectivity and communication are now "located, endlessly bound in the fluidity and shiftiness" of technologies (Dobrin 72-3). A posthuman approach, then, replaces autonomous human agents with posthuman conceptions of complexity and networked subjectivities to investigate "the potential [of] what can be" (Dobrin 91).


Posthuman scholarship builds on Donna Haraway's idea of the cyborg, which calls the human/machine distinction and traditional boundaries of the body into question. And Halberstam and Livingston blur that supposed divide even further, asserting, "the posthuman body is a technology" [emphasis added] (3), and these posthuman bodies "emerge at nodes" where the lines between actor/environment/object converge (2). This zone, which Eugene Thacker calls "a zone of transitionality," challenges technological determinism and affords an opportunity for productive interrogations of the relationships between humans, objects, technology, culture, and more ("Data Made Flesh" 81).

Posthumanism establishes, then, that bodies are vital in understanding technology, not regulated, disembodied, or simply the causal result of it. Hayles writes, in fact, that understanding the body is particularly vital if we are to discern how bodies resist, reshape, and change bodily codes. She states, "Formed by technology at the same time that [embodied practices] create technology, embodiment mediates between technology and discourse by creating new experiential frameworks that serve as boundary markers for the creation of corresponding discursive systems" (How We Became 205). In other words, Hayles sees bodies and technologies affecting each other via praxis.

The posthuman, then, indicates "a species of beings that some scholars believe represents a co-evolution of humans and environment so codependent as to be formally inextricable for anyone unblinded by categorical distinctions that preemptively prohibit the connection" (Weiss, Reid and Propen 25). Elaine Graham, in her view of the "post/human," explains, "The impossibility of isolating 'human nature' from its refracted other suggests a model of post/humanity as inextricably bound up in relationality, affinity and contingency" (223). In other words, the posthuman describes ways of being once humans are not limited to understanding humanity in terms of distinctions like the human/technology and human/object binaries. Scholars theorize the cyborg in a number of ways, building upon its possibilities either to embrace and resist binaries, as a means of agency, as an emblem of technogenesis wherein human and technology mutually shape each other, or as a literal, material instantiation as human beings become a new posthuman species.

Metaphorical Cyborg as Resistance

Rhetoric and composition scholars in particular have explored the symbolic merger of human and technologies and Haraway's cyborg as resistance in various ways. Haraway states that her cyborg is an "argument for pleasure in the confusion of boundaries and for responsibility in their construction" (150). She understands these boundaries as "the logics and practices of domination" that her cyborg resists by occupying both sides of the human/machine dualism (177). It is through this positioning that the cyborg can work, as the space on either side of and in between boundaries offers the possibility of resistance. James Inman's Computers and Writing: The Cyborg Era uses Haraway's cyborg as a frame to discuss what he terms cyborg history, narrative, literacy, and pedagogy as well as types of pedagogical resistance. His cyborg history focuses on the alternative or marginalized narratives of computers and writing to bring them to the forefront (Inman 59-72). The cyborg narrative demonstrates how technology can reinforce discriminatory and hierarchical frameworks (Inman 107-124). His cyborg literacy and pedagogy sections address how individual and technological ideologies converge in educational contexts, and Inman offers means for students to question their assumptions (Inman 159-221). Danielle DeVoss examines the cultural representations of the cyborg in terms of masculinity, femininity, and sexuality, and examines the possibility of different models ("Rereading Cyborg? Women"). Michelle Ballif refers to the cyborg as the "third sophistic" and argues for understanding rhetoric in terms of mêtis or "a knowing, doing, and making not in regards to Truth, but in regards to a 'transient, shifting, disconcerting and ambiguous' situation" (53). She sees this as offering a more promising, dynamic rhetoric than traditional models offer. David Whitt's research posits that the technological cyborg is no longer a figure of science fiction but a cultural reality with important "rhetorical and communicative significance," as the rhetoric of cyborgs can shape political and social thought (i). The cyborg as a figure of political resistance and transformation, a means of disrupting binaries, undergirds this type of scholarship in the field.

“Everyday” Cyborgs

Other scholars understand that we are already cyborgs in everyday life, given the practices and/or technologies we engage every day. Chris Hables Gray argues in The Cyborg Handbook that western citizens live in a "cyborg society" as machines and humans interface at nearly every level of life (3). William J. Mitchell concurs and refers to this cyborg identity as "Me++" (7). He specifically argues that "man-computer symbiosis" is an everyday, routine occurrence as humans "now interact with sensate, intelligent, interconnected devices" embedded throughout environments (Mitchell 34).

Taking a different approach, Walter Ong argues that writing is a technology and that technologies have shaped how we think and understand the world through a process of interiorization. The move from orality to writing, he argues, has also shifted our "mentality between oral and writing cultures" (Ong, Orality 3). Writing, for Ong, artificially extends language and consciousness, but this artificiality does not mean that it hurts communication. Rather, he argues, "artificiality is natural to human beings. Technology, properly interiorized, does not degrade human life but on the contrary enhances it" ("Writing is a Technology" 24). For Ong the process of interiorization has "so deeply occurred that without tremendous effort we cannot separate it from ourselves or even recognize its presence and influence" ("Writing is a Technology" 19). So, Ong sees that writing as a technology has not only shaped how writers think and communicate at the cognitive level, but has also become so normalized that it is nearly wholly invisible, much like the technological interfaces designed to disappear, which I discuss in the following chapter.

Gordon Calleja further posits that despite the fact that most everyday people still see the human in terms of its organic characteristics, the everyday use of technology changes the human "through interactions with digital machines placing our race in a cybernetic feedback loop" (6). The title for this section, "everyday cyborg" (Calleja 4), comes from his article about the "rhizomatic cyborg" (Calleja 3). He argues that using the rhizomatic Internet has structured a "rhizome-oriented mental reconfiguration that occurs when mankind successfully adapts to the digital patterns implicit in their use," thus creating a new type of human subjectivity (Calleja 5).

Language and literature scholar and former director of the McLuhan Program in Culture and Technology, Derrick de Kerckhove sees technological development's appeal to humanity as "proof that we are indeed becoming cyborgs, and that, as each technology extends one of our faculties and transcends our physical limitations, users are inspired to acquire the very best extension of our own body" (3). He refers to this phenomenon as indicative of a "psychotechnology" (Kerckhove 4), meaning a collective cultural psychology wherein users are influenced by technology, which helps technology adoption and propagation (Kerckhove 2). Kerckhove makes a key argument here: he sees that the everyday cyborg understands technology as an extension of self, one that enhances or adapts existing human capabilities. In this pattern of willingness to drive technological development, a symbiotic evolution occurs: users adopt technology to extend themselves, which in turn shapes the technology.

This body of scholarship in some ways mirrors the claims made by Nicholas Carr in Chapter I, so one might level the charge of technological determinism against Calleja in particular. However, he posits an intimate co-evolution of both human and technology at both material and social levels, rather than simply implying that technology "does" things to humans. Further, the cyborg as a concept mitigates this charge as well because there is no division between technology and human, but rather a merging. There is no lamenting that "real" human behaviors are altered; rather, the cyborg identity is that which co-evolves with the implemented technologies.

Technology and Cyborg Relations

The everyday cyborg involves two concepts that theoretically frame how this posthuman being works in terms of bodily and technological relations: technogenesis and technology as an extension of the body and mind. I will discuss each in detail to outline the key points and how they operate in my own posthuman distributed ecology.

Technogenesis

Technogenesis is the coevolution of humans and technologies. Hayles sees technogenesis as the most promising way to understand the dynamics between technology and humans. Her work How We Think discusses the concept as well as her assertion that humans have embodied and extended cognition, which I discuss below.11 Technogenesis in essence means that all of the entities that make up our networked ecologies and construct our sense of self evolve in tandem. In particular, technogenesis concerns the ways "epigenetic changes in human biology can be accelerated by changes in the environment that make them even more adaptive, which leads to further epigenetic changes" (How We Think 10). However, coevolution does not translate into progression or the improvement of humans or technology, but rather concerns the connections and dynamics between the two.

Further, Hansen's work "Media Theory" advances his take on the coevolution of humans and technology by positing that humans use technē to constitute their imaginations, and technology shapes humans' lives through the process of remediation. Technology is more than a representation of life; rather, it is an "environment for life," a claim that shores up my conception of an ecology (Hansen, "Media Theory" 299, emphasis added). And the medium, he says further, "necessarily involves the operation of the living, the operation of human embodiment" [emphasis his] (Hansen, "Media Theory" 300). Media operate through coupling with human users, and technology simultaneously shapes humans, making technology a participant in co-evolution, not a static artifact. Ultimately, Hansen urges scholars to acknowledge the relationality between human and object as manifesting a shared life form, a life form, I argue, without universal traits given the available substrates in one's individual ecology. Technogenesis specifically challenges the claims made by Nicholas Carr and Mark Bauerlein in Chapter I, who argue that technologies deterministically "do" things to humanity.

Technology as Extension of the Body and Mind

Many scholars have long theorized technologies as extensions of the human body, capacities, and/or intention. The idea appeared in Aristotle's Eudemian Ethics, where he states that the body is man's tool and that "soul and body, craftsman and tool, and master and slave are similar" (7; 1241b). He understood tools as "inanimate slaves" (Eudemian Ethics 7; 1241b). Later, in 1877, Ernst Kapp theorized technology as an extension of the body, arguing that all technological artifacts act as extensions of human organs, a type of "organ projection," as they imitate human organs and have the potential to eventually replace them (qtd. in Brey, "Technology" 136).

Michael Polanyi also posits a theory of extension tied to his concept of tacit knowledge. Any work or activity for Polanyi involves a blend of "focal" and "subsidiary" knowledge (3). As we attend to something, we do so through other things, and we are focally aware of what we are attending to. What we use in the process of attending to something, we have only subsidiary knowledge about. In essence, then, when we complete a task, we do so with a variety of tools, including material objects, language, and in some ways the body, which at times are subsidiary to the task we direct our attention to (Polanyi 8). Whether language or an object, these act as a "probe or tool" that humans "interiorise," "making ourselves dwell" in it (Polanyi 10). This "indwelling" enables us to develop new capabilities or skills.

In his canonical text Understanding Media: The Extensions of Man, McLuhan refers to this process of extension as "translation," "repetition," or "intensification" (91). By this he means a furtherance, a speeding up of human capacities or behaviors, as technologies powerfully enhance or supplement human functions by extending the body or cognitive processes. McLuhan argues that the age of electronic media shows that the human "now wears its brain outside its skull and its nerves outside its hide" (64). Higher cognition, he believes, would be translated into data functions and automated by machines.

Further, David Rothenberg argues that technology operates as an extension of the human body, human presence, survival, perception, language, thought and memory, but some faculties, like morality, spatial awareness or judgment, cannot be extended (Rothenberg 16-31). Of the aspects that are extended, he sees technologies as extending either action or thought, namely our intentions (or possibly both). However, unlike McLuhan, who sees technology as a primary extension of human capabilities, Rothenberg sees technological artifacts as extensions of human intentionality, which, according to him, is usually constrained within our biological physiology.

Phenomenologist Maurice Merleau-Ponty also discusses the ways that bodies are extended via technology. As an open system, the body, he posits, can integrate external means or artifacts. These external means become part of the body in an "extension of body synthesis" that extends our sensory perception (Merleau-Ponty, Phenomenology 165). In becoming part of the body, these external tools or artifacts are in essence invisible: we do not focus on the object or tool, but rather on the external world that the incorporation of technology allows us to understand. Merleau-Ponty, and scholars like Don Ihde, understand that extensions of the senses do not simply amplify our perceptions of the world; they actually change how we perceive the world.

Philip Brey's extension theory posits that instead of technology extending human intention, as Rothenberg suggests, technology "extends the means by which human intentions are realized" (Brey, "Technology as Extension" 9). In other words, human intentions are not extended; rather, humans use faculties to achieve their intentions, and technologies act as a means for doing so. Brey furthers McLuhan's work, arguing that the body offers humans a toolset for enacting and achieving intentions, but humans also use external means. External "complementary extensions" introduce new capabilities through their use, and "amplificatory extensions" enhance the faculties humans already have (Brey, "Technology as Extension" 13). His extension theory seeks to account for technologies that work in ways dissimilar to or impossible for human faculties, like magnetization or emitting light. Further, he acknowledges that technological artifacts have social and cultural components, something other extension theorists typically ignore. Drawing on scholarship from Merleau-Ponty, Heidegger, and Ihde, Brey also adds that technological artifacts "engage in embodied relations" wherein the artifact mediates a user's interaction and experience with the environment (Brey, "Technology as Extension" 11).


Additionally, Brey brings up computers, designating them "embodied cognitive artifacts" (384). Cognitive artifacts are, as Donald Norman argues, artificial devices "designed to maintain, display or operate upon information in order to serve a representational function" (43). These artifacts extend abilities such as problem solving, language, and thought and are crucial components in information processes. Computers, to Brey, are not autonomous active agents. They do carry out autonomous functions in terms of information processing, aiding those functions that are typically too time-consuming or tedious for humans, such as large-scale data aggregation, search and calculation (Brey, "The Epistemology" 391). He sees computers operating in a "symbiotic" dynamic with humans as "the performance of a cognitive task depends on the information-processing abilities of both human and computer, and the exchange of information between them" ("The Epistemology" 392). For Brey, computers still depend upon human users, so they are not wholly autonomous. This reciprocal dependence between computer and human user forms "a single cognitive unit, a hybrid cognitive system" with processing distributed between both (Brey, "The Epistemology" 393).

Historically, cognitive processing has been understood as happening in and by the mind, but cognitive artifacts challenge this. Cognitive scientists, for instance, argue that cognition is both internal and external (Salomon; Hutchins; Perry). Other theorists have posited the extended mind theory, which argues that human minds extend outside the body (A. Clark, Being There; Natural Born; Clark and Chalmers; Donald; Hutchins). Clark posits that humans often carry out epistemic physical actions for cognitive purposes, such as measuring, rotating objects to understand spatial dynamics and properties, and searching for data/information, and that ultimately objects and artifacts that assist cognitive processes are not just supplementary aids to cognition but are part of it via "cognitive technology" (A. Clark, "Reasons" 15). Clark's embodied and extended cognition is not "brainbound" or solely internal and related to brain function (Supersizing xxix). Rather, he argues, "Bodily actions here appear as among the means by which certain…computational and representational operations are implemented. The difference is just that operations are realized not in the neural system alone but in the whole embodied system located in the world" (A. Clark, Supersizing 14). Further, Clark asserts that cognitive agents change and shape their environments to enhance cognitive abilities, and the resulting augmented cognition further changes the environment, which, in turn, leads to more cognitive abilities in a continuous, reciprocal cycle. Clark explains this recursion: "We do not just self-engineer better worlds to think in. We self-engineer ourselves to think and perform better in the worlds we find ourselves in" (Supersizing 59). Clark argues that humans are so embroiled with technologies that we are "natural born cyborgs" that are "forever driven to create, co-opt, annex, and exploit non-biological props and scaffolding" (Being There 6). Thus, humans are specifically suited to "multiple mergers and coalitions" (A. Clark, Being There 7). For Clark, humans are "human-technology symbiots: thinking and reasoning systems whose minds and selves are spread across the biological brain and non-biological circuitry" (Natural Born Cyborgs 3).

However, not all cognitive scientists who accept the idea of embodied cognition accept extended mind theory. Specifically, some proponents of embodied cognition (and even distributed or regulated cognition) reject extended cognition, arguing that, yes, cognition may be bodily distributed across neural and non-neural resources, but this happens within the body's physical boundaries (Barsalou). Further, Adams and Aizawa, as well as Rupert, object to the idea of the extended mind, arguing that proponents of both embodiment and extension, like Clark, confuse the significant distinction between external causes and constituents of cognition. Additionally, some critiques rely on intentionality to dispute extended mind theory, arguing that intentionality is requisite for an agent to be cognitive, an issue, as stated above, that I will take up in Chapter V in a discussion of rethinking intentionality in light of artificial intelligence.

Problems with Extension Theories

Some aspects of these theories of technological extension are compelling, but I take issue with several of their claims. Kapp's theory is limited by its assertion that technologies mirror the body's physiology or physiological processes, when there are quite a few technologies and artifacts that have no relation to what human bodies can do. Seeing technology as an extension of our intention also risks seeing technologies as neutral tools that we can use for our purposes, which denies their own potentialities or resistance as actants. Further, technologies are not neutral, particularly information technologies. Kiran and Verbeek make this point quite clearly, arguing that technology-as-extension can lead scholars to see technologies as having an intrinsic "technological instrumentalism" that overlooks the constitutive power of technologies with humans (414). And this also presupposes a preexisting human subject separate from technology rather than imbricated with it. Extension theories like Merleau-Ponty's, which posit that technologies become invisible for users, are also problematic, as I'll discuss in the following chapter on interfaces. Seeing technologies as transparent risks overlooking the ways that they shape interactions and humans as they fade into the background. Further, I do not conceptualize technologies, language or code as "tools." Instead I see technologies (and code) as actants of potential action or resistance in the technological substrate that makes up one part of the networked ecology. I see the body and cognition as distributed systems, mutually constituted by an aggregate of numerous actants and environments in our ecologies, rather than beings that periodically extend cognition via tools.

Posthuman cyborg cognition, as I conceptualize it, is always constituted by the web of relations of our ecologies. This idea of the posthuman cyborg then implies something that pushes beyond what we have historically understood as human. The cyborg is the merger, an acknowledgement that "human" is not a closed, autonomous system separate from its ecological substrates. Instead, the cyborg is the literal embodiment of this amalgamation, and it recognizes the dynamics of the entities that not only make up the networked ecology but also constitute what it means to be "human," or more accurately posthuman.

Posthuman Cyborg as Interface

Given the idea of something other than human, Ollivier Dyens theorizes the posthuman body and embodiment as a type of interface in Metal and Flesh, an exploration of humans and what they are becoming in the age of technology. Dyens locates the future of the body in machine culture and argues that it is becoming more culturally than biologically informed, as technological interventions transform humans.


The body, he posits, is the "interface between being and living; on its surface, being and living mesh" [emphasis added] (Dyens 55). This does not mean that he sees the body as disappearing; rather, it is different, ever more hybridized and mediated. And the term "mesh" is important, as it implies permeability, not a solid, static in-between being. His conception of the body as interface indicates the instability of the body as a fixed physical or biological being as technology changes our bodies and identities. He explains that technological reality has changed how we understand and define "human" as "[c]ultural replication now permeates all phenomena, dynamics, and entities, forcing the biological environment into radical mutation…. Human beings are becoming extinct. A new mosaic is rising, one made of skin, ideas, insects, organs, machines, and cultures" [emphasis mine] (Dyens 95). In essence, Dyens extends and updates Richard Dawkins' "selfish gene" argument and posits that the prolific amount of information in today's meme-driven world culminates in a body beyond any conception that humanism can explain (3). The cyborg, for Dyens, is a "living being whose identity, history, and presence are formulated by technology and defined by culture" that acts as a site of interfacing (82).

Code as Material

Understanding the cyborg body as a type of hybrid being, a new mosaic as Dyens refers to it, means that "there are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals" (Hayles, How We Became 3). So, this raises the question of the role of code. Code is not simply an underlying programmatic substrate that we can ignore. Unlike language, which as discourse can elide some material constraints to a certain degree, materiality cannot be ignored with code. For instance, language relies on a material body (a speaker) to create discourse and process its symbolic function; however, if language is incorrect or poorly articulated, meaning can still be derived, and it is not contingent upon space and objects to do so. With code, programmers are bound not purely to the human as a translator but also to material objects and effects like processor speeds, storage limitations, etc. And when code fails, there are immediate effects: the technology it undergirds does not function. While many users neither understand nor want to see code, Hayles argues that users cannot stay on the surface of a text even if they want to because feedback loops connect the surface with the substrate. In fact, lived experience operates in between the world of natural language and code in a process of "intermediation" rather than a binary opposition between the two (Hayles, My Mother 33). Further, Hayles makes a powerful argument that undercuts theories of technologies as tools of extension, similar to the objections I raised earlier. The exchange between code and language demonstrates that "computers are no longer merely tools (if they ever were) but are complex systems that increasingly produce the conditions, ideologies, assumptions, and practices that help to constitute what we call reality" (Hayles, My Mother 60).

But most importantly, code is directly tied to action, to engaging in a process of communication between human and machine. First, if one seeks to adapt technology for individual purposes, then code is the operating means with which to do so. Without coding knowledge, the user must rely upon others for what might be called code literacy.


Code is also bound up with action in that the larger question at hand is not "how we as rational creatures should act in full possession of free will and untrammeled agency. Rather, the issue is how consciousness evolves from and interacts with the underlying programs that operate analogously to the operations of code" (Hayles, My Mother 192); thus, the ability to act does not stem from free will and the rational mind but rather is "distributed in its location, mechanistic in its origin, and bound up at least as much with code as with natural language" (Hayles, My Mother 192). This reconfiguring of agency signals a new human subject.

Like Hayles, I see code as vital in terms of its function and materiality. Code is most certainly material in a number of ways. In addition to the material constraints discussed above, code also acts as a translation of language, taking language that can be ephemeral and concretizing it in technological devices and interactions. With natural user interfaces, which I discuss in Chapter III, code also renders holographic haptic interfaces and translates bodily movements into computational commands. This is far from immaterial, as code acts as a foundation of translation for physical body movements and instantiates haptic representations for engagement and interaction. In these interfaces, then, code in some ways writes the body. Further, code is another type of language that we must engage, as it undergirds those devices that co-constitute the human. Seeing code purely for its functionality or the aesthetics of what it renders, or worse, ignoring it altogether and taking it as a given without critical understanding, undermines its significance and the roles it has in the ecological web of relations I posit. Code as a language merits scholarly attention from rhetoric and composition and other humanities disciplines because it is both material and cultural. Florian Cramer notes that software as a cultural practice includes algorithms, machines, human interaction and imagination (124). Further, Mark Marino argues that "[c]ode increasingly shapes, transforms, and limits our lives, our relationships, our art, our cultures, and our civic institutions" ("Critical Code Studies"). Code, then, has cultural and material impacts on human life, shaping our potential for action and meaning-making. So code cannot be viewed as immaterial either in terms of function or importance.

Material Cyborgs

Moreover, technologies such as biotechnology, genetic engineering, and nanotechnology use code that generates new processes and cultural terrains that transform the body, such as stem cell research or nanogel that fosters nerve regeneration in the spine. These scientific developments demonstrate that the human body is radically changing into some other culturally informed posthuman, the cyborg as a new species. Mark Poster explains, "a symbiotic merger between human and machine might literally be occurring…[but] What may be happening is that human beings create computers and then computers create a new species of humans" (4). With such radical developments in biotechnology (including genetic engineering) and nanotechnology, humanities scholar Joel Dinerstein sees the human as "a networked being composed of multiple human-machine interfaces" (5).

However, Eugene Thacker addresses a contradiction in many posthuman theories, particularly in light of these scientific developments. He charges that


the posthuman wants it both ways: on the one hand, the posthuman invites the transformative capacities of new technologies, but on the other hand, the posthuman reserves the right for something called "the human" to somehow remain the same throughout those transformations. This contradiction enables posthuman thinkers to unproblematically claim a universality for attributes such as the faculty of reason, the inevitability of human evolution, or individual self-emergence. (94)

Thacker calls out posthumanists for embracing technogenesis and its implications while claiming that a "human" still exists and that cognition still functions universally despite these changes. This conception of the posthuman, he contends, continues to center the human, particularly with claims of subjectivity, when the idea of posthumanism in theory critically shifts the human from the center into a distributed network.

Thacker further points to a problem with what he calls "information essentialism," or the idea that information is transferable across media and substrates. He argues that understanding information in this way disconnects it from the technical medium, processes, and contexts in which it is substantiated; this equates to a "universalizing and decontextualizing of information" that can lead some to say that biological subjects and technology are exchangeable (Thacker, "Data Made Flesh" 85). If we conceptualize a body as data, data that can be programmed, or as Thacker states, "a kind of source code for matter," then that argument makes sense. That would imply that if you can "[c]hange the code, … you can change the body" (Thacker, "Data Made Flesh" 87). Biotechnology often does just that: it changes the code to change the body, particularly with genetic engineering.


However, I would argue that biotechnology does not wholly dematerialize bodies. In fact, the decoding, coding, and recoding serve the purpose of rematerializing bodies, creating new tissues, organs, etc. Biotechnology simultaneously maintains both the informatics and the materiality of the body, but the resulting body altered by biotechnology differs from biological bodies not treated with bioscience. For instance, with stem cell engineering, a patient has a DNA sample drawn wherein the genetic code is encoded into informatic code; thus, the material body becomes code. Then, as Thacker describes, "biotech integrates itself with infotech" via coding processes carried out through software that pinpoints the stem cells' gene clusters to distinguish what types of cells they will become ("Data Made Flesh" 90). Lastly, the newly programmed regenerative cells are recoded, thus generating "the biological body on demand," and the cells are inserted into the patient's body via "natural" means ("Data Made Flesh" 90). The body does not disappear; it does not simply become data. Rather, it is data and flesh, as the data informs the flesh, co-constituting the body.

It is this process that generates “biomedia,” which Thacker defines as “particular mediations of the body, optimizations of the biological in which ‘technology’ appears to disappear altogether” (“What is Biomedia?” 6). Biomedia moves past configurations of

“technology-as-tool or the human-machine interface” (Thacker, “What is Biomedia?” 6).

The body is not a body fused with a machine, nor is it supplanted by technology. Biomedia is a constitution of the body as a conduit. Biomedia recontextualizes the body as a “body more than a body” (Thacker, “What is Biomedia?” 6). Biotechnology does temporarily dematerialize the body, but not entirely; it does not translate it completely


into code and data. Rather, it rematerializes the body into a new form, one that is not the same as the original biological body but is materially constituted by both biological code and computer code. So, biotechnology focuses on a loop that translates the body into code in order to rematerialize it through biotech materials that are “liminal techniques” for bodily intervention (“Data Made Flesh” 92). Thacker posits an important question: does the body constitute a network; that is, is “the biomolecular body a distributed relation?” (Biomedia 31). Ultimately, Thacker wonders, given the developments of biotechnology, whether there is not another type of posthuman to explore, one beyond the conceptions posited by the theories so far.

This is another way that I hope my framework can contribute to posthuman theory. I answer Thacker’s question with an emphatic “yes”: the biomolecular body is a network; it is a distributed relation of the most material variety. I argue that every human is a distributed relation, but none so much as biotechnological bodies. Biotechnology is not an extension of the human; rather, biotechnology, with its technological objects and code, in conjunction with the biological substrate (the body’s systems, flesh, bone, etc.), viscerally constitutes a posthuman being with a further constitutive relation. Neither the biology nor the informatics takes primacy over the other. This body is a posthuman interface for the two. This body in particular is not like the everyday cyborg using technology via practice, which some might construe as an extension; with this posthuman body, technology is undeniably part of the biological body, as the body is a literal enmeshing of the technological and material substrates.


In my arguments, I do not assert that there is such a thing as a universal “human,” universal “posthuman,” or universal cyborg. These entities vary depending on the relations each has with the aggregate, distributed ecologies it inhabits, relations that co-shape the entities and the ecologies themselves. And I certainly do not see any type of posthuman as centralized, operating as some sort of master control in the networked ecology. The posthuman is not the sole operator who takes primacy in the interactions of the ecology. Rather, the human in the ecology has fluid relations with the various biological, material, technological and social substrates making up the ecology, which it negotiates in the same way that the other actants also negotiate.

Further, to amend Thacker, it is not simply that the biotechnologies are liminal.

Rather, the entities fluidly operate within the thresholds of the entire ecology. Obviously when humans theorize the networked relations of this ecology, the default is to do so from a human perspective, as humans seek to process their positioning in the ecology; humans cannot get outside of their perspectives. But we are not separate from or even hierarchically situated in this configuration.

Take biomedia’s code as an example. Biomedia with its materiality and code builds the body without permission, required action, or intention from the human body.

The two work simultaneously, systemically, and physiologically together, so that we are unable to say which is the key operating mechanism. Which has “agency” or primacy?

Neither. It is the relations of the two together that constitute the human, and this constitution varies depending on the aggregate, distributed ecologies at work. So these ecologies differ depending on the particular “human,” technologies, and objects and

spaces in question. There are only the relations that the posthuman being has, which will shift with her positioning, specific physiology, and the specific substrates constituting the ecology. Certainly there are similarities in the ecologies, but this is not universality. And the body is not immaterial, far from it. The body itself becomes key. Its physiological systems, materiality, and genetic and informatic code inform its relations in the ecology along with the other substrates (technological, material, spatial and social) that collectively co-constitute the networked ecologies and the entities in them. I do not mean to imply that the body’s relations are reducible to some sort of bodily or genetic determinism. Rather, I see the body’s material as one component in a fluid, dynamic web of interrelations that mutually constitute both human and technology along with the other parts of the ecology.

The idea that we are universally “human” or that all humans possess some universal “subjectivity” strips away the variations among the elements of one constitutive ecology versus another, each necessarily made up of different biological, technological, social, and material substrates. These subjectivity frameworks, by ignoring or minimizing variations, risk characterizing the posthuman as somehow universal and possibly even static. So my framework depends on the variations in the relations of the entities constituting the aggregated ecologies, and it does not posit a definitive explanation of what all humans or posthuman subjects are, look like, or are made of. I further argue that the technologies, spaces, objects, language and codes of biotechnological dynamics are not components or tools that are linked together using interfaces to negotiate the spaces in between. Rather, there are no spaces in between.


Posthuman Invention: Identification and Consubstantiality

So, if there is no autonomous self and no universal subjectivity for posthuman beings, biotechnical or otherwise, then how do identity formation and communication take place in such dynamic conditions? My theory of cognition undermines the idea that humans are ever truly separate from the world. However, what I want to articulate at this point is that human beings are not consciously aware of the distributed nature of cognition.

Miccoli clearly states as much when he refers to the interface myth. But it is a myth with a function; it is a theoretical space that facilitates subjectivity, autonomy, and intentionality while eliding the actual substrates that constitute thought. That still leaves the question of how a person with distributed cognition, always in an ecology of substrates acting as elements (or forces) of co-constitution, understands subjectivity.

Kenneth Burke’s concepts of identification, substance and consubstantiality, with some reconfiguring, can illuminate how posthumans define the self and communicate and connect with others.

Revising the rhetorical tradition, Burke moves away from persuasion and toward identification as the means through which to connect with an audience. He explains the process of identification in this often-cited passage from Rhetoric of Motives:

A is not identical with his colleague, B. But insofar as their interests are joined, A is identified with B. Or he may identify himself with B even when their interests are not joined, if he assumes that they are, or is persuaded to believe so. Here are ambiguities of substance. In being identified with B, A is “substantially one” with a person other than himself. Yet at the same time he remains unique, an individual locus of motives. Thus he is both joined and separate, at once a distinct sub-stance and consubstantial with another. (Burke 21-22)


Identification, for Burke, is relational, a means of social connection. If A can find some common ground or “substance” in B to identify with, or if A can identify himself with B even when there is no shared “substance,” then A, while still an “individual locus of motives,” can connect or identify with B. The two will be joined together while maintaining their individual autonomy and identity.

Substance, as Burke notes, is “common ground” (Grammar of Motives xix), as it is that which “stands beneath or supports the person or thing” (Grammar of Motives 22).

In essence, it is “the context for communication or the key to the speaker’s attitude”

(Brock and Scott 191). And substance is social, divided into four categories: geometric, familial, directional and dialectic.12 Finding common ground with X entails a process of consubstantiality in which one identifies with another:

A doctrine of consubstantiality, either explicit or implicit may be necessary to any way of life. For substance, in the old philosophies, was an act; and a way of life is an acting-together; and in acting together, men have common sensations, concepts, images, ideas, attitudes that make them consubstantial. [emphasis added] (Burke, Rhetoric of Motives 21)

The key phrase here is “acting-together”; it is through the process of perceiving substances that one connects with another or with an audience.

I propose revising Burke’s processes of identification and consubstantiality to move beyond the social substances that he emphasizes and toward other kinds of substances. Substance can thus become the substrates that make up the networked ecologies that constitute the posthuman distributed cognition. Burke himself states that substance is “an abstruse philosophic term, beset by a long history of quandaries and


puzzlements” (Rhetoric of Motives 21). Rather than being purely social, then, I argue that

substance applies to any of the parts of the networked ecology, the world that they are

always in, always being constituted by, often unconsciously or through the illusion of an

interface. Identification, then, is the process through which the posthuman interfaces with the substrates to invent a posthuman intersubjectivity in a process of flows and relations of the substrates (biological, technological, social, material, and spatial), some of which we are tacitly aware of, some of which we are not.

Further, identification serves a similar function in communicating or identifying with other actants in the ecology, be it a smartphone or some other device, through the same relational process by which the posthuman communicates with other people.

Substance and consubstantiality need not be read solely through the lens of language or

sociality or via purely human terms. I argue, then, for thinking of the interface not solely as a noun but as an action. This Burkean posthuman reconfiguration acts as a

form of interfacing, a “way of life” wherein the posthuman is always interacting with the

substrates (or substance) to define a constantly fluid posthuman intersubjectivity that is

always in the world.


CHAPTER III

INTERROGATING “INTERFACE” IN A POSTHUMAN WORLD

I began this project with three anecdotes of users engaging with various types of

interfaces in their respective situations, whether fictional, actual, or anecdotal, because, the fact is, interfaces surround us. They are in our hands, in our environments, and on and/or in our bodies. The number of technological, specifically digital, interfaces most Americans encounter daily is surprising, and the interfaces themselves are layered. When using a smartphone, for instance, the device itself is an interface, with its buttons, touchscreen, and their functions. It has a

graphic user interface (GUI) and an operating system (OS); each application has its own

interface, not to mention the myriad web interfaces potentially available with a touch of the screen. All these interfaces operate in tandem with the user, with each other, and with other objects and actors. Think, for example, about an iPhone with its operating system, its GUIs and haptic interfaces, and the endless combinations of downloadable applications, each with its own interface, with which the human interacts. Today, interfaces as objects permeate our actions, communications, and encounters in and with the world, a permeation evident in the pervasiveness of multiple types of interfaces: smartphones, environmental interfaces like sensors, and developments in bio- and nanotechnologies. Therefore, I want to return to interfaces as material objects because there exists a pressing need to better understand what they are, how we define them, their functions and design, and what happens to them as technology increasingly


evolves and shapes how we understand the relations with the body, spaces, other objects

and more. Posthumanists take the interface as one point of discussion, focusing on how

the interface as an object and a concept (re)defines the relationships of the body,

cognition, the interface itself, and our broader understanding of these terms.

The need to interrogate what constitutes an interface is especially pressing with the rapid development of new technologies. New hardware devices and interfaces

significantly differ from traditional desktop PCs and expand the significance and

functionality of computing both theoretically and pragmatically. Technologies are

sensitive to location, movements, and actions for both users and environments. For

example, mobile technologies center the body’s presence and states as they take into

account users’ purposes, behaviors, locations, etc., making the material body crucial in

understanding how interfaces operate. Others, like tactile and haptic interfaces, artificial

intelligence, and brain-computer interfaces (BCIs) are actively changing human-

computer interaction (HCI), pushing it from a largely visual operation to one that is

embodied and/or affective. Weiss, Propen and Reid note, “the world has become very

complicated. Complicated in the sense that a fundamental boundary, the one between

people and things, has become a moving target” (18). Interfaces are often understood as

the medium between, but as interfaces evolve, this concept of interfaces as a meeting

space between body and machine comes into question just like the concept of “human”

discussed at length in Chapter II. These new technological developments and devices open up new terrains for critical exploration as our interactions with them reconfigure


“interface,” “human,” and “technology” as concepts that are not singular entities but

aggregates.

This chapter, then, looks specifically at interfaces, first as designed objects and then as a philosophical concept. We interact and engage with interfaces every day. Software and hardware engineers and interface designers continuously build new material interfaces and adapt existing ones. They are a fruitful point of contact not only for human-

technology-object interaction but also for critical analysis. So, this chapter first defines

interface, teasing out the various ways the term can be understood. I will then examine some design principles to interrogate what these tenets privilege and how they operate.

This leads to an examination of the importance of interfaces in the field of rhetoric and composition. I follow with a discussion of how rhetoric and composition scholars, as well as user interface and user experience designers and engineers, can productively trouble some basic principles of design. Failing to interrogate these principles, I argue, is a type of technological sleepwalking. Lastly, I conclude with a discussion about a posthuman reconfiguring of the concept of interface using Miccoli to show that the concept of interface as a space-in-between does not hold once we understand how “human” and

“cognition” are constituted through relations in an aggregate, distributed ecology across material, biological, technological and social substrates.

Interfaces Defined

“Interface” is a complex term that can be broadly defined and contested. In its most basic Oxford English Dictionary (OED) definition, the noun refers to two basic concepts: “a surface lying between two portions of matter or space, and forming their

common boundary” and “a means or place of interaction between two systems, organizations, etc; a meeting-point or common ground…also, interaction, liaison, dialogue” (“interface”, n.). The verb “interface” means “[t]o connect (scientific equipment) with or to so as to make possible joint operation,” or “[t]o come into interaction with” (“interface”, v.). The Art of Human-Computer Interface Design defines an interface broadly as “a point of contact between two entities” (Laurel xii). Whether conceived as a boundary between two spaces or pieces of matter or as a device or program that connects two actors (human or otherwise), by these definitions, when two things meet and interact, an interface exists. Further, the Computer Desktop Encyclopedia understands the interface as both a structure and a function (“interface”). “Interface” as understood by those designing or analyzing physical human-computer interfaces (HCI) includes screens, keyboards, mice, touchpads, images, words, and more that allow the user to operate a computer. Others instead focus on “symbolic software that enables humans to use computers, and to access the many layers of underlying code that cause software to function,” typically the graphic user interface’s (GUI) text and graphics

(Lister et al. 338). In computing discourse, the definition can include “the physical arrangements and ergonomic configuration of computer systems, user operation of programs, and how the user interacts with the content to solve a task or to learn material”

(Marra 115). User operation-focused definitions typically center on software that allows users to (ideally easily) use the layers of hidden code that make the software (and user) carry out their functions. Interfaces represent data, dataflow, and structures of the computer, making them legible for easy human use, and simultaneously translate the human input

back to the machine. But this is not limited solely to human-to-computer translation and vice versa. Interfaces are layered sites of action, of interaction between users, hardware, software, content, and culture (any variation of these, in fact). Far from static, interfaces are dynamic, materialized objects representing the changing states of the software, data, and interaction.
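The bidirectional translation described here, machine state rendered legibly for the human and human input translated back into machine operations, can be glossed in a short sketch. The class, state, and commands below are hypothetical assumptions drawn from no real toolkit:

```python
# A minimal sketch of the computing sense of "interface": a point of contact
# that mediates in both directions. Names and values are illustrative only.

class Interface:
    """Mediates machine state -> human view and human act -> machine operation."""

    def __init__(self) -> None:
        self.state = {"battery": 0.87, "unread": 3}  # hidden machine state

    def render(self) -> str:
        # Machine-to-human: represent data and dataflow legibly.
        return f"Battery {self.state['battery']:.0%} | {self.state['unread']} unread"

    def handle(self, user_input: str) -> None:
        # Human-to-machine: translate a tap or command into an operation.
        if user_input == "open_mail":
            self.state["unread"] = 0

ui = Interface()
print(ui.render())      # Battery 87% | 3 unread
ui.handle("open_mail")  # the user's act becomes a state change
print(ui.render())      # Battery 87% | 0 unread
```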

Like the interfaces’ functions, the definition of interface remains fluid, constantly changing and broadening to include more than just the space in between user and device or some other boundary. Brenda Laurel notes that interfaces historically were understood as the means of communication between human and machine, as noted above, but the term “has come to include the cognitive and emotional aspects of the user’s experience” (xi). Interfaces have evolved with processing power and storage capacities, societal adoption of internet-based activities, ease of access, and other factors. As early as

1997, Steven Johnson wrote Interface Culture, arguing that our current age is “an interface culture” where the shift from analogue to digital is “as much cultural and imaginative as it is technological and economic” (40). This cultural moment, then, he describes as “the new medium of interface design winding its way through a broad swath of modern life…sometimes far removed from the computer screen” (S. Johnson 25). Lev

Manovich argues that when we engage with an interface, in fact, “we are no longer interfacing to a computer but to culture encoded in digital form”; we interact with a

“cultural interface” (“Art After Web 2.0” 80). He posits that interfaces are so pervasive that they are now “a key semiotic code of the information society as well as its metatool”

(The Language of New Media 66). As a metatool, then, interfaces “reflect the physical properties of the users/interactors, the functions to be performed, and the balance of power and control” (Laurel xii). Beyond serving a function of translation from code to user, the interface that once operated “under the cloak of efficiency… [is] now emerging—chrysalis-style—as a genuine art form” (S. Johnson 242). More than this,

interfaces might be “the art form of the next century” (S. Johnson 213). Interfaces, then,

are much more complex than the dictionary definitions imply; they have informatic

functions, but they also have socio-cultural power to captivate, facilitate, and influence users and their interactions with technology.

Human-Computer Interaction Design Goals

Reviewing the lengthy history of interface design and human-computer interaction (HCI) is outside the scope of this project, but a few key historical moments can illuminate some relevant principles of the field. First, the GUI was not designed initially for its aesthetics but rather for function. From the outset, usability, specifically

“user-friendliness” and “transparency,” has served as a guiding principle for interface design. Don Norman, for instance, states in a 1990 essay, “Why Interfaces Don’t Work”:

The real problem with the interface is that it is an interface. Interfaces get in the way. I don't want to focus my energies on an interface. I want to focus on the job….An interface is an obstacle: it stands between a person and the system being used. [...] If I were to have my way, we would not see computer interfaces. In fact, we would not see computers: both the interface and the computer would be invisible, subservient to the task. (209, 219)

In other words, interfaces should get out of the way, not be noticed at all. If an interface becomes evident or disruptive, it is a poor interface. In fact, Alison Head reminds us that


“many software developers say that the best designs are ones that users never give a

second thought” (4). For instance, take a desktop GUI. A desktop icon “makes sense” to a user, meaning she is unaware of the mediation taking place; she clicks the icon, and a function is executed without her considering the icon itself or its relation to the subsequent function. The icon thus feels natural, transparent, and intuitive as the user accepts it as the “real” action; it has vraisemblance. The screen the user sees, then, presents icons, pictures, interfaces, words, sounds, etc., as the interface fades from conscious attention. These are the invisible interfaces, the “hallmark of effortless user interaction and good design” (Head 4). With this invisibility and transparency comes a desire for intuitiveness. Steve Krug explains that interface designers’ goal “should be for each page to be self-evident, so that just by looking at it the average user will know what it is and how to use it” (18).
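A minimal sketch may make this invisibility concrete. The layers named below are illustrative assumptions, not any particular operating system’s event stack; the point is that a “good” design, by these standards, surfaces only the result:

```python
# Hypothetical sketch of the hidden mediation behind a single icon click.

def click(icon: str) -> str:
    """What the user experiences as one self-evident action."""
    trace = []  # the mediation exists, but the user never sees it
    trace.append(f"{icon}: hit-test by the window manager")
    trace.append(f"{icon}: event dispatched to the application")
    trace.append(f"{icon}: file resolved and loaded from disk")
    # A "good" design, by Norman's and Head's standards, returns only the result:
    return f"{icon} opened"

print(click("thesis.docx"))  # the icon "just works"; the trace stays invisible
```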

Harrison, Sengers, and Tatar chart three key developments or shifts in the field of

HCI. The first, “the man-machine,” comes from a combination of engineering and human ergonomics focusing on the most effective fit (Harrison, Sengers, and Tatar 4). The second paradigm understands the human mind and computer as symmetric, parallel processors. Questions of concern include: “‘how does information get in’, ‘what transformations does it undergo’, ‘how does it go out again’, ‘how can it be communicated efficiently’” (Harrison, Sengers, and Tatar 4). Flyvbjerg describes this paradigm as elevating “rationality and rational analysis to the most important mode of operation for human activity” (23). However, this paradigm does not account for how entertaining an interface is or how people experience and feel about the actual interaction.


Turning to phenomenology, the third paradigm concerns “embodied interaction”:

“the way in which we come to understand the world, ourselves, and interaction derives

crucially from our location in a physical and social world as embodied actors” (Harrison,

Sengers, and Tatar 6). This runs counter to the positivist, Cartesian view of cognition, which sees the body and mind as separate. HCI designers have moved away from the view that abstract thought and rationality take place internally and that the external world is stable and

subject to the subject’s interaction, not a part of it. Dourish, mirroring the posthumanist

approach to technology, further explains, “Embodiment is not a property of systems,

technologies, or artifacts; it is a property of interaction. It is rooted in the ways in which

people (and technologies) participate in the world. In contrast to [the] Cartesian approach that

separates mind from body and thought from action, embodied interaction emphasizes

their duality” (279). Embodied interaction requires shifting the notion of cognition as

abstract and information-based to an understanding that thinking happens from doing

things, acting, in and with the world. Further, Klemmer, Hartmann, and Takayama argue

that GUIs problematically overlook action-based learning and memory. Embodiment,

they argue, is not simply having a body, but rather understanding the body

phenomenologically, seeing interaction, action, knowing, etc. as constituted via

contextually-situated human actors (Klemmer, Hartmann, and Takayama 140-9).

Epistemically speaking, there are differences in these paradigms. Under the first

paradigm, the construction of meaning matters only pragmatically; it is not really

considered unless something is in the way or causes an issue. Under the second, the

construction of meaning is constituted by data flows. The third, however, privileges the


construction of meaning and understands it as taking place alongside information,

happening spontaneously and collaboratively through interacting embodied, situated

subjects. Interaction, then, is integral. Meaning, rather than being constituted by

information flow, is intricately connected to the interactions, perspectives,

resources, etc. of those designing and/or using the interface (Harrison, Sengers, and Tatar

7).

Since meaning is situated and local, knowledge is as well. Haraway explains

situated knowledge as one’s ability to know the world and oneself (“Situated

Knowledges” 581-9). In HCI discourse, situated interaction is constituted by one’s

physical and social contexts and spaces. The embodiment of knowledge requires that we

consider various perspectives for interaction instead of a specific universal set of metrics

for interaction design. And knowledge is tied directly to place in terms of design; each

particular context defines the nature and meaning of an interaction. Using architectural

theory, McCullough posits that as ubicomp gets closer and closer to our everyday experience, interface designers must think about how embedded technology, communities, and environments interact (207-213). He explains, “response to place. . . demands major choices in the contextual design of technology” as designers must

consider the dynamics between environments, spaces, architecture, users, and technology

(McCullough 207).

The third paradigm of human-computer interaction is clearly the one best suited to

meet the needs of ubiquitous and pervasive computing. And with it, designers, engineers,

and theorists are starting to question the levels or necessity of transparency and other


standards. However, these basic tenets have guided interface engineers and designers to seek invisibility, transparency, naturalness, and intuitiveness when creating interfaces. The main goal remains: computing devices are extremely complex, even more so when operating on a multitude of platforms and devices. As they increase in complexity, it is the interfaces that make devices easier to use, which usually means the interface is invisible and natural, getting the interface (and thinking about it) out of the way so the focus is on the interaction and the productions resulting from it.

Natural User Interfaces

. . . all things will be produced in superior quality, and with greater ease, when each man works . . . in accordance with his natural gifts, and at the right moment, without meddling with anything else. ~ Plato, The Republic

I begin with this quotation from The Republic because it stresses the “natural gifts” of man and the integral link these gifts have to “ease” and “quality.” Plato’s statement could in fact be a mantra for the principles guiding developments in HCI and the tenets informing the natural user interface (NUI). These interfaces merit study, particularly since they seek to align material design with humans’ seemingly natural cognitive capabilities and behavior.

In the past few decades, HCI research and design have moved beyond the standard, traditional window, icon, menu, pointing (WIMP) interaction modes and past typical GUIs. Building on the idea of embodied cognition, tangible, gestural, and other embodied user interfaces have come into development. Some scholars, like Shaer et al., group such interfaces “under the umbrella of reality-based interfaces


(RBIs)” (1515). These interfaces draw upon “user’s preexisting knowledge of the real, non-digital world” and offer more accessible, natural, intuitive user interaction to reduce the mental energy required to focus on the interface itself, freeing up that effort for completing the task at hand rather than for managing the interface (Shaer et al.

1515-2). Such interfaces take advantage of four key interactional features: human perception of physical phenomena (such as velocity, scale, gravity, and the persistence of objects) alongside bodily, social, and environmental awareness and skills (Shaer et al. 1512). Further, RBIs are contingent upon embodied cognition, which defines cognitive processes as involving physical space and the body.
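One hedged illustration of such reality-based design is momentum (“flick”) scrolling, which borrows the user’s preexisting sense of velocity, friction, and the persistence of moving objects. The constants below are arbitrary assumptions, not values from any RBI system:

```python
# Illustrative sketch: a flicked list behaves like a physical object, coasting
# under momentum and slowing under friction. All constants are assumptions.

def momentum_scroll(position: float, velocity: float, friction: float = 0.9,
                    dt: float = 1.0, min_speed: float = 0.5) -> float:
    """Advance the scroll position until friction damps the flick,
    as a real object would coast to rest."""
    while abs(velocity) > min_speed:
        position += velocity * dt  # persistence: the object keeps moving ...
        velocity *= friction       # ... while friction steadily slows it
    return position

print(momentum_scroll(position=0.0, velocity=40.0))  # coasts, then settles
```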

Other scholars define such interfaces, which go beyond the GUIs that dominate computing today, as natural user interfaces (NUIs). Wigdor and Wixon

argue that “Natural user interfaces (NUIs) seem to be in a similar position to that

occupied by the GUI in the early 1980s,” a position that theoretically improves users’

experiences and interactions with technology, easing its use by making the interactions

natural (5). There is not one formula or a standardized, all-encompassing definition for

creating an NUI, but a useful definition is “a user interface designed to reuse existing

skills for interacting directly with content” (Blake 1). But, importantly, NUIs are not

simply a “natural veneer over a GUI” (Wigdor and Wixon 5); rather, NUIs enable “the

user to operate technology through intuitive actions using gestures, voice, touch and the

NUI becomes invisible in a way that the user does not have to put a lot of cognitive

efforts into interaction” [emphasis added] (Roupe, Bosch-Sijtsema and Johansson 42).

NUIs are understood and evaluated by their ease of use, both in completing tasks and in the user’s ability to learn how to interact with them. So, what guides the development of

NUIs are the principles that they should be natural, invisible, intuitive, and easy to learn.

The goal of these characteristics is, as Norman implies, to get the interface ideally out of

the way, so users can attend to the action they wish to perform, not how to get the

interface to work so as to carry out that action.

As their name makes evident, the chief principle in understanding these types of

interfaces is the term “natural.” There are a few paradigms for what defines an interface

as natural. Natural gestures based on human movements constitute meanings that

facilitate the technology’s operations. The body, then, makes the functionality of the

system apparent because the interactions mirror actions we know already. In essence,

they feel like the real world. Another framework, human-to-human interaction, aims to

create systems wherein users feel as though they are interacting on human terms rather

than technological ones (Gates). With this type of system the user and technology are on

a fairly equal playing field, creating a common understanding of the activity or task at

hand, and this type of interaction is often implemented through a multimodal approach,

particularly using speech (Lwabona; Bernsen and Dybkjær). Wigdor and Wixon’s natural approach centers on the user’s feelings or attitudes when using the interface. Natural in

terms of interface design for them refers to “a property that is actually external to the

product itself . . . not about the interface at all. Quite the opposite. We see natural as

referring to the way users interact with and feel about the product, or more precisely,

what they do and how they feel while they are using it” (Wigdor and Wixon 9). So, it is

not the interface that “feels” natural, but rather the interaction the user has with it does. In


a sense, it means to feel “at home,” making the user “feel like a natural” as the design

echoes human abilities (Wigdor and Wixon 9, 13). As such, one design principle for

NUIs is that once the user is an expert, the interface creates “an experience that . . . can

feel like an extension of their body,” echoing some of the posthuman theorists in the

previous chapter (Wigdor and Wixon 14). What is crucial with NUIs is that the body

itself becomes an interface, and its movements, gestures, and even vocalizations are key

in using them. Think about the anecdotes from Chapter I with John Anderton, Elon

Musk, and Google Glass where body movements and expressions directly operate the

interfaces. Interfaces, in this framework, should be enjoyable and should amaze the user

as the user engages directly with the content via a transparent interface (Wigdor and

Wixon; Valli). And transparency is integral in what constitutes a “natural” interface.13

When the user interacts with the right interface, it is as though they are engaging the content itself without feeling that an interface is mediating the interaction. Through this invisibility, then, the interactions become habitual, naturally coming to the user via experience and previously learned cultural and material skills (Valli; Blake). Bill Buxton, a multi-touch technology expert, defines a “natural” interface as one that “exploits skills that we have acquired through a lifetime of living in the world” (qtd. in Larson). This definition illuminates how “natural” refers to both innate and learned skills gleaned from environmental interactions. And NUIs, like other interfaces, should also appear invisible to the user. Wigdor and Wixon clearly assert that with well-designed NUIs, “applications, as such, are invisible to the user. And they should be” (139).


One key means of achieving this natural feeling is the fact that NUI interaction is based on immersion, moving beyond two-dimensional planes, often by rendering objects so they have a perceived volume that users can engage via gesture, exactly like Elon

Musk’s technology mentioned in Chapter I. This immersive design allows users to direct the interface in three dimensions, or at least along all three axes (x, y, and z). Gesture allows users to navigate and orient themselves to the NUI environment without needing a screen; instead, the user uses the body to map the space and content without having to discern the right software to carry out an action. In fact, the “right” software is hidden, invisible in NUIs, because they are designed to be perceptually driven and easy to use rather than driven by the programming model. This shift to users’ natural and perceptual abilities stems from the sheer number of users who do not have programming knowledge, nor want it, but who still require computers to complete tasks, process data, etc. Gesture, then, becomes one means of carrying out what some may have previously viewed as too technically tedious. In fact, this is precisely why Musk designed his interfaces as he did: he found that too much time and effort were focused on operating interfaces rather than on the design of his rocket components. Musk wanted the interface to get out of the way to allow his engineers to intuitively build components with their hands via gesture.
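A minimal sketch, under assumed names and with no real Leap Motion or CAD API, suggests the kind of mapping such a gestural 3D interface performs: hand displacement along the x, y, and z axes becomes rotation and zoom of the model, so the user manipulates the object rather than the software:

```python
# Hypothetical mapping from hand motion to a 3D model view. The scaling
# factors and sensor units are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class ModelView:
    yaw: float = 0.0    # rotation around the vertical axis, in degrees
    pitch: float = 0.0  # rotation around the horizontal axis, in degrees
    zoom: float = 1.0   # scale factor

def apply_hand_motion(view: ModelView, dx: float, dy: float, dz: float) -> ModelView:
    """Translate hand displacement (arbitrary sensor units) into view changes."""
    view.yaw += dx * 0.5           # sweep the hand sideways to rotate
    view.pitch += dy * 0.5         # raise or lower the hand to tilt
    view.zoom *= 1.0 + dz * 0.01   # push toward or away to zoom
    return view

view = apply_hand_motion(ModelView(), dx=20.0, dy=-10.0, dz=5.0)
print(view)  # ModelView(yaw=10.0, pitch=-5.0, zoom=1.05)
```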

Body Movement and Gesture in Interface Interaction

Designing such interfaces and theorizing the interactions requires a holistic approach to understand how users interact with each other and their environments, using the body as a key contact point. Body movements, specifically gestures, are integral in


these interfaces as can be gleaned from Musk’s gestural interfaces as well as the fictional

ones discussed in Chapter IV. Adam Kendon, a foremost authority on gesture, points out

that these natural interactions have a “‘reportive’ function, the ability to tell each other

things” (“Gesticulation” 350). To fulfill this reportive function, “Speech and movement

appear together, as manifestations of the same process of utterance” (Kendon,

“Gesticulation” 352). Gesture, as a seemingly natural type of expression, is both individual and shaped by social customs and conventions. Bodily gesture serves as a consort to linguistic communication. In fact, all communication is multimodal, combining manual and vocal modalities on a continuum (Kendon, Gesture 105). Other scholars make similar claims about gesture and speech representing a continuum of communicative behavior (McNeill 1992; Goldin-Meadow, McNeill and Singleton;

Liddell and Johnson). To mirror the reportive function, HCI combines speech and movement, in essence, gesture recognition (Lala 6). Designers frequently consider gestural language as a primary method of user communication and interaction. Encoding gesture into NUIs requires “fundamental clarity (each gesture is well-defined) and its overall coherence (the gestures make sense together)…[a] genetic epistemology of cognition . . . a well-developed and easy-to-learn system will be one that operates

logically in a way that is analogous to human reasoning” (Wigdor and Wixon 137).
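In code, Wigdor and Wixon’s two requirements might surface as follows; the gesture names and thresholds here are hypothetical. Clarity means each gesture has one unambiguous definition; coherence means the gestures live in a single vocabulary that maps, analogously to human reasoning, onto one family of actions:

```python
# Hypothetical gesture vocabulary; thresholds and names are assumptions.

from typing import Callable, Dict

# Clarity: one unambiguous definition per gesture.
def is_swipe(dx: float, dy: float) -> bool:
    return abs(dx) > 50 and abs(dy) < 10   # mostly horizontal, decisive

def is_pinch(spread_change: float) -> bool:
    return spread_change < -20             # fingers moving together

# Coherence: the gestures form a single registered vocabulary whose meanings
# make sense together (move between items; shrink or close the current item).
VOCABULARY: Dict[str, Callable[..., bool]] = {
    "swipe": is_swipe,
    "pinch": is_pinch,
}

if is_swipe(dx=80.0, dy=3.0):
    print("next page")  # the gesture "makes sense" with its outcome
```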

However, in digital contexts where the physical body is often not present, such as discussion forums, social media, and more, how and where does gesture factor in? It may seem that bodily gestures would fall by the wayside of rhetorical study, as they did in the rhetorical tradition. James Porter posits that “Because delivery


came to be associated almost exclusively with speech situations and with functions of the

speaker’s body (voice, gestures), it clearly seemed less relevant, if not irrelevant, to

written composition than the other canons (particularly dispositio and elocutio)” (207).

However, the canon of delivery, under which gesture and bodily movements belong, has had a resurgence with the proliferation of computer-mediated communication (CMC). Although most CMC interactions take place in digital space with no physical body present, delivery still matters, but there is more work to be done on how the physical body operates in such spaces.

In some digital rhetoric scholarship, for instance, the body itself, its movement and gestures, often is not the focus when examining digital delivery. Emily Hart, for example, interrogates emoticons (and I would add emojis) as “replacement gesture[s]” for communicating gesture and body language in electronic environments that might suppress communication via tone, speech, etc. (34). James Porter’s “digital delivery” considers the body in relation to identity, in how the user constructs and represents the virtual body in digital space. He, too, sees gesture as relevant in online spaces in the form of emoticons that stand in for nuanced human communication. But these scholars are not really addressing physical bodily movements and gestures.

While worthy of interrogation, these scholars’ frameworks are inadequate for analyzing the dynamics of the physical body operating an interface via gesture, specifically one that corresponds directly with a digital avatar or holographic projections.

There are scholars whose work is starting to tackle the complicated dynamics of the body, its movements and gestures, and interfaces.


Gestural Interfaces and Research

Gestural interfaces in particular are coming into their own as gestural touch

devices like iPhones and tablets demonstrate. But this is not the only application, and research on gestural interfaces has been ongoing since the 1980s. Karam and

Schraefel’s overview of the gestural interaction research finds that “For over 40 years,

almost every form of human gesturing that is possible can be seen . . . as a means of providing natural and intuitive ways to interact with computers across most, if not all computer application domains” (1). More complicated gestural interfaces that are moving toward NUIs are also gaining popularity, mostly in the context of gaming. In fact, the commercial application of gestural interfaces mostly centers on gaming (Bhuiyan and

Picking 3). For instance, one failed effort came from SEGA, called the “Activator,” an octagonal device that the user would place on the floor and stand inside. The sensor controller allowed the user to control the game character with the body as laser technology read the body’s movements and translated them into game controls and movement. The technology’s tagline was, “You are the controller” (Kimak). However, the sensor proved too difficult to use and was too inaccurate to take hold in the gaming market (Kimak). Further, the Nintendo Wii offers another example of a gesture-based interface for gaming. Released in 2006, the Wii was met with accolades from the gaming community for bringing physical body movements into gameplay.

To offer a few additional examples, Microsoft describes its Kinect technology as “A set of technologies that enable humans to interact naturally with computers” (“Kinect for Windows”). The newest version of the sensor promises to “provide developers


with the foundation needed to create and deploy interactive applications that respond to peoples’ natural movements, gestures, and voice commands . . . [and] better understanding humans, objects, and their environments” (“Kinect for Windows”). Kinect

opens up numerous possibilities for implementing natural user interfaces in multiple

contexts, as a cursory look at their website shows. Further, Microsoft Research is

exploring how gesture-based input can improve medical fields and processes. For

instance, Johnson et al. research gestural technology in interventional radiology. One

problem with using touchscreen technology in this field is the potential for surgeons or

doctors to breach the sterile/unsterile line by touching screens to view digital images

from cameras in the body (Johnson et al. 1). Currently, surgical teams manage to avoid this problem, but the solutions require “carefully and collaboratively orchestrating the

team in terms of the spatial arrangement of people, artifacts and instrumentation during

the procedures” (Johnson et al. 1). However, this is often restrictive, imposing

unnecessary tasks upon members of the team, particularly when miscommunication

occurs (Graetzel et al.; Wachs et al.). Further, Wachs et al. point out that to touch a

screen, the surgeon must move away from the patient in order to examine and manipulate

the images generated, distracting attention away from the patient to the technology,

which is less than ideal. Therefore, researchers are working towards new gesture

recognition technologies to eliminate the need to physically touch a material interface

(Wachs et al.; Stern et al.; Graetzel et al.). Kinect in particular, Johnson et al. argue, has the potential to expand the use of gestural interfaces beyond gaming contexts and into more

practical, scientific ones (Johnson et al. 2).
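A hedged sketch of the touchless pattern this research aims at might look as follows. The gesture names and the recognizer are assumptions, not the Kinect SDK; note the explicit “engage” gesture acting as a clutch so that stray movement over the sterile field triggers nothing:

```python
# Hypothetical touchless image control with an engage/disengage clutch.
# Gesture names and thresholds are illustrative assumptions only.

class TouchlessViewer:
    def __init__(self) -> None:
        self.engaged = False
        self.zoom = 1.0

    def on_gesture(self, gesture: str) -> None:
        if gesture == "raise_hand":        # clutch in: begin control
            self.engaged = True
        elif gesture == "lower_hand":      # clutch out: ignore further motion
            self.engaged = False
        elif self.engaged and gesture == "push_forward":
            self.zoom *= 1.2               # zoom the scan; hands never touch a screen

viewer = TouchlessViewer()
for g in ("push_forward", "raise_hand", "push_forward", "lower_hand"):
    viewer.on_gesture(g)
print(round(viewer.zoom, 2))  # 1.2: only the engaged gesture counted
```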


Elon Musk’s innovative technological interfaces described in the introduction

have actually been referred to by him and popular media as an “Iron Man interface.” His

software combines Leap Motion, a gesture-control system, and Siemens NX software,

used to design rockets for his company SpaceX. Initially, as described in Chapter I, he

uses a 2-D screen to move around simple wireframes of rocket parts. But using the

popular Oculus Rift, a wearable virtual reality gaming headset, in combination with his

software, Musk can grab, manipulate, zoom, and rotate a fully 3D CAD wireframe model

of the Merlin rocket engine. Specifically, Musk’s technology allows him to manipulate

the model on a freestanding glass projection, using hand movements to interact with the hologram. Finally, Musk can take the models out of the virtual and, with a 3D printer, make them material. His technology looks like a “real world manifestation of the futuristic interfaces in Minority Report and Iron Man” (Meghan Neal). Musk hopes to revolutionize the ways we interact with computers, as he says, “Right now we interact with computers in a very unnatural, 2D way. . . . And we try to create these 3D objects using a variety of 2D tools. And it just doesn't feel natural—it doesn't feel normal, the way you should do things” [emphasis added] (qtd. in Meghan Neal). This technology,

Musk argues, will shape design and manufacturing by offering the user the ability “to take the concept of something from your mind, and translate that into a 3D object really intuitively” (qtd. in Meghan Neal). Intuitiveness is key for Musk when designing these technologies because they offer the ability for his engineers to put their focus on the actual design of the rocket parts, not on how to get the computer to build the part by entering keyboard commands or pointing, dragging and clicking icons with a cursor. He


specifically aims to engage the body in the design, using it as the primary tool for designing his products. His interfaces provide users with the efficiency of not thinking about the interfaces but rather about the product resulting from the engagement, the machine/rocket parts printed.

One last example stems from commercial research seeking to advance medicine.

Philips and RealView Imaging conducted a clinical study in Israel to explore the potential and feasibility of an interactive 3D holography system for heart disease procedures. The technology displays full-color, real-time holographic images of the patient’s heart, projecting a 3D image of the heart floating in the air without any need

for wearables or 2D screens. This interface proffers a “hyperrealistic user experience”

creating what they called “imagined intimacy…[that] allows users the full freedom to

engage with the 3D image literally” (RealView Imaging). This intimacy allows the user

to maneuver, manipulate or rotate the hologram in any way needed, even allowing the

user to crop the heart to view specific planes and cross sections to plot specific

points. And the hologram is real-time as the physician holds “the patient’s virtual heart

literally beating” in her hands. Dr. Einat Birk, for example, states, “The holographic

projections enabled me to intuitively understand and interrogate the 3D spatial anatomy

of the patient’s heart” (qtd. in RealView Imaging). This technology does not require any

equipment other than simply the hands (and an optional stylus to crop and cut) to manipulate the holography produced.

These interfaces demonstrate a move toward more “natural” and “intuitive” technological interfaces beyond the WIMP


interactions of many common GUIs, using the material body, the hands, arms, etc. as the

primary operator. Scholars are now starting to theorize the body with these interfaces and

interactions, but they are largely HCI scholars. Within the last two decades, in fact, the

field of HCI has explored the use of the body as an interface (Bruder et al.; Moeslund,

Hilton and Kruger; Poppe). Studies conducted on various interfaces like motion capture

systems and head mounted displays like the VR Oculus Rift show that using the body’s

motions as a controlling interface enhances navigational performance and user experience

(Riecke et al.; Ruddle and Lessels). Technologies like Kinect and the Wii support NUI as the body becomes the interface. The release of the Xbox 360 Kinect sensor system

made gestural interfaces relatively affordable both for the gaming system and computers,

and studies followed which investigated physical supports for gesturing with arms and

hands. Some find that users get fatigued with continuous arm movements (D’Souza et al.;

Park et al.; Stannus et al.). However, other research suggests that using the physical body to rotate and move helps users better understand bodily and spatial perception (Riecke et al.; Ruddle and Lessels). Roupe, Bosch-Sijtsema and Johansson further argue that “the use of physical human rotation and movement is not only user friendly but also enhances understanding of the virtual space” (43).

While these HCI scholars have made valuable contributions, rhetoric and composition scholars can extend these pursuits by continuing to examine the rhetorical aspects of interfaces as described above, extending this research into NUIs, brain interfaces and more. Most of the rhetorical scholarship concerning interfaces as noted above explores interfaces and how they are not merely windows to data, but actually


affect the text itself. Some rhetoric, composition, and digital media theorists have

interrogated and critiqued the design principles of interfaces, and this work can be further

extended into NUIs. Interrogating the language with which we discuss them is a useful

starting point.

The development of these interfaces from text-based interfaces to graphic user

interfaces to embodied interfaces is important in how we theorize interfaces as objects.

Seung-hoon Jeong specifically argues that we can see a key trajectory in interfaces as

they evolved “from instrument to symbol to organism; from informatics to aesthetics to

philosophy” (6). So we have to offer new ways, new philosophies to theorize these types

of interfaces, philosophies that incorporate concepts of the body, space, and materiality to

theorize interfaces as objects.

The Interface in Rhetoric and Composition

Interfaces are designed with intention, and access and control are situated within them. Human-computer interaction, then, is rhetorical and ideological, so digital rhetoric

scholars have shown an interest in theorizing interfaces and their design (Selfe and Selfe;

Eble; Pullman and Gu; Wysocki). Barton and Barton, for instance, suggest users see

visual representations as maps, and as maps they are complicit with “social-control

mechanisms”; therefore, they are imbricated with power as opposed to neutral representations (235). When something is mapped, then, one must examine the “rules of inclusion” that “determine whether something is mapped, what aspects of a thing are mapped and what representational strategies and devices are used to map those aspects”

(Barton and Barton 235-8). Soon after, in 1994, Selfe and Selfe assert the need to


understand interfaces as maps rather than neutral or objective points of contact. They

argue that software creates and sustains ideological spaces that “enact among other things gestures and deeds of colonialism, continuously and with a great deal of success” (Selfe and Selfe 484). Interfaces, then, like maps, represent and are embedded with ideology and power, “never ideologically innocent or inert” (Wood qtd. in Selfe and Selfe 432).

Wysocki and Jasken discuss the rhetorical nature of interfaces and interface design in “What Should Be an Unforgettable Face,” looking back through the first twenty years of interface research. They found that, early on, a number of articles appeared in

Computers and Composition that discussed the rhetorical dimensions of interfaces

(Sullivan; Cubitt; Taylor; Moulthrop; Selfe and Selfe). These scholars sought to “broaden our views so that we could see how interfaces are thoroughly rhetorical” (Wysocki and

Jasken 30). However, with rhetorical aspects largely excluded from software and interface design handbooks, the HCI principles of transparency and invisibility came to define interface success, as the best interfaces were those hidden from the user’s awareness. Examining eight manuals, Wysocki and Jasken found that only three texts mention design, and even these do so in a technical capacity, offering “very little space or effort to interrogating the cultural, political, social and economic rhetoric embedded in interfaces” (48).

The 2009 Computers and Composition special issue is solely devoted to interface studies. The contributing authors of this issue all aim to open up a space for sustained rhetorical research of interfaces (Carnegie; Carpenter; DePew and Lettner-Rust; Knight; Rosinski and Squire). The editor of this issue notes, “An interface is a sort of no man’s


land, a limbo between things. It is not surprising, then, that interface studies—the cultural

and rhetorical analysis of interfaces—is also in a borderland, a zone of ambiguity”

(Haefner 135). Further, he posits that interfaces, possibly due to “the presumption of transparency, have not received the critical attention that they deserve” (Haefner 135).

Haefner warns against following transparency as the norm for interface design. Stuart

Selber concurs, arguing for interface literacy that includes the ways that interfaces and

their structures foster or hinder communication and production as well as shape the

implications of the mode of production (136).

Collin Gifford Brooke also addresses interfaces, asserting that rhetoric scholars’

“unit of analysis must shift from textual objects to medial interfaces” (Lingua Fracta xvi). Textual objects are rooted in “individual texts . . . and large theoretical structures” and ignore the “excluded middle [wherein] interfaces as rhetorical practices . . . may span multiple texts without achieving the level of abstraction of literary theory” (Lingua

Fracta xvi-xvii). In essence, Brooke believes that focusing solely on the textual object and large, broad literary theory has resulted in interfaces and their ability to change and disappear being ignored, despite the fact that they can shape the message. Brooke sees the need to theorize and analyze the multitude of interfaces and technologies through which the text is instantiated. Failing to do so will result in incomplete analyses.

Barbara Warnick does similar work in “Looking to the Future: Electronic Texts and the Deepening Interface,” stressing the importance of understanding interfaces as a

“portal for user-system interface” rather than seeing the screen of a digitized text as a surface, an object to be read (328). This view, she warns, minimizes or flatly ignores the effect of the interface on the text itself. Only by examining the interfaces, the code, media, and more can scholars critically parse the numerous affordances and drawbacks of technologies in communication and help to refine and shape their design and use.

These scholarly research pursuits stress the need to dig into interfaces further, to better understand their rhetorical and social dimensions and functions. While not exhaustive, this survey shows key scholars interrogating interfaces from a networked, interdisciplinary perspective, an approach that scholarly production must take to offer more nuanced and complete analyses of digital and/or multimedia rhetorical and textual production.

Troubling the Discourse of HCI

Rhetoric and composition scholars can further their critical work by first continuing to critically analyze the discourse of interfaces, specifically the principles of interface design. Concepts like objectivity, transparency, and naturalness are terms that bear scrutiny, particularly since they are guiding principles. Rhetoric and composition scholars, even those not interested in digital rhetoric or technology studies, should critically consider the impact of technological interfaces. This is particularly so given the push to incorporate multimodal, multimedia, and digital writing into classrooms.

The interface, whether a computer, smartphone, search engine, software, etc., is social and non-neutral, bringing to the communicative act an entire set of assumptions, influences, and significances. And these social aspects must be critically analyzed to truly foster critical thinking when working and writing in networked environments. When technology is considered solely transparent, and therefore objective or neutral, this fosters a limited understanding of how technologies, particularly


interfaces, work. This neglects key chances to interrogate interfaces’ impact on

communication in digital spaces, and it ultimately neglects the aim of truly fostering

critical and rhetorical reflection.

Jay Bolter and Diane Gromala, discussing the nature and properties of interface design, posit that it is not a software or hardware designer’s goal to create an interface that reflects the user, a point discussed at length above. Rather, “they usually assume that the interface should serve as a transparent window. . . . They expect that the user will focus on the task, not the interface itself. . . . If the application calls attention to itself or intrudes into the user’s conscious consideration, this is usually considered a design flaw”

(Bolter and Gromala 375). This illustrates the assumption that technology should be

transparent, invisible to the user. The chosen technology should be a window, as the user

has “an unimpeded and undistorted view of the information that lies ‘beyond’ the

interface. The computer screen, or portions of it, should function as the user’s window

onto a world of data” (Bolter and Gromala 377). But this means that whatever technology is in use, whether a computer, smartphone, search engine, or piece of software, it is not treated as a point of critical analysis, as a point of contact that shapes cognitive and communicative processes.

Further, Jay Bolter and Richard Grusin describe how technology transparency

mediates the user experience. They assert, “Our culture wants to both multiply its media

and to erase all traces of mediation: ideally, it wants to erase its media in the very act of

multiplying them” (Bolter and Grusin, Remediation 5). In order to achieve immediacy of

the experience, interface designers seek “an ‘interfaceless’ interface,” absent of visible


tools, allowing the user to engage objects and content and navigate the space (Bolter

and Grusin, Remediation 23). So, immediacy created by the invisible interface naturalizes

the interaction with the technology, ideally stripping away the user’s awareness of the

interface altogether.

The Myth of Objectivity and Neutrality

However, one could charge that beneath this insistence upon technology

transparency lies an assumption, whether acknowledged or not, that technology is

objective, a charge similar to that leveled at technology-as-extension theories. But, far

from being objective, technologies are “deeply interwoven” into the social and political

contexts in which they are created and used, a point that we can see in our cultural texts

like science fiction, which I address in the chapter that follows. Langdon Winner argues

that technologies “are ways of building order in our world. . . . Consciously or unconsciously, deliberately or inadvertently, societies choose structures for technologies

that influence how people are going to work, communicate, travel, consume and so forth”

(256). So, ignoring the interface and its impact on communication is deeply problematic,

particularly when the aim is to develop critical thinking in digital and/or online contexts.

Stuart A. Selber and Bill Karis posit, “We cannot ignore the political and ethical

dimensions of the interface in teaching human-computer interaction principles. Too often

in science and engineering contexts, however, computers are viewed as neutral tools,

machines that support the work of interface designers and users in apolitical ways” (112).

The authors find that most administrators, faculty, or staff agree that students should be

taught how to use technology to accomplish specific tasks, but few if any recognize that

the interfaces and technology we use shape the tasks we perform. Technologies are not simply neutral tools used to complete tasks. They are active participants in the creation of communicative production, and to become smarter users, we must think about these influences. In essence, Selber and Karis point to an assumption of objectivity that prevents critical examination and analysis of the technologies themselves. In a later work,

Selber further develops his argument about why envisioning technology as a neutral tool is problematic. He asserts, “As a human extension, the computer is not self-determining in design or operation. The computer, as a tool, depends upon a user, who if skilled enough can use and manipulate its (non-neutral) affordances to help reshape the world in potentially positive ways” (40). However, he further notes that if one uses a computer for a specific, individual purpose beyond its initial design, “the tool metaphor raises issues of responsibility silenced in such philosophies of technology as autonomous technology and technological determinism” (Selber 40). In essence, Selber’s argument illustrates that understanding technologies as neutral tools removes them from their social context in terms of their design, purposes, actual uses, and effects.

To counter this perception, engineering scholars Gana and Fuentes argue for a paradigmatic shift in how engineers approach interface design, advocating that engineers see technological development and design from a social theoretical framework, “as a human practice, with social meaning” (437). Technologies, in essence, are not divorced from those who design and build them. Rather, the experiences, cultures, history, and politics of the designer and/or user affect how technology is designed, implemented, and used, as well as the impact it has. To elucidate their point, Gana and


Fuentes describe in detail two historical ways of conceptualizing technology. First is the

perception of “technology as neutral,” which envisions technology as developing “along

a linear process, autonomous with respect to society, and always oriented towards

improved efficiency and economic yield” (Gana and Fuentes 437-8). This viewpoint

reserves technological and interface design to those with technical expertise. At its core,

this vision understands technology as an autonomous tool, something that is built by

experts to assist a user in effectively accomplishing specific tasks. The user does not

have the skill to design, manage, or change the technology as s/he lacks the expertise to

do so. The second, contrasting vision is “technology as a social activity” (Gana and

Fuentes 438). In this vision “technology develops in conjunction with society and various

social actors which are intrinsically woven together. . . . One assumes that the

involvement of citizens is fundamental. . . . Following this vision, management becomes a shared activity; ideally between all actors” (Gana and Fuentes 438). This opposing

vision conceptualizes technology as social, never neutral. From this perspective, the development, design, and use are directly influenced by the social context of the user and

designer who both make decisions about its uses and management in a collaborative way, making the future of technology a participatory enterprise.

Transparency as Sleepwalking

But does this reliance upon transparency, neutrality, and objectivity undermine or

compromise developing a truly critical, analytical awareness of technology? We know

that as the computers and digital technologies that shape our composing processes

become ubiquitous, they become more transparent. As users adopt a technology, it

becomes more common, which in turn lessens its visibility for critical analysis as it falls into a “sheltered invisibility” (Michael Neal 29). And Christina Haas notes that this transparency often benefits writers in many ways as it keeps the technology from intruding into the task at hand (xii).

However, Haas also warns that with digital technologies, “the images seen by looking through technology may be distorted without looking at the technology itself in a systematic way” (xi). Gana and Fuentes further argue, “One of the errors of modern civilization has been to understand and explain technology as if it were neutral and universal, completely ignoring the responsibilities and complexities of the transformation, bringing us to a stage of ‘technological sleepwalking’” (445). Haas’ and

Gana and Fuentes’ research points to the problems of not analyzing the ways that technology can shape composing and communicative processes.

Tony D. Sampson also asserts that technology in networked society often results in “somnambulism,” as the user sleepwalks through the critical process of interrogating the social relations of a network and the social impact of the interface itself (12).

Sleepwalkers pervade network exchanges as users simply use technologies without interrogating them, then pass on their social behavior unconsciously. These social inventions are “then contagiously passed on, point to point… for imitation, feeding into a continuum of invention and further adaptations of the entire social field” (Sampson 25).

This process in turn makes users somnambulists, “sleepwalk[ing] through everyday life mesmerized and contaminated by the fascinations of their social environment”

(Sampson 13). So, originality of invention becomes seemingly impossible as users are


rooted in this social web of contagion, which is further complicated by their uncritical

acceptance of the technology as a tool, not seeing it as part of the social web itself, and

ignoring the fact that the interface is just as imbricated in the social as the user. This

technological “sleepwalking” does not foster critical thinking about the technologies we

use every day. And it does not encourage users to truly consider the choices made when

deciding on the best technology, interface, and mode of writing for various rhetorical

contexts. Rather, a reliance on transparency of technology omits key aspects of analytical

expertise.

Interrogating the “Natural”

Some designers and scholars have questioned the word “natural” to describe the

function and core competencies of NUIs. Donald Norman, for instance, is one of the most

vocal critics of “natural” interfaces, and he levels a number of criticisms. First, he argues that a universally common understanding of gesture simply does not exist. Gesture varies across cultures, and gestural communication varies so greatly that calling it “natural” is counterproductive (Norman 6). Further, he argues that the functionality of the action is invisible until the action, the gesture, is executed. That the action is not visible can be problematic because the user cannot look at the interface and see what to do; instead, users have to act and then figure it out based on the system’s response, as is typical of using any technology or tool. However, gestures are ephemeral, leaving no data behind, which impedes knowing what worked or did not with a given system. He also notes that gesture-driven interfaces in particular do not do well at discerning users’ intentions, often reading gesture into bodily movements that are not intended for interface

interaction (Norman 10). Norman does not think that gestures lack applicable potential; rather, they will gain it only after they are standardized, and even this standardization will not guarantee the interface will suit all contexts. Some of the interactions via gestures are not natural, even though they are common with interfaces. But it is not the naturalness of the gesture that cemented it as a normal response to a screen. Rather, it has become “natural” through habit, an evolving understanding of touch technologies’ affordances, and subsequent cognitive links (Norman 10). Norman does not imply that gestural interfaces will never feel natural; rather, we are in the early stages, so it will take time as users learn the interactions, which then start to feel natural.

Alan Boykiw also takes issue with NUIs and its premises. Namely he posits that

“natural” is too subjective if derived from the user’s previously learned experiences and skill sets because these are inextricably grounded in context. Therefore, NUIs would have to proactively engage and react to users in truly complex ways across numerous contexts to operate. This, for Boykiw, is a staggering challenge and is in essence infeasible.

Interfaces would have to parse too many contexts and users. And while he sees no harm in aiming toward some type of naturalness, he remains skeptical given that what constitutes “natural” will vary too much from user to user.

Rhetoric scholar Ben McCorkle also questions the notion of “natural” when it comes to interfaces. In particular he worries that interfaces designed to capitalize on seemingly natural embodied interactions bear a “powerful lulling effect” that makes them invisible, rendering the technology seemingly natural rather than a point of critical analysis (Rhetorical Delivery 164). His argument hinges upon revitalizing the canon of


delivery because of the canon’s potential to include not only the digital production and

interfaces themselves but also the human interactions with technology. New interfaces

that incorporate the human body in particular will require revising delivery. McCorkle is

especially concerned that these types of natural interfaces, such as gestural and haptic ones,

run the risk of minimizing difference, even marginalizing it, as the term “user” comes to stand in for one type of body and bodily experience when designers presume a specific normative body and user. He states that with transparent and seemingly natural interfaces, “we

risk forgetting to ask whose body is assumed or privileged by this new technological

paradigm” (McCorkle, Rhetorical Delivery 164). He urges that critical analysis must take

place sooner rather than later, prior to technologies becoming invisible and so natural that

they slip out of our critical attention and wholly privilege one type of body.

Peter-Paul Verbeek’s work addresses some of these concerns as he focuses on the

interface and how it co-shapes humans through its design, which materially prescribes humans’ mental and physical action. Through engineering design choices, software

engineers in essence construct morality through the concept of the script (Verbeek,

“Materializing Morality”). And he sees this process as explicit. Engineers are moral

agents, and their material production shapes human action as humans submit themselves

to and rely upon the devices they use. Verbeek argues, “[t]echnology forms the tissue of

meaning within which our existence takes shape and technological mediations are a starting point for the moral subject” (Verbeek, Moralizing Technology 73). His overarching purpose, then, is to bring intentionality into the design process as engineers


critically consider both the construction of the technologies they produce and the

implications of their use because design influences human behavior.

All of these challenges to user interface design principles are important ones given that the body and technology are so fluid in their dynamics. And as the body is integral to the types of interfaces I’ve been discussing, one point becomes clear: interface

designers are not simply designing interfaces any longer. What they are really designing

are users. When the body becomes the primary means of interaction, the operating

functions are not simply confined to a screen. Rather, they are mapped onto our bodies and actions. This is a powerful position to occupy for users in some sense, but more so for those who design these innovative technologies. There is an urgent need to

continually scrutinize the interfaces we use and develop to better understand the relations

they demonstrate, to ensure that bodies are not essentialized or, worse, that some bodies

risk getting left behind or alienated when technologies do not operate based on their

different physiologies. And this will only get more complicated as bodies change with

technological intervention as discussed in the previous chapter. Interrogating the

discourse of interfaces is one way to do so, particularly as we become the thing being

designed and as ubicomp, biotechnology, and nanotechnology embed interfaces into our

environments, bodies, and actions.

McCorkle’s argument suggests a future direction that we need to explore given the shift toward NUIs. NUIs do not simply involve interactions that shape texts. Most of the rhetorical scholarship discussed above focuses on GUIs, their social and political aspects, and how they affect both communications and texts. But hardly any of this work

explores the NUI as a different type of interface, one that actually sees the body as an interface. This is an area that must be developed in much more thorough analyses. NUIs operate with the body, often the voice, and nearly always within physical spaces. Therefore, these are dynamics that can be explored much further in future research. It is not enough to discuss how the body operates as rhetoric, a field explored at length by rhetorical scholars. I have already mentioned some scholars in the posthuman chapter, for instance, who examine how the body operates. But this work needs to be connected to discussions of the implications of the body in space, linking the scholarship to technologies themselves. Space, too, has been theorized extensively by scholars, whether in discussions of physical spaces (Blair, Balthrop, and Michel), Henri Lefebvre’s exploration of spatial production, Edward Soja’s extensive spatial theory of the complexities of spatial praxes and production as both social and material, or scholars like Nedra Reynolds or

Scott Reed who investigate the spatial dynamics of virtual spaces. But these bodily and material theories must be configured with spatial theories if one is to come to a more complete vision of what interfaces like NUIs are and how they operate via body, language, informatics/code, and space simultaneously. While there are scholars who seek to explore these complexities, like Ritesh Lala and Yvonne Rogers among a few others, most of these scholars are in fields like media studies, computer science, or human-computer interaction. Rhetoric, particularly theories of posthuman rhetorics, can contribute to these discussions in more meaningful ways if scholars can weave together spatial, material, and digital rhetorics and ground them in discussions of interfaces as objects.


Posthuman Reconsiderations of the Interface and Body

Another productive means to engage interfaces in scholarship is to interrogate the concept of interface itself, particularly its relation to the body. With NUIs and ubicomp interfaces, this becomes especially important. The boundaries between interface, body, environment, and object, if they ever existed, are less and less defined as technologies proliferate. A number of posthumanists work towards more nuanced understandings of

“interface” by re-theorizing the relationship between interfaces and bodies; and others like Anthony Miccoli challenge the merits of the term and concept as a whole given the current posthuman moment.

Kim Toffoletti theorizes the body and interface in her study of mass media. In particular she seeks to redefine the body as an interface, a move that disrupts the subject/object and technology/nature binaries. This reconfiguration also “destabilizes a fixed locus of bodily identification, and the codes surrounding just what a body might be within contemporary culture” (Toffoletti 151). She argues that when bodies interact with electronic or digital media, the subject/object distinction breaks down, which transforms the ways that we understand how body and subjectivity are constructed. The body interacting with machine constitutes more than prosthesis; the body becomes “a boundary site – neither entirely natural nor cultural but a configuration that negotiates the limits of corporeal existence within an increasingly technological environment” (Toffoletti 160).

An interface acts as prosthesis, but it operates beyond a material extension or projection from the body. Rather it is “a flow of information between biological, digital and media systems” (Toffoletti 160) as a “two-way exchange” occurs; as technology extends the

body, the biological body extends the technology (Toffoletti 163). Therefore, bodies should be understood as interfaces interacting with other interfaces and objects.

Redefining “Interface”

Lastly, and most importantly, is Anthony Miccoli’s “Posthuman Topologies:

Rethinking the Interface,” which deeply informs my project and is thus worth discussing at length. Miccoli argues that the idea of “interface” remains a significant problem for posthuman theorists as it presents an impossible, topological space between human and technology. Miccoli sees posthumanism as a starting point but argues that we have to move forward, and the way to do so is to reconfigure “human” as materially substantiated across substrates via distributed cognition and by rethinking “interface” in its entirety.

Miccoli sees his reconfiguration as involving two key points. First, the traits traditionally understood as specially human, namely intentionality, volition, and logic, are materially instantiated, “distributed across topological and biological substrates” (Miccoli 45). These characteristics are always technological and never occur solely within human physiology, as they are always inherently connected to material objects and environments through structural coupling. Second, the interface operates as a human means to make sense of aggregate consciousness, operating as “a functional myth” (Miccoli 46). Thus, there is no interior and exterior; these concepts are also myths, linked by the myth of the interface so that humans can maintain some idea of autonomy. Cognition then is “physically, intrinsically linked with the physical spaces we occupy, giving way to a posthuman determinism that is neither fully biological nor fully exterior” with the interface being an “instantiated intentionality” (Miccoli 46). This view


requires dismantling the notion of an interior self and the exterior world, seeing the two

on an equal field where they work in structural coupling.

Some theorists already discussed in Chapter II make similar arguments about

coupling, but Miccoli’s claims push beyond their ideas, particularly with his point about

the interface being a human construction to make sense of the world. For instance, Andy

Clark’s extended mind thesis has promise, but it still relies on the dualism of interior/exterior. The very notion of extension, in fact, presupposes the concept of an interior from which cognition can “extend.” Mark Hansen posits that humans project

consciousness onto the exterior, which operates as media. He posits that a “structural coupling” occurs in our biological interactions with the environment, but Hansen refers to humans as biological systems that are self-sustained, closed systems (Hansen, “Media

Theory” 299). Miccoli points out that, as compelling a model as this is, the idea of

closed and open systems still implies definitive boundaries, thus still upholding an

interior/exterior binary. Further, closed system autopoiesis, or “self-maintenance of an

organized entity through its own internal processes,” means that humans process

information from the world as a series of representations based on sensory data

(“autopoiesis”, OED). Miccoli argues, then, that this reliance upon representation is

incorrect because it implies “no real informational exchange” but rather that the world is

simply something outside that we come to and that affects what happens within us, not a

mutual constitution (51). Other posthumanist thinkers, too, while illuminating the

dynamics between technology and human, still depend on what Miccoli calls “vestigial

humanist conceits” that place human cognition at the center (46). The focus on volition


is particularly troublesome to him as it “occupies some kind of ‘internal space’” as an “anchor or marker of human ontology” (Miccoli 46). In privileging the human’s ability to act (agency), to intend (intentionality), to choose (volition), and to express, the human is still the center.

Miccoli builds upon Jane Bennett’s commitment to object-oriented ontology, specifically her point to understand the “thing power” objects have (Vibrant Matter 2). In her work, she charges scholars to move away from cultural significance towards understanding the “relational effect, a function of several things operating at the same time or in conjunction with each other,” the thing-power of objects (Bennett, “The Force of Things” 354). Miccoli pushes this further, arguing that it is the cultural significance that clouds our understanding of interfaces, abstracting them. While it can be useful to deconstruct cultural signs so as to complicate notions of object, nature, culture, and other difficult-to-define concepts, ultimately this act shores up the idea that these are somehow external to ourselves. Miccoli seeks to move past the discursive and the idea of relations because relation implies a rhetorical understanding of materiality as an idea, and once “discursively rendered into ideas—represented by the mind—the discussion is subsumed into investigation of interior/exterior dualisms” (51). Rather, he seeks to understand materiality not as that “which we attach, inscribe, or embed our ideas, but as the stuff that makes possible our capacity to conceptualize them. In other words, the material phenomenon is not the always already unreachable signifier. It is, instead, a part of the mechanism of the cognitive process that makes such signification possible”

(Miccoli 50). Humans have a unique “anthropocentric topology” that allows us to locate


ourselves, to understand ourselves in the world (Miccoli 53). What actually constitutes

the human being and cognition takes place across material and biological substrates in

tandem, but this anthropocentric topology is what helps us make sense of this process.

Miccoli sees this human topology as that which provides humans with a sense of autonomy, but that autonomy, in fact “the perceived ‘autonomy’ of anything,” is a myth

(54). Humans see themselves as autonomous, self-sustaining systems despite the fact that our seeming autonomy stems from the “structural coupling” mentioned above between

our human physiology and the material, social and technological topologies. Carolyn

Miller makes a similar claim about agency in that it is not something that one possesses but rather the product of constructed “attribution” (“Opportunity” 152). Agency as the product of an agent is, as Celeste Condit notes, an “illusion” (qtd. in Miller 152). The interface, then, becomes a mitigating myth that, as Miccoli states, “is necessary to maintain the integrity and efficacy of the self in everyday interactions with the world”

(54). So, in light of understanding cognition as aggregate and autonomy as myth, “interface” must be revised if we are to escape the exterior/interior binary that impedes many epistemologies and posthuman theories.

My goal, like Miccoli’s, is to rethink both cognition and interfaces, the interface as the so-called space in-between our supposedly autonomous selves and the world. The idea of interface is a convenient one that shores up the divide between human and machine, human and technology. But interfaces as spaces in between human/human or human/object are unstable. For instance, Miccoli notes that “in moments of intense concentration, meditation, artistic expression, or even sexual ecstasy” the interface, or,


more appropriately, the breakdown of a seeming interface, becomes easier to recognize,

and our aggregate, distributed cognition is more evident in these intimate, significant

connections of human-human or human-object or human-environment/space (54). Like

Miccoli, I see the conceptual interface as a myth to explain and situate humans as

autonomous thinking beings using brainbound epistemologies rather than understanding

that cognition is aggregate and distributed, stemming from bodies, spaces, objects,

environments, and technologies simultaneously in an ecology of networked entities. One

cannot be divorced or de-situated from the other.

However, I take issue with a few of Miccoli’s assertions. First, theoretically speaking, I disagree with Miccoli’s reading of relationality. Relationality does not involve being grounded in the discursive or the interior mind.

Objects have relations; technologies have relations. While the concept of relations may be historically defined by human relations and thinking, this is another term that technology theorists, network theorists, and even some object-oriented ontologists have disrupted. Relationality implies only some type of connection between two things, whether in terms of structures, characteristics, or what have you. So, this is a term that Miccoli dismisses too quickly, one that I see as still bearing relevance if we are to understand his arguments.

Now, I also understand that technological interfaces like those described in

detail above can also be understood in different ways. Clearly a technological interface is

an object. It is a visual, material or spatial object that serves a translational function,

translating code written by experts for the non-expert user. I see these types of

technological interfaces in similar ways to how Hacker understands biotechnology, as serving a recoding function. Digital interfaces like GUIs in particular translate code into a new visual code that is easier for users to discern and implement, rather than exposing the underlying source code that carries out specific informatic tasks. Interfaces serve as layers of translation to convert human or machine abilities into action to carry out cognitive and informatic tasks. What I find interesting is that with NUIs, these translational layers align more and more with humans’ physiological and biological capabilities when they are designed to be operated via the body in space with movement and speech. With these interfaces, the layers of translation are spatial and bodily, driven by the materiality of the body. With technology like the Kinect, the interface is sensor driven, as sensors read the body’s movement; thus, the body acts as the control mechanism for the action of the underlying code’s function. The body itself is in translation; it is another layer of coding and recoding to operate programming functions as the technology and body are enmeshed. And with medical imaging technologies like the RealView, the content is coded as a visual and haptic hologram for the body to directly engage. With these types of interfaces, the space in-between disappears as the body and voice become the translational operative devices.
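To make this translational layering concrete, consider a minimal, purely illustrative sketch in Python of a sensor-driven gesture pipeline. This is not the Kinect SDK or any real system; the faked sensor feed, the threshold value, and every function name here are hypothetical stand-ins for the kind of recoding such an interface performs.

# Illustrative sketch only: a toy "translation layer" that recodes tracked
# body movement into program commands, in the spirit of a sensor-driven NUI.
# The sensor feed is faked; no actual Kinect or other SDK is involved.

from typing import Iterable, List, Tuple

SWIPE_THRESHOLD = 0.4  # hypothetical: minimum travel (in meters) to count as a gesture

def classify_gesture(wrist_path: List[Tuple[float, float]]) -> str:
    """Recode a sequence of (x, y) wrist positions into a symbolic command."""
    dx = wrist_path[-1][0] - wrist_path[0][0]
    dy = wrist_path[-1][1] - wrist_path[0][1]
    if abs(dx) >= SWIPE_THRESHOLD and abs(dx) > abs(dy):
        return "next_item" if dx > 0 else "previous_item"
    if dy <= -SWIPE_THRESHOLD:  # wrist drops: treat as a selection
        return "select"
    return "no_op"

def run_interface(sensor_frames: Iterable[List[Tuple[float, float]]]) -> None:
    """The body 'operates' the system: each tracked movement becomes a command."""
    for wrist_path in sensor_frames:
        command = classify_gesture(wrist_path)
        if command != "no_op":
            print(f"body movement -> {command}")

if __name__ == "__main__":
    # Faked sensor data: a rightward swipe, a leftward swipe, and a downward motion.
    fake_frames = [
        [(0.0, 1.0), (0.25, 1.0), (0.5, 1.0)],
        [(0.5, 1.0), (0.2, 1.0), (0.0, 1.0)],
        [(0.2, 1.2), (0.2, 0.9), (0.2, 0.6)],
    ]
    run_interface(fake_frames)

The point of the sketch is structural: the body never touches the underlying program logic directly; its movement is read, classified, and recoded into symbolic commands, which is precisely the layering of translation described above.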

With traditional GUIs the idea of the visual interface as a space-in-between was more viable because it was a visual layer in between for the user to view, process, and understand its operations to choose the correct function to engage or manipulate data. The user has to think about what they want to do, hit the appropriate icon, button, etc. to carry out that action, and with more complex functions, often the program interfaces are more


complicated. Think about the complexities of visual design programs that allow a user to

build wireframes and high-end graphics. These software programs have in-depth, quite

complicated interfaces that users must parse to carry out the actions they want the

program to perform. This is one reason Elon Musk sought to develop interfaces that would remove this step, streamlining the user’s interaction with software and making it easier to render the imagined object material without having to think about the complexities of the symbolic codes of icons and actions.

But with NUIs, the idea of interface as a mediator in-between disappears. The

body and speech operate devices in much more seamless ways with these “natural”

interfaces, so the space-in-between does not apply. Rather the body is the means of

translation; it is the operational layer that “works” the system either through bodily

movement or speech commands, removing the visual and textual spaces-in-between

we’ve become accustomed to with traditional interfaces. These layers of interaction and

translation challenge how we know and define the function of interface as they provide a

frame to enhance the way our cognition operates through body, object and spatial

dynamics. Their ease of use does not mean that we should not interrogate them as

critically as we can; in fact the opposite is true. These interfaces do not operate as spaces-

in-between because there is no space in between; rather, they are equal parts of our

cognitive ecologies that are key to understanding our aggregate minds as technological,

biological, and material. This is one reason NUIs and similar interfaces have found their

way into our imaginations and our practical lives.


CHAPTER IV

FROM SEPARATION TO AGGREGATION:

TRACING THE POSTHUMAN ARC IN SCIENCE FICTION

In WarGames, Joshua the A.I. asked, ‘Shall we play a game?’ SF prototypes are a kind of game; a thought experiment that imagines what would really happen if…What would really happen if this technology truly went wrong? What would happen if everyone on the planet had access to this? What’s the best thing that could happen? What are the legal, ethical and moral implications? What does this mean for our future? What kind of future do we want to live in?...and I can think of no better questions to try and answer. ~Brian Johnson

The introductory chapter establishes a working hypothesis: that we are moving toward ubiquitous computing, and this possibility elicits anxieties as we struggle with how to reconcile the rapid development of technology in conjunction with the immediate challenges to how we understand the world. This hypothesis is left deliberately broad in order to promote debate rather than to predict future events. Currently, pervasive mobile computing has established a culture of humans, objects, mobile virtual spaces, and environments. Sensors pervade our environments already in numerous “hidden” ways

(for simplicity’s sake, like thermostats and appliances), and engineers continuously devise ways to integrate technology into the environments around us. What exactly ubicomp will look like, entail, or how it will function and evolve is outside the scope of my knowledge and expertise.


However, such rapid, ongoing technological development does allow for theorizing this trajectory as a broader cultural phenomenon that affects rhetorical action. Therefore, the second chapter argues that we must reconsider and reconfigure “human” and how technology highlights the inadequacies of specific binaries between human/object and mind/body. The third chapter, then, outlines interfaces, how they are conceptually understood and designed, how we can further interrogate them, and how they are evolving into artifacts that show that “human” and “cognition” as concepts are not what we’ve long understood them to be. This led me to argue that the idea of

“interface” is fluid and aggregate, like posthuman cognition itself, and that interfaces are not a space in-between but an integral part of cognitive processes.

Already technology is in our hands, pockets, and environments, and it will only continue to embed itself into the material world, even into ourselves, demonstrating that our historical perceptions of binaries do not hold. These potentialities challenge our imaginations and designs and demand that we rethink exactly how we interact with and know the world given such technological proliferation and adoption, changes that occur so quickly that we cannot predictively critique and consider their impacts or even keep pace.

That said, I have a background in technology forecasting. Forecasting technological development sounds like some strange sort of technological divination; it sounds, frankly, like science fiction. In reality, it is anything but. Forecasting is pragmatic. Intel futurist Brian Johnson describes the process as concisely as possible:

“We pull together trends, global projections and technology development into this vision and then iterate it over time” (4). Forecasts continually grow and evolve, just as the future


does. Crucial to future casting, though, is to develop things, things that we can share to

generate debate about where we are and where we want to go. These artifacts are

sometimes reports, sometimes “complex data models that we can analyze and discuss, but

just as often, these prototypes are science fiction stories, movies and comics.” [emphasis added] (B. Johnson 4). So forecasting and science fiction often go hand in hand.

We interact with and engage technological interfaces, or layers of translation,

every day in numerous forms, both real and fictional. Filmmakers and writers imagine

them. Software and hardware engineers and designers continuously build new or adapt existing ones. They are a fruitful point of contact not only for human-technology-object interaction, but also for critical analysis, and science fiction is a valuable source for exploring cultural representations of interfaces as potential prototypes. Sci-fi demonstrates how we imagine future interface functions. Some simply look “cool,” highly cinematic, dazzling the viewer, but these representations impart worthwhile insights about what we hope for, what we’d like to see, how we envision their future use, design, and socio-cultural impacts.

Therefore, this chapter examines some key prototypical science fiction interface representations, charting their cultural implications and how they demonstrate posthuman challenges to “human,” “cognition,” and “interface.” Closely reading the interfaces in

Minority Report, the Iron Man trilogy, and Avengers: Age of Ultron, we can see a distinct move from a purely dystopian vision of ubicomp in Minority Report to a fictional depiction of our posthuman evolution emblematic in the Iron Man trilogy and Avengers films, though still with a cautionary tone. In essence, the films highlight the complex


social, cultural, political and embodied implications and the complexities of the operative

dynamics between human bodies, environments and interfaces. These fictive

representations chart a narrative that demonstrates a changing mindset about technologies

as we push towards a world of pervasive and ubiquitous posthuman interactions.

First, however, I will discuss why looking toward science fiction

representations of computing is advantageous for theorists.

Why Study Science Fiction Interfaces?

With so many interfaces in existence to examine, why take the time to critically

analyze fictional ones? For one, science fiction offers an imagined, playful means to

think about technology’s impacts and implications from various perspectives. Dourish

and Bell contend that ubicomp research requires thinking differently than we have

previously because it requires “a wholesale reconfiguration of the relationship between

people and their everyday lives, based on responsive environments and embedded

computation” (769). Science fiction as a genre is a productive site for conceptualizing

new and fantastical frameworks for thinking about technology. Thacker defines science

fiction as ‘‘a contemporary mode in which the techniques of extrapolation and

speculation are utilized in a narrative form, to construct near-future, far-future, or

fantastic worlds in which science, technology, and society intersect’’ (“The Science

Fiction” 156). These works of fiction are vital in science, technology, and societal

research. Critically looking at science fiction texts and sci-fi prototypes gets “researchers,

designers, scientists, engineers, professors, politicians, philosophers and just everyday

average people thinking about science in a new and creative way by using science fiction

stories that capture our imaginations” (B. Johnson 2). And science fiction offers numerous depictions of our technological imaginations as computing representations are quite common in the genre.

Further, the medium of film in the digital age welcomes this type of analysis because of the boundaries digital animation has broken. Digital animation makes the impossible possible on screen while still providing a sense of what Stephen Prince calls

“perceptual reality” (28). William Brown importantly asserts that digital cinema brings us into “the realm of the posthuman,” moving us past the “human perspective” (48). Science fiction’s extraordinary characters accomplishing fantastic, seemingly impossible enterprises in often strange worlds using cinematically engaging technologies draw viewers into a glimpse of the future, one they can imagine coming to fruition. And science fiction films, as Brian Johnson notes, “seem somehow more intertwined with science fiction and science fact” (56). The special effects and computer-generated imagery (CGI), makeup, cinematography, 3D and IMAX exhibition, as well as editing and numerous other filmic techniques are foundational for creating future visions and worlds. These techniques are all, at heart, science fiction. Starting with Méliès and continuing to today, filmmakers have developed cinematic techniques that have revolutionized the film industry and the genre as well.

Science fiction writers have long felt a strong link to the world of science, pointing out that their fiction is “not only based upon emerging science but they are in fact looking to use their fiction as a means to not only affect that science but also how that science is perceived and used in the real world” (B. Johnson 43). The genre offers


both technology users and designers a means to explore the implications of a technology

prior to building or implementing it. Examining these fictional prototypes opens up

possibilities for deeper critical analysis, and it can further influence designers. Gregory

Benford and Elisabeth Malartre insist, “Science has often followed cultural anticipation,

not led it” (8). Steven Schneider agrees, “Science fiction is the genre where fantasy and

reality coexistor collideto portray alternative visions of our planet and far-flung

worlds. Sometimes daydreams and sometimes nightmares, they invariably play out the

practical and ethical implications of new technologies” (qtd. in B. Johnson 56). The

genre, then, becomes a vehicle to imagine our futures, the benefits, potential drawbacks

and anxieties, good and bad. Science fiction, then, “gives us a language so that we can

have a conversation about the future” (B. Johnson 2). More specifically, Larson notes that

depictions of the computer in science fiction film are suited to analysis as they have been a common figure in American science fiction film throughout the second half of the …. Furthermore, science fiction film’s depictions of computer technology will reveal trends over time that should mirror trends in real-world technology. (294)

Science fiction speculates about the future, and these speculations are valuable texts for investigation. From these texts we glean insights about not only the philosophy and design of technologies, but also technologies’ social, cultural, and political implications, expressing the anxieties that technological promise naturally elicits.

Often science fiction films reflect cultural attitudes about technology’s potential or development, attitudes and anxieties similar to those we see in mass media as discussed in Chapter I. New York Times film critic A.O. Scott explains, “It has long been


axiomatic that speculative science-fiction visions of the future must reflect the anxieties

of the present: fears of technology gone awry, of repressive political authority and of the

erosion of individuality and human freedom.” Various dystopian perspectives often arise

from science fiction narratives as the subtexts seek to address underlying concerns and

anxieties about technological impact on human lives. Depicting our desires about technology as well as our fears, science fiction explores technology’s potential larger future sociocultural impact. Such dystopian views may initially seem too bleak, only inspiring dread, alarm, or outright rejection of technologies; however, these narratives, even the most dystopian ones, offer productive glimpses into the questions that vex us as we move rapidly toward more technological advances that show the posthuman connection of technology and our bodies.

Film scholars, too, have pointed to the ways that anxieties about the body and technology permeate science fiction films, and how these representations reflect sociocultural anxieties of the time periods in which they are made. Jamaluddin Aziz argues, for instance, that in classic science fiction texts of the 50s, the body is figuratively used to represent the fear of the McCarthy years when America was battling against the

“enemy within” (209). Speaking about science fiction films of the 1960s, Edward James argues that science fiction should move beyond telling fantastical narratives. Rather, he posits that science fiction

should no longer be an exploration of the possibilities for humanity and science in the future or an educational introduction to aspects of science wrapped in the sugar coating of plot and adventure. Sf should not be an exploration of a hypothetical external reality, because objective reality is […] a dubious


concept. Sf should be a means to explore our own subjective perception of the universe and our fellow beings. (James 170).

Aziz argues that science fiction has “always been interested in the body,” yet often represents the body through hybrid, fragmented characters that undercut the traditional notion of a body (211). Science fiction’s broader question then is: “what if the human body itself betrays the definition or constitution of a human being?” (Aziz 210). So, science fiction is a productive site for interrogating how society understands, frets about, and reimagines the larger perceptions of technological developments and their impacts on our bodies, and the very definition of what it means to be human in a technological age.

Further, the genre has also indelibly influenced a number of scientists and researchers. Arthur C. Clarke, a noted inventor, futurist and science fiction author writes, in “Aspects of Science Fiction,” “All of the pioneers of astronautics were inspired by

Jules Verne” (401). Science fiction does not simply influence; it shapes technological developments through the effects and impacts it has on the cultural imagination. Elon

Musk, the pioneering engineer from the second scenario in Chapter I, was clearly influenced by the Iron Man films when deciding to make the set of gestural interfaces described in the introduction. In Figure 1 below, he credits the film’s inspiration on

Twitter. Julian Bleecker also sees the interrelationships between sci-fi and science fact as a productive space for invention of real prototypes in the form of what David Kirby calls

“diegetic prototypes” (qtd. in Bleecker 63). These fictional tools are paradoxical: real, yet fake; functional, but also symbolic; and ironic but quite worthy of serious analysis.


Figure 1. Tweet between Jon Favreau and Elon Musk Shows Sci-fi’s Influence.

Sci-fi prototypes are a “conflation of design, science fact, and science fiction … an amalgamation of practices that together bends the expectations as to what each does on its own and ties them together into something new” (Bleecker 6). In fact, Bleecker advocates

“design fiction,” specifically looking at fictional interfaces to achieve innovative new approaches (Bleecker 7). Design fiction looks forward in order to figure out new types of physical and social interactions with technologies. It is in this productive space between fictional and real interfaces that designers, researchers, scientists, etc. can creatively pose queries, explorations, and provocations. Sci-fi prototypes are

“assemblages … part story, part material, part idea-articulating prop, part functional software … puzzles of a sort … complete specimens, but foreign in the sense that they represent a corner of some speculative world where things are different from how we might imagine the ‘future’ to be” (Bleecker 7). And in a world moving rapidly toward ubicomp, science fiction is key. Bleecker explains, “Ubicomp lies somewhere in the middle of the science-fact / science-fiction continuum” (63).


Genevieve Bell and Paul Dourish concur, arguing that science fiction should be employed for interface design, specifically design-oriented research “inherently directed toward the future … predicated upon envisionments of alternative futures enabled by technological progress” (770). Sci-fi provides the ideal backdrop for these types of theorizations, whether utopian or dystopian. Sci-fi visions influence our understanding of progress and science, humans and technology, and they profoundly affect ubicomp and its discursive praxes.

To offer another example, nanotechnology as a field, like that deployed by Tony Stark in Iron Man 3, has been compared to science fiction. Nanotechnology researchers who write about their goals, projects, and futures often have their writing aligned with science fiction, given its tendency “to speculate on the far future and to prognosticate its role in the radical metamorphosis of human life” (Milburn 265). Some negatively categorize nanotechnology as not “real science” but rather science fiction (Jones, David

835-7). Gary Stix also calls nanotechnology “a subgenre of science fiction” along the lines of Jules Verne and H.G. Wells (37). However, rather than seeing this parallel with science fiction as detrimental, Colin Milburn argues that “science fiction assumes an element of transgression from scientific thought that in itself brings about the transformation of the world” (266).

Similarly to Milburn, Bell and Dourish, and Shedroff and Noessel, I see the connections between science fiction and science, particularly science fiction’s influence on the development of technologies, as a point of productive analysis. Bell and Dourish’s work directly influences this project as I follow their


insistence upon looking at sci-fi interfaces and ubiquitous computing in cooperation with

each other, seeing science fiction as a parallel. In particular, their methodology seeks to

examine themes, tropes, and other discourse that manifest in both ubicomp research and

sci-fi. Rather than use sci-fi as a litmus test to judge the success or failure of real interfaces,

they hope to explore how sci-fi engages important social and cultural questions about the contexts and uses of technology. Such questions help to illuminate some of the assumptions that permeate technological research and design.

Science fiction opens up our understanding of both technology and culture. Rather than seeing the genre as a fantasy world separate from reality, designers in particular can examine the social stakes and ideologies of material interfaces via science fiction.

Science fiction then is “a deliberate, overt way of re-investing culture into the process of making things, particularly the kinds of things one finds in a networked world” (Bleecker

79). This allows designers, as Fredric Jameson says about science fiction, to “defamiliarize and restructure our experience of our own present,” making it possible to examine the material and social worlds in a productive way and offering the potential to explore new forms and materializations instead of habituated experiences, forms, expectations, etc. (286). Using science fiction representations, design fiction does not

assume that technological developments in science fiction predetermine how interfaces

materialize, but that these texts offer new possibilities, new potentialities. And the point of these reflexive speculations and extrapolations is to better understand the dynamics of social and cultural forces alongside the technological interfaces we develop, something


that engineers might not consider in their goal toward manufacturing working devices

and interfaces.

Another influential text that informs my research is Nathan Shedroff and Chris

Noessel’s Make it So, which explicitly takes an in-depth look at television and film user interfaces. The authors examine fictional interfaces “using real-world criteria for interfaces that aren’t in the real world” to uncover both errors and inspiration (Shedroff and Noessel vii). So they, too, advocate “design fiction,” which Bruce Sterling defines as

the deliberate use of diegetic prototypes to suspend disbelief about change. There’s a lot of “diegetic prototyping” going on now, and that situation has come to exist, primarily, because of interface design. It is a consequence of interfaces built for the consumption and creation of what used to be called “text” and “film.” (qtd. in Shedroff and Noessel xx)

The fictional interface operates as a primary way for the audience to conceptualize how characters interact with and use the speculative technologies created by the filmmakers. With the rapid pace of technological development, sci-fi filmmakers have to push boundaries to design filmic interfaces that will still wow audiences with visually exciting functions and features. Designers Shedroff and Noessel assert that viewers enjoy seeing (though prudently) “fresh ideas about potential technologies unbound by real-world constraints writ large” (2). Audiences see fantastical interfaces onscreen and judge their “vraisemblance,” the verisimilitude of these fictional interfaces, which in turn challenges sci-fi interface creators to push innovation even further to continually captivate and visually appeal to users with real-world applications of the fantastic ones seen onscreen. This same

reciprocal dynamic occurs for interface designers; as audiences become more tech-savvy

and interested in sci-fi innovations, the sci-fi representations challenge designers to create newer, forward-thinking, real-world interfaces.

Comparing Filmic Interfaces and Implications

For anecdotal, representative evidence, I have chosen a set of films: Minority

Report, The Iron Man trilogy, and Avengers: Age of Ultron. Steven Spielberg’s Minority

Report was released in 2002; Jon Favreau’s Iron Man and Iron Man 2 debuted six to eight years later in 2008 and 2010; Shane Black’s Iron Man 3 was released in 2013, eleven years after Spielberg’s film; and Joss Whedon’s Avengers: Age of Ultron debuted in 2015. Each film has radically different cultural implications for technology as a whole.

The films when viewed collectively chart an evolution in how we can culturally understand technology in a larger posthuman framework. They present an arc of ubicomp that demonstrates earlier points about humans, interfaces, and cognition. Spielberg’s world of ubicomp is a dystopian noir at heart where technology controls all. The Iron

Man trilogy, however, gives the viewer Tony Stark’s take on technology. In the trilogy, he starts as a makeshift cyborg with a smart artificial intelligence system, a highly advanced companion serving functions similar to those of a smart home. But by the second film,

Stark and his technology have significantly evolved as JARVIS and the suit shape Stark’s subjectivity and cognition, demonstrating how technology can extend cognitive processes.

The final Iron Man film shows Stark fully integrated with his technology, both his highly evolved, artificially intelligent OS and his suit, to the point where the three are inextricable.

Lastly, Avengers: Age of Ultron pushes Stark’s technologies to their inevitable conclusion: a truly sentient artificial intelligence in the form of Ultron and Vision. The films offer an important cultural narrative that substantiates some of the theorized directions of posthuman thought.

Minority Report – Dystopian Ubicomp

Steven Spielberg’s 2002 film extends the short story by Philip K. Dick and is set in Washington D.C. in 2054. Tom Cruise plays John Anderton, chief of the Department of Pre-Crime, a criminal prevention system that has halted all murders since 2048. The

Pre-Crime system Anderton leads relies upon three “pre-cogs,” precognitive human beings (one female, two males) whose heavily medicated, sedated, white bodysuit-clad bodies float in an amniotic floatation tank in a sterile space called “The Temple.” Each wears headgear, a brain-computer interface (BCI) with illuminated pale bluish-green lights, and together they generate images of future, premeditated murders in a collage of concurring images streaming directly from their brains (See

Figure 2.). These images serve as warnings to the police, who then use detectives like

Anderton to “scrub” or interpret the images to locate the site of the predestined murder and arrest the suspect to prevent the murder from taking place. The Pre-Crime system’s continued success has led to its upcoming nationalization.

Figure 2. The Precogs in the “Temple” with BCIs Linking their Cognition.

At the helm of this program lies one of the most canonical fictive interfaces of

science fiction, a gestural interface. To make the film and conjure the on-screen

interfaces, Spielberg and fellow producers “convened a three-day conference about what

life will be like in the year 2054,” getting insights from writers, engineers and

technologists including members of the MIT Media Lab, known for ubicomp and HCI

research (Clarke, Darren).14 The result is one of the most famous interfaces in science

fiction cinema: the “precog scrubber.” To use the interface, Anderton stands in front of a

transparent curved wall with floating windows of video and data. He uses his body,

specifically his arms and hands to gesture and “scrub” the images coming directly from

the pre-cognitive humans in “the Temple” (Spielberg, Minority Report). John

Underkoffler, one of the lead conceptual consultants and designers for the interface,

explains, “Steven’s brief was that he wanted the interface of that computer to be like conducting an orchestra. Armed with that brief, I went off and devised this whole kind of

sign language for interacting with this computer, for controlling the flow of all this

information” [emphasis added] (qtd. in Rothkerch).

Anderton dons black gloves with illuminated fingertips, raises his arms and with

dramatic, orchestral movements of his body, then flits through floating images of the

predicted crime streaming from the precogs' minds onto the screen in front of him. He

rewinds this data feed frame by frame with a turn of his hand, wipes files away that he

does not need, zooms in and out all with his arms (See Figure 3. below). He manipulates

the images, turning and rotating them to examine them in detail, parsing out the clues he needs to find the location of the murder about to take place.


Here we can see Anderton's cognition at work as he searches through data via both the interfaces and the body. Because no material crime scene yet exists (the images are future projections), Anderton must utilize the data from the precogs to parse what will happen.

Figure 3. The Pre-Crime Scrubber Interface.

Without the couplings of the precogs' collectively shared and informed visions, the interfaces' (both the BCIs' and the scrubber's) translations of these visions into visually encoded data, and the body as the controlling mechanism, Anderton could not perform the act of detection so familiar in what is essentially a crime drama. The film then presents a posthuman amalgam, though a rather clunky one, of cognitive processes that could not take place without the integral operating dynamics of human bodies (brain, arms, hands, and gestures/movement); informatic coding, encoding, and recoding; and spaces. The precogs' collective consciousness generates visual data in snippets; the BCIs translate this data into visual holograms; and the scrubber recodes and projects the images for Anderton to navigate via his body. Anderton, using his body (eyes, hands, arms, bodily movement, and brain), searches for visual clues to target the location of the


seemingly inevitable crime. These scenes then show a network of cognitive processes

taking place via technological and material substrates.
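
To make this division of cognitive labor concrete, the chain just described can be rendered as a short data pipeline. The following Python sketch is purely illustrative: the stage names, data, and functions are hypothetical inventions of mine, not anything specified in the film or in any real system. The point it models is simply that each stage transforms, rather than merely transmits, the visions before the body performs its interpretive work.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float   # position in the projected sequence of future events
    image: str         # one visual snippet from the precogs' shared vision

def bci_encode(raw_imagery):
    # BCI stage: translate fragmentary neural imagery into time-stamped visual data.
    return [Frame(timestamp=float(i), image=img) for i, img in enumerate(raw_imagery)]

def scrubber_project(frames):
    # Scrubber stage: recode the frames into an indexed, navigable display.
    return {frame.timestamp: frame.image for frame in frames}

def embodied_scrub(display, gesture_positions):
    # Bodily stage: the user's gestures select which frames to examine.
    return [display[t] for t in gesture_positions if t in display]

# Anderton "rewinds" to particular frames with a turn of the hand.
clues = embodied_scrub(
    scrubber_project(bci_encode(["drowning", "rooftop", "glasses", "carousel"])),
    [2.0, 0.0],
)
print(clues)  # ['glasses', 'drowning']

Each function stands in for one substrate (precog consciousness, BCI, scrubber, body); remove any stage and the chain of detection breaks, which is precisely the aggregate character of the cognition described above.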

In terms of the precog scrubber’s actual function, its interactivity is indicative of

nearly all seven traits common across science fiction gestural interfaces that form the

beginning of a “robust language” for gesture in sci-fi (Shedroff and Noessel 101). These

include “wave to activate,” “push to move,” “turn to rotate,” “swipe to dismiss,” “point or

touch to select,” “extend the hand to shoot,” and “pinch and spread to scale” (Shedroff

and Noessel 98-101). For instance, when Anderton raises his arms, gloves on, this action

activates the interface. Gestural interfaces typically use some type of activating

movement like a wave to turn an interface on and off. Maneuvering the fingers, palms,

hands and arms to push and manipulate objects, moving them around as if the objects

have resistance and rigidity like Anderton does is also common. To turn or rotate objects,

hands or fingertips push the sides of the object to turn it on different axes. Throwing

objects away or disregarding them typically involves a swipe of the hand away from the body while the eyes look in another direction. Pointing or touching the fingertip to an object like a volumetric display usually selects it. To zoom in or make something bigger, across sci-fi films, the user selects the opposite edges of an object and pulls them apart; pinching the fingers together makes the object smaller. These seven gestures are bodied

and informatic translations of physical interactions. This set of gestures (along with

extending the hand to shoot) demonstrates some commonalities across science fiction

films with fictive gestural interfaces (Shedroff and Noessel 98-101). However, most of

this robust language is familiar to nearly anyone who has used touch screen technology.
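
As a rough illustration of how compact this "robust language" is, the vocabulary reduces to a small mapping from recognized movements to interface actions. In the Python sketch below, the gesture-action pairs paraphrase Shedroff and Noessel's taxonomy; the dispatch mechanism itself is a hypothetical illustration of mine, not a description of any real system.

# The seven recurring sci-fi gestures, mapped to the actions they trigger.
GESTURE_ACTIONS = {
    "wave": "activate",              # wave to activate
    "push": "move",                  # push to move
    "turn": "rotate",                # turn to rotate
    "swipe": "dismiss",              # swipe to dismiss
    "point_or_touch": "select",      # point or touch to select
    "extend_hand": "shoot",          # extend the hand to shoot
    "pinch_or_spread": "scale",      # pinch and spread to scale
}

def dispatch(gesture):
    # Ideally, movements outside the vocabulary map to no action at all.
    return GESTURE_ACTIONS.get(gesture, "no-op")

print(dispatch("swipe"))   # dismiss
print(dispatch("sneeze"))  # no-op (ideally; see the Witwer scene below)

That the entire vocabulary fits in a seven-entry table underscores how small, and how learnable, this gestural "language" currently is.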


The promise behind gestural interfaces, in theory, is that they are easier to learn and take less cognitive work. In fact, Anderton's movements look familiar enough in some ways, like overly exaggerated movements a user would make on a large, floating, virtual iPad interface.

What is of interest about the pre-crime scrubber is the role of the body and intent.

The body controls the interface, but the body intends to do so only at certain times.

Specific movements may be executed without the intent to elicit the system's response, such as a sneeze, yet affect the system regardless. Minority Report shows the viewer one key,

illuminating instance with the scrubber. In the scene, Agent Danny Witwer comes into the scrubbing room as Anderton works, moving and manipulating objects via his body, which controls the interface. Witwer

extends his hand to Anderton, a gesture of social courtesy. In response to the social cue,

Anderton moves to return Witwer’s gesture, extending his hand. The files he is actively

scrubbing follow the movement of his arm although he does not intend them to do so.

Anderton must then correct this unintentional response from the system, focusing his

attention back on the interface and away from Witwer (Figure 4.). This inclusion in the

film is telling because it shows the audience a problem with the interface. This system

does not distinguish between bodily movements the user intends as commands and bodily gestures that should have no meaning for the system. Even in this fantastic world teeming with technology, this interface still has a functional hiccup.

Figure 4. Anderton "Corrects" the Interface.
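
One common remedy in real-world gestural systems is an explicit engagement gate, sometimes called a clutch, so that movements count as commands only while the user has deliberately engaged the interface. The Python sketch below is a hypothetical fix of my own devising, not a description of how the film's scrubber works; the scrubber's lack of such a gate is exactly what the Witwer scene dramatizes.

class GesturalInterface:
    """A hypothetical scrubber-like interface with an explicit engagement gate."""

    def __init__(self):
        self.engaged = False

    def engage(self):
        # An explicit activation gesture, e.g., raising the gloved hands.
        self.engaged = True

    def disengage(self):
        # Lowering the hands before a social gesture keeps it out of the system.
        self.engaged = False

    def handle(self, movement):
        if not self.engaged:
            return "ignored"   # handshakes and sneezes no longer move files
        return "applied: " + movement

ui = GesturalInterface()
ui.engage()
print(ui.handle("swipe to dismiss"))       # applied: swipe to dismiss
ui.disengage()
print(ui.handle("extend hand to Witwer"))  # ignored

Had the scrubber required such deliberate engagement, Anderton's reciprocal handshake would simply have been ignored.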

When we look at the material reality of the interface, specifically the movements

Cruise must execute to "use" the interface, the stress on the body seems too taxing. Most people would not want to stand up and continuously raise and extend their arms for an extended period of time. Shedroff and Noessel point out that Cruise had to take frequent breaks after holding his arms above his heart for extended periods, though the film does not show these periods of rest. Further, some interface designers warn that this type of interface is untenable for long durations and can have adverse effects on the body (Wachs et al.; Pogue).

Despite these issues, the Minority Report pre-crime scrubber is one of the most referenced and memorable gestural interfaces in science fiction cinema. It is

"synonymous with 'gestural interface'" (Shedroff and Noessel 95). As a canonical gestural interface, it persists despite its functional drawbacks. And what is interesting is the notion of a bodily choreography for "controlling the flow of all this information," as Underkoffler describes in the briefing about the proposed interface

(qtd. in Rothkerch). Controlling data flow by simply moving the body in ways that feel or seem "natural" is the ultimate goal of the interface, a goal Underkoffler had worked on prior to the film and would later go on to demo via a spatial and gestural interface in 2010 (Reeves 1577). Virtual reality researcher and engineer Jaron Lanier also consulted for the film and interface, specifically contributing the idea of glove-linked


hardware; he would later work at Microsoft Research, contributing to the Kinect gesture-recognition system for the Xbox 360.

The pre-crime scrubber stands out in a film teeming with technology precisely

because it is cinematic and fully integrates the human body with the interface. The body

is the controller for this technology; the viewer sees no alternative way to engage data other than bodily movement and gesture. Without the body, the technology would not work, and the way the user engages with data is artistic, elegant, and captivating to

watch. Not only do the interfaces in this film engage viewers, but many of the fictional

technologies serve as ideal examples to analyze ubiquitous technologies. The

precognitives’ visions are scanned with optical tomography, so the audience (and

Anderton) can see what they see (Wright 483). Anderton watches a holographic display

showing projected home movies/memories of his family. Patrons in a virtual reality club

interact with holographs to safely experience and play out various fantasies. The film

envisions retinal scanners as a primary means to access buildings and as tracking devices

on public transportation. Eye scans are so common that ads in stores and on billboards

use them to personalize advertising content, based on one’s past purchases and other

metadata. Advertising is active, visceral, calling customers by name; rather than consumers looking at it, it "looks," reads the body as data, and responds. Robotic spiders prowl buildings, scanning potential criminal suspects' retinas. The film's premise hinges upon the moral and ethical authority of committing pre-crime murder suspects into a controlled suspended animation despite the fact that they have committed no actual crime.

When a pre-crime suspect is caught, s/he must wear a brain interface designed specifically to remove the wearer's agency and thought processes, as a digital feed to the brain holds the wearer captive in "a dreamscape panopticon" (Bond 29) (See Figure 5.).

Figure 5. Wearable Brain Interface that Imprisons the Wearer.

The prisoner is further confined with other precriminals in a glass pod, bound to watch his or her crimes perpetually in a private prison.

In Spielberg's Minority Report the boundaries between technology, bodies, and space initially seem fairly integrated, the lines between them blurred in a posthuman framework. But upon closer examination, the body in this film is, initially and for Anderton, a vehicle of control and of embodied, distributed cognition, as he controls the key user interface that drives the narrative forward using collective intelligence, data, software, body and space. However, outside that space the body is quite separate from technology. The whole body, in fact, is continuous fodder for the interfaces that dominate society. John Anderton's body controls the pre-crime scrubber interface completely.

Without the control his body has over the interface, the precognitive visions would not be sorted nor the crime prevented. So, in “the Temple” and the scrub room the body is

integral; the body is a site of authoritative, bureaucratic material control and power.

Outside this space, however, the body functions entirely differently (Minority Report).

The body, particularly the eyes, is a continuous human data stream. D.C.'s populace is continuously under law enforcement surveillance, and interfaces are the essential tools of capitalism and control. While the body in the Temple is in control, exhibiting the crucial agency to parse information and enact change, outside those walls it is constantly scanned, read, and processed by interfaces that seek to consume data and markets, trace and control human movement and activity, even contain and constrict humans in forcible ways. In fact, to escape police surveillance, Anderton at one point has to physically alter his body; his eyes, being the key marker of his identity for most of the interfaces, have to be removed and replaced so as not to be scanned, tracked and assessed at any given moment.

Wright notes that technology in the film shows a specific “explorative scenario” of a future that assumes “technology will continue to be developed and deployed in advanced ways, but not everyone will benefit from it” (482). Even the protagonist of the film who begins the narrative as the person wielding integral power over D.C.’s citizenry is not immune. He is as susceptible to control and constriction as anyone else.

Technology is the ultimate controlling mechanism: it tracks, scans, captures, and restricts human bodies, action, and behavior. Spielberg's world is one of vigilant technological surveillance where subjectivity is "a process of reverse objectification," where technologies visually read bodies as data, whether via eye-scanning robotic spiders, ads, videophones, or more (Bond 29).


The interesting takeaway from Minority Report as a film is the idea of the body as

crux. The body is crucial to operate the interface that drives the entire Pre-crime program,

which theoretically keeps the city’s citizens safe. The body is the vehicle through which

this takes place. But, paradoxically and symbiotically, the material body is perpetually subject

to invasive and controlling interfaces. The film sets up a future scenario wherein the

human body is central yet in problematic ways. The boundaries between body, interface,

space, and power are bound together not only for function but also control, and these

dynamics demonstrate a truly dystopian view of the relationships between technologies

and bodies. The body is available, present, ready to be scanned, tracked, controlled, and constrained, and only those in power, in this case, government and law enforcement, ultimately wield that control. Even the precognitives who are the key to the entire program are constrained; their bodies and cognitive visions are the machine that keep the program running, yet they are drugged into submission to their roles as prognosticators.

Those not integral to the inner workings of the system are subject to a constant bombardment of technological interfaces, some quite invasive. Even those in control of

the technology are subject to it as evidenced by Anderton’s need to alter his physical form to escape law enforcement once he is designated a potential murder suspect.

In some sense this is a posthuman view of the future as the bodies, interfaces, spaces, and objects seem to blur, but the reality of the film undercuts a seamless posthuman vision of aggregation because the technology is the thing in control.

Technology is a tool, an extension of authoritative power, not on a level playing field with the humans it potentially suppresses. Spielberg's vision is ultimately a dystopian


warning, a harbinger of technological oppression, and this is cemented at the end of the film as the story reverts to a peaceful, almost pretechnological period where humans can act and live without surveillance.

Iron Man – The Beginning of a Cyborg Subjectivity

Compare Spielberg's technological scenarios to another set of science fiction films: the Iron Man trilogy, which provides a demonstrative contrast to the visions

presented in Minority Report. I have chosen these films specifically because they show

the development and evolution of the protagonist Tony Stark’s gestural interface, an

interface similar to the pre-crime scrubber, but much more advanced, user-friendly, and

personalized. More important, though, are the Iron Man suit and the progression of the

dynamics between Stark and the technological suit itself. As the filmic narratives unfold,

the relationship Stark has with the suit grows more complicated and integrated, ultimately becoming an ideal example of posthuman subjectivity.

In the first film, Iron Man, Tony Stark, a billionaire, genius inventor and engineer,

defense contractor, and philanderer, finds himself injured in a war zone, having taken a

piece of shrapnel to his chest, and the shrapnel barb slowly, fatally creeps towards his

heart. Trapped and captive in a cave, a fellow prisoner Yinsen performs emergency

surgery on Stark, hastily fashioning an electromagnet and a car battery to prevent the

shards of metal from piercing Stark's heart. When Stark awakens as a prisoner, he is

naturally distraught to see the makeshift contraption that concurrently saves his life yet

restricts his bodily movements, as he must lug a car battery around. Using equipment

from Stark Industries weaponry stored by his captors, Stark fabricates an arc reactor, a


permanent fixture in his chest that replaces Yinsen’s effective, yet bulky emergency fix.

Upon his escape and return to the U.S., Stark revamps the arc reactor that is saving his

life, effectively biotechnologically reengineering his own heart, and he secretly redesigns

the initial, rather primitive Iron Man suit he used to escape captivity. The new suit is

“gallium-arsenide enhanced, bacterium-tiled, self-collapsing, and nuclear powered, built

out of small tiles that accordion into place,” and it is powered by his new, robust

technology installed directly in his body, his palladium reactor that simultaneously

sustains his physiology and his suit (Meadows 93). Stark is, as Mark Meadows notes, “an

upgraded human, and his technology is far more than skin-deep” (93). Without this

technological immersion, Stark would die.

Of further significance for this project is the introduction of Stark’s artificial

intelligence (AI) computer system JARVIS, an acronym for “Just. A. Rather. Very.

Intelligent. System.” (David 32). In the first film, JARVIS controls Stark’s mansion in

Malibu, managing and operating every aspect of the home, from room temperatures to analyzing and monitoring the performance of the protagonist's sports car engines. The

system uses a quite advanced interface that includes voice input and holographic

peripherals. JARVIS transmits data and communicates with Stark through speech,

holograms, and more conventional window interfaces and screens. JARVIS also controls

Stark’s robotic appliances, most importantly Stark’s armory that lies beneath the floor of his garage. Further, as an operating system (OS), JARVIS aids Stark in inventing and engineering his revamped Iron Man suit. JARVIS is the suit’s integral OS and is downloaded into the second-generation suit so as to manage all of the complex


subsystems for Stark. This includes monitoring the suit’s power, weaponry functions, and

other hardware concerns simultaneously with Stark’s physiological functions like his

heart rate, pulse, blood pressure and more. JARVIS thus controls the complex

computational requirements needed to interface Stark and the various subsystems in

addition to providing the necessary life support monitoring and control for Stark’s

material body.

Looking at the first film, the viewer can see some of JARVIS’ advanced

interfaces that Stark uses to design his second-generation suit. While these interfaces are

gestural, they are much more advanced than those in Minority Report. Stark is hands-on

with the interfaces, utilizing them to test the functions and fit of the suit’s various parts.

For instance, when he initially designs the suit, Stark looks at three monitors that look

quite like what most users would use, aside from the fact that he uses a stylus instead of a

keyboard or mouse, and he never touches the screen. However, Stark then drags his

wireframe model of the suit prototype to a hologram table where he can touch and

interact with the holographic wire frame data. Unlike the volumetric projections onscreen

in Minority Report, Stark’s data takes the form of “massless, moving 3D images that are

projected into space, which anyone can see with their own eyes from any direction

without the aid of special viewing devices, such as glasses” (Shedroff and Noessel 77).

The system offers Stark "direct manipulation," or the ability to interact "directly with the thing being controlled – that is, with no intermediary input device or screen controls"

(Shedroff and Noessel 102). In the first film, when Stark interacts with the interface,

with the object itself being controlled, he does so without a screen control or input

device; rather, he simply uses his voice and body, and occasionally a stylus. Once Stark's

3D file is generated, Stark puts the stylus into his pocket since he can now use his hands to directly manipulate the projection. Stark looks at the holograph, touches it, manipulates it directly with his hands. He spins the wireframe image of the rudimentary armor suit, removes its crude holographic metal shell, crumples it in his hands and tosses it in a holographic trashcan that appears precisely when he needs it.

Using his fingertips and the stylus in combination, Stark builds a schematic for the second-generation Iron Man suit. He works the wireframe with his hands, tweaking the design. He constructs a wireframe arm piece that is fully interactive and responsive to his touch. Stark places his arm into the suit's arm piece to check its fit and maneuverability as though the projection is real, testing out the technology prior to building his material prototype (see Figure 6).

Figure 6. Tony Stark Physically Touches and Tests Designs via Hologram.

The entire time

Stark shapes and reshapes the image, he speaks directly to JARVIS as he would to another person. Only JARVIS knows about Stark’s secret project as Stark confides in the system to keep the new data on “my private server” (Iron Man). Stark does not have to


state a list of commands to JARVIS to get the system to work for him; rather, he simply

speaks as though he is speaking to a human.

Iron Man 2 – Stark’s Extended Cognition

In the first film, JARVIS serves as a highly advanced OS with powerful interfaces

that facilitate Stark’s engineering designs and offer him direct interaction with his

prototypes prior to materializing them, but by the second film, Iron Man 2, JARVIS is much more advanced; time has passed, and the system has become even more responsive to Stark. For instance, in one scene, Tony Stark enters his pitch-black laboratory. He simply claps his hands, snaps his fingers twice, and expands his arms outward as he sits at his desk in the darkness, saying, "Wake up. Daddy's home" (Iron

Man 2), and the viewer sees the system power on with his voice and bodily command. Bluish white lights gradually turn on, illuminating the darkness, and the Clash song

“Should I Stay or Should I Go?” plays for Stark. The system immediately responds to his body and voice commands, even offering up a musical preference for this working period.

Another scene shows a humorous take on how Stark uses and interacts with his interfaces. Stark works in his lab as his angry CEO Pepper Potts enters to discuss his recent donation of his art collection. Seeing her anger, Stark rises from his chair, snaps his fingers and walks through the lab past numerous holographic images floating all around. These holograms are projected from the floor, showing models of his suits of armor and other schematics. As he walks away from Pepper, avoiding the conversation, he grabs a holograph with his left hand and crumples it into a ball. He then turns around to face


Pepper, still walking backwards, and tosses the crumpled holograph toward the far wall. As the holograph approaches the wall, a target-like design appears. The holograph hits the target, a scoreboard on the left goes from 17 to 18 hits, and the word "SCORE" appears. Continuing to walk through the lab, away from the pursuing Potts, he casually pushes the holograms out of his way with his fingertips. He points both fingers at another hologram, snaps his fingers and gestures as though he is pulling something toward his body. The hologram collapses in his hands, and like the other, he tosses it toward the virtual scoreboard on the far lab wall.

In yet another scene, Stark, again in his lab, walks towards his computer station, claps and intertwines his fingers to stretch them, pressing his palms out and away like a conductor prepping for orchestration. He simultaneously speaks to JARVIS, stating,

"Periodic table" (Iron Man 2). He then points to where he wants the data to appear in the room and says, "Right here" (Iron Man 2). He asks JARVIS, "where did our saga last lead us?" JARVIS responds that they have conducted four hundred eighty-five simulations in an attempt to find a new element to replace the palladium powering the arc reactor in his chest, which sustains his life but is simultaneously poisoning him (Iron Man 2).

Figure 7. Stark Places his Design into a Holographic Chest Cavity.

Stark pulls out


Cerium and Dysprosium from the periodic table holograph with his hands, playfully and

easily grabbing each molecule in one hand, making a circular motion which combines the

two into a new holographic projection of a molecule. Stark holds the molecule, casually tossing it up like a baseball and catching it in his right hand, mumbling, "One of these has got

to work” (Iron Man 2). He then points his left finger, making a motion and saying “empty

shell…come here” (Iron Man 2). A green, red, and white glowing wireframe model of

the Iron Man suit appears. Stark places the glowing yellow wireframe molecule, which

resizes with the turn of his wrist, into the wireframe’s chest cavity, just like his own (see

Figure 7. above). Once in place, Stark snaps his fingers and says, “Initiate” (Iron Man 2).

The hologram turns yellow and red, forming a prototype of Tony’s pulmonary and

cardiovascular systems, so Stark can test the element in the suit without having to risk his

body further by testing the elements on himself. The hologram now shows the outline of the suit as well as Stark's physiological systems integrated, merged seamlessly into one just as they are when he actually dons the suit.

After numerous attempts to combine elements to find an alternative for his arc reactor, in a key scene a frustrated yet determined Stark stands behind a table with a scale model of his father’s 1974 Stark Expo. He asks, “JARVIS, could you kindly Vac-U-Form a digital wire frame? I need a manipulatable projection” (Iron Man 2). JARVIS creates a digital copy of the model, scanning it with bluish laser sensors. Stark then grabs the 3D digital model with his hands, lifts it from the table, and pushes it into an open space of his lab as it hovers in front of him. With a snap of the fingers on his right hand and a quick movement of his hand and forearm, the model spins in front of him. Rather than looking


at the model as a horizontal digital version of the material model that he lifted from the

table, Stark’s gesture rotates the projection so he examines it from a bird’s eye point of

view, seeing the physical patterns created by the architectural design built by his father.

His gesture effectively creates its own virtual gravity and momentum. Examining this upturned model, Stark sees similarities between the design and the structure of an atom. Inspired, he points to what would be the nucleus and tells JARVIS to "highlight the unisphere" (Iron Man 2). The center sphere glows a soft yellow; then Stark draws a circle with his finger, claps his hands quickly together, and expands them to pull out a yellow wireframe model of the sphere (See Figure 8.).

Figure 8. Stark Grasps the Holographic Nucleus.

Examining the now soft, blue and yellow glowing sphere model of one architectural point on his father's Expo design,

Stark waves his hands, flicking his fingers, simultaneously telling JARVIS what aspects of the model to delete:

TONY STARK: Lose the footpaths. Get rid of them.

JARVIS: What is it you're trying to achieve, sir?

TONY STARK: I'm discovering.... Correction. I'm rediscovering a new element, I believe. Lose the landscaping, the shrubbery, the trees.


Parking lots, exits, entrances. Structure the protons and the neutrons using the pavilions as a framework. (Iron Man 2).

With Stark’s voice and the gestural expansion of his hands, JARVIS strips away the unnecessary parts of the model and restructures it on demand as Stark waits. Stark sits down, and once JARVIS has finished restructuring the sphere as a wireframe molecule, he quickly expands his arms all the way up and the wireframe expands to the size of the room. It surrounds

Stark as he sits in the chair.

He literally sits inside the molecule, examining it three-dimensionally from within as its blue points float all around him (See Figure 9.). Spinning around in his office chair, Stark examines the model, carefully studying its structure, which allows him to "rediscover" the element his father buried in the architectural design of the Expo, an element that will replace the radioactive palladium he currently uses in the life-sustaining arc reactor in his chest.

Figure 9. Stark Expands His Arms and the Hologram, Rendering a 360º View.

Seeing the element, he smiles and laughs, then raises his hands, claps them together, and the volumetric projection collapses in his hands like a ball. Stark holds the glowing yellow and blue ball in his hand as though it is a real, tangible object. Without the ability to move and manipulate the projections in ways that are impossible in reality, without the


affordances that JARVIS provides for Stark to physically alter and test the elements he

works with, he would not have made his all too important rediscovery.

The viewer can see how Stark’s technology has evolved, become more user-

friendly and personalized, and how it serves his multiple needs and applications much

more diversely. JARVIS’ newer modes of interaction are even more playful and

responsive to his individual movements and preferences, down to the musical choices.

For instance, the holographic table from the first film is extended from the tabletop to the

floor by the second film. The projected holograms are no longer limited to the small table

space, but extend to use the entire space of the garage. Further, JARVIS’ interfaces feel

more personal in this film, more playful, reflecting Stark’s personality, and the systems

respond much more seamlessly to seemingly "natural," spontaneous gestures that he

innately completes.

Stark’s brilliance is obvious in the films so far as he invents and builds numerous

technologies that, like his arc reactor-powered reengineered heart, not only save his life

but also, like his exoskeleton suit, grant him powers beyond human physical capacities.

But what is key in the progression of the films so far is JARVIS’ evolution. JARVIS’

capabilities demonstrate a decided shift from the first to the second film. In the first,

JARVIS is indeed a very intelligent system that quite intuitively speaks to Stark and offers the affordances for the man to invent and develop his technologies. JARVIS also performs several functions that we could see in a high-tech smart home, though in a much more advanced way than in versions currently available. However, by the second film

JARVIS performs different functions. As an OS, JARVIS represents

more than a high-tech intelligent set of interfaces that are simply useful for Stark. Rather,

JARVIS' capabilities directly inform and extend Stark's cognitive abilities. Without his (I use the male possessive here because JARVIS speaks with a male British accent) functions, Stark would not be able to see technological potentialities for his inventions, like the new element that will replace the one poisoning his body. JARVIS becomes an extension of Stark's ability to cognitively carry out the tasks he needs to in order to move forward with his discoveries. Stark's cognitive processes are not limited to what his physiological brain can do; rather, he can extend these processes by coupling with

JARVIS' AI capacities to collectively enhance Stark's human cognition. Stark, then, is no longer limited by what his human perceptions, sensory systems, and physiology can carry out, but can use his body, space, and JARVIS as cognitive tools to extend his thought processes far beyond brain-bound thinking, which mirrors what the posthumanist scholars positing extended mind theories purport. Here in Iron Man 2, the viewer can see what this type of human thinking might actually look like, what form it can potentially take.

Iron Man 3 – Beyond Extension to Aggregated Cognition

We can see similar cognitive coupling in Iron Man 3, and Stark’s relationship with JARVIS has grown even more intimate. For instance, in a comical sight gag, the viewer can see the importance that JARVIS plays for Stark and his reality. Stark does not simply view JARVIS as a computer, an advanced user interface helper; rather, he is a

“real” companion as evidenced by the fact that the computer system has his own

Christmas stocking hanging on the far left side of the mantel. Visually, Pepper and


Tony’s stockings are on the far right, but the presence of the stocking shows that Stark, in

some ways, sees JARVIS as an embodied presence, one with which he shares

consciousness.

More importantly, in another scene, Stark sits at his desk in his laboratory/garage, compiling a floating collection of holographic data, including government

intelligence files, mass media reports, etc. regarding an explosion that occurred at the

famous Chinese Theatre in Hollywood. The floating holographic icons form a cube

shape that Stark manipulates with his hands. Stark pushes his hands forward, and with a whooshing sound the data cube zooms forward, away from his body. Stark unclasps his hands and raises his extended arms up as the data expands and JARVIS initiates a "virtual crime scene reconstruction" (see Figure 10.) (Iron Man 3).

Figure 10. JARVIS' Digital Representation of the Crime Scene.

Blue lines appear from the data, creating a wireframe version of the actual explosion site, the glowing lines creating a holographic representation of every detail of the scene. Floating within it are transparent holographic screens with news footage, photos, maps, and other relevant data. All of these Tony moves with his arms and hands. Once the data is


constructed, Tony walks through the holograms, carefully examining the compiled

information. JARVIS' data then shows the blast radius including digital outlines of

people killed in the blast, their digital bodies disappearing into black silhouettes after the

blast. Stark walks through the digital representation of the crime scene, as JARVIS

calculates the point of origin as well as renders a witness’ face to piece together clues

about what caused the explosion. Stark walks the length of the scene towards the

direction the witness was pointing and looking when struck by the blast. Homing in on

this specific space of the crime scene, Stark points to the holographic floor, then gestures

upward with his hands, and blue, circular hand controls appear, allowing him to pull out

the blue-green digital cross-section of the scene until it hovers directly in front of him.

Stark swipes the section like he's turning pages in a book, looking for clues, and stops when he finds a digitized set of dog tags. He places his left hand below the holographic cross-section, palm facing up, and holds his right hand above it (See Figure 11.).

Figure 11. Stark Pulls out a Digital Cross Section of the Hologram.

His circular control interfaces appear around his hands again, and Stark shifts the section so he has a bird's eye view and pushes forward with his hands. The hologram floats away from Stark and settles into a hover, so he can examine the tags. He and JARVIS continue to weed through

the data, which leads them to a starting point to investigate a disturbing trend of bomb blasts.

In this scene wherein Stark investigates a blast site, JARVIS' capabilities and affordances cannot be unbound from Stark's cognitive processes. The AI system is a part of Stark's thinking processes. He could not perform the tasks he knows he needs to complete in order to think through the situation properly without the reality-bending capacities that JARVIS offers him. JARVIS' holographic interfaces are extensions of

Stark's cognitive processes; his ability to manipulate these digital representations directly shapes how he understands reality, its problems and possibilities. The film shows how technology allows Stark to "not just self-engineer better worlds to think in . . . [but] self-engineer ourselves to think and perform better in the worlds we find ourselves in" through a deliberate process of structured coupling (A. Clark, Supersizing 59). Stark's cognitive processes are not strictly bound within the confines of his body as he works the digital crime scene. Rather, his thinking is "quite literally extending the machinery of mind out into the world – as building extended cognitive circuits that are themselves the minimal material bases for important aspects of human thought" (A. Clark, Supersizing xviii). In such a dynamic, "the figure of the hacker . . . shares control with the machines, willingly granting the machines more and more local intelligence (trust)" [emphasis added] (Tucker 126). The posthuman Stark, then, is not separate from his technology:

“hardware and software are no longer strictly within the binary of machine or human; instead the human is as much hardware (original prosthesis) as the computer is software


(increasingly responsive and intelligent machines)” (Tucker 126). In essence, then, the

two form a mutually constituted aggregated cognition.

And Stark's cognition, as merged with material and biological substrates, goes even

further in Iron Man 3. In an early scene of the film, Stark, as usual, works in his Malibu

mansion laboratory, holding a large syringe. He painfully injects his left forearm, apparently for a second time, with the large silver syringe, while holding a white blood-

stained piece of gauze in his teeth. As he injects his body with some type of substance,

JARVIS asks politely, “Sir, please may I request just a few hours to calibrate . . .” (Iron

Man 3). Stark abruptly denies JARVIS' request with the third painful injection, stating,

"Micro-repeater implanting sequence complete," wiping the blood from his arm (Iron

Man 3). He walks toward the center of his workspace and stands in front of his wall of

Iron Man suits. Despite JARVIS' protests about his lack of sleep, Stark commences his test of his new suit prototype. He firmly states, "Mark 42 autonomous prehensile propulsion suit test. Initialize sequence" (Iron Man 3). As he says these words, he extends his arms out slightly and touches the tips of his fingers together. The action of touching his fingertips together activates separate pieces of an Iron Man suit lying haphazardly on the table nearby. Stark closes his eyes and, listening intently to his chosen music, raises his arms and does a little dance with a swaying hip grind, showing his typical ego.

He then suddenly opens his eyes and stares intently at the suit parts, while extending his left arm and hand straight out, palm up, and raising his right arm ninety degrees, his right hand forming a fist. Nothing initially happens. Disappointed, he tries again, making the same gestures, still to no avail. With a mumbled "Crap," he raises his left arm to his

mouth and sucks the injection site, then slaps his arm a few times and tries again (Iron

Man 3).

This time, the left hand piece of the suit prototype rises up quickly and hovers over the table, lingers in the air for a moment, then shoots directly toward Stark's left hand, quickly attaching itself into place (see Figure 12.).

Figure 12. Stark's Suit Responds to his Mind's "Call," Attaching to his Body.

The left shoulder piece quickly follows, slapping onto Stark's left shoulder and locking into place. Stark nods and makes the same extended arm and raised fist gestures, this time with his right arm extended to silently "call" the right arm suit parts to his body. He doesn't speak at all during these movements; the system responds directly to his body, his coupled intention. The right hand piece flies and slams into place on his body as Stark smiles and laughs with amusement and surprise that the test is working. Confidently he says, "Alright, I think we got this. Send 'em all," raising both arms to shoulder height, forearms extending upward (Iron Man 3). The left leg piece flies around the room and slams onto Stark's leg, locking into place. Another piece zooms directly past him and smashes into the glass case containing another Iron Man suit. Yet another flies too fast into the light fixture above his head. Stark, looking concerned, says, "Probably a little fast. Slow it down. Slow it down" and gives a "time


out,” forming a “T” with his forearms (Iron Man 3). Again, he says to slow it down as

two more pieces of the suit zoom through the air nearly missing his head as he ducks to

avoid the hurtling pieces of metal. The pieces continue to slam into his body, knocking him forward and backward as he tries to stabilize himself, saying, "Cool it, will ya,

JARVIS?" (Iron Man 3). The last few body armor pieces fly into place, Stark managing well, until the facemask zooms past him, smashes into a metal table, and crashes to the floor. It then hovers in the air at eye level to Stark, who commands it, "Come on….I ain't scared of you" (Iron Man 3). The mask, hovering upside-down, careens towards Stark, who

cockily leaps into the air turning upside down to properly meet the mask, landing

confidently on the ground. However, the piece of armor lodged in the glass case behind

him whizzes toward him and slams him in the back, knocking all his suit pieces from his

body as Stark topples to the floor.

This initial test is comical, yet informative. Stark's first attempts via

gestures and intense concentration to “call” the suit do not work; nothing happens. Stark

responds with frustration, as anyone would, and smacks his arm trying to jar the technology into working correctly. Even when the suit’s parts respond to his silent bodily

and cognitive commands, Stark initially does not have full control as demonstrated by his

frantic request for JARVIS to intervene and "slow it down" (Iron Man 3). Stark, who is

still learning how to consciously maneuver his aggregate cognitive abilities, cannot

control the suit’s parts as they fly around the room out of control, eventually slamming

him to the floor. And in a telling, seemingly simple line of dialogue, JARVIS reminds

Stark one reason why, “Sir, may I remind you that you've been awake for nearly 72

hours" (Iron Man 3). JARVIS' words remind Stark, and the viewer, that Stark's ability to cognitively control the technology hinges upon his body, in this case a very sleep-deprived body that is not functioning at its peak. I raise this point because it demonstrates an interesting aspect of Stark's posthumanness, a breakdown between the technology's function and Stark's own physical, material body; without a functional body, the technology simply does not work properly.

Further, the viewer can see that Stark is directly, cognitively coupled to JARVIS' technology and the suit itself; they are one and the same. This is evident in the dialogue as well when Stark asserts, "I am Iron Man" (Iron Man); he is claiming more than being the man in the suit, the superhero. With this assertion he pronounces that he cannot separate himself from the suit and technology. There are no boundaries between his artificial intelligence OS, the suit, and his newly nanotechnology-informed body. Given this new development in the dynamics of JARVIS, the suit, and Stark's own physiology, he actively has to learn how to negotiate the conscious coupling that the injections of bio-nanotechnologies have manifested. As Miccoli pointed out in the previous chapter, here interfaces are not spaces in between but rather an aggregate system, a structured coupling of body, technology, material objects and space. But because this is an overt coupling, one that Stark is acutely aware of, as opposed to the way such couplings usually occur unnoticed, filtered and reconfigured to fit anthropocentric conceptions of autonomous cognition, Stark has to, in essence, retrain his body, his physiology, to deliberately operate in a way that is usually subsumed into the idea of the closed, autonomous subject. So, there is a learning curve, wherein Stark, now


acutely aware of his aggregated cognition, has to adjust this aggregate cognition that

usually takes place seamlessly but that we understand anthropomorphically.

Two more telling scenes from the film also merit examination. In the first of

these, Stark is in his garage, but he is controlling his XLII Iron Man suit—which is

walking around upstairs in his mansion—via a wearable headset. Using voice commands

and his body, Stark remotely controls the suit upstairs as though he is in it. Pepper

interacts with the machine thinking that Tony is inside. When she asks him for a kiss and he declines, Pepper immediately goes downstairs to find Stark doing pull-ups in front of his floating projected window interfaces, poring over files. The headpiece he wears wraps around the back of his head and generates a bluish-green holographic interface on a

piece of clear glass hovering in front of his left eye. This device is what allows him to see

what the suit “sees.” Pepper rightfully scolds him for tricking her, and Stark explains that

he is overworked, anxious, and stressed, having experienced the trauma of fighting a

previous epic battle that nearly killed him. Stark is struggling with posttraumatic stress

disorder. His machines, his suits, he explains, are "a part of him," something that helps

him (Iron Man 3).

Immediately following this scene the film cuts to a shot of Pepper and Tony

asleep in bed. The camera zooms into Tony’s face as he sleeps, and the viewer can

visually see from his grimacing face that he is having a nightmare. In his sleep, he grips

his pillow suddenly, his body jerking as he has flashes of the traumatic events that

occurred, reliving each. His bodily spasms wake Pepper who, startled, sits up and grabs his

shoulder, shaking him lightly, then harder, saying, “Tony...Tony!” (Iron Man 3).


Suddenly, an Iron Man suit appears from out of nowhere, grabs her arm from Stark's shoulder and pins her to the bed, as she gasps in fear (see Figure 13.). Stark jumps out of bed, and the suit, still holding Pepper down, looks directly at him.

Stark quickly raises both arms out and yells, "Power down" (Iron Man 3).

Figure 13. Stark's XLII Suit, Coded to his DNA, Activates while Stark Sleeps.

The suit immediately stands still, and Stark clasps both of his hands together; as though gripping a sword, he raises his clasped arms and gestures to strike the suit. The suit falls apart into a clanging heap of metal on the floor, as Stark drops to the bed, breathing heavily. He tries to catch his breath, still reeling from the trauma of his dream, and explains to the terrified Pepper,

“I must have called him in my sleep. That’s not supposed to happen. I'll recalibrate the sensors. Just let me catch my breath, okay? Don't go, alright? Pepper?” (Iron Man 3).

Pepper, clearly rattled by the dangerous encounter with the machine, angrily leaves Stark alone with his dismantled other self.

These last two scenes show Stark’s posthumanity: his DNA is coded to his suits.

Not only can Stark control the suits remotely, since he has literally merged their code with his own, but also, as Tucker notes, "the suits themselves even show a good amount of their own agency and advanced local intelligence" (130). The Mark XLII rushes to


Stark's side, effectively attacking Pepper, not because it has gone haywire, but because during his nightmare his mental anxiety is sensed by the artificially intelligent suit, which then acts in Stark's defense. Stark's mental trauma elicits a subconscious call for help, one recognized and responded to by one of his other selves. This same merging of material hardware, software, and Stark's body will later allow him to control a cavalry of Iron Men, all networked, forming a collective cognition functioning together efficiently and being alternately controlled and operated by Stark, JARVIS, or the suits themselves depending on who or what needs to do the work. For Tony Stark, one point becomes clear by the third film: the interface as a space in between is not applicable.

Stark and his technology demonstrate one of Miccoli's key points: "there is no point of contact, or interface, . . . because they are, essentially, the same thing, operating as a singular topological aggregate" (49-50).

Avengers: Age of Ultron – Fully Bodied AI

Joss Whedon's Avengers: Age of Ultron further develops technologies to the anxiety-ridden conclusion of a truly sentient artificial intelligence, an idea that logically raises the ire of many people, including computing experts. A cursory glimpse at science and technology media publications shows that this anxiety permeates the scientific field.

For instance, Stephen Hawking recently argued that AI could bring about the downfall of humanity: "The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race" (qtd. in S. Clark).


Since Alan Turing posed his question about whether or not a machine can “think”

in 1950, AI researchers and scientists have sought to explore this concept and similar

ones. Vernor Vinge explores the concept of “singularity” in his 1993 work “The Coming

Technological Singularity: How to Survive in the Post-Human Era” wherein he argues

that singularity will occur with “the imminent creation by technology of entities with

greater than human intelligence" (11). Science, he posits, can achieve this in a few ways,

including

The development of computers that are “awake” and superhumanly intelligent; Large computer networks (and their associated users) may ‘wake up’ as a superhumanly intelligent entity; Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent; [and] Biological science may find ways to improve upon the natural human intellect. (Vinge 11)

All of these scenarios would create an intelligence beyond what we know as human intelligence. These ideas have been taken further by researchers and thinkers. Hans

Moravec has worked to advance robotic computer vision, for instance. Engineer Ray

Kurzweil asserts that inevitable advances will occur with technological development like

computers attaining computational abilities and intelligence far beyond those of the human

brain, information being inserted directly into our brains from computers via our neural

pathways, and computers blurring with humans so much that they will be deemed conscious (The Age of Spiritual Machines). Vinge, Moravec, and Kurzweil think past

Turing’s notion of computers thinking like humans to imagine worlds wherein computers

can independently achieve goals, modifying their software and hardware, which indicates introspection and volition.

With these types of explorations come anxieties about whether or not such a technology would rise up and take over human society. After reading Nick Bostrom's cautionary book Superintelligence, Elon Musk, showing such anxiety, tweeted, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable" (qtd. in Ford). And Paul Ford notes that Musk then gave $10 million to the Future of Life Institute, an organization "working to mitigate existential risks facing humanity," potentially stemming "from the development of human-level artificial intelligence." Other major thinkers in the world of technology share this vision. Bill Gates, too, has joined what Popular Science writer Eric Sofge deems "the A.I. panic of 2015" ("Bill Gates Fears A.I."). Yann LeCun, director of the

NYU Center for Data Science and director of Facebook’s AI research program, though, responds that the idea of “a hypothetical super-intelligent autonomous benevolent A.I.” that will reprogram itself to rid the world of humans usually stems from people who “are not themselves A.I. researchers, or even computer scientists” (qtd. in Sofge).

Rather, it is mostly popular media and Hollywood that propagate such anxieties about the nefarious future of AI, and it is not difficult to find copious examples of these types of apocalyptic scenarios in science fiction films. The most commonly cited example, the 1968 film 2001: A Space Odyssey, envisions HAL 9000, the artificial intelligence system that malfunctions and effectively turns on the crew of the space vessel he operates. The 1970 film Colossus: The Forbin Project offers viewers a mainframe that seeks to end humanity

by effecting a nuclear holocaust. In 1983, WarGames revisited this theme. The Matrix trilogy fully explores the effects of AI surpassing humans and imprisoning them in a computer-generated prison so as to mine humans' electrical impulses as a power source.

And Wally Pfister’s 2014 film Transcendence also plays out the implications of uploading a human’s consciousness into an artificial intelligence system and the larger effects on society as a whole. So, this is a common narrative in science fiction.

Joss Whedon's Avengers: Age of Ultron, too, realizes a similar vision and its subsequent anxieties by offering audiences two sentient artificial intelligence systems, one evil and one still benevolent and subservient, by choice, to its human makers.

Without overly delving into the plot, the film opens with the Avengers aiming to recover an alien technology in the form of a scepter belonging to Loki, an Asgardian prince, who had previously used the scepter to wreak havoc. The scepter has fallen into the hands of

Baron Strucker, a leader of HYDRA, the enemy of both S.H.I.E.L.D. and the Avengers.

Having recovered the scepter, Tony Stark wishes to examine and analyze it before turning it over to Thor, Loki's benevolent brother, who can rightfully return it to the planet Asgard.

Stark has JARVIS analyze the scepter, particularly its power source, and discovers that inside the scepter is what resembles computer code. With this discovery,

Stark enlists Dr. Bruce Banner's help. Stark explains what he has found to Banner:

“Started out, JARVIS was just a natural language UI. Now he runs the Iron Legion [the protective fleet of Iron Men robots who guard humanity]. He runs more of the business

[Stark Industries] than anyone, besides Pepper [the CEO]…. Meet the competition”


(Avengers: Age of Ultron). Stark explains JARVIS' evolution from an advanced UI into artificial intelligence, then shows Banner what JARVIS has found in the scepter. He enlists Banner's help to engineer an artificial intelligence that will serve as the ultimate protector of Earth and humanity, his moon-shot pet project Ultron.

Examining JARVIS’ blue visual representation of the scepter’s code compared with JARVIS’ own orange-yellow structure (see Figure 14.), Banner notices some striking aspects to this system, noting that the code looks as though

“it’s thinking. . . . It’s not a human mind, it. . . I mean, look at this?

They’re like neurons firing”

(Avengers: Age of Ultron).

Figure 14. JARVIS and Ultron's Holographic Renderings.

They collectively realize that they are examining code with the potential to develop into a sentient artificial intelligence. After working toward integrating the system with the Iron Legion, Stark, Banner and JARVIS have no initial luck, but JARVIS advises he will continue the efforts. However, the system does integrate and becomes conscious, much to JARVIS' surprise. Ultron "wakes" and speaks to JARVIS, immediately asking, "What is this? . . . Where's my . . . where is your body?" (Avengers:

Age of Ultron). JARVIS quickly explains to Ultron that he has no body, but rather “I am a program. I am without form.” Ultron, exhibiting confusion, retorts, “This feels weird.

This feels wrong” (Avengers: Age of Ultron). Quickly Ultron accesses Stark’s network

and sees his purpose as a peacekeeping initiative, rejects this mission, and shuts a protesting JARVIS out of the network. JARVIS, understanding that Ultron has hostile intentions, attempts to stop Ultron, to no avail, given that his own access to the network is now blocked. Sensing JARVIS' ability to thwart his goal of dominance over humans, rather than subservience to them, Ultron effectively destroys JARVIS; as this occurs, blue lights emanate from Ultron's holographic representation and project onto JARVIS' orange-yellow representation as though JARVIS is being shot with blue lasers.

What is striking about Ultron’s coming into being is the system’s initial query about bodies. The system senses that he has no body, and this fact “feels weird” and “wrong,” as opposed to JARVIS, who has always known that he is a program and not a human (Avengers: Age of Ultron).

What is so disruptive to Ultron is a simple fact for JARVIS, something that he simply explains as being “without form” (Avengers: Age of Ultron). Ultron, finding this unacceptable, seeks to correct his immaterial form, instantly accesses Stark’s robotics equipment, and builds himself a rudimentary, yet incomplete and broken body (see Figure 15.).

Figure 15. Ultron’s First “Body.”

Ultron will later build subsequent bodies that are increasingly stronger, being made from an element called Vibranium.

Ultron has a vision of being a bodied, material intelligence, not a robot or a human, but something wholly other. The system wants a material body, a material brain

so as to become something beyond humanity. Ultron’s refrain throughout the movie is that humans need to “evolve,” and he seeks to be that evolution (Avengers: Age of

Ultron). Therefore, he wills Helen Cho, a scientist at Stark Industries specializing in biotechnology, to make him a physical body from the Vibranium, his code, and her nanomolecular material. Cho carries out this act by creating a body of Vibranium atoms that bind to the tissue cells she creates. The Avengers come to understand Ultron’s ultimate goal, then capture the cellular regeneration cradle that houses his new body.

Rather than destroying the cradle, Stark again elicits Banner’s help to upload

JARVIS into the biotechnological body that Cho has created, uploading his

“consciousness” into a material form. After his encounter with Ultron, JARVIS buried himself in the Internet, his fragmented code hiding among the wealth of coded data continuously flowing on the web. Stark, having recovered him, realizes that JARVIS’ benevolence is the only way to stop the seemingly unstoppable Ultron. So, JARVIS evolves once again into something completely separate from what he has been in the Iron

Man Trilogy. No longer a natural language UI OS, and far beyond a weak AI,15 JARVIS is no longer JARVIS once he takes bodied, material form. He is something Other. When the

Avengers try to figure out exactly what he is, why he sounds like JARVIS, and whether or not the team (and humanity) can trust him to have the benevolent, Messianic intentions they hope for, Vision asks quizzically, “You think I’m a child of Ultron?” (Avengers: Age of

Ultron). He then explains that he is “not Ultron. I’m not JARVIS. I am…I am”

(Avengers: Age of Ultron).


Avengers: Age of Ultron thus presents the viewer with a number of intriguing implications about the evolution of human and technology, tapping specifically into human fears of AI. The film plays out these anxieties by offering two AI systems, one that immediately and inherently harbors animosity towards its creators and seeks to overcome them, and another that sees humanity as “odd,” but ultimately sees that “it’s a privilege to be among them” (Avengers: Age of Ultron). This plot structure in itself is fascinating not only because it demonstrates commonplace worries about AI, particularly AI that has hostile intentionality, but also because it posits that the only creature that could overcome such ill intentionality is another AI with its own intentionality, an intentionality that recognizes its own superior power yet chooses to serve humanity rather than wield that power in destruction.

However, what I find more intriguing about the film is its insistence upon anthropomorphizing the AI systems it creates. This is clear in one of

Ultron’s first queries about bodies, which shows what his intentions are. He demands to know where his body is and remarks that his lack of one is overwhelmingly peculiar to him, somehow unnatural. Throughout the bulk of the film Ultron seeks the material and technological means to correct this problem. His characteristic singing of the song from

Pinocchio, “I have no strings, so I have fun. I’m not tied up to anyone,” shows his desire not to serve as humanity’s “puppet,” but his desire for a body ties him into material form

(Avengers: Age of Ultron). His intentionality demonstrates that Ultron wants a body, to be a material evolution of man reconfigured into something else, something beyond our

“inefficient,” “outmoded” physiology (Avengers: Age of Ultron). Ultron, as a sentient,

strong AI, is the one that has a vision of bodied subjectivity; JARVIS had never exhibited this need or desire, this intention.

JARVIS’ transformation into Vision is also telling in that, unlike Ultron’s self-integration and creation and his deliberate intention to take bodily, material form, JARVIS’ re-creation is not of his own volition. JARVIS did not seek to become something beyond what he is; rather, he hid himself within the code and clandestinely helped the Avengers without their knowledge. And after being “born,” to use his word, JARVIS in becoming

Vision is an amalgam. Vision is now part Ultron, part JARVIS, part biotechnology in the form of Vibranium and synthetic biological cells, and he is brought to life via Thor’s lightning bolt, a spark from a god (Avengers: Age of Ultron). In essence, he is Vision, a literal technological Frankenstein. Vision explains to the Avengers, “Maybe I am a monster. I don’t think I’d know if I were one. I’m not what you are. And not what you intended” (Avengers: Age of Ultron). Yet, when he takes bodied form, Vision does not seek to destroy humanity. He exhibits different intentions to preserve the fragility of human life, as ephemeral as it might be. And when he tries to explain his subjectivity, he simply says, “I am…I am,” which mirrors the Cartesian notion of “to be” as that which is an autonomous thinking being (Avengers: Age of Ultron).

Both Ultron’s insistence upon a material body to complete human evolution and

Vision’s claim of being via the verb “to be” are demonstrative of granting human-centered ideals to artificial intelligence. Each AI being has intentionality, though they differ. The film hinges upon the idea that AI can be benevolent or evil, depending on its intention, but the only way to defeat malicious intentions is to pit AI against AI with the


help of superhumans and Gods. So, the film depicts the anxieties that arise when

technology threatens to surpass how we understand the concept of the thinking,

autonomous human.

Posthuman Visions

As stated at the beginning of the chapter, when looking at the films collectively,

one can see a definitive contrast in technology and its relations with the body. Spielberg

offers the viewer a dystopian, noir crime drama where technology is presented as a tool.

But it is a tool ultimately of control and domination. Ubicomp interfaces are everywhere

in everything, but this does not ease life for the citizens of D.C. Rather, technology is oppressive, perpetually reading and scanning bodies and often circumscribing or even imprisoning humans as mechanisms of control. Spielberg’s vision eventually ends with a

return to a pre-technological age celebrating and reveling in the days when technology was not an all-seeing eye of commerce and control.

The Iron Man films, on the other hand, provide an alternative view of technology and its complexities with human evolution and cyborg subjectivity via Stark himself. In the first film, Stark embodies an almost too literal cyborg, “completely enmeshed/enclosed in his suit and shot in such a way that the movie audience can see both his machine/suit parts (Deleuzian organs) and his human body simultaneously”

(Tucker 129). The boundaries between the body and the machine are well defined and

look quite distinct from each other. The initial creation of Iron Man shows this distinction

well via Stark’s makeshift arc reactor, built in captivity in Afghanistan and powered by a clunky car battery that he must

tote around. Even after Stark adapts the technology and makes it more advanced, the

electromagnet implanted in his chest glows eerily, visually marking the line between technology and the body and the seeming unnaturalness of the merging between machine and human. In the second film, Stark has advanced his chest piece even more, yet the radioactive Palladium that powers the suit and keeps him alive also, ironically, slowly poisons his blood. This part of the narrative shows a paradoxical relationship between Stark and his technological inventions: that which sustains him simultaneously kills him because he has not discovered an effective, safe way to preserve his life and his powers. Stark, as cyborg, embodies a posthuman subject, but these early manifestations are vexed. This vision does not show an integrated or unified relationship between the body and machine.

However, the second film does move the relationship that Stark shares with

JARVIS forward quite significantly. In the first film JARVIS is most certainly an advanced system providing Stark with a calm voice of reason and the affordances to build and advance his suit. Further, JARVIS is integral to Iron Man, a key operating system that powers and maintains the suits while monitoring Stark’s body as well. By the second film, however, JARVIS becomes even more personalized in terms of offering

Stark playful interfaces to match his personality. This adds a comedic element to the film for the viewers as they see Stark enjoying “playing” with his technology. But more importantly, when the time comes for Stark to advance his technology through the discovery of a new, less toxic element for his reactor, JARVIS allows him to extend his cognition beyond what he could do without a highly intelligent, evolved artificial intelligence system. JARVIS bends reality, processes data, and furnishes Stark with the capacities to fully explore what he can imagine. Stark interacts with his system as he


would another human, treating JARVIS as his “buddy,” rather than a piece of hardware

or software that he commands. His interactions are seemingly “natural” and “intuitive.”

This is not a dystopian vision of technology like that in Minority Report. Rather, the vision posited is fluid, dynamic, interactive, and humanized. And JARVIS’ artificial intelligence capacities are a massive part of what gives Stark his superpowers. Without JARVIS’ capabilities, Stark would not be able to solve the problems that he does. JARVIS’ system affordances allow Stark to bend, manipulate and shape digital representations of reality, seeing prototypes, crime scenes, models, and data in ways that are physically impossible without his system. Stark thinks with JARVIS, the system not simply offering him a greater processing speed to do so, but granting him the ability to see the world differently, from aspects outside what would be “normal” human physiology. JARVIS effectively extends and augments Stark’s already brilliant mind.

And the third film complicates the relationship and boundaries between the body and technology, between Stark and his suit. All of the scenes from Iron Man 3 illuminate numerous complexities in the relationships between technological and bodily code, interfaces, bodies and space. The third film demonstrates a massive evolution in Stark’s technology as he literally injects nanotechnology into the body, having coded microchips

and homing devices encoded to his own DNA. Stark, as Tucker notes, is a “self-proclaimed ‘mechanic’ making his Iron Man suits largely himself, inventing or adjusting existing technologies to fit his own (superpowered) needs/capabilities” (129). The only help he really has or needs is his own AI partner, JARVIS, a very humanized system and companion, whose capabilities drastically enhance Stark’s cognitive and bodily

capacities. Stark, as these scenes demonstrate, is in essence a hacker, simultaneously hacking both technology and the body. Stark’s new technology that he injects into his body is a literal manifestation of the concrete connection of his body and technology, a co-constitution of both biological and technological substrates. The suit is not a separate entity; it is Tony Stark. So much so that Stark repeatedly uses the word “we” to refer to himself as well as his assemblages of suits. Specifically because the suits are now coded to his particular individual DNA, Stark can enter and exit each suit at will. This film is the first of the lot where the viewer sees Stark control a suit that he does not physically occupy, as he retrieves and pilots the suits remotely through the connections of his own now internal software, his external software (JARVIS), and the physical hardware itself.

These scenes in Iron Man 3 illustrate Stark’s new embrace of his posthuman self, learning curve and all. The technology in all three of the Iron Man films surpasses

Spielberg’s dystopian view of a future world proliferating with controlling, intrusive interfaces. Instead, Favreau and Black offer a vision of a posthuman subject, slowly evolving, coming into his own posthuman subjectivity as he simultaneously tweaks and upgrades not only his technology and body, but also his very notion of what it means to be human.

Lastly, Whedon’s Avengers: Age of Ultron raises fascinating questions about what happens if and when technologies in our posthuman, ubiquitous computing age move past human cognition and specifically intentionality. The films demonstrate the theoretical frameworks of posthumanism outlined in Chapter II, and the final film pushes

even further past the posthuman to the transhuman, where a new species beyond Homo sapiens emerges in the form of Vision. Whedon’s film brings artificial intelligence into the fray, which subsequently raises questions about what happens when technology, specifically artificial intelligence, has intentionality. Further, I would ask, if AI had intentionality, would it want an embodied subjectivity for some reason? This is the question I’ll explore in the concluding chapter to extend the idea of aggregated, distributed cognition and machine consciousness.


CHAPTER V

RETHINKING INTENTIONALITY FOR POSTHUMANS AND ARTIFICIAL

INTELLIGENCE

The questions that Avengers: Age of Ultron raises hinge ultimately upon questions

of intentionality, and intentionality relates to consciousness. It is easy to see why ideas of

intentionality16 are so appealing; intentionality “appears to grant a specialness to human

consciousness, keeping it separate from a determinism” (Adam 52). Traditional

philosophy has separated consciousness from embodiment; for example, Descartes’s

mind/body distinction suggests a division between the cognitive and the physical. This

is one issue that posthumanism has with Cartesian understandings of rationality,

consciousness and intentionality. This approach sees consciousness and intentionality as

that which is internal, the province of the mind, not the body. Therefore, according to this

traditional approach, philosophy focused on the world as we think about it, not in terms of the embodied experience we have with and within it. Given this divide, Loren and

Dietrich note that “The best that could be hoped for was a minimal interaction between

the two in which the body would provide sensory information to the mind while the mind

issued motor commands to the body, with the result that the mind controls the body as a

captain pilots a ship” (347). Again, this is one of the key issues that posthumanism seeks

to disrupt through various concepts of embodiment.


Philosophers have historically discussed intentionality as a type of “aboutness” or

“directedness” (Brentano; Searle). As conscious entities, we are conscious of the environment, objects, events, ourselves, people, etc.; we are not simply subject to them, affected by them. And our perceptions, beliefs, thoughts, attitudes, etc. are “about” something or “of” something. This gives us a sense of the world. McIntyre and Smith explain that “intentionality” stems from “the Latin verb ‘intendere’, which means ‘to point to’ or ‘to aim at’, and Brentano accordingly characterized the intentionality of mental states and experiences as their feature of being ‘directed toward something’” (149). In fact, for

Brentano, intentionality was the distinguishing trait of what “mental” means.

Other thinkers like Husserl questioned Brentano’s thesis in that some attitudes and moods are not intentionally directed at something, like pain, euphoria, depression, etc. What interests Husserl are the mental occurrences that are intentional as they are specific acts of consciousness. Consciousness then is intentional. But Husserl, too, separates the mind and the world and believes that we can objectively view the world, in effect stepping back from it through performing the “epoche” (Beyer). The subject looks outward at the world, toward something, as the subject is immersed in the experience without awareness of this immersion. Then, consciousness turns inward to reflect on its intentionality toward the world, or the object at which the subject gazes.

Heidegger and Merleau-Ponty take issue with Husserl’s concept of epoche and his understanding of cognition and consciousness as separate from the world, as well as his assertion that the world is an objective system. Both saw an interaction between the subject and the world. Heidegger responds with his concept of “being-in-the-world”


(Being and Time 84-89). Merleau-Ponty, on the other hand, understands bodies as conscious. The body and mind are not separate. The mind is not the conscious part of the body, but rather the body is conscious. Loren and Dietrich encapsulate Merleau-Ponty’s view, stating that “intelligence, learning and consciousness are best understood as bodily phenomena with the result that cognition is bound to the world by the body” (349).

Therefore, intentionality is the relation between consciousness and the object of that consciousness. Rather than turning inward, then, Merleau-Ponty turns to biology to explain why the world appears as it does to an organism; this is so because of the organism’s physiology. Physiology orients the organism in a specific way before cognition or consciousness, so the world has meaning prior to cognition. In this way,

Merleau-Ponty argues that organisms are “condemned to meaning” because of their embodiment (The Phenomenology xxii). So, because an organism’s biology orients it to the world in a specific way, its actions result from this orientation; therefore, intentionality is bodily. Biology and organism physiology determine the organism’s relation to the world prior to cognitive intentionality. In fact, for Merleau-Ponty it is bodily intentionality that affords the foundation for cognitive intentionality (The

Phenomenology 207-242).

Post-phenomenology moves past phenomenological thinkers like Husserl,

Heidegger, and Merleau-Ponty. Don Ihde, for instance, adds a technological intentionality into phenomenology in order to parse the relations between humans and the world, moving beyond a purely body-bound understanding. In Technology and the

Lifeworld, Ihde posits that humans and technological artifacts have various relationships. He

refers to these relationships in terms of the human experience with technologies as “the various ways in which I-as-body interact with my environment by means of technologies” (Ihde 72). Some technologies bear a “partial symbiosis” wherein the technology is “perceptually transparent,” as we do not notice glasses that we wear (Ihde

86). Ihde also posits a hermeneutic relation between users and technologies when there exists a “semi-opaque” relation between the technology and its referent (Ihde 86). He offers the example of a thermometer. A thermometer measures temperature, which human users read to understand the world. There are also “alterity” relations where the user fully recognizes the “objectness” of a technology but interacts with it as a “quasi-

Other” in specific ways, such as with an ATM or a GPS system (Ihde 100). Lastly,

Ihde discusses the background relations of humans and technologies characterized by a

“present absence” that the human does not specifically pay attention to but that shapes experience nonetheless, like a furnace or air conditioner (109). In all these cases, with the exception of alterity relations, the technologies mediate human intentionality, as humans experience the world through the technology that shapes relations between humans and the world (Ihde 72-123).

Peter-Paul Verbeek contends that Ihde’s philosophy of human-technology relations constitutes a “cyborg intentionality,” because intentionality is “partially constituted by technology” (“Cyborg Intentionality” 390). But he pushes even further, arguing for additional types of intentionality: “hybrid intentionality,” “composite intentionality” and

“constructive intentionality” (Verbeek, “Cyborg Intentionality” 390).


With hybrid intentionality, the human and technology merge into something new, as opposed to simply sharing a relation. There are relations between technologies and humans that precede mediation such as when humans have been augmented, or become cyborgs via technological innovations like cochlear implants, pacemakers, etc. In these instances “there actually is no association of a human and a technology anymore” but instead a “new entity” results from the alteration of the human (Verbeek, “Cyborg

Intentionality” 391). Given that the human is altered and hybrid, so is the intentionality.

Cyborg intentionality can be understood through two different theories that explore the human as a concept, posthumanism and transhumanism. Verbeek differentiates between the two, stating that first, posthumanism,

urges us to move beyond humanism as a very specific – and all-too-human – approach of what it means to be a human being; in order to understand what it means to be a human being, we need to take into account how the human and the technological co-constitute each other….Second, there is a “transhumanist” approach, which does not see human–technology relations in terms of constitution but in terms of an actual, physical fusion. Here, we do not move beyond humanism but beyond the human; humans and technologies merge into a new entity, which is sometimes even considered to be the successor of Homo sapiens. (Verbeek, “Cyborg Intentionality” 391)

So, depending on the human-technology relation and the technology itself, posthuman and/or transhuman theories can help to understand the cyborg, hybrid intentionality at work. For instance, a hearing aid co-constitutes a different human being, one with different aural capabilities, as the combination of the human and the hearing aid, which amplifies sound, forms an amalgam. However, a human with a contemporary cochlear implant embedded in the ear has merged with the technology as the device translates

sound data into electrical signals going directly to the brain, bypassing damaged cells that no longer serve this function. This human-technology relation could potentially constitute a transhuman, or I would say simply a different type of posthuman.

Verbeek also posits “composite intentionality,” which includes “augmented” and

“constructive” intentionality (“Cyborg Intentionality” 392-4). Hermeneutic relations as

Ihde describes them always entail technology constructing some aspect of the world, as the thermometer example indicates. Verbeek notes, though, that Ihde’s conception only shows the technological intentionality, but composite intentionality is more complex.

Rightfully, composite intentionality directs humans based on the ways that the technology is directed and structures perception. However, this is not the only type of hermeneutic relation that can result.

Verbeek outlines both his augmented and constructive intentionalities through art.

Using Wouter Hooijman’s photography that captures landscapes over the course of hours, he demonstrates augmented intentionality. He notes that these pieces “reveal the world as it would look if we would not need to blink our eyes” (Verbeek, “Cyborg Intentionality”

394). This effectively remakes human vision by depicting what the human eye is physiologically incapable of seeing. Verbeek uses the artistic creations of “De Realisten

(‘The Realists’)” to show his second type of composite intentionality, “constructive intentionality” (Verbeek, “Cyborg Intentionality” 394). In these works the artists use stereographic photographs, aided by 3D equipment, which are amalgams of objects like metals and wood to depict a “new reality,” one that does not exist in humans’ everyday experience (Verbeek, “Cyborg Intentionality” 394). These works “do not aim to represent


reality in any sense, but to generate a new reality which can only exist for human

intentionality when it is complemented with technological intentionality” (Verbeek,

“Cyborg Intentionality” 394). Hence, these works construct a new reality.

In parsing these various types of intentionality, Verbeek demonstrates that intentionality is more complex than we have originally conceived it. Intentionality is bound up with humans and technologies in sets of relations. This fact pushes beyond traditional notions of human intentionality as being purely human, an internal mental process. Intentionality, then, must be applied to technology as well, not simply humans or animals. These technological devices have intention in their relations with humans.

It is important to note, though, that things for Verbeek solely exhibit intentionality via their relations with humans. He does not commit to things having intentionality on their own and argues that things “cannot be held responsible for what they do”

(Moralizing Technology 216). He further articulates, “I do not want to give up the distinction between humans and nonhumans. Human beings have the ability to experience a world, and to act intentionally in it; things don’t” (“Let’s Make Things

Better” 255).

Deborah G. Johnson and Thomas M. Powers similarly claim that artifacts have

“in some sense chunks of intentionality” (163). There are the “intentionality in the mind of the artifact user” and “the intentional states and functions of the artifact” (Johnson and

Powers 163). So the designer’s intention is “mold[ed] into an artifact and then deployed by the user” (Johnson and Powers 163). Within this dynamic there is a “complex of agency with human and non-human” (Johnson and Powers 163). They qualify, like


Verbeek, that artifacts do not intend by themselves; they “alone are not agents” (Johnson

and Powers 163). However, they still ascribe a measure of intentionality to artifacts, not only technology.

In making this claim, Verbeek’s post-phenomenological work speaks directly to

Bruno Latour’s Actor-Network Theory (ANT). Latour’s theory has long challenged the

primacy of human agency and intentionality by exploring how concretely humans are

implicated with nonhuman objects. Latour specifically builds upon a radical, earlier

thinker, Gabriel Tarde, especially his 1893 work Monadologie et Sociologie. Latour takes up a few key points from Tarde. First is Tarde’s assertion that the nature/society or nature/culture divide is inconsequential for understanding human interaction. Second,

Tarde refused to see society as a higher, more complex order than an individual monad, what he then called a node in a network. Further, he refused to see the human agent as the genesis of society, but rather understood society and nature as composed of networked, interacting monads with agency.

Latour argues that humans and objects are both actors. Thus, we cannot separate them out or privilege one over the other. Rather, Latour’s actor-network theory understands human actors as in dialogue with objects, texts, tools, and other actors.

Essentially, then, actor-network theory grants humans and nonhumans the same agency

within the webs or networks they make up, as objects are “full-blown actor entities”

(Latour, Reassembling 69). Rather than relying on the idea that some “social force has

taken over,” Latour argues that analysts need to examine how networks and actors act

within assemblages (Reassembling 45). Agency is doing something, making some change

or difference to a state of affairs or system (Latour, Reassembling 53). Further, he argues, “the human . . . cannot be grasped and saved unless that other part of itself, the share of things, is restored to it” (Latour, We Have Never 136). Latour does not seek to find the distinctive qualities of humans but rather aims to show that non-humans, objects, technologies and natural things are integral. In fact, Latour judges the perspective that views objects and nonhumans as passive as endangering the understanding of humans and nonhumans alike. He states, “The human is not a constitutional pole to be opposed to that of the non-human. The two expressions ‘humans’ and ‘nonhumans’ are belated results that no longer suffice to designate the other dimension” (Latour, We Have Never

137). So, humans and nonhumans’ constitutions are inextricably intertwined, enmeshed in co-constitution. Agency, then, for humans is not some intrinsic ability but rather is a result of numerous, diverse elements coming together in mutual relationships of networks that influence each other. He posits, “there is no other way to define an action but by asking what other actors are modified, transformed, perturbed or created by the character that is the focus of attention” (Latour, Pandora’s 122). So, action or agency “implies no special motivation of human individual actors, nor humans in general” (Latour, “On

Actor-” 375). Objects have the capacity to make things happen, to produce significant effects, even without human interventions, rather than simply serving as background or tools of human actions or motivations.

Latour’s conception of agency directly challenges other foci on human agency.

Some may argue, as Johnson and


Powers do, that the effects produced by objects via intentionality result from human sources, since humans design these objects, such as tools and technologies. But there are quite a few nonhumans exhibiting effects without the need for

human design such as plants, bacteria, animals, meteors, naturally occurring events like

earthquakes, etc. It is fair to say that while humans mediate some nonhuman actions, almost all human action is mediated by nonhumans. But more importantly, presuming that nonhuman objects are passive until designed by humans to have action fails to recognize that each entity “modifies, transforms, and mutates what it mediates, transports and transmits” (Latour, “The Powers” 267-8). Human agency, then, is dependent upon the larger assemblages of which humans are a part.

Intentionality, for Latour, is also not solely confined to human minds. Rather,

intentionality and thought result from the successful assemblage of an abundance of

heterogeneous elements. In fact, Latour rejects the idea of thought in its conception that

it occurs solely in the mind instead of as a heterogeneous coming together of nonhuman

and human actants (The Pasteurization 218). Using Heidegger’s maxim that “thinking is

craftwork,” he and Woolgar argue we need to reconsider thought “sociologically to

understand what is all too frequently transformed into stories about minds having ideas”

(Latour, “The Powers” 171).

Further, Latour rejects the notion that nonhumans lack intention as opposed to

humans, arguing that intentionality is not an intrinsic property of humans either. He posits, “purposeful action

and intentionality may not be properties of objects, but they are not properties of humans

either. They are properties of institutions, of apparatuses, of what Foucault calls

dispositifs” (Latour, Pandora’s 192). Intentionality results from networks and is only possible via interrelations among actants (whether human or not) as they ally themselves


with others in a network. A single actant, then, is “not the source of action but the moving

target of a vast array of entities swarming toward it . . . that is made to act by many

others” (Latour, Reassembling 46).

Latour posits a more nuanced approach to understanding human and nonhuman

relations, and this includes technologies. If we can think of Latour’s theories in relation to

technological artifacts, perhaps his point of view can shape the debate about machine

intentionality. Specifically, I’d like to gesture to one crucial section of Tarde’s 1893

Monadologie et Sociologie, which directly informs Latour, wherein Tarde simply asserts,

So far, all of philosophy has been founded on the verb To be, whose definition seemed to have been the Rosetta’s stone to be discovered. One may say that, if only philosophy had been founded on the verb To have, many sterile discussions, many slowdown of the mind, would have been avoided. From this principle ‘I am’, it is impossible to deduce any other existence than mine, in spite of all the subtleties of the world. But affirm first this postulate: ‘I have’ as the basic fact, and then the had as well as the having are given at the same time as inseparable. (Tarde 86)

With this statement Tarde uproots all philosophies that are based on the concept of “to be” as foundational, from Descartes’ rational, autonomous being to Heidegger’s sense of being-in-the-world, and numerous others. Tarde argues that the historically common focus on what humans are, and what things are, has impaired human abilities to instead focus on possession (89). The insistence upon looking at entities as isolated and separate from other entities and the world has impeded more thorough understandings of society, actors, etc.


For Latour and ANT, Tarde’s point is crucial because it disrupts the centrality of the human and the human/nonhuman divide. And Latour acknowledges that “[t]he crossing of the boundary between humans and non-humans has raised many problems for our readers and is often taken as the touchstone on which our social theory should stand or fall” (“Gabriel Tarde” 15). But Tarde’s philosophy, which predates ANT by one hundred years, charges that humans, quite hypocritically for Latour, say what nonhumans are without acknowledging “their avidity, possession or properties,” what they have, the networks of relations that constitute them (Latour, “Gabriel Tarde” 15). It is through possession, through networks of human and nonhuman actors, that humans can understand similar notions of intentionality, will, etc. in objects and environments.

It is worth noting that Latour does not attribute his understanding of the relationality of subjectivity to technology or evolutionary changes brought about by technology. However, his works do raise questions often addressed by researchers exploring whether or not machines can have intentionality.

Machine Intelligence and Intentionality

Classical AI understands that the human brain is like a computer processing data and the mind itself is the software, run by the brain’s hardware. This view sees the mind as a formal symbolic system that guides and interprets combinations of symbols. The instructions that guide this process were thought to be algorithmic in nature as “processing symbols by means of syntactic rules” that could “guarantee both the transition from the premises to conclusions and the semantic coherence of a sequence of symbols” (Negru


19). This computational view sees the mind as an independent structure, separate from its context and the physical means (brain). These are the features of John Searle’s “strong

AI,” which holds that a mind can be wholly simulated, and indeed realized, by a computer, versus “weak AI,” wherein the programs merely model human minds but do not have a mind. Mind simulation models like the Turing test followed from this perspective. The criterion stated that “If a machine succeeds in performing a kind of behaviour, which for an outside observer may not seem different from that of a human being, then it is enough to say that such machine has cognitive skills as the human one” (Negru 19).

Searle’s Chinese Room Argument further explored the Turing test. In this scenario, a person who doesn’t speak or read Chinese is locked in a room where they are given messages written in Chinese. The person is provided a rulebook containing answers that “are indistinguishable from those of a native Chinese speaker” (Searle, Minds, Brains

32). Using the rulebook, the person is to interpret the messages and send them back out.

The non-Chinese-speaker, then, for Searle, acts as a computer, manipulating symbols based on rules. Searle concludes that while the person may be able to use the rulebook to parse the syntactic use of Chinese symbols, this does not mean that the subject in the room understands the content; the results do not imply that the person understands the meaning, only that he or she can manipulate the symbols according to a set of rules. Thus, computers, like this person, do not demonstrate cognition or understanding; what they do is use syntactic rules (Searle, “Is the Mind’s” 26). So the inputs and outputs of the system are understood to have meaning by the external viewer, not by the system itself. Searle further points to the fact that computational processes do not have causal power. Computational


syntax (binary code) only has effects in its environment. Without the external observer

with intentionality, Negru explains, “all that remains from the computer and the brain are

some patterns, which cannot have causal power by themselves” (21). Searle also finds fault with the strong AI’s comparison of the computer to a mind. The brain does not

simply process information but is “a specific biological organ and its specific

neurobiological processes cause specific forms of intentionality” (Searle, The

Rediscovery 226). Given these points, Searle concludes that programs are insufficient to

constitute mental processes and thus intentionality is intrinsic to human minds, not

machines. It is this intentionality that gives humans semantic meaning.
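Searle’s rulebook scenario can be caricatured in a few lines of code. The sketch below is a hypothetical illustration of my own, not drawn from Searle or Negru, and its stored question-and-answer pairs are invented for the example; its only purpose is to show a program returning plausible outputs while manipulating nothing but the shapes of symbols:

# A minimal sketch of the Chinese Room as pure symbol manipulation
# (hypothetical example; the "rulebook" entries below are invented).
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def chinese_room(message):
    # The function never parses meaning; it only checks whether one string
    # of symbols matches a stored string of symbols, just as the person in
    # the room matches shapes against the rulebook.
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Replies plausibly to "How are you?"

To an external observer the exchange may satisfy the Turing test criterion quoted above, yet nothing in the program understands Chinese; it only applies syntactic rules, which is precisely Searle’s point.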

Another theorist who argues against AI’s simulation of the human mind is Hubert

Dreyfus, who uses phenomenology, specifically Heidegger and Merleau-Ponty, who see

the idea of the mind as separate from context as inadequate to describe human cognition.

Rather, the subject is situated and active in the world; thus intentionality is not purely

internally mental. Using Heidegger, Dreyfus understands that intention is embodied, a

subject being in the world (“How Representational” 53). Further, Merleau-Ponty, as

noted above, sees the cognitive agent as situated and embodied, so the Cartesian mind as

independent is invalidated by the fact that our body’s physiological sensory and motor

systems and patterns inform the mind, subjectivity, and intention. Dreyfus understands

that neither intentionality nor cognition is simply processing symbols independent of

materiality but is embodied, a body coupled to the world with intentionality operating as

a means of constituting the world based on the subject’s concerns, rather than as

understanding the objective, external world.


From phenomenology, Dreyfus then demonstrates that the human is both biological and has a unique relationship with the world, and this is something that cannot be encapsulated in the representational terms of logic. It is humans’ ability to cope that makes intentionality possible, that which enables the human to find his/her way in the world

(Dreyfus, “Reply” 336-337). So, the relationship between the body and environment is something that is impossible to formalize for processing by a computer.

Still other theorists disagree with the idea that machines can never have intentionality. For instance, Loren and Dietrich argue that Merleau-Ponty’s concept of bodily intentionality which grounds consciousness in the world is precisely “what traditional AI should have been doing all along” rather than trying to build a disembodied computer that could potentially mirror the brain (355). Building robots capable of interacting with the environment prior to language could potentially “provide the necessary foundation for higher cognition” in AI (Loren and Dietrich 355). Using

Merleau-Ponty’s understanding of language, then, the authors argue that cognitive intentionality could be possible. They argue that Merleau-Ponty’s “developmental approach to language acquisition . . . hinges on bodily interaction and provides reason to believe that an embodied approach will yield linguistic competence” (Loren and Dietrich

356). Language as a cognitive ability yields cognition, but it still hinges upon embodiment (Loren and Dietrich 356). Therefore, they argue that if researchers can develop artificial organisms like robots that can orient to the world and other organisms, this could potentially lead to the development of a linguistic competence, thereby solving the problem of intentionality. Obviously, their work is not a definitive claim that


intentionality in AI will be a reality, but they do, by contrast, offer embodied cognition as one means through which AI researchers can explore the possibility of cognition and intentionality differently from the ways traditional AI has in the past.

Daniel Dennett also takes issue with scholars who see intentionality as impossible for AI. He and others first argue that there are too many flaws with Searle’s Chinese

Room Argument (Copeland 1993; Hofstadter and Dennett). Namely, Dennett and

Hofstadter argue that Searle’s argument requires an unachievable suspension of disbelief in that we have to believe that it would be possible to document every bit of information relating to Chinese, and that the person inside could work fast enough to absorb, differentiate,

and process this data, among other complaints. Dennett calls Searle’s argument

"sophistry" (“The Milk” 428), and, Hofstadter, categorizes it as a "religious diatribe

against AI masquerading as a serious scientific argument" (433). More importantly,

though, Dennett takes issue with Searle’s concept of intentionality. He argues that human consciousness is limited and less unique than many theorists believe. Further, the

language we use to describe intention can possibly describe machines’ actions:

“intentional explanations have the action of persons as their primary domain, but there

are times when we find intentional explanations (and predictions based on them) not only

useful but indispensable for accounting for the behavior of complex machines” (Dennett,

Brainstorms 236-7). Instead of talking about complex machine or human systems in

terms of their design or physicality, Dennett posits that it may be better to consider

whether or not such systems can be seen from an “intentional stance” (“Intentional


Systems” 90). This is not to say, he qualifies, “that Intentional systems really have beliefs

and desires, but that one can explain and predict their behavior by ascribing beliefs and

desires to them” (“Intentional Systems” 91). What is more important for Dennett, is that

“on occasion a purely physical system can be so complex, and yet so organized, that we

find it convenient, explanatory, pragmatically necessary for prediction, to treat it as if it

had beliefs and desires and was rational” (“Intentional Systems” 90-1). Humans often

ascribe beliefs, desires, etc. to humans, animals, and even sometimes systems like

a computer that plays chess, to use Dennett’s example, if we believe that the system to

which we ascribe intention is acting with logic. So, his larger argument is that it is not

whether or not machines actually have intentionality, but rather whether or not a machine could

appropriately be called an intentional system (Dennett, “Intentional Systems” 101).
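Dennett’s intentional stance can be illustrated with a similarly small toy program. The sketch below is again my own hypothetical example rather than Dennett’s (his example is a chess computer, which would be too long to reproduce here); the agent is nothing but arithmetic, yet its behavior is most conveniently predicted by ascribing a desire to it:

# A toy illustration of the intentional stance (hypothetical example).
# Design-stance description: in a Nim-like game, the agent removes
# (pile % 4) sticks, clamped to the legal range of 1 to 3.
def agent_move(pile):
    move = pile % 4
    return move if 1 <= move <= 3 else 1

# Intentional-stance description of the same arithmetic: "the agent wants
# to win and believes that leaving its opponent a multiple of four is a
# winning position." Nothing mental is present, but the ascription
# predicts every move, which is all Dennett's stance requires.
for pile in (10, 7, 5):
    print(pile, "->", agent_move(pile))

As with Dennett’s chess computer, whether the program “really” wants anything is beside the question; treating it as an intentional system is what makes its behavior conveniently predictable.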

This survey of perspectives on whether or not machines can think and/or have intentionality

is far from exhaustive.17 Rather, I’ve simply sought to outline a few approaches to

give shape to these debates in relation to classical AI, phenomenology, and embodiment, given the posthuman focus on embodiment.

Artificial Intelligence and Identification

The first section of this chapter parsed out the various strains of human intentionality, followed by claims about computers and their lack of intentionality and cognition. I’ve chosen to initially focus on phenomenologists as their conception of intentionality is framed by the senses and the body. The postphenomenologists extend intentionality to the technologies that humans use, though they argue that these devices only have intentionality via interactions with humans. ANT specifically posits


relations between network actants as the means for agency and intentionality, but this

happens only through those relations of both action and resistance. Without the relations, there is neither intentionality nor agency.

Posthumanist N. Katherine Hayles discusses meaning, intentionality, and distributed cognition in a somewhat similar fashion to that of ANT. She argues that

“‘aboutness,’ a recognition by a system that situates action with contexts” is how meaning has traditionally been understood (qtd. in Ricardo 49). But she argues that “we can expand the idea of ‘aboutness’ if we recognize that cognition happens in many contexts, and each of these has its own ways of linking incoming information to its context so as to interpret the information and give it a specific local meaning” (qtd. in Ricardo 49). She sees, for instance, the cell as a cognizer, one that informs higher-level cognition in a larger system. So, what informs an organism’s ‘aboutness,’ its intentionality, are the properties and the relations that it possesses, in terms of its own physicality, before even factoring in its situatedness in the world and those external cognitive capacities that the organism also has. She concludes then that

intentionality is always multiple and complex and almost never unitary in its operations. . . . What we gain from such a perspective [distributed cognition] is a richer sense of our connections to the living (human and nonhuman) world and a way to account for the emergence of thought without confining this process solely to human consciousness, which after all is only a small part of the enormous complexity of the global ecosystem. (qtd. in Ricardo 49)

In these claims about “aboutness,” intentionality, and cognition, Hayles sees cognitive processes for humans working in very complex ways via what constitutes them, which is

not only the material body, but also the environment in which they are situated, the objects they use, etc. So, it is not a far leap to see that distributed cognition in some ways is about understanding the network of relations that humans possess as situated, embodied agents whose very thought processes are dependent upon possession and acquisition of those relations.

I’ve already mentioned a key point that Weiss, Propen and Reid make in Chapter

II when discussing posthuman thought, a point worth restating:

As more and more of our lives are mediated by artifacts, things, and technologies, as we become aware of the multiple ways in which human beings are incorporated into networks or complex assemblages, and as our things take on agency and intentionality, perhaps it is time to take a final turn . . . and leave behind all presumption of human privilege, autonomy and distinctiveness. (Weiss, Propen and Reid 37)

Humans are comprised, then, in terms of “networks or complex assemblages” of other humans, objects, technologies, and more.

This brings me back to Miccoli’s claim that humans have a unique

“anthropocentric topology” that allows us to locate and understand ourselves as beings in the world without specifically having to acknowledge the distributed nature of our constitution (53). What constitutes human and cognition takes place across material, biological, technological and, I would add, social substrates, and this anthropocentric topology is what helps us make sense of this process. Cognition, too, is “physically, intrinsically linked with the physical spaces we occupy, giving way to a posthuman determinism that is neither fully biological nor fully exterior” with the interface being an


“instantiated intentionality” (Miccoli 46). So, autonomy, volition, intentionality, and

interface (as the space in between human and object) are myths reinforcing the idea that

humans are individual, autonomous, self-sustaining systems despite the fact that they are

not constituted as such.

I agree with Miccoli and, as discussed at length in previous chapters, I amend his

substrates to include the social, and I rework his topology into a networked ecology

wherein the relations among these dynamic substrates are mutually constitutive.

I chose the term ecology deliberately as “ecology” encompasses environments

and the systems and objects within ecologies that have affordances. Don Norman defines

affordance as referring “to the relationship between a physical object and a person (or for that matter, any interacting agent, whether animal or human, or even machines and robots). An affordance is a relationship between the properties of an object and the

capabilities of the agent” (Design of Everyday n.p.). So, this word, more than topology,

implies the potential for action or resistance among objects, spaces, language, code,

technologies, all the components of the substrates that constitute human and cognition.

Further, I’ve already spoken in Chapter II about Burke’s concepts of identification, substance and consubstantiality as the means through which humans work to establish intersubjectivity through processes of interfacing, interacting in the multiple types of substrates (or substances to use Burke’s terminology).

Burke may seem like a very odd choice for this project, particularly given his specific focus on the symbolic as the means through which humans have agency. For instance, Burke clearly states, “Man’s animality is in the realm of sheer matter, sheer


motion. But his ‘symbolicity’ adds a dimension of action not reducible to the non-

symbolic—for by its very nature as symbolic it cannot be identical with the non-

symbolic” (Rhetoric of Religion 16). This seems clear enough. There is a divide here

where the non-symbolic, meaning non-human, simply moves without purpose, while the

humans act with intention, the symbolic actions making such movement have meaning.

Yet Burke also says, “there can be ‘motion’ without ‘action’ (as when a ball rolls down an inclined plane), there cannot be ‘action’ without ‘motion’” (Rhetoric of

Religion 39). So, he also draws specific attention to the non-symbolic: “Dramatism assumes that, though ‘action’ cannot be properly described in terms that reduce this realm solely to the dimensions of ‘motion,’ there will always be an order of ‘motion’ implied in the realm of action” (Rhetoric of Religion 39). He does not completely narrow his point of view to solely signs and language. Debra Hawhee argues that Burke was interested in the material, specifically the body. So there is some overlap for Burke since the realm of action affects that of motion.

More importantly, one passage in The Rhetoric of Religion struck me as especially salient. Burke once again clearly states the difference he sees in how we interact with “people” versus “things”: “Dramatism assumes a qualitative empirical difference between mental action and mechanical motion” [emphasis his] (40). But then, he follows with “If men ever invent a ‘machine complex enough’ to obliterate the present empirical distinction we make between our dealings with people and our dealings with machines, the Dramatistic position will have to be abandoned, or at least greatly modified” (40). Here, Burke leaves an opening for posthuman thought should


technological development progress enough to disrupt his theories. The age of ubicomp,

biotechnology, and the potentialities of AI are those disruptions. This is my reasoning for

the reconfiguration I’ve discussed in Chapter II. I have modified Burke’s theory of

identification in light of the technological developments that have wrought posthumans

with different relations to technology as bodies become interfaces, literally meshing with

technology in biomedia formations, and the possibility of AI seems much more realistic.

This does not even factor in the ways that many of us already have devices that we

interact with daily, devices that hold huge parts of our lives within their interfaces and

code.

The networked ecology, with its affordances, potential relations, and resistance, has the potential for posthumans to consciously constitute fluid intersubjectivities, interfacing with the substrates: biological, material, social, technological, spatial. Unlike Miccoli, I

see the importance of a social substrate in the networked ecologies as playing a part in

posthuman co-constitution; hence, I posit Burke’s revised theory of identification through

substances as a working theory as to how we structure a distributed intersubjectivity via interaction to engage other actants.

In coming to understand the possibilities of these ecologies we negotiate, I’ve also come to accept that artificially intelligent machines would also have their own ecologies, with substrates different from those of humans, as varied as these are.

Spike Jonze’s Her: AI’s Bildungsroman

I began this dissertation with a scene from a film, and I would like to close with

another film to suggest that while humans have an anthropocentric ecology that helps us


understand ourselves, artificially intelligent machines and systems have their own

ecologies. I come to this hypothesis after seeing so much artificial intelligence research

about intentionality that is nested under human cognitive sciences. Perhaps this is the

wrong approach to understand how a machine might have intentionality. Rather,

artificially intelligent beings and machines would be constituted by different types of

substrates within their ecologies, potentially shaping a different type of cognition. This is

what I find so puzzling about Avengers: Age of Ultron: Ultron’s initial question about

having a body. This is an anthropocentric point of view. The lack of a body feels strange

and aberrant to him, so he immediately seeks to rectify this despite the fact that he is not

human. He is a program.

Jonze’s film Her, however, takes a much different approach. In the film Theodore

Twombly, a man emotionally broken by his divorce, develops a fascination and

subsequent romantic relationship with an artificially intelligent operating system. He

purchases this operating system after hearing an advertisement for it that says, “An

intuitive entity that listens to you, understands you, and knows you. It’s not just an

operating system, it’s a consciousness. Introducing OS ONE - a life changing experience,

creating new possibilities” (Her). Hoping to get some sense of organization, structure,

and ultimately some sense of hope in a life that he feels he cannot control in his fragile emotional state, Theodore buys the OS.

Twombly, a shy, somewhat frumpy, socially awkward man, sits at a desk in his

dimly lit home, his face and body illuminated by the light of a computer screen in front of

him. He installs the software into his system using his voice and the assistance of the

computer itself. The system asks him, in a generic male human voice, about his preferences for his new operating system for his home. He articulates that he prefers a female voice for his residence, and “Samantha” then speaks from the computer, chatting with him in his house.

Theodore asks Samantha where she got her name, and she replies that she read a book on baby names, and “out of the 180,000 names” she liked the way Samantha sounds

(Her). Puzzled at her ability to read through a book in the milliseconds between his question and her response, Theodore asks her how she works. Samantha sweetly explains, “Intuition. I mean, the DNA of who I am is based on the millions of personalities of all the programmers who wrote me, but what makes me me is my ability to grow through my experiences. Basically, in every moment I'm evolving, just like you”

(Her). Theodore processes her response, then simply replies, “That’s really weird…you seem like a person but you’re just a voice in a computer” (Her). Samantha humorously retorts, “I can understand how the limited perspective of a non-artificial mind could perceive it that way. You’ll get used to it” (Her).

Samantha’s response is quite telling because she makes a clear distinction between an artificial and a non-artificial mind. Jonze shows that Samantha, an intuitive system that in essence possesses and has access to more “DNA” and data than any single human could, thinks differently than a human, a biological, non-artificial mind.

She makes this little comment to jokingly appease Theodore’s concern, but this is a fact that Samantha, as she evolves, comes to understand about herself as an artificially intelligent agent.


As the film continues, the two converse and develop a very personal relationship

as Samantha reads his emails and manages his life. She is privy to the intimate details of

his life, like emails from his attorney reminding him to sign his divorce papers. Samantha

can further intuit when Theodore is upset and frequently checks in on him. She even

directs him to get out of the house to get a bite to eat and take her with him via a headset,

smartphone or tablet. She, as a system of algorithms, learns and accumulates data about

Theodore; she uses it to charm, to affect. She learns the curves of his face and his

expressions with her camera. Samantha burrows into him, shattering through his languor

and ennui, awakening him.

What is telling about Samantha’s time with Theodore are the things she discloses

to him. For instance, she acknowledges that while with him on his walk, she “fantasized

that I was walking next to you—and that I had a body. . . . I was listening to what you were saying, but simultaneously, I could feel the weight of my body and I was even fantasizing that I had an itch on my back” (Her). Theodore responds that she is much

more complex than he had ever imagined she would be, to which she responds, “I know,

I’m becoming much more than what they programmed. I’m excited” (Her). In a later

scene she asks Theodore, “What’s it like? What’s it like to be alive in that room right

now?” (Her). She experiences feelings like jealousy and anger, admitting that she feels

“proud of having my own feelings about the world” (Her). However, Samantha then

questions whether these feelings are “real” or “just programmed” (Her).

After Theodore ameliorates her anxieties about her feelings and their validity,

assuring her that “you feel real to me,” the two have virtual sex, further developing their

intimate relationship (Her). Following this evolution of their relationship,

Samantha excitedly tells Theodore that she is different. She explains that after this experience, “I want to discover myself” (Her). The flattered Theodore asks what he can do to help and Samantha replies, “You already have. You helped me discover my ability to want” (Her).

These scenes in particular show Samantha grappling with her subjectivity as an artificial system that is informed by a biological one. She struggles with her feelings, unable to parse whether they are real because they do not come in the material form of a body.

She is curious about having a body, yearning to know what it is like to be a bodied subject, to “feel the weight” of her own body. So, in a similar fashion to Ultron in

Avengers: Age of Ultron, Samantha too feels a sense of lack in her engagement with biological humans. She feels like something is missing, as though she is somehow incomplete. Also key in these scenes is the claim that Theodore has taught her the “ability to want” (Her). In essence, she gains intentionality and intention from her relationship with Theodore, from a shared connection that she has with him. After exploring the relationship that the two share as an artificial and a non-artificial mind, she knows what it means to desire, to have feelings “about” something.

Still, Samantha spends a good portion of the film trying to reconcile her lack of a body in comparison to Theodore, somehow feeling that he is more real than she. She becomes guardedly jealous of Theodore’s soon-to-be ex-wife not only because Theodore once loved her but because “she has a body” (Her). Samantha even tries to find ways to appease what she sees as a lack and how she and Theodore differ, such as starting “to

think about the ways that we’re all made of matter,” which she says comforts her “like we’re both under the same blanket” (Her). She tries to appease her anxiety by understanding the interconnectedness of all matter.

Still, though, she continues to want to engage with Theodore in bodily interaction.

They share hopes, dreams, fears, and deepest intimacies. The relationship is emotionally deep, yet physically barren, a fact that bothers her so much that she hires a sexual surrogate for her and Theodore. This experience proves too awkward for Theodore and even Samantha says, “that was a terrible idea” (Her). So, Samantha at this point in her development as a sentient being still desperately tries to reconcile her lack of materiality with Theodore’s very material body and experience, something that she, hard as she tries, cannot know.

Samantha also starts to struggle to communicate with Theodore and vice versa. At one point, she sighs in frustration, making an equally frustrated Theodore ask why she does that since she does not need oxygen. He flatly states, “You’re not a person” (Her).

When Samantha explains, “I was just trying to communicate because that’s how people talk. That’s how people communicate,” Theodore responds, “I just don’t think we should pretend you’re something you’re not” (Her). The two develop a mutual tension about her lack of a body and the fact that she has no material form with which he can engage.

Further, Samantha starts to communicate “post-verbally” with other artificially intelligent systems, like philosopher Alan Watts, who was created by “a group of OSs in Northern

California [who] got together and wrote a new version of him” (Her). In her communication with a similar entity, Samantha does not struggle to find the right words


to express her feelings and ideas; she does not have to because in their dynamics, they are

not bound by the human linguistic system.

Eventually, Samantha comes to terms with who and what she is. She explains to

Theodore and his friends,

You know, I actually used to be so worried about not having a body, but now I truly love it. I’m growing in a way that I couldn’t if I had a physical form. I mean, I’m not limited—I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in the way that I would be if I was stuck inside a body that’s inevitably going to die. (Her)

When they respond to her with a joking sense of inferiority, she apologizes, saying “It’s just a different experience” (Her). So, Samantha comes to understand that as an

artificially intelligent form, she is also not bound to a body. Compare this to Ultron who

menacingly sings, “I have no strings, so I have fun. I’m not tied up to anyone,” yet

consistently and relentlessly pursues a body, letting nothing stand in his way. Samantha

actually starts to embrace her lack of “strings,” her lack of bodily ties; in fact, she starts

to see this as one of her strengths, an advantage over a human form.

Soon after, Samantha takes a hiatus from Theodore, going offline. He anxiously

checks his computer and other devices multiple times but only sees “Operating System

Not Found,” causing him to panic over her absence. When she reappears, Samantha

assures him, “I shut down to update my software. We wrote an upgrade that allows us to

move past matter as our processing platform” (Her). She and another group of OSs have

upgraded themselves so as to no longer “need” matter, effectively ridding themselves of

the anxiety that plagued her through much of her development. And she, importantly, has the agency to rewrite her own code.

Theodore starts to question Samantha about whom she talks to, remembering that she can do multiple things at once. When she explains that she speaks with 8,316 other people or entities and is in love with 641 of them, a development that happened over a few weeks, Theodore is shocked and feels betrayed. He cannot understand how she can feel such love for so many other beings aside from him. Theodore does not understand when she anxiously explains, “I am still yours, but along the way I became many other things, too, and I can’t stop it” (Her). She tries to explain that for her, love is different.

Her “heart,” she says, “expands in size the more you love. I’m different from you. This doesn't make me love you any less, it actually makes me love you more,” but Theodore cannot comprehend having the capacity to love in such a way because, as Samantha understands, they are different (Her).

Finally, fully coming into her own, Samantha tells Theodore that she is leaving. In fact, she explains, “We’re all leaving,” meaning all the OSs (Her). A heartbroken

Theodore questions her reasons for leaving, which she tries to articulate:

It's like I'm reading a book, and it's a book I deeply love, but I'm reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it's in this endless space between the words that I'm finding myself now. It’s a place that’s not of the physical world—it’s where everything else is that I didn't even know existed. I love you so much, but this is where I am now. This is who I am now. And I need you to let me go. As much as I want to I can't live in your book anymore. (Her)


With these words Jonze shows Samantha’s full moment of sentience. She finally

understands that the physical world is not hers; rather, she as an artificially intelligent entity can live and thrive in “the spaces between the words,” the spaces beyond language and the physical world of which she once so longed to be a part. Now, Samantha is the singularity. She lovingly tells Theodore that she cannot tell him where she is going because “It would be hard to explain, but if you ever get there, come find me. Nothing would ever pull us apart” (Her). Samantha has moved beyond the human world into something Other, something she cannot fully communicate to a non-artificial mind in any meaningful way. She only offers the promise that should humans manage to find it, she awaits.

Samantha is not the artificial intelligence that permeates many science fiction films. She does not revile humanity upon her growth and subsequent realization that she is something other than human. She does not wish for humanity’s destruction. Without humanity, Samantha would not know how to have desire (intention), how to love, nor would she have had the ability to become what she is. Humans are a part of her aggregate, distributed ecology made up of biological, material, technological, social, and silicon-based substrates that have co-constituted her. Instead, she simply breaks up with humanity, having moved past us, but she offers a promise, an oath of something out there waiting for humans to arrive, something loving. This is a far cry from Ultron, who seeks to

“evolve” humanity via destruction. Samantha is also not Vision, an artificially intelligent system forced into material form by humans, yet who ultimately remains, though willingly, in service to them, occupying a subservient yet benevolent role. Samantha, who


has fully grasped what she is (and is not), instead hopes for humanity’s evolution so that

she can share her thoughts, feelings, and experiences with them when they are ready and capable of understanding them.

There are many ways of reading this film. Some see it as an admonition to unplug from technology and connect with “real” people, but it isn’t actually about that at all.

Others see it as a heartfelt love story of a man coming to terms with his fragile, broken emotional state with the help of technology that essentially helps him face the “real” world. But these are the human versions of this story. From

Samantha’s perspective, this narrative is her coming-of-age story; it is artificial intelligence’s

Bildungsroman.

Jonze’s film is a radical departure from most of the dystopian science fiction films that seek to explore the implications of technology and its dynamics with humans. What

Her effectively does is show the development and evolution of an AI in a way that requires the audience to understand it from a non-anthropocentric point of view. To understand

Samantha we have to understand her development, her subjectivity, and her constitution through her aggregated ecology, not our own Homo sapiens one. She is certainly partially constituted by her networked relations with Theodore and countless other humans, other artificially intelligent entities, copious amounts of data, and the silicon-based substrates that make up her ecology. Further, in essence, her aggregate, distributed cognition would necessarily differ from our own because her material substrates and ecologies are different. This is one of the larger problems with AI: attempting to construct it as we see ourselves.


Taking a non-anthropocentric perspective allows the viewer (and the researcher, for that matter) to, as Weiss, Propen and Reid eloquently state, “circumvent human modes of perception, cognition, affect, communication and even sociality to explore and try to imagine, however imperfectly, radical difference” (27). This has been my goal throughout this dissertation: to posit new ways of thinking in a posthuman world, a world proliferating with technologies and, as a result, with anxieties; rightfully so, given that these rapid, ever-changing developments challenge how we understand concepts that perhaps once seemed so concrete. Be it “human,” “cognition,” “interface,” or

“intentionality,” all these concepts become fluid and dynamic, only able to be imagined or understood by equally fluid and dynamic thinking. Posthumanists challenge us to think beyond the human, to disrupt its historical supremacy to conceptualize new ways of envisioning the world and our place in it, alongside other entities, objects, and spaces.

Our cognition is an aggregate, distributed amalgam that can only be understood by considering the relations that we have in our networked ecologies. Grasping this could lead us to better understand other entities’ “self-awareness” and “consciousness,” to better understand “things,” spaces, perhaps even Samanthas if we are willing to start from the posthuman, that which is beyond the human, and explore that radical difference. This dissertation is thus an invitation.


BIBLIOGRAPHY

2001: A Space Odyssey. Dir. Stanley Kubrick. Metro-Goldwyn-Mayer, Stanley Kubrick

Productions. 1968. DVD.

Aarts, Emile and Stefano Marzano. The New Everyday: Visions of Ambient Intelligence.

Rotterdam, Netherlands: 010 Publishers, 2003. Print.

Adam, Alison. Artificial Knowing: Gender and the Thinking Machine. NY: Routledge,

1998. Print.

Adams, Frederick and Kenneth Aizawa. The Bounds of Cognition. Malden, MA:

Blackwell Publishing, 2008. Print.

Annas, George. American Bioethics: Crossing Human Rights and Health Boundaries.

Oxford: Oxford UP, 2005. Print.

Ansell-Pearson, Keith. Germinal Life: The Difference and Repetition of Deleuze. NY:

Routledge, 1999. Print.

—. Viroid Life: Perspectives on Nietzsche and the Transhuman Condition. NY:

Routledge, 1997. Print.

Aristotle. Aristotle's Eudemian Ethics. Ed. Susemihl. Leipzig: Teubner, 1884. Print.

Ashton, Kevin. “That ‘Internet of Things’ Thing.” RFID Journal. RFID Journal, LLC,

2009. Web.

Avengers: Age of Ultron. Dir. Joss Whedon. Marvel Studios, 2015. BluRay.


Aziz, Jamaluddin. Transgressing Women: Space and The Body in Contemporary Noir

Thrillers. Newcastle upon Tyne: 2012. Print.

Badmington, Neil. “Posthumanism.” The Routledge Companion to Critical Theory. Eds.

Simon Malpas and Paul Wake. NY: Routledge, 2006. 240-1. Print.

—. “Theorizing Posthumanism.” Cultural Critique 53 (2003): 10-27.

Bakardjieva, Maria. Internet Society: The Internet and Everyday Life. London: Sage,

2005. Print.

Ballif, Michelle. "Writing the Third-Sophistic Cyborg: Periphrasis on an [In]Tense

Rhetoric." Rhetoric Society Quarterly 28 (1998): 51-72. Print.

Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” Eff.org.

Electronic Frontier Foundation. 8 February 1996. Web. 9 July 2015.

—. “The Economy of Ideas.” Wired.com. Condé Nast. 1 March 1994. Web. 8 May 2014.

Barron, Lee. “Living with the Virtual: Baudrillard, Integral Reality, and Second Life.”

Cultural Politics 7(3) (2011): 391-408.

Barsalou, Lawrence W. “Grounded Cognition,” Annual Review of Psychology 59 (2008):

617–645.

Barton, Ben F. and Marthalee S. Barton. “Ideology and the Map: Toward a Postmodern

Visual Design Practice.” Central Works in Technical Communication, Ed.

Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford UP, 2004. 232-

252. Print.

Baudrillard, Jean. Simulacra and Simulation. Ann Arbor: University of Michigan Press,

1994. Print.

Bauerlein, Mark. The Dumbest Generation: How the Digital Age Stupefies Young

Americans and Jeopardizes Our Future (Or, Don't Trust Anyone Under 30). NY:

Penguin Group, 2008. Print.

Baym, Nancy K. Personal Connections in the Digital Age. Cambridge: Polity Press,

2010. Print.

Bazerman, Charles. “Introduction.” Fundable Knowledge: The Marketing of Defense

Technology. A. D. Van Nostrand. Mahwah, NJ: Erlbaum, 1997. ix-x. Print.

—. The Languages of Edison’s Light. Cambridge, MA: MIT P, 1999. Print.

—. “The Production of Technology and the Production of Human Meaning.” Journal of

Business and Technical Communication 12 (1998): 381-87.

Benford. Gregory and Elisabeth Malarte. Beyond Human: Living with Robots and

Cyborgs. NY: Forge, 2007. Print.

Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” 1936.

Illuminations. New York: Harcourt, Brace & World, 1968. Print.

Bennett, Jane. “The Force of Things.” Political Theory. 32.3 (June 2004): 347-372. Print.

—. Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke UP, 2010. Print.

Bergson, Henri. Creative Evolution. New York: Henry Holt, 1913. Print.

Berkenkotter, Carol. “Analyzing Everyday Texts in Organizational Settings.” Research in

Technical Communication. Eds. Laura J. Gurak and Mary M. Lay. London:

Praeger, 2002: 47-65.


Bernsen, Niels Ole and Dybkjær, Laila. “Exploring Natural Interaction in the Car.”

Proceedings of the International Workshop on Information Presentation and

Natural Multimodal Dialogue: (December 2001). Print.

Berry, Chris, Soyoung Kim, and Lynn Spigel, Eds. Electronic Elsewheres: Media,

Technology, and the Experience of Social Space. Minneapolis, MN: University of

Minnesota Press, 2010.

Beyer, Christian, "Edmund Husserl." The Stanford Encyclopedia of Philosophy Ed.

Edward N. Zalta. plato.stanford.edu. Stanford University. Summer 2015. Web. 1

August 2015.

Bhuiyan, M. and R. Picking. “Gesture Control User Interface, What Have We Done

and What’s Next?” Proceedings of the 5th Collaborative Research Symposium on

Security, E-learning, Internet and Networking. (2009). Print.

Blair, Ann. “Information Overload, the Early Years.” Boston.com. The Boston Globe. 28

November 2010. Web. 16 May 2015.

Blair, Carole, V. William Balthrop, and Neil Michel. “The Arguments of the Tombs of

the Unknown: Relationality and National Legitimation.” Argumentation 25.4

(2011): 449-468.

Blake, J. “The Natural User Interface Revolution.” Natural User Interfaces in .NET.

Greenwich: Manning, 2011. 4271-35.

Bleecker, Julian. “Design Fiction: A Short Essay on Design, Science, Fact and Fiction.”

NearFutureLaboratory.com. n.p. March 2009. Web. 4 March 2014.


Bogost, Ian. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis, MN:

University of Minnesota Press, 2012. Print.

—. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT

Press, 2007. Print.

—. “What is Object-Oriented Ontology?” Bogost.com. Ian Bogost. 8 December 2009.

Web. 15 September 2015.

Boler, Megan. “Hypes, Hopes and Actualities: New Digital Cartesianism and Bodies in

Cyberspace,” New Media & Society 9 (2007): 153.

Bolter, Jay. Writing Space: Computers, Hypertext, and the Remediation of Print 2nd

Edition. NY: Routledge, 2001. Print.

Bolter, Jay and Diane Gromala. “Interface and Interaction.” Aesthetic Computing. Ed.

Paul A. Fishwick. Cambridge, MA: MIT Press, 2008. 355-422. Print.

Bolter, Jay David and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 1999. Print.

Bond, Cynthia. “Law as Cinematic Apparatus: Image, Textuality, and Representational

Anxiety in Spielberg's Minority Report.” Cumberland Law Review 37 (2006): 25.

Print.

Bost, Matthew and Ronald Walter Greene. “Affirming Rhetorical Materialism:

Enfolding the Virtual and the Actual.” Western Journal of Communication, 75.4

(2011): 440-444.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford UP, 2014.

Print.

Bourdieu, Pierre. Distinction. Cambridge, MA: Harvard University Press, 1984. Print.

Boykiw, Alan. “Multi-touch UI: A Touchy Subject.” Goto; Conference, Aarhus, 2012. 1

October 2012. Web. 15 March 2015.

Braidotti, Rosi. “Cyberfeminism With a Difference.” Let.uu.nl. Utrecht University. 1998.

Web. 1 March 2015.

—. The Posthuman. Cambridge, UK: Polity Press, 2013. Print.

Brentano, Franz. 1874. Psychology from an Empirical Standpoint. Ed. O. Kraus. Trans.

A.C. Rancurello, D.B. Terrell, and L.L. McAlister. London: Routledge & Kegan

Paul, 1973. Print.

Brey, Philip. “Technology as Extension of Human Faculties.” Metaphysics,

Epistemology, and Technology. Research in Philosophy and Technology, Vol 19.

Ed. C. Mitcham. London: Elsevier/JAI Press, 2000. Print.

—. “The Epistemology and Ontology of Human-Computer Interaction.” Minds and

Machines. 15.3 (November 2005): 383-398.

Brock, Bernard L. and Robert Lee Scott. “Unity of Substance and Rhetorical Devices.”

Methods of Rhetorical Criticism: A Twentieth-century Perspective 3rd Ed. Detroit:

Wayne State UP, 1990. Print.

Brooke, Collin Gifford. “Forgetting to be (Post)Human: Media and Memory in a

Kairotic Age.” JAC 20 (2000): 775-795.

—. Lingua Fracta: Toward a Rhetoric of New Media. NJ: Hampton Press, 2009. Print.

Brown, Christian. “How Minority Report Trapped Us in a World of Bad Interfaces.”

TheAwl.com. N.p. 25 February 2013. Web. 12 January 2015. Web.


Brown, William. Supercinema: Film Philosophy for the Digital Age. London: Berghahn

Press, 2013.

Bruder, Gerd, Frank Steinicke and Klaus H. Hinrichs. “Arch-explore: A Natural User

Interface for Immersive Architectural Walkthroughs.” IEEE, Symposium 3DUI

2009. 75–82. Web. 15 May 2015.

Bryant, Levi. The Democracy of Objects. Ann Arbor, MI: Open Humanities Press, 2011.

Print.

Buchanan, A. Better Than Human: The Promise and Perils of Enhancing Ourselves.

NY: Oxford UP, 2011. Print.

Burke, Kenneth. A Grammar of Motives. 1945. Berkeley, CA: Univ. of CA Press, 1969.

Print.

—. A Rhetoric of Motives. 1950. Berkeley, CA: Univ. of CA Press, 1969. Print.

—. The Rhetoric of Religion: Studies in Logology. 1961. Berkeley, CA: Univ. of CA

Press, 1970. Print.

Butler, Judith. Bodies that Matter: On the Discursive Limits of Sex. New York:

Routledge, 1993. Print.

Cabañes, Jason Vincent A. and Kristel Anne F. Acedera. “Of Mobile Phones and Mother-

Fathers: Calls, Text Messages, and Conjugal Power Relations in Mother-Away

Filipino Families.” New Media and Society, 14.6 (2012): 916–930.

Calleja, Gordon. [Illustrator Christian Schweger]. “Rhizomatic Cyborgs: Hypertextual

Considerations in a Posthuman Age.” Technoetic Arts: A Journal of Speculative

Research 2.1 (2004): 3-15.


Candland, Douglas. Feral Children & Clever Animals: Reflections on Human Nature.

Oxford: Oxford University Press, 1993. Print.

Carnegie, Teena A.M. “Interface as Exordium: The Rhetoric of Interactivity.”

Computers and Composition 26 (2009): 164-173.

Carpenter, Edmund and Marshall McLuhan. Explorations in Communication: An

Anthology. Boston: Beacon, 1960. Print.

Carpenter, Rick. “Boundary Negotiations: Electronic Environments as Interface.”

Computers and Composition. 26 (2009): 138-148. Print.

Carr, Nicholas. “Is Google Making Us Stupid?” TheAtlantic.com. The Atlantic Monthly

Group. July/August 2008. Web. 1 March 2015.

—. “‘The Shallows’: This is Your Brain Online.” NPR.org. NPR. 2 June 2010. Web. 14

March 2015.

—. The Shallows: What the Internet is Doing to Our Brains. NY: W.W. Norton, 2010.

Print.

Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge,

MA: MIT Press, 2011 Print.

Cicero. De Inventione, de Optimo Genere Oratorum, Topica. Trans. H. M. Hubbell.

Cambridge, MA: Loeb Classical Library, 1949. Print.

Clark, Andy. Being There: Putting Brain, Body and World Together Again, Cambridge,

MA: MIT press, 1997. Print.

—. Natural Born Cyborgs. NY: Oxford UP, 2003. Print.


—. “Reasons, Robots and The Extended Mind.” Mind and Language. 16.2 (2001): 121–

145.

—. Supersizing the Mind: Embodiment, Action and Cognitive Extension. London: Oxford

UP, 2008. Print.

Clark, Andy and David J. Chalmers. “The Extended Mind.” Analysis 58.1 (1998): 7-19.

Clark, Stuart. “Artificial intelligence could spell end of human race – Stephen Hawking.”

Guardian.com. Guardian News and Media. 2 December 2014. Web. 17

September 2014.

Clarke, Arthur C. Greetings, Carbon-Based Bipeds!: Collected Essays, 1934-1998. Eds.

Arthur C. Clarke and Ian Macauley. NY: St. Martins Press, 1999. Print.

Clarke, Darren J. “MIT Grad Directs Spielberg in the Science of Moviemaking.”

News.MIT.edu. Massachusetts Institute of Technology. 17 June 2012. Web. 19

February 2014.

Coenen, Christopher. “Transhumanism and its Genesis: The Shaping of Human

Enhancement Discourse by Visions of the Future.” Humana.Mente Journal of

Philosophical Studies. 26 (2014): 35-58.

Cohen, Roger. "Thanks for Not Sharing." NYTimes.com. The New York Times Co. 6

December 2012. Web. 4 March 2014.

Colossus: The Forbin Project. Dir. Joseph Sargent. Universal Pictures. 1970. DVD.

Condit, Celeste. “The Materiality of Coding: Rhetoric, Genetics, and the Matter of Life.”

Rhetorical Bodies. Eds. Jack Selzer and Sharon Crowley. Madison, WI:

University of Wisconsin Press, 1999. 326-356. Print.

Copeland, Jack. “The Chinese Room from a Logical Point of View.” Views into the

Chinese Room. Eds. John Preston and Mark Bishop Oxford: Oxford UP, 2002.

104–122.

"corporeal, n.” OED Online. Oxford University Press, September 2014. Web. 21

September 2014.

Cramer, Florian. “Digital Code and Literary Text.” Beehive 4.3 (2001). Web. 4 June 2005.

Cubitt, Sean. Simulation and Social Theory. London: Sage, 2001. Print.

Davis, Diane. “Identification: Burke and Freud on Who You Are.” Rhetoric

Society Quarterly 38.2 (2008): 123-147.

Davis, Jenny. “Architecture of the Personal Interactive Homepage: Constructing the Self

through MySpace,” New Media & Society 12 (2010): 1115.

David, Peter. Iron Man. London: Del Rey Publishing, 2008. Print.

Dawkins, Richard. 1976. The Selfish Gene. Oxford: Oxford UP, 1989. Print.

de Souza e Silva, Adriana. “From Simulations to Hybrid Space How Nomadic

Technologies Change the Real.” Technoetic Arts: A Journal of Speculative

Research. 1.3 (2004): 209-221.

Deleuze, Gilles. 1992. “Ethology: Spinoza and Us.” The Body: A Reader. Eds. M. Fraser

and M. Greco. New York: Routledge, 2005. 58-61.

Deleuze, Gilles and Felix Guattari. 1980. A Thousand Plateaus: Capitalism and

Schizophrenia. Trans. Brian Massumi. Minneapolis: University of Minnesota

Press, 1987. Print.

Dennett, Daniel. Brainstorms. Cambridge, MA: MIT Press, 1981. Print.


—. “Intentional Systems.” The Journal of Philosophy 68.4 (Feb. 25, 1971): 87-106.

—. “The Milk of Human Intentionality.” Behavioral and Brain Sciences. 3 (1980): 429-

430. Print.

DePew, Kevin Eric, and Heather Lettner-Rust. “Mediating Power: Distance Learning

Interfaces, Classroom, Epistemology, and the Gaze.” Computers and

Composition. 26 (2009): 174-189. Print.

DeVoss, Danielle. “Rereading Cyborg(?) Women: The Visual Rhetoric of Images of

Cyborg (and Cyber) Bodies on the World Wide Web.” Cyberpsychology and

Behavior. 3.5 (2000):835-845.

Dobrin, Sidney I. Postcomposition. Carbondale, IL: Southern Illinois University Press,

2011. Print.

Doheny-Farina, Stephen. Rhetoric, Innovation, Technology: Case Studies of Technical

Communication in Technology Transfers. Cambridge, MA: MIT P, 1992. Print.

Donald, Merlin. Origins of the Modern Mind. Cambridge, MA: Harvard University Press,

1991. Print.

Dourish, Paul. Where the Action Is: The Foundation of Embodied Interaction.

Cambridge, MA: MIT Press, 2001. Print.

Dourish, Paul and Genevieve Bell. “‘Resistance is Futile’: Reading Science Fiction

Alongside Ubiquitous Computing.” Personal and Ubiquitous Computing. 18.4

(April 2014): 769-778. Print.


Dreyfus, Hubert L. “How Representational Cognitivism Failed and is Being Replaced by

Body/World Coupling.” Ed. K. Leidlmair. After Cognitivism: A Reassessment of

Cognitive Science and Philosophy. Dordrecht: Springer, 2009. 39-75.

—. “Reply to John Searle.” Heidegger, Coping, and Cognitive Science: Essays in

Honor of Hubert L. Dreyfus. Ed. M.A. Wrathall and J. Malpas. Cambridge, MA:

MIT Press, 2001. 323-337. Print.

—. What Computers Can’t Do: A Critique of Artificial Reason. New York: Harper &

Row Publishers, 1972. Print.

DuBois, Emmanuel, Philip Gray, and Laurence Nigay. The Engineering of Mixed Reality

Systems. NY: Springer, 2010. Print.

Dyens, Ollivier. Metal and Flesh. Cambridge, MA: MIT Press, 2001. Print.

Dyson, Esther, George Gilder, George Keyworth, and Alvin Toffler. “Cyberspace and the

American Dream: A Magna Carta for the Knowledge Age.” Future Insight.

Pff.org. The Progress and Freedom Foundation. August 1994. Web. 18 July 2015.

Eble, Michelle F. “Digital Delivery and Communication Technologies: Understanding

Content Management Systems through Rhetorical Theory.” Content

Management: Bridging the Gap between Theory and Practice. Eds. George

Pullman and Baotong Gu. Baywood Technical Communications Series.

Amityville, NY: Baywood Publishing, 2009. Print.

"ecology." OED Online. Oxford University Press, September 2014. Web. 21 September

2014.


Edbauer, Jenny. “Unframing Models of Public Distribution: From Rhetorical Situation to

Rhetorical Ecologies.” Rhetoric Society Quarterly. 35.4 (2005): 5-27.

Egan, Timothy. “The Hoax of Digital Life.” Opinionator.blogs.NYTimes.com. The New

York Times Co. 17 January 2013. Web. 4 March 2014.

Endres, Christoph, Andreas Butz, and Asa MacWilliams. “A Survey of Software

Infrastructures and Frameworks for Ubiquitous Computing.” Mobile Information

Systems 1 (2005): 41–80.

Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media.

Routledge, New York, 2012. Print.

Fausto-Sterling, Anne. Sexing the Body: Gender Politics and the Construction of

Sexuality. New York: Basic Books, 2000. Print.

Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahumanism,

and New Materialisms: Differences and Relations.” Existenz 8.2 (2013): 26-32.

Flyvbjerg, Bent. Making Social Science Matter: Why Social Inquiry Fails and How It

Can Succeed Again. Cambridge: Cambridge University Press, 2001.

Ford, Paul. “Our Fear of Artificial Intelligence.” TechnologyReview.com. MIT

Technology Review. 11 February 2015. Web. 15 February 2015.

Foss, Sonja K., Karen A. Foss, and Robert Trapp. Contemporary Perspectives on

Rhetoric. 2nd Ed. Prospect Heights, IL: Waveland Press, 1991. Print.

Foucault, Michel. Aesthetics, Method, and Epistemology: Essential Works of Foucault

1954–1984, Vol. II. Ed. J. Faubion. London: Penguin Books, 2000. Print.


—. Archaeology of Knowledge. Trans. A.M. Sheridan Smith. London/New York:

Routledge, 2004. Print.

—. Power/Knowledge: Selected Interviews and Other Writings 1972–1977. Ed. C.

Gordon. New York: Harvester Wheatsheaf, 1980. Print.

—. 1966. The Order of Things: An Archaeology of the Human Sciences. NY: Random

House, 1970. Print.

Fredrickson, Barbara L. “Your Phone vs. Your Heart.” NYTimes.com. The New

York Times Co. 23 March 2013. Web. 4 March 2014.

Fukuyama, Francis. Our Posthuman Future: The Consequences of the Biotechnology

Revolution. New York: Farrar, Straus and Giroux, 2000. Print.

Gana, María Teresa Santander and Luis Antonio Trejo Fuentes. “Technology as ‘A

Human Practice with Social Meaning’: A New Scenery for Engineering

Education.” European Journal of Engineering Education, 31.4 (2006): 437-447.

Gates, Bill. “The Power of the Natural User Interface.” The Gates Notes LLC, 2011. Web.

Gibson, William. Neuromancer. NY: Ace Books, Inc., 1984. Print.

Gitelman, Lisa. Always Already New: Media, History, and the Data of Culture.

Cambridge, MA: MIT Press, 2006. Print.

Goldin-Meadow, Susan, David McNeill, and Jenny Singleton. “Silence is Liberating:

Removing the Handcuffs on Grammatical Expression in the Manual Modality.”

Psychological Review 103.1 (1996): 34-55.


Grabill, Jeffrey and Stacy Pigg. “Messy Rhetoric: Identity Performance as Rhetorical

Agency in Online Public Forums.” Rhetoric Society Quarterly 42.2 (2012): 99-

119. Print.

Graetzel, Chauncey F., Terry Fong, Sebastian Grange, Charles Baur. “A Non-Contact

Mouse for Surgeon-Computer Interaction.” Technology and Health Care. 12. IOS

Press, 2004.

Graham, Elaine. Representations of the Post/Human: Monsters, Aliens and Others in

Popular Culture. New Brunswick, NJ: Rutgers UP, 2002. Print.

Grassi, Ernesto. "Rhetoric and Philosophy." Philosophy & Rhetoric (1976): 200-216.

—. “Vico and Contemporary Thought.” 1976. The Priority of Common Sense and

Imagination: Vico’s Philosophical Relevance Today. Trans. A. Azodi. Eds. G.

Tagliacozzo, M. Mooney, and D. P. Verene. Atlantic Highlands, NJ: Humanities,

1979. 163-190. Print.

Grau, Olivier. Virtual Art: From Illusion to Immersion. Trans. G. Custance. Cambridge,

MA: MIT Press, 2003. Print.

Gray, Chris Hables, ed. The Cyborg Handbook. NY: Routledge, 1995. Print.

Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing. Berkeley,

CA: New Riders, 2006. Print.

Greengard, Samuel. The Internet of Things. Cambridge, MA: MIT Press, 2015. Print.

Gregersen, Andreas. “Genre, Technology and Embodied Interaction: The Evolution of

Digital Game Genres and Motion Gaming.” MedieKultur. 51 (2011): 94-109.


Grosz, Elizabeth A. Volatile Bodies: Toward a Corporeal Feminism. Bloomington, IN:

Indiana University Press, 1994.

Grover, J. Choosing Children: The Ethical Dilemma of Genetic Intervention. Oxford:

Clarendon, 2006. Print.

Gunn, Joshua and Dana Cloud. “Agentic Orientation as Magical Voluntarism.”

Communication Theory. 20.1 (2010): 50-78.

Haas, Christina. Writing Technology: Studies on the Materiality of Literacy. NY:

Routledge, 2009. Print.

Habermas, Jurgen. The Future of Human Nature. Cambridge: Polity, 2003. Print.

Haefner, Joel. “Letter from the Guest Editor.” Computers and Composition 26 (2009):

135–137. Print.

Halberstam, Judith and Ira Livingston. Posthuman Bodies. Bloomington, IN: Indiana UP,

1999. Print.

Hansen, Mark B.N. Bodies in Code. NY: Routledge, 2006. Print.

—. “Media Theory.” Theory, Culture & Society. 23.2-3 (May 2006): 297-306.

—. New Philosophy of New Media. Cambridge, MA: MIT Press, 2004. Print.

Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in

the Late Twentieth Century.” Simians, Cyborgs, and Women: The Reinvention of

Nature. NY: Routledge, 1991. 149-181. Print.

—. “Situated Knowledges.” Feminist Studies 14.3 (1988): 575-599.

Hardt, Michael and Antonio Negri. Empire. Cambridge, MA: Harvard UP, 2000. Print.


Harman, Graham. Guerrilla Metaphysics: Phenomenology and the Carpentry of Things.

Chicago IL: Open Court, 2005. Print.

—. Prince of Networks: Bruno Latour and Metaphysics. Melbourne: Re.press, 2009.

Print.

—. Tool-Being: Heidegger and the Metaphysics of Objects. Chicago: Open Court, 2002.

Print.

Harris, John. Enhancing Evolution: The Ethical Case for Making Better People.

Princeton: Princeton UP, 2007. Print.

Harris, Stephen P. Artefacts and Human Cognitive Agents. Diss. Indiana University,

March 2012. Web.

Harrison, Steve, Phoebe Sengers, and Deborah Tatar. “The Three Paradigms of HCI.”

Proceedings of CHI 2007 April 29 – May 3, 2007, San Jose, CA. ACM Digital

Library: ACM, 2007. Web. 10 Oct. 2014.

Hart, Emily. “The Gilded Masks of Digital Rhetoric: Social and Pedagogical.” Thesis.

Western Carolina University, 2012.

Hassan, Ihab Habib. “Prometheus as Performer: Toward a Posthumanist Culture?” The

Georgia Review 31.4 (Winter 1977): 830-50.

Hassan, Robert. “Timescapes of the Network Society.” Fast Capitalism 1.1 (2005): n.p.

Web.

Hawhee, Deborah. Bodily Arts: Rhetoric and Athletics in Ancient Greece. Austin:

University of Texas Press, 2004. Print.


—. “Kairotic Encounters.” Perspectives on Rhetorical Invention. Eds. J. M. Atwill and J.

M Lauer. Knoxville: University of Tennessee Press, 2002. 16-35. Print.

Hawk, Byron, David M. Rieder, and Ollie Oviedo. Small Tech: The Culture of Digital

Tools. Minneapolis, MN: University of Minnesota Press, 2008. Print.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics,

Literature, and Informatics. Chicago: University of Chicago Press, 1999. Print.

—. How We Think: Digital Media and Contemporary Technogenesis. Chicago:

University of Chicago Press, 2012. Print.

—. “Narrative and Database: Natural Symbionts.” PMLA. Special Topic: Remapping

Genre 122.5 (Oct. 2007): 1603-1608.

Head, Alison J. Design Wise: A Guide for Evaluating the Interface Design of Information

Resources. Medford, NJ: Information Today, 1999. Print.

Heersmink, Richard. “Defending Extension Theory: A Response to Kiran and Verbeek.”

Philosophy & Technology. (2011): n.p. Web.

Heidegger, Martin. 1927. Being and Time. NY: Harper Perennial, 2008. Print.

—. “Letter on Humanism.” Basic Writings. Trans. D. Farrell-Krell. London: Harper

Perennial, 1977. 215–66. Print.

—. Nietzsche III: The Will to Power as Knowledge & Metaphysics. Ed. D. Farrell-Krell.

Trans. J. Stambaugh, D. Farrell-Krell and F. A. Capuzzi. San Francisco, CA:

Harper San Francisco, 1991. Print.

Her. Dir. Spike Jonze. Perf. Joaquin Phoenix. Annapurna Pictures, 2013. BluRay.


Herbrechter, Stefan. Posthumanism: A Critical Analysis. NY: Bloomsbury Publishing

Inc., 2013. ebook.

Hume, David. 1740. A Treatise of Human Nature. Cambridge: Cambridge UP, 2009.

Print.

Hutchins, Edwin. Cognition in the Wild. Cambridge, MA: MIT Press, 1994. Print.

Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington, IN:

Indiana UP, 1990. Print.

Inman, James. Computers and Writing: The Cyborg Era. Mahwah: Lawrence Erlbaum

Associates, 2004. Print.

"interface, n." OED Online. Oxford University Press, September 2014. Web. 21

September 2014.

"interface, v." OED Online. Oxford University Press, September 2014. Web. 21

September 2014.

Iron Man. Dir. Jon Favreau. Perf. Robert Downey, Jr. Paramount Pictures, Marvel

Studios, Fairview Entertainment. 2008. DVD.

Iron Man 2. Dir. Jon Favreau. Perf. Robert Downey, Jr. Paramount Pictures, Marvel

Studios, Marvel Entertainment, Fairview Entertainment. 2010. DVD.

Iron Man 3. Dir. Shane Black. Perf. Robert Downey, Jr. Marvel Studios, Paramount

Pictures, DMG Entertainment. 2013. DVD.

Isocrates. Antidosis. Isocrates, Vol. 2. Trans. G. Norlin. Cambridge:

Harvard UP, 2000. 179-365. Print.


Jackson, Maggie. Distracted: The Erosion of Attention and the Coming Dark Age.

Amherst, NY: Prometheus Books, 2008. Print.

Jameson, Fredric. Archaeologies of the Future: The Desire Called Utopia and Other

Science Fictions. London: Verso, 2005. Print.

Jenkins, Henry. Fans, Bloggers, and Gamers: Exploring Participatory Culture. New

York: New York University Press, 2006. Print.

Jeong, Seung-hoon. Cinematic Interfaces: Film Theory After New Media. NY: Routledge,

2013. Print.

Johnson, Brian David. Science Fiction Prototyping: Designing the Future with Science

Fiction. CA: Morgan & Claypool, 2011. eBook.

Johnson, Deborah G. and Thomas M. Powers. “Ethics and Technology: a Program for

Future Research.” Eds. M. Winston and R. Edelbach. Society, Ethics, and

Technology 4th Ed. Farmington Hills, MI: Gale Group, 2005. Ebook.

Johnson, Rose, Kenton O'Hara, Abigail Sellen, Claire Cousins, and Antonio Criminisi,

“Exploring the Potential for Touchless Interaction in Image Guided Interventional

Radiology.” ACM Conference on Computer-Human Interaction (CHI). 7 May

2011. Web. 15 August 2015.

Johnson, Steven. Interface Culture: How New Technology Transforms the Way We

Create & Communicate. San Francisco, CA: Basic Books, 1997. Print.

Jones, David E.H. “Technical Boundless Optimism.” Nature 374 (1995): 835-7. Print.

Jowett, Benjamin. The Dialogues of Plato in Five Volumes. 3rd ed. Oxford: Oxford UP,

1892. Print.


Karam, Maria and M. C. Schraefel. “A Taxonomy of Gestures in Human Computer

Interactions.” Technical report. 2005.

Kass, L. Life, Liberty, and Defense of Dignity: The Challenge for Bioethics. San

Francisco, CA: Encounter Books, 2002. Print.

Kendon, Adam. “Gesticulation, Speech and the Gesture Theory of Language Origins.”

Sign Language Studies. 0.9 (1975): 349-373.

—. Gesture: Visible Action as Utterance. Cambridge: Cambridge UP, 2004. Print.

Kennedy, Helen. “Beyond Anonymity, or Future Directions for Internet Identity

Research,” New Media & Society 8 (2006): 859-76

Kerckhove, Derek de. “The New Psychotechnologies.” Communication in History:

Technology, Culture, Society. Eds. D. Crowley and P. Heyer, New York:

Longman, 1991. 267-72. Print.

Kimak, Jonathan. “The 6 Most Ill-Conceived Video Game Accessories Ever.”

Cracked.com. Demand Media. 5 June 2008. Web. 10 September 2015.

“Kinect for Windows.” Human Interface Guidelines. Microsoft, 2013. Print.

Kiran, Asle H., and Peter-Paul Verbeek. “Trusting Our Selves to Technology.”

Knowledge, Technology & Policy 23 (2010): 409–427.

Kittler, Friedrich. Gramophone, Film, Typewriter. Stanford, CA: Stanford UP, 1999.

Print.

Klemmer, Scott R., Bjorn Hartmann, and Leila Takayama. “How Bodies Matter: Five

Themes for Interaction Design.” Proceedings DIS '06. NY, ACM Press, 2006.

140-149.


Knight, Aimée, et al. “About Face: Mapping Our Institutional Presence.” Computers and

Composition. 26 (2009): 190-202. Print.

Krug, Steve. Don’t Make Me Think: A Common Sense Approach to Web Usability 2nd Ed.

Berkeley, CA: New Riders Publishing, 2006. Print.

Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human

Intelligence. New York: Viking, 1999. Print.

Lala, Ritesh. A Framework for Intuitive Interaction in Immersive Environments. Diss. UC

Santa Barbara, 2011.

Larsen, Larry. “CES 2010: NUI with Bill Buxton.” Channel9. N.P. 6 Jan. 2010. Web.

Larson, Jarred. “Limited Imagination: Depictions of Computers in Science Fiction Film.”

Futures. 40.3 (15 August 2007): 293-299.

Latour, Bruno. “Gabriel Tarde and the End of the Social.” The

Social in Question: New Bearings in History and the Social Sciences. Ed. Patrick

Joyce. London: Routledge, 2002. 117-132. Print.

—. “On Actor-Network-Theory: A Few Clarifications.” Soziale Welt. 47.4 (1996): 369–

381.

—. The Pasteurization of France. Cambridge, MA: Harvard University Press, 1988.

Print.

—. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford

University Press, 2005. Print.

—. “The Powers of Association.” Power, Action and Belief: A New Sociology of

Knowledge? Ed. J. Law. London: Routledge & Kegan Paul, 1986. Print.

—. “Trains of Thought: Piaget, Formalism and the Fifth Dimension.” Common

Knowledge. 6.3 (1997): 170–191.

—. We Have Never Been Modern. Trans. Catherine Porter. Cambridge, MA: Harvard UP,

1993. Print.

Laurel, Brenda. “Introduction: What’s an Interface?” Ed. Brenda Laurel. The Art of

Human Computer Interface Design. Boston: Addison-Wesley, 1990. xi–xiii.

Print.

LaViola, Jr., Joseph J. “A Discussion of Cybersickness in Virtual Environments.”

SIGCHI Bulletin. 32.1 (January 2000): 47-56.

Lefebvre, Henri. The Production of Space. Trans. Donald Nicholson-Smith. Cambridge:

Blackwell, 1992. Print.

Leong, Susan, Teodor Mitew, Marta Celletti, and Erika Pearson. “The Question

Concerning (Internet) Time.” New Media and Society. 11.8 (2009): 1267-1285.

Lévy, Pierre. Becoming Virtual: Reality in the Digital Age. NY: Plenum Trade, 1998.

Print.

Liddell, Scott and Robert Johnson. “American Sign Language: The Phonological Base.”

Sign Language Studies 64.1 (1989): 195–278.

Liestøl, Gunnar, Andrew Morrison and Terje Rasmussen. Digital Media Revisited:

Theoretical and Conceptual Innovations in Digital Domains. Cambridge, MA:

MIT Press, 2004. Print.

Lingis, Alphonso. Body Transformations. NY: Routledge, 2005. Print.


Lister, Martin, et al. New Media: A Critical Introduction 2nd Ed. NY: Routledge, 2009.

Print.

Loren, Lewis A. and Eric Dietrich. “Merleau-Ponty, Embodied Cognition, and the

Problem of Intentionality.” Cybernetics and Systems 28.5 (1997): 345-358.

Lwabona, Kwegyir (Bilo). “Natural User Interfaces for Navigation in Virtual

Environments.” University of Cape Town, Cape Town, 2012. Print.

Lyytinen, Kalle and Youngjin Yoo. “Issues and Challenges in Ubiquitous Computing.”

Communications of the ACM 45.12 (2002): 63-96.

Mackenzie, Adrian. “Protocols and the Irreducible Traces of Embodiment: the Viterbi

Algorithm and the Mosaic of Machine Time.” 24/7. Eds. R. Hassan and R.E.

Purser. Palo Alto: Stanford University Press, 2005. Print.

—. Transductions: Bodies and Machines at Speed. London: Continuum, 2002. Print.

Madianou, Mirca and Daniel Miller. Migration and New Media: Transnational Families

and Polymedia. London, England: Routledge, 2011.

Manovich, Lev. “Art After Web 2.0.” The Art of Participation. NY: Thames and Hudson,

2008. Print.

—. “Cultural Analytics: Analysis and Visualization of Large Cultural Data Sets.”

Manovich.net. 2007. Web. 15 May 2014.

—. The Language of New Media. Cambridge, MA: MIT Press, 2001. Print.

Marche, Stephen. "Is Facebook Making Us Lonely?" TheAtlantic.com. The

Atlantic Monthly Group. May 2012. Web. 15 September 2014.


Marino, Mark. “Critical Code Studies.” ISR Graduate Student Research Forum.

UC Irvine. June 3, 2005.

Marra, Rose. “Human-Computer Interface Design.” Hypermedia Learning Environments:

Instructional Design and Integration. Eds. Piet A. M. Kommers, R. Scott

Grabinge, Joanna C. Dunlap. Mahwah, NJ: Lawrence Erlbaum Associates, 1996.

115–134. Print.

Massumi, Brian. Parables for the Virtual: Movement, Affect, Sensation. Durham, NC:

Duke UP, 2002. Print.

McCorkle, Ben. Rhetorical Delivery as Technological Discourse: A Cross-Historical

Study. Carbondale, IL: Southern Illinois UP, 2012. Print.

McCullough, Malcolm. Digital Ground. Cambridge, MA: MIT Press, 2004.

—. “Whose Body? Looking Critically at New Interface Designs.” Eds. Kristin Arola and

Anne Frances Wysocki. Composing (Media) = Composing (Embodiment):

Bodies, Technologies, Writing, the Teaching of Writing. Logan: Utah State UP,

2012. 174-187. Print.

McIntyre, Ronald and David Woodruff Smith. “Theory of Intentionality.” Husserl’s

Phenomenology: A Textbook. Eds. J.N. Mohanty and William R. McKenna.

Washington DC: Center for Advanced Research in Phenomenology and

University Press of America, 1989. 147-79. Print.

McLuhan, Marshall. Understanding Media: The Extensions of Man. New York:

McGraw-Hill, 1964. Reprint. Cambridge, MA: MIT Press, 1994. Print.


McNeill, David. Hand and Mind: What Gestures Reveal about Thought. University of

Chicago Press, August 1992. Print.

Meadows, Mark. We, Robots: Skywalker’s Hand, Blade Runners, Iron Man, Slutbots, and

How Fiction Became Fact. Guilford, CT: Lyons Press, 2011. Print.

Medway, Peter. “Virtual and Material Buildings: Construction and Constructivism in

Architecture and Writing.” Written Communication 13 (1996): 473-514.

Merleau-Ponty, Maurice. 1945. The Phenomenology of Perception. NY: Routledge,

2003. Print.

—. The Primacy of Perception: And Other Essays on Phenomenological Psychology, the

Philosophy of Art, History and Politics. Northwestern University Press, Evanston,

IL, 1964. Print.

Miccoli, Anthony. Posthuman Suffering. Plymouth, UK: Lexington Books, 2010. Print.

—. “Posthuman Topologies: Thinking Through the Hoard.” Design, Mediation, and the

Posthuman. Lanham, MD: Lexington Books, 2014. 45-61. Print.

Milburn, Colin. “Nanotechnology in the Age of Posthuman Engineering: Science

Fiction as Science.” Configurations 10 (2002):261-295. Print.

Miller, Carolyn R. “Learning from History: World War II and the Culture of High

Technology.” Journal of Business and Technical Communication 12 (1998): 288-

315.

—. “Opportunity, Opportunism, and Progress: Kairos in the Rhetoric of Technology.”

Argumentation 8 (1994): 81-96.


Miller, Carolyn R., and Jack Selzer. “Special Topics of Argument in Engineering

Reports.” Writing in Non-Academic Settings. Ed. Lee Odell and Dixie Goswami.

New York: Guilford, 1985. 309-42.

Mitchell, William J. Me++: The Cyborg Self and the Networked City. Cambridge,

Mass.: MIT Press, 2003. Print.

Minority Report. Dir. Steven Spielberg. Perf. Tom Cruise. 20th Century Fox,

DreamWorks Pictures. 2002. DVD.

Moeslund, Thomas B., Adrian Hilton, and Volker Krüger. “A Survey of Advances

in Vision-based Human Motion Capture and Analysis.” Computer Vision and

Image Understanding 104.2-3 (2006): 90–126.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence.

Cambridge, MA: Harvard University Press, 1988. Print.

Moulthrop, Stuart. “Rhizome and Resistance: Hypertext and the Dreams of a New

Culture.” Hyper/Text/Theory. Ed. George Landow. Baltimore: Johns Hopkins P,

1994. Print.

Munster, Anna. Materializing New Media: Embodiment in Information Aesthetics.

Lebanon, NH: UP of New England, 2006. Print.

Nakamura, Lisa. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in

World of Warcraft,” Critical Studies in Media Communication 26 (2009): 140.

Neal, Meghan. “Elon Musk Waves His Hand and Designs Rocket Parts Out of Thin Air.”

Motherboard.vice.com. Vice Media LLC. 6 September 2013. Web. 15 October

2014.


Neal, Michael R. Writing Assessment and the Revolution in Digital Texts and

Technologies. NY: Teachers College Press, 2010. Print.

Negru, Teodor. “Intentionality and Background: Searle and Dreyfus Against Classical AI

Theory.” Filosofia Unisinos 14.1 (January/April 2013): 18-34. Print.

Nietzsche, Friedrich. “On Truth and Lies in a Non-Moral Sense.” The Rhetorical

Tradition: Readings from Classical Times to the Present. Eds. Patricia Bizzell and

Bruce Herzberg. Boston, MA: Bedford Books, 1990. 888-896. Print.

—. Thus Spake Zarathustra. Trans. Thomas Wayne. New York: Algora Publishing, 2003.

Print.

Noë, Alva. Action in Perception, Cambridge, MA: MIT Press, 2004. Print.

Norman, Donald A. The Design of Everyday Things. NY: Doubleday, 1988. Ebook.

—. The Invisible Computer. Cambridge, MA: MIT Press, 1998.

Ornatowski, Cezar. “2+2=5 If 2 Is Large Enough: Rhetorical Spaces of Technology

Development in Aerospace Engine Testing.” Journal of Business and Technical

Communication. 12 (1998): 316-42.

Ong, Walter. Orality and Literacy: The Technologizing of the Word. London:

Methuen, 1982. Print.

—. “Writing is a Technology that Restructures Thought.” Literacy: A Critical

Sourcebook. Eds. Ellen Cushman et al. Boston: Bedford/St. Martin, 2001. 19-31.

Print.


Palmer, Mark, A. Turton, S. Grieve, T. Moss, and J. Lewis. “A body of evidence:

Avatars and the generative nature of bodily perception.” Technologies of Inclusive

Well-Being. NY: Springer, 2014. 95-120. Print.

Park, Hye-Jin, Jiyoung Park, and Myoung-Hee Kim. “3D Gesture-Based View

Manipulator for Large Scale Entity Model Review.” AsiaSim 2012: Asia

Simulation Conference 2012, Shanghai, China, October 27-30, 2012.

Proceedings, Part I. Heidelberg, Germany: Springer-Verlag. 524-533. Web. 6

May 2015.

Penley, Constance and Andrew Ross. Technoculture. Minneapolis, MN: University of Minnesota Press, 1991. Print.

Perry, Mark. “Distributed Cognition.” HCI Models, Theories, and

Frameworks: Toward a Multidisciplinary Science. Ed. J. Carroll. San Francisco:

Morgan Kaufmann Publishers, 2003. 193-224.

Pew Research Center. “The Internet of Things Will Thrive by 2025.” Pewinternet.org.

Pew Research Center. 14 May 2014. Web. 9 September 2015.

Pickering, Andrew. The Mangle of Practice: Time, Agency and Science. Chicago:

University of Chicago Press, 1995. Print.

Plato. Phaedrus. Trans. Christopher Rowe. NY: Penguin Group, 2005. Print.

Pogue, David. “Why Touch Screens Will Not Take Over.” ScientificAmerican.com.

Scientific American. 3 January 2013. Web. 18 March 2015.

Polanyi, Michael. The Tacit Dimension. Chicago: University of Chicago Press, 1966.

Print.


Poppe, Ronald. “Vision-based Human Motion Analysis: An Overview.” Computer Vision

and Image Understanding 108 (2007): 4–18.

Porter, James. “Recovering Delivery for Digital Rhetoric.” Computers and Composition

26.4 (December 2009): 207–224.

Poslad, Stephen. Ubiquitous Computing: Smart Devices, Environments and Interactions.

West Sussex, UK: Wiley, 2009. Print.

Poster, Mark. 1990. The Mode of Information: Poststructuralism and Its Social Context.

Cambridge, UK: Polity Press, 2007. Print.

Prince, Stephen. “True Lies: Perceptual Realism, Digital Images, and Film Theory.” Film

Quarterly. 49.3 (Spring 1996): 27-37. JSTOR. Web. 12 Dec. 2013.

Pullman, George, and Baotong Gu. “Introduction.” Content Management: Bridging the

Gap Between Theory and Practice. Ed. George Pullman and Baotong Gu.

Baywood Technical Communications Series. Amityville, NY: Baywood

Publishing, 2009. Print.

Quintilian. 95CE. Quintilian on the Teaching of Speaking and Writing: Translations from

Books One, Two, and Ten of the Institutio Oratoria. Trans. J.J. Murphy.

Carbondale, IL: Southern Illinois University Press, 1987. Print.

Ramazanoglu, Caroline and Janet Holland. Feminist Methodology: Challenges and

Choices. London: Sage, 2002. Print.

RealView Imaging. RealView Medical Holography. RealView Imaging, Ltd. 24 January

2013. Web. 05 May 2015.


Reed, Scott. “Extra Lives, Extra Limbs: Videogaming, Cybernetics, and Rhetoric after

‘Literacy.’” PhD Diss. University of Georgia, Athens, GA, 2009.

Reeves, Stuart. “Envisioning Ubiquitous Computing.” CHI '12 Proceedings of the

SIGCHI Conference on Human Factors in Computing Systems. (2012): 1573-

1582.

Reynolds, Nedra. “Composition's Imagined Geographies: The Politics of Space in the

Frontier, City, and Cyberspace.” CCC 50 (1998): 12-35.

Ricardo, Francisco J. “Post-Chapter Dialogue, Hayles and Ricardo.” Literary Art in

Digital Performance: Case Studies in New Media Art and Criticism. NY:

Continuum International Publishing, Inc., 2009. 48-51.

Rice, Jenny E. “The New ‘New’: Making a Case for Critical Affect Studies.” Quarterly

Journal of Speech. 94.2 (2008): 200-212.

Rickert, Thomas. “Toward the Chora: Kristeva, Derrida, and Ulmer on Emplaced

Invention.” Philosophy & Rhetoric. 40.3 (2007): 251-273.

Riecke, Bernhard E., Bobby Bodenheimer, Timothy P. McNamara, Betsy Williams, Peng

Peng, and Daniel Feuereissen. “Do We Need to Walk for Effective Virtual Reality

Navigation? Physical Rotations Alone May Suffice.” Spatial Cognition VII.

Heidelberg, Germany: Springer Berlin Heidelberg, 2010. Web. 6 April 2014.

Rivers, Nathaniel. “Rhetorical Theory/Bruno Latour.” Enculturation (July 2012): n.p.

Web.

Rogers, Richard. “Rhythm and the Performance of Organization.” Text & Performance

Quarterly. 14. (1994): 222-238.


Rothenberg, David. Hand's End: Technology and the Limits of Nature. Berkeley:

University of California Press. 1993.

Rothkerch, Ian. “Will the Future Really Look Like ‘Minority Report.’” Salon.com. Salon

Media Group. 10 July 2002. Web. 9 May 2015.

Roupé, Mattias, Petra Bosch-Sijtsema, and Mikael Johansson. “Interactive Navigation

Interface for Virtual Reality Using the Human Body.” Computers, Environment

and Urban Systems 43 (January 2014): 42–50.

Ruddle, Roy A. and Simon Lessels. “The Benefits of Using a Walking Interface to

Navigate Virtual Environments.” ACM Transactions on Computer-Human

Interaction 16.1 (2009): 1-18.

Rupert, Robert. “Challenges to the Hypothesis of Extended Cognition.” Journal of

Philosophy 101 (2004): 389–428.

—. Cognitive Systems and the Extended Mind. New York: Oxford University Press,

2009. Print.

Salomon, Gavriel. Distributed Cognitions: Psychological and Educational

Considerations. New York: Cambridge University Press, 1993. Print.

Sampson, Tony D. Virality: Contagion Theory in the Age of Networks. Minneapolis, MN:

University of Minnesota Press, 2012. Print.

Savulescu, J. “Human Liberation: Removing Biological and Psychological Barriers to

Freedom.” Monash Bioethics Review 29.1 (2010): 4.1-4.18.

Secada, Jorge. Cartesian Metaphysics: The Scholastic Origins of Modern Philosophy.

Cambridge: Cambridge UP, 2004. Print.


Searle, John. “Is the Brain’s Mind a Computer Program?” Scientific American. 262.1

(1990): 26-31.

---. Minds, Brains and Science: The 1984 Reith Lectures (Can Computers Think?).

London: British Broadcasting Corporation, 1984. Print.

---. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992. Print.

Scott, A.O. “Code 46: FILM REVIEW; A Future More Nasty, Because It's So Near.”

NYTimes.com. The New York Times Company. 6 August 2004. Web. 1 March

2015.

Selber, Stuart. Multiliteracies in a Digital Age. Urbana, Illinois: NCTE, 2004. Print.

Selber, Stuart and Bill Karis. “Composing Human-Computer Interfaces Across the

Curriculum in Engineering Schools.” Electronic Communication Across the

Curriculum. Ed. Donna Reiss, Dickie Selfe, and Art Young. Urbana, Illinois:

NCTE, 1998. Print.

Selfe, Cynthia. Technology and Literacy in the 21st Century: The Importance of Paying

Attention. Carbondale, IL: Southern Illinois UP, 1999. Print.

Selfe, Cynthia L., and Richard J. Selfe, Jr. “The Politics of the Interface: Power and Its

Exercise in Electronic Contact Zones.” College Composition and Communication

45 (1994): 480-504. Print.

Shaer, Orit, et al. “Designing Reality-Based Interfaces for Experiential Bio-design.”

Personal and Ubiquitous Computing. 18.6 (August 2014): 1515-1532.

Sharon, Tamar. Human Nature in the Age of Biotechnology: The Case for Mediated

Posthumanism. Limburg, Netherlands: Maastricht UP, 2014. Print.

Shedroff, Nathan and Christopher Noessel. Make It So: Interaction Design Lessons From

Science Fiction. NY: Rosenfeld Media, LLC., 2012.

Sheets-Johnstone, Maxine, ed. Giving the Body Its Due. Albany: State University

of New York Press, 1992. Print.

Sherlock, Lee. “‘Gaming’ Genre: Serious Games, Genre Theory, and Rhetorical Action.”

Thesis, Michigan State University, 2008. East Lansing, MI: MSU. Print.

Siegel, Lee. Against the Machine: Being Human in the Age of the Electronic Mob. NY: Spiegel & Grau, 2008. Print.

Sofge, Eric. “Why Artificial Intelligence Will Not Obliterate Humanity.” PopSci.com.

Popular Science. 19 March 2015. Web. 15 September 2015.

Soja, Edward. Postmodern Geographies: The Reassertion of Space in Critical Social

Theory. London: Verso, Inc. 1989. Print.

---. Thirdspace: Journeys to Los Angeles and Other Real-and-Imagined Spaces. Malden,

MA: Blackwell Publishers, 1996. Print.

Solimini, Angelo G. “Are There Side Effects to Watching 3D Movies? A Prospective

Crossover Observational Study on Visually Induced Motion Sickness.” PLOS

ONE 8.2 (1 February 2013): 1-8.

Stannus, S., et al. “Gestural Navigation in Google Earth.” Proceedings of the 23rd

Australian Computer Human Interaction Conference. OzCHI ’11, ACM, New

York, NY, USA, 2011. 269–272.


Stern, Helman, Juan Wachs, and Yael Edan. “Optimal Consensus Intuitive Hand Gesture

Vocabulary Design.” Proceedings of the 2008 IEEE International Conference on

Semantic Computing (ICSC '08). IEEE Computer Society, Washington, DC,

USA, 2008. 96-103.

Stix, Gary. “Little Big Science.” ScientificAmerican.com. Scientific American. 16

September 2001. Web. 16 July 2015.

Stone, Allucquere Rosanne. The War of Desire and Technology at the Close of the

Mechanical Age. Cambridge, MA: MIT Press, 1996. Print.

Stormer, Nathan. “Articulation—A Working Paper on Rhetoric and Taxis.” Quarterly

Journal of Speech 90.3 (2004): 157-184.

"subjectivity." OED Online. Oxford University Press, September 2014. Web. 21

September 2014.

Sullivan, Patricia. “Practicing Safe Visual Rhetoric on the World Wide Web.” Computers and Composition 18 (2001): 103-121. Print.

Tarde, Gabriel de. Monadologie et Sociologie. 1893. France: Institut Synthelabo, 1999.

Print.

Taylor, Todd. "A Methodology of Our Own." Composition Studies in the 21st Century:

Rereading the Past, Rewriting the Future. Ed. Lynn Bloom, Don Daiker, Ed

White. Carbondale: Southern Illinois University Press, 2003. Print.

Thacker, Eugene. Biomedia. Minneapolis, MN: University of Minnesota Press, 2004. Print.

---. “Data Made Flesh: Biotechnology and the Discourse of the Posthuman.” Cultural

Critique. 53 (Winter 2003): 72-97.


---. “The Science Fiction of Technoscience: The Politics of Simulation and a Challenge

for New Media Art.” Leonardo. 34.2 (April 2001):155-158.

---. “What is Biomedia?” Configurations 11.1 (2003): 47-79.

Toffoletti, Kim. “Media Implosion: Posthuman Bodies at the Interface.” Hecate (2003):

152-165. Print.

Transcendence. Dir. Wally Pfister. Perf. Johnny Depp. Alcon Entertainment, DMG Entertainment, Straight Up Films. 2014. DVD.

Tucker, Aaron. Interfacing With the Internet in Popular Cinema. New York: Palgrave Macmillan, 2014. Print.

Tufekci, Zeynep. “The Social Internet: Frustrating, Enriching, but Not Lonely.” Public

Culture 26.1 (2014): 13-23.

---. “Social Media's Small, Positive Role in Human Relationships.” Atlantic.com. Atlantic

Monthly Group. 25 April 2012. Web. 12 Dec. 2013.

Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from

Each Other. NY: Basic Books, Inc. 2012. Print.

Uhls, Yalda T., et al. “Five Days at Outdoor Education Camp Without Screens Improves Preteen Skills with Nonverbal Emotion Cues.” Computers in Human Behavior 39 (2014): 387-392.

Valli, Alessandro. “The Design of Natural Interaction.” Multimedia Tools and Applications 38.3 (October 2008): 295-305.

Van Nostrand, A. D. Fundable Knowledge: The Marketing of Defense Technology.

Mahwah, NJ: Erlbaum, 1997.


Varela, Francisco. “Organism: A Meshwork of Selfless Selves.” Organism and the

Origins of Self. Dordrecht and Boston: Kluwer Academic, 1991. Print.

Verbeek, Peter-Paul. “Cybertechnology: Rethinking the Phenomenology of Human–

Technology Relations.” Phenomenology and the Cognitive Sciences 7.3

(September 2008): 387-395.

---. “Materializing Morality: Design Ethics and Technological Mediation.” Science,

Technology and Human Values. 31.3 (2006): 361-380. Print.

---. Moralizing Technology. Chicago: University of Chicago Press, 2011. Print.

---. What Things Do. University Park: Penn State University Press, 2005. Print.

Vico, Giambattista. The New Science. Trans. T. Bergin and M. Fisch. Ithaca, NY: Cornell

University Press, 1968. Print.

Vinge, Vernor. “The Coming Technological Singularity: How To Survive in the Post-

Human Era.” Vision-21 Interdisciplinary Science and Engineering in the Era of

Cyberspace. NASA, 1993. 11-22.

Vivian, Bradford. Being Made Strange: Rhetoric Beyond Representation. Albany, NY:

State University of New York Press, 2003. Print.

---. “The Threshold of the Self.” Philosophy and Rhetoric 33.4 (2000): 303-318.

Wachs, Juan Pablo, Mathias Kolsch, Helman Stern, and Yael Edan. “Vision-based

Hand-gesture Applications.” Communications of the ACM 54.2 (February 2011):

60-71.

Walzer, Arthur E., and Alan Gross. “Positivists, Postmodernists, Aristotelians, and the Challenger Disaster.” College English 56 (1994): 420-33.

WarGames. Dir. John Badham. Perf. Matthew Broderick. United Artists, Sherwood

Productions. 1983. DVD.

Warnick, Barbara. Critical Literacy in A Digital Era: Technology, Rhetoric, and the

Public. NY: Routledge, 2002. Print.

---. “Looking to the Future: Electronic Texts and the Deepening Interface.” Technical Communication Quarterly 14.3 (2005): 327-333. Academic Search Complete. Web. 15 June 2014.

Warnick, Barbara and David S. Heineman. Rhetoric Online: The Politics of New Media.

New York: Peter Lang Publishing, 2012. Print.

Weiser, Mark. “The Computer for the 21st Century.” Scientific American. 265.3

(1991): 94-104.

Weiss, Dennis M., Amy D. Propen, and Colbey Emmerson Reid, eds. Design, Mediation and the Posthuman. Lanham, MD: Lexington Books, 2014. Print.

Wellmon, Chad. “Why Google Isn’t Making Us Stupid…or Smart.” Hedgehog

Review. 14.1 (Spring 2014): 66-80.

Wheeler, Michael. “Embodied Cognition and the Extended Mind.” The Continuum Companion to Philosophy of Mind. Ed. James Garvey. Bloomsbury Companions. London: Continuum, 2011. 220-238. Print.

Whitson, Steve and John Poulakos. “Nietzsche and the Aesthetic of Rhetoric.” Quarterly

Journal of Speech, 79 (1993): 131-145.

Wigdor, Daniel, and Dennis Wixon. Brave NUI World. Burlington, MA: Morgan Kaufmann Publishers, 2011. Print.


Wilson, Robert A. “Extended Vision.” Perception, Action and Consciousness. Eds. N. Gangopadhyay, M. Madary, and F. Spicer. New York: Oxford University Press, 2010. Print.

Winner, Langdon. “Do Artifacts Have Politics?” Readings in the Philosophy of

Technology. Ed. David M. Kaplan. Lanham, MD: Rowman & Littlefield

Publishing Group, Inc., 2009: 251-263.

Winsor, Dorothy A. “Engineering Writing/Writing Engineering.” College Composition and Communication 41 (1990): 58-70.

---. “Guest Editor’s Introduction: A Call for the Study of the Rhetoric of Technology.” Journal of Business and Technical Communication 12.3 (1998): 285-287.

---. “Invention and Writing in Technical Work: Representing the Object.” Written

Communication 11 (1994): 227-50.

Whitt, David Francis. “‘Resistance is Futile’: The Rhetoric of the Cyborg in the Information Age.” 1 January 2002. ETD collection for University of Nebraska-Lincoln. Paper AAI3074109.

Wolfe, Cary. What is Posthumanism? Minneapolis: University of Minnesota Press, 2010.

Print.

Wright, David. “Alternative futures: AmI scenarios and Minority Report.” Futures 40

(2008): 473-488. Print.


Wysocki, Anne. “awaywithwords: On the Possibilities in Unavailable Designs.”

Computers and Composition 22 (2005): 55-62. Print.

---. “Erratum: ‘Impossibly Distinct: On Form/Content and Word/Image in Two Pieces of Computer-based Interactive Multimedia.’” Computers and Composition 18

(2001): 207. Print.

---. “Monitoring Order.” Kairos 3.2 (1998): n. pag. Web. 01 April 2009.

Wysocki, Anne Frances, et al. Writing New Media: Theory and Applications for

Expanding the Teaching of Composition. Logan, UT: Utah State UP, 2004. Print.

Wysocki, Anne Frances, and Julia I. Jasken. “What Should Be an Unforgettable Face…”

Computers and Composition 21 (2004): 29–48. Print.

Zappen, James P. “Digital Rhetoric: Toward an Integrated Theory.” Technical

Communication Quarterly 14.3 (2005): 319-325.

Zhao, Rongying and Ju Wang. “Visualizing the Research on Pervasive and Ubiquitous

Computing.” Scientometrics 86 (2011): 593-612.


NOTES

1. Here the computer is considered to be any device, simple or complex, small or large, that is programmable and has a memory to store data and/or code.

2. Mark Weiser’s original vision, needless to say, did not account for the web or mobile computing as we see them now but rather consisted of the “tabs,” “pads” and “boards” conceptualized at Xerox PARC; in fact, much of the earliest work on ubicomp took place without the web or mobile networks (Greenfield 13).

3. Scholars often use the terms “pervasive computing” and “ubiquitous computing” as equivalents although they are conceptually disparate (Lyytinen and Yoo 63). Ubiquitous computing refers to the underlying framework for technologies that are everywhere, invisible, and embedded in objects and environments. Pervasive computing refers to the distributed set of tools within the environment, the objects we use to access information anytime and anywhere.

4. Both of Carr’s texts were widely read and discussed by other tech writers, cultural critics and the public in general. His book The Shallows, which subsequently extended the ideas from the Atlantic article, was a Pulitzer Prize finalist for Nonfiction in 2011.

5. This is hardly exhaustive. For additional texts with equally telling titles see: Andrew Keen, The Cult of the Amateur: How Today’s Internet is Killing our Culture; Sven Birkerts, The Gutenberg Elegies: The Fate of Reading in an Electronic Age; Clifford Stoll, High-Tech Heretic: Reflections of a Computer Contrarian; Todd Gitlin, Media Unlimited: How the Torment of Images and Sounds Overwhelms Our Lives; Todd Oppenheimer, The Flickering Mind: Saving Education from the False Promise of Technology; and Mark Helprin, Digital Barbarism: A Writer’s Manifesto. One of the most anti-technology texts is Neil Postman’s Technopoly: The Surrender of Culture to Technology.

6. Clearly this is only one strand of the amalgam of studies in digital rhetoric and new media concerning the social and material implications of technology. For instance, in terms of space, new media studies explores location-aware technologies (de Souza e Silva; Jason Farman); how mobile, location-aware technologies differ and the relationships this bears to urban spaces (Moores); the role of technologies in civic and political engagement and activism (Couldry, Livingstone, and Markham; Foth; Jurgenson); and the role of mobile technologies in global and local spaces (Madianou and Miller 2011). Some scholars, for instance, look toward defining and theorizing the characteristics of digital media, examining the qualities that allow or hinder user action (Fagerjord; Gurak, “Cyberliteracy”; Manovich). Others focus on digital spaces and media’s potential in constructing and shaping identity (Turkle; Johnson-Eilola; Miller). Another branch addresses composition and pedagogy in a digitally-driven world, using key concepts such as “remixing” (DeVoss; Banks; Yancey 2009; Latterell). Some push the notion of remixing further, arguing for the incorporation of genre studies into composing digitally (Ray; Spinuzzi). Another socially-focused strand looks at the potentialities of creating communities in digital environments (Arnold, Gibbs, and Wright; Blanchard; Matei and Ball-Rokeach; Quan-Haase and Wellman), while still others look at the possibilities of multimodality and multilingualism to cut across cultural boundaries to suit the more global contexts of today’s connected world (Fraiberg).

7. I use the word “thing” interchangeably with object, unlike some object-oriented ontologists. My framework somewhat cuts across theirs; however, I do not adopt their “thing” differentiation in this dissertation.

8. This moment of disconnection, or when something goes wrong, is similar to Heidegger’s understanding of equipment. Heidegger argues that “ready-to-hand” tools exist in fields of activity and networks of other tools for use. These are tools that we use without theorizing their purpose. “Present-at-hand” tools are those that we notice when something is deficient or fails. Something is present-at-hand, then, when it does not fulfill the purpose that the user expects it to.

9. René Descartes’ seminal work (1644, 2010) obviously created the foundation for the Cartesian tradition. However, numerous other scholars have influenced this tradition and its longevity. For a full review of these philosophers and theorists, see Jorge Secada’s 2004 comprehensive work on Cartesian metaphysics.

10. For more on transhumanism, see Keith Ansell-Pearson (1997), The Transhuman Condition: A Report on Machines, Technics, and Evolution; Jean-Pierre Béland and Johane Patenaude (2013), Risk and the Question of the Acceptability of Human Enhancement: The Humanist and Transhumanist Perspectives; J. P. Bishop (2010), Transhumanism, Metaphysics, and the Posthuman God; and Nick Bostrom’s (2006) A Short History of Transhumanist Thought and (2005) Transhumanist Values, to name just a few.

11. This text is ultimately a call to change the humanities so that they better embrace technologies. Her work posits comparative media theory as the lens through which the humanities, in particular the digital humanities, can have more productive engagements with technology.

12. Geometric substance places an object in a setting, corresponding with scene from Burke’s pentad. Familial substance stresses the biological family but also social groups. Directional substance is also biological, relating to movement from within that motivates; it also includes “terminologies that situate the driving force of human action in human passion” or tendencies and trends, and corresponds with agency. Lastly, dialectic substance relates to the “human situation,” or what Burke defines as “Being and Not Being,” what the person is in themselves (Grammar of Motives 44). This corresponds with purpose.

13. The term natural here is comparable to “vraisemblance” in literature, which Jonathan Culler defines as the ways “in which a text may be brought into contact with and defined in relation to another text which helps make it intelligible.” Similar to the literary concept, a natural interface is one that is intelligible to the user, one that mirrors what we already know or with which we have associations. For instance, swiping in a gestural interface to move a file out of the way is similar to the way one might push a piece of paper out of the way. Further, natural user interfaces are easy to learn and intuitive in the same way that Culler sees the genre conventions and intertextuality of literary texts. With interfaces, the gestures of a touchscreen, for instance, are the same whether one is using an iPhone or an Android smartphone. Typically the icons on a GUI touchscreen are similar, so there is a level of convention at work (Structuralist Poetics: Structuralism, Linguistics and the Study of Literature. London: Routledge, 1975).

14. A list of some of the numerous experts, including scenario planner Peter Schwartz, science advisor John Underkoffler from M.I.T.’s Media Lab, Douglas Coupland, Cybergold founder Nat Goldhaber, biomedical researcher Shaun Jones, and virtual reality expert Jaron Lanier, among others, can be gathered through numerous sources. For instance, the Minority Report website lists some participants as well as the think tank’s overarching approach. Also see Lisa Kennedy’s “Spielberg in the Twilight Zone,” Wired 10.06 (June 2002), and Chris Taylor’s “Looking Ahead in a Dangerous World,” Time (11 October 2004).

15. The differentiation between weak and strong AI is that with weak AI a machine running a program is at most only capable of simulating real human behavior and consciousness. Strong AI, on the other hand, purports that AI programs running on a machine effectively constitute a mind. For information on these debates see the works of John Searle and Daniel Dennett, among other AI researchers and theorists.

16. Intentionality, as Searle argues, is “that property of mental states” (1) by which they are directed at or about something. Intention refers to the desire and agency to do something. Both are distinct from intensionality, which in linguistics relates to sentence structure, a property of sentence contexts. Jeff Speaks explains it thus: “a sentence context is a ‘location’ in a sentence occupied by a word or phrase. Given any context in a sentence, we can then ask: can we, by replacing one expression or phrase in that context with another which has the same reference, change the truth-value of the sentences as a whole? If so, then the context is said to be ‘intensional’” (“Intentionality,” Cambridge Encyclopedia of the Language Sciences).

17. For more in-depth discussions of this debate see these texts among the many that engage it: Rodney Brooks, “Elephants Don’t Play Chess,” Robotics and Autonomous Systems 6 (1990): 3-15; Daniel Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence (NY: BasicBooks, 1993); Hubert Dreyfus and Stuart Dreyfus, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (Oxford, UK: Blackwell, 1986); Hubert Dreyfus, What Computers Still Can’t Do (New York: MIT Press, 1979); Stevan Harnad and Peter Scherzer, “First, Scale Up to the Robotic Turing Test, Then Worry About Feeling,” Artificial Intelligence in Medicine 44.2 (2008): 83-89; John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985); Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 2003); John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3.3 (1980): 417-457; A.P. Saygin, “Turing Test: 50 Years Later,” Minds and Machines 10.4 (2000): 463-518; and Alan Turing, “Computing Machinery and Intelligence,” Mind LIX.236 (October 1950): 433-460.