
CHAPTER 2

USER-MACHINE (AGENT-AGENCY)

Introduction: Agency, Technology and Internetworked Symbolic Action

For Kenneth Burke, the discussion of motive – of what people are doing and why they are doing it – centers in the terms of dramatism, and therefore a Burkean approach to answering this question can most easily be wrapped around discussion of computing environments, perhaps as a kind of "text," in terms of the Burkean pentad: Agent, Agency, Act, Scene, and Purpose. These elements are present in every computing situation – although one might say they are present in every situation – every time a human being sits in front of a computer. Taking great liberties with the main thrusts of A Grammar of Motives, I will draw attention to what I refer to as sites of Human-Computer Interface (or Interaction), which are patterned conceptually (though not functionally) after some of Burke’s pentadic “ratios.” Focus and discussion are directed to the “Act” of computing, although I persist in anchoring these ratios to the human half, the “Agent,” of the “human-computer interaction.” As we analyze this interaction, we can consider various points of view, characterizing computers as tools, as contexts, as cultural icons, and as internetworked media. Privileged in this scheme, or extension of Burkean dramatism, is the agent or “user” of computer technology. While Burke’s tendency is to foreground, or emphasize, the term “Act,” primarily in reference to the symbolic action of a literary text, my approach is to analyze (and problematize) “ratios” or entanglements of the human Agent – the user, the cyberwriter, the netizen, the hacker, the hypertext designer, the “content provider” – in relation to each term in the pentad. Assumed as a backdrop, a context, is the dramatistic Act of internetworked symbolic action, perhaps even the “act” of being virtual.

I would argue that there is a kind of chronological (hierarchical) order in the movement of what I call “sites of interaction” between humans and computers. As the computer user shifts focus from moment to moment, concentrating now on the machine itself and the program applications she uses (the Agency), perhaps next upon the task she hopes to accomplish by means of the computer (the Act), and later spends time familiarizing herself with the graphical user interface on the screen (the Scene), she shifts contexts, re-prioritizes her actions and the machine’s motions. If computing is a rhetorical act, then we can organize a Burkean approach to the computing act in dramatistic terms.

As we “read” the interactions between humans and computers, and more importantly, the interactions between humans by means of computers and internet technology, we can draw correlations between the shifting sites, or “levels” of focus between the human and the computer, and the five terms of the pentad.

Conceptualizing Computers

Shifting focus is nothing new to our lives with machines. Denise Murray in Knowledge Machines (1995) points out that, while driving in our cars, we shift from horizon, to signpost, to speedometer, to rearview mirror, and back to the road. At first these shifts are conscious, awkward, unfamiliar, until our minds and eyes become accustomed to them, as we form mental “schemata” around them. After several years of commuting, these schemata become seamless. We know our speed, even if we do not consciously remember glancing at the speedometer (19).

In front of our computers, we perform similar shifts of focus. After several years of work-related computer use, we train ourselves to perceive these shifts as seamless movements between tasks and the tools we need to complete them. At first, however, our interactions with new machines are conscious – each piece of equipment for a time after its introduction into the office or work environment is "too much with us." We fuss over it, read its manuals, call the service representatives, limit access to it, and, in short, we get acquainted with it. As our familiarity and comfort with the equipment grows, so does our ease of focus-shift, from the machine itself, say, to the quality, variety, and quantity of tasks it is performing. As we gain experience with the machine, our shifts in focus become smoother, less conscious, at times almost automatic. We could argue that as the level of trepidation lowers, the level of expectation rises.

The networked computer enters our work lives with a larger scope and depth of complexity than any other piece of technology. We ubiquitously anthropomorphize its hardware (i.e., circuit boards, chips, housings, cables, and other parts of the physical machine), and at the same time become immersed in its software (the applications, or programs, written to enable tasks by means of a user interface). When I say that we shift focus, what I mean is that our needs and motives for interacting with the computer shift and change constantly. To help examine motives and approaches to internetworked writing, I have divided these sites of interaction between user and computer into five. For want of more elegant names, I tentatively give them utilitarian tags: user-machine, user-screen, user-application, user-task, user-user. Each of these relationships between the user and the computer is generated by correlating motives, needs, or impulses. Each generates a roughly characteristic vocabulary and fairly consistent groups of concepts, ideas, and principles. Each, I hope to show, adheres to particular, if not discrete, clusters of causes and effects that directly affect the ways computer users interact with the machine, the screen, the software, the task, and each other.

Humans and Technology—Agent and Agency

Media saturation, particularly television commercials selling Information Technology products and services, would support the argument that the computer terminal screen is the ubiquitous initial point of human-computer interaction. As we consider what computers have come to signify, we might as well be talking about the dramatistic term “Scene,” as these jazzy, colorful images – a bizarre window-box of screen-within-screen, proscenium framed by proscenium – repeat and multiply daily as marketers, manufacturers, and other “entrepreneurs” leap onto the computer-train. For millions of Americans, television advertising marked the first time we ever saw the machine’s new graphically exciting face. But before the sexy pictures, before Windows 3.X, before the Macintosh, was our popular conceptualization – and misconceptualization – of the technology itself, as physical machine, as icon, as 20th Century Satan and Savior.

Even before the early 1960’s planning and design of ARPANET, the military system that served as the initial backbone and prototype to the internet, computers had arrived in our cultural lexicon, and perhaps even in our dramatis personae of 20th Century Western archetypes, reflected in popular literature. Before Alvin Toffler put a name to it, Huxley’s Brave New World created a substantial wave of future-shock with various techno-forecasts, including a model still held as a goal of virtual reality (VR) by designers: a collection of multisensory virtual-experience machines called “feelies.” In 1951, readers of speculative fiction sympathized with Vonnegut’s dystopian angst aimed at computer technology and technocratic industrial practices in his popular first novel Player Piano. By 1961 computers had earned enough space in the collective mind to rate a cultural stereotype, and that stereotype was not pretty. Dark satires reflected America’s unease with its new “thinking machines,” exemplified most memorably in Joseph Heller’s Catch-22, which struck a Luddite nerve in 1961 with a character promoted to the rank of Major by a very logical Pentagon computer. The reason? The recruit’s first name is Major. In fact, all three of his names are “Major.” Thus, by means of computer-logic, he exits boot camp as Major Major Major Major, and immediately begins to hide, miserable in his incompetence, in an office where no one is admitted to see him unless he is absent. By 1968 no one in viewing audiences seemed surprised by the ominous (murderous) “HAL 9000” computer in Stanley Kubrick’s 2001: A Space Odyssey, and by the time ARPANET is up and running (first operational nodes in 1969), Alvin Toffler’s Future Shock (1970) seems almost overdue. With these ominous, ironic models of computer technology making up the computerworld scene, already these man-made servants seemed more scary than helpful – few Americans felt “served” by these machines so much as they felt themselves to be in the service of the computers owned and operated by the IRS, the Draft Board, and the phone company. As an agency by means of which man could escape drudgery, free up work time and space, and live “the good life,” the computer seemed an escaped Jinn. In accepting their labors, many felt they had agreed to accept their harsh intolerance of “error” in more elements of their lives and work than had ever been imagined (Postman, 111 ff.).

It is no surprise that Burke viewed computer technology with unease, even suspicion. Influenced by Nietzsche and others to conceive of human thought and terministic behaviors as connected to, not separate from, our humanity, while at the same time taking seriously our tendency to fall into traps of “occupational psychosis” when we should rather be exercising “perspective by incongruity” – thinking for ourselves – he challenged the new technologies and the tendency of typical technocratic god-terms to creep to the top of social and ethical “clusters,” or terms surrounding values of work, education, politics, and living.

The issue: If man is the symbol-using animal, some motives must derive from his animality, some from his symbolicity, and some from mixtures of the two. The computer can't serve as our model (or "terministic screen"). For it is not an animal, but an artifact. And it can't truly be said to "act." Its operations are but a complex set of sheerly physical motions. Thus, we must if possible distinguish between the symbolic action of a person and the behavior of such a mere thing.

--Kenneth Burke, "Mind, Body, and the Unconscious" (LSA, 1966)

Critical to a Burkean approach to internetworked writing is the distinction between action and motion. If we are to maintain that “computing is a rhetorical act,” we need to distinguish between the human act of computing, that is, the human employment of the machine as an agency in the accomplishment of some task or goal, or in the creation of a text, an application, or an operating system, and the machine motion that implements (or results from) that human act. When I say that “computing is a rhetorical act” I do not refer to the functions of the machine itself, the processing of the binary stream of data bundled into bits and bytes of on-off impulses. The computing machine is capable, in Burke’s view, only of motion – it has no needs (except our own need to keep it functioning for our own purposes), no desires, no goals, no urges. Computers do not act, but humans, in the employment of computers as agency, are said to be “computing” when they initiate (or plan, design, structure, make possible) the motion of the machine.

While any human action can be “interpreted” or perceived as having meaning or purpose, and therefore can be analyzed rhetorically, the Burkean system is primarily concerned with symbolic action. Therefore, it seems important to start from some basic claim, from a ground-zero declaration that “computing” is a uniquely human act, and that it is well within the realm of utterance and meaning – of symbolicity – in that it exists mainly within a dependence upon both “artificially,” or deliberately, created computer languages, and the employment of “natural,” or evolved, human languages. While it seems handy to designate computer languages such as Fortran, Cobol, C++, Visual Basic, or Java as “artificial,” as opposed to human languages such as English, Latin, French, or Russian, which we might term “natural,” the lines between computer languages and human languages must necessarily blur and tangle, since both human and computer languages are created, “developed,” and used by humans. Burke provides a possible method for striking some kind of comfortable balance in conceptualizing the differences between human languages and computer languages, and their respective symbolic function as a means of human motive. Human languages encompass the full range of what Burke in his discussion of “Poetics in Particular, Language in General” (LSA 1966) calls the four “linguistic dimensions”:

Viewed from the standpoint of “symbolicity” in general, Poetics is but one of the four primary linguistic dimensions. The others are: logic, or grammar; rhetoric, the hortatory use of language, to induce cooperation by persuasion and dissuasion; and ethics. By the ethical dimension, I have in mind the ways in which, through language, we express our characters, whether or not we intend to do so. (LSA 28)

Computer languages have a tendency to fall predominantly into the “logical/grammatical” dimension of language. We can, when pressed, imagine some arguments for the poetic, hortatory, and ethical dimensions of computer languages. Certainly an unusual irony of computer languages is that they function on one level as constructions of logical operations in the interface between human and machine (compiler), but take on extra dimensions of symbolicity in those instances when one programmer who understands a particular computing language reads the code written by another. Primarily, however, we must assume that computer languages consist and exist mainly in the realm of logical expressions, with the expectation of logical (rational/consistent) results of any utterance flowing from human to machine. One need not “persuade” a computer. Nor could one expect the machine to appreciate poetic, or artistic, symbolic expression, or to value human personality (ethical dimensions of expression). Thus, while it is reasonable to reserve final evaluation of computer languages in light of their incipient status (after all, the English language has had roughly a millennium to run loose from the pen, and almost a century on the keyboard, while the Java programming language, for example, has at this writing been in existence for less than a decade), it is understandable that many consider computing languages as “off-shoot” technical languages, or even mere “codes,” invented for particular purposes, developed mainly as functions of logical expression.

Emanating from this “neutral” or “logic-only” quality of computer usage and computing languages is a kind of paradigm creep symptomatic of the post-industrial, or “information,” age – its pervasiveness inspires cautionary critique such as the concept of a society ruled by “technique” envisioned by Jacques Ellul (1954/1964) – a conceptualization that Ellul argues is a result of our nature, our drive to create a technology-driven culture. Yet from a Burkean standpoint, we can follow textual trails in both academic and popular media, trails leading less toward anthropological concerns and more toward logological issues. The Burkean view of the “internet explosion,” or “computer revolution,” is more likely to find and focus on various “terministic screens,” showing that a dominant economic and industrial movement into the “culture of information,” or computer-privileging hierarchies in the world of work, can result in habits of logic-only thinking, of oversimplification, of assuming that because industries, economies, and institutions can be designed and managed on a “grammatical” or logical basis, all human symbolic interaction inevitably will be conceptualized from a logical or technological framework, say, in the dystopian ways considered by Neil Postman’s Technopoly (1992).

Presumably, according to Burke, if our entire culture has assimilated technocratic values, we should be able, by charting clusters, to “get our cues as to the important ingredients subsumed in ‘symbolic mergers’” (ATH 233). In the late 20th Century, clusters of “scientistic” terms and trails of technocratic thinking can be followed in the field of Rhetoric and Composition Studies. In the teaching of college writing, for example, we can see the three classical appeals divided and re-apportioned to fit a technology-driven terministic screen: textbooks privilege logos (logical appeals) over ethos (personal credibility), and both of these above pathos (emotional appeals), some even abandoning comment on emotional appeals altogether (an exception to this is Ramage & Bean’s Writing Arguments). Such emphasis on logos can on the one hand be explained by the “service” mission some composition departments set for themselves, framed upon a desire to prepare students to write during their academic careers in courses across the curriculum. However, composition textbooks in these courses tend to hold up as “examples” of the assigned writing tasks journalistic articles published in Harper’s, Atlantic, or the New Yorker, even as most writing assignments in first-year courses are written in modes and styles best suited for a broad, general audience rather than a scholarly one. Our textbooks’ passion is for logos, and their insistence upon “expert sources” in place of the writer’s (admitted lack of) scholarly ethos reveals much about our judgments about writing. Regardless of our insistence upon allowing students to “express themselves,” in Burke’s words, by careful examination of how a textbook, or a teacher, or an entire academic department handles the teaching and practicing of rhetorical appeals,

We reveal, beneath an author’s “official front,” the level at which a lie is impossible. If a man’s virtuous characters are dull, and his wicked characters are done vigorously, his art has voted for the wicked ones, regardless of his “official front.” (ATH 233)

The danger in viewing Ellul’s “technological society” as a given, as a natural force, an outcropping of our intrinsic tool-making, tool-using nature as Homo sapiens, is that it reinforces a kind of mechanistic fatalism almost resonant with Burke’s discussion of final causes as a problem of “Scope and Reduction” in A Grammar of Motives (1945/1969). What Burke would argue is that this privileging of “digital culture” is the result of terms clustered around an economic and political “God Term” – in this case, computer technology – one that has taken hold and (as we might argue is a logical next step in Capitalist thinking) become a way of thinking, speaking, and writing, especially in conceptualizing or “marketing” our hierarchies. Ellul, on the other hand, has assigned to our “nature” or “essence” as human beings a feature, technique, that marks our species apart from others. For Ellul, technology is an “ultimate cause,” and although he is not alone in this thinking, his fatalism stretches further than that of, say, Norman (1993), Murray (1995), Feenberg (1991), and possibly even Postman (1992).

For Burke, technology is acknowledged as a primary element in modern human existence, causing man to be “separated from his natural condition by instruments of his own making,” but at the same time Burke acknowledges the willfulness of such separation. He would allow that any number of political, social, or economic processes can move to create a terministic screen, one among millions, that may result from a massive “occupational psychosis” that, through sheer numbers and force of pragmatic, ubiquitous economic success, has become an assumption, has perhaps even earned a kind of grammatical “reduction” in our collective self-image. We can easily envision such a powerful and fast-growing technology, one that has caused newsworthy fluctuations in the economy and in political rhetoric, presenting clusters of terminologies and figures of speech around various God Terms, and establishing vocabularies and critical hierarchies within its own self-worship – marking, of course, who is “in” and who is “out” of the terministic “loop” creating all the fuss and hubbub. In short, the language of high technology has crept into our casual dead and dying metaphors, a sure sign that its impact has been felt economically and socially: “He’s a few bytes short of an algorithm,” or “She lectured for an hour, but her students just couldn’t seem to download the concept and run it properly” are bland examples of how we use computer jargon metaphorically in everyday conversation, outside the context of the technology itself. Computers, terministically and historically speaking, have arrived.

Extending the Burkean System: Machine as Agency

Why computers? Why the internet? The questions explored here concern motives, and the validity of discussing in Burkean terms the human-computer interface in the context of internetworked writing. And by “reasons” or “motives” I am speaking more or less synonymously with “immediate,” rather than “ultimate,” causes. To credit ultimate causes to metaphysical origins, or instead to naturalistic forces, will still not bring the discussion closer to “motive” in the immediate sense. That is, whether we argue that events or actions – in this case, internetworked writing – occur because of pre-programmed chemical chains in our DNA (“It’s just human nature”), or because of Divine Plan played out from elements laid out by Design during the Creation (“It’s God’s will”), we still have argued only ultimate causes, precursors or ancestors of the current action or event. That is, in Burkean terms, if we are seeking to uncover or reconstruct motives, we cannot explain away particular uses of technology, whether industrial, social, symbolic, or political, by calling upon “our technological nature” (i.e., the “predetermination” of our chemical DNA structures) as humans, nor by supernatural, “final” causes such as The Creation (i.e., “It is part of God’s Plan”), because both here function as constants and not as variables (GM, “Scope and Reduction” 98-99).1

Still, it is important to consider not only the immediate, physical site of the human writer’s focus upon the single instantiation of machine – the particular machine present at a particular moment, and possibly the operating system and software that has been installed on it – but to consider also the larger implications of what is signified in the term “computer” before the user sits down and begins to interact with the machine. Whether consciously or unconsciously, people have culturally informed ideas about the computer qua computer which they bring to their interactions with the machine. While all users are more or less aware of the humming of the CPU, the faster-than-human-sight flickering of pixellated light emanating from the screen, the layout of the keyboard, and the make and model of mouse, fingerpad, or toggle, new users tend to focus on the hardware, sustain their gaze a little longer and a bit more slowly, and consider the physical configurations more carefully than do habitual, experienced users.

This user-machine interaction is of course not unique to the new user. Experienced users, technicians, corporate personnel, systems analysts, hardware designers, programmers, computer scientists, and hackers all spend substantial amounts of time focused on the machine itself. Yet just as a physician's view of the human corpus is different from that of the lay viewer, the computer expert "sees" the hardware in a more technical, and less symbolic, poetic, or emotional way. The computer expert’s symbolic, terministic relationship with the machine is structured and shaped by values and concepts that adhere to the computer technology sciences and industry. His own “Occupational Psychosis” (PC 37-49) frames the patterns and techniques with which he approaches both the physical machine itself and the concept – or his understanding(s) of the concept – of the computer.

Because perceptions of the machines themselves vary among individuals, the physical presence of the machine interacts in interesting ways with the subculture made up of the personalities within an institution, organization, or corporation – ways which determine biases, literacies, and preferences for or against the presence of the machines in all or some of its offices (Dautermann 1996). The same is true of educational settings. Especially in college composition classrooms, I have noted that, despite claims that “Young people are more computer-literate than us (university teachers),” students come to the computing environment from a wide variety of experiences. Some may have attended "wired" high schools, or spent hours using computer games and the internet at home, while others may have little or even no experience in front of the monitor at all. But especially when dealing with adults, I agree with Carroll and Rosson (1987), who are careful to note that

New users are not 'blank slates' for training designers to write upon. Indeed, the most accurate way to think about new users is as experts in other, noncomputer domains. (83)

If we were to critique the act of computing dramatistically, a possible inevitability would be the irresistible tendency to prioritize terms of the pentad. Carroll and Rosson, experts in software design theory and in the relatively new field of Human-Computer Interaction studies, argue that a primary concern in analyzing the human-computer interaction is to privilege the human side of the equation, sometimes attempting to accomplish the impossible: the making of terms that resist the “god-position” of the machine, or of the applications running on the machine, or even the computational tasks of which the machine is eminently more capable than the human user. Perhaps the current project becomes clearer if we attempt both to tangle and untangle “man” from what Burke calls his “instruments,” while at the same time acknowledging that one of man’s “instruments” – language itself, the instrument by means of which this discourse evaluates Burkean views of technology and extends Burkean systems into the realm of internetworked writing – is as much a technology as the computer, the automobile, or the electric light bulb. Forgiveness for tangling symbolicity, technology, and instrumentality (agency) in such a manner comes from Burke himself:

We are the instruments of our instruments. And we are necessarily susceptible to the particular ills that result from our prowess in the ways of symbolicity. Yet, too, we are equipped in principle to join in the enjoying of all such quandaries, until the last time. Men’s modes of symbolic action are simultaneously untanglings and entanglements. (LSA viii)

Burke and Computer “Intelligence”

In the Burkean tradition of lexical beginnings, we can start with a definition of “Artificial Intelligence” from the Oxford English Dictionary: “(the field of study that deals with) the capacity of a machine to simulate or surpass intelligent human behaviour.”

A key term in this definition for Burke might be “behaviour.” The Turing test, used by computer specialists as an attempt to gauge the “intelligence level” of a computer program, is based upon the ability of a computer program to simulate human dialogue. That is, if a program can convince testers that they are communicating, by means of computer mediation, with a human being rather than a computer program, the program – and thus the programmers, by entelechial association – are said to have “passed the Turing test” (Turkle 85-96).
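The kind of simulated dialogue at issue can be sketched in a few lines of code. The following is a deliberately trivial illustration (not any historical program; the rules and names are invented here) of the pattern-matching "conversation" a Turing-test entrant performs: the program matches surface patterns and returns canned replies, which in Burke's terms is sheer motion, not symbolic action.

```python
# A trivial, invented sketch of rule-based "dialogue." The machine "answers"
# whether or not anyone is truly "asking" -- it cannot be said to act.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    """Return a canned reply built from surface pattern-matching alone."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the matched fragment back inside a stock question.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel anxious about computers."))  # Why do you feel anxious about computers?
print(respond("Hello there."))                     # Please go on.
```

Nothing in the exchange requires the program to mean anything; the appearance of dialogue emerges entirely from the human reader's interpretation of the echoed text.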

To be intelligent, to think, within the Burkean system, is to be human. What it is to be man, in the Burkean system, is defined openly and discussed exuberantly in Burke’s “Definition of Man”:

Man is
the symbol-using (symbol-making, symbol-misusing) animal
inventor of the negative (or moralized by the negative)
separated from his natural condition by instruments of his own making
goaded by the spirit of hierarchy (or moved by the sense of order)
and rotten with perfection. (LSA 16)

Burkean analysis provides some useful, if on occasion complicated, logological distinctions between humans and animals, and between humans and machines. Unlike Turing, who catalogues an impressive list of arguments against the claim that machines can have “intelligence” (some of which he gives more credence than others),2 Burke is very clear about essential differences between humans and non-humans:

As regards our basic Dramatistic distinction, “Things move, persons act,” the person who designs a computing device would be acting, whereas the device itself would but be going through whatever sheer motions its design makes possible. (LSA 64)

As always, Burke draws humanity into the realm of motive. It is our symbolicity that distinguishes people from animals, but our animality that separates us from the machines. The above commentary continues, as Burke clarifies his insistence that computers, although of man's own making, do not participate in humanity:

These motions could also be utilized so as to function like a voice in a dialogue. For instance, when you weigh something, it is as though you asked the scales, "How much does this weigh?" and they "answered," though they would have given the same "answer" if something of the same weight had happened to fall upon the scales, and no one happened to be "asking" any question at all. The fact that a machine can be made to function like a participant in a human dialogue does not require us to treat the two kinds of behavior as identical. (LSA 64)

His summation of this distinction once again complicates his definition of man, as well as his vision of man's symbolicity. Yet it functions as a powerful structuring device for dramatistic analysis. If we ask, "In a world where machines have become staggeringly complex, where our tasks performed in cooperation with these complex machines have created the illusion of 'smart machines,' perhaps even a kind of emerging 'cyborg culture,' how do we make positive distinctions between ourselves and our 'machine-selves'?" Burke replies:

...in one notable respect, a conditioned animal would be a better model than a computer for the reductive interpretation of man, since it suffers the pains and pleasures of hunger and satiety, along with other manifest forms of distress and gratification, though it's weak in the ways of smiling and laughing. In brief, man differs qualitatively from other animals since they are too poor in symbolicity, just as man differs qualitatively from his machines, since these man-made caricatures of man are too poor in animality. (LSA 64)

Computer as “Dual” Agency

When we employ the agency of a computer for the purpose of completing tasks, the agency is twofold. First is the machine itself, what computer professionals often refer to as "hardware": the "central processing unit," comprising circuit board technology, "hard disk" and "floppy disk" technologies (Read-Only and Random-Access "Memory" technologies), various signal-cable-conduit-port technologies in the service of connectivity, a visual interface such as a monitor or LCD screen, and input devices such as the keyboard, mouse, and perhaps even a microphone for voice-interface technologies often used by the handicapped or injured. For the majority of Americans who use computers in the home, the workplace, or both, these components have proved so complex and daunting in their multiple permutations and combinations that the computer industry has been forced (or perhaps we should say it has forced itself, in the name of profits), even at this incipient stage of computing technology, to develop and conform to as many standards as possible, and to develop "Plug and Play" features so that even the least "mechanical" person can set up her own computer and become "wired" with little or no technical support. As with most complicated and sophisticated agencies we humans employ – automobiles and aircraft come immediately to mind – the computer can be employed successfully to a certain extent by novices and amateurs. However, design, repair, re-configuration, and sometimes even minor adjustments or customizations are beyond the abilities of a great percentage of non-experts. For some, the possibility of participating in design and development seems out of the question, especially where microscopic computer chip technology is involved. However, design experts insist that users are key elements and should always be included in the design process, even though in practice they often have been left out in the technological cold (Catterall, Taylor & Galer 1991; Norman 1993; Nielsen 1993; Cooley 1996; Friis 1996; Vaske & Grantham 1990).

The second "Agency," that with which most users are engaged, is the program or application – the "software" – which enables all users to "interact" with the machine. Even programmers, as they plan, design, and build more applications, must use compiler applications (also known as "software development environments"), which "compile" and translate their computer-language code into the machine-code binary programs that run underneath the compilers, and which in turn "translate" that code into the "ones and zeroes" necessary for electronic computing to occur:

Table 1. Layers of Symbol Between Human and Machine.

User                        Human
Human Language              Symbolic Action
Program Application         Code
Operating System            Code
Machine Language            Code
Central Processing Unit     Machine
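The layering in Table 1 can be glimpsed in miniature from inside a high-level language itself. The sketch below (an illustrative addition, not part of the original text) uses Python's standard `dis` module to show a human-readable statement already translated one layer down, into the instruction codes an interpreter executes:

```python
import dis

def greet():
    # Top layer of Table 1: a statement a human reads as language.
    return "hello"

# One layer down: the same statement rendered as instruction codes,
# nearer the machine end of the table.
ops = [ins.opname for ins in dis.get_instructions(greet)]
print(ops)
```

Each further layer – operating system code, machine language, the processor itself – repeats this translation in progressively less symbolic, more mechanical form.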

Although Burke’s explanation of the computer binary system (LSA 1962/1963/1966) is a side-trip, an illustration on his way to elaborating upon man’s symbolic use of the negative, it is nevertheless refreshingly lucid, and sharply in tune with the state of computer science in the early 1960s:

In the application of the binary system to the "electronic brains" of the new calculating devices, the genius of the negative is uppermost as it is in the stop-go signals of traffic regulation. For the binary system lends itself well to technological devices whereby every number is stated as a succession of choices between the closing of an electrical contact and the leaving of the contact open. In effect, then, the number is expressed by a series of yeses and noes, given in one particular order. (471)

Note that for Burke, as for many of his time, “computing” remained an operation most concerned with the manipulation of numerical equations. Computing was about “computation,” rather than “information.” Computers were clearly capable (as instruments, or agencies) of a high level of symbolicity, but of no animality – no emotion or passion – whatsoever.
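Burke's description remains technically accurate. The short sketch below (an illustrative addition, not from the original) renders a number as exactly the "series of yeses and noes" he describes, one choice per binary digit:

```python
def as_yes_no(n):
    """Express a number as Burke's 'series of yeses and noes':
    each binary digit records one choice between a closed
    electrical contact (yes) and an open one (no)."""
    bits = format(n, "b")  # e.g. 13 -> "1101"
    return ["yes" if b == "1" else "no" for b in bits]

print(as_yes_no(13))  # ['yes', 'yes', 'no', 'yes']
```

The "one particular order" Burke notes is essential: the same four answers in a different order name a different number.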

It is not controversial at all to argue that, from a Burkean perspective, such mechanisms as the Turing test, designed to measure “artificial intelligence,” are irrelevant, because they measure only the imitation at best – or even less than imitation, perhaps what Burke would consider the mere storage and display, or possibly “simulation,” of human symbolic action – while that action remains a function of human intelligence. The question we must ask, to test any program or machine claiming “artificial intelligence,” is not “can it behave in a manner that convinces the majority of average humans that they are communicating with another human by means of the machine?” Instead, the Burkean AI test is framed differently, emanating from an entirely different project. We must ask of the so-called intelligent machine: “Does it act, or does it merely move?” Turkle explains John Searle’s analogy of the “Chinese Room,” in which instructions written in English enable someone who does not read Chinese to pass the correct Chinese responses through a slot in reply to Chinese questions. Burke would accept this analogy, agreeing that the mere matching up of symbols without understanding has nothing to do with human intelligence, but is what the major function of computer technology has come to mean – albeit at blinding speeds. Added to this ruling, however, a Burkean critique of both the current excitement and the ongoing paranoia surrounding artificial intelligence would argue that the ultimate Turing test cannot be performed – not until we find some way to become sufficiently clairvoyant to discern whether the seemingly “intelligent” machine is acting through motives – through needs, desires, stimuli, emotion, passion, inspiration, and so on – or is merely moving through time and space, fashioned by designers to display symbolic representations of human expression.
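The "mere matching up of symbols without understanding" can itself be sketched in a few lines. The toy below (a hypothetical illustration with made-up question/response pairs, not drawn from the original) answers Chinese questions by rule-book lookup alone; nothing in it understands either the questions or the replies:

```python
# A minimal "Chinese Room": symbols matched to symbols by rule,
# with no understanding on the part of the matcher.
RULE_BOOK = {
    "你好吗？": "我很好。",    # "How are you?" -> "I am fine."
    "你是人吗？": "是的。",    # "Are you human?" -> "Yes."
}

def room(question):
    # Sheer motion, not action: look up and pass back matching symbols.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please repeat that."

print(room("你好吗？"))
```

In Burke's terms the function merely moves; whatever acting occurred happened earlier, when a human author composed the rule book.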

Demonstrating Agency in Applications for Internetworked Symbolic Action

In the beginning (1969) was ARPANET: the Advanced Research Projects Agency Network, developed and run by the United States military and defense-related research institutions. Soon, the first applications of internetworked writing began to grow. In May of 1989, North America had approximately 6 million e-mail addresses (Murray, 1995). According to the most recent Harris Poll (Taylor, 1999):

…[T]he Internet is the fastest growing technology in the history of the world. No other twentieth century technology comes close – not the telephone, the automobile, the radio, the television or the computer grew at anything like this speed. In the latest survey, conducted in December, the total number of computer users, also from all locations, has increased to 69% of all adults, from 63% in 1998, 61% in 1997, 54% in 1996 and 50% in 1990. While that rate of growth is impressive compared to the historical growth rate of other twentieth century technologies, it is very modest compared to the rise of the Internet. In 1995 less than one person in five (18%) who used a P.C. was online. Now fully 81% of all P.C. users go online. (1)

In December 1999, at least 115 million people in the USA were estimated to have internet access. Most of these users, regardless of whether or not they have programming or systems expertise, will use internetworked writing applications of four general categories: 1) electronic mail, 2) hypertext and web browser technology (including “e-commerce” and database search-string interactive forms and extensions), 3) asynchronous bulletin board systems, or topic-board technologies (these include internet newsgroups, web boards, Electronic Forum, and Usenet), and 4) synchronous “real-time” chat or MOO clients.

Like the many ways in which Burke tells us that “man” is entangled with his technologies, these technologies tend to tangle and weave into each other as users “multitask” online. In addition, these technologies weave computer users into thousands of user communities and networks, some characterized by “gift economies,” the swapping and production of various wares, images, and entertainment items (currently most notable are “Mp3” digital music files that rival compact disk quality). Other “virtual communities” are marked by the attraction not of “information,” software applications, commercial endeavors, or even stock market “day trading,” but instead, the lure is more human, less formulaic: they carry the attraction of social interaction among users. In terms of speed and volume of technological growth, the one most powerful motivator for writing with internetworked computer technology would seem to be connectivity. That is, the truly attractive feature of logging on is not necessarily the “information,” or the computer technology itself, but the sheer global interconnectedness of humanity.

According to Steve Case, founder of America Online, the world’s largest internet service provider, this is no accident: “We recognized early on that the killer app[lication]3 was people. We always viewed this as a participatory medium, not just another way to distribute data” (Nollinger, 1995).

It does not take a great leap of logic to conclude that, in the current historical rush into computer interconnectivity, people in all aspects of their lives are not trying to become machine-enhanced, 21st-century “cyborg” businesses, educators, and social beings; rather, they have seen the opportunities provided by internetworked computer applications as simply new ways to reinforce and reconnect old patterns.

This realization that people (and other animals) tend to cling to old paradigms and patterns in the face of new environments (either real or virtual) is worth noting and repeating throughout any dramatistic discussion of technology and its impact on social order and interaction. Engagement and extension of Burkean ideas about “Agency” must expand and contract: I have in a sense “expanded” the notion of “agency” to explore global issues and human assumptions about computers, computer programs, and their place in human interaction, and will also attempt to suggest ways of “contracting” the view of computer agency to particular commentary on software and internetworked writing technologies. As Michael Overington (1977) helpfully observes:

As a method, dramatism addresses the empirical questions of how persons explain their actions to themselves and others . . . . As a meta-method, dramatism turns from common sense explanatory discourse to that of the social scientist, in an effort to analyze and criticize the effect of a “connotational logic” on social scientific explanations of action. Thus, dramatism attempts to account for the motivational (explanatory) vocabulary of ordinary discourse and its influence on human action and for particular sociological vocabularies when they are used to explain human action. (93)

In the case of internetworked writing, it seems nearly impossible to extricate method from meta-method, for as the use of internetworked communications grows daily, perceptions and studies of its uses and abuses become almost drowned out by the clamorings of popular media and press whose interests – prurient, morbid, economic, sensational as they may be – at this stage have overwhelmed and shouted down the slower, more deliberate commentaries of academic and scientific researchers and critics. The voices of those attempting to advise the populace when, why, or how they can use the internet have mingled with, and sometimes been distorted by, battalions of marketing and sales representatives, all shouting “Do it now!” – even though precisely what everyone is supposed to be doing online is often unclear and frankly irrelevant to many people’s lives, businesses, and chosen interests.

I have latched onto the term “internetworked symbolic action” – a term I hope is sufficiently ugly to prohibit its adoption or proliferation much beyond this discussion – in order to represent the use of internet applications and technology to exchange and distribute messages (for now the term “information” is set aside), or perhaps I can call them “utterances,” which often are interlocked with pixellated images and digitized sound.

Four Categories of Internetworked Symbolic Action

From the internet users’ point of view, there are generally four categories of internetworked symbolic action4: 1) Electronic mail, 2) Hypertext and world wide web (web browser) technology, 3) Asynchronous Bulletin Board Systems or “newsgroups” (also called “threaded discussions” or “web boards”), and 4) Synchronous “real-time” chat or MU* (MUD: Multi-User Dimensions, or Multi-User Dungeons; MOO: Multi-User Dimensions “Object-Oriented”; MUSH: Multi-User Shared Hallucination). Each internet technology considered as an “Agency” or means of transmittal and reception of internetworked symbolic action can also be assigned numerous – perhaps hundreds – of legitimate motives. That is, in a sample of sources in composition theory, sociology, and psychology, it is easy to demonstrate the eagerness of researchers and theorists to include reasons or explanations – that is, motives – for employing internetworked writing technologies. For each of the four main categories of technologies used for internetworked symbolic action, I have catalogued a “sampler” of motives provided by writers in the fields of Computers & Composition, Sociology, Psychology, and Software Design Theory. It is possible that any reader could insert dozens or hundreds more from experience and from sampling studies in these and other disciplines. Because essays and monographs in the study of writing and composition tend in large part toward prescriptive discourse, whose motives are teacher- and pedagogy-centered, while those of internetworked writing in industry concentrate on descriptive or user-centered language, I have taken the liberty of separating “educational” uses of internetworked writing technologies from more general uses.

Electronic mail. Suggested advantages and motives for using email:

In education: {1} The ability to cross international boundaries, {2} asynchronicity, {3} file sending/storage/retrieval system, {4} discussion printout capability (Moran 1993, pp. 10-17); {5} increased teacher-student communication, {6} increased student-teacher communication (Knobel, Lankshear, Honan, and Crawford, 1998, p. 24).

In the workplace, and in the world: {7} Eliminates “phone-tag,” {8} overcomes time-zones, {9} circumvents scheduling conflicts, {10} less intrusive than phone calls, {11} participants tend to judge ideas rather than persons, {12} does not interrupt work or activities in progress, {13} spontaneity, {14} blurs boundaries between orality and literacy (Sims, 1996, pp. 41-64); {15} expands democratic discursive practices, {16} reproduces dominant social, economic, political (hierarchical) structures, {17} provides a site for the practice of literacy, {18} discursive forum for bringing about social or political change (Selfe 1996, pp. 261, 275-276); {19} telecommuting (Allen 1996, p. 231; Igbaria, Shayo, and Olfman 1998, pp. 234-236); {20} continuing education (Allen 1996, p. 231); {21} reduced hierarchy, {22} gauging others’ abilities (Tapscott, 1998, pp. 228-229); {23} virtual work/task teams, {24} virtual communities (Igbaria, Shayo, and Olfman, 1998, pp. 236-239); {25} fraud: “chain letter” pyramid schemes; {27} advertising “junk” mail (Grabosky & Smith, 1998, pp. 121, 142); {28} write quickly and easily to congressional representatives, senators (Rheingold, 1993, pp. 93-95); {29} communicate not just one-to-one, or one-to-many, but “many-to-many” (Lévy, 1998, pp. 140-142).

Hypertext and web browser (World Wide Web) technology. Suggested advantages and motives for using hypertext and web browser technologies:

Education: {1} International, global possibilities (Moran 1993, pp. 10-17); {2} represent knowledge in different ways (Hawisher & Selfe 1998, pp. 3-19); {3} exploration and discovery (Galin & Latchaw 1998, pp. 48-49); {4} supports active learning, {5} encourages collaboration, {6} facilitates cross-disciplinarity, {7} creates a virtual “information space” that is mental, not physical, {8} provides a broad social context for student writers, {9} postmodernity, a poststructural “space” (Johnson-Eilola, 1997, pp. 205, 185-188, 93, 206, 135-137, 141-143).

In the workplace and in the world: {10} New constructive ways of writing, {11} new exploratory ways of reading (Hawisher, LeBlanc, Moran, & Selfe, 1996, p. 206); {12} “screen addiction”; {13} shares 9 of Walter Ong’s “features of orality” (Welch, 1999, pp. 184-186); {14} pornography (Noonan, 1998, p. 147); {15} psychotherapy (King & Moreggi, 1998, p. 85); {16} information conduit; {17} management and control of information dissemination; {18} information sharing; {19} reader navigation (Johnson-Eilola & Selber, 1996, pp. 126, 128, 132, 133); {20} technical manuals, documentation (Wieringa, McCallum, Morgan, Yasutake, Schumacher, 1996, p. 146); {21} crime: fraud (Grabosky & Smith, 1998, p. 139); {22} online voting, presidential primary 2000 (Kamman, 2000).

Asynchronous Bulletin Board systems. Suggested advantages and motives for using asynchronous technologies such as BBSs, newsgroups, threaded web discussion boards, and asynchronous electronic forums:

Education: {1} Community of scholars, such as Bitnet and Fidonet; {2} participants begin to realize that knowledge is socially constructed (Hawisher, LeBlanc, Moran, and Selfe, 1996, pp. 77, 126); {3} printout of discussions; {4} text-based environment; {5} expand the writer’s audience; {6} sense of community; {7} demonstrate a high degree of involvement from participants; {8} encourage equitable participation; {9} decreases “leader-centered” communication (Hawisher, 1992, pp. 84-91); {10} forum for strong opinions, controversial issues (Allen, 1996, p. 220).

In the workplace and the world: All of the above, in addition to: {11} Sexual content (Noonan, 1998, pp. 157-162); {12} military communications (Hauben and Hauben, 1997, pp. 115-124); {13} builds self-esteem for people with disabilities (Tapscott, 1998, pp. 90-91; Grandin, 1995, p. 100); {14} pornographic materials free and accessible (Mehta and Plaza, 1997, p. 57); {15} discussions stored on the Usenet system, not taking up space in individuals’ internet accounts (Day, 1998, p. 164).

Synchronous “real-time” chat or MOO. Suggested advantages and motives for using real-time technologies such as Internet Relay Chat (IRC), MOO, Talk, and Interchange:

Education: {1} Interactive collaborative writing; {2} facilitates cooperative learning (Hawisher, LeBlanc, Moran, Selfe, 1996, p. 242); {3} discussion printout (Day, 1998, p. 159); {4} adds a real-time dimension to a collegial online relationship (Rheingold, 1994, pp. 177-178).

In the workplace and the world: {5} A real-time means to do “real work”; {6} a means of playing with communication (Rheingold, 1994, pp. 177-178); {7} a way to construct “fronts” or “other selves”; {8} cybersex (also known as “tinysex,” “hotchat,” or “netsex”); {9} romance; {10} “therapeutic” role-play (Tapscott, 1998, pp. 94-95, 170-171; Turkle, 1995, pp. 177-232); {11} fraudulent “pitches,” insidious marketing, disguised advertising (Grabosky and Smith, 1998, pp. 141-142); {12} “internet addiction” (Griffiths, 1998, p. 69); {13} IRC addiction; {14} boredom; {15} computer skills; {16} cultural diversity; {17} curiosity; {18} escapism; {19} exploitation; {20} friendship; {21} fun; {22} sexual flirtation and negotiation (participants “trolling” the net to arrange real sex); {23} socializing; {24} therapy (IRC survey, 2000).

While the above lists are by no means exhaustive, launching into critical analysis of any small or large collection of them will yield implications about the orientation, or rhetorical position, of writers and researchers who set out to explain some facet or range of reasons for using internetworked computer technologies as instruments for writing (internetworked symbolic action). Burke lingers over the Agency-Purpose "ratio" for good reason, readily acknowledging that tools and implements are shaped to serve our purposes, while at the same time noting that we often find new purposes for which to use previously developed instruments (GM p. 286). Thus, while our purposes shape our instruments, the reverse is true as well: we reshape our purposes, sometimes discovering new ones, in the process of mastering and using instruments developed outside of our own sphere of work and study.

For a brief example, we can isolate Cynthia Selfe’s (1996) commentary on “Theorizing e-mail for the practice, instruction, and study of literacy.” Lifted out of context, the range of features and experiences noted that cluster around electronic mail – features that also serve as reasons or explanations for using e-mail – appears unhelpful, even self-contradictory. Among her findings, Selfe notes that the use of e-mail expands democratic discursive practices, reproduces dominant social, economic, political (hierarchical) structures, provides a site for the practice of literacy, and facilitates a discursive forum for bringing about social or political change (Selfe 1996, pp. 261, 275-276). However, within the motivational framework of Selfe’s project, which is to draw the attention of researchers to a technology which has created new sites of literacy and discursive practices, the contradictions and seemingly disconnected uses and abuses of e-mail come together under a kind of ægis of the “new frontier”:

As e-mail continues to change—as it aligns with different social formations in dynamic cultural landscapes, as the political positions and concerns of groups inhabiting this space change, as the space of e-mail itself expands and is legislated, as the technology that supports these communicative exchanges is altered—the nature of theorizing and the results of theoretical analyses will change as well. (286)

Selfe’s project is fueled by an argument that e-mail is a textual, social “space,” which requires further exploration and study, even as it changes in this nascent period of its development.

Most of the studies on computers and internetworked writing remain largely uncritical of internetworked technologies in the sense that they analyze and report on pedagogical and commercial uses of existing and emerging technologies – in terms of textual, anecdotal, and statistical outcomes resulting from various strategies of implementation – rather than considering “top-down” design (there are exceptions, especially noted in various articles in the journal Computers and Composition, along with discussion and forums at the annual meetings of the Conference on College Composition and Communication, the Computers and Writing Conference, and others). If I say that we use technology uncritically, it is not to say that deliberate, informed choices are not made, nor is it to fall back into claims of industrial or technocratic oppression. I mean that in addition to our unfamiliarity with the general principles and details of computer and software architecture, we must also deal with the "opaque" nature of computer technologies – we can't open the hood and look inside, so to speak (Norman, 1993, p. 79). We can therefore find ourselves either rejecting out of hand, in exasperation, our place in the design, development, and modification of technologies, excusing ourselves from the critical loop, or struggling humbly to adapt to their foreign designs, accepting or adopting the discursive habits of the original designers as though it were a priori that the reply to every request for changes or modifications is justifiably some version of “this is how this technology works.”

Technologies arrive often as a result of military or industrial development. Advances in civil engineering, manufacturing, information technology, food production and product distribution technologies result from economic and political prioritization, which is to say, that is where the money is. Technologies developed in one sector of a society often benefit others through adaptation and extension of principles, infrastructure, and production resources. Word processing technology, for example, was developed originally not for literary or educational purposes, but for the purpose of "information exchange" in military and scientific projects. If we allow a broad definition of "word processing" to mean the general storage and display of alphanumeric characters on printout or video display, then we might even allow that word storage and display technology proceeds from a set of scientific and military-industrial assumptions about the nature and uses of written language. Burke argues in his discussion of "Scope and Reduction" that the investment in technology made in some sectors of a society, in developments and instruments that are profitable and good for some purposes, will over time create problems as they migrate into other areas:

It is in its becoming that technology most fully represents the human agent, since his inventing of it is an act, and a rational act. In its state of being (or perhaps we might better say its state of having become) it can change from a purpose to a problem. And surely much of the anguish in the modern world derives from the paradoxical fact that machinery, as the embodiment of rationality in its most rational moments, has in effect translated rationality itself from the realm of ideal aims to the realm of material requirements. Few ironies are richer in complexities than the irony of man's servitude to his mechanical servants. For though it is nothing less than an act of genius to invent a machine, it is the nagging drudgery of mere motion to feed one. (GM, pp. 109-110; emphasis in original)

Burke’s stance on the potential for man’s own natural tendency to separate himself from his “natural state” by implements of his own design presents a serious conundrum, for as Charles Darwin, Walter Ong, and others – even Burke – have noted, man’s natural tendency is to create and use technologies. As such, this state of techno-obsession could be said to inhere in the landscape of humanity, to be a part of the scene, or the totality of human social existence. Even as we consider the computer and technologies that support internetworked symbolic action, we must eventually lean in closer, to focus on the machine itself: our only way in, our only point of contact or site of interaction, our way of “seeing” what it is we are doing, becomes synecdochically embodied in the screen.

Notes: Chapter 2

1. Specifically, Burke argues that when we investigate causes and motives, “God” can be omitted from our calculations since it is an invariant term, present as the ground of all motives. And we can concentrate upon the search for terms that help us to detect concomitant variations, for it is by the discovery of these that we shall learn how to produce or avoid the specific contexts that serve as determinants. A scientist might happen to believe in a personal God, and might even pray to God for the success of his experiments. In such an act of prayer, of course, he would be treating God as a variable. Yet, when his prayer was finished, and he began his experiments, he would now, qua scientist, treat “God” as an invariant term, as being at most but the over-all name for the ultimate ground of all experience and all experiments, and not a name for the particularities of local context with which the scientific study of conditions, or correlations, is concerned. (GM 98).

2. Turing’s now-famous essay, “Computing Machinery and Intelligence,” features a comprehensive list of arguments – some with which Turing sympathizes, some which he approaches critically. The objections are labeled somewhat cryptically: (1) The Theological Objection; (2) The 'Heads in the Sand' Objection; (3) The Mathematical Objection; (4) The Argument from Consciousness; (5) Arguments from Various Disabilities; (6) Lady Lovelace's Objection (machines cannot give rise to surprises); (7) Argument from Continuity in the Nervous System; (8) The Argument from Informality of Behaviour; (9) The Argument from Extra-Sensory Perception.

3. In Rewired, David Hudson echoes Steve Case as he argues that interconnectivity is the key to the success and astounding growth of internet use and technological advances:

…[I]t was people’s interest in and attraction to one another that made the Internet take off like it did. People are still this technology’s greatest asset, its “killer app,” the term applied to a function or piece of software that makes it irresistible. (327)

4. It is possibly appropriate to acknowledge that from a programmer’s point of view, or a system analyst’s point of view, these four categories lump together software and hardware that may be “incompatible” or different from each other. Thus, I emphasize that these technologies serve in each category to facilitate the same kind of internetworked communications for users. I distinguish and select these technologies with the full awareness that the changes in development of network applications and hardware are moving so fast that by the time this reaches a reader, one or all of these categories may have undergone changes, become proprietary and dropped off in usage, been replaced, or simply disappeared.
