Lucy Suchman, 'Human Machine Reconsidered', Sociology
Department of Sociology, Lancaster University, On-Line Papers

Human/Machine Reconsidered

Lucy Suchman
Department of Sociology, Lancaster University
Lancaster, LA1 4YL, UK

Copyright: This online paper may be cited or briefly quoted in line with the usual academic conventions. You may also download it for your own personal use. This paper must not be published elsewhere (e.g. mailing lists, bulletin boards, etc.) without the author's explicit permission. Please note that:
- this is a draft;
- if you copy this paper you must include this copyright note;
- this paper must not be used for commercial purposes or gain in any way;
- you should observe the conventions of academic citation in a version of the following form: Lucy Suchman, 'Human/Machine Reconsidered' (draft), published by the Department of Sociology, Lancaster University at: http://www.lancaster.ac.uk/sociology/soc040ls.html

Author's note: This paper is a work-in-progress that will be developed further as part of a second, revised edition of Plans and Situated Actions: The Problem of Human-Machine Communication (Cambridge University Press, originally published in 1987). A version of this paper was presented at the conference Sociality/Materiality, Brunel University, UK, September 9-11, 1999. A previous version was published in "The Japanese Bulletin of Cognitive Science" in January 1998, and as an afterword to the Japanese translation of the original book, published in Japan in 1999.

With this essay I want to rejoin a discussion in which I first participated over ten years ago, on the question of nonhuman – particularly machine – agency.
My renewed interest in questions of agency is inspired by recent developments both in the area of interactive computing and in the debate over human-nonhuman relations within social studies of science and technology. What I propose to do is another attempt at working these two projects together, in what I hope will be a new and useful way. The newness is not so much a radical shift in how we construct human-machine boundaries, as a reconsideration of those boundaries based in critical reflection on my own previous position, in light of what has happened since. What follows is the beginnings of the argument.

Machine agency

As an aspect of the late twentieth-century technoscientific imaginary, the sociality of machines is well established.(1) Beginning with early work on machine intelligence in the 1950s, our conception of machines has expanded from the instrumentality assigned them in craft and industrial contexts to include a discourse of the machine as acting and interacting other. Within the project of designing intelligent machines in the 1970s and 1980s, at the time when I first encountered this project directly, the premise was being established that computational artifacts just are interactive, in roughly the same way that we are, albeit with some more and less obvious limitations.
However ambitious the goal, the problem in this view was a fairly straightforward one: overcoming the limitations of machines by encoding more and more of the cognitive abilities attributed to humans into them.(2) The less visible and somewhat restrained AI projects of the 1990s play down the personification of machines in favor of biologically inspired computer science and engineering initiatives aimed at recreating artificial life forms, via the technologies of neural networks, genetic algorithms, situated robotics, and the like.(3) These new developments shift the project of machine intelligence away from what is now referred to as "good old-fashioned symbolic information processing" AI, toward a technical practice based in more foundational metaphors of biosocial evolution and, in Sarah Franklin's phrase, "life itself."(4)

Nonetheless, attributions of human-like machine agency seem as alive as ever in both professional and popular discourse. The growth of the internet in particular has brought with it a renewed interest in the idea of personified computational artifacts attributed with a capacity for intelligent, interactive behavior. The dominant form of this project today is the promotion of computational agents that will serve as a kind of personal representative or assistant to their human users. The idea of personal agents was animated perhaps most vividly in the form of "Phil," the bow-tied agent in Apple Corporation's video "The Knowledge Navigator." But more reduced implementations of this fantasy now abound in the form of "knowbots" and other software agents. One of the best-known proponents of such artifacts is Pattie Maes of the MIT Media Lab.
In a 1995 talk titled "Interacting with Virtual Pets and other Software Agents," Maes assures us that "the home of the future" will be "half real, half virtual," and that:

    the virtual half of our home won't just be a passive data landscape waiting to be explored by us. There will be active entities there that can sense the environment ... and interact with us. We call these entities software agents.

Agents are personified in Maes' interface as cartoon faces, attributed with capacities of alertness, thinking, surprise, gratification, confusion and the like. As Maes describes them:

    Just like real creatures, some agents will act as pets and others will be more like free agents. Some agents will belong to a user, will be maintained by a user, and will live mostly in that user's computer. Others will be free agents that don't really belong to anyone. And just like real creatures, the agents will be born, die and reproduce ... I'm convinced that we need [these agents] because the digital world is too overwhelming for people to deal with, no matter how good the interfaces we design ... Basically, we're trying to change the nature of human computer interaction ... Users will not only be manipulating things personally, but will also manage some agents that work on their behalf.

Maes' text begs for a thorough rhetorical analysis.(5) For my present purposes, however, I'll simply note the kind of rediscovery of the interaction metaphor (still, interestingly, in the future tense) evident here. Moreover, thanks to the distributive powers of the internet, the computer other with whom we are to interact has proliferated into populations of specialist providers – agents, assistants, pets – whose reasons for being are to serve and comfort us, to keep us from being overwhelmed in the future workplace/homeplace of cyberspace. What appears constant is the value placed on the artifact's capacity to be autonomous, on the one hand, and just what we want, on the other.
We want to be surprised by our machine progeny, but not displeased. My interest is not to argue the question of machine agency from first principles, but rather to take as my starting point the project of tracing how the effect of machines-as-agents is being generated. This includes the translations that render former objects as emergent subjects, shifting interests and concerns across the human/artifact boundary. We can then move on to questions of what is at stake in these particular productions-in-progress, and why we might want to resist and refigure them.

My original analysis/critique

Let me return for a moment to my earlier efforts to explore these relations of human and machine. I arrived at Xerox Palo Alto Research Center (PARC) in 1979, as a student of anthropology interested in relations of technology and work, and with a background in ethnomethodology and interaction analysis. My colleagues at PARC in the early 1980s were leading proponents of "knowledge representation," which included efforts to encode "goals" and "plans" – computational control structures that, when executed, would lead an artificially intelligent machine imbued with the necessary knowledge to take appropriate courses of action. Around 1981 they took on the project of designing an "intelligent, interactive" computer-based interface that would serve as a kind of expert advisor in the use of a photocopier characterized by its intended users as "too complex." Their strategy was to take the planning model of human action and communication then prevalent within the AI research community as a basis for the design. My project became a close study of a series of videotaped encounters in which diverse people, including eminent computer scientists, attempted to operate the copier with the help of the prototype interactive interface.
While the troubles that people encountered in trying to operate the machine shifted with the use of this "expert system" as an interface, the task seemed as problematic as ever. To understand those troubles better, I developed a simple transcription device for the videotapes, based in the observation that, in watching them, I often found myself able to "see" the difficulties that people were encountering, which suggested in turn how they might be helped. If I had been in the room beside them, in other words, I could see how I might have intervened. At the same time, I could see that the machine appeared quite oblivious to these seemingly obvious difficulties. The question then became: What resources was I, as a kind of fully-fledged intelligent observer, making use of in my analyses? And how did they compare to the resources available to the machine? The answer, I quickly realized, was at least in part that the machine had access only to a very small subset of the observable actions of its users.