
This is a contribution from Pragmatics & Cognition 13:3 © 2005. John Benjamins Publishing Company

Robotics, philosophy and the problems of autonomy*

Willem F.G. Haselager
Nijmegen Institute for Cognition and Information (NICI)

Robotics can be seen as a cognitive technology, assisting us in understanding various aspects of autonomy. In this paper I will investigate a difference between the interpretations of autonomy that exist within robotics and philosophy. Based on a brief review of some historical developments I suggest that within robotics a technical interpretation of autonomy arose, related to the independent performance of tasks. This interpretation is far removed from philosophical analyses of autonomy focusing on the capacity to choose goals for oneself. This difference in interpretation precludes a straightforward debate between philosophers and roboticists about the autonomy of artificial and organic creatures. In order to narrow the gap I will identify a third problem of autonomy, related to the issue of what makes one's goals genuinely one's own. I will suggest that it is the body, and the ongoing attempt to maintain its stability, that makes goals belong to the system.
This issue could function as a suitable focal point for a debate in which work in robotics can be related to issues in philosophy. Such a debate could contribute to a growing awareness of the way in which our bodies matter to our autonomy.

Keywords: autonomy, robotics, philosophy, embodiment

1. Introduction

Developments within Artificial Intelligence (AI) influence the way we conceive ourselves; they help to shape and change our self-image as, amongst others, intelligent and autonomous creatures. The field of robotics consists of a particularly noticeable set of tools for the study of basic features of human and animal cognition and behavior (Pfeifer 2004). In this paper I wish to approach robotics, as a cognitive technology (as described by Dascal 2004), from a philosophical perspective and examine the way it plays, as well as could play, a role in the understanding of ourselves as autonomous agents.

Over the years a growing number of claims have been made within AI regarding the existence of autonomous agents. In fact, autonomous agents appear to be a new research paradigm within AI. Candidates for autonomous agents range from software entities (such as search bots on the web) to sophisticated robots operating on Mars. At first sight, robots appear to be reasonable contenders for autonomy for several reasons. First of all, robots are embodied in the sense that their artificial bodies permit real action in the real world. In the case of evolutionary robotics, generally speaking, simulated robots operate in virtual reality, but even in these cases at least certain bodily characteristics and physical features of objects in the world are central to the simulation.
Generally, within robotics the focus is on the interaction between morphology, control system, and world. Thus, in robotics the emphasis is more on bodily action, whereas the traditional (purely information-processing) AI approaches center more on thinking. The capacity to 'do something' makes robots relevant to the topic of autonomy. Secondly, the robots' behavior is often emergent in the sense that it is not specifically programmed for (Clark 2001), and therefore invites characterizations such as 'surprising', 'original', or even 'unwanted'. Moreover, the behavior of robots can be based on their history of interaction with the environment: they can learn (change their behavior over time as a result of the outcome of earlier actions), instead of blindly repeating fixed behavioral patterns. Thus, robots are not only doing things, but often seem to be doing them 'in their own way'. Finally, in many cases when one is observing robots it is hard to refrain from a certain amount of empathy. One immediately starts wondering about what the robot is doing or trying to achieve. Even if they fail, robots often at least seem to have struggled to achieve a goal.

Obviously, many of the words and phrases used above (like action, emergence, original, learn, and struggled to achieve a goal) are ambiguous in the sense that one could argue that these concepts apply to robots only metaphorically, rather than literally, as we generally assume they do in the case of human beings and animals. For many people, robots' bodies, control systems, and possibilities for learning and adaptation, and therefore ultimately their behavior, are too dependent on human programming and design for us to speak of genuine autonomy.

A complicating factor in addressing this issue of metaphorical versus literal use of the concept of autonomy is that the meaning of 'autonomy' and 'agency' is far from clear. Like so many other concepts that are central to our
self-understanding, the notion of autonomy is difficult to define. Rather, it seems to have a variety of meanings, which at best bear a family resemblance to one another, and apply to a great diversity of cases. Similarly, there does not seem to be an absolute cut-off point between autonomous and non-autonomous behavior, but rather a fuzzy border. The only thing that seems clear from the outset is that overly permissive and overly restrictive interpretations of autonomy need to be avoided. For instance, an understanding of autonomy that would allow thermostats to qualify, or one that would permit only adult human beings to do so, seems inadequate. We may end up with one of them, but to start the investigation on the basis of such extreme interpretations would be question-begging.

Adding to the conceptual uncertainty is, as I will try to show below, the difference in interpretations of autonomy between robotics and philosophy. For some this may mean that the debate about the autonomy of robots is a non-starter. I would like to suggest, however, that it is precisely the interplay between empirical research in robotics and conceptual analysis in philosophy that may help to clarify the confusion. Robotics may profit from a philosophical analysis of the concept of autonomy, because this may lead to a fuller specification of the conditions that need to be fulfilled before robots can be said to have passed the benchmark. On the other hand, philosophy might gain in its attempt to clarify what is (and is not) meant by autonomy by receiving from robotics concrete and challenging examples of behavior as test cases. Therefore, even though the confusions involved in the issue of the autonomy of robots are considerable, the debate itself can be fruitful for both philosophy and robotics.
This debate, in turn, may ultimately deepen our understanding of the important role our bodies play in our autonomy.

2. Robots, autonomy and intervention

Historically, robotics grew out of the development of teleoperation (Murphy 2000), i.e., the operation of tools (often more or less like elaborate pliers) from a distance. This way, human operators could avoid having to be present in dangerous circumstances. They manipulated a local device that steered the remote, a tool that functioned, for instance, in an area with very high temperatures. In the late 1940s teleoperation was used in the first nuclear reactors. These teleoperators were improved by including more feedback to the human operators, so that they would get a better feeling of what was happening at the remote's end. Manipulating the local device to direct the remote was very tiring and time-consuming, and soon the idea arose to have simple and repetitive operations performed automatically, without human steering. This became known as supervisory control: the operator monitors the process and gives high-level commands (e.g., 'turn') while the remote performs parts of the task on its own. At all times the human operators could take over the manipulations. This developed into part-time teleoperation and a semi-autonomous remote. It is no great exaggeration to say that the origins of the roboticist's use of the phrase 'autonomous agents' lie here. Increasing the autonomy of the remote simply means reducing the need for human supervision and intervention. Autonomy is interpreted relative to the amount of on-line (while the robot is operating) involvement of human operators. Robots go beyond telemanipulators in that their sensorimotor capacities enable them to deal with a greater variety of circumstances and events without on-line human involvement.
Robots are considered to be capable of acting, in the sense that they do not merely undergo events or have effects (like stones rolling down a hill). The robots' operations can be reactive, responding to what is going on (taxes and tropisms), but also proactive, in pursuit of the goals that are active within them, thereby becoming less environmentally driven.