
HUMAN-CENTERED COMPUTING. Editors: Robert R. Hoffman, Jeffrey M. Bradshaw, and Kenneth M. Ford, Institute for Human and Machine Cognition, [email protected]

Beyond Asimov: The Three Laws of Responsible Robotics

Robin R. Murphy, Texas A&M University
David D. Woods, Ohio State University

Since their codification in 1950 in the collection of short stories I, Robot, Isaac Asimov's three laws of robotics have been a staple of science fiction. Most of the stories assumed that the robot had complex perception and reasoning skills equivalent to a child's and that robots were subservient to humans. Although the laws were simple and few, the stories attempted to demonstrate just how difficult they were to apply in various real-world situations. In most situations, although the robots usually behaved "logically," they often failed to do the "right" thing, typically because the particular context of application required subtle adjustments of judgment on the part of the robot (for example, determining which law took priority in a given situation, or what constituted helpful or harmful behavior).

The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society's expectations about how robots should act around humans. For instance, the media frequently refer to human–robot interaction in terms of the three laws. They've been the subject of serious blogs, events, and even scientific publications. The Singularity Institute organized an event and Web site, "Three Laws Unsafe," to try to counter public expectations of robots in the wake of the movie I, Robot. Both the philosophy1 and AI2 communities have discussed ethical considerations of robots in society using the three laws as a reference, with a recent discussion in IEEE Intelligent Systems.3 Even medical doctors have considered robotic surgery in the context of the three laws.4

With few notable exceptions,5,6 there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws. And there appears to be even less serious discussion as to whether the laws are actually viable as a framework for human–robot interaction, outside of cultural expectations.

Following the definitions in Moral Machines: Teaching Robots Right from Wrong,7 Asimov's laws are based on functional morality, which assumes that robots have sufficient agency and cognition to make moral decisions. Unlike many of his successors, Asimov is less concerned with the details of robot design than with exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy. He uses the situations as a foil to explore issues such as

• the ambiguity and cultural dependence of language and behavior—for example, whether what appears to be cruel in the short run can actually become a kindness in the longer term;
• social utility—for instance, how different individuals' roles, capabilities, or backgrounds are valuable in different ways with respect to each other and to society; and
• the limits of technology—for example, the impossibility of assuring timely, correct actions in all situations and the omnipresence of trade-offs.

In short, in a variety of ways the stories test the lack of resilience in human–robot interactions.


The assumption of functional morality, while effective for entertaining storytelling, neglects operational morality. Operational morality links robot actions and inactions to the decisions, assumptions, analyses, and investments of those who invent and make robotic systems and of those who commission, deploy, and handle robots in operational contexts. No matter how far the autonomy of robots ultimately advances, the important challenges of these accountability and liability linkages will remain.8

This essay reviews the three laws and briefly summarizes some of the practical shortcomings—and even dangers—of each law for framing human–robot relationships, including reminders about what robots can't do. We then propose an alternative, parallel set of laws based on what humans and robots can realistically accomplish in the foreseeable future as joint cognitive systems, and on their mutual accountability for their actions from the perspectives of human-centered design and human–robot interaction.

Applying Asimov's Laws to Today's Robots

When we try to apply Asimov's laws to today's robots, we immediately run into problems. Just as for Asimov in his short stories, these problems arise from the complexities of situations where we would use robots, the limits of physical systems acting with limited resources in uncertain, changing situations, and the interplay between different social roles as different agents pursue multiple goals.

First Law

Asimov's first law of robotics states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This law is already an anachronism given the military's weaponization of robots, and discussions are now shifting to the question of whether weaponized robots can be "humane."9,10 Such weaponization is no longer limited to situations in which humans remain in the loop for control. The South Korean government has published videos on YouTube of robotic border-security guards. Scenarios have been proposed where it would be permissible for a robot to fire upon anything moving (presumably targeting humans) without direct human permission.11

Even if current events hadn't made the law irrelevant, it's moot because robots cannot infallibly recognize humans, perceive their intent, or reliably interpret contextualized scenes. A quick review of the computer vision literature shows that scientists continue to struggle with many fundamental perceptual processes. Current commercial security packages for recognizing the face of a person standing in a fixed position continue to fall short of expectations in practice. Many robots that "recognize" humans use indirect cues such as heat and motion, which work only in constrained contexts. These problems confirm Norbert Wiener's warnings about such failure possibilities.8 Just as he envisioned many years ago, today's robots are literal-minded agents—that is, they can't tell if their world model matches the world they're really in.

All this aside, the biggest problem with the first law is that it views safety only in terms of the robot—that is, the robot is the responsible safety agent in all matters of human–robot interaction. While some speculate on what it would mean for a robot to be able to discharge this responsibility, there are serious practical, theoretical, social-cognitive, and legal limitations.8,12 For example, from a legal perspective the robot is a product, so it's not the responsible agent. Rather, the robot's owner or manufacturer is liable for its actions. Unless robots are granted a person-equivalent status, somewhat as corporations are now legally recognized as individual entities, it's difficult to imagine standard product liability law not applying to them. When a failure occurs, violating Asimov's first law, the human stakeholders affected by that failure will engage in the processes of causal attribution. Afterwards, they'll see the robot as a device and will look for the person or group who set up or instructed the device erroneously or who failed to supervise (that is, stop) the robot before harm occurred. It's still commonplace after accidents for manufacturers and organizations to claim the result was due only to human error, even when the system in question was operating autonomously.8,13

Accountability is bound up with the way we maintain our social relationships. Human decision making always occurs in a context of expectations that one might be called to account for his or her decisions.


Expectations for what's considered an adequate explanation, and the consequences for people when their explanation is judged inadequate, are critical parts of accountability systems—a reciprocating cycle of being prepared to provide an accounting for one's actions and being called by others to provide an account. To be considered moral agents, robots would have to be capable of participating personally in this reciprocating cycle of accountability—an issue that, of course, concerns more than any single agent's capabilities in isolation.

Second Law

Asimov's second law of robotics states, "A robot must obey orders given to it by human beings, except where such orders would conflict with the first law." Although the law itself takes no stand on how humans would give orders, Asimov's robots relied on their understanding of verbal directives. Unfortunately, robust natural language understanding still lies just beyond the frontiers of today's AI.14 It's true that, after decades of research, computers can now construct words from phonemes with some consistency, as improvements in voice dictation and call centers attest. Language-understanding capabilities also work well for specific types of well-structured tasks. However, the goal of meaningful machine participation in open-ended conversational contexts remains elusive. Additionally, we must account for the fact that not all directives are given verbally. Humans use gestures and add affect through body posture, facial expressions, and motions for clarification and emphasis. Indeed, high-performance, experienced teams use highly pointed and coded forms of verbal and nonverbal communication in fluid, interdependent, and idiosyncratic ways.

What's more interesting about the second law from a human–robot interaction standpoint is that, at its core, it almost captures the more important idea that intelligent robots should notice and take stock of humans (and that the people robots encounter or interact with can notice pertinent aspects of robots' behavior).15 For example, is it acceptable for a robot to merely not hit a person in a hospital hall, or should it conform to social convention and acknowledge the person in some way ("excuse me" or a nod of a camera pan-tilt)? Or if a robot operating in public places included two-way communication devices, could a bystander recognize that the robot provided a means to report a crime or a fire?

Third Law

The third law states, "A robot must protect its own existence as long as such protection does not conflict with the first or second law." Because today's robots are expensive, you'd think designers would be naturally motivated to incorporate some form of the third law into their products. For example, even the inexpensive iRobot Roomba detects stairs that could cause a fatal fall. Surprisingly, however, many expensive commercial robots lack the means to fully protect their owners' investment.

An extreme example of this is the design of robots for military applications or bomb squads. Such robots are designed to be teleoperated by a person who bears full responsibility for all safety matters. Human-factors studies show that remote operators are immediately at a disadvantage, working through a mediated interface with a time delay. Worse yet, remote operators are required to operate the robot through poor human–computer interfaces and in contexts where the operator can be fatigued, overloaded, or under high stress. As a result, when an abnormal event occurs, they may be distracted or not fully engaged and thus might not respond adequately in time. The result for a robot is akin to expecting an astronaut on a planet's surface to request and wait for permission from mission control to perform even simple reflexes such as ducking.

What is puzzling about today's limited attempts to conform to the third law is that there are well-established technological solutions for basic robot survival activities that work for both autonomous and human-controlled robots. For instance, since the 1960s we've had technology to assure guarded motion, where the human drives the robot but onboard software will not allow the robot to make potentially dangerous moves (for example, collide with obstacles or exceed speed limits or boundaries) without explicit orders (an implicit invocation of the second law). By the late 1980s, guarded motion was encapsulated into tactical reactive behaviors, essentially giving robots reflexes and tactical authority. Perhaps the most important reason that guarded motion and reflexive behaviors haven't been more widely deployed is that they require additional sensors, which would add to the cost. This increase in cost may not appear justified to customers, who tend to be wildly overconfident that trouble and complexities outside the bounds of expected behavior rarely arise.
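As an illustration of how simple a basic guarded-motion reflex can be, consider the following minimal Python sketch. It is a sketch only, under assumed names and parameters of our own choosing (GuardedMotionFilter, min_clearance_m, and so forth), not the API of any fielded robot; a real system would fuse many sensors rather than consult a single range reading.

from dataclasses import dataclass

@dataclass
class VelocityCommand:
    linear_m_s: float      # forward speed requested by the human operator
    angular_rad_s: float   # turn rate requested by the human operator

class GuardedMotionFilter:
    """Onboard reflex: clamp or veto operator commands that would violate
    the safety envelope, without waiting for anyone's permission."""

    def __init__(self, max_speed_m_s=1.0, min_clearance_m=0.5):
        self.max_speed_m_s = max_speed_m_s
        self.min_clearance_m = min_clearance_m

    def filter(self, cmd, nearest_obstacle_m):
        # Enforce the speed limit regardless of what was ordered.
        linear = max(-self.max_speed_m_s,
                     min(cmd.linear_m_s, self.max_speed_m_s))
        # Veto forward motion when clearance is too small;
        # turning in place remains allowed.
        if nearest_obstacle_m < self.min_clearance_m and linear > 0.0:
            linear = 0.0
        return VelocityCommand(linear, cmd.angular_rad_s)

# The operator floors the throttle toward a wall 0.3 m away;
# the reflex overrides the order.
guard = GuardedMotionFilter()
safe = guard.filter(VelocityCommand(2.0, 0.0), nearest_obstacle_m=0.3)
assert safe.linear_m_s == 0.0

The point of the sketch is that the reflex sits between the order and the actuators: the human still drives, but the onboard software holds tactical authority over moves it can recognize as dangerous.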


The Alternative Three Laws of Responsible Robotics

To address the difficulties of applying Asimov's three laws to the current generation of robots while respecting the laws' general intent, we suggest the three laws of responsible robotics.

Alternative First Law

Our alternative to Asimov's first law is "A human may not deploy a robot without the human–robot work system meeting the highest legal and professional standards of safety and ethics." Since robots are indeed subject to safety regulations and liability laws, the requirement of meeting legal standards for safety would seem self-evident. For instance, the medical-device community has done extensive research to validate robot sensing of scalpel pressures and tissue contact parameters, and it invests in failure mode and effect analyses (consistent with FDA medical-device standards).

In contrast, mobile roboticists have a somewhat infamous history of disregarding regulations. For example, robot vehicles operating on public roads, such as those used in the DARPA Urban Grand Challenge, are considered experimental vehicles under US federal and state transportation regulations. Deploying such vehicles requires voluminous and tedious permission applications. Regrettably, the 1995 CMU "No Hands Across America" team neglected to get all the appropriate permissions while driving autonomously from Pittsburgh to Los Angeles and was stopped in Kansas. The US Federal Aviation Administration makes a clear distinction between flying unmanned aerial vehicles as a hobby and flying them for R&D or commercial practices, effectively slowing or stopping many R&D efforts. In response to these difficulties, a culture preferring "forgiveness" to "permission" has grown up in some research groups. Such attitudes indicate a poor safety culture at universities that could, in turn, propagate to government or industry. On the positive side, the robot competitions sponsored by the Association for Unmanned Vehicle Systems International are noteworthy in their insistence on having safe areas of operation, clear emergency plans, and safety officers present.

Meeting the minimal legal requirements is not enough—the alternative first law demands the highest professional ethics in robot deployment. A failure or accident involving a robot can effectively end an entire branch of robotics research, even if the operators aren't legally culpable. Responsible communities should proactively consider safety in the broadest sense, and funding agencies should find ways to increase the priority and scope of research funding specifically aimed at relevant legal concerns.

The highest professional ethics should also be applied in product development and testing. Autonomous robots have known vulnerabilities to problems stemming from interrupted wireless communications. Signal reception is impossible to predict, yet robust "return to home if signal lost" and "stop movement if GPS lost" functionality hasn't yet become an expected component of built-in robot behavior. This means robots are operating counter to reasonable and prudent assumptions. Worse yet, when they're operating experimentally, robots often encounter unanticipated factors that affect their control. Simply saying an unfortunate event was unpredictable doesn't relieve the designers of responsibility. Even if a specific disturbance is unpredictable in detail, the fact that there will be disturbances is virtually guaranteed, and designing for resilience in the face of these disturbances is fundamental.

As a matter of professional common sense, robot design should start with safety first, then add the interesting software and hardware. Robots should carry "black boxes" or recorders to show what they were doing when a disturbance occurred, not only for the sake of an accident investigation but also to trace the robots' behavior in context to aid diagnosis and debugging. There should be a formal safety plan and checklists for contingencies. These do not have to be extensive and time-consuming to be effective. A litmus test for developers might be "If a group of experts from the IEEE were to write about your robot after an accident, what would they say about system safety and your professionalism?"
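The failsafe behaviors and black-box recording just described fit naturally into a small supervisory loop. The following Python sketch shows one possible shape for such a loop; the mode names, timeout value, and logging format are illustrative assumptions on our part, not a standard or any product's actual interface.

import time
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    RETURN_HOME = auto()     # comms lost but navigation still trustworthy
    HOLD_POSITION = auto()   # stop movement and wait for recovery

class FailsafeSupervisor:
    def __init__(self, link_timeout_s=2.0):
        self.link_timeout_s = link_timeout_s
        self.last_packet_time = time.monotonic()
        self.black_box = []   # (timestamp, event) records for investigators

    def log(self, event):
        # Black-box entry: what the robot decided, and when, in context.
        self.black_box.append((time.monotonic(), event))

    def on_packet(self):
        # Called whenever a command packet arrives from the operator.
        self.last_packet_time = time.monotonic()

    def decide(self, gps_fix_ok):
        link_lost = (time.monotonic() - self.last_packet_time
                     > self.link_timeout_s)
        if not gps_fix_ok:
            self.log("GPS lost: holding position")
            return Mode.HOLD_POSITION   # can't navigate home reliably
        if link_lost:
            self.log("signal lost: returning home")
            return Mode.RETURN_HOME
        return Mode.NORMAL

Nothing here is technically demanding; the argument of the alternative first law is that shipping without some equivalent of this loop, and without the recorder, is a failure of professional ethics rather than an engineering impossibility.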
Re- what would they say about system cles requires voluminous and tedious sponsible communities should proac- safety and your professionalism?” permission applications. Regrettably, tively consider safety in the broadest Fundamentally, the alternative first the 1995 CMU “No Hands Across sense, and funding agencies should law places responsibility for safety America” team neglected to get all find ways to increase the priority and and efficacy on humans within the appropriate permissions while driv- scope of research funding specifically larger social and environmental con- ing autonomously from Pittsburgh aimed at relevant legal concerns. text in which robots are developed, to Los Angeles, and were stopped in The highest professional ethics deployed, and operated. Kansas. The US Federal Aviation Ad- should also be applied in product de- ministration makes a clear distinc- velopment and testing. Autonomous Alternative Second Law tion between flying unmanned aerial robots have known vulnerabilities to As an alternative to Asimov’s sec- vehicles as a hobby and flying them problems stemming from interrupted ond law, we propose the follow- for R&D or commercial practices, wireless communications. Signal re- ing: “A robot must respond to hu- effectively slowing or stopping many ception is impossible to predict, yet mans as appropriate for their roles.” R&D efforts. In response to these robust “return to home if signal lost” The capability to respond appropri- difficulties, a culture preferring “for- and “stop movement if GPS lost” ately—responsiveness—may be more


The capability to respond appropriately—responsiveness—may be more important to human–robot interaction than the capability of autonomy. Not all robots will be fully autonomous under all conditions. For example, a robot might be constrained to follow waypoints but will be expected to generate appropriate responses to people it encounters along the way. Responsiveness depends on the social environment: the kinds of people, and their expectations, that a robot might encounter in its work envelope. Rather than assume the relationship is hierarchical, with the human as the superior and the robot as the subordinate so that all communication is a type of order, the alternative second law states that robots must be built so that the interaction fits the relationships and roles of each member in a given environment. The relationship determines the degree to which a robot is obligated to respond. It might ignore a hacker completely. Orders exceeding the authority of the speaker might be disposed of politely ("please have your superior confirm your request") or with a warning ("interference with a law enforcement robot may be a violation"). Note that defining "appropriate response" may address concerns about robots being abused.16

The relationship also determines the mode of the response. How the robot signals or expresses itself should be consistent with that relationship. Casual relationships might rely on natural language, whereas trained teams performing specific tasks could coordinate activities through other signals such as body position and gestures.

The requirement for responsiveness captures a new form of autonomy (not as isolated action but as the more difficult behavior of engaging appropriately with others). However, developing robots' capability for responsiveness requires a significant research effort, particularly in how robots can perceive and identify the different members, roles, and cues of a social environment.
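To make "respond as appropriate for their roles" concrete, the following Python sketch encodes the examples above as a tiny role-sensitive response policy. The roles, authority levels, and canned replies are invented for illustration; a real policy would be far richer and would be tied to authentication of who is speaking.

from enum import IntEnum
from typing import Optional

class Authority(IntEnum):
    NONE = 0        # unauthenticated source, e.g., a hacker
    BYSTANDER = 1
    OPERATOR = 2
    SUPERVISOR = 3

def respond(order: str, speaker: Authority,
            required: Authority) -> Optional[str]:
    if speaker is Authority.NONE:
        return None   # ignore a hacker completely: no response at all
    if speaker < required:
        if required - speaker == 1:
            # Just short of authorized: dispose of the order politely.
            return "Please have your superior confirm your request."
        # Far outside the speaker's authority: respond with a warning.
        return ("Interference with a law enforcement robot "
                "may be a violation.")
    return f"Acknowledged: {order}"

# A bystander gives a supervisor-level order to a law-enforcement robot.
print(respond("stand down", Authority.BYSTANDER, Authority.SUPERVISOR))

The interesting design work is not the dispatch table itself but deciding, for each environment, which roles exist and what response each role is owed.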
Alternative Third Law

Our third law is "A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws." This law specifies that a human–robot system should be able to transition smoothly from whatever degree of autonomy or roles the robots and humans were inhabiting to a new control relationship, given the nature of the disruption, impasse, or opportunity encountered or anticipated. When developers focus narrowly on the goal of isolated autonomy and fall prey to overconfidence by underestimating the potential for surprises to occur, they tend to minimize the importance of transfer of control. But bumpy transfers of control have been noted as a basic difficulty in human interaction with automation that can contribute to failures.17

The alternative third law addresses situated autonomy and smooth transfer of control, both of which interact with the prescriptions of the other laws. Consistency with the second law requires that humans in a given role might not always have complete control of the robot (for example, when conditions require very short reaction times, a pilot may not be allowed to override some commands generated by algorithms that attempt to provide envelope protection for the aircraft). This in turn implies that an aspect of the design of roles is the identification of classes of situations that demand transfer of control, so that the exchange processes can be specified as part of roles. This is when the human takes control from the robot for a specialized aspect of the mission, in anticipation of conditions that will challenge the limits of the robot's capabilities, or in an emergency. Decades of human-factors research on human out-of-the-loop control problems, handling of anomalies, cascades of disturbances, situation awareness, and autopilot/pilot transfer of control can inform such designs.

Consistency with the first law requires designers to explicitly address what the appropriate situated autonomy is (for example, identifying when the robot is better informed or more capable than the human owing to latency, sensing, and so on) and to provide mechanisms that permit smooth transfer of control. To disregard the large body of literature on resilience and failure due to bumpy transfer of control would violate the designers' ethical obligation.

The alternative second and third laws encourage some forms of increased autonomy related to responsiveness and the ability to engage in various forms of smooth transfer of control. To be able to engage in these activities with people in various roles, the robot will need more situated intelligence. The result is an irony that has been noted before: increased capability for autonomy and authority leads to the need to participate in more sophisticated forms of coordinated activity.8
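One way to avoid bumpy transfers is to make the handoff itself a first-class protocol: control changes hands only through an explicit offer and acknowledgment, and every change is announced to both parties. The Python sketch below shows one possible minimal shape for this; the states, method names, and messages are our own assumptions rather than any established standard.

from enum import Enum, auto

class Controller(Enum):
    ROBOT = auto()
    HUMAN = auto()

class Handoff(Enum):
    IDLE = auto()
    OFFERED = auto()   # an offer of control is pending acknowledgment

class ControlManager:
    def __init__(self):
        self.controller = Controller.ROBOT
        self.pending = Controller.ROBOT
        self.state = Handoff.IDLE

    def offer(self, to, reason):
        # E.g., the robot anticipates conditions beyond its capabilities.
        print(f"offering control to {to.name}: {reason}")
        self.pending = to
        self.state = Handoff.OFFERED

    def acknowledge(self):
        # Authority changes hands only after the receiving party signals
        # that it has situation awareness and is ready.
        if self.state is Handoff.OFFERED:
            self.controller = self.pending
            self.state = Handoff.IDLE
            print(f"control transferred to {self.controller.name}")

    def seize(self, by):
        # Emergency path: immediate takeover, but still announced so the
        # other party knows it no longer has control.
        self.controller = by
        self.state = Handoff.IDLE
        print(f"control seized by {by.name}")

mgr = ControlManager()
mgr.offer(Controller.HUMAN, "approaching unmapped terrain")
mgr.acknowledge()   # the human confirms readiness; now the human drives

Even this toy protocol illustrates the third law's demand: the design question is not whether the robot can act alone but whether control can move between agents without either party being surprised.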


Table 1. Asimov's laws of robotics versus the alternative laws of responsible robotics.

1. Asimov: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   Alternative: A human may not deploy a robot without the human–robot work system meeting the highest legal and professional standards of safety and ethics.

2. Asimov: A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
   Alternative: A robot must respond to humans as appropriate for their roles.

3. Asimov: A robot must protect its own existence as long as such protection does not conflict with the first or second law.
   Alternative: A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.

Discussion

Our critique reveals that robots need two key capabilities: responsiveness and smooth transfer of control. Our proposed alternative laws remind robotics researchers and developers of their legal and professional responsibilities. They suggest how people can conduct human–robot interaction research safely, and they identify critical research questions.

Table 1 places Asimov's three laws side by side with our three alternative laws. Asimov's laws assume functional morality—that robots are capable of making (or are permitted to make) their own decisions—and ignore the legal and professional responsibility of those who design and deploy them (operational morality). More importantly for human–robot interaction, Asimov's laws ignore the complexity and dynamics of the relationships and responsibilities between robots and people and how those relationships are expressed. In contrast, the alternative three laws emphasize responsibility and resilience, starting with enlightened, safety-oriented designs (alternative first law), then adding responsiveness (alternative second law) and smooth transfer of control (alternative third law).

The alternative laws are designed to be more feasible to implement than Asimov's laws given current technology, although they also raise critical questions for research. For example, the alternative first law isn't concerned with technology per se but with the need for robot developers to be aware of human systems design principles and to take responsibility proactively for the consequences of errors and failures in human–robot systems. Standard tools from the aerospace, medical, and chemical industries' safety cultures, including training, formal processes, checklists, black boxes, and safety officers, can be adopted. Network and physical security should be incorporated into robots, even during development.

The alternative second and third laws require new research directions for robotics to leverage and build on existing results in social cognition, cognitive engineering, and resilience engineering. The laws suggest that the ability for robots to express relationships and obligations through social roles will be essential to all human–robot interaction. For example, work on entertainment robots and social robots provides insights about how robots can express emotions or affect appropriate to the people they encounter. The extensive literature from cognitive engineering on transfer of control and general human out-of-the-loop control problems can be redirected at robotic systems. The techniques of resilience engineering are beginning to identify new control architectures for distributed, multi-echelon systems that include robots.

The fundamental difference between Asimov's laws, which focus on robots' functional morality and full moral agency, and the alternative laws, which focus on system responsibility and resilience, illustrates why the robotics community should resist public pressure to frame current human–robot interaction in terms of Asimov's laws. Asimov's laws distract from capturing the diversity of robotic missions and initiative. Understanding these diversities and complexities is critical for designing the "right" interaction scheme for a given domain.

Ironically, Asimov's laws really are robot-centric because most of the initiative for safety and efficacy lies in the robot as an autonomous agent. The alternative laws are human-centered because they take a systems approach. They emphasize that

• responsibility for the consequences of robots' successes and failures lies with the human groups that have a stake in the robots' activities, and
• capable robotic agents still exist in a web of dynamic social and cognitive relationships.

At the same time, meeting the requirements of the alternative laws leads to the need for robots to be more capable agents—that is, more responsive to others and better at interacting with others.

We propose the alternative laws as a way to stimulate debate about robots' accountability when their actions can harm people or human interests. We also hope that these laws can serve to direct R&D to enhance human–robot systems.


Authorized licensed use limited to: UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL. Downloaded on August 24, 2009 at 11:27 from IEEE Xplore. Restrictions apply. laws can serve to direct R&D to en- 2006; www.cs.bham.ac.uk/research/ 11. M.F. Rose et al., Technology Develop- hance human–robot systems. Finally, projects/cogaff/misc/asimov-three-laws. ment for Army Unmanned Ground while perhaps not as entertaining as html. Vehicles, Nat’l Academy Press, 2002. Asimov’s laws, we hope the alterna- 3. C. Allen, W. Wallach, and I. Smit, “Why 12. D. Woods, “Conflicts between Learn- tive laws of responsible robotics can ?” IEEE Intelligent ing and Accountability in Patient better communicate to the general Systems, vol. 21, no. 4, 2006, pp. 12–17. Safety,” DePaul Law Rev., vol. 54, public the complex mix of opportu- 4. M. Moran, “ 2005, pp. 485–502. nities and challenges of robots in to- and Surgery,” J. Endourology, vol. 22, 13. S.W.A. Dekker, Just Culture: Bal- day’s world. no. 8, 2008, pp. 1557–1560. ancing Safety and Accountability, 5. R. Clarke, “Asimov’s Laws of Robotics: Ashgate, 2008. Implications for Information Technol- 14. J. Allen et al., “Towards Conversa- ogy Part 1,” Computer, vol. 26, no. 12, tional Human–Computer Interaction,” Acknowledgments 1993, pp. 53–61. AI Magazine, vol. 22, no. 4, 2001, pp. We thank Jeff Bradshaw, Cindy Bethel, 6. R. Clarke, “Asimov’s Laws of Robotics: 27–38. Jenny Burke, Victoria Groom, and Leila Implications for Information Technol- 15. J.M. Bradshaw et al., “Dimensions Takayama for their helpful feedback and ogy Part 2,” Computer, vol. 27, no. 1, of Adjustable Autonomy and Mixed- Sung Huh for additional references. The 1994, pp. 57–66. Initiative Interaction,” Agents and second author’s contribution was based on 7. W. Wallach and C. Allen, Moral Ma- Computational Autonomy: Potential, participation in the Advanced Decision Ar- chines: Teaching Robots Right from Risks, and Solutions, M. Nickles, M. chitectures Collaborative Technology Alli- Wrong, Oxford Univ. Press, 2009. Rovatsos, and G. Weiss, eds., LNCS ance, sponsored by the US Army Research 8. D. Woods and E. Hollnagel, Joint 2969, Springer, 2004, pp. 17–39. under cooperative agreement Cognitive Systems: Patterns in Cogni- 16. B. Whitby, “Sometimes It’s Hard to DAAD19-01-2-0009. tive Systems Engineering, Taylor and Be a Robot: A Call for Action on the Francis, 2006. Ethics of Abusing Artificial Agents,” 9. R.C. Arkin and L. Moshkina, “Le- Interacting with Computers, vol. 20, References thality and Autonomous Robots: An no. 3, 2008, pp. 326–333. 1. S.L. Anderson, “Asimov’s ‘Three Laws Ethical Stance,” Proc. IEEE Int’l Symp. 17. D.D. Woods and N. Sarter, “Learning of Robotics’ and Machine Metaethics,” Technology and Society (ISTAS 07), from Automation Surprises and ‘Going AI and Society, vol. 22, no. 4, 2008, IEEE Press, 2007, pp. 1–3. Sour’ Accidents,” Cognitive Engineer- pp. 477–493. 10. N. Sharkey, “The Ethical Frontiers of ing in the Aviation Domain, N.B. Sarter 2. A. Sloman, “Why Asimov’s Three Laws Robotics,” Science, vol. 322, no. 5909, and R. Amalberti, eds., Nat’l Aeronau- of Robotics are Unethical,” 27 July 2008, pp. 1800–1801. tics and Space Administration, 1998.

Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University. Contact her at [email protected].

David D. Woods is a professor in the Human Systems Integration section of the Department of Integrated Systems Engineering at Ohio State University. Contact him at [email protected].

