
HISTORIES AND FUTURES
HUMAN-CENTERED COMPUTING
Editors: Robert R. Hoffman, Jeffrey M. Bradshaw, and Kenneth M. Ford, Institute for Human and Machine Cognition, [email protected]

Beyond Asimov: The Three Laws of Responsible Robotics

Robin R. Murphy, Texas A&M University
David D. Woods, Ohio State University

Since their codification in 1950 in the collection of short stories I, Robot, Isaac Asimov's three laws of robotics have been a staple of science fiction. Most of the stories assumed that the robot had complex perception and reasoning skills equivalent to those of a child and that robots were subservient to humans. Although the laws were simple and few, the stories attempted to demonstrate just how difficult they were to apply in various real-world situations. In most situations, although the robots usually behaved "logically," they often failed to do the "right" thing, typically because the particular context of application required subtle adjustments of judgment on the part of the robot (for example, determining which law took priority in a given situation, or what constituted helpful or harmful behavior).

The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society's expectations about how robots should act around humans. For instance, the media frequently refer to human–robot interaction in terms of the three laws. They've been the subject of serious blogs, events, and even scientific publications. The Singularity Institute organized an event and Web site, "Three Laws Unsafe," to try to counter public expectations of robots in the wake of the movie I, Robot. Both the philosophy1 and AI2 communities have discussed ethical considerations of robots in society using the three laws as a reference, with a recent discussion in IEEE Intelligent Systems.3 Even medical doctors have considered robotic surgery in the context of the three laws.4

With few notable exceptions,5,6 there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws. And there appears to be even less serious discussion of whether the laws are actually viable as a framework for human–robot interaction, outside of cultural expectations.

Following the definitions in Moral Machines: Teaching Robots Right from Wrong,7 Asimov's laws are based on functional morality, which assumes that robots have sufficient agency and cognition to make moral decisions. Unlike many of his successors, Asimov is less concerned with the details of robot design than with exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy. He uses the situations as a foil to explore issues such as

• the ambiguity and cultural dependence of language and behavior—for example, whether what appears to be cruel in the short run can actually become a kindness in the longer term;
• social utility—for instance, how different individuals' roles, capabilities, or backgrounds are valuable in different ways with respect to each other and to society; and
• the limits of technology—for example, the impossibility of assuring timely, correct actions in all situations and the omnipresence of trade-offs.

In short, the stories test, in a variety of ways, the lack of resilience in human–robot interactions.
The assumption of functional morality, while effective for entertaining storytelling, neglects operational morality. Operational morality links robot actions and inactions to the decisions, assumptions, analyses, and investments of those who invent and make robotic systems and of those who commission, deploy, and handle robots in operational contexts. No matter how far the autonomy of robots ultimately advances, the important challenges of these accountability and liability linkages will remain.8

This essay reviews the three laws and briefly summarizes some of the practical shortcomings—and even dangers—of each law for framing human–robot relationships, including reminders about what robots can't do. We then propose an alternative, parallel set of laws based on what humans and robots can realistically accomplish in the foreseeable future as joint cognitive systems, and their mutual accountability for their actions from the perspectives of human-centered design and human–robot interaction.

Applying Asimov's Laws to Today's Robots

When we try to apply Asimov's laws to today's robots, we immediately run into problems. Just as for Asimov in his short stories, these problems arise from the complexities of situations where we would use robots, the limits of physical systems acting with limited resources in uncertain, changing situations, and the interplay between the different social roles as different agents pursue multiple goals.

First Law

Asimov's first law of robotics states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This law is already an anachronism given the military's weaponization of robots, and discussions are now shifting to the question of whether weaponized robots can be "humane."9,10 Such weaponization is no longer limited to situations in which humans remain in the loop for control. The South Korean government has published videos on YouTube of robotic border-security guards. Scenarios have been proposed where it would be permissible for a military robot to fire upon anything moving (presumably targeting humans) without direct human permission.11

Even if current events hadn't made the law irrelevant, it's moot because robots cannot infallibly recognize humans, perceive their intent, or reliably interpret contextualized scenes. A quick review of the computer vision literature shows that scientists continue to struggle with many fundamental perceptual processes. Current commercial security packages for recognizing the face of a person standing in a fixed position continue to fall short of expectations in practice. Many robots that "recognize" humans use indirect cues such as heat and motion, which only work in constrained contexts. These problems confirm Norbert Wiener's warnings about such failure possibilities.8 Just as he envisioned many years ago, today's robots are literal-minded agents—that is, they can't tell if their world model is the world they're really in.

All this aside, the biggest problem with the first law is that it views safety only in terms of the robot—that is, the robot is the responsible safety agent in all matters of human–robot interaction. While some speculate on what it would mean for a robot to be able to discharge this responsibility, there are serious practical, theoretical, social-cognitive, and legal limitations.8,12 For example, from a legal perspective the robot is a product, so it's not the responsible agent. Rather, the robot's owner or manufacturer is liable for its actions. Unless robots are granted a person-equivalent status, somewhat like corporations are now legally recognized as individual entities, it's difficult to imagine standard product liability law not applying to them. When a failure occurs, violating Asimov's first law, the human stakeholders affected by that failure will engage in the processes of causal attribution. Afterwards, they'll see the robot as a device and will look for the person or group who set up or instructed the device erroneously or who failed to supervise (that is, stop) the robot before harm occurred. It's still commonplace after accidents for manufacturers and organizations to claim the result was due only to human error, even when the system in question was operating autonomously.8,13

Accountability is bound up with the way we maintain our social relationships. Human decision-making always occurs in a context of expectations that one might be called to account for his or her decisions. Expectations for what's considered an adequate explanation and the consequences for people when their explanation is judged inadequate are critical parts of accountability systems—a reciprocating cycle of being prepared to provide an accounting for one's actions and being called by others to provide an account. To be considered moral agents, robots would have to be capable of participating personally in this reciprocating cycle of accountability—an issue that, of course, concerns more than any single agent's capabilities in isolation.

[…] notice and take stock of humans (and that the people robots encounter or interact with can notice pertinent aspects of robots' behavior).15 For example, is it acceptable for a robot to merely not hit a person in a hospital hall, or should it conform to social convention and acknowledge the person in some way ("excuse me" or a nod of a camera pan-tilt)? Or if a […]

[…] by a person who bears full responsibility for all safety matters. Human-factors studies show that remote operators are immediately at a disadvantage, working through a mediated interface with a time delay. Worse yet, remote operators are required to operate the robot through poor human–computer interfaces and in contexts where the operator can be […]