Beyond Asimov: the Three Laws of Responsible Robotics

Total pages: 16

File type: PDF, size: 1020 KB

HISTORIES AND FUTURES / HUMAN-CENTERED COMPUTING
Editors: Robert R. Hoffman, Jeffrey M. Bradshaw, and Kenneth M. Ford, Institute for Human and Machine Cognition, [email protected]

Beyond Asimov: The Three Laws of Responsible Robotics
Robin R. Murphy, Texas A&M University
David D. Woods, Ohio State University

Since their codification in 1947 in the collection of short stories I, Robot, Isaac Asimov's three laws of robotics have been a staple of science fiction. Most of the stories assumed that the robot had complex perception and reasoning skills equivalent to a child and that robots were subservient to humans. Although the laws were simple and few, the stories attempted to demonstrate just how difficult they were to apply in various real-world situations. In most situations, although the robots usually behaved "logically," they often failed to do the "right" thing, typically because the particular context of application required subtle adjustments of judgment on the part of the robot (for example, determining which law took priority in a given situation, or what constituted helpful or harmful behavior).

The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society's expectations about how robots should act around humans. For instance, the media frequently refer to human–robot interaction in terms of the three laws. They've been the subject of serious blogs, events, and even scientific publications. The Singularity Institute organized an event and Web site, "Three Laws Unsafe," to try to counter public expectations of robots in the wake of the movie I, Robot. Both the philosophy1 and AI2 communities have discussed ethical considerations of robots in society using the three laws as a reference, with a recent discussion in IEEE Intelligent Systems.3 Even medical doctors have considered robotic surgery in the context of the three laws.4

With few notable exceptions,5,6 there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws. And there appears to be even less serious discussion as to whether the laws are actually viable as a framework for human–robot interaction, outside of cultural expectations.

Following the definitions in Moral Machines: Teaching Robots Right from Wrong,7 Asimov's laws are based on functional morality, which assumes that robots have sufficient agency and cognition to make moral decisions. Unlike many of his successors, Asimov is less concerned with the details of robot design than in exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy. He uses the situations as a foil to explore issues such as

• the ambiguity and cultural dependence of language and behavior—for example, whether what appears to be cruel in the short run can actually become a kindness in the longer term;
• social utility—for instance, how different individuals' roles, capabilities, or backgrounds are valuable in different ways with respect to each other and to society; and
• the limits of technology—for example, the impossibility of assuring timely, correct actions in all situations and the omnipresence of trade-offs.

In short, in a variety of ways the stories test the lack of resilience in human–robot interactions.

The assumption of functional morality, while effective for entertaining storytelling, neglects operational morality. Operational morality links robot actions and inactions to the decisions, assumptions, analyses, and investments of those who invent and make robotic systems and of those who commission, deploy, and handle robots in operational contexts. No matter how far the autonomy of robots ultimately advances, the important challenges of these accountability and liability linkages will remain.8

This essay reviews the three laws and briefly summarizes some of the practical shortcomings—and even dangers—of each law for framing human–robot relationships, including reminders about what robots can't do. We then propose an alternative, parallel set of laws based on what humans and robots can realistically accomplish in the foreseeable future as joint cognitive systems, and their mutual accountability for their actions from the perspectives of human-centered design and human–robot interaction.

Applying Asimov's Laws to Today's Robots

When we try to apply Asimov's laws to today's robots, we immediately run into problems. Just as for Asimov in his short stories, these problems arise from the complexities of situations where we would use robots, the limits of physical systems acting with limited resources in uncertain changing situations, and the interplay between the different social roles as different agents pursue multiple goals.

First Law

Asimov's first law of robotics states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This law is already an anachronism given the military's weaponization of robots, and discussions are now shifting to the question of whether weaponized robots can be "humane."9,10 Such weaponization is no longer limited to situations in which humans remain in the loop for control. The South Korean government has published videos on YouTube of robotic border-security guards. Scenarios have been proposed where it would be permissible for a military robot to fire upon anything moving (presumably targeting humans) without direct human permission.11

Even if current events hadn't made the law irrelevant, it's moot because robots cannot infallibly recognize humans, perceive their intent, or reliably interpret contextualized scenes. A quick review of the computer vision literature shows that scientists continue to struggle with many fundamental perceptual processes. Current commercial security packages for recognizing the face of a person standing in a fixed position continue to fall short of expectations in practice. Many robots that "recognize" humans use indirect cues such as heat and motion, which only work in constrained contexts. These problems confirm Norbert Wiener's warnings about such failure possibilities.8 Just as he envisioned many years ago, today's robots are literal-minded agents—that is, they can't tell if their world model is the world they're really in.

All this aside, the biggest problem with the first law is that it views safety only in terms of the robot—that is, the robot is the responsible safety agent in all matters of human–robot interaction. While some speculate on what it would mean for a robot to be able to discharge this responsibility, there are serious practical, theoretical, social-cognitive, and legal limitations.8,12 For example, from a legal perspective the robot is a product, so it's not the responsible agent. Rather, the robot's owner or manufacturer is liable for its actions. Unless robots are granted a person-equivalent status, somewhat like corporations are now legally recognized as individual entities, it's difficult to imagine standard product liability law not applying to them. When a failure occurs, violating Asimov's first law, the human stakeholders affected by that failure will engage in the processes of causal attribution. Afterwards, they'll see the robot as a device and will look for the person or group who set up or instructed the device erroneously or who failed to supervise (that is, stop) the robot before harm occurred. It's still commonplace after accidents for manufacturers and organizations to claim the result was due only to human error, even when the system in question was operating autonomously.8,13

Accountability is bound up with the way we maintain our social relationships. Human decision-making always occurs in a context of expectations that one might be called to account for his or her decisions. Expectations for what's considered an adequate explanation and the consequences for people when their explanation is judged inadequate are critical parts of accountability systems—a reciprocating cycle of being prepared to provide an accounting for one's actions and being called by others to provide an account. To be considered moral agents, robots would have to be capable of participating personally in this reciprocating cycle of accountability—an issue that, of course, concerns more than any single agent's capabilities in isolation.

…tice and take stock of humans (and that the people robots encounter or interact with can notice pertinent aspects of robots' behavior).15 For example, is it acceptable for a robot to merely not hit a person in a hospital hall, or should it conform to social convention and acknowledge the person in some way ("excuse me" or a nod of a camera pan-tilt)? Or if a…

…by a person who bears full responsibility for all safety matters. Human-factors studies show that remote operators are immediately at a disadvantage, working through a mediated interface with a time delay. Worse yet, remote operators are required to operate the robot through poor human–computer interfaces and in contexts where the operator can be…

1541-1672/09/$26.00 © 2009 IEEE. Published by the IEEE Computer Society. IEEE Intelligent Systems, July/August 2009. www.computer.org/intelligent
Recommended publications
  • Introduction to Robotics. Sensors and Actuators
    Introduction to Computer Vision and Robotics Florian Teich and Tomas Kulvicius* Introduction to Robotics. Sensors and Actuators A large portion of the slides is adapted from Florentin Wörgötter, John Hallam and Simon Reich *[email protected] Perception-Action loop Environment Object Eyes Action Perception Arm Brain Perception-Action loop (cont.) Environment Sense Object Cameras Action Perception Robot-arm Computer Act Plan Outline of the course • L1.CV1: Introduction to Computer Vision. Thresholding, Filtering & Connected Components • L2.CV2: Bilateral Filtering, Morphological Operators & Edge Detection • L3.CV3: Corner Detection & Non-Local Filtering • L4.R1: Introduction to Robotics. Sensors and actuators • L5.R2: Movement generation methods • L6.R3: Path planning algorithms • L7.CV4: Line/Circle Detection, Template Matching & Feature Detection • L8.CV5: Segmentation • L9.CV6: Face Detection, Pedestrian Tracking & Computer Vision in 3D • L10.R4: Robot kinematics and control • L11.R5: Learning algorithms in robotics I: Supervised and unsupervised learning • L12.R6: Learning algorithms in robotics II: Reinforcement learning and genetic algorithms Introduction to Robotics History of robotics • Karel Čapek was one of the most influential Czech writers of the 20th century and a Nobel Prize nominee (1936). • He introduced and made popular the frequently used international word robot, which first appeared in his play R.U.R. (Rossum's Universal Robots) in 1921. 1890-1938 • "Robot" comes from the Czech word "robota", meaning "forced labor" • Karel named his brother Josef Čapek as the true inventor of the word robot. History of robotics (cont.) • The word "robotics" also comes from science fiction - it first appeared in the short story "Runaround" (1942) by American writer Isaac Asimov.
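The Perception-Action loop in the slides above (Environment → Sense → Plan → Act) can be sketched as a minimal control loop. This is a toy illustration only: the world model, sensor, and one-dimensional arm below are invented stand-ins, not part of the course material.

```python
# Minimal sense-plan-act loop, a hedged sketch of the slides' Perception-Action
# cycle. The environment is a single object position on a line (an assumption
# made for illustration).

def sense(world):
    # Perception: read the (simulated) object position from the environment.
    return {"object_x": world["object_x"]}

def plan(percept, goal_x):
    # Planning: decide which direction to move toward the goal (+1, -1, or 0).
    dx = goal_x - percept["object_x"]
    return 1 if dx > 0 else (-1 if dx < 0 else 0)

def act(world, command):
    # Action: the robot arm nudges the object one step.
    world["object_x"] += command

def run_loop(world, goal_x, max_steps=100):
    # Close the loop: sense, plan, act until the goal is reached.
    for step in range(max_steps):
        percept = sense(world)
        command = plan(percept, goal_x)
        if command == 0:
            return step  # goal reached
        act(world, command)
    return max_steps

world = {"object_x": 3}
steps = run_loop(world, goal_x=7)
print(steps, world["object_x"])  # 4 steps to move from 3 to 7
```

Real systems replace each stub with cameras, planners, and controllers, but the cyclic structure is the same one the slides diagram.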
  • 2.1: What Is Robotics? a Robot Is a Programmable Mechanical Device
    2.1: What is Robotics? A robot is a programmable mechanical device that can perform tasks and interact with its environment without direct human intervention. Robotics is the science and technology behind the design, manufacturing and application of robots. The word robot was coined by the Czech playwright Karel Capek in 1921. He wrote a play called “Rossum's Universal Robots” that was about a slave class of manufactured human-like servants and their struggle for freedom. The Czech word robota loosely means "compulsive servitude.” The word robotics was first used by the famous science fiction writer, Isaac Asimov, in 1941. 2.1: What is Robotics? Basic Components of a Robot The components of a robot are the body/frame, control system, manipulators, and drivetrain. Body/frame: The body or frame can be of any shape and size. Essentially, the body/frame provides the structure of the robot. Most people are comfortable with human-sized and shaped robots that they have seen in movies, but the majority of actual robots look nothing like humans. Typically, robots are designed more for function than appearance. Control System: The control system of a robot is equivalent to the central nervous system of a human. It coordinates and controls all aspects of the robot. Sensors provide feedback based on the robot’s surroundings, which is then sent to the Central Processing Unit (CPU). The CPU filters this information through the robot’s programming and makes decisions based on logic. The same can be done with a variety of inputs or human commands.
  • AI, Robots, and Swarms: Issues, Questions, and Recommended Studies
    AI, Robots, and Swarms Issues, Questions, and Recommended Studies Andrew Ilachinski January 2017 Approved for Public Release; Distribution Unlimited. This document contains the best opinion of CNA at the time of issue. It does not necessarily represent the opinion of the sponsor. Distribution Approved for Public Release; Distribution Unlimited. Specific authority: N00014-11-D-0323. Copies of this document can be obtained through the Defense Technical Information Center at www.dtic.mil or contact CNA Document Control and Distribution Section at 703-824-2123. Photography Credits: http://www.darpa.mil/DDM_Gallery/Small_Gremlins_Web.jpg; http://4810-presscdn-0-38.pagely.netdna-cdn.com/wp-content/uploads/2015/01/ Robotics.jpg; http://i.kinja-img.com/gawker-edia/image/upload/18kxb5jw3e01ujpg.jpg Approved by: January 2017 Dr. David A. Broyles Special Activities and Innovation Operations Evaluation Group Copyright © 2017 CNA Abstract The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last “sea change,” during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DoD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state-of-the-art of AI, machine-learning, and robot technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems. While no one can predict how AI will evolve or predict its impact on the development of military autonomous systems, it is possible to anticipate many of the conceptual, technical, and operational challenges that DoD will face as it increasingly turns to AI-based technologies.
  • THESIS ANXIETIES and ARTIFICIAL WOMEN: DISASSEMBLING the POP CULTURE GYNOID Submitted by Carly Fabian Department of Communicati
    THESIS ANXIETIES AND ARTIFICIAL WOMEN: DISASSEMBLING THE POP CULTURE GYNOID Submitted by Carly Fabian Department of Communication Studies In partial fulfillment of the requirements For the Degree of Master of Arts Colorado State University Fort Collins, Colorado Fall 2018 Master’s Committee: Advisor: Katie L. Gibson Kit Hughes Kristina Quynn Copyright by Carly Leilani Fabian 2018 All Rights Reserved ABSTRACT ANXIETIES AND ARTIFICIAL WOMEN: DISASSEMBLING THE POP CULTURE GYNOID This thesis analyzes the cultural meanings of the feminine-presenting robot, or gynoid, in three popular sci-fi texts: The Stepford Wives (1975), Ex Machina (2013), and Westworld (2017). Centralizing a critical feminist rhetorical approach, this thesis outlines the symbolic meaning of gynoids as representing cultural anxieties about women and technology historically and in each case study. This thesis draws from rhetorical analyses of media, sci-fi studies, and previously articulated meanings of the gynoid in order to discern how each text interacts with the gendered and technological concerns it presents. The author assesses how the text equips—or fails to equip—the public audience with motives for addressing those concerns. Prior to analysis, each chapter synthesizes popular and scholarly criticisms of the film or series and interacts with their temporal contexts. Each chapter unearths a unique interaction with the meanings of gynoid: The Stepford Wives performs necrophilic fetishism to alleviate anxieties about the Women’s Liberation Movement; Ex Machina redirects technological anxieties towards the surveilling practices of tech industries, simultaneously punishing exploitive masculine fantasies; Westworld utilizes fantasies and anxieties cyclically in order to maximize its serial potential and appeal to impulses of its viewership, ultimately prescribing a rhetorical placebo.
  • Building Multiversal Semantic Maps for Mobile Robot Operation
    Draft Version. Final version published in Knowledge-Based Systems Building Multiversal Semantic Maps for Mobile Robot Operation Jose-Raul Ruiz-Sarmiento, Cipriano Galindo, Javier Gonzalez-Jimenez — Machine Perception and Intelligent Robotics Group, System Engineering and Auto. Dept., Instituto de Investigación Biomédica de Málaga (IBIMA), University of Málaga, Campus de Teatinos, 29071, Málaga, Spain. Abstract Semantic maps augment metric-topological maps with meta-information, i.e. semantic knowledge aimed at the planning and execution of high-level robotic tasks. Semantic knowledge typically encodes human-like concepts, like types of objects and rooms, which are connected to sensory data when symbolic representations of percepts from the robot workspace are grounded to those concepts. This symbol grounding is usually carried out by algorithms that individually categorize each symbol and provide a crisp outcome – a symbol is either a member of a category or not. Such approach is valid for a variety of tasks, but it fails at: (i) dealing with the uncertainty inherent to the grounding process, and (ii) jointly exploiting the contextual relations among concepts (e.g. microwaves are usually in kitchens). This work provides a solution for probabilistic symbol grounding that overcomes these limitations. Concretely, we rely on Conditional Random Fields (CRFs) to model and exploit contextual relations, and to provide measurements about the uncertainty coming from the possible groundings in the form of beliefs (e.g. an object can be categorized (grounded) as a microwave or as a nightstand with beliefs 0.6 and 0.4, respectively). Our solution is integrated into a novel semantic map representation called Multiversal Semantic Map (MvSmap), which keeps the different groundings, or universes, as instances of ontologies annotated with the obtained beliefs for their posterior exploitation.
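The abstract's microwave/nightstand example can be illustrated with a minimal belief computation: a softmax that turns per-category compatibility scores into normalized beliefs. This is only a hedged sketch of the idea; the paper's actual method computes such beliefs as marginals of a Conditional Random Field over the whole map, which this single-object toy does not implement, and the scores below are invented.

```python
import math

def beliefs(scores):
    # Turn per-category compatibility scores into normalized beliefs
    # (a softmax). In the paper this role is played by CRF marginals;
    # here the scores are made-up numbers for illustration.
    exps = {cat: math.exp(s) for cat, s in scores.items()}
    total = sum(exps.values())
    return {cat: e / total for cat, e in exps.items()}

# Hypothetical scores for one perceived object.
b = beliefs({"microwave": 1.2, "nightstand": 0.8})
print(b)  # microwave ~0.60, nightstand ~0.40
```

Keeping both groundings with their beliefs, rather than committing to the winner, is what lets the map maintain multiple "universes" for later exploitation.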
  • History of Robotics: Timeline
    History of Robotics: Timeline This history of robotics is intertwined with the histories of technology, science and the basic principle of progress. Technology used in computing, electricity, even pneumatics and hydraulics can all be considered a part of the history of robotics. The timeline presented is therefore far from complete. Robotics currently represents one of mankind’s greatest accomplishments and is the single greatest attempt of mankind to produce an artificial, sentient being. It is only in recent years that manufacturers are making robotics increasingly available and attainable to the general public. The focus of this timeline is to provide the reader with a general overview of robotics (with a focus more on mobile robots) and to give an appreciation for the inventors and innovators in this field who have helped robotics to become what it is today. RobotShop Distribution Inc., 2008 www.robotshop.ca www.robotshop.us Greek Times Some historians affirm that Talos, a giant written about in ancient Greek literature, was a creature (either a man or a bull) made of bronze, given by Zeus to Europa. [6] According to one version of the myths he was created in Sardinia by Hephaestus on Zeus' command, who gave him to the Cretan king Minos. In another version Talos came to Crete with Zeus to watch over his love Europa, and Minos received him as a gift from her. There are suppositions that his name Talos in the old Cretan language meant the "Sun" and that Zeus was known in Crete by the similar name of Zeus Tallaios.
  • Satisfaction Guaranteed™ the Pleasure Bot, the Gynoid, The
    Satisfaction Guaranteed™ The Pleasure Bot, the Gynoid, the Electric-Gigolo, or my personal favorite the Romeo Droid are just some of Science Fiction's contributions to the development of the android as sex worker. Notably (and as any Sci-fi aficionado would remind us) such technological foresight is often a precursor to our own - not too distant future. It will therefore come as no surprise that the development of artificial intelligence and virtual reality are considered to be the missing link within the sex industry and the manufacture of technologically-enhanced products and experiences. Similarly, many esteemed futurologists are predicting that by 2050 (not 2049) artificial intelligence will have become so integrated within society that it will be commonplace for humans to have sex with robots… Now scrub that image out of your head and let’s remind ourselves that all technology (if we listen to Charlie Brooker) should come with a warning sign. Sexnology (that’s Sex + Technology) is probably pretty high up there on the cautionary list, but whether we like it or not, ‘The robots are coming’ – no pun intended! Actually the robots (those of a sexual nature) have already arrived, although calling them robots may be a little premature. Especially when considering how film has created expectations of human resemblance, levels of functionality and, in this context, modes of interaction. The pursuit of creating in our own image has historically unearthed many underlying questions in both fiction and reality about what it means to be human. During such times, moral implications may surface although history also tells us that human traits such as the desire for power and control often subsume any humane/humanoid considerations.
  • Emerging Legal and Policy Trends in Recent Robot Science Fiction
    Emerging Legal and Policy Trends in Recent Robot Science Fiction Robin R. Murphy Computer Science and Engineering Texas A&M University College Station, TX 77845 [email protected] Introduction This paper examines popular print science fiction for the past five years (2013-2018) in which robots were essential to the fictional narrative and the plot depended on a legal or policy issue related to robots. It follows in the footsteps of other works which have examined legal and policy trends in science fiction [1] and graphic novels [2], but this paper is specific to robots. An analysis of five books and one novella identified four concerns about robots emerging in the public consciousness: enabling false identities through telepresence, granting robot rights, outlawing artificial intelligence for robots, and ineffectual or missing product liability. Methodology for Selecting the Candidate Print Fiction While robotics is a popular topic in print science fiction, fictional treatments do not necessarily touch on legal or policy issues. Out of 44 candidate works, only six involved legal or policy issues. Candidates for consideration were identified in two ways. One, the nominees for the 2013-2018 Hugo and Nebula awards were examined for works dealing with robots. The other was a query of science fiction robot best sellers at Amazon. A candidate work of fiction had to contain at least one robot that served either a character or contributed to the plot such that the robot could not be removed without changing the story. For example, in Raven Stratagem, robots did not appear to be more than background props throughout the book but suddenly proved pivotal to the ending of the novel.
  • Lawyers and Engineers Should Speak the Same Robot Language
    Lawyers and Engineers Should Speak the Same Robot Language Bryant Walker Smith Introduction Engineering and law have much in common. Both require careful assessment of system boundaries to compare costs with benefits and to identify causal relationships. Both engage similar concepts and similar terms, although some of these are the monoglot equivalent of a false friend. Both are ultimately concerned with the actual use of the products that they create or regulate. And both recognize that the use depends in large part on the human user. This chapter emphasizes the importance of these four concepts—systems, language, use, and users—to the development and regulation of robots. It argues for thoughtfully coordinating the technical and legal domains without thoughtlessly conflating them. Both developers and regulators must understand their interconnecting roles with respect to an emerging system of robotics that, when properly defined, includes humans as well as machines. Although the chapter applies broadly to robotics, motor vehicle automation provides the primary example for this system. I write as a legal scholar with a background in engineering, and I recognize that efforts to reconcile the domains might in some ways distort them. To ground the discussion, the chapter frequently references four technical documents: SAE J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, released by SAE International (formerly the Society of Automotive Engineers).1 I serve on the committee and subcommittee that drafted this report, which defines SAE’s levels of vehicle automation. Preliminary Statement of Policy Concerning Automated Vehicles, released by the National Highway Traffic Safety Administration (NHTSA).2 This document defines NHTSA’s levels of vehicle automation, which differ somewhat from SAE’s levels.
  • Brain Uploading-Related Group Mind Scenarios
    MIRI MACHINE INTELLIGENCE RESEARCH INSTITUTE Coalescing Minds: Brain Uploading-Related Group Mind Scenarios Kaj Sotala University of Helsinki, MIRI Research Associate Harri Valpola Aalto University Abstract We present a hypothetical process of mind coalescence, where artificial connections are created between two or more brains. This might simply allow for an improved form of communication. At the other extreme, it might merge the minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary. Sotala, Kaj, and Harri Valpola. 2012. “Coalescing Minds: Brain Uploading-Related Group Mind Scenarios.” International Journal of Machine Consciousness 4 (1): 293–312. doi:10.1142/S1793843012400173. This version contains minor changes. Kaj Sotala, Harri Valpola 1. Introduction Mind uploads, or “uploads” for short (also known as brain uploads, whole brain emulations, emulations or ems) are hypothetical human minds that have been moved into a digital format and run as software programs on computers. One recent roadmap charting the technological requirements for creating uploads suggests that they may be feasible by mid-century (Sandberg and Bostrom 2008).
  • The (Care) Robot in Science Fiction: a Monster Or a Tool for the Future?
    Confero | Vol. 4 | no. 2 | 2016 | pp. 97-109 | doi: 10.3384/confero.2001-4562.161212 The (care) robot in science fiction: A monster or a tool for the future? Aino-Kaisa Koistinen According to Mikkonen, Mäyrä and Siivonen, our lives are so pervaded with technology that it becomes important to ask questions considering human relations to technology and the boundaries between us and the various technological appliances that we interact with on a daily basis: For example, as pacemakers and contact lenses technology has become such an intimate thing that it can be said to be a foundational aspect of our humanity. It is hard, or even impossible, to understand the meaning of our human existence if the role of machines in our humanness is not taken into account. Pointedly, we can ask: “Are we humans machines – or at least turning into ones?” Or in reverse: “Can machines become humans, thinking and feeling beings?” What is essential is not how realistic or believable the assumptions considering humanization of machines or the mechanization of humans inherent to these questions are. What is essential is that these questions are asked altogether.1 There is one fictional genre, that of science fiction, that is particularly suitable for asking these kinds of questions. Indeed, science fiction, as the name of the genre already suggests, is preoccupied with the imaginations of scientific explorations. These explorations are often realized through stories of technology, such as different kinds of robotic creatures. 1 Mikkonen et al., 1997, 9, transl. by the author, emphasis added.
  • Moral and Ethical Questions for Robotics Public Policy Daniel Howlader1
    Moral and ethical questions for robotics public policy Daniel Howlader1 1. George Mason University, School of Public Policy, 3351 Fairfax Dr., Arlington VA, 22201, USA. Email: [email protected]. Abstract The roles of robotics, computers, and artificial intelligences in daily life are continuously expanding. Yet there is little discussion of ethical or moral ramifications of long-term development of robots and 1) their interactions with humans and 2) the status and regard contingent within these interactions – and even less discussion of such issues in public policy. A focus on artificial intelligence with respect to differing levels of robotic moral agency is used to examine the existing robotic policies in example countries, most of which are based loosely upon Isaac Asimov’s Three Laws. This essay posits insufficiency of current robotic policies – and offers some suggestions as to why the Three Laws are not appropriate or sufficient to inform or derive public policy. Key words: robotics, artificial intelligence, roboethics, public policy, moral agency Introduction Robots, robotics, computers, and artificial intelligence (AI) are becoming an ever more important and largely unavoidable part of everyday life for large segments of the developed world’s population. At present, these daily interactions are now largely mundane, have been accepted and even welcomed by many, and do not raise practical, moral or ethical questions. Roles for robots Robotics and robotic-type machinery are widely used in countless manufacturing industries all over the globe, and robotics in general have become a part of sea exploration, as well as in hospitals (1). Siciliano and Khatib present the current use of robotics, which include robots “in factories and schools,” (1) as well as “fighting fires, making goods