Definitions and Asimov's Three Laws of Robotics


MTH 117 Project

The Background

In order to use logic in mathematics (in particular, when writing proofs), we must use definitions that leave no ambiguity. Finding such definitions can be difficult. For example, say I define a cat to be an animal with four legs, a tail, and whiskers. Many questions immediately arise. Here are just a few:

1. What is an animal?
2. What are legs?
3. What is a tail?
4. What are whiskers?

It may be that we have already defined these terms, in which case these questions are not really a problem. However, depending on how we define some of these terms, could we conclude that a catfish is a cat? Certainly catfish are animals. Maybe the definition of leg was so vague that it could include fins. It is also possible that the definition of tail was so vague that it included the back fin of a fish. Finally, catfish have whiskers (hence the name). We could therefore conclude that catfish are cats (something we probably did not intend). Maybe we could fix this problem by adding more attributes to the definition of cat. For example, we could add having fur to the definition. However, what about what we would normally call a hairless cat? Is it still a cat? It doesn't have fur, so according to our definition it is not a cat, despite the fact that most of us would agree that it is indeed a cat.

The Three Laws of Robotics

Isaac Asimov was a science fiction writer famous for introducing the three laws of robotics. The idea was that robots would be built so that they had to obey these laws; the laws were meant to be a set of rules that would protect humans from robots. The laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Some of Asimov's stories described ways in which robots could "break" the laws. In truth the robots did not break the laws (they couldn't; they are logical machines). Rather, they appeared to break the laws because they did not have good definitions. As an example, suppose we programmed a robot to believe all humans have hair on their head. If the robot comes across someone we would call bald, do the three laws still apply to this bald person?

The Project

You will create a story involving robots and Asimov's three laws. In the story something is to go wrong, and at least one of the laws should be "broken". This "broken" law must result from faulty definitions given to the robot(s). How you tell the story is up to you. Some possible choices are: a short story (4-6 pages, double-spaced), a short video (10 to 15 minutes), a short play script (10 to 15 minutes if acted out), or a computer game (come talk to me about how "long" the game needs to be). If there is another platform you would like to use to tell the story, please come to me so we can discuss it.

However you decide to tell the story, you must also submit a two- to three-page (double-spaced) summary of the story which includes what the faulty definitions were and how they were used to "break" the laws. This summary should also explain how the definitions could be fixed so that the law(s) wouldn't be broken. Finally, you should discuss whether the new definitions could lead to new unintended consequences.

You must work in groups on this project. The minimum and maximum size of the group depends on the medium in which you are telling the story:

1. Written short story: 3-5 people
2. Written short play script: 3-5 people. If you are going to perform the play for the class, then you can have 3-8 people.
3. Short video: 5-8 people
4. Computer game: 3-6 people
5. Other medium: meet with me to discuss group size.

Regardless of how much actual work is done by each group member, every member will receive the same grade. Keep this in mind when forming your groups.

Grading

The total project is worth 100 points. The story is worth 45 points, the summary is worth 45 points, and the email updates (see below) are worth 10 points.

Extra Credit

You will have the opportunity to receive an extra 20 points of extra credit for this project. To receive the extra credit, you will present your story in front of the class. If you wrote a short story, you can do a dramatic reading of it. If you made a video, you can show it to the class. If you wrote a play, you can perform it. If you wrote a computer game, you can demonstrate it to the class. Performing in front of the class is completely optional, but there is no other way to receive extra credit on this project. I will let everyone know later in the semester when the performances will be scheduled.

Dates

Nov 1: By this date, one person from each group needs to email me saying who is in the group and what medium they plan to use to tell their story. (5 pts)
Nov 15: By this date, one person from each group needs to email me describing their group's progress on the project. (5 pts)
Nov 30: The project is due.

References

If you would like to read some of Asimov's work, here is a short list of stories he wrote which involve the three laws. This list is nowhere near exhaustive.

• Liar!: A story about a robot which tells lies so as not to harm humans and thus obey the First Law.
• Galley Slave: A courtroom drama in which a professor is suing the manufacturer of a robot used to proofread his work. The claim is that the robot purposely changed the words to make it look like the professor was incompetent.
• Little Lost Robot: A story in which a robot has its First Law modified by removing the clause about inaction.
• Caves of Steel: A detective story involving the three laws of robotics.
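To see how a faulty definition can quietly "break" the First Law, the bald-person example from the handout can be sketched as a tiny program. This is only an illustration, not anything from Asimov: the function names and the dictionary encoding of an "entity" are invented for the sketch.

```python
# A robot that checks the First Law only for entities it classifies as
# human. Its definition of "human" is faulty: it assumes every human
# has hair on their head.

def is_human_faulty(entity):
    """Faulty definition: a human is anything with hair on its head."""
    return entity.get("has_hair_on_head", False)

def first_law_applies(entity, is_human):
    """The robot extends First Law protection only to entities that
    its definition classifies as human."""
    return is_human(entity)

alice = {"name": "Alice", "has_hair_on_head": True}
bob = {"name": "Bob", "has_hair_on_head": False}  # bald

print(first_law_applies(alice, is_human_faulty))  # protected
print(first_law_applies(bob, is_human_faulty))    # unprotected: the law "breaks"
```

The robot never violates its programming; it faithfully applies the First Law to everything its definition calls human. The failure lives entirely in the definition, which is exactly the kind of flaw your story's faulty definitions should exploit. Note that "patching" the definition (say, adding "or has two legs") can misclassify in new ways, which is the unintended-consequences discussion your summary should include.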