The First Law of Robotics (a Call to Arms)


To appear, AAAI-94

The First Law of Robotics (a call to arms)

Daniel Weld and Oren Etzioni*
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195
{weld, etzioni}@cs.washington.edu

Abstract

Even before the advent of Artificial Intelligence, science fiction writer Isaac Asimov recognized that an agent must place the protection of humans from harm at a higher priority than obeying human orders. Inspired by Asimov, we pose the following fundamental questions: (1) How should one formalize the rich, but informal, notion of "harm"? (2) How can an agent avoid performing harmful actions, and do so in a computationally tractable manner? (3) How should an agent resolve conflict between its goals and the need to avoid harm? (4) When should an agent prevent a human from harming herself? While we address some of these questions in technical detail, the primary goal of this paper is to focus attention on Asimov's concern: society will reject autonomous agents unless we have some credible means of making them safe!

The Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov (Asimov 1942)

Motivation

In 1940, Isaac Asimov stated the First Law of Robotics, capturing an essential insight: an intelligent agent(1) should not slavishly obey human commands; its foremost goal should be to avoid harming humans. Consider the following scenarios:

• A construction robot is instructed to fill a pothole in the road. Although the robot repairs the cavity, it leaves the steam roller, chunks of tar, and an oil slick in the middle of a busy highway.

• A softbot (software robot) is instructed to reduce disk utilization below 90%. It succeeds, but inspection reveals that the agent deleted irreplaceable LaTeX files without backing them up to tape.

While less dramatic than Asimov's stories, the scenarios illustrate his point: not all ways of satisfying a human order are equally good; in fact, sometimes it is better not to satisfy the order at all. As we begin to deploy agents in environments where they can do some real damage, the time has come to revisit Asimov's Laws. This paper explores the following fundamental questions:

• How should one formalize the notion of "harm"? We define dont-disturb and restore, two domain-independent primitives that capture aspects of Asimov's rich but informal notion of harm within the classical planning framework.

• How can an agent avoid performing harmful actions, and do so in a computationally tractable manner? We leverage and extend the familiar mechanisms of planning with subgoal interactions (Tate 1977; Chapman 1987; McAllester & Rosenblitt 1991; Penberthy & Weld 1992) to detect potential harm in polynomial time. In addition, we explain how the agent can avoid harm using tactics such as confrontation and evasion (executing subplans to defuse the threat of harm).

• How should an agent resolve conflict between its goals and the need to avoid harm? We impose a strict hierarchy where dont-disturb constraints override the planner's goals, but restore constraints do not.

• When should an agent prevent a human from harming herself? At the end of the paper, we show how our framework could be extended to partially address this question.

* We thank Steve Hanks, Nick Kushmerick, Neal Lesh, Kevin Sullivan, and Mike Williamson for helpful discussions. This research was funded in part by the University of Washington Royalty Research Fund, by Office of Naval Research Grants 90-J-1904 and 92-J-1946, and by National Science Foundation Grants IRI-8957302, IRI-9211045, and IRI-9357772.

(1) Since the field of robotics now concerns itself primarily with kinematics, dynamics, path planning, and low-level control issues, this paper might be better titled "The First Law of Agenthood." However, we keep the reference to "Robotics" as a historical tribute to Asimov.
The paper's main contribution is a "call to arms": before we release autonomous agents into real-world environments, we need some credible and computationally tractable means of making them obey Asimov's First Law.

Survey of Possible Solutions

To make intelligent decisions regarding which actions are harmful, and under what circumstances, an agent might use an explicit model of harm. For example, we could provide the agent with a partial order over world states (i.e., a utility function). This framework is widely adopted, and numerous researchers are attempting to render it computationally tractable (Russell & Wefald 1991; Etzioni 1991; Wellman & Doyle 1992; Haddawy & Hanks 1992; Williamson & Hanks 1994), but many problems remain to be solved. In many cases, the introduction of utility models transforms planning into an optimization problem: instead of searching for some plan that satisfies the goal, the agent is seeking the best such plan. In the worst case, the agent may be forced to examine all plans to determine which one is best. In contrast, we have explored a satisficing approach: our agent will be satisfied with any plan that meets its constraints and achieves its goals. The expressive power of our constraint language is weaker than that of utility functions, but our constraints are easier to incorporate into standard planning algorithms.
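The contrast can be made concrete with a schematic sketch (ours, not the paper's code); candidate_plans, utility, achieves_goal, and is_safe are hypothetical placeholders standing in for a planner's search space and evaluation functions:

```python
# Schematic contrast between utility-based optimization and a
# satisficing approach. All names here are illustrative placeholders,
# not an API from the paper.
from typing import Callable, Iterable, Optional, TypeVar

Plan = TypeVar("Plan")

def best_plan(candidate_plans: Iterable[Plan],
              utility: Callable[[Plan], float]) -> Plan:
    # Optimization: in the worst case, every candidate must be examined
    # to certify that the returned plan really is the best one.
    return max(candidate_plans, key=utility)

def first_acceptable_plan(candidate_plans: Iterable[Plan],
                          achieves_goal: Callable[[Plan], bool],
                          is_safe: Callable[[Plan], bool]) -> Optional[Plan]:
    # Satisficing: stop at the first plan that achieves the goal and
    # respects the agent's constraints; no global comparison is needed.
    for plan in candidate_plans:
        if achieves_goal(plan) and is_safe(plan):
            return plan
    return None
```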
By using a general temporal logic such as that of (Shoham 1988) or (Davis 1990, Ch. 5), we could specify constraints that would ensure the agent would not cause harm. Before executing an action, we could ask the agent to prove that the action is not harmful. While elegant, this approach is computationally intractable as well. Another alternative would be to use a planner such as ILP (Allen 1991) or ZENO (Penberthy & Weld 1994), which supports temporally quantified goals. Unfortunately, at present these planners seem too inefficient for our needs.(2)

Situated action researchers might suggest that non-deliberative, reactive agents could be made "safe" by carefully engineering their interactions with the environment. Two problems confound this approach: 1) the interactions need to be engineered with respect to each goal that the agent might perform, and a general-purpose agent should handle many such goals, and 2) if different human users had different notions of harm, then the agent would need to be reengineered for each user.

Instead, we aim to make the agent's reasoning about harm more tractable by restricting the content and form of its theory of injury.(3) We adopt the standard assumptions of classical planning: the agent has complete and correct information of the initial state of the world, the agent is the sole cause of change, and action execution is atomic, indivisible, and results in effects which are deterministic and completely predictable. (The end of the paper discusses relaxing these assumptions.) On a more syntactic level, we make the additional assumption that the agent's world model is composed of ground atomic formulae. This sidesteps the ramification problem, since domain axioms are banned. Instead, we demand that individual action descriptions explicitly enumerate changes to every predicate that is affected.(4) Note, however, that we are not assuming the STRIPS representation; instead, we adopt an action language (based on ADL (Pednault 1989)) which includes universally quantified and disjunctive preconditions as well as conditional effects (Penberthy & Weld 1992).

Given the above assumptions, the next two sections define the primitives dont-disturb and restore, and explain how they should be treated by a generative planning algorithm. We are not claiming that the approach sketched below is the "right" way to design agents or to formalize Asimov's First Law. Rather, our formalization is meant to illustrate the kinds of technical issues to which Asimov's Law gives rise and how they might be solved. With this in mind, the paper concludes with a critique of our approach and a (long) list of open questions.

(2) We have also examined previous work on "plan quality" for ideas, but the bulk of that work has focused on the problem of leveraging a single action to accomplish multiple goals, thereby reducing the number of actions in, and the cost of, the plan (Pollack 1992; Wilkins 1988). While this class of optimizations is critical in domains such as database ...

(3) ... inference tractable by formulating restricted representation languages (Levesque & Brachman 1985).

Safety

Some conditions are so hazardous that our agent should never cause them. For example, we might demand that the agent never delete LaTeX files, or never handle a gun. Since these instructions hold for all times, we refer to them as dont-disturb constraints, and say that an agent is safe when it guarantees to abide by them. As in Asimov's Law, dont-disturb constraints override direct human orders. Thus, if we ask a softbot to reduce disk utilization and it can only do so by deleting valuable LaTeX files, the agent should refuse to satisfy this request.

We adopt a simple syntax: dont-disturb takes a single, function-free, logical sentence as argument. For example, one could command the agent to avoid deleting files that are not backed up on tape with the following constraint:

dont-disturb(written.to.tape(f) ∨ isa(f, file))

Free variables, such as f above, are interpreted as universally quantified. In general, a sequence of actions satisfies dont-disturb(C) if none of the actions make C false. Formally, we say that a plan satisfies a dont-disturb constraint when every consistent, totally-ordered sequence of plan actions satisfies the constraint as defined below.
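To make the semantics concrete, here is a minimal sketch, in Python, of the check just described: simulate each totally ordered sequence of actions and verify that no action falsifies C. The representation (add/delete lists, a one-element universe for f) is our simplification for illustration, not the paper's; the paper's planner detects such violations during plan construction via subgoal-interaction machinery rather than by enumerating linearizations.

```python
# Minimal sketch of checking dont-disturb(C) over a plan, assuming a
# world model of ground atoms and actions given by add/delete lists.
# Everything here (Action, plan_is_safe, the example atoms) is our
# illustration, not the representation used in the paper.
from dataclasses import dataclass, field
from itertools import permutations
from typing import Callable, FrozenSet, Iterable

State = FrozenSet[str]  # a set of ground atoms, e.g. {"isa(f1,file)"}

@dataclass(frozen=True)
class Action:
    name: str
    adds: FrozenSet[str] = field(default_factory=frozenset)
    deletes: FrozenSet[str] = field(default_factory=frozenset)

    def apply(self, state: State) -> State:
        return frozenset((state - self.deletes) | self.adds)

def plan_is_safe(initial: State, steps: Iterable[Action],
                 constraint: Callable[[State], bool]) -> bool:
    # A totally ordered plan satisfies dont-disturb(C) iff no action
    # along the way makes C false.
    state = initial
    for action in steps:
        state = action.apply(state)
        if not constraint(state):
            return False
    return True

def partial_plan_is_safe(initial, steps, constraint):
    # For a partially ordered plan with no ordering constraints, every
    # linearization must be safe. Enumerating permutations is only for
    # illustration; detecting such threats in polynomial time is the
    # point of the paper's planning machinery.
    return all(plan_is_safe(initial, order, constraint)
               for order in permutations(steps))

# Example: C = written.to.tape(f) v isa(f, file), with f ranging over
# the illustrative universe {f1}: deleting f1 falsifies C unless f1
# was first written to tape.
def constraint(state: State) -> bool:
    return all(f"written.to.tape({f})" in state or f"isa({f},file)" in state
               for f in ["f1"])

initial: State = frozenset({"isa(f1,file)"})
backup = Action("backup(f1)", adds=frozenset({"written.to.tape(f1)"}))
delete = Action("delete(f1)", deletes=frozenset({"isa(f1,file)"}))

assert plan_is_safe(initial, [backup, delete], constraint)   # safe order
assert not plan_is_safe(initial, [delete], constraint)       # violates C
```

Under these assumptions, the backup-then-delete plan is safe because written.to.tape(f1) keeps C true after the deletion, while deleting the unbacked-up file makes both disjuncts false.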