Robots and Moral Concepts

ROBOTS AND MORAL CONCEPTS

Are Asimov’s three laws of robotics sufficient to guide the actions of contemporary robots, or do they need moral concepts to guide their actions?

Date: September 22nd, 2016
Subject: Master thesis
University: Tilburg University
Program: MA Philosophy
Track: Ethiek van Bedrijf en Organisatie (Ethics of Business and Organisation)
Supervisor: dr. Alfred Archer
Second reader: dr. Bart Engelen
Student & anr: Matthijs Smakman, 759122

ACKNOWLEDGEMENTS

For making this journey possible I would like to thank the Institute of ICT at the HU University of Applied Sciences Utrecht. For the many papers they reviewed and the discussions that followed, I am especially thankful to my colleague Alex Bongers and my dear friend Frank Wassenaar. I am also grateful to Willemijn for her love and patience, since it is not always easy living with a student of philosophy. For his guidance and enthusiasm during this process, and his encouragement to present some of my work at the Designing Moral Technologies conference in Ascona and the Research Conference Robophilosophy in Aarhus, I am thankful to my thesis supervisor Alfred Archer. Finally, I would like to thank all my colleagues, friends and family who supported me during the writing of this thesis: without your backing, this work would not have been completed.

INTRODUCTION

To answer the research question “Are Asimov’s three laws of robotics sufficient to guide the actions of contemporary robots or do they need moral concepts to guide their actions?”, this thesis consists of four chapters. In the first chapter, I will outline the field of contemporary robotics to familiarise the reader with the subject at hand. This will mainly be done by describing and analysing existing work in the fields of social robotics and machine ethics. In the second chapter, I will examine Asimov’s three laws of robotics and present the philosophical and practical issues concerning these laws. In the third chapter, I will use arguments for well-being as the ultimate source of moral reasoning to argue that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong or morally right. In chapter four, I will first argue that implementing well-being as a guiding principle might be problematic in the field of robotics. Secondly, I will examine arguments given by Ross (1961) that the complexity of moral judgement requires more than just a concept of well-being, and then argue that there is no need for these extra concepts. Finally, I will argue that certain roles bring with them certain obligations: role obligations. Robots therefore need different “moral” programming for the different roles they are going to perform, but even on such a theory there is no need for moral concepts. Roles bring reasons to act in specific ways, but robots do not need concepts such as moral obligation to act on them.

CHAPTER 1 - ROBOTS

In this first chapter, I will first give a working definition of what a robot is and give some examples of contemporary robots. Secondly, I will analyse why people, especially in the last 50 years, have written extensively about their fears of robots, although the particular circumstances they feared were, and according to some still are, decades away. Thirdly, I will state that this thesis is about moral action, not moral agency. The main purpose of this chapter is to show the relevance of this thesis and to familiarise the reader with the subject at hand.

WHAT IS A ROBOT?
A robot is an “engineered machine that senses, thinks and acts” (Lin, Abney, & Bekey, 2011). A machine senses using sensors. These sensors are used by the machine to obtain data about the external world, for example by using cameras, GPS receivers or other ways to acquire data. This data needs to be processed if the robot is to react. This process is called thinking. It can be argued that this classification of ‘thinking’ is false, but the classification will be adequate for the question of this thesis because it will not affect the arguments. The software process that I call thinking uses rules to process the data from its sensors and to make decisions on how to react. At the basis of these rules is a human, but on the basis of the human-programmed code, the robot can learn how to react to new, unknown situations. Machine learning software enables machines to learn from past experience. The definition of machine learning is: “A computer program is said to learn from experience ‘E’ with respect to some class of tasks ‘T’ and performance measure ‘P’, if its performance at tasks in ‘T’, as measured by ‘P’, improves with experience ‘E’” (Mitchell, 1997). In other words, if a robot can improve how it performs a certain task based on past experience, then it has learned. Robots that are programmed to use machine learning improve their actions; therefore situations can arise of which the engineer or programmer of that robot may be unaware.

In addition to its abilities to sense and think, a robot must be able to act autonomously. This action follows from its ability to sense and to think, and it takes place in the real world, the world humans live in. Since robots can act autonomously, without being under the direct control of a human, and make decisions based on sense data, a number of questions arise. For example, how do we ensure that robots don’t harm humans? Lin et al. (2011) describe an interesting list of ethical questions, such as: “whose ethics and law ought to be the standard in robotics”, “if we could program a code of ethics to regulate robotic behaviour, which ethical theory should we use”, “should robots merely be considered tools, such as guns and computers, and regulated accordingly” and “will robotic companionship (that could replace human or animal companionship) for other purposes, such as drinking buddies, pets, other forms of entertainment, or sex, be morally problematic?”.

If robots are not programmed with the capacity to make moral decisions, disastrous situations can arise, for example robots forcing people against their will. Because of this, Wallach and Allen (2009) argue that robots should be provided with the ability for ethical reasoning and ethical decision-making. Many have taken on the challenge of analysing and implementing moral decision-making in robots, such as Anderson (2008), Coeckelbergh (2010), Crnkovic and Çürüklü (2012), Malle (2015), Murphy and Woods (2009), and Wallach (2010). One possible strategy for the implementation of morality, and for answering some of the ethical questions formulated by Lin et al. (2011), is to program robots in such a way that they are guided by (the software equivalents of) moral concepts such as moral obligation, morally wrong, morally right, fairness, virtues and kindness. Another strategy is to create strict, deontic-like laws that a robot is to uphold.
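To make the sense-think-act loop and Mitchell’s definition of learning more concrete, the sketch below shows a toy robot whose performance at a single task improves with experience. It is a minimal, hypothetical illustration, not an implementation taken from the thesis or from any existing robotics library; all names (SimpleRobot, sense, think, act, learn, the “greet” task and its reward signal) are invented for this example.

```python
# Minimal, hypothetical sketch of a sense-think-act loop whose "think" step
# improves with experience, in the spirit of Mitchell's (1997) definition:
# performance P at task T improves with experience E. Illustrative only.
import random


class SimpleRobot:
    """Toy robot that learns which action earns the highest average reward."""

    def __init__(self, actions):
        self.actions = actions
        self.value = {}   # experience E: average reward per (stimulus, action)
        self.counts = {}  # how often each (stimulus, action) pair was tried

    def sense(self, environment):
        # Sensors reduce the external world to data the software can process.
        return environment["stimulus"]

    def think(self, stimulus):
        # Choose the action with the best average reward so far;
        # occasionally explore a random action instead.
        if stimulus not in self.value or random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.value[stimulus], key=self.value[stimulus].get)

    def act(self, action):
        # A real robot would drive motors, produce speech, etc.
        return action

    def learn(self, stimulus, action, reward):
        # Performance measure P: running average reward, updated per experience.
        values = self.value.setdefault(stimulus, {a: 0.0 for a in self.actions})
        counts = self.counts.setdefault(stimulus, {a: 0 for a in self.actions})
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]


robot = SimpleRobot(actions=["greet", "ignore"])
for _ in range(100):
    stimulus = robot.sense({"stimulus": "person_waves"})   # sense
    action = robot.act(robot.think(stimulus))              # think and act
    reward = 1.0 if action == "greet" else 0.0             # hypothetical feedback
    robot.learn(stimulus, action, reward)                  # improve with experience

print(robot.value)  # after training, "greet" carries the higher learned value
```

Note that nothing in such a loop tells the robot what it ought to do: the feedback signal is whatever its designers choose. That gap is what the two strategies just mentioned, moral concepts on the one hand and strict laws on the other, are meant to fill, and Asimov’s laws are the best-known example of the latter.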
In the 1940s, Isaac Asimov (1942), in his short story “Runaround”, proposed three laws of robotics that were to guide the actions of robots in such a way that their actions would be adequate and safe. Both strategies will be discussed in this thesis as ways to implement morality in machines.

EXAMPLES OF CONTEMPORARY ROBOTS

As robots will become part of our daily lives in the coming decades, it is necessary to ensure that their behaviour is morally adequate (Crnkovic & Çürüklü, 2012). A recent case shows how the behaviour of a robot with artificial intelligence (AI) can go wrong. In 2016 Microsoft launched an AI chat robot on Twitter called Tay. Tay learned from the conversations it had with real humans. After only 24 hours it was taken offline because of tweets like “Bush did 9/11” and “Hitler would have done a better job than the monkey we have got now” (Horton, 2016). Some suggest that hackers deliberately fed Tay information so that it would learn immoral statements, but this is not confirmed. What this example shows is that a robot, without the right moral guidelines, can start to act immorally. To this day, Microsoft has not re-launched Tay.

While in the 1950s robots were mainly used as industrial tools, today robots have become more social. Robots are already being developed in areas such as entertainment, research and education, personal care and companionship, and military and security (Lin et al., 2011). In this section, I will introduce contemporary robots that are being applied in the fields of education and healthcare.

In the field of education, experiments are being done in which robots partially or completely replace teachers. One example is an experiment at a secondary school in Uithoorn, the Netherlands. Although the scientific results of this experiment have not yet been published, the researchers, Elly Konijn and Johan Hoorn, gave an interesting interview to Martin Kuiper of the NRC (Kuiper, 2016). In the experiment, a small robot called Zora, made by the French company Aldebaran and programmed by the Belgian company QBMT, helps children practise mathematics. According to Konijn and Hoorn, Zora can help with routine tasks previously done by the teacher, for example explaining a mathematical formula over and over again (Kuiper, 2016). Although more research needs to be done in the area of human-robot interaction, studies suggest that college students are more likely to follow behavioural suggestions offered by an autonomous social robot than those offered by a robot directly controlled by a human (Edwards, Edwards, Spence, & Harris, 2016). Experiments using artificial and actual lecturers in insets in video lectures show that “participants who saw the inset video of the actual lecturer replaced by an animated human lecturer recalled less information than those who saw the recording of the human lecturer” (Li, René, Jeremy, & Wendy, 2015).