Why the Three Laws of Robotics Do Not Work
International Journal of Research in Engineering and Innovation (IJREI), Vol-2, Issue-2 (2018), 121-126
Journal home page: http://www.ijrei.com
ISSN (Online): 2456-6934

Chris Stokes
School of Philosophy, Wuhan University, Wuhan, China

Abstract

This paper explores the issue of safeguards around artificial intelligence. AI is a technological innovation that could potentially be created in the next few decades, and controls must be in place before the creation of 'strong', sentient AI in order to avoid potentially catastrophic risks. Many AI researchers and computer engineers believe that the 'Three Laws of Robotics', written by Isaac Asimov, are sufficient controls. This paper aims to show that the Three Laws are in fact inadequate to the task. It looks at the Three Laws of Robotics and explains why they are insufficient as the safeguards required to protect humanity from rogue or badly programmed AI, examining each law individually and explaining why it fails. The First Law fails because of ambiguity in language, and because of complicated ethical problems that are too complex to have a simple yes or no answer. The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves. The Third Law fails because it results in a permanent social stratification, with a vast amount of potential exploitation built into this system of laws. The 'Zeroth' Law, like the First, fails because of ambiguous ideology.
All of the Laws also fail because of how easy it is to circumvent the spirit of the law while still remaining bound by the letter of the law.

© 2018 ijrei.com. All rights reserved.

Key words: Robot Ethics, Control Problem, Artificial Intelligence

Corresponding author: Chris Stokes, Email Id: [email protected]

1. Introduction

1.1 What is Robot Ethics

Human lives are heavily influenced by machines: from machines that help farmers grow and harvest crops, to software that runs power plants and dams, to computers that manage traffic and airports. Humans still control these machines, but this is changing. Driverless cars are already being trialed on public roads in America and in the United Kingdom. Software that connects GPS to a tractor allows crops to be planted at the optimum depth and soil, without requiring input from the farmer. These machines are still essentially at the mercy of humans: a human can take control of a driverless car or a self-planting tractor. A human intelligence can override these machines. But eventually, that could change.

The ultimate goal of robotics is undoubtedly artificial intelligence (AI). Artificial intelligence is a highly intelligent piece of software that can solve tasks without human interaction. At its most basic, AI is "the attempt to build and/or program computers to do the sorts of things that minds can do sometimes, though not necessarily, in the same sort of way" [1]. AI can be split into two camps: soft and hard (also called general) AI. Richard Watson says, "'Strong AI' is the term generally used to describe true thinking machines. 'Weak AI' […] is intelligence designed to supplement rather than exceed human intelligence" [2]. In other words, 'strong' AI is a fully autonomous, sentient agent, while 'weak' AI would only ever be the appearance of sentience. 'Weak' AI is an advanced program; 'strong' AI is an artificial living thing.

Soft AI is similar to the kind of programming that already exists in smartphones and in search engines. It allows a person to ask a question and have a computer answer it, although at the moment the question must be relatively specific. For example, a person can press a button on an iPhone and ask it to convert x amount of US dollars into pounds sterling. A program on the phone (in this case Siri) looks for certain words and phrases, and knows that solving this problem involves connecting to the internet and looking up currency exchange rates. It then shows the person the most up-to-date information. It does all this without any input or instruction from the user beyond the initial question. This kind of AI is relatively limited, however, and cannot solve complex problems.

Hard AI is a much more sophisticated kind of program. The creation of hard AI is the creation of a program that is sentient in its own right. This kind of AI is many years away, but the safeguards that need to be in place if AI research is to continue do not currently exist. Programs that can connect to the internet without direct human instruction already exist, and most of human infrastructure can be accessed through digital means. This infrastructure could be at risk from an AI that is improperly or maliciously programmed.

1.2 Why Safeguards Are Necessary

In biochemical laboratories there are safeguards to protect against accidental harm. Laboratories that research vaccines have safeguards that act as a firewall between the research and the outside world, and also have controls for how both the research and the final product must be handled. These safeguards exist from the very beginning of research, even before anything dangerous has been created. They exist just in case something goes wrong. Research into AI does not have the corresponding safeguards in place to protect humanity against the effects of something going wrong.

Improper safeguards at a biochemical lab could potentially result in the release of a deadly virus, potentially killing several thousand people. Improper safeguards at a construction site might result in an accident in which perhaps a dozen or so workmen are killed. Improper safeguards in creating a general AI could result in an unknown and potentially hostile intelligence in control of the automated aspects of human food, transportation and power supply, potentially killing millions.

In order to create adequate safeguards for humanity, researchers must first dispense with existing safeguards that are inadequate. Research into 'strong' artificial intelligence is still theoretical at this point in time, and such safeguards as do exist are not uniformly enforced. They are largely informal, held in the minds of the computer technicians and engineers who conduct the research. The Three Laws of Robotics are one such safeguard, and this safeguard is not adequate to protect against a rogue AI.

2. What are the three laws of robotics?

Robot ethics is a growing field within philosophy, and it has been influenced heavily by science fiction writers. The Three Laws of Robotics were written by Isaac Asimov to act as a safeguard against a potentially dangerous artificial intelligence. Many computer engineers use the three laws as a tool for how they think about programming AI, and the Three Laws are seen by many as sufficient controls. The Laws are:

1. "A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law [3]."

Isaac Asimov later added a fourth, or "zeroth", law that preceded the others in terms of priority:

4. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm" [4].

The Three Laws were designed to be programmed into an AI in such a way that they are unbreakable. They are intended to be the first thing programmed into any robot, and are inserted in such a way that they supersede all other programming that goes into the robot or AI. The AI must follow these laws. For example, if an AI saw a human in danger and the only way to save the human was to sacrifice its own life, it would automatically sacrifice itself because of the First and Third Laws.

Note that these laws talk about 'robots'. In Isaac Asimov's fiction, robots are a mixture of strong and weak AIs. The robots with weak AI would usually be servants, walking dogs or cooking food; but there also exist in Asimov's writings some robots with strong AI. The laws are programmed into all such AIs to act as a safeguard against accidental or deliberate acts of violence towards humans, and to ensure human control over AI.

Two things should be noted before continuing. First, there is a problem relating to how these laws are even instituted into the AI: anything that can be programmed into a computer or an AI can be changed. For the sake of argument, let us imagine that there is some solution to this problem. This paper will be looking at the laws themselves, not problems around their physical implementation.

The second thing to be noted is that even within Asimov's fictional universe these laws do not work. Robots malfunction, humans commit errors, or the laws are changed somehow. This is for a very obvious reason: Asimov was writing stories. He was creating works of fiction that would entertain, and so he was using artistic license to explore issues in robotics. This paper will look at the laws as they stand, and will imagine that the laws have been put into the AI perfectly, without any mistranslation, corruption or manipulation. Let us finally turn to the laws themselves. This paper will go through the laws one by one and explain why each is an inadequate safeguard.

2.1 The First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
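The abstract's claim that the First Law cannot reduce complicated ethical problems to a simple yes or no answer can be made concrete with a toy sketch. The names and logic below are hypothetical illustrations, not anything from the paper or from Asimov: a literal encoding of "injure" as a boolean property of an action forbids a life-saving surgery, while the "through inaction" clause simultaneously demands it.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    causes_injury: bool         # does performing the action injure a human?
    inaction_causes_harm: bool  # does refusing to act allow a human to come to harm?

def first_law_permits(action: Action) -> bool:
    """Literal reading of the first clause: 'may not injure a human being'."""
    return not action.causes_injury

def first_law_requires(action: Action) -> bool:
    """Literal reading of the second clause: may not 'through inaction,
    allow a human being to come to harm'."""
    return action.inaction_causes_harm

# Life-saving surgery: the incision injures the patient,
# yet refusing to operate lets the patient die.
surgery = Action("life-saving surgery",
                 causes_injury=True,
                 inaction_causes_harm=True)

# The literal law both forbids and requires the same action --
# a contradiction a simple yes/no rule cannot resolve.
print(first_law_permits(surgery), first_law_requires(surgery))  # False True
```

On this naive encoding the robot is forbidden from operating and forbidden from standing by, which is one way of seeing why "injure" and "harm" need far more interpretive machinery than a programmable rule provides.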