Liability Law for Present and Future Robotics Technology

Trevor N. White and Seth D. Baum
Global Catastrophic Risk Institute
http://gcrinstitute.org * http://sethbaum.com

Published in Robot Ethics 2.0, edited by Patrick Lin, George Bekey, Keith Abney, and Ryan Jenkins, Oxford University Press, 2017, pages 66-79. This version 10 October 2017.

Abstract

Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, robots have been implicated in a variety of harms. However, current and near-future robots pose no significant challenge for liability law: they can be readily handled with existing liability law or minor variations thereof. We show this through examples from medical technology, drones, and consumer robotics. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable. Liability law for robot persons could draw on certain precedents, such as animal liability. However, legal innovations will be needed, in particular for determining which robots merit legal personhood. Finally, a major challenge comes from the possibility of future robots that could cause major global catastrophe. As with other global catastrophic risks, liability law could not apply, because there would be no post-catastrophe legal system to impose liability. Instead, law must be based on pre-catastrophe precautionary measures.

Introduction

In June 2005, a surgical robot at a hospital in Philadelphia malfunctioned during a prostate surgery, possibly injuring the patient.1 In June 2015, a worker at a Volkswagen plant in Germany was crushed to death by a robot that was part of the assembly process.2 In November 2015, a self-driving car in California made a complete stop at an intersection and then was hit by a car with a human driver, apparently because the self-driving car followed traffic law but not traffic norms.3 These are just some of the ways that robots are already implicated in harms. As robots become more sophisticated and more widely adopted, the potential for harm will get even larger. Robots even show potential for causing harm at massive catastrophic scales.

How should robot harms be governed? In general, liability law governs harms in which someone or something else is responsible. Liability law is used to punish those who have caused harms, particularly those that could have and should have been avoided. The threat of punishment further serves to discourage those who could cause harm. Liability law is thus an important legal tool for serving justice and advancing the general welfare of society and its members.

The value of liability law holds for robotics just as it does for any other harm-causing technology. But robots are not just any other technology. Robots are (or at least can be) intelligent, autonomous actors moving about in the physical world. They can cause harms through actions that they choose to make, actions that no human told them to make and, indeed, that may surprise their human creators. Perhaps robots should be liable for their harms.
This is a historic moment: humans creating technology that could potentially be liable for its own actions. Furthermore, robots can have the strength of industrial machinery and the intelligence of advanced computer systems. Robots can also be mass produced and connected to each other and to other technological systems. This creates the potential for robots to cause unusually great harm.

This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Three types of cases are distinguished, each with very different implications. First are cases in which some human party is liable, such as the manufacturer or the human using the robot. These cases pose no novel challenges for liability law: they are handled the same way as with other technologies in comparable circumstances. Second are cases in which the robot itself is liable. These cases require dramatic revision to liability law, including standards to assess when robots can be held liable and principles for dividing liability between the robot and the humans who designed, built, and used it. Third are cases in which the robot poses a major catastrophic risk. These cases merit separate attention because a sufficiently large catastrophe would destroy the legal system and thus the potential to hold anyone or anything liable.

The three types of cases differ across two dimensions, as shown in Figure 1. One dimension is the robot’s degree of legal personhood, meaning the extent to which a robot shows attributes that qualify it for independent standing in a court of law. As we discuss, a robot can be held liable in the eyes of the law to the extent that it merits legal personhood. The other dimension shows the size of the harm the robot causes. Harms of extreme severity cannot be handled by liability law. However, there is no strict distinction between the three cases. Instead, there is a continuum, as shown by the regions in which a robot can have partial liability or more-than-human liability and in which liability law works to a limited extent.

Figure 1. Classification scheme for the applicability of liability law to various sizes of harms caused by various types of robots.

I - A Human Party Is Liable

In a detailed study of robot law, Weaver (2014, 21–27) identifies four types of parties that could be liable for harm caused by a robot: (1) people who were using the robot or overseeing its use; (2) other people who were not using the robot but otherwise came into contact with it, which can include people harmed by the robot; (3) some party involved in the robot’s production and distribution, such as the company that manufactured the robot; or (4) the robot itself.

For the first three types of parties, liability applies the same as for other technologies. A surgical robot, for example, can be misused by the surgeon (type 1), bumped into by a hospital visitor who wandered into a restricted area (type 2), or poorly built by the manufacturer (type 3). The same situations can also arise for other, non-robotic medical technologies. In each case, the application of liability law is straightforward. Or rather, to the extent that the application of liability law is not straightforward, the challenges faced are familiar. The fourth type—when the robot is liable—is the only one that poses novel challenges for liability law.
To see this, consider one of the thornier cases of robot liability, that of lethal autonomous weapon systems (LAWSs). These are weapons that decide for themselves whom to kill. Sparrow (2007) argues that there could be no one liable for certain LAWS harms—for example, if a LAWS decides to kill civilians or soldiers who have surrendered. A sufficiently autonomous LAWS could make its own decisions, regardless of how humans designed and deployed it. In this case, Sparrow argues, it would be unfair to hold the designer or deployer liable (or the manufacturer or other human parties). It might further be inappropriate to hold the robot itself liable, if it is not sufficiently advanced in legally relevant ways (more on this in Section II). In this case, who or what to hold liable is ambiguous.

This ambiguous liability is indeed a challenge, but it is a familiar one. In the military context, precedents include child soldiers (Sparrow 2007, 73–74) and landmines (Hammond 2015, note 62). Child soldiers can make their own decisions, disobey orders, and cause harm in the process. Landmines can linger long after a conflict, making it difficult or impossible to identify who is responsible for their placement. In both cases, it can be difficult or perhaps impossible to determine who is liable. So too for LAWSs. This ambiguous liability can be a reason to avoid or even ban the use of child soldiers, landmines, and LAWSs in armed conflict.

Regardless, even for this relatively thorny case of robot liability, robotics technology raises no new challenges for liability law. In the United States, agencies such as the Department of Defense produce regulations on the use of LAWSs that are not dramatically different from those for other weapons. Internationally, bodies like the UN’s International Court of Justice could hold a state liable for authorizing drone strikes that caused excessive civilian casualties. Meanwhile, commercial drones can be regulated as other aircraft are now: by a combination of FAA rules and corporate oversight by their creators (McFarland 2015). The handling of such relatively simple robots under liability law will thus be familiar if not straightforward.

The above LAWS examples also resemble how liability law handles non-human animals, which has prompted proposals for robots to be given legal status similar to that of non-human animals (e.g., Kelley et al. 2010). Suppose someone gets a pet dog and the dog then bites someone, despite the owner trying to stop it. If the owner had absolutely no idea the dog would bite someone, then she would not be liable for that bite. However, having seen the dog bite someone, she now knows the dog is a biter and is expected to exercise caution with it in the future.