Why Machine Ethics?
Colin Allen, Indiana University
Wendell Wallach, Yale University
Iva Smit, E&E Consultants

IEEE Intelligent Systems, vol. 21, no. 4, July/August 2006. © 2006 IEEE.

Machine ethics isn't merely science fiction; it's a topic that requires serious consideration, given the rapid emergence of increasingly complex autonomous software agents and robots.

A runaway trolley is approaching a fork in the tracks. If the trolley runs on its current track, it will kill a work crew of five. If the driver steers the train down the other branch, the trolley will kill a lone worker. If you were driving the trolley, what would you do? What would a computer or robot do?

Trolley cases, first introduced by philosopher Philippa Foot in 1967 [1] and a staple of introductory ethics courses, have multiplied in the past four decades. What if it's a bystander, rather than the driver, who has the power to switch the trolley's course? What if preventing the five deaths requires pushing another spectator off a bridge onto the tracks? These variants evoke different intuitive responses.

Given the advent of modern "driverless" train systems, which are now common at airports and beginning to appear in more complicated situations such as the London Underground and the Paris and Copenhagen Metro systems, could trolley cases be one of the first frontiers for machine ethics? Machine ethics (also known as machine morality, artificial morality, or computational ethics) is an emerging field that seeks to implement moral decision-making faculties in computers and robots. Is it too soon to be broaching this topic? We don't think so.

Driverless systems put machines in the position of making split-second decisions that could have life or death implications. As a rail network's complexity increases, the likelihood of dilemmas not unlike the basic trolley case also increases. How, for example, do we want our automated systems to compute where to steer an out-of-control train? Suppose our driverless train knew that there were five railroad workers on one track and a child on the other. Would we want the system to factor this information into its decision?

The driverless trains of today are, of course, ethically oblivious. Can and should software engineers attempt to enhance their software systems to explicitly represent ethical dimensions of situations in which decisions must be made? It's easy to argue from a position of ignorance that such a goal is impossible to achieve. But precisely what are the challenges and obstacles for implementing machine ethics? The computer revolution is continuing to promote reliance on automation, and autonomous systems are coming whether we like it or not. Will they be ethical?
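To make the engineering question concrete, here is a minimal sketch, not anything the article itself proposes: the difference between today's ethically oblivious controller and one that explicitly represents a single, contestable ethical rule. The `Track` type, the occupancy counts, and the minimize-harm rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    people_present: int      # what the train's sensors report
    is_current_route: bool

def oblivious_controller(tracks):
    """Today's systems: stay the course; who is on the tracks never enters in."""
    return next(t for t in tracks if t.is_current_route)

def harm_minimizing_controller(tracks):
    """One explicit (and contestable) rule: steer toward the fewest people."""
    return min(tracks, key=lambda t: t.people_present)

if __name__ == "__main__":
    fork = [
        Track("main line", people_present=5, is_current_route=True),
        Track("siding", people_present=1, is_current_route=False),
    ]
    print("Oblivious choice:      ", oblivious_controller(fork).name)
    print("Harm-minimizing choice:", harm_minimizing_controller(fork).name)
```

Even this toy makes the point: the moment the controller is given `people_present`, someone must decide whether a child and a railroad worker count the same, and that is a moral judgment, not an engineering one.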
Good and bad artificial agents?

This isn't about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on our lives, not all of them good. But no, we don't believe that increasing reliance on autonomous systems will undermine our basic humanity. Neither will advanced robots enslave or exterminate us, in the best traditions of science fiction. We humans have always adapted to our technological products, and the benefits of having autonomous machines will most likely outweigh the costs.

But optimism doesn't come for free. We can't just sit back and hope things will turn out for the best. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. A search engine, for example, might collect data that's legally considered to be private, unbeknownst to the user who initiated the query.
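As a hedged illustration of that search-engine example (the regexes, the `scrub` helper, and the scrub-before-log policy are our assumptions, not any real engine's behavior), an ethically oblivious logger stores the raw query, while a minimally value-sensitive one redacts obvious personal identifiers first:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(query: str) -> str:
    """Redact obviously personal identifiers before the query is stored."""
    query = EMAIL.sub("[email]", query)
    query = SSN.sub("[ssn]", query)
    return query

def log_query(query: str, log: list) -> None:
    # The ethically oblivious version would simply be: log.append(query)
    log.append(scrub(query))

if __name__ == "__main__":
    log = []
    log_query("mortgage help for jane.doe@example.com, SSN 123-45-6789", log)
    print(log)  # ['mortgage help for [email], SSN [ssn]']
```

The design point is that the value judgment (what counts as private, and whether it may be retained at all) is made explicit in code rather than left as an accidental side effect of logging.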
Furthermore, with the advent of each new technology, futuristic speculation raises public concerns regarding potential dangers (see the "Skeptics of Driverless Trains" sidebar). In the case of AI and robotics, fearful scenarios range from the future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots. While some of these fears are farfetched, they underscore possible consequences of poorly designed technology. To ensure that the public feels comfortable accepting scientific progress and using new tools and products, we'll need to keep them informed about new technologies and reassure them that design engineers have anticipated potential issues and accommodated for them.

New technologies in the fields of AI, genomics, and nanotechnology will combine in a myriad of unforeseeable ways to offer promise in everything from increasing productivity to curing diseases. However, we'll need to integrate artificial moral agents (AMAs) into these new technologies to manage their complexity. These AMAs should be able to make decisions that honor privacy, uphold shared ethical standards, protect civil rights and individual liberty, and further the welfare of others. Designing such value-sensitive AMAs won't be easy, but it's necessary and inevitable.

To avoid the bad consequences of autonomous artificial agents, we'll need to direct considerable effort toward designing agents whose decisions and actions might be considered good. […]filling tasks for the robot's owner? (Should this be an owner-specified setting?) Should an autonomous agent simply abdicate responsibility to human controllers if all the options it discerns might cause harm to humans? (If so, is it sufficiently autonomous?)

When we talk about what's good in this sense, we enter the domain of ethics and morality. It's important to defer questions about whether a machine can be genuinely ethical or even genuinely autonomous—questions that typically presume that a genuine ethical agent acts intentionally, autonomously, and freely. The present engineering challenge concerns only artificial morality: ways of getting artificial agents to act as if they were moral agents. If we're to trust multipurpose machines, operating untethered from their designers or owners and programmed to respond flexibly in real or virtual environments, we must be confident that their behavior satisfies appropriate norms. This means something more than traditional product safety.

Of course, robots that short-circuit and cause fires are no more tolerable than toast[…]

Making ethics explicit

Until recently, designers didn't consider the ways in which they implicitly embedded values in the technologies they produced. An important achievement of ethicists has been to help engineers become aware of their work's ethical dimensions. There's now a movement to bring more attention to unintended consequences resulting from the adoption of information technology. For example, the ease with which information can be copied using computers has undermined legal standards for intellectual-property rights and forced a reevaluation of copyright law. Helen Nissenbaum, who has been at the forefront of this movement, pointed out the interplay between values and technology when she wrote, "In such cases, we cannot simply align the world with the values and principles we adhered to prior to the advent of technological challenges. Rather, we must grapple with the new demands that changes wrought by the presence and use of information technology have placed on values and moral principles." [2]

Attention to the values that are uncon[…]

Sidebar: Skeptics of Driverless Trains

Engineers insist that driverless train systems are safe—safer than human drivers, in fact. But the public has always been skeptical. The London Underground first tested driverless trains more than four decades ago, in April 1964. But driverless trains faced political resistance from rail workers who believed their jobs were threatened and from passengers who weren't entirely convinced of the safety claims, so London Transport continued to give human drivers responsibility for driving the trains through the stations. But computers are now driving Central Line trains in London through stations, even though human drivers remain in the cab. Most passengers likely believe that human drivers are more flexible and able to deal with emergencies than the computerized controllers. But this might be human hubris. Morten Sondergaard, in charge of safety for the Copenhagen Metro, asserts that "Automatic trains are safe and more flexible in fall-back situations because of the speed with which timetables can be changed." [1]

Nevertheless, despite advances in technology, passengers remain skeptical. Parisian planners claimed that the only problems with driverless trains are "political, not technical." [1] No doubt, some resistance can be overcome simply by installing driverless trains and establishing a safety record, as is already beginning to happen in Korea, Barcelona, Paris, Copenhagen, and London. But we feel sure that most passengers would still think that there are crisis situations beyond the scope of any programming, where human judgment would be preferred. In some of those situations, the relevant judgment would involve ethical considerations.

Reference
1. M. Knutton, "The Future Lies in Driverless Trains," Int'l Railway J., 1 June 2002; www.findarticles.com/p/articles/mi_m0BQQ/is_6_42/ai_88099079.
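Returning to the question posed in the main text, whether an autonomous agent should simply abdicate responsibility to human controllers when every option it discerns might cause harm: here is a minimal sketch of one such deferral policy. The harm scores, the threshold, and the hand-back-control behavior are invented for illustration; the article's parenthetical worry (is an agent that always defers sufficiently autonomous?) shows up here as the problem of tuning `harm_threshold`.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    description: str
    estimated_harm: float  # 0.0 = harmless, 1.0 = certain serious harm

def choose(options: List[Option], harm_threshold: float = 0.1) -> Optional[Option]:
    """Pick the least harmful option; if even that one crosses the
    threshold, return None, i.e., hand control back to a human."""
    best = min(options, key=lambda o: o.estimated_harm)
    if best.estimated_harm > harm_threshold:
        return None  # abdicate responsibility to the human controller
    return best

if __name__ == "__main__":
    options = [Option("brake hard", 0.4), Option("switch tracks", 0.6)]
    decision = choose(options)
    print("Defer to human" if decision is None else decision.description)
```

Set the threshold near zero and the agent defers constantly; set it high and it acts alone in exactly the situations where human judgment would be preferred. Neither extreme can be chosen on purely technical grounds.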