
The Nature, Importance, and Difficulty of Machine Ethics

James H. Moor, Dartmouth College

Implementations of machine ethics might be possible in situations ranging from maintaining hospital records to overseeing disaster relief. But what is machine ethics, and how can it be?

The question of whether machine ethics exists or might exist in the future is difficult to answer if we can't agree on what counts as machine ethics. Some might argue that machine ethics obviously exists because humans are machines and humans have ethics. Others could argue that machine ethics obviously doesn't exist because ethics is simply emotional expression and machines can't have emotions.

A wide range of positions on machine ethics is possible, and a discussion of the issue could rapidly propel us into deep and unsettled philosophical issues. Perhaps understandably, few in the scientific arena pursue the issue of machine ethics. You're unlikely to find easily testable hypotheses in the murky waters of philosophy. But we can't, and shouldn't, avoid consideration of machine ethics in today's technological world.

As we expand computers' decision-making roles in practical matters, such as computers driving cars, ethical considerations are inevitable. Computer scientists and engineers must examine the possibilities for machine ethics because, knowingly or not, they've already engaged, or will soon engage, in some form of it. Before we can discuss possible implementations of machine ethics, however, we need to be clear about what we're asserting or denying.

Varieties of machine ethics

When people speak of technology and values, they're often thinking of ethical values. But not all values are ethical. For example, practical, economic, and aesthetic values don't necessarily draw on ethical considerations. A product of technology, such as a new sailboat, might be practically durable, economically expensive, and aesthetically pleasing, absent consideration of any ethical values. We routinely evaluate technology from these nonethical normative viewpoints. Tool makers and users regularly evaluate how well tools accomplish the purposes for which they were designed. All of us, scientists and engineers included, are involved in evaluation processes requiring the selection and application of standards. In none of our professional activities can we retreat to a world of pure facts, devoid of subjective normative assessment.

By its nature, computing technology is normative. We expect programs, when executed, to proceed toward some objective: for example, to correctly compute our income taxes or keep an airplane on course. Their intended purpose serves as a norm for evaluation; that is, we assess how well the computer program calculates the tax or guides the airplane. Viewing computers as technological agents is reasonable because they do jobs on our behalf. They're normative agents in the limited sense that we can assess their performance in terms of how well they do their assigned jobs.

After we've worked with a technology for a while, the norms become second nature. But even after they've become widely accepted as the way of doing the activity properly, we can have moments of realization and see a need to establish different kinds of norms. For instance, in the early days of computing, using double digits to designate years was the standard and worked well. But, when the year 2000 approached, programmers realized that this norm needed reassessment. Or consider a distinction involving AI. In a November 1999 correspondence between Herbert Simon and Jacques Berleur,1 Berleur asked Simon for his reflections on the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which Simon attended.

Simon expressed some puzzlement as to why Trenchard More, a conference attendee, had so strongly emphasized modal logic in his thesis. Simon thought about it and then wrote back to Berleur:

My reply to you last evening left my mind nagged by the question of why Trench Moore [sic], in his thesis, placed so much emphasis on modal logics. The answer, which I thought might interest you, came to me when I awoke this morning. Viewed from a computing standpoint (that is, discovery of proofs rather than verification), a standard logic is an indeterminate algorithm: it tells you what you MAY legally do, but not what you OUGHT to do to find a proof. Moore [sic] viewed his task as building a modal logic of "oughts"—a strategy for search—on top of the standard logic of verification.

Simon was articulating what he already knew as one of the designers of the Logic Theorist, an early AI program. A theorem prover must not only generate a list of well-formed formulas but must also find a sequence of well-formed formulas constituting a proof. So, we need a procedure for doing this. Modal logic distinguishes between what's permitted and what's required. Of course, both are norms for the subject matter. But norms can have different levels of obligation, as Simon stresses through capitalization. Moreover, the norms he's suggesting aren't ethical norms. A typical theorem prover is a normative agent but not an ethical one.
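Simon's MAY/OUGHT distinction is easy to make concrete in code. The following sketch, a toy forward-chaining prover whose rules and atoms are invented for illustration, separates the inference rule that fixes what you may legally derive from the strategy that decides what you ought to try next:

```python
# A toy forward-chaining prover. The inference rule fixes what we MAY
# legally derive (Simon's "standard logic"); the strategy that orders
# the legal moves encodes what we OUGHT to try first (the search
# heuristic). The rules and atoms below are invented for illustration.

RULES = [  # (premises, conclusion)
    ({"p", "q"}, "r"),
    ({"r"}, "s"),
    ({"p"}, "t"),
]

def legal_moves(known):
    """Every conclusion we MAY derive in one step from what's known."""
    return [concl for prems, concl in RULES
            if prems <= known and concl not in known]

def prove(facts, goal, strategy=lambda moves, goal: moves):
    """Derive the goal; 'strategy' reorders the legal moves but can
    never make an illegal derivation."""
    known, steps = set(facts), []
    while goal not in known:
        moves = legal_moves(known)
        if not moves:
            return None                    # goal unreachable
        choice = strategy(moves, goal)[0]  # the OUGHT: pick a move
        known.add(choice)
        steps.append(choice)
    return steps

# A goal-directed strategy prefers conclusions matching the goal.
goal_first = lambda moves, goal: sorted(moves, key=lambda c: c != goal)
print(prove({"p", "q"}, "s", goal_first))  # ['r', 's']
```

Swapping strategies changes how quickly a proof turns up, never which derivations are legal; that is precisely the gap between permission and obligation that Simon describes.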
Ethical-impact agents

You can evaluate computing technology in terms of not only design norms (that is, whether it's doing its job appropriately) but also ethical norms.

For example, Wired magazine reported an interesting example of applied computer technology.2 Qatar is an oil-rich country in the Persian Gulf that's friendly to and influenced by the West while remaining steeped in Islamic tradition. In Qatar, these cultural traditions sometimes mix without incident; for example, women may wear Western clothing or a full veil. And sometimes the cultures conflict, as illustrated by camel racing, a pastime of the region's rich for centuries. Camel jockeys must be light: the lighter the jockey, the faster the camel. Camel owners enslave very young boys from poorer countries to ride the camels. Owners have historically mistreated the young slaves, including limiting their food to keep them lightweight. The United Nations and the US State Department have objected to this human trafficking, leaving Qatar vulnerable to economic sanctions.

The machine solution has been to develop robotic camel jockeys. The camel jockeys are about two feet high and weigh 35 pounds. The robotic jockey's right hand handles the whip, and its left handles the reins. It runs Linux, communicates at 2.4 GHz, and has a GPS-enabled camel-heart-rate monitor. As Wired explained it, "Every robot camel jockey bopping along on its improbable mount means one Sudanese boy freed from slavery and sent home." Although this eliminates the camel jockey slave problem in Qatar, it doesn't improve the economic and social conditions in places such as Sudan.

Computing technology often has important ethical impact. The young boys replaced by robotic camel jockeys are freed from slavery. Computing frees many of us from monotonous, boring jobs. It can make our lives better but can also make them worse. For example, we can conduct business online easily, but we're more vulnerable to identity theft. Machine ethics in this broad sense is close to what we've traditionally called computer ethics. In one sense of machine ethics, computers do our bidding as surrogate agents and impact ethical issues such as privacy, property, and power. However, the term is often used more restrictively. Frequently, what sparks debate is whether you can put ethics into a machine. Can a computer operate ethically because it's internally ethical in some way?

Implicit ethical agents

If you wish to put ethics into a machine, how would you do it? One way is to constrain the machine's actions to avoid unethical outcomes. You might satisfy machine ethics in this sense by creating software that implicitly supports ethical behavior, rather than by writing code containing explicit ethical maxims. The machine acts ethically because its internal functions implicitly promote ethical behavior, or at least avoid unethical behavior. Ethical behavior is the machine's nature. It has, to a limited extent, virtues.

Computers are implicit ethical agents when the machine's construction addresses safety or critical reliability concerns. For example, automated teller machines and Web banking software are agents for banks and can perform many of the tasks of human tellers, and sometimes more. Transactions involving money are ethically important. Machines must be carefully constructed to give out or transfer the correct amount of money every time a banking transaction occurs. A line of code telling the computer to be honest won't accomplish this. Aristotle suggested that humans could obtain virtues by developing habits. But with machines, we can build in the behavior without the need for a learning curve. Of course, such machine virtues are task specific and rather limited. Computers don't have the practical wisdom that Aristotle thought we use when applying our virtues.
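To make the contrast concrete, here is a minimal sketch of implicit machine ethics, with a hypothetical account class and amounts in cents. Nothing in it mentions honesty; correct payouts are simply the only behavior the checks and invariants leave reachable.

```python
class InsufficientFunds(Exception):
    """Raised rather than ever dispensing more than the balance allows."""

class Account:
    """Toy bank account: the 'ethics' lives in checks and invariants,
    not in any explicit moral rule. Hypothetical and much simplified."""

    def __init__(self, balance_cents: int):
        assert balance_cents >= 0
        self.balance_cents = balance_cents

    def withdraw(self, amount_cents: int) -> int:
        # Reject malformed requests outright rather than "adjusting" them.
        if amount_cents <= 0:
            raise ValueError("withdrawal must be a positive amount")
        if amount_cents > self.balance_cents:
            raise InsufficientFunds("balance too low")
        self.balance_cents -= amount_cents
        assert self.balance_cents >= 0   # invariant: never overdraw
        return amount_cents              # dispense exactly what was debited

acct = Account(10_000)            # $100.00
dispensed = acct.withdraw(2_500)  # dispenses $25.00; balance is now $75.00
```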

Another example of a machine that's an implicit ethical agent is an airplane's automatic pilot. If an airline promises the plane's passengers a destination, the plane must arrive at that destination on time and safely. These are ethical outcomes that engineers build into the automatic pilot. Other built-in devices warn humans or machines if an object is too close or the fuel supply is low. Or, consider pharmacy software that checks for and reports on drug interactions. Doctor and pharmacist duties of care (legal and ethical obligations) require that the drugs prescribed do more good than harm. Software with elaborate medication databases helps them perform those duties responsibly.
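The idea behind such a checker can be sketched in a few lines. The two-entry "database" here is made up for illustration and is not medical advice; a real system would use a curated clinical database with severities and dosing context.

```python
# Known interactions stored as unordered pairs. Entries are
# illustrative placeholders only.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"drug_a", "drug_b"}): "example interaction",
}

def check_prescription(current_meds, new_drug):
    """List any known interactions between a new drug and current meds."""
    return [(med, INTERACTIONS[frozenset({med, new_drug})])
            for med in current_meds
            if frozenset({med, new_drug}) in INTERACTIONS]

print(check_prescription(["warfarin", "metformin"], "aspirin"))
# [('warfarin', 'increased bleeding risk')]
```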

Machines' capability to be implicit ethical agents doesn't demonstrate their ability to be full-fledged ethical agents. Nevertheless, it illustrates an important sense of machine ethics. Indeed, some would argue that software engineers must routinely consider machine ethics in at least this implicit sense during software development.

Explicit ethical agents

Can ethics exist explicitly in a machine?3 Can a machine represent ethical categories and perform analysis in the sense that a computer can represent and analyze inventory or tax information? Can a machine "do" ethics like a computer can play chess? Chess programs typically provide representations of the current board position, know which moves are legal, and can calculate a good next move. Can a machine represent ethics explicitly and then operate effectively on the basis of this knowledge? (For simplicity, I'm imagining the development of ethics in terms of traditional symbolic AI. However, I don't want to exclude the possibility that the machine's architecture is connectionist, with an explicit understanding of the ethics emerging from that. Compare Wendell Wallach, Colin Allen, and Iva Smit's different senses of "bottom up" and "top down."4)

Although clear examples of machines acting as explicit ethical agents are elusive, some current developments suggest interesting movements in that direction. Jeroen van den Hoven and Gert-Jan Lokhorst blended three kinds of advanced logic to serve as a bridge between ethics and a machine:

• deontic logic for statements of permission and obligation,
• epistemic logic for statements of beliefs and knowledge, and
• action logic for statements about actions.5

Together, these logics suggest that a formal apparatus exists that could describe ethical situations with sufficient precision to make ethical judgments by machine. For example, you could use a combination of these logics to state explicitly what action is allowed and what is forbidden in transferring personal information to protect privacy.6 In a hospital, for example, you'd program a computer to let some personnel access some information and to calculate which actions which person should take and who should be informed about those actions.
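Van den Hoven and Lokhorst work in formal logic; the sketch below is only a loose, illustrative translation of the hospital example into code, not their formalism. The role names, actions, and obligation table are invented.

```python
# Loosely inspired by the deontic/epistemic/action-logic idea, not an
# implementation of it: PERMITTED plays the deontic role (what a role
# MAY do), OBLIGATIONS says who must be informed after an action, and
# 'knows' tracks the epistemic side. All names here are invented.

PERMITTED = {
    "physician": {"read_record", "update_record"},
    "billing":   {"read_billing"},
}
OBLIGATIONS = {
    "update_record": ["patient", "attending_physician"],
}
knows = {}  # agent -> set of (action, record) facts they've been told

def perform(role, agent, action, record):
    """Carry out an action if permitted, then discharge its obligations."""
    if action not in PERMITTED.get(role, set()):
        raise PermissionError(f"{role} may not {action}")
    print(f"{agent}: {action} on {record}")
    for party in OBLIGATIONS.get(action, []):
        knows.setdefault(party, set()).add((action, record))
        print(f"  -> inform {party} of {action} on {record}")

perform("physician", "dr_smith", "update_record", "patient_42")
# perform("billing", "b_jones", "update_record", "p_42")  # PermissionError
```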
Michael Anderson, Susan Anderson, and Chris Armen implement two ethical theories.7 Their first model of an explicit ethical agent, Jeremy (named for Jeremy Bentham), implements Hedonistic Act Utilitarianism. Jeremy estimates the likelihood of pleasure or displeasure for persons affected by a particular act. The second model is W.D. (named for William D. Ross). Ross's theory emphasizes prima facie duties as opposed to absolute duties. Ross considers no duty absolute and gives no clear ranking of his various prima facie duties. So, it's unclear how to make ethical decisions under Ross's theory. Anderson, Anderson, and Armen's computer model overcomes this uncertainty. It uses a learning algorithm to adjust judgments of duty by taking into account both prima facie duties and past intuitions about similar or dissimilar cases involving those duties.
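Purely to fix ideas, here is a guess at the shape of such a W.D.-style agent: duty-satisfaction scores per action, combined with weights that a learning algorithm would tune from judged cases. The duties, scores, and hand-set weights below are invented, not Anderson, Anderson, and Armen's.

```python
# Shape of a W.D.-style agent: score each action against prima facie
# duties, then combine with weights. In the real system the weights
# would be tuned by a learning algorithm from past judged cases; here
# they are set by hand, and all numbers are invented for illustration.

DUTIES = ["fidelity", "beneficence", "nonmaleficence"]
weights = {"fidelity": 1.0, "beneficence": 1.0, "nonmaleficence": 2.0}

def weigh(scores):
    """Weighted sum of prima facie duty satisfaction for one action."""
    return sum(weights[d] * scores[d] for d in DUTIES)

def choose(options):
    """Pick the action whose duty profile weighs best."""
    return max(options, key=lambda name: weigh(options[name]))

options = {  # satisfaction scores, say in [-2, 2]
    "tell_patient_now": {"fidelity": 2, "beneficence": 0, "nonmaleficence": 0},
    "delay_disclosure": {"fidelity": -2, "beneficence": 1, "nonmaleficence": 1},
}
print(choose(options))  # 'tell_patient_now' (2.0 vs 1.0 under these weights)
```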

These examples are a good start toward creating explicit ethical agents, but more research is needed before a robust explicit ethical agent can exist in a machine. What would such an agent be like? Presumably, it would be able to make plausible ethical judgments and justify them. An explicit ethical agent that was autonomous in that it could handle real-life situations involving an unpredictable sequence of events would be most impressive.

James Gips suggested that the development of an ethical robot be a computing Grand Challenge.8 Perhaps DARPA could establish an explicit-ethical-agent project analogous to its autonomous-vehicle project (www.darpa.mil/grandchallenge/index.asp). As military and civilian robots become increasingly autonomous, they'll probably need ethical capabilities. Given this likely increase in robots' autonomy, the development of a machine that's an explicit ethical agent seems a fitting subject for a Grand Challenge.

Machines that are explicit ethical agents might be the best ethical agents to have in situations such as disaster relief. In a major disaster, such as Hurricane Katrina in New Orleans, humans often have difficulty tracking and processing information about who needs the most help and where they might find effective relief. Confronted with a complex problem requiring fast decisions, computers might be more competent than humans. (At least the question of a computer decision maker's competence is an empirical issue that might be decided in favor of the computer.) These decisions could be ethical in that they would determine who would live and who would die. Some might say that only humans should make such decisions, but if (and of course this is a big assumption) computer decision making could routinely save more lives in such situations than human decision making, we might have a good ethical basis for letting computers make the decisions.9

Full ethical agents

A full ethical agent can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent. We typically regard humans as having consciousness, intentionality, and free will. Can a machine be a full ethical agent? It's here that the debate about machine ethics becomes most heated. Many believe a bright line exists between the senses of machine ethics discussed so far and a full ethical agent. For them, a machine can't cross this line. The bright line marks a crucial ontological difference between humans and whatever machines might be in the future.

The bright-line argument can take one or both of two forms. The first is to argue that only full ethical agents can be ethical agents. To argue this is to regard the other senses of machine ethics as not really ethics involving agents. However, although these other senses are weaker, they can be useful in identifying more limited ethical agents. To ignore the ethical component of ethical-impact agents, implicit ethical agents, and explicit ethical agents is to ignore an important aspect of machines. What might bother some is that the ethics of the lesser ethical agents is derived from their human developers. However, this doesn't mean that you can't evaluate machines as ethical agents. Chess programs receive their chess knowledge and abilities from humans. Still, we regard them as chess players. The fact that lesser ethical agents lack humans' consciousness, intentionality, and free will is a basis for arguing that they shouldn't have broad ethical responsibility. But it doesn't establish that they aren't ethical in ways that are assessable or that they shouldn't have limited roles in functions for which they're appropriate.

The other form of bright-line argument is to argue that no machine can become a full ethical agent; that is, no machine can have consciousness, intentionality, and free will. This is metaphysically contentious, but the simple rebuttal is that we can't say with certainty that future machines will lack these features. Even John Searle, a major critic of strong AI, doesn't argue that machines can't possess these features.10 He only denies that computers, in their capacity as purely syntactic devices, can possess understanding. He doesn't claim that machines can't have understanding, presumably including an understanding of ethics. Indeed, for Searle, a materialist, humans are a kind of machine, just not a purely syntactic computer. Thus, both forms of the bright-line argument leave the possibility of machine ethics open. How much can be accomplished in machine ethics remains an empirical question.

We won't resolve the question of whether machines can become full ethical agents by philosophical argument or empirical research in the near future. We should therefore focus on developing limited explicit ethical agents. Although they would fall short of being full ethical agents, they could help prevent unethical outcomes.

I can offer at least three reasons why it's important to work on machine ethics in the sense of developing explicit ethical agents:

• Ethics is important. We want machines to treat us well.
• Because machines are becoming more sophisticated and make our lives more enjoyable, future machines will likely have increased control and autonomy to do this. More powerful machines need more powerful machine ethics.
• Programming or teaching a machine to act ethically will help us better understand ethics.

The importance of machine ethics is clear. But, realistically, how possible is it? I also offer three reasons why we can't be too optimistic about our ability to develop machines to be explicit ethical agents.

First, we have a limited understanding of what a proper ethical theory is. Not only do people disagree on the subject, but individuals can also have conflicting ethical intuitions and beliefs. Programming a computer to be ethical is much more difficult than programming a computer to play world-champion chess, an accomplishment that took 40 years. Chess is a simple domain with well-defined legal moves. Ethics operates in a complex domain with some ill-defined legal moves.

Second, we need to understand learning better than we do now. We've had significant successes in machine learning, but we're still far from having the child machine that Turing envisioned.

Third, inadequately understood ethical theory and learning might be easier problems to solve than computers' absence of common sense and world knowledge. The deepest problems in developing machine ethics will likely be epistemological as much as ethical. For example, you might program a machine with the classical imperative of physicians and Asimovian robots: First, do no harm. But this wouldn't be helpful unless the machine could understand what constitutes harm in the real world. This isn't to suggest that we shouldn't vigorously pursue machine ethics. On the contrary, given its nature, importance, and difficulty, we should dedicate much more effort to making progress in this domain.
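The third point can be made painfully concrete. A hypothetical "first, do no harm" gate is a one-line policy wrapped around a predicate nobody knows how to write; the sketch below deliberately stops where the epistemology does.

```python
def causes_harm(action) -> bool:
    # This predicate is the hard part: deciding what constitutes harm
    # in the open-ended real world is exactly the missing knowledge.
    raise NotImplementedError("no adequate model of real-world harm")

def act(action):
    """'First, do no harm' as code: vacuous until causes_harm exists."""
    return "refuse" if causes_harm(action) else "proceed"

# act("administer_drug")  # raises NotImplementedError, which is the point
```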
Acknowledgments
I'm indebted to many for helpful comments, particularly to Keith Miller, Vincent Wiegel, and this magazine's anonymous referees and editors.

The Author
James H. Moor is a professor in Dartmouth College's Department of Philosophy. His research interests include computing and ethics and the philosophy of mind. He received his PhD in history and the philosophy of science from Indiana University. He is editor in chief of Minds and Machines: Journal of Artificial Intelligence, Philosophy, and Cognitive Science and president of the International Society for Ethics and Information Technology. Contact him at the Dept. of Philosophy, Dartmouth College, Hanover, NH 03755; [email protected].

References

1. H. Simon, "Re: Dartmouth Seminar 1956" (email to J. Berleur), Herbert A. Simon Collection, Carnegie Mellon Univ. Archives, 20 Nov. 1999.
2. J. Lewis, "Robots of Arabia," Wired, vol. 13, no. 11, Nov. 2005, pp. 188–195; www.wired.com/wired/archive/13.11/camel.html?pg=1&topic=camel&topic_set=.
3. J.H. Moor, "Is Ethics Computable?" Metaphilosophy, vol. 26, nos. 1–2, 1995, pp. 1–21.
4. W. Wallach, C. Allen, and I. Smit, "Machine Morality: Bottom-Up and Top-Down Approaches for Modeling Human Moral Faculties," Machine Ethics, M. Anderson, S.L. Anderson, and C. Armen, eds., AAAI Press, 2005, pp. 94–102.
5. J. van den Hoven and G.-J. Lokhorst, "Deontic Logic and Computer-Supported Computer Ethics," Cyberphilosophy: The Intersection of Computing and Philosophy, J.H. Moor and T.W. Bynum, eds., Blackwell, 2002, pp. 280–289.
6. V. Wiegel, J. van den Hoven, and G.-J. Lokhorst, "Privacy, Deontic Epistemic Action Logic and Software Agents," Ethics of New Information Technology, Proc. 6th Int'l Conf. Computer Ethics: Philosophical Enquiry (CEPE 05), Center for Telematics and Information Technology, Univ. of Twente, 2005, pp. 419–434.
7. M. Anderson, S.L. Anderson, and C. Armen, "Towards Machine Ethics: Implementing Two Action-Based Ethical Theories," Machine Ethics, M. Anderson, S.L. Anderson, and C. Armen, eds., AAAI Press, 2005, pp. 1–7.
8. J. Gips, "Creating Ethical Robots: A Grand Challenge," presented at the AAAI Fall 2005 Symposium on Machine Ethics; www.cs.bc.edu/~gips/EthicalRobotsGrandChallenge.pdf.
9. J.H. Moor, "Are There Decisions Computers Should Never Make?" Nature and System, vol. 1, no. 4, 1979, pp. 217–229.
10. J.R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–457.
