
What has AI in Common with Philosophy?

John McCarthy
Computer Science Department, Stanford University, Stanford, CA 94305, U.S.A.
[email protected], http://www-formal.stanford.edu/jmc/

Abstract

AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not.

1 Introduction

Artificial intelligence and philosophy have more in common than a science usually has with the philosophy of that science. This is because human level artificial intelligence requires equipping a computer program with a philosophy. The program must have built into it a concept of what knowledge is and how it is obtained. If the program is to reason about what it can and cannot do, its designers will need an attitude to free will. If the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that.

Unfortunately, in none of these areas is there any philosophical attitude or system sufficiently well defined to provide the basis of a usable computer program.

Most AI work today does not require any philosophy, because the system being developed doesn't have to operate independently in the world and have a view of the world. The designer of the program does the philosophy in advance and builds a restricted representation into the program. Building a chess program requires no philosophy, and MYCIN got by without even having a notion of processes taking place in time. Designing robots that do what they think their owners want will involve their reasoning about wants.

Not all philosophical positions are compatible with what has to be built into intelligent programs. Here are some of the philosophical attitudes that seem to me to be required.

1. Science and common sense knowledge of the world must both be accepted. There are atoms, and there are chairs. We can learn features of the world at the intermediate size level on which humans operate without having to understand fundamental physics. Causal relations must also be used for a robot to reason about the consequences of its possible actions.

2. Mind has to be understood a feature at a time. There are systems with only a few beliefs and no belief that they have beliefs. Other systems will do extensive introspection. Contrast this with the attitude that unless a system has a whole raft of features it isn't a mind and therefore it can't have beliefs.

3. Beliefs and intentions are objects that can be formally described, as the sketch after this list illustrates.

4. It is legitimate to use approximate concepts not capable of iff definition. Common language does, and AI will have to.
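As a merely illustrative rendering of point 3, here is a minimal sketch in logical notation, assuming a reified Believes predicate whose second argument is a term denoting a proposition; the notation is mine, not the paper's:

\[
\mathit{Believes}(\mathit{Robot},\ \mathit{On}(\mathit{Block1},\mathit{Table}))
\]
\[
\exists p\,[\mathit{Believes}(\mathit{Robot},p) \land \lnot\mathit{Believes}(\mathit{Robot},\ \mathit{Believes}(\mathit{Robot},p))]
\]

The second formula describes the kind of system mentioned in point 2, one with beliefs but no beliefs about its beliefs, a fact expressible only once beliefs are objects that can be quantified over.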
2 The Philosophy of Artificial Intelligence

One can expect there to be an academic subject called the philosophy of artificial intelligence analogous to the existing fields of philosophy of physics and philosophy of biology. By analogy it will be a philosophical study of the research methods of AI and will propose to clarify philosophical problems raised. I suppose it will take up the methodological issues raised by Dreyfus and Searle, even the claim that intelligence requires that the system be made of meat.

Presumably some philosophers of AI will do battle with the idea that AI is impossible (Dreyfus), that it is immoral (Weizenbaum) and that the very concept is incoherent (Searle).

It is unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.

3 Epistemological Adequacy

Formalisms for representing facts about the world have to be adequate for representing the information actually available. A formalism that represented the state of the world by the positions and velocities of molecules is inadequate if the system can't observe positions and velocities, although it may be the best for deriving thermodynamic laws.

The common sense world needs a language to describe objects, their relations and their changes quite different from that used in physics and chemistry. The key difference is that the information is less complete. It needs to express what is actually known in a way that permits a robot to determine the expected consequences of the actions it contemplates.
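To make epistemological adequacy concrete, here is a minimal sketch, with invented predicate and constant names, of the kind of assertion such a formalism should support: a partial, observation-level description of a situation, in contrast to an unobtainable molecular microstate:

\[
\mathit{On}(\mathit{Cup},\mathit{Table},\mathit{S0}) \land \mathit{Empty}(\mathit{Cup},\mathit{S0})
\]

Nothing else about the situation S0 need be known; the formalism must tolerate this incompleteness rather than demand the positions and velocities of every molecule in the room.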

4 Free Will

An attitude toward the free will problem needs to be built into robots, one in which the robot can regard itself as having choices to make, i.e. as having free will.

5 Natural Kinds

Natural kinds are described rather than defined. We have learned about lemons and experienced them as small, yellow fruit. However, this knowledge does not permit an iff definition. Lemons differ from other fruit in ways we don't yet know about. There is no continuous gradation from lemons to oranges. On the other hand, geneticists could manage to breed large blue lemons by tinkering with the genes.
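One way to express "described rather than defined" formally, in a sketch of my own with illustrative predicates, is that knowledge about a natural kind yields at most one-way, defeasible implications, never the biconditional an iff definition would require:

\[
\forall x\,[\mathit{Lemon}(x) \rightarrow \mathit{Fruit}(x) \land \mathit{Yellow}(x) \land \mathit{Small}(x)]
\]

holds only as a default, while

\[
\forall x\,[\mathit{Lemon}(x) \leftrightarrow \mathit{Fruit}(x) \land \mathit{Yellow}(x) \land \mathit{Small}(x)]
\]

is unavailable: other small yellow fruit are not lemons, and the geneticists' large blue lemons show that even the forward implication is only a default.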

6 Four Stances

Daniel Dennett named three stances one can take towards an object or system. The first is the physical stance, in which the physical structure of the system is treated. The second is the intentional stance, in which the system is understood in terms of its beliefs, goals and intentions. The third is the design stance, in which the system is understood in terms of its composition out of parts. One more stance we'll call the functional stance. We take the functional stance toward an object when we ask what it does without regard to its physics or composition. The example I like to give is a motel alarm clock. The user may not notice whether it is mechanical, an electric motor timed by the power line, or electronic timed by a quartz crystal.[1] Each stance is appropriate in certain conditions.

7 Ontology and Reification

Quine wrote that one's ontology coincides with the ranges of the variables in one's formalism. This usage is entirely appropriate for AI. Present philosophers, Quine sometimes included, are often too stingy in the reifications they permit. It is sometimes necessary to quantify over beliefs, hopes and goals.

When programs interact with people or other programs they often perform speech acts in the sense studied by Austin and Searle. Quantification over promises, obligations, questions, answers to questions, offers, acceptances and declinations is required.
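As a hedged illustration of the reification this section calls for (the predicate and constant names are invented), once promises are objects in the ontology one can quantify over them:

\[
\forall p\,[\mathit{Promise}(p) \land \mathit{Maker}(p)=\mathit{Pat} \rightarrow \mathit{Fulfilled}(p)]
\]

A formalism whose variables range only over physical objects cannot even state that Pat keeps all his promises.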

8 Counterfactuals

An intelligent program will have to use counterfactual conditional sentences, but AI needs to concentrate on useful counterfactuals. An example is "If another car had come over the hill when you passed just now, there would have been a head-on collision." Believing this counterfactual might change one's driving habits, whereas the corresponding material conditional, obviously true in view of the false antecedent, could have no such effect. Counterfactuals permit systems to learn from experiences they don't actually have.

Unfortunately, the Stalnaker-Lewis closest possible world model of counterfactuals doesn't seem helpful in building programs that can formulate and use them.
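The contrast with the material conditional can be made explicit in standard notation (this gloss is mine, not the paper's). Let A stand for "another car came over the hill when you passed" and B for "there was a head-on collision". Then

\[
\lnot A \models A \supset B, \qquad \lnot A \not\models A \mathrel{\Box\!\!\rightarrow} B,
\]

where $\Box\!\!\rightarrow$ is Lewis's counterfactual conditional: the material conditional is vacuously true once the antecedent is false, so believing it carries no information, while the counterfactual remains a substantive claim that can rationally change one's driving habits.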

9 Philosophical Pitfalls

There is one philosophical view that is attractive to people doing AI but which limits what can be accomplished. This is logical positivism, which tempts AI people to make systems that describe the world in terms of relations between the program's motor actions and its subsequent observations. Particular situations are sometimes simple enough to admit such relations, but a system that only uses them will not even be able to represent facts about simple physical objects. It cannot have the capability of a two week old baby.

10 Philosophers! Help!

Philosophers could help artificial intelligence more than they have done if they would put some attention to more detailed conceptual problems such as the following:

how: What is the relation between naming an occurrence and its suboccurrences? He went to Boston. How? He drove to the airport, parked and took UA 84.

responsiveness: When is the answer to a question responsive? Thus "Vladimir's wife's husband's telephone number" is a true but not responsive answer to a request for Vladimir's telephone number.

useful counterfactuals: What counterfactuals are useful and why? "If another car had come over the hill when you passed, there would have been a head-on collision."

References

There is not space in this article, nor have I had the time, to prepare a proper bibliography. Such a bibliography would refer to a number of papers, some of mine being reprinted in my Formalizing Common Sense; many are available via my Web page http://www-formal.stanford.edu/jmc/. I would also refer to work by philosophers including Daniel Dennett, W. V. O. Quine, Paul Grice, John Searle, Robert Stalnaker and David Lewis. Much of the bibliography in Aaron Sloman's previous article is applicable to this one.

Acknowledgement: Work partly supported by ARPA (ONR) grant N00014-94-1-0775.

[1] I had called this the design stance, and I thank Aaron Sloman for pointing out my mistake and suggesting functional stance.
