<<

Source: Proceedings and Addresses of the American Philosophical Association, Vol. 64, No. 3 (Nov. 1990), pp. 21-37. Published by the American Philosophical Association.


IS THE BRAIN A DIGITAL COMPUTER?

John R. Searle
University of California, Berkeley

Presidential Address delivered before the Sixty-fourth Annual Pacific Division Meeting of the American Philosophical Association in Los Angeles, California, March 30, 1990.

I. Introduction, Strong AI, Weak AI and Cognitivism

There are different ways to present a Presidential Address to the APA; the one I have chosen is simply to report on work that I am doing right now, on work in progress. I am going to present some of my further explorations into the computational model of the mind. [1]

The basic idea of the computer model of the mind is that the mind is the program and the brain the hardware of a computational system. A slogan one often sees is "the mind is to the brain as the program is to the hardware." [2]

Let us begin our investigation of this claim by distinguishing three questions:

1. Is the brain a digital computer?
2. Is the mind a computer program?
3. Can the operations of the brain be simulated on a digital computer?

I will be addressing 1 and not 2 or 3. I think 2 can be decisively answered in the negative. Since programs are defined purely formally or syntactically and since minds have an intrinsic mental content, it follows immediately that the program by itself cannot constitute the mind. The formal syntax of the program does not by itself guarantee the presence of mental contents. I showed this a decade ago in the Chinese Room Argument (Searle, 1980). A computer, me for example, could run the steps of the program for some mental capacity, such as understanding Chinese, without understanding a word of Chinese. The argument rests on the simple logical truth that syntax is not the same as, nor is it by itself sufficient for, semantics. So the answer to the second question is obviously "No".

The answer to 3 seems to me equally obviously "Yes", at least on a natural interpretation. That is, naturally interpreted, the question means: Is there some description of the brain such that under that description you could do a computational simulation of the operations of the brain? But since according to Church's thesis, anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer. The operations of the brain can be simulated on a digital computer in the same sense in which weather systems, the behavior of the New York stock market or the pattern of airline flights over Latin America can. So our question is not, "Is the mind a program?" The answer to that is, "No". Nor is it, "Can the brain be simulated?" The answer to that is, "Yes". The question is, "Is the brain a digital computer?" And for purposes of this discussion I am taking that question as equivalent to: "Are brain processes computational?"

One might think that this question would lose much of its interest if question 2 receives a negative answer. That is, one might suppose that unless the mind is a program, there is no interest to the question whether the brain is a computer. But that is not really the case. Even for those who agree that programs by themselves are not constitutive of mental phenomena, there is still an important question: Granted that there is more to the mind than the syntactical operations of the digital computer; nonetheless, it might be the case that mental states are at least computational states and mental processes are computational processes operating over the formal structure of these mental states. This, in fact, seems to me the position taken by a fairly large number of people.

I am not saying that the view is fully clear, but the idea is something like this: At some level of description brain processes are syntactical; there are, so to speak, "sentences in the head". These need not be sentences in English or Chinese, but perhaps in the "Language of Thought" (Fodor, 1975). Now, like any sentences, they have a syntactical structure and a semantics or meaning, and the problem of syntax can be separated from the problem of semantics. The problem of semantics is: How do these sentences in the head get their meanings? But that question can be discussed independently of the question: How does the brain work in processing these sentences? A typical answer to that latter question is: The brain works as a digital computer performing computational operations over the syntactical structure of sentences in the head.

Just to keep the terminology straight, I call the view that all there is to having a mind is having a program, Strong AI, the view that brain processes (and mental processes) can be simulated computationally, Weak AI, and the view that the brain is a digital computer, Cognitivism.

This paper is about Cognitivism, and I had better say at the beginning what motivates it. If you read books about the brain (say Shepherd (1983) or Kuffler and Nicholls (1976)) you get a certain picture of what is going on in the brain. If you then turn to books about computation (say Boolos and Jeffrey, 1989) you get a picture of the logical structure of the theory of computation. If you then turn to books about cognitive science (say Pylyshyn, 1985) they tell you that what the brain books describe is really the same as what the computability books were describing. Philosophically speaking, this does not smell right to me and I have learned, at least at the beginning of an investigation, to follow my sense of smell.

II. The Primal Story

I want to begin the discussion by trying to state as strongly as I can why Cognitivism has seemed intuitively appealing. There is a story about the relation of human intelligence to computation that goes back at least to Turing's classic paper (1950), and I believe it is the foundation of the Cognitivist view. I will call it the Primal Story:

We begin with two results in mathematical logic, the Church-Turing thesis (sometimes called Church's thesis) and Turing's theorem. For our purposes, the Church-Turing thesis states that for any algorithm there is some Turing machine that can implement that algorithm. Turing's theorem says that there is a Universal Turing Machine which can simulate any Turing Machine. Now if we put these two together we have the result that a Universal Turing Machine can implement any algorithm whatever.

But now, what made this result so exciting? What made it send shivers up and down the spines of a whole generation of young workers in artificial intelligence is the following thought: Suppose the brain is a Universal Turing Machine.

Well, are there any good reasons for supposing the brain might be a Universal Turing Machine? Let us continue with the Primal Story.

It is clear that at least some human mental abilities are algorithmic. For example, I can consciously do long division by going through the steps of an algorithm for solving long division problems. It is furthermore a consequence of the Church-Turing thesis and Turing's theorem that anything a human can do algorithmically can be done on a Universal Turing Machine. I can implement, for example, the very same algorithm that I use for long division on a digital computer. In such a case, as described by Turing (1950), both I, the human computer, and the mechanical computer are implementing the same algorithm; I am doing it consciously, the mechanical computer nonconsciously. Now it seems reasonable to suppose there might also be a whole lot of mental processes going on in my brain nonconsciously which are also computational. And if so, we could find out how the brain works by simulating these very processes on a digital computer. Just as we got a computer simulation of the processes for doing long division, so we could get a computer simulation of the processes for understanding language, visual perception, categorization, etc.
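To make the idea of one and the same algorithm being executed by both the human computer and the mechanical computer concrete, here is a minimal sketch in Python of a schoolbook long-division procedure. The particular procedure and the function names are illustrative assumptions of mine, not anything in Searle's text; the point is only that each step is the kind of step a person could carry out consciously with pencil and paper.

```python
def long_division(dividend, divisor):
    """Schoolbook long division, carried out one explicit step at a time,
    the way a human computer working with pencil and paper would do it."""
    if divisor == 0:
        raise ValueError("division by zero")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):        # bring down the next digit
        remainder = remainder * 10 + int(digit)
        q = 0
        while remainder >= divisor:    # repeated subtraction finds this digit
            remainder -= divisor
            q += 1
        quotient_digits.append(str(q))
    return int("".join(quotient_digits)), remainder

# The mechanical computer runs the same steps the human runs consciously.
print(long_division(1987, 17))   # (116, 15), i.e. 1987 = 17 * 116 + 15
```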

"Butwhat about semantics? After all, programs are purely syntactical." Here another set oflogico-mathematical results comes into play in thePrimal Story.

The development of proof theory showed that within certain well known limits the semantic relations between propositions can be entirely mirrored by the syntactic relations between the sentences that express those propositions. Now suppose that mental contents in the head are expressed syntactically in the head; then all we would need to account for mental processes would be computational processes between the syntactical elements in the head. If we get the proof theory right the semantics will take care of itself; and that is what computers do: they implement the proof theory.

We thus have a well defined research program. We try to discover the programs being implemented in the brain by programming computers to implement the same programs. We do this in turn by getting the mechanical computer to match the performance of the human computer (i.e. to pass the Turing Test) and then getting the psychologists to look for evidence that the internal processes are the same in the two types of computer.

Now in what follows I would like the reader to keep this Primal Story in mind: notice especially Turing's contrast between the conscious implementation of the program by the human computer and the nonconscious implementation of programs, whether by the brain or by the mechanical computer; notice furthermore the idea that we might just discover programs running in nature, the very same programs that we put into our mechanical computers.

If one looks at the books and articles supporting Cognitivism one finds certain common assumptions, often unstated, but nonetheless pervasive. First, it is often assumed that the only alternative to the view that the brain is a digital computer is some form of dualism. The idea is that unless you believe in the existence of immortal Cartesian souls, you must believe that the brain is a computer. Indeed, it often seems to be assumed that the question whether the brain is a physical mechanism determining our mental states and the question whether the brain is a digital computer are the same question. Rhetorically speaking, the idea is to bully the reader into thinking that unless he accepts the idea that the brain is some kind of computer, he is committed to some weird antiscientific views. Recently the field has opened up a bit to allow that the brain might not be an old fashioned von Neumann style digital computer, but rather a more sophisticated kind of parallel processing computational equipment. Still, to deny that the brain is computational is to risk losing your membership in the scientific community.

Second, it is also assumed that the question whether brain processes are computational is just a plain empirical question. It is to be settled by factual investigation in the same way that such questions as whether the heart is a pump or whether green leaves do photosynthesis were settled as matters of fact. There is no room for logic chopping or conceptual analysis, since we are talking about matters of hard scientific fact. Indeed I think many people who work in this field would doubt that the title of this paper poses an appropriate philosophic question at all. "Is the brain really a digital computer?" is no more a philosophical question than "Is the neurotransmitter at neuro-muscular junctions really acetylcholine?" Even people who are unsympathetic to Cognitivism, such as Penrose and Dreyfus, seem to treat it as a straightforward factual issue. They do not seem to be worried about the question what sort of claim it might be that they are doubting. But I am puzzled by the question: What sort of fact about the brain could constitute its being a computer?

Third, another stylistic feature of this literature is the haste and sometimes even carelessness with which the foundational questions are glossed over. What exactly are the anatomical and physiological features of brains that are being discussed? What exactly is a digital computer? And how are the answers to these two questions supposed to connect?

The usual procedure in these books and articles is to make a few remarks about 0's and 1's, give a popular summary of the Church-Turing thesis, and then get on with the more exciting things such as computer achievements and failures. To my surprise in reading this literature I have found that there seems to be a peculiar philosophical hiatus. On the one hand, we have a very elegant set of mathematical results ranging from Turing's theorem to Church's thesis to recursive function theory. On the other hand, we have an impressive set of electronic devices which we use every day. Since we have such advanced mathematics and such good electronics, we assume that somehow somebody must have done the basic philosophical work of connecting the mathematics to the electronics. But as far as I can tell that is not the case. On the contrary, we are in a peculiar situation where there is little theoretical agreement among the practitioners on such absolutely fundamental questions as, What exactly is a digital computer? What exactly is a symbol? What exactly is a computational process? Under what physical conditions exactly are two systems implementing the same program?

III. The Definition of Computation

Since there is no universal agreement on the fundamental questions, I believe it is best to go back to the sources, back to the original definitions given by Alan Turing.

According to Turing, a Turing machine can carry out certain elementary operations: It can rewrite a 0 on its tape as a 1, it can rewrite a 1 on its tape as a 0, it can shift the tape 1 square to the left, or it can shift the tape 1 square to the right. It is controlled by a program of instructions and each instruction specifies a condition and an action to be carried out if the condition is satisfied.

That is the standard definition of computation, but, taken literally, it is at least a bit misleading. If you open up your home computer you are most unlikely to find any 0's and 1's or even a tape. But this does not really matter for the definition. To find out if an object is really a digital computer, it turns out that we do not actually have to look for 0's and 1's, etc.; rather we just have to look for something that we could treat as or count as or could be used to function as 0's and 1's. Furthermore, to make the matter more puzzling, it turns out that this machine could be made out of just about anything. As Johnson-Laird says, "It could be made out of cogs and levers like an old fashioned mechanical calculator; it could be made out of a hydraulic system through which water flows; it could be made out of transistors etched into a silicon chip through which an electrical current flows. It could even be carried out by the brain. Each of these uses a different medium to represent binary symbols -- the positions of cogs, the presence or absence of water, the level of the voltage and perhaps nerve impulses" (Johnson-Laird, 1988, p. 39).

Similar remarks are made by most of the people who write on this topic. For example, Ned Block (Block, 1990) shows how we can have electrical gates where the 1's and 0's are assigned to voltage levels of 4 volts and 7 volts respectively. So we might think that we should go and look for voltage levels. But Block tells us that 1 is only "conventionally" assigned to a certain voltage level. The situation grows more puzzling when he informs us further that we did not need to use electricity at all but we could have used an elaborate system of cats and mice and cheese and make our gates in such a way that the cat will strain at the leash and pull open a gate which we can also treat as if it were a 0 or 1. The point, as Block is anxious to insist, is "the irrelevance of hardware realization to computational description. These gates work in different ways but they are nonetheless computationally equivalent" (p. 260). In the same vein, Pylyshyn says that a computational sequence could be realized by "a group of pigeons trained to peck as a Turing machine!" (Pylyshyn, 1985, p. 57)

But now if we are trying to take seriously the idea that the brain is a digital computer, we get the uncomfortable result that we could make a system that does just what the brain does out of pretty much anything. Computationally speaking, on this view, you can make a "brain" that functions just like yours and mine out of cats and mice and cheese or levers or water pipes or pigeons or anything else provided the two systems are, in Block's sense, "computationally equivalent". You would just need an awful lot of cats, or pigeons or water pipes, or whatever it might be. The proponents of Cognitivism report this result with sheer and unconcealed delight. But I think they ought to be worried about it, and I am going to try to show that it is just the tip of a whole iceberg of problems.
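Turing's definition, as paraphrased above, can be put in a few lines of code. The following is a minimal, illustrative Turing-machine interpreter of my own devising; the instruction-table format and the toy "flip every bit" program are assumptions made for the sake of the example, not anything drawn from Turing's or Searle's text.

```python
def run_turing_machine(program, tape, state="start", blank="B", steps=10_000):
    """A bare-bones Turing machine. `program` maps a condition
    (state, symbol read) to an action (symbol to write, move L/R, next state)."""
    cells = dict(enumerate(tape))           # the tape, addressed by square
    head = 0
    for _ in range(steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in program:  # no instruction applies: halt
            break
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# A toy instruction table: scan right, rewriting every 0 as 1 and every 1 as 0,
# and stop at the first blank square.
flip = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}

print(run_turing_machine(flip, [1, 0, 1, 1, 0]))   # [0, 1, 0, 0, 1]
```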

IV. First Difficulty: Syntax is not Intrinsic to Physics

Why are the defenders of computationalism not worried by the implications of multiple realizability? The answer is that they think it is typical of functional accounts that the same function admits of multiple realizations. In this respect, computers are just like carburetors and thermostats. Just as carburetors can be made of brass or steel, so computers can be made of an indefinite range of hardware materials.

But there is a difference: The classes of carburetors and thermostats are defined in terms of the production of certain physical effects. That is why, for example, nobody says you can make carburetors out of pigeons. But the class of computers is defined syntactically in terms of the assignment of 0's and 1's. The multiple realizability is a consequence not of the fact that the same physical effect can be achieved in different physical substances, but that the relevant properties are purely syntactical. The physics is irrelevant except insofar as it admits of the assignments of 0's and 1's and of state transitions between them.

But this has two consequences which might be disastrous:

1. The same principle that implies multiple realizability would seem to imply universal realizability. If computation is defined in terms of the assignment of syntax then everything would be a digital computer, because any object whatever could have syntactical ascriptions made to it. You could describe anything in terms of 0's and 1's.

2. Worse yet, syntax is not intrinsic to physics. The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical.

Now why exactly would these consequences be disastrous?

Well, we wanted to know how the brain works, specifically how it produces mental phenomena. And it would not answer that question to be told that the brain is a digital computer in the sense in which stomach, liver, heart, solar system, and the state of Kansas are all digital computers. The model we had was that we might discover some fact about the operation of the brain which would show that it is a computer. We wanted to know if there was not some sense in which brains were intrinsically digital computers in a way that green leaves intrinsically perform photosynthesis or hearts intrinsically pump blood. It is not a matter of us arbitrarily or "conventionally" assigning the word "pump" to hearts or "photosynthesis" to leaves. There is an actual fact of the matter. And what we were asking is, "Is there in that way a fact of the matter about brains that would make them digital computers?" It does not answer that question to be told, yes, brains are digital computers because everything is a digital computer.

On the standard textbook definition of computation,

1. For any object there is some description of that object such that under that description the object is a digital computer.

2. For any program there is some sufficiently complex object such that there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, then if it is a big enough wall it is implementing any program, including any program implemented in the brain.
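As a toy sketch, entirely my own and not anything in the text, of the kind of post hoc assignment that makes claim 2 come out true on the purely syntactical definition: any sequence of distinct "physical" readings can simply be declared, by the observer, to encode the state sequence of some program.

```python
# A toy illustration of observer-relative "implementation": the syntax is
# assigned to the physical readings, not discovered in them.
import random

def program_trace(n):
    """State sequence of a tiny program: a 3-bit counter."""
    return [format(i % 8, "03b") for i in range(n)]

def physical_trace(n):
    """Stand-in for successive 'states' of some arbitrary physical system
    (a wall, a bucket of water, whatever): here just distinct random readings."""
    return random.sample(range(10 * n), n)

steps = 8
program = program_trace(steps)
wall = physical_trace(steps)

# The 'implementation' is nothing but a mapping chosen by the observer:
# each physical reading is declared to encode the program state that
# occurs at the same moment.
interpretation = dict(zip(wall, program))
decoded = [interpretation[reading] for reading in wall]
print(decoded == program)   # True: under this assigned syntax, the 'wall'
                            # runs the counter program step for step.
```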

I think the main reason that the proponents do not see that multiple or universal realizability is a problem is that they do not see it as a consequence of a much deeper point, namely that "syntax" is not the name of a physical feature, like mass or gravity. On the contrary they talk of "syntactical engines" and even "semantic engines" as if such talk were like that of gasoline engines or diesel engines, as if it could be just a plain matter of fact that the brain or anything else is a syntactical engine.

I think it is probably possible to block the result of universal realizability by tightening up our definition of computation. Certainly we ought to respect the fact that programmers and engineers regard it as a quirk of Turing's original definitions and not as a real feature of computation. Unpublished works by Brian Smith, Vinod Goel, and John Batali all suggest that a more realistic definition of computation will emphasize such features as the causal relations among program states, programmability and controllability of the mechanism, and situatedness in the real world. But these further restrictions on the definition of computation are no help in the present discussion because the really deep problem is that syntax is essentially an observer relative notion. The multiple realizability of computationally equivalent processes in different physical media was not just a sign that the processes were abstract, but that they were not intrinsic to the system at all. They depended on an interpretation from outside. We were looking for some facts of the matter which would make brain processes computational; but given the way we have defined computation, there never could be any such facts of the matter. We can't, on the one hand, say that anything is a digital computer if we can assign a syntax to it, and then suppose there is a factual question intrinsic to its physical operation whether or not a natural system such as the brain is a digital computer.

And if the word "syntax" seems puzzling, the same point can be stated without it. That is, someone might claim that the notions of "syntax" and "symbols" are just a manner of speaking and that what we are really interested in is the existence of systems with discrete physical phenomena and state transitions between them. On this view we don't really need 0's and 1's; they are just a convenient shorthand. But, I believe, this move is no help. A physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation. The same problem arises without 0's and 1's because notions such as computation, algorithm and program do not name intrinsic physical features of systems. Computational states are not discovered within the physics, they are assigned to the physics.

This is a different argument from the Chinese Room Argument and I should have seen it ten years ago but I did not. The Chinese Room Argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. For the purposes of the original argument I was simply assuming that the syntactical characterization of the computer was unproblematic. But that is a mistake.

There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system. As applied to the Language of Thought hypothesis, this has the consequence that the thesis is incoherent. There is no way you could discover that there are, intrinsically, unknown sentences in your head because something is a sentence only relative to some agent or user who uses it as a sentence. As applied to the computational model generally, the characterization of a process as computational is a characterization of a physical system from outside; and the identification of the process as computational does not identify an intrinsic feature of the physics, it is essentially an observer relative characterization.

This point has to be understood precisely. I am not saying there are a priori limits on the patterns we could discover in nature. We could no doubt discover a pattern of events in my brain that was isomorphic to the implementation of the vi program on this computer. But to say that something is functioning as a computational process is to say something more than that a pattern of physical events is occurring. It requires the assignment of a computational interpretation by some agent. Analogously, we might discover in nature objects which had the same sort of shape as chairs and which could therefore be used as chairs; but we could not discover objects in nature which were functioning as chairs, except relative to some agents who regarded them or used them as chairs.

V. Second Difficulty: The Homunculus Fallacy is Endemic to Cognitivism

So far, we seem to have arrived at a problem. Syntax is not part of physics. This has the consequence that if computation is defined syntactically then nothing is intrinsically a digital computer solely in virtue of its physical properties. Is there any way out of this problem? Yes, there is, and it is a way standardly taken in cognitive science, but it is out of the frying pan and into the fire. Most of the works I have seen in the computational theory of the mind commit some variation on the homunculus fallacy. The idea always is to treat the brain as if there were some agent inside it using it to compute with. A typical case is David Marr (1982) who describes the task of vision as proceeding from a two-dimensional visual array on the retina to a three-dimensional description of the external world as output of the visual system. The difficulty is: Who is reading the description? Indeed, it looks throughout Marr's book, and in other standard works on the subject, as if we have to invoke a homunculus inside the system in order to treat its operations as genuinely computational.

Many writers feel that the homunculus fallacy is not really a problem, because, with Dennett (1978), they feel that the homunculus can be "discharged". The idea is this: Since the computational operations of the computer can be analyzed into progressively simpler units, until eventually we reach simple flip-flop, "yes-no", "1-0" patterns, it seems that the higher-level homunculi can be discharged with progressively stupider homunculi, until finally we reach the bottom level of a simple flip-flop that involves no real homunculus at all. The idea, in short, is that recursive decomposition will eliminate the homunculi.

It took me a long time to figure out what these people were driving at, so in case someone else is similarly puzzled I will explain an example in detail: Suppose that we have a computer that multiplies six times eight to get forty-eight. Now we ask "How does it do it?" Well, the answer might be that it adds six to itself seven times. [3] But if you ask "How does it add six to itself seven times?", the answer might be that, first, it converts all of the numerals into binary notation, and second, it applies a simple algorithm for operating on binary notation until finally we reach the bottom level at which the only instructions are of the form, "Print a zero, erase a one." So, for example, at the top level our intelligent homunculus says "I know how to multiply six times eight to get forty-eight". But at the next lower level he is replaced by a stupider homunculus who says "I do not actually know how to do multiplication, but I can do addition." Below him are some stupider ones who say "We do not actually know how to do addition or multiplication, but we know how to convert decimal to binary." Below these are stupider ones who say "We do not know anything about any of this stuff, but we know how to operate on binary symbols." At the bottom level are a whole bunch of homunculi who just say "Zero one, zero one". All of the higher levels reduce to this bottom level. Only the bottom level really exists; the top levels are all just as-if.

Various authors (e.g. Haugeland (1981), Block (1990)) describe this feature when they say that the system is a syntactical engine driving a semantic engine. But we still must face the question we had before: What facts intrinsic to the system make it syntactical? What facts about the bottom level or any other level make these operations into zeros and ones? Without a homunculus that stands outside the recursive decomposition, we do not even have a syntax to operate with.
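Here is a minimal sketch, in code of my own devising rather than anything from the text, of the recursive decomposition just described: a top layer that only knows that multiplying is repeated addition, a middle layer that only knows how to convert decimal to binary and hand the work down, and a bottom layer that only manipulates strings of 0's and 1's.

```python
def bottom_level_add(a_bits, b_bits):
    """Lowest layer: a ripple-carry adder over strings of '0' and '1'."""
    a_bits, b_bits = a_bits.zfill(len(b_bits)), b_bits.zfill(len(a_bits))
    carry, out = 0, []
    for x, y in zip(reversed(a_bits), reversed(b_bits)):
        total = int(x) + int(y) + carry
        out.append(str(total % 2))
        carry = total // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

def middle_level_add(m, n):
    """Middle layer: 'I only know how to convert to binary and pass the
    bits to the layer below.'"""
    return int(bottom_level_add(format(m, "b"), format(n, "b")), 2)

def top_level_multiply(m, n):
    """Top layer: 'I only know that multiplying is adding m to itself
    n - 1 times.'"""
    result = m
    for _ in range(n - 1):
        result = middle_level_add(result, m)
    return result

print(top_level_multiply(6, 8))   # 48
```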
The attempt to eliminate the homunculus fallacy through recursive decomposition fails, because the only way to get the syntax intrinsic to the physics is to put a homunculus in the physics.

There is a fascinating feature to all of this. Cognitivists cheerfully concede that the higher levels of computation, e.g. "multiply 6 times 8", are observer relative; there is nothing really there that corresponds directly to multiplication; it is all in the eye of the homunculus/beholder. But they want to stop this concession at the lower levels. The electronic circuit, they admit, does not really multiply 6 x 8 as such, but it really does manipulate 0's and 1's and these manipulations, so to speak, add up to multiplication. But to concede that the higher levels of computation are not intrinsic to the physics is already to concede that the lower levels are not intrinsic either. So the homunculus fallacy is still with us.

For real computers of the kind you buy in the store, there is no homunculus problem; each user is the homunculus in question. But if we are to suppose that the brain is a digital computer, we are still faced with the question "And who is the user?" Typical homunculus questions in cognitive science are such as the following: "How does the visual system compute shape from shading; how does it compute object distance from size of retinal image?" A parallel question would be, "How do nails compute the distance they are to travel in the board from the impact of the hammer and the density of the wood?" And the answer is the same in both sorts of case: If we are talking about how the system works intrinsically, neither nails nor visual systems compute anything. We as outside homunculi might describe them computationally, and it is often useful to do so. But you do not understand hammering by supposing that nails are somehow intrinsically implementing hammering algorithms and you do not understand vision by supposing the system is implementing, e.g., the shape from shading algorithm.

VI. Third Difficulty: Syntax Has No Causal Powers

Certain sorts of explanations in the natural sciences specify mechanisms which function causally in the production of the phenomena to be explained. This is especially common in the biological sciences. Think of the germ theory of disease, the account of photosynthesis, the DNA theory of inherited traits, and even the Darwinian theory of natural selection. In each case a causal mechanism is specified, and in each case the specification gives an explanation of the output of the mechanism. Now if you go back and look at the Primal Story it seems clear that this is the sort of explanation promised by Cognitivism. The mechanisms by which brain processes produce cognition are supposed to be computational, and by specifying the programs we will have specified the causes of cognition. One beauty of this research program, often remarked, is that we do not need to know the details of brain functioning in order to explain cognition. Brain processes provide only the hardware implementation of the cognitive programs, but the program level is where the real cognitive explanations are given. On the standard account, as stated by Newell for example, there are three levels of explanation, hardware, program, and intentionality (Newell calls this last level the knowledge level), and the special contribution of cognitive science is made at the program level.

But if what I have said so far is correct, then there is something fishy about this whole project. I used to believe that as a causal account the cognitivist's theory was at least false; but I now am having difficulty formulating a version of it that is coherent even to the point where it could be an empirical thesis at all. The thesis is that there is a whole lot of symbols being manipulated in the brain, 0's and 1's flashing through the brain at lightning speed and invisible not only to the naked eye but even to the most powerful electron microscope, and it is these which cause cognition. But the difficulty is that the 0's and 1's as such have no causal powers at all because they do not even exist except in the eyes of the beholder. The implemented program has no causal powers other than those of the implementing medium because the program has no real existence, no ontology, beyond that of the implementing medium. Physically speaking there is no such thing as a separate "program level".

You can see this if you go back to the Primal Story and remind yourself of the difference between the mechanical computer and Turing's human computer. In Turing's human computer there really is a program level intrinsic to the system and it is functioning causally at that level to convert input to output. This is because the human is consciously following the rules for doing a certain computation, and this causally explains his performance. But when we program the mechanical computer to perform the same computation, the assignment of a computational interpretation is now relative to us, the outside homunculi. And there is no longer a level of intentional causation intrinsic to the system. The human computer is consciously following rules, and this fact explains his behavior, but the mechanical computer is not literally following any rules at all. It is designed to behave exactly as if it were following rules, and so for practical, commercial purposes it does not matter. Now Cognitivism tells us that the brain functions like the commercial computer and this causes cognition. But without a homunculus, both commercial computer and brain have only patterns and the patterns have no causal powers in addition to those of the implementing media. So it seems there is no way Cognitivism could give a causal account of cognition.

However there is a puzzle for my view. Anyone who works with computers even casually knows that we often do in fact give causal explanations that appeal to the program. For example, we can say that when I hit this key I got such and such results because the machine is implementing the vi program and not the emacs program; and this looks like an ordinary causal explanation. So the puzzle is, how do we reconcile the fact that syntax, as such, has no causal powers with the fact that we do give causal explanations that appeal to programs? And, more pressingly, would these sorts of explanations provide an appropriate model for Cognitivism, will they rescue Cognitivism? Could we for example rescue the analogy with thermostats by pointing out that just as the notion "thermostat" figures in causal explanations independently of any reference to the physics of its implementation, so the notion "program" might be explanatory while equally independent of the physics?

To explore this puzzle let us try to make the case for Cognitivism by extending the Primal Story to show how the Cognitivist investigative procedures work in actual research practice. The idea, typically, is to program a commercial computer so that it simulates some cognitive capacity, such as vision or language. Then, if we get a good simulation, one that gives us at least Turing equivalence, we hypothesize that the brain computer is running the same program as the commercial computer, and to test the hypothesis we look for indirect psychological evidence, such as reaction times. So it seems that we can causally explain the behavior of the brain computer by citing the program in exactly the same sense in which we can explain the behavior of the commercial computer. Now what is wrong with that? Doesn't it sound like a perfectly legitimate scientific research program? We know that the commercial computer's conversion of input to output is explained by a program, and in the brain we discover the same program, hence we have a causal explanation.

Two things ought to worry us immediately about this project. First, we would never accept this mode of explanation for any function of the brain where we actually understood how it worked at the neurobiological level. Second, we would not accept it for other sorts of system that we can simulate computationally. To illustrate the first point, consider for example the famous account of "What the Frog's Eye Tells the Frog's Brain" (Lettvin, et al. 1959 in McCulloch, 1965). The account is given entirely in terms of the anatomy and physiology of the frog's nervous system. A typical passage, chosen at random, goes like this:

1. Sustained Contrast Detectors

An unmyelinated axon of this group does not respond when the general illumination is turned on or off. If the sharp edge of an object either lighter or darker than the background moves into its field and stops, it discharges promptly and continues discharging, no matter what the shape of the edge or whether the object is smaller or larger than the receptive field. (p. 239)

I have never heard anyone say that all this is just the hardware implementation, and that they should have figured out which program the frog was implementing. I do not doubt that you could do a computer simulation of the frog's "bug detectors". Perhaps someone has done it. But we all know that once you understand how the frog's visual system actually works, the "computational level" is just irrelevant.

To illustrate the second point, consider simulations of other sorts of systems. I am, for example, typing these words on a machine that simulates the behavior of an old fashioned mechanical typewriter. [4] As simulations go, the word processing program simulates a typewriter better than any AI program I know of simulates the brain. But no sane person thinks: "At long last we understand how typewriters work; they are implementations of word processing programs." It is simply not the case in general that computational simulations provide causal explanations of the phenomena simulated.

So what is going on? We do not in general suppose that computational simulations of brain processes give us any explanations in place of or in addition to neurobiological accounts of how the brain actually works. And we do not in general take "X is a computational simulation of Y" to name a symmetrical relation. That is, we do not suppose that because the computer simulates a typewriter, the typewriter therefore simulates a computer. We do not suppose that because a weather program simulates a hurricane, the causal explanation of the behavior of the hurricane is provided by the program. So why should we make an exception to these principles where unknown brain processes are concerned? Are there any good grounds for making the exception? And what kind of a causal explanation is an explanation that cites a formal program?

Here, I believe, is the solution to our puzzle. Once you remove the homunculus from the system, you are left only with a pattern of events to which someone from the outside could attach a computational interpretation. Now the only sense in which the specification of the pattern by itself provides a causal explanation is that if you know that a certain pattern exists in a system you know that some cause or other is responsible for the pattern. So you can, for example, predict later stages from earlier stages. Furthermore, if you already know that the system has been programmed by an outside homunculus, you can give explanations that make reference to the intentionality of the homunculus. You can say, e.g., this machine behaves the way it does because it is running vi. That is like explaining that this book begins with a bit about happy families and does not contain any long passages about a bunch of brothers, because it is Tolstoy's Anna Karenina and not Dostoevsky's The Brothers Karamazov. But you cannot explain a physical system such as a typewriter or a brain by identifying a pattern which it shares with its computational simulation, because the existence of the pattern does not explain how the system actually works as a physical system. In the case of cognition the pattern is at much too high a level of abstraction to explain such concrete mental (and therefore physical) events as the occurrence of a visual perception or the understanding of a sentence.

Now, I think it is obvious that we cannot explain how typewriters and hurricanes work by pointing to formal patterns they share with their computational simulations. Why is it not obvious in the case of the brain? Here we come to the second part of our solution to the puzzle.
In making the case for Cognitivism we were tacitly supposing that the brain might be implementing algorithms for cognition, in the same sense that Turing's human computer and his mechanical computer implement algorithms. But it is precisely that assumption which we have seen to be mistaken. To see this, ask yourself what happens when a system implements an algorithm. In the human computer the system consciously goes through the steps of the algorithm, so the process is both causal and logical; logical, because the algorithm provides a set of rules for deriving the output symbols from the input symbols; causal, because the agent is making a conscious effort to go through the steps. Similarly in the case of the mechanical computer, the whole system includes an outside homunculus, and with the homunculus the system is both causal and logical; logical, because the homunculus provides an interpretation to the processes of the machine; and causal, because the hardware of the machine causes it to go through the processes. But these conditions cannot be met by the brute, blind nonconscious neurophysiological operations of the brain. In the brain computer there is no conscious intentional implementation of the algorithm as there is in the human computer, but there can't be any nonconscious implementation as there is in the mechanical computer either, because that requires an outside homunculus to attach a computational interpretation to the physical events. The most we could find in the brain is a pattern of events which is formally similar to the implemented program in the mechanical computer, but that pattern, as such, has no causal powers to call its own and hence explains nothing.

In sum, the fact that the attribution of syntax identifies no further causal powers is fatal to the claim that programs provide causal explanations of cognition. To explore the consequences of this, let us remind ourselves of what Cognitivist explanations actually look like. Explanations such as Chomsky's account of the syntax of natural languages or Marr's account of vision proceed by stating a set of rules according to which a symbolic input is converted into a symbolic output. In Chomsky's case, for example, a single input symbol, S, is converted into any one of a potentially infinite number of sentences by the repeated application of a set of syntactical rules. In Marr's case, representations of a two dimensional visual array are converted into three dimensional "descriptions" of the world in accordance with certain algorithms. Marr's tripartite distinction between the computational task, the algorithmic solution of the task and the hardware implementation of the algorithm has (like Newell's distinctions) become famous as a statement of the general pattern of the explanation.

If you take these explanations naively, as I do, it is best to think of them as saying that it is just as if a man alone in a room were going through a set of steps of following rules to generate English sentences or 3D descriptions, as the case might be. But now, let us ask what facts in the real world are supposed to correspond to these explanations as applied to the brain. In Chomsky's case, for example, we are not supposed to think that the agent consciously goes through a set of repeated applications of rules; nor are we supposed to think that he is unconsciously thinking his way through the set of rules. Rather the rules are "computational" and the brain is carrying out the computations. But what does that mean? Well, we are supposed to think that it is just like a commercial computer. The sort of thing that corresponds to the ascription of the same set of rules to a commercial computer is supposed to correspond to the ascription of those rules to the brain. But we have seen that in the commercial computer the ascription is always observer relative, the ascription is made relative to a homunculus who assigns computational interpretations to the hardware states. Without the homunculus there is no computation, just an electronic circuit.
So how do we get computation into the brain without a homunculus? As far as I know, neither Chomsky nor Marr ever addressed the question or even thought there was such a question. But without a homunculus there is no explanatory power to the postulation of the program states. There is just a physical mechanism, the brain, with its various real physical and physical/mental causal levels of description.

VII. Fourth Difficulty: The Brain Does Not Do Information Processing

In this section I turn finally to what I think is, in some ways, the central issue in all of this, the issue of information processing. Many people in the "cognitive science" scientific paradigm will feel that much of my discussion is simply irrelevant and they will argue against it as follows:

There is a difference between the brain and all of these other systems you have been describing, and this difference explains why a computational simulation in the case of the other systems is a mere simulation, whereas in the case of the brain a computational simulation is actually duplicating and not merely modeling the functional properties of the brain. The reason is that the brain, unlike these other systems, is an information processing system. And this fact about the brain is, in your words, "intrinsic". It is just a fact about biology that the brain functions to process information, and since we can also process the same information computationally, computational models of brain processes have a different role altogether from computational models of, for example, the weather. So there is a well defined research question: "Are the computational procedures by which the brain processes information the same as the procedures by which computers process the same information?"

What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake, contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.

But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything, they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system; rather it is a matter of a concrete specific conscious visual event, this very visual experience. Now, that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information processing model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems.

In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The "information" in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing which is described in the cognitive science computational models of cognition, on the other hand, is simply a matter of getting a set of symbols as output in response to a set of symbols as input.

We are blinded to this difference by the fact that the same sentence, "I see a car coming toward me", can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure from us the fact that the visual experience is a concrete event and is produced in the brain by specific electro-chemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model.
The upshot of this part of the discussion is that in the sense of "information" used in cognitive science, it is simply false to say that the brain is an information processing device.
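As a toy illustration of where the "information" lives in the computer case, consider the following sketch of my own (not anything in Searle's text): the encoding and decoding that give the machine's states any syntax or semantics are supplied entirely by the outside agent, while the machine itself merely transforms magnitudes.

```python
def encode(message):
    """The outside agent assigns a syntactical realization (here, numbers
    standing in for voltage levels) to the information."""
    return [ord(ch) for ch in message]

def machine(levels):
    """The machine itself just transforms physical magnitudes; it knows
    nothing about cars, retinas, or English."""
    return [v + 1 for v in levels]

def decode(levels):
    """The same outside agent interprets the output levels as symbols again."""
    return "".join(chr(v - 1) for v in levels)

levels_in = encode("There is a car coming toward me")
levels_out = machine(levels_in)
print(decode(levels_out))   # "There is a car coming toward me"
```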

VIII. Summary of the Argument

This brief argument has a simple logical structure and I will lay it out:

1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", "Is it a set of symbols?", "Is it a set of mathematical formulae?"

5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactic and semantic terms.

6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story. [5]

Footnotes

[1] For earlier explorations see Searle (1980) and Searle (1984).

[2] This view is announced and defended in a large number of books and articles many of which appear to have more or less the same title, e.g. Computers and Thought (Feigenbaum and Feldman, eds., 1963), Computers and Thought (Sharples et al., 1988), The Computer and the Mind (Johnson-Laird, 1988), Computation and Cognition (Pylyshyn, 1985), "The Computer Model of the Mind" (Block, 1990, forthcoming), and of course, "Computing Machinery and Intelligence" (Turing, 1950).

[3] People sometimes say that it would have to add six to itself eight times. But that is bad arithmetic. Six added to itself eight times is fifty-four, because six added to itself zero times is still six. It is amazing how often this mistake is made.

[4] The example was suggested by John Batali.

[5] I am indebted to a remarkably large number of people for discussions of the issues in this paper. I cannot thank them all, but special thanks are due to John Batali, Vinod Goel, Ivan Havel, Kirk Ludwig, Dagmar Searle and Klaus Strelau.

References

Block, Ned (1990, forthcoming). "The Computer Model of the Mind".

Boolos, George S. and Jeffrey, Richard C. (1989). Computability and Logic. Cambridge University Press, Cambridge.

Dennett, Daniel C. (1978). Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, Cambridge, Mass.

Feigenbaum, E.A. and Feldman, J., eds. (1963). Computers and Thought. McGraw-Hill Company, New York and San Francisco.

Fodor, J. (1975). The Language of Thought. Thomas Y. Crowell, New York.

Haugeland, John, ed. (1981). Mind Design. MIT Press, Cambridge, Mass.

Johnson-Laird, P.N. (1988). The Computer and the Mind. Harvard University Press, Cambridge, Mass.

Kuffler, Stephen W. and Nicholls, John G. (1976). From Neuron to Brain. Sinauer Associates, Sunderland, Mass.

Lettvin, J.Y., Maturana, H.R., McCulloch, W.S., and Pitts, W.H. (1959). "What the Frog's Eye Tells the Frog's Brain". Proceedings of the Institute of Radio Engineers, 47, 1940-1951; reprinted in McCulloch, 1965, pp. 230-255.

Marr, David (1982). Vision. W.H. Freeman and Company, San Francisco.

McCulloch, Warren S. (1965). The Embodiments of Mind. MIT Press, Cambridge, Mass.

Pylyshyn, Z. (1985). Computation and Cognition. MIT Press, Cambridge, Mass.

Searle, John R. (1980). "Minds, Brains and Programs", The Behavioral and Brain Sciences, 3, pp. 417-424.

Searle, John R. (1984). Minds, Brains and Science. Harvard University Press, Cambridge, Mass.

Sharples, M., Hogg, D., Hutchinson, C., Torrance, S., and Young, D. (1988). Computers and Thought. MIT Press, Cambridge, Mass. and London.

Shepherd, Gordon M. (1983). Neurobiology. Oxford University Press, New York and Oxford.

Turing, Alan (1950). "Computing Machinery and Intelligence." Mind, 59, 433-460.