Philosophical Review

Computation and Content
Author: Frances Egan
Source: The Philosophical Review, Vol. 104, No. 2 (Apr. 1995), pp. 181-203
Published by: Duke University Press on behalf of the Philosophical Review
Stable URL: http://www.jstor.org/stable/2185977


Computation and Content

Frances Egan

1. Introduction

The dominant program in cognitive psychology since the demise of behaviorism in the 1960s has been computationalism. Computational theories treat human cognitive processes as a species of information processing, and the systems that implement such processing as symbol-manipulating systems. Describing a device as a symbol manipulator implies that it is possible to treat some of its internal states as representations of properties or objects in a particular domain. Computational theories of vision, for example, posit internal states that can be interpreted as representing the depth of the distal scene.

There has been considerable disagreement about the nature and function of representational contents assigned to the states posited by computational theories. It is widely thought that such theories respect what Jerry Fodor (1980) has called the "formality condition," which requires that computational processes have access only to the formal (that is, nonsemantic) properties of the representations over which they are defined. It is by respecting the formality condition that computationalism promises to answer one of the most pressing problems in the philosophy of mind: how can representational mental states be causally efficacious in the production of behavior? Representational mental states, according to computationalism, have their causal roles in virtue of (roughly) their structural properties.1 But this advantage comes at a price. The formal character of computational description appears to leave no real work for the semantic properties of the mental states it characterizes. Thus, computationalism has been thought by some to support a form of eliminativism, the thesis that denies that intentionally characterized states play a genuinely explanatory role in psychology (see, for example, Stich 1983). If the content of computational states is indeed explanatorily idle, then the relation

1For language-like representations, the formality condition claims that they have their causal roles in virtue of their syntax.

between psychological states, as characterized by computational psychology, and psychological states as characterized by our commonsense explanatory practices, which do advert to content, is quite obscure.

In this paper I articulate and defend a strategy for reconciling the formal character of computational description with a commitment to the explanatory usefulness of mental content. I argue that content does not play an individuative or taxonomic role in computational theories: a computational characterization of a process is a formal characterization. Nonetheless, content does play a genuine explanatory role in computational accounts of cognitive capacities. Content ascriptions connect the formal characterization of an internal process with the subject's environment, enabling the computational theory to explain how the operation of the process constitutes the exercise of a cognitive capacity in that environment. I support my account of the role of content in computational psychology by reference to David Marr's theory of early vision,2 in part because it has received a great deal of attention from philosophers; however, my argument depends on general features of computational methodology, and so applies to computational theories generally.

Recent attempts to reconcile computation and content have appealed to a notion of narrow content, that is, content that supervenes on intrinsic physical states of the subject. Proponents of narrow content have so far failed to articulate a notion that is clearly suitable for genuine explanatory work in psychology.3 I argue that it is typically broad content that plays a central role in computational explanation, though I do identify a specific (and limited) function served by narrow content ascription.

2. Why Computational Theories Are Not Intentional

It might be argued that, the formality condition notwithstanding, computational theories are intentional in the following sense: The

2For the most detailed exposition of Marr's theory see Marr 1982.

3Loar (1988) and Segal (1989, 1991) have perhaps come closest. Loar's proposal concerns commonsense, as opposed to computational, psychology. Segal argues that narrow content plays a central role in Marr's theory. See Egan (forthcoming) for criticism of Segal's proposal.

states they posit not only have representational content, but the content they have plays an individuative role in the theory. In other words, computational theories taxonomize states by reference to their contents.

The motivation for the claim that computational theories of cognition are intentional in the above sense is not hard to understand. Consider the following passages:

There is no other way to treat the visual system as solving the problem that the theory sees it as solving than by attributing intentional states that represent objective physical properties. (Burge 1986, 28-29)

[I]t is at least arguable that where rational capacities are the explananda, it is necessary that there be propositional attitudes in the explanans. If this argument is correct, then it is pragmatically incoherent for Stich and his followers to insist that cognitive psychology explains rational capacities by reference to states not described as possessing propositional content. (Hannan 1993, 173)

The argument underlying both passages can be expressed somewhat crudely as follows:

(P) The explananda of computational psychological theories are intentionally characterized capacities of subjects.

(C) Therefore, computational psychological theories are intentional: they posit intentional states.

Underlying the argument is the intuition that scientific explanations should "match" (in some sense) their explananda. Wilson endorses a constraint of this sort, which he calls theoretical appropriateness:

An explanation is theoretically appropriate when it provides a natural (e.g., non-disjunctive) account of a phenomenon at a level of explanation matching the level at which that phenomenon is characterized [in the explanandum]. (1994, 57)

The notion of a "level of explanation" is somewhat vague, but let us assume, for the sake of argument, that there is a unique level of explanation such that all and only explanations at that level involve ascriptions of content. If theoretical appropriateness is a desideratum of scientific explanation, then an explanation of intentionally characterized phenomena should itself advert to intentionally characterized states.4

An unresolved tension surfaces, though, when we consider computational explanation. The fact that the explananda of computational theories are intentionally specified suggests that computational states are essentially individuated by reference to their contents. If computational theories are not intentional, then how can computational theories explain intentionally characterized phenomena? But the formality condition exerts an opposite pressure. In requiring that computational processes have access only to the nonsemantic properties of the representational states over which they are defined, it suggests that computational individuation is nonsemantic, or in Fodor's terminology, formal. Are computational taxonomies intentional or formal? At this point it is helpful to turn to a well-developed example.

Interpreters of Marr's theory of vision have assumed that visual states are individuated in the theory by reference to their contents, hence that the theory is intentional (see Burge 1986; Kitcher 1988; Segal 1989, 1991; Davies 1991; Morton 1993; Shapiro 1993). Although there has been a good deal of disagreement about the sort of content (broad or narrow) that Marrian structures have, the assumption that content plays an individuative role in the theory has not been thought to require explicit argument.5 Burge says that it is "sufficiently evident" that the theory is intentional from the fact that "the top levels of the theory are explicitly formulated in intentional terms" (1986, 35). I shall argue that in construing content as individuative, interpreters of Marr have misconstrued the role of content in computational theories.

While it is true that in his informal exposition of the various visual processes Marr typically characterizes them by reference to features of the distal scene, one should not read too much into

4A tacit appeal to theoretical appropriateness seems to underlie the argument of Graves et al. (1973) for the claim that the explanation of the speaker's knowledge of her language must appeal to internalized knowledge of grammar.

5Shapiro (1993) has described my claim (in Egan 1991) that Marr's theory is not intentional as "startling." It should not be startling. A central claim of Field (1978) is, as Field puts it in his (1986) paper, "that psychological theories have a non-intentional core" (114). In any event, interpreters of Marr have not defended the crucial assumption that his theory is intentional.

184 COMPUTATION AND CONTENT this fact.The processes are also characterizedformally. They have to be-Marr's theory of vision is a computational theory,and a formal characterizationguarantees that they are programmable (hence, physicallyrealizable). The question is, which characteriza- tion does the individuativework? Marr argued persuasivelythat an information-processingsystem should be analyzed at three distinct levels of description. The "top" level, which Marr called the theoryof the computation,is a characterizationof the functioncomputed by the system-whatthe systemdoes. The algorithmiclevel specifies an algorithmfor com- puting the function,and the implementationlevel describes how the process is realized physically.6The top level in Marr's hierarchyis sometimesidentified with Pylyshyn's semantic level (Pylyshyn1984) and Newell's knowledge level (Newell 1982). In other words, the theory of the computation has been construed as essentiallyan intentionalor semanticcharacterization of a mechanism. But such a construal makes somewhat puzzling Marr's insistence that the search for the algorithmmust await the precise specificationof the theoryof the computation.He says,"unless the computationalthe- ory of the process is correctlyformulated, the algorithmwill almost certainlybe wrong" (1982, 124), suggesting that the top level should be understood to provide a function-theoreticcharacterization of the device. Indeed, Marr explicitlypoints out that the theoryof the computation is a mathematical characterizationof the func- tion(s) computed by the various processingmodules. In describing the mathematicalformula that characterizesthe initial filteringof the image (the calculation of the Laplacian of the image convolved with a Gaussian), Marr says the following:

I have argued that from a computational point of view [the retina] signals ∇²G*I (the X channels) and its time derivative ∂/∂t(∇²G*I) (the Y channels). From a computational point of view, this is a precise characterization of what the retina does. Of course, it does a lot more: it transduces the light, allows for a huge dynamic range, has a fovea with interesting characteristics, can be moved around, and so forth. What you accept as a reasonable description of what the retina does depends on your point of view. I personally accept ∇²G as an adequate description, although I take an unashamedly information-processing point of view. (1982, 337)

6In describing the levels of Marr's hierarchy as levels of description I do not mean to preclude treating the items classified by level as phenomena and processes rather than purely linguistic devices available to theorists.

∇²G is a function that takes as arguments two-dimensional intensity arrays I(x,y) and has as values the isotropic rates of change of intensity at points (x,y) in the array. The implementation of this function is used in Marr and Hildreth's (1980) model of edge detection to detect zero-crossings. (A zero-crossing is a point where the value of a function changes its sign. Zero-crossings correspond to sharp intensity changes in the image.) Marr grants that the mathematical specification of the function computed by the retina may not make what the retina does perspicuous. Nonetheless, from an information-processing point of view, the formal specification is "adequate." More precisely, it is the description upon which the correct specification of the algorithm crucially depends.

The claim that the top level provides a mathematical characterization does not imply that it is wrong to speak of the visual system as taking representations of light intensity values as input and yielding representations of shape as output. I am not denying that computational processes have true intentional (semantic) descriptions. For some purposes, as we shall see in the next section, an intentional description of a process will be preferable to a formal characterization. It is not incorrect to say that an intentional characterization of the function computed by a mechanism resides at the top level in Marr's hierarchy, although the intentional characterization provides an extrinsic description of what the device does, and does not individuate the computational process. For the purpose of individuation, the precise mathematical description given by the theory of the computation is the description that counts.7,8
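The mathematical character of the ∇²G*I description can be made vivid in code. The sketch below is a minimal pure-Python illustration, not Marr and Hildreth's implementation; the kernel size, the value of σ, and the synthetic step-edge image are illustrative choices of mine. It builds a discrete Laplacian-of-Gaussian kernel, convolves it with an image, and marks the zero-crossings of the result. Nothing in the code mentions edges or the distal scene; that the sign changes line up with an object boundary is a fact about the interpretation of the output, not about the formal operation itself.

```python
import math

def log_kernel(size=7, sigma=1.0):
    """Discrete Laplacian-of-Gaussian (the ∇²G operator) kernel."""
    k = size // 2
    s2 = sigma * sigma
    return [[((x * x + y * y - 2 * s2) / (s2 * s2))
             * math.exp(-(x * x + y * y) / (2 * s2))
             for x in range(-k, k + 1)]
            for y in range(-k, k + 1)]

def convolve(image, kernel):
    """Same-size convolution with zero padding (kernel is symmetric)."""
    h, w, k = len(image), len(image[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += image[ii][jj] * kernel[k + di][k + dj]
            out[i][j] = s
    return out

def zero_crossings(filtered, eps=1e-6):
    """Points where the filtered value changes sign between horizontal neighbors."""
    return {(i, j)
            for i, row in enumerate(filtered)
            for j in range(len(row) - 1)
            if row[j] * row[j + 1] < -eps}

# A synthetic image: a vertical intensity step between columns 7 and 8.
image = [[0.0] * 8 + [100.0] * 8 for _ in range(16)]
edges = zero_crossings(convolve(image, log_kernel()))
# The zero-crossings sit on the step, e.g. the point (8, 7) is among them.
```

Run on the step image, every detected crossing falls at the column of the intensity step, which is the sense in which zero-crossings of ∇²G*I "correspond to sharp intensity changes."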

7In arguing for (various) intentional characterizations of the theory of the computation, interpreters of Marr point out that he speaks of the primal sketch as "representing the image," and of other structures as representing such distal properties as depth and surface reflectance. The assumption underlying such arguments is that Marr's words in these passages are decisive for settling issues of taxonomy. If theory interpretation were so simple, much of the philosophy of science would be out of business. The individuative principles of a scientific theory can rarely be read off the language used to articulate the theory. Marr is not generally careful or consistent in his language. There is no reason why he should be: he is not focusing on the issues that have concerned philosophers. In the passage I have quoted in the text, however, Marr is explicitly discussing fundamental commitments of the information-processing approach, in particular, how the theory of the computation is to be understood; so the passage bears special significance in the context of the current issue.

8Colin McGinn has pointed out to me that the theory of the computation is intentional in the following sense: it does specify an intended interpretation of a computational process; the intended interpretation is mathematical. The topmost level of a computational theory characterizes the system as computing a series of functions defined on mathematical entities. I am quite happy to say that a computational theory is intentional in this rather unusual sense. This is not the sense in which interpreters of Marr have taken his theory to be intentional. (They have assumed that the theory characterizes the system, essentially, as computing a function defined on aspects of the visual domain, and this is precisely what I deny.)

9This is not to suggest that the theorist can ignore the subject's environment in attempting to formulate a computational description of the device. Quite the contrary. See section 5.

If, as I have argued, the top level of a computational account provides a purely mathematical characterization of a device, then there is little temptation to construe the second, or algorithmic, level as intentional. (Only Burge, as far as I know, construes the algorithmic level as intentional, apparently because in discussing various possible algorithms Marr sometimes employs intentional language.) The algorithmic level of theory simply specifies how the function characterized in mathematical terms at the top level is computed by the system.

3. The Explanatory Role of Content

I have argued that Marr's theory of vision is not intentional. My argument appeals to general features of computational methodology; if I am right, then computational theories of cognition are not intentional: the states and processes characterized by such theories are not individuated by reference to the representational contents ascribed to them. The formal, namely mathematical, characterization does the taxonomic work.

Let us consider for a moment the implications of the claim that computational theories are not intentional. Two mechanisms that compute the same mathematical function, using the same algorithm, are, from a computational point of view, the same mechanism, even though they may be deployed in quite different environments. A computational description is an environment-independent characterization of a mechanism.9 Inasmuch as computational processes are generally construed as modular processes, even the internal environment is irrelevant to the type-individuation of a computational process. Imagine a component of the visual system, called the visex, that computes a representation of the depth of the visual scene from information about binocular disparity.10 Now imagine that within the auditory system of some actual or imagined creature there is a component that is physically identical to the visex. Call this component the audex. According to the theory of auditory processing appropriate to this creature, the audex computes a representation of certain sonic properties. We can imagine a particular visex and audex removed from their normal embeddings in visual and auditory systems respectively and switched. Since the two components are by hypothesis physically identical, they compute the same class of functions. The switch will make no discernible difference to the behavior of the creatures, nor to what is going on inside their heads. The two mechanisms are computationally identical, despite the difference in their normal internal environments.

It will perhaps be noted that the visual theory that describes the visex characterizes it as computing a representation of depth from disparity, and not as computing a representation of certain sonic properties, although it would do the latter if it were embedded in a different internal environment. The important point is that the postulated structures have no content considered independently of the environment (internal and external) in which they are normally situated. This is the sense in which an intentional characterization of a computational process is an extrinsic description.
Structures in the raw primal sketch, which contains information from several distinct ∇²G channels and provides the input to most of the modular processes characterized by Marr's theory, are reliably correlated with such salient distal properties as object boundaries or changes in illumination, and are described by Marr as representing these properties. In some radically different environment, however, the same structures may be correlated with different distal properties, or perhaps with no objective feature of the world. In the latter world, the structures would not represent anything, except perhaps features of the image. They would have no distal content in that world.

10This is an adaptation of an example from Davies 1991.


The point I wish to underscore is that an intentional characterization of a computational mechanism involves an implicit relativization to the context in which the mechanism is normally embedded. The mathematical characterization provided by the theory of the computation does not. Only the mathematical characterization picks out an essential property of a computational mechanism. The intentional characterization is not essential, since in some possible circumstances it would not apply.

What, then, is the role that representational content plays in computational accounts of cognitive processes, if not to essentially characterize cognitive processes? I have argued elsewhere (Egan 1992) that semantic interpretations play a role in computational psychology analogous to the role played by explanatory models in the physical sciences. There are two senses in which this is true. In the first place, an intentional characterization of an essentially formal process serves an expository function, explicating the formal account, which might not itself be perspicuous. Secondly, when a theory is incompletely specified (as is Marr's theory), the study of a model of the theory can often aid in the subsequent elaboration of the theory itself. A computational theorist may resort to characterizing a computation partly by reference to features of some represented domain, hoping to supply the formal details (i.e., the theory) later.

Though the analogy with models in physics is, I think, interesting and useful, the most important function served by intentional interpretations of computational processes is unique to psychology. The questions that antecedently define a psychological theory's domain are usually couched in intentional terms. For example, we want a theory of vision to tell us, among other things, how the visual system can detect depth from information contained in two-dimensional images. An intentional specification of the postulated computational processes demonstrates that these questions are indeed answered by the theory. It is only under an interpretation of some of the states of the system as representations of distal properties (like depth, or surface reflectance) that the processes given a mathematical characterization by a computational theory are revealed as vision. Thus content ascriptions play a crucial explanatory role: we need them to explain how the operation of a formally characterized process constitutes the exercise of a cognitive capacity in the environment in which the process is normally deployed.


Let us return for a moment to the argument considered earlier for the claim that computational theories of cognition are intentional:

(P) The explananda of computational psychological theories are intentionally characterized capacities of subjects.

(C) Therefore, computational psychological theories are intentional: they posit intentional states.

The premise of the argument is true (the questions that define a psychological theory's explanatory domain are usually couched in intentional terms) but, as we have seen, it does not follow that the theory characterizes the states and processes it describes as necessarily intentional. Computational states and processes will typically have no true intentional description when considered independently of an environment. Intentional characterizations are therefore not part of the individuative apparatus of computational theories. In this sense, (C) is false. Yet the argument does contain an important insight: an intentional characterization is needed to connect a computational theory with its pretheoretic explananda. An explanation of how the visual system detects the depth of the scene from information contained in two-dimensional images is forthcoming only when the states characterized in formal terms by the theory are construed as representations of distal properties.

But, one might object, isn't this crucial explanatory role played by an intentional interpretation of a computational process enough to make the computational theory intentional? Indeed, it might seem that a computational theory, when divorced from the intentional interpretation that secures its explanatory relevance, cannot properly be characterized as a theory of cognition. There is a sense in which this is true; however, it does not undermine my point that computational theories are not intentional. Let me explain.

A computational theory provides a mathematical characterization of the function computed by a mechanism, but only in some environments can this function be characterized as a cognitive function (that is, a function whose arguments and values are epistemically related, such that the outputs of the computation can be seen as rational or cogent given the inputs). An example will make the point clearer. The matching of stereo images essential to the computation of depth from binocular disparity is aided, according


to Marr, by a fundamental fact about our world: that disparity varies smoothly, because matter is cohesive. This is an example of what Marr calls a natural constraint (in particular, the continuity constraint). In some environments, the constraints that enable a cognitive interpretation of the mathematical function computed by a mechanism will not be satisfied. In environments where the continuity constraint is not satisfied (a spiky universe), the stereopsis module would compute the same formally characterized function, but it would not be computing depth from disparity. The function might have no cognitive (i.e., rational) description in this environment.

A computational theory prescinds from the actual environment because it aims to provide an abstract, and hence completely general, description of a mechanism that affords a basis for predicting and explaining its behavior in any environment, even in environments where what the device is doing cannot comfortably be described as cognition. When the computational characterization is accompanied by an appropriate intentional interpretation, we can see how a mechanism that computes a particular mathematical function can, in a particular context, subserve a cognitive function such as vision.

A computational theory explains a cognitive capacity by subsuming the mechanism that has that capacity under an abstract computational description. Explaining a pretheoretically identifiable capacity by reference to a class of devices that have an independent, theoretical, characterization is an explanatory strategy familiar from other domains, particularly biology. The ability of sand sharks to detect prey is explained by positing within the shark the existence of an electric field detector, a device whose architecture and behavior is characterized by electromagnetic theory. Electromagnetic theory does most of the explanatory work in the biological explanation of the shark's prey-detecting capacity.
Of course, the explanation appeals to other facts (for example, that animals, but not rocks and other inanimate objects in the shark's natural environment, produce significant electric fields) but no one would suggest that such facts are part of electromagnetic theory. Similarly, by specifying the class of computational devices to which a mechanism belongs and providing an independent (i.e., noncognitive) characterization of the behavior of this class, a computational theory bears the primary explanatory burden in the explanation of a cognitive capacity. The intentional interpretation of the process also

plays an explanatory role (it demonstrates that the capacity has been explained) but playing an essential role in the cognitive explanation does not thereby make it part of the computational theory proper.

So does it follow that computational theories are not cognitive? It depends. If a theory must give a cognitive characterization of a mechanism (according to which computing a cognitive function is a necessary property of the mechanism) to be a cognitive theory, then computational theories are not cognitive. If bearing the primary explanatory burden in an explanation of a cognitive capacity is sufficient, then they typically are.

Let us return briefly to Wilson's theoretical appropriateness condition, the requirement that a scientific explanation characterize a phenomenon at the same level in the explanans as in the explanandum. The above account of computational explanation suggests that theoretical appropriateness is not a general constraint on scientific explanation. Computational explanations characterize cognitive capacities in nonintentional, formal, terms. The requirement is independently implausible in any case, since it would rule out not only reductive explanations (e.g., microreductions) of antecedently characterized phenomena, but also explanation by functional analysis, the predominant form of explanation in both cognitive psychology and biology.11 Such explanations typically analyze complex capacities or processes into more basic, less specialized, elements. For example, the explanation of the capacity to do long division appeals to the ability to copy numerals and perform multiplication and subtraction. The explanation of digestion appeals to more basic chemical processes. Both of these explanations appear to violate Wilson's theoretical appropriateness condition. Though Wilson grants that theoretical appropriateness is a defeasible constraint on scientific explanation, the ubiquity of explanations of this sort suggests that it is not a constraint at all.

4. The Ascription of Content

An interpretation of a computational system is given by an interpretation function fI that specifies a mapping between the postulated

11See Cummins 1983, chap. 2, for an account of functional analysis.

structures of the system and elements of some represented domain. For example, to interpret a device as an adder involves specifying an interpretation function fI that pairs states of the device with numbers. The device can plausibly be said to represent elements in the domain only if there exists an interpretation function that maps formally characterized structures to these elements in a fairly direct way.

Since an interpretation is just a structure-preserving mapping between formally characterized elements and elements of some represented domain, there is no reason to think that the interpretation of a computational system will be unique. The non-uniqueness of computational interpretation has been thought to be a problem for computationalism, but in fact it is not. Most "unintended" interpretations will not meet the directness requirement.12 More importantly, the plausibility of a computational account depends only on the existence of an interpretation that does explanatory work.13

If the above account of the explanatory role of content is correct, then the interpretation of a computational system should connect the formal apparatus of the theory with its pretheoretic explananda. This requirement will constrain the choice of an appropriate interpretation function. A computational theory that purports to explain our arithmetical abilities cannot plausibly claim to have done so unless some of the states it postulates are interpretable as representing numbers. The fact that the system could also be interpreted as charting the progress of the Six-Day War (to use an example of Georges Rey's) would not undermine the theorist's claim to have described an arithmetical system, assuming that the mechanism can be consistently and directly interpreted as computing the appropriate arithmetical functions. Given the explanatory role of intentional interpretation as characterized in the previous section, the existence of "unintended" interpretations of computational systems is irrelevant. The preexisting explananda of the theory set the terms for the ascription of content.

Consider what this means for theories that purport to explain our perceptual capacities. The cognitive tasks that define the domains of theories of perception are typically specified in terms of the recovery of certain types of information about the subject's normal environment. Interpreting states of the system as representing environment-specific properties demonstrates that the theory explains how the subject is able to recover this information in its normal environment. Consequently, we should expect the contents ascribed to computationally characterized perceptual states to be broad, that is, not shared by physically identical subjects in significantly different environments.

It has been argued by Fodor (e.g., 1980, 1984, 1987) and others (e.g., Block (1986) and Cummins (1989)) that computational psychology must restrict itself to a notion of narrow content, that is, content that supervenes on intrinsic physical states of the subject.14 In part, the motivation for such a view is the recognition that computational taxonomy prescinds from the subject's normal environment. Physical duplicates are computational duplicates. Given this fact, if computational states have their semantic properties essentially, then computational psychology requires a notion of content that supervenes on the physical properties of the system; in other words, it needs a notion of narrow content. But if, as I have argued, computational states have their semantic properties only nonessentially, then narrow content is not necessary. And it turns out that there are good reasons why computational psychology should not restrict itself to narrow content.

12. The directness requirement precludes interpreting a desk as an adder, since the assignment of numbers to states of the desk requires the interpreter to compute the addition function herself. The system is not doing the work. The directness requirement has yet to be precisely specified, but see Cummins 1989, chap. 8 for discussion. I gloss over this issue here primarily because, as we shall see below, the "problem" of ruling out unintended interpretations of computational systems typically does not arise.

13. The existence of more than one interpretation meeting the directness requirement simply shows that the formally characterized device is capable of computing more than one cognitive function. The visex, described above, would compute a function on the auditory domain if it were embedded differently in the organism.
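The adder example and the non-uniqueness point discussed above can be made concrete with a toy sketch. Everything below (the tally-string device, the mappings f_I and f_J, the "hours" domain) is invented for illustration and is not drawn from the text:

```python
# Toy illustration: a "device" whose formal states are tally strings and
# whose single operation is concatenation. The operation is purely formal;
# the device has no access to what its states are interpreted as meaning.
def op(state_a: str, state_b: str) -> str:
    return state_a + state_b

# An interpretation function f_I pairing formal states with numbers.
def f_I(state: str) -> int:
    return len(state)  # '|||' is interpreted as the number 3

# Under f_I the device is an adder: the mapping is structure-preserving,
# i.e. f_I(op(a, b)) == f_I(a) + f_I(b) for all states a, b.
a, b = '||', '|||'
assert f_I(op(a, b)) == f_I(a) + f_I(b)  # 5 == 2 + 3

# A second, equally direct interpretation maps the same states into a
# different domain (say, hours elapsed); the formal device is unchanged.
def f_J(state: str) -> float:
    return 24.0 * len(state)

assert f_J(op(a, b)) == f_J(a) + f_J(b)
```

The same formal operation supports both interpretations; which interpretation figures in an explanation depends, as argued above, on the pretheoretic explananda the theory is answerable to.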
In the first place, a useful notion of narrow content has been notoriously hard to specify. More importantly, since the explananda of theories of perception are typically formulated in environment-specific terms, ordinary environment-specific broad contents will best serve the explanatory goals of such theories. The point

14. Others, such as Stich (1983), impressed by the fact that content ascription is typically context-sensitive and observer-relative, have concluded that cognitive psychology should not advert to content at all.


can be generalized. It is widely appreciated that ordinary contents are broad. Insofar as the pretheoretic explananda of computational theories are framed in ordinary terms, the ascription of broad content to computational states and structures will be appropriate.

A close look at Marr's theory confirms the point. He ascribes broad, environment-specific contents where possible. If in a subject's normal environment a structure is reliably correlated with a salient distal property, then Marr describes the structure as representing that property. (For example, he describes structures in the 2.5-D sketch as representing surface orientation.) Some of the structures posited by Marr's theory correlate with no simple distal property tokening in the subject's normal environment. The structures that Marr calls edges sometimes correlate with changes in surface orientation, sometimes with changes in depth, illumination, or reflectance. Marr describes edges as representing this disjunctive distal property. Notice that in both cases (correlation with a simple distal property in the subject's normal environment, or correlation with a disjunctive distal property in the subject's normal environment) the contents ascribed to the representational structures are broad. Moreover, the broad contents so ascribed are determined by the correlations that obtain in the subject's normal environment, not by those that would obtain in some other environment.

Some of the structures that Marr posits (e.g., individual zero-crossings) do not, however, correlate with any easily characterized distal property, simple or disjunctive, in the subject's normal environment. Some of their tokenings correlate with distal properties; others appear to be mere artifacts of the imaging process.
Marr recognizes this fact, cautioning that such structures as zero-crossings are not "physically meaningful"; he describes them as representing discontinuities in the image. Their contents are only proximal, and hence narrow: they supervene on the intrinsic properties of the subject. But such proximal or narrow content, far from being Marr's content of choice, is his content of last resort, since he ascribes proximal content only when a broad content ascription is unavailable.15
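The zero-crossing structures just mentioned come from the Marr-Hildreth account of edge detection (see the references), which locates candidate edges at zero-crossings of a second-derivative operator applied to a smoothed image. Here is a deliberately simple one-dimensional sketch; the signal values and the bare second-difference filter are invented for illustration and are not Marr and Hildreth's actual operator:

```python
# Illustrative only: find zero-crossings of a second-difference
# (Laplacian-like) operator applied to an already-smoothed 1-D signal.
def zero_crossings(signal):
    # the second difference approximates the second derivative
    second = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
              for i in range(1, len(signal) - 1)]
    # a strict sign change between neighbors marks a zero-crossing
    return [i for i in range(len(second) - 1)
            if second[i] * second[i + 1] < 0]

smoothed_step = [0, 0, 0, 2, 5, 6, 6, 6]   # a smoothed intensity step
print(zero_crossings(smoothed_step))        # → [2], near the steepest point
```

Nothing in the computation itself says whether such a crossing marks a change in depth, illumination, or reflectance in the scene; that is precisely why individual zero-crossings receive only proximal content.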

15. Commentators who have thought narrow content to be Marr's content of choice have presumably done so because they recognize that content-determining correlations with distal properties can vary wildly across environments (see, for example, Segal 1989, 1991). They fail to notice that for Marr the relevant correlations are those that obtain in the subject's normal environment.

Covariational (or information-theoretic) theories of content identify the meaning of a representational state with the cause of the state's tokening in certain specifiable circumstances.16 The foregoing account of content ascription in Marr's theory may tempt some to find in his theory a tacit endorsement of a covariational theory of content. This would be a mistake. I have claimed that in ascribing content Marr looks for salient distal correlates of a structure's tokening in the subject's normal environment. I have been careful to avoid claiming that these correlates are the cause of the structure's tokening. Though it may be natural to say that they are, Marr makes no such claim, and a number of well-known problems are avoided by not doing so.17 It should be clear that Marr's theory is not committed to a covariational theory of content if one considers the sort of case where no salient distal correlate (simple or disjunctive) of a structure's tokening can be found. In such cases, Marr ascribes a proximal content to the structure, interpreting it as representing a feature of the image or input representation rather than the distal cause of its tokening, whatever that might be. The ascription of proximal content serves an important expository purpose: it makes the computational account of the device more perspicuous, by allowing us to keep track of what the device is doing at points in the processing where the theory posits structures that do not correlate neatly with a salient distal property.18 No explanatory purpose would be served by an unperspicuous distal interpretation of these structures; consequently, Marr does not interpret them as representing their distal causes. The decision to adopt a proximal rather than a distal interpretation is dictated by purely explanatory considerations.19

16. See, for example, Stampe (1977), Dretske (1981), and Fodor (1990). There are, of course, important differences in their accounts.

17. One problem with covariational theories of content is their implication that the meaning of a symbol is given by the disjunction of all of its potential causes. Since "horse" tokenings would be caused not only by horses, but also by horsey looking cows, covariational theories seem to imply that "horse" means horse or horsey looking cow. (See Fodor 1990 for discussion.) The "disjunction problem" gives rise to a further difficulty, namely, how to account for the possibility of a symbol's misrepresenting its object, given that all potential causes of a symbol's tokening determine its meaning. Though computational theorists have had little to say about misrepresentation (their concern is to characterize what is going on in the normal case, where perception is veridical) it is not hard to see how misrepresentation can arise on the account of content ascription I have sketched above. Structures assigned distal contents (simple or disjunctive) will misrepresent if they are tokened when the normal environmental conditions for their tokening are not satisfied. Suppose that, as part of a military training exercise, Bill, a normal human with a Marrian visual system, is placed in a room where the continuity constraint, which holds that disparity varies smoothly because matter is cohesive, is not satisfied. Bill's visual system normally computes depth from disparity information. However, in these circumstances, where spikes of matter project in all directions, Bill (or, more specifically, the stereopsis module of Bill's visual system) will compute the same formally characterized function as he normally does, but he will misrepresent some other property (not a property for which we have a convenient name) as depth. In general, where the constraints that normally enable an organism to compute a cognitive function are not satisfied, it will fail to represent its environment.

5. Computational Psychology and Naturalistic Psychology

The compatibility of computational description and broad content seems to have gone unnoticed in the literature. Cummins (1989), whose account of representation in computational psychology bears some resemblance to mine, says the following:

The CTC [computational theory of cognition] ... seeks an individualist psychology, i.e., a psychology that focuses on cognitive capacities of the kind that might be brought to bear on radically different environments. If the anti-individualist position with regard to intentionality is right (i.e. if beliefs and desires cannot be specified in a way that is independent of environment), then the explananda of an individualist psychology cannot be specified intentionally. It follows that the CTC shouldn't (indeed, mustn't) concern itself with intentionally specified explananda. (140)

18. One might wonder whether what I am calling "proximal content" is really content at all. To be sure, proximal contents do not bear much resemblance to the contents we ascribe in our ordinary predictive and explanatory practices; however, I do think that contents of this sort play a genuine explanatory role in computational accounts of internal processes. To cite a second example, Marcus (1980) interprets the structural descriptions constructed in the course of natural language comprehension as representing not distal objects (or public language sentences) but the items in stacks or buffers of the parser. In both the vision and parsing cases, interpreting a structure as representing other structures constructed earlier in the process serves the important function of allowing us to keep track of what the processor is doing. Given that the rationale for content ascription in computational psychology is primarily explanatory, I think that proximal content should be treated as a species of content, though perhaps only as a sort of "minimal" content.

19. Matthews (1988), Segal (1989), and McGinn (1989) note another reason to resist an exclusively causal account of content. They argue that the contents of mental representations seem to be partly determined by the sorts of behaviors that they tend to produce. Whether a structure whose tokening is caused by both cracks and shadows means crack, shadow, or crack or shadow depends in part upon whether its tokening contributes to the production of behavior appropriate to cracks, shadows, or both.

Cummins's mistake is in thinking that the fact that a computational theory seeks to provide a nonintentional, environment-independent characterization of a cognitive process entails that it cannot explain phenomena specified in environment-specific terms. This, we have seen, is wrong. A computational theory explains an environment-specific cognitive capacity by subsuming it under an environment-independent characterization. The intentional interpretation of the process serves as a bridge between the abstract characterization provided by the theory and the environment-specific intentional characterization that constitutes the theory's explananda. Precisely because the intentional interpretation does not play an essentially individuative role in the theory (in other words, whatever contents computational states have, they have them nonessentially), the theorist is free to assign broad contents where appropriate to secure the connection between theory and explananda.

The fact that the computational theorist can and typically will assign broad contents to computational structures has larger implications for psychology. In "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology," Fodor says the following:

there is room for both a computational psychology, viewed as a theory of formal processes defined over mental representations, and a naturalistic psychology, viewed as a theory of the (presumably causal) relations between representations and the world which fix the semantic interpretations of the former. I think that in principle this is the right way to look at things ... however ... it's overwhelmingly likely that computational psychology is the only one that we are likely to get.... [A] naturalistic psychology isn't a practical possibility and isn't likely to become one. (1980, 66)

Naturalistic psychology, as Fodor construes it, is the theory of organism/environment relations that fix the meanings of our mental


terms. He offers two arguments for the claim that a naturalistic psychology is impossible. As both arguments have been thoroughly worked over in the literature (see the commentaries that accompany Fodor 1980), I won't go into them here. But as far as I know, no one has disputed Fodor's implication that computational psychology and naturalistic psychology are entirely unrelated projects. If my account of the role of content in computational psychology is correct, then Fodor's way of conceiving things is wrong. If we had a complete computational psychology, that is, a computational account of each human cognitive capacity, we would ipso facto already have a naturalistic psychology. Let me elaborate.

Although a computational theory provides a formal, environment-independent characterization of a process, the theorist will usually be unable to discover the correct formal characterization without investigating the subject's normal environment. Typically, a necessary first step in specifying the function computed by a cognitive mechanism is discovering environmental constraints that make the computation tractable. The solutions to information-processing problems are often underdetermined by information contained in the input to the mechanism; the solution is achieved only with the help of additional information reflecting very general features of the subject's normal environment. For example, as previously mentioned, the computation of depth from binocular disparity is possible only because the mechanism is built to assume something that is true about its normal environment: that disparity varies smoothly because matter is cohesive (the continuity constraint). Finding constraints of this very general sort is a necessary first step in characterizing the mathematical problem that the mechanism has to solve, and thus in arriving at a correct computational description of the process.
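To make the role of such a constraint concrete, here is a minimal sketch. The depth formula is the standard geometry of a simple fronto-parallel stereo rig (depth z = f · b / d, for focal length f, baseline b, and disparity d); the tiebreaking rule and all the numbers are invented for illustration and are not Marr's stereopsis algorithm:

```python
# Standard geometry of a fronto-parallel stereo rig: depth is inversely
# proportional to disparity (f = focal length, b = baseline between eyes).
def depth_from_disparity(disparity, focal_length, baseline):
    return focal_length * baseline / disparity

# Toy use of the continuity constraint (an invented rule): when several
# candidate disparities are consistent with the two images, prefer the one
# closest to a neighboring location's disparity, on the assumption that
# disparity varies smoothly because matter is cohesive.
def pick_disparity(candidates, neighbor_disparity):
    return min(candidates, key=lambda d: abs(d - neighbor_disparity))

# Matching ambiguity at one location: disparities 2.0 and 7.0 both fit.
chosen = pick_disparity([2.0, 7.0], neighbor_disparity=2.5)
print(depth_from_disparity(chosen, focal_length=1.0, baseline=6.5))  # → 3.25
```

In an environment that violates the constraint (the spiky room of note 17), the same formal rule runs unchanged, but its output no longer tracks depth; that is the misrepresentation case described there.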
There is a second and more obvious point at which the computational theorist will contribute to the specification of the organism/environment interactions that fix the meanings of mental terms: namely, in the specification of an intentional interpretation of a formally characterized process. I have argued that content ascription is constrained by the subject's normal environment. The process of ascribing content to the structures posited by the theory involves the attempt to specify the normal environmental correlates of tokenings of these structures. The fact that Marr succeeded in ascribing distal contents to many of the structures posited in his

theory (and Marr is not unique in this achievement) demonstrates that naturalistic psychology is not impossible. Although computational psychology is formal (its taxonomic principles are formal), it develops hand-in-glove with the project that Fodor calls naturalistic psychology.20

6. Scope and Limits of the Account

My account of the explanatory role of content has been articulated and defended by reference to classical computational models. Classical architectures treat cognitive processes as rule-governed manipulations of internal symbols or data structures that are explicit candidates for interpretation. Connectionist cognitive models, by contrast, do not posit data structures over which the device's operations are defined. (Many connectionist devices, unlike classical devices, are not correctly described as constructing, storing, and retrieving internal representations.) Connectionist models posit activated units (nodes) that increase or decrease the level of activation of other units to which they are connected until the ensemble settles into a stable configuration. Consequently, connectionist models lack convenient "hooks" on which an intentional interpretation of a process may be hung. Semantic interpretations ascribe content either to individual units in the network or to patterns of activation over an ensemble of units. However, I see no reason why the above account of the explanatory role of content would not apply straightforwardly to connectionist systems. Semantic interpretations of connectionist networks play the same complex explanatory role as do interpretations of classical computational models. Most importantly, they connect a connectionist theory of a cognitive capacity with its pretheoretic explananda.

It remains to be seen whether computational psychology will shed much light on paradigm cases of intentional states, namely, beliefs and desires. The conspicuous successes of computationalism

20. Naturalistic psychology, construed as the specification of the organism/environment relations that fix the meanings of mental representations, should not be confused with the enterprise that is sometimes called "the naturalization project" in semantics. The latter attempts to specify sufficient conditions, in a nonintentional and nonsemantic vocabulary, for a mental state's meaning what it does. It is a purely philosophical project, not the concern of psychologists.

have been in characterizing highly modularized, informationally encapsulated processes such as early vision and syntactic and phonological processing. The states posited by theories of this sort fail to exhibit the complex functional roles characteristic of the propositional attitudes (including, typically, accessibility to consciousness). They are subdoxastic states. Fodor, in The Modularity of Mind, has expressed considerable pessimism about the prospects of characterizing in formal, computational terms more central cognitive processes such as belief fixation. I think this pessimism is well placed, if only because the context-sensitivity of belief ascription makes the programming task appear intractable. However, should a computational account of propositional attitudes be forthcoming, content would play the same explanatory role it plays in theories of modular capacities. An interesting consequence of this eventuality is that propositional attitudes, so characterized, would not have their contents essentially. Type-identical belief-state tokens might have different contents, should they be tokened in relevantly different environments.21 The prospect of a computational theory of belief, therefore, challenges a fundamental commitment of orthodox philosophy of mind.22 Some may conclude that such a theory would not really be about the propositional attitudes, though nothing in the folk conception of the mind would seem to warrant this conclusion.23
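As a brief illustration of the connectionist picture sketched earlier in this section, here is a minimal Hopfield-style toy in which units raise or lower one another's activation through symmetric weights until the ensemble settles into a stable configuration. The network size, weights, and patterns are invented for illustration and do not come from any model discussed in the text:

```python
# Illustrative only: asynchronous updates drive the ensemble of +/-1
# units to a fixed point (a "stable configuration").
def settle(weights, state, max_sweeps=100):
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            net = sum(weights[i][j] * state[j] for j in range(len(state)))
            new = 1 if net > 0 else -1
            if new != state[i]:
                state[i], changed = new, True
        if not changed:          # no unit wants to change: fixed point
            return state
    return state

# Store one pattern in the weights (Hebbian outer product, zero diagonal),
# then let a corrupted input settle back to it.
pattern = [1, -1, 1, -1]
n = len(pattern)
weights = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
           for i in range(n)]
print(settle(weights, [1, -1, 1, 1]))   # → [1, -1, 1, -1]
```

Interpreting the stable pattern, rather than any single unit, as the bearer of content corresponds to ascribing content to patterns of activation over an ensemble of units.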

Rutgers University

21. For example, a computational theory of belief would type-identify my water beliefs and my Twin Earth doppelganger's twater beliefs, although intentional interpretations appropriate to our respective worlds might assign different broad contents to our type-identical beliefs. A computational theory of belief would, therefore, respect the intuition that has been the prime motivation for the postulation of narrow content: that doppelgangers are identical in psychologically relevant respects, and hence should be subsumed under the same psychological generalizations. But because a computational theory is not committed to narrow content, it can also accommodate the intuition that the subject's environment is a determinant of her belief contents.

22. But see Matthews 1994 for an account of propositional attitudes that denies that they have their contents essentially.

23. Thanks to Kent Bach, Noam Chomsky, Patricia Kitcher, Robert Matthews, Colin McGinn, Rob Wilson, and the editors for helpful comments on earlier versions of this paper.


References

Block, Ned. 1986. "Advertisement for a Semantics for Psychology." In Midwest Studies in Philosophy, vol. 10, ed. P. French, T. Uehling, and H. Wettstein. Minneapolis: University of Minnesota Press.

Burge, Tyler. 1986. "Individualism and Psychology." Philosophical Review 95:3-45.

Cummins, Robert. 1983. The Nature of Psychological Explanation. Cambridge: MIT Press.

Cummins, Robert. 1989. Meaning and Mental Representation. Cambridge: MIT Press.

Davies, Martin. 1991. "Individualism and Perceptual Content." Mind 100:461-84.

Dretske, Fred. 1981. Knowledge and the Flow of Information. Cambridge: MIT Press.

Egan, Frances. 1991. "Must Psychology be Individualistic?" Philosophical Review 100:179-203.

Egan, Frances. 1992. "Individualism, Computation, and Perceptual Content." Mind 101:443-59.

Egan, Frances. Forthcoming. "Intentionality and the Theory of Vision." In Problems in Perception, ed. K. Akins. Oxford: Oxford University Press.

Field, Hartry. 1978. "Mental Representation." Erkenntnis 13:9-61.

Field, Hartry. 1986. "The Deflationary Conception of Truth." In Fact, Science, and Morality: Essays on A. J. Ayer's "Language, Truth, and Logic," ed. G. McDonald and C. Wright, 55-117. Oxford: Basil Blackwell.

Fodor, Jerry. 1980. "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology." Behavioral and Brain Sciences 3:63-73.

Fodor, Jerry. 1984. "Observation Reconsidered." Philosophy of Science 51:23-43.

Fodor, Jerry. 1987. Psychosemantics. Cambridge: MIT Press.

Fodor, Jerry. 1990. A Theory of Content and Other Essays. Cambridge: MIT Press.

Graves, C., J. Katz, Y. Nishiyama, S. Soames, R. Stecker, and P. Tovey. 1973. "Tacit Knowledge." Journal of Philosophy 70:318-30.

Hannan, Barbara. 1993. "Don't Stop Believing: The Case Against Eliminative Materialism." Mind and Language 8:165-79.

Kitcher, Patricia. 1988. "Marr's Computational Theory of Vision." Philosophy of Science 55:1-24.

Marcus, Mitchell. 1980. A Theory of Syntactic Recognition for Natural Language. Cambridge: MIT Press.

Marr, David. 1982. Vision. New York: Freeman Press.

Marr, David, and Ellen Hildreth. 1980. "Theory of Edge Detection." Proceedings of the Royal Society B 200:187-217.

Matthews, Robert. 1988. "Authoritative Self-Knowledge and Perceptual Individualism." In Contents of Thought, ed. R. Grimm and D. Merrill. Tucson: University of Arizona Press.

Matthews, Robert. 1994. "The Measure of Mind." Mind 103:131-46.

McGinn, Colin. 1989. Mental Content. Oxford: Basil Blackwell.

Morton, Peter. 1993. "Supervenience and Computational Explanation in Vision Theory." Philosophy of Science 60:86-99.

Newell, Allen. 1982. "The Knowledge Level." Artificial Intelligence 18:87-127.

Pylyshyn, Zenon. 1984. Computation and Cognition. Cambridge: MIT Press.

Segal, Gabriel. 1989. "On Seeing What is Not There." Philosophical Review 98:189-214.

Segal, Gabriel. 1991. "Defence of a Reasonable Individualism." Mind 100:485-93.

Shapiro, Lawrence. 1993. "Content, Kinds, and Individualism in Marr's Theory of Vision." Philosophical Review 102:489-513.

Stampe, Dennis. 1977. "Toward a Causal Theory of Linguistic Representation." In Midwest Studies in Philosophy, vol. 2, ed. P. French, T. Uehling, and H. Wettstein. Minneapolis: University of Minnesota Press.

Stich, Stephen. 1983. From Folk Psychology to Cognitive Science. Cambridge: MIT Press.

Wilson, Robert. 1994. "Causal Depth, Theoretical Appropriateness, and Individualism in Psychology." Philosophy of Science 61:55-75.
