The Rise and Fall of Computational Functionalism


1. Introduction

Hilary Putnam is the father of computational functionalism, a doctrine he developed in a series of papers beginning with "Minds and machines" (1960) and culminating in "The nature of mental states" (1967b). Enormously influential ever since, it became the received view of the nature of mental states. In recent years, however, there has been growing dissatisfaction with computational functionalism. Putnam himself, having advanced powerful arguments against the very doctrine he had previously championed, is largely responsible for its demise. Today, Putnam has little patience for either computational functionalism or its underlying philosophical agenda. Echoing his despair of naturalism, Putnam dismisses computational functionalism as a utopian enterprise. My aim in this article is to present both Putnam's arguments for computational functionalism and his later critique of the position.1 In section 2, I examine the rise of computational functionalism. In section 3, I offer an account of its demise, arguing that it can be attributed to recognition of the gap between the computational-functional aspects of mentality and its intentional character. This recognition can be traced to two of Putnam's results: the familiar Twin-Earth argument, and the less familiar theorem that every ordinary physical system implements every finite automaton. I close with implications for cognitive science.

2. The rise of computational functionalism

Computational functionalism is the view that mental states and events – pains, beliefs, desires, thoughts and so forth – are computational states of the brain, and so are defined in terms of "computational parameters plus relations to biologically characterized inputs and outputs" (1988: 7). The nature of the mind is independent of the physical makeup of the brain: "we could be made of Swiss cheese and it wouldn't matter" (1975b: 291).2 What matters is our functional organization: the way in which mental states are causally related to each other, to sensory inputs, and to motor outputs. Stones, trees, carburetors and kidneys do not have minds, not because they are not made out of the right material, but because they do not have the right kind of functional organization: their functional organization does not appear to be sufficiently complex to render them minds. Yet there could be other thinking creatures, perhaps even made of Swiss cheese, with the appropriate functional organization.

The theory of computational functionalism was an immediate success, though several key elements of it were not worked out until much later. For one thing, computational functionalism presented an attractive alternative to the two dominant theories of the time: classical materialism and behaviorism. Classical materialism – the hypothesis that mental states are brain states – was revived in the 1950s by Place (1956), Smart (1959) and Feigl (1958). Behaviorism – the hypothesis that mental states are behavior-dispositions – was advanced, in different forms, by Carnap (1932/33), Hempel (1949) and Ryle (1949), and was inspired by the dominance of the behaviorist approach in psychology at the time. Both doctrines, however, were plagued by difficulties that did not, or so it seemed, beset computational functionalism. Indeed, Putnam's main argument for functionalism is that it is a more reasonable hypothesis than classical materialism and behaviorism.
The rise of computational functionalism can also be explained by the "cognitive revolution" of the mid-1950s. Noam Chomsky's devastating review of Skinner's Verbal Behavior, and the development of experimental instruments in psychological research, led to the replacement of the behaviorist approach in psychology by the cognitivist one. In addition, Chomsky's novel mentalistic theory of language (Chomsky 1957), which revolutionized the field of linguistics, and the emerging research in the area of artificial intelligence, together produced a new science of the mind, now known as cognitive science. The working hypothesis of this science has been that the mechanisms underlying our cognitive capacities are species of information processing, namely, computations that operate on mental representations. Computational functionalism was inspired by these dramatic developments. Putnam, and even more so Jerry Fodor (1968, 1975), thought of mental states in terms of the computational theories of cognitive science. Many even see computational functionalism as furnishing the requisite conceptual foundations for cognitive science. Given its close relationship with the new science of the mental, it is not surprising that computational functionalism was so eagerly embraced.

Putnam develops computational functionalism in two phases. In the earlier papers, Putnam (1960, 1964) does not put forward a theory about the nature of mental states. Rather, he uses an analogy between minds and machines to show that "the various issues and puzzles that make up the traditional mind-body problem are wholly linguistic and logical in character… all the issues arise in connection with any computing system capable of answering questions about its own structure" (1960: 362). Only in 1967 does Putnam make the additional move of identifying mental states with functional states, suggesting that "to know for certain that a human being has a particular belief, or preference, or whatever, involves knowing something about the functional organization of the human being" (1967a: 424). In "The nature of mental states", Putnam explicitly proposes "the hypothesis that pain, or the state of being in pain, is a functional state of a whole organism" (1967b: 433).

2.1 The analogy between minds and machines

Putnam advances the analogy between minds and machines because he thinks that the case of machines and robots "will carry with it clarity with respect to the 'central area' of talk about feelings, thoughts, consciousness, life, etc." (1964: 387). According to Putnam, this does not mean that the issues associated with the mind-body problem arise for machines. At this stage Putnam does not propose a theory of the mind. His claim is just that it is possible to clarify issues pertaining to the mind in terms of a machine analogue, "and that all of the question of 'mind-body identity' can be mirrored in terms of the analogue" (1960: 362). The type of machine used for the analogy is the Turing machine, still the paradigm example of a computing machine.

A Turing machine is an abstract device consisting of a finite program, a read-write head, and a memory tape (figure 1). The memory tape is finite, though indefinitely extendable, and divided into cells, each of which contains exactly one (token) symbol from a finite alphabet (an empty cell is represented by the symbol B). The tape's initial configuration is described as the 'input'; the final configuration as the 'output'. The read-write mechanism is always located above one of the cells.
It can scan the symbol printed in the cell, erase it, or replace it with another. The program consists of a finite number of states, e.g., A, B, C, D in figure 1. It can be presented as a machine table, as quadruples, or, as in our case, as a flow chart. The computation, which mediates an input and an output, proceeds stepwise. At each step, the read-write mechanism scans the symbol in the cell above which it is located, and the machine then performs one or more of the following simple operations: (1) erasing the scanned symbol, replacing it with another symbol, or moving the read-write mechanism to the cell immediately to the right or left of the cell just scanned; (2) changing the state of the machine program; (3) halting. The operations the machine performs at each step are uniquely determined by the scanned symbol and the program's instructions. If, in our example, the scanned symbol is '1' and the machine is in state A, then it will follow the instruction specified for state A, e.g., 1:R, meaning that it will move the read-write mechanism to the cell immediately to the right, and will stay in state A. Overall, any Turing machine is completely described by a flow chart.

The machine described by the flow chart in figure 1 is intended to compute the function of addition, e.g., '111+11', where the numbers are represented in unary notation. The machine starts in state A, with the read-write mechanism above the leftmost '1' of the input. The machine scans the first '1' and then proceeds to arrive at the sum by replacing the '+' symbol with '1' and erasing the rightmost '1' of the input. Thus if the input is '111+11', the printed output is '11111'.

The notion of a Turing machine immediately calls into question some of the classic arguments for the superiority of minds over machines. Take for example Descartes' claim that no machine, even one whose parts are identical to those of a human body, can produce the variety of human behavior: "even though such machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others" (1637/1985: 140). It is true that our Turing machine is only capable of computing addition. But as Turing proved in 1936, there is also a universal Turing machine capable of computing any function that can be computed by a Turing machine. In fact, almost all the computing machines used today are such universal machines. Assuming that human behavior is governed by some finite rule, it is hard to see why a machine cannot manifest the same behavior.3

As Putnam shows, however, minds and Turing machines are not just analogous in the behavior they are capable of generating, but also in their internal composition. Take our Turing machine. One characterization of it is given in terms of the program it runs, i.e., the flow chart, which determines the order in which the states succeed each other, and what symbols are printed when.
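The stepwise behavior just described is easy to make concrete. Below is a minimal Python sketch of the addition machine. Since figure 1 is not reproduced in this extract, the state names (A, B, C) and the exact transition table are my reconstruction of the flow chart from the description above, not necessarily its precise contents.

BLANK = "B"  # the text represents an empty cell by the symbol B

# (state, scanned symbol) -> (symbol to write, head move, next state);
# "R" and "L" move the head right or left, "H" halts the machine.
TRANSITIONS = {
    ("A", "1"): ("1", "R", "A"),      # sweep right across the first addend
    ("A", "+"): ("1", "R", "B"),      # replace '+' with '1'
    ("B", "1"): ("1", "R", "B"),      # sweep right across the second addend
    ("B", BLANK): (BLANK, "L", "C"),  # past the last '1': step back left
    ("C", "1"): (BLANK, "H", "C"),    # erase the rightmost '1' and halt
}

def run(tape_string):
    """Run the addition machine on an input tape such as '111+11'."""
    tape = dict(enumerate(tape_string))  # sparse, indefinitely extendable tape
    head, state = 0, "A"                 # start in state A at the leftmost cell
    while True:
        symbol = tape.get(head, BLANK)
        write, move, state = TRANSITIONS[(state, symbol)]
        tape[head] = write
        if move == "H":
            break
        head += 1 if move == "R" else -1
    return "".join(s for _, s in sorted(tape.items()) if s != BLANK)

print(run("111+11"))  # prints '11111', i.e., 3 + 2 = 5 in unary

Each pass through the loop is one step of the computation: the scanned symbol and the current state uniquely determine what is written, how the head moves, and which state comes next, exactly as in the flow-chart description above.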