
On What Language Is

by

David Alexander Balcarras

M.A., University of Toronto (2014)

B.A. (Hons.), University of Toronto (2013)

Submitted to the Department of Linguistics and Philosophy in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

September 2020

© 2020 Massachusetts Institute of Technology. All rights reserved.

Signature of Author ...... Department of Linguistics and Philosophy September 1, 2020

Certified by ...... Alex Byrne Professor of Philosophy Thesis Supervisor

Accepted by ...... Bradford Skow Laurence S. Rockefeller Professor of Philosophy Chair of the Committee on Graduate Students

On What Language Is

by

David Balcarras

Submitted to the Department of Linguistics and Philosophy on September 1, 2020 in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Philosophy

ABSTRACT

What is language? I defend the view that language is the practical capacity for partaking in communication with linguistic signs. To have a language just is to know how to communicate with it. I argue that this view—communicationism—is compatible with its main rival: the view that we know our language by tacitly knowing a particular generative grammar, a set of rules and principles pairing sounds with meanings. But only communicationism gets at language’s essence. Moreover, the rival view may be false, for there is in fact little reason to think we tacitly know grammars.

In chapter 1, I argue that communicationism is compatible with the view that language is constituted by tacit knowledge of grammar because the brain states that realize grammatical knowledge do so because they enable us to know how to linguistically communicate. In chapter 2, I offer further reasons to accept communicationism. The starting thought that we know how to communicate by knowing how to use sentences in a particular rule-governed way in order to express our thoughts is developed into a use-based account of meaning, on which all expressions have their meanings because we know how we use them to mean things.

In chapter 3, I explore the extent to which language use is enabled by unconscious representations of grammatical rules. In particular, I consider whether linguistic understanding is enabled by tacit knowledge of compositional semantics. I argue that it is not. Language comprehension and production can be explained without appeal to tacit knowledge of semantics, by instead appealing to our subpersonal capacity to translate natural language sentences into the medium of thought. I conclude that there does not seem to be any reason to believe in tacit knowledge of grammar. Finally, in chapter 4, I survey proposals about what it would be for a speaker to tacitly know a grammar, and argue that they are all inadequate.
I conclude that linguistic meaning cannot be explained in terms of tacit knowledge of grammar. It should instead be understood in terms of the practical knowledge that manifests in intentional linguistic action, rather than in terms of that which might underlie it.

Thesis Supervisor: Alex Byrne Title: Professor of Philosophy

Contents

Acknowledgments 7

Introduction 9

1 What is it to have a language? 13

1.1 Language and communication ...... 13

1.2 Knowing how to communicate with a language ...... 17

1.2.1 Communicating with a language ...... 17

1.2.2 Inner speech ...... 19

1.2.3 The ‘language’ of thought ...... 20

1.2.4 Variation in speakers’ linguistic abilities ...... 22

1.2.5 Non-social, non-conventional linguistic communication ...... 23

1.3 The psycho-biology of language ...... 27

1.3.1 Cognitivism and Neurobiologicalism ...... 27

1.3.2 How psychogrammars are realized ...... 31

1.3.2.1 Functional realization ...... 31

1.3.2.2 Realizers do not suffice ...... 33

1.3.2.3 Are integrated physiogrammars neurophysiological? ...... 35

1.3.3 The evolution of language ...... 37

1.4 Conclusion ...... 39

2 Meaning, use, and know-how 41

2.1 Meaning and use ...... 41

2.2 Communicationism ...... 45

2.2.1 A framework for metasemantics ...... 45

2.2.2 Communication and speaker-meaning ...... 46

2.3 An argument for communicationism ...... 50

2.3.1 Having L entails knowing how to communicate with L ...... 50

2.3.2 Knowing how to communicate with L entails having L ...... 54

2.4 Another argument for communicationism ...... 55

2.4.1 Sentence-meaning from know-how ...... 56

2.5 Objections to communicationism ...... 59

2.5.1 Individuating languages ...... 59

2.5.2 Non-literal communication ...... 60

2.5.3 Speaker-meaning ...... 62

2.5.4 Semantic value versus content ...... 62

2.5.5 Meaning without use ...... 63

2.5.6 The subsentential meaning problem ...... 64

Appendices 67

2.A Word-meaning from sentence-meaning ...... 67

2.B Languages and semantic interpretations ...... 71

2.B.1 Speaker-relative meaning supervenes on interpretation-use ...... 73

2.B.2 Language supervenes on speaker-relative meaning ...... 74

2.B.3 Language supervenes on interpretation-use ...... 75

2.B.4 Explaining the supervenience of language-having ...... 76

3 Is meaning cognized? 81

3.1 Introduction ...... 81

3.2 Why meaning is said to be cognized ...... 82

3.3 Against semantic cognizing ...... 85

3.3.1 Forming semantic beliefs by disquotation ...... 86

3.3.2 Semantic beliefs formed by disquotation are epistemically safe ...... 87

3.3.3 Semantic knowledge by disquotation ...... 88

3.3.4 Non-natural languages of thought ...... 89

3.4 Sentence disquotationalism vs. speaker disquotationalism ...... 92

3.4.1 Pragmatic knowledge is insufficient for semantic knowledge ...... 93

3.5 Disquotationalism versus cognitivism ...... 94

3.6 Objections and replies ...... 98

3.6.1 The ‘there’s more to semantic competence’ objection ...... 98

3.6.2 The ‘no evidence’ objection ...... 102

3.6.3 The ‘no semantics-free translation’ objection ...... 103

3.6.4 The indexicality objection ...... 105

3.6.5 The ‘anti-reliabilism’ objection ...... 109

3.6.6 The ‘baseless semantic beliefs’ objection ...... 110

3.6.6.1 Against reason ...... 111

3.6.6.2 Cognitivism fails to satisfy Reason ...... 113

3.6.6.3 Fricker on the perception of meaning ...... 113

3.6.7 The indication objection ...... 117

4 What might knowledge of grammar be? 121

4.1 The trouble with psychogrammars ...... 121

4.2 Desiderata for a theory of psychogrammars ...... 123

4.3 Theories of psychogrammars ...... 127

4.3.1 Working theories ...... 127

4.3.2 Schematic theories ...... 129

4.3.3 Functionalist theories ...... 132

4.3.3.1 Psychofunctionalism ...... 132

4.3.3.2 Against psychofunctionalism ...... 133

4.3.3.3 De-semanticalizing understanding ...... 136

4.3.4 Computational theories ...... 138

4.3.4.1 Computational functionalism ...... 140

4.3.4.2 Computational structuralism ...... 142

4.3.4.3 Computational descriptivism ...... 143

4.3.4.4 Computational mechanicalism ...... 146

4.3.5 The Evans-Davies-Peacocke theory ...... 149

4.3.5.1 Peacocke’s account of psychogrammars ...... 151

4.3.5.2 Against Peacocke’s account ...... 153

4.3.6 Biological theories ...... 160

4.3.6.1 The biolinguistic conception of psychogrammars ...... 160

4.3.6.2 Limitations of the biological theory ...... 163

4.3.6.3 The multiple realizability of language ...... 164

4.4 Competence as performance ...... 167

Acknowledgments

The following counterfactual is most likely true, or is at least assertible: If Alex Byrne had not been my advisor, this dissertation would have been largely bereft of whatever degree of precision and concision it achieves, while containing many more barbarisms and implicit denials of Moorean truths. Thanks of the highest order to Alex. And enormous thanks also to the rest of my committee, Justin Khoo, Agustín Rayo, and Bob Stalnaker, for supporting, inspiring, and challenging me.

I have been equally spurred on by the work of my thesis grandfather, David Lewis. I should have liked to raise to him my 28 objections to his 28 replies to the 28 objections considered in “Languages and Language”, but I am sure he would have already anticipated them. (Those about to read this dissertation are advised to stop and instead read (or reread) Lewis (1983).)

Another assertible counterfactual: If it were not for Benj Hellie and Jessica Wilson, this dissertation would not exist, not now or in the future. Thank you for showing me how to do philosophy, and why.

For conversation and comments on my chapters or earlier incarnations thereof, thanks to Martín Abreu Zavaleta, Allison Balin, Nathaniel Baron-Schmitt, Marion Boulicault, Tyler Brooke-Wilson, David Builes, Thomas Byrne, Kevin Dorst, Kelly Gaus, Cosmo Grant, E.J. Green, Samia Hesni, Kat Hintikka, Stephen Hollingworth, Michele Odisseas Impagnatiello, Ari Koslow, Daniel Muñoz, Dennis Papadopoulos, Amogh Sahu, Steve Schiffer, Haley Schilling, Kieran Setiya, Jack Spencer, Kirsi Teppo, Quinn White, Roger White, Steve Yablo, and to audiences at MIT, Brown, NYU, and Manchester. (Apologies if I have left anyone out!) For making grad school more than grad school, heartfelt thanks to Allison, David, Nathaniel, and Samia.
Finally, I owe an unrepayable debt to my family: to its honorary members, Dennis, Mehtaab, Jamus, Iris, and Paul, whose unrivaled constancy got me through; to Sam, for helping me raise the Massachusetts Sutherland population to two; and to my kin, Tim, Stu (and Meaghan, Teddy, and Rachel), Dave, and Kathy, for books, being there, and home base. (Disclaimer: The majority of this dissertation was written or proofed in conditions of pandemic isolation, which may or may not have resulted in undue severity of tone throughout.)

Introduction

What is language? Many philosophers and linguists see language as a social, conventional system of communication. Others disagree. They say language is fundamentally the medium of thought, a psychological, ultimately biological phenomenon; it occurs at the level of the individual, and so is not social or conventional; and it is sharply distinct from linguistic communication, which evolved as a mere externalized byproduct of internalized language.

Whichever of these perspectives we adopt, it affects how we think about linguistic meaning. The view of language as a system of socio-conventional communication pairs with the view that meaning is conventional, on which words’ meanings are fixed by their use as devices for communication: a social, convention-governed affair. By contrast, the psycho-biological conception of language suggests a view of meaning as psychological and conceptual, on which words’ meanings are fixed by a biologically implemented lexicon associating them with concepts. The apparent disagreement here is radical. Who should the philosopher side with? Or can these perspectives be reconciled?

In this dissertation, I argue that the core of the view of language as a system of communication is correct, and that while it can be reconciled with the psycho-biological view, that view faces serious challenges. We are better off abandoning it as a theory of what language is, and taking the capacity for linguistic communication as that which makes up language’s essence.

In chapter 1, “What is it to have a language?”, I defend communicationism: the view that to have a language is to know how to communicate with it. Because of the possibility of asocial creatures who partake in no conventions and yet nevertheless know how to linguistically communicate, I argue that language is not essentially social or conventional. I also argue that communicationism is compatible with the Chomskyan view that humans have languages in virtue of tacitly knowing their grammars. The reason for this is that the brain states that realize knowledge of grammar in humans do so because they enable us to know how to linguistically communicate. I also argue that even if human language can be treated as a biological object with a particular evolutionary function distinct from enabling communication, this is no reason to think that what it is for a human to have a language is distinct from knowing how to linguistically communicate. Compare: Suppose I said being musical just is knowing how to play a musical instrument. It is irrelevant to the truth of this view if the human biological capacity for musicality did not evolve because knowing how to play a musical instrument is adaptive.

In chapter 2, “Meaning, use, and know-how”, I offer further reasons to accept communicationism. I start with the thought that we know how to communicate by knowing how to use sentences in a particular rule-governed way in order to express our thoughts. I develop this into a defense of a Gricean use-based account of meaning, on which sentences have their meanings because we know how we use them to mean things. Views in this spirit have fallen out of favor in the face of many objections. But most of these exploit superficial problems with how these views have been formulated, targeting analyses of theoretical jargon used to state them precisely (i.e., definitions of ‘speaker-meaning’, ‘convention’, ‘mutual knowledge’, etc.), and thus turn on issues orthogonal to the theory of meaning. I lay down a byway past these issues by (i) jettisoning convention and mutual knowledge from meaning’s analysans—for linguistic meaning is not necessarily conventional or mutually known—and then by (ii) taking the notion of what a speaker means in making an utterance as an unanalyzed primitive, and with it formulating a more neutral generalized Gricean theory of meaning. And in an appendix that follows I explore how such a theory might extend to cover expression-meaning, not just sentence-meaning, a feat that more standard views have trouble accomplishing.

Next, in chapter 3, “Is meaning cognized?”, I explore the extent to which language use is enabled by unconscious representations of grammatical rules. In particular, I consider whether linguistic understanding is enabled by tacit knowledge of compositional semantics, or, in other words, by cognizing grammars with compositional semantic components. The main argument that it is—the argument from productivity/creativity and understandability—fails, I argue. Language comprehension and production can be explained without appeal to tacit knowledge of compositional semantics. Specifically, it can be explained entirely by appeal to our subpersonal capacity to ‘translate’ natural language sentences into the medium of thought. And there is good reason, I argue, to think that such an explanation is better than any in terms of tacit knowledge of semantics. I conclude that there does not seem to be any reason to believe that there is an unconscious mental phenomenon of meaning-computation. If so, we need not theorize about meaning in accordance with the psycho-biological conception of language, and seem free to reject it.

Finally, in chapter 4, “What might knowledge of grammar be?”, I survey different proposals about what it would be for a speaker to tacitly know a grammar for their language. Such tacit knowledge is supposed to do at least two things: (i) constitute the semantic and other linguistic facts about the tacit knower’s language, and (ii) enable, while being independent of, their capacities for linguistic performance. I review candidate metaphysical accounts of that in virtue of which a speaker might tacitly know a grammar, and argue that they are all inadequate. For no account allows tacit knowledge of grammar to do both (i) and (ii). Each account considered either makes tacit knowledge of grammar too dependent on prior facts about linguistic performance, such that it cannot do (ii), or makes it too dependent on prior semantic and other linguistic facts about speakers’ languages, such that it cannot do (i). I conclude that linguistic meaning should not be explained in terms of the lexica of tacitly known grammars, for it is plausible that there are no such things that could do this metasemantic job. For the theory of meaning, I recommend a turn away from the subpersonal and towards the personal.
Linguistic meaning, and language itself, should be understood in terms of the practical knowledge that manifests in intentional linguistic activity, rather than in terms of that which cognitive science claims to be enabling that activity.

Chapter 1

What is it to have a language?

1.1 Language and communication

What is it to have a language? A popular view treats language as the social, conventional use of signs for the purposes of communication. Against this backdrop, to have a particular language L is a matter of partaking in a social convention to use L as a system of communication. For many philosophers, this is ‘the received view’: as Dummett puts it, “it is essential to language that it is a common instrument of communication” (1981, p. 139). Elsewhere he writes that the “view that might claim to represent common sense is that the primary function of language is to be used as an instrument of communication” (1989, p. 192).

In apparent agreement, Evans states that if “one’s interest is in the phenomenon of language itself, one must be concerned with the way in which it functions as a means of communication among speakers of a community” (1982, p. 67). And Kripke also agrees; what it is to be “a normal speaker of [a] language,” he says, is a matter of participation “in the life of [a] community and in communication” (1982, p. 92). And so Lewis takes it to be platitudinous that language is “a form of rational, convention-governed human social activity” (1975, p. 166), “a sphere of human action, wherein people utter strings of vocal sounds, or inscribe strings of marks, and wherein people respond by thought or action to the sounds or marks which they observe to have been so produced” (p. 164). That “language is ruled by convention,” Lewis writes, is a platitude “that commands the immediate assent of any thoughtful person” (1969, pp. 1–2). Philosophical characterizations of language along these lines are quite commonplace.

This view seems at odds, however, with the psycho-biological view of language advanced by linguists and philosophers working in the Chomskyan tradition. On this picture, to have a language is to tacitly know its grammar by virtue of being in a certain brain state, and the human capacity for such tacit knowledge—and so the human capacity for language—evolved to subserve private thought, not public communication. In defending this view, it is argued that language is not socio-conventional communication.1 Indeed, Chomsky calls any view on which communication is anything but “peripheral” to language a “virtual dogma” with “no serious support” (Chomsky 2015, pp. 14–16).2 Defending this “dogma” is the aim of this chapter. I advance the following view:

Communicationism: To have a language L just is to know how to communicate with L.3

As Kamp and Reyle announce at the outset of From Discourse to Logic: “Languages are for communication. To know a language is to know how to communicate with it” (1993, p. 7). (I will remain officially neutral about the nature of know-how.4 But I make the minimal assumption that there is a distinction between know-how and ability; know-how manifests in ability, but one can know how to φ even if one is unable to φ.5 However, we will not be led astray by occasionally eliding this distinction.)

1See J. Collins 2008 (pp. 137–39), Hornstein 1984 (pp. 118–19, 150–51), Isac and Reiss 2008 (pp. 38–39, 72–75), Ludlow 1999 (pp. 17–26), Ludlow 2011 (pp. 44–47), and N. Smith and Allott 2016 (pp. 235–37). Others, in defending the view that language is essentially a matter of “communicative competence”—something essentially interpersonal and richer than tacit grammar-knowing—have on that basis questioned the psycho-biological conception; see Habermas 1970 and Hymes 1972, and Rickheit, Strohner, and Vorwerg 2008 for a more recent discussion. The incompatibility of these two views is thus felt by both of their adherents. 2He argues that “language is not properly regarded as a system of communication”, and that communication is “of no unique significance” for understanding “the nature of language” (Chomsky 2002a, p. 76); and that empirical research based on a ‘language-as-communication’ model is on that account “misdirected” and “seriously misguided” (Berwick and Chomsky 2016, pp. 79, 84). 3Harman 1967 takes being ‘competent’ with L, in Chomsky’s sense (which I take to be equivalent to having L), to consist in “knowing how to speak and understand a language” (p. 75): “Competence is knowledge in the sense of knowing how to do something; it is ability” (p. 81). Dummett 1975, 1976 endorses a similar view, on which having a language is a “practical ability” consisting in “practical knowledge”: “Of course, what he has when he knows the language is practical knowledge, knowledge how to speak the language” (1976, p. 69). 4None of my arguments presuppose intellectualism about know-how—that knowing-how consists in knowing-that (see Stanley 2011; Stanley and T. Williamson 2001, 2017)—or its negation (see Noë 2005). See Fantl 2008 for an overview of the know-how debate. 5As Ziff vividly illustrates: “Imagine a scene in which a great pianist is seated at a piano, about to begin his performance.
A madman, with a Samurai sword, suddenly appears and chops off the pianist’s hands: can the pianist play the piano? No. Does he know how? Yes.” (1984, p. 71). See also Stanley and T. Williamson 2001 (p. 416).

I argue that the psycho-biological view of language is no threat to Communicationism, despite many claims to the contrary.6 Even if we have languages by tacitly knowing their grammars, and even if this is all realized by some brain state of ours, that brain state only realizes knowledge of grammar because it gives rise to knowledge of how to linguistically communicate. In other words, the brain-based faculty of language of, say, a human English-speaker bestows English as their language because it bestows on them knowledge of how to communicate with English.

Language is thus essentially connected to communication, a core commitment of the received view of language.7 Although many have found this connection so obvious as to not require defense, I defend it in section 1.2.8

But I reject the view that language essentially involves socio-conventional communication. Note that two ideas are at play in the received view: (1) that what it is to have a language should be understood in terms of linguistic communication, and (2) that what it is to linguistically communicate essentially involves partaking in certain conventions, or of standing in certain social relations. Rarely are (1) and (2) treated as independent theses. And it is easy to see why. The fact that someone has some particular language as opposed to another is naturally attributable to social or conventional forces. By appeal to the arbitrariness of the sign, it is arbitrary that the language of the French is one in which ‘chèvre’ means goat rather than ox. This looks like the work of convention; suppose it is. And then suppose that (1) is true:

6Chomsky argues in many places that, on the psycho-biological view, having a language is not a skill or wholly a matter of know-how (1968, pp. 25–6, 37–8, 190–91; 1980, pp. 101–2; 1988, pp. 9–12; 1997, pp. 12–15; 2000b, pp. 51–2). He has also argued that having a language cannot consist in having certain practical abilities (1980, pp. 110–22; 1984, pp. 11–13; 1986, pp. 9–13; 1988, pp. 9–12; 2000b, pp. 50–53). Although not all of these arguments presuppose the psycho-biological conception, it is clear that he takes its plausibility to render any practice-based view of language implausible. I consider the main argument he offers in these passages in section 1.2.4.

that English is our language somehow because of how we communicate. It is then natural to think that the way that the forces of convention arbitrate our language is by arbitrating how we linguistically communicate (i.e. by (2)’s being true). Moreover, if (2) is true, then it is natural to view the conventionality of how we communicate as explaining the conventionality of language, by endorsing (1). So (1) and (2) seem to pair nicely if we see language itself as socio-conventional.

But language is not social or conventional. Someone may have a language in the absence of the social and the conventional. So Communicationism rightly incorporates (1) but not (2).

The remainder of this chapter is in two parts. In the first part, I argue in favor of Communicationism and against the socio-conventionality of language. Along the way, I explain how Communicationism allows for people with severe linguistic impairments to have languages, even if they only know how to use sentences in inner speech; and I also argue that the possibility of someone’s having a ‘language of thought’ without knowing how to communicate with it is no counterexample to my view. I then reply to an objection to Communicationism from Chomsky that a language can be shared by those whose linguistic abilities wildly vary; to prefigure, this is no problem because know-how can remain constant while ability varies.

In the second part of this chapter, I argue that Communicationism is not in tension with the psycho-biological conception of language; it is compatible with the view that humans have languages by being in brain states that realize tacit knowledge of their grammars. And Communicationism would not be rendered implausible if it were to turn out that language’s evolutionary function has nothing to do with communication.

This chapter does not provide a decisive argument for Communicationism.9 Rather, my more modest aim is to motivate it as a plausible account of what it is to have a language that is not straightforwardly inconsistent with the place of language in the scientific image.

9I present arguments for Communicationism in chapter 2, and defend it against a series of objections.

1.2 Knowing how to communicate with a language

Communicationism accounts for language-having in terms of linguistic communication. What is linguistic communication? In the paradigm case, it involves productive speech and comprehending audition. On the production side, linguistic communication involves saying (or otherwise tokening) a sentence and in doing so meaning (i.e. speaker-meaning) something. And on the comprehension side, receiving a communication involves discerning that someone meant something in saying something. So, roughly, Communicationism says that having a language consists in knowing how to mean things by saying sentences and how to interpret others as doing so. But what is it to communicate with a particular language L?

1.2.1 Communicating with a language

Here is a proposal. To keep things simple, treat a language L as a function from sentences to propositions.10 We can then say that to communicate with L is to systematically mean L(S) in saying a sentence S of L, or to systematically discern that someone meant L(S) in saying S. The systematicity requirement is important here, because it should be no coincidence that linguistic communicators mean and discern meaning in this way. As such, linguistic communicative activity involves systematic, non-accidental, non-deviant, genuine rule-following.11 One way to make this precise is as follows:

Linguistic Communication: For any speaker x, language L, and linguistic act Φ,12 x communicates with L in Φing

just if in Φing, x follows the rule SpeakL or the rule ListenL.

10For ease of exposition, I treat propositions as both sentence-meanings and the objects of acts of speaker-meaning, and ignore non-declarative moods, context-sensitivity, sub-sentential expressions, and other complications. See 2.5.4 for an attempt to state the account of communication sketched here while treating sentence-meanings more like Kaplanian characters (Kaplan 1989b, pp. 505–6). 11Does this allow for non-human animal communication to count as linguistic? I take no strong view. It depends on whether non-human animals should be thought of as speaker-meaning things by saying sentences, a largely empirical question. (Or it may be merely a book-keeping issue.) See Green 2018 for a relevant recent discussion. 12The quantification over linguistic acts (including productive speech acts and receptive acts of comprehension) reflects the fact that actions like communicating with English are necessarily non-basic; they are always performed by or in performing more specific linguistic actions.

SpeakL: Say S of L only if you thereby mean L(S)!

ListenL: If someone says S of L, interpret them as meaning L(S)!

But no account of linguistic communication this specific will be presupposed in what follows.13 All that will be presupposed is that it essentially involves systematic comprehension of what speakers mean, or, systematic production through acts of meaning.

Note that communicating with a language L is disjunctive. It requires either systematically comprehending sentences of L or systematically and meaningfully producing sentences of L. This is because, on any plausible implementation of Communicationism, knowing how to linguistically communicate should be thought of disjunctively—it is to know how to comprehend or produce sentences—for there are clearly language-havers who know how to do one but not the other. A person with aphasia might have severely impaired production but relatively unimpaired comprehension, or vice versa, depending on where their impairment is neurologically localized.14 And paralysis might severely impair or entirely disable linguistic production while leaving comprehension intact, and deafness and blindness might do the same for comprehension while leaving production intact. In either case, one might retain one’s language either on the basis of knowing how to systematically comprehend sentences, or on the basis of knowing how to systematically produce them.15

What about someone whose sole linguistic capacity is knowing how to ‘think in English’? Such a person might not know how to linguistically communicate outwardly, perhaps due to

13Those convinced of Lewis’s 1975 account of linguistic communication might replace Speak and Listen with the following:

TruthfulnessL: Say S of L only if you believe L(S)!

TrustL: If someone says S of L, believe L(S)!

But I think this would be a mistake. There is clearly a sense in which we can communicate without convincing each other of anything (and without trying to). Sometimes communication succeeds just because we manage to express ourselves or understand each other. And sometimes that is all we try to accomplish by talking. See 2.2.2 for more on this. 14See D. G. Clark and Cummings 2003 for a discussion of the variety of forms of aphasia. 15There is no doubt a sense of ‘knows how to linguistically communicate’ on which it picks out only productive know-how, or, on which those who know how to follow ListenL but not SpeakL do not know how to ‘communicate’ with L. But I think there is a more general and more theoretically interesting notion of communicative know-how that amounts to, roughly, knowing how to partake in communication. Linguistic communication is two-way.

going blind, deaf, and mute at a young age, but might still know how to talk to themselves with English sentences silently in inner speech. Could they have English as their language on this basis, even though they might not know how to linguistically communicate? This case requires special treatment.

1.2.2 Inner speech

I think Communicationism can accommodate cases of even the most severely impaired language-havers. If someone only knows how to talk to themselves by producing sentences, then, I argue, they have actually retained both productive know-how and comprehending know-how. They have just been restricted to manifesting this know-how in self-directed communication. So Communicationism counts them as language-havers.

Why think inner speech is a form of genuine linguistic communication? Well, in outer speech directed at one’s present or future self, one systematically produces sentences via verbal utterance and means things by doing so, and one systematically comprehends sentences as one interprets one’s self as meaning things upon hearing the sound of one’s own voice. Talking to yourself out loud thus perfectly well counts as linguistic communication.

We should say the same thing about inner speech, which is a form of speech.16 The most salient difference between the outer case and the inner case is the medium in which they occur. Inner speech equally involves meaning things by producing sentences, but via "silent soliloquy" (Ryle 1949, p. 27).17 For this not to qualify as linguistic communication, it would have to be that the medium of linguistic communication must be verbal or otherwise external to

16 The view that inner speech is 'real speech' is defended, persuasively I think, in Gregory 2016, 2017, 2018. (See Vicente and Manrique 2011 for an overview of other positions.) But even if inner speech is merely imagined speech—or even if it involves a hallucinatory, false representation that one has performed an act of inner speech (as Byrne 2018 thinks (pp. 199–202))—I say it is still a form of speech. By 'speech' I mean any act of saying a sentence; saying 'Goats eat cans' always counts as speech, and so saying 'Goats eat cans' to yourself counts as speech. If inner speech does not involve the production of 'non-physical' sentence-tokens (as Gregory 2017 seems to think it does (pp. 54–56)), then the correct verdict is that not all speech involves the production of sentence-tokens. Perhaps we can say 'Goats eat cans' by imagining ourselves verbally uttering a token of 'Goats eat cans', or by intentionally inducing in ourselves a hallucination that we internally uttered such a token.

17 Imagine: You say to yourself silently 'Hmm, it's raining cats and dogs'. Someone asks you 'What are you thinking?'. If you answer 'I was thinking it's raining cats and dogs', they could then ask 'What did you mean by that?', i.e. what did you mean by saying that to yourself. You could then answer 'I meant that it's raining a lot'. I thus disagree with Davis 1992, who denies that we mean or express things when we talk to ourselves in inner speech (pp. 229–230).

the mind. To see that it need not be, consider telepathic communication. If linguistic communication between two can proceed by one's telepathically projecting a sentence into the other's mind,18 then linguistic communication can proceed within one by one's non-telepathically projecting a sentence into one's own mind.

Chomsky (1975b) worries that if we adopt a "concept of 'communication' as including communication with oneself, that is, thinking in words" in inner speech, then the view of language "as 'essentially' a means of communication" fortuitously "collapses" into his opposing view: that language "is 'essentially' a system for expression of thought" (p. 57), i.e. a system for internally expressing our thoughts.19 But his worry is ill-founded. It is not true that if linguistic communication includes meaning things to ourselves in inner speech, then Communicationism collapses into Chomsky's view, which we might call 'Thoughtism': that to have a language is to internally express one's thoughts in that language. Communicationism accommodates the possibility of language-havers who are incapable of inner speech, but can communicate with a language in outer or overt speech. Thoughtism does not allow for these cases, the possibility of which cannot be plausibly denied; there are documented cases of people with aphasia whose outer linguistic production is unimpaired but whose inner speech is severely impaired, and of people who lack the capacity for inner speech while possessing the capacity for outer linguistic comprehension.20 So Thoughtism and Communicationism are not equivalent given my account of inner speech as a form of linguistic communication. There is no threat of collapse.

1.2.3 The ‘language’ of thought

Could there be an even more severely impaired language-haver, lacking outward comprehension and production, who also lacked the knowledge of how to communicate with themselves

18 As it proceeds between the mutated humans in John Wyndham's The Chrysalids.

19 Here Chomsky is responding to Searle 1972, who defends the view that the "purpose of language is communication" while claiming that we "communicate primarily with other people, but also with ourselves, as when we talk or think in words to ourselves".

20 These cases, as well as the potential dissociation of overt and inner speech more generally, are discussed in Geva, S. Bennett, et al. 2011, Langland-Hassan et al. 2015, and Stark, Geva, and Warburton 2017. Levine, Calvanio, and Popovics 1982 discuss a case of a subject with hemiparesis who became mute and suffered a "complete loss of inner speech" while retaining spoken comprehension, written comprehension, and written production, thereby retaining knowledge of how to linguistically communicate (p. 391).

in inner speech? Perhaps such a speaker might have a language of thought employed not in inner speech but in unconscious computation. If so, would they not have the language of thought or 'Mentalese' as their language?

I think not. Mentalese, as standardly conceived, is merely analogous to a natural language. It is a ‘(natural) language’ only in an extended or metaphorical sense of the term. We do not understand or comprehend sentences of Mentalese, at least not in the sense that those who have languages ‘understand’ their expressions.21 Moreover, linguistic expressions admit of lexical ambiguity, whereas Mentalese expressions are normally thought not to; “thought needs to be ambiguity-free” (Fodor 1998, p. 64).22 Fodor himself contrasts Mentalese with “real languages”, i.e. “natural languages” or “languages properly so-called” (1975, p. 31).

Communicationism is then to be read as a thesis about what it is to have a language in the way we have real, natural languages; it is not about what it is to ‘have’ Mentalese or some other machine code. After all, many systems ‘have’ machine codes in the way that those with brains ‘have’ the brain-specific machine code of Mentalese, at least according to Fodor and his followers. But we do not think that these systems, such as microwaves and iMacs, have languages.

But even if Mentalese is a natural language, this opens up no counterexample to Communicationism.23

Having L as one’s language of thought does not suffice for having L as one’s language, even when L is a language like German. A calculator or laptop might have German as its machine code without thereby having German as its language. And it makes no difference if the German sentence tokens that make up a system’s machine code realize that system’s mental states. To see this, imagine someone who acquired German as their first language in such a way that German became their language of thought. Their belief that goats eat cans, let us suppose, is

21 On this point, see Rescorla 2019, section 6.2, on the distinction between Mentalese and natural languages. And see the introduction for the few crucial points of analogy between Mentalese and natural languages.

22 Here is Fodor elsewhere: "That there are ambiguities in English is, indeed, the classic reason for claiming that we don't think in English" (2008, p. 73). He is more open, however, to the claim that we think in "some ambiguity-free regimentation of English", sentences of which are "formulas in what Chomsky calls 'LF' (roughly, Logical Form)" (1998, p. 65). See also Fodor 2008 (p. 58, fn. 15).

23 See chapter 3, fn. 10, for more discussion of this view.

realized by a neural token of 'Ziegen fressen Dosen' somewhere in their brain that bears the content that goats eat cans. Now, there are, at least in principle, ways in which this person could sustain a traumatic brain injury which would result in the complete loss of German as their language, but which would not result in the loss of German as their language of thought. They might lose German as their language and yet still believe that goats eat cans by virtue of 'Ziegen fressen Dosen' being written in their brain.24 So having a natural language L as one's language of thought does not suffice for having L as one's language.

1.2.4 Variation in speakers’ linguistic abilities

I have argued that Communicationism can account for the possibility of speakers with very different linguistic capacities sharing a language. This puts to rest the objection to Communicationism from Chomsky (1992) that having a language cannot be a matter of having practical knowledge because people with very different linguistic abilities can nevertheless share a language. He argues that "knowledge of language"—which I take to be equivalent to having a language—cannot consist in "an ability that can be exercised by speaking, understanding, reading, talking to oneself, and so on" for the following reason: one's linguistic abilities can vary, due to "injury or disease", while one's knowledge of language "remains constant":

suppose that Jones, a speaker of some variety of what we call "English" in informal usage, improves his ability to speak his language by taking a public-speaking course, or loses his ability because of an injury or disease, then recovering that ability, say, with a drug. [...] In all such cases, something remains constant, some property K, while ability to speak, understand, and so on, varies. In ordinary usage, we say that K is knowledge of language; thus Jones's knowledge remained constant while his ability to put his knowledge to use improved, declined, recovered, and so on. (pp. 103–4)

This is no objection to Communicationism as I formulate it, for knowledge of how to communicate with a language can be fully had by speakers who are better or worse at manifesting it (or who lack the ability to linguistically communicate entirely, if ability and know-how are

24 To deny this, one would have to claim, implausibly, that losing one's language must amount to a loss of belief. At least, that is so if one maintains that beliefs are realized by sentences of the believer's language inside their head.

distinguished).25

Chomsky claims that this reply—claiming that “property K” just is the property of having some ability or piece of practical know-how, shared by the impaired and the unimpaired—can only be made if one “departs radically from [the] ordinary usage” of ‘ability’, “[contriving] a new technical sense of the term ‘ability’: call it K-ability”, and meaning that by ‘ability’, something that is “completely divorced from ability” (pp. 103–4).

I think this is mistaken. First, even if talk of language consisting in a ‘practical ability’ needs to be carefully interpreted as talk of language consisting in know-how, this is no contrivance; know-how is surely not “divorced” from ability, and ‘know-how’ is not a technical term of art.

Second, the fact that a property F can be had by two people, one of whose abilities are vastly different, improved versions of the other’s, does not entail that F does not consist in having practical knowledge, and does not entail that F could only involve having ‘an ability’ in some technical sense of ‘ability’. Compare: a young child and a grandmaster might equally know how to play chess, which might manifest in a shared ability to play chess, even though the grandmaster’s chess-related abilities far surpass the child’s.26

1.2.5 Non-social, non-conventional linguistic communication

As I pointed out above, Communicationism is a view that, when combined with the view that linguistic communication is necessarily a social or conventional affair, results in the received view of language as socio-conventional communication.

This picture is most rigorously defended by Lewis, who argues that a population P has a language L just in case in P there is a convention to be ‘truthful and trusting’ in L that is sustained by P’s shared interest in communicating, where to be truthful and trusting in L just

25 Chomsky might grant this point. For although he firmly denies that language is an ability, he has sometimes expressed sympathy for the idea that having a language might have something to do with know-how, so long as know-how is sharply distinguished from ability and understood cognitively, as having a "crucial intellectual component" (1980, p. 55; see also 1975a, pp. 316–18; 1975b, pp. 165, 223; 2000b, pp. 169–70). But he nowhere expresses sympathy for Communicationism as I state it; in Chomsky 1997, he clarifies his position as one which is inconsistent with it, on which language "yields" productive and comprehending linguistic know-how, which "of course does not exhaust" it (p. 12).

26 See Devitt 2011 (pp. 324–26) for further discussion of Chomsky's arguments against identifying knowing a language with practical knowledge. And see Pereplyotchik 2017 (pp. 153–80) for a detailed discussion of Devitt's view that knowing a language consists in having certain skills grounded in "embodied procedural knowledge" of grammar. As far as I can tell, my view is compatible and convergent with Devitt's.

is to "try never to utter any sentences of L that are not true in L", and to "tend to respond to another's utterance of any sentence of L by coming to believe that the uttered sentence is true", or, that the proposition the uttered sentence expresses in L is true (Lewis 1975, p. 167).

This view actually incorporates Communicationism, at least in Lewis's eyes, although he nowhere spells this out. Here is why. Populations have languages, he says, because certain linguistic conventions are sustained in those populations by their shared interest in communicating. Crucially, for Lewis, there is some Φ such that populations have languages in virtue of their Φing by convention, and such that linguistic communication is achieved by conventional Φing.27 Call these communicative conventions 'conventions to TT' (i.e. to be truthful and trusting in a language for the sake of communication). Suppose, as Lewis thinks, linguistic communication is achieved by conventional TTing, and that one has a language just in case one conventionally TTs. Having a language thus requires knowing how to conventionally TT, and so suffices for knowing how to linguistically communicate. Also, knowing how to linguistically communicate thus requires knowing how to conventionally TT, which requires immersion in a population which conventionally TTs, which suffices (for Lewis) for having a language. So, for Lewis, knowing how to communicate with a language is necessary and sufficient for having that language.

So Lewis endorses my view. But I reject his. Language and linguistic communication are not necessarily socio-conventional. Knowledge of how to systematically mean things by speaking, and of how to systematically interpret others as meaning things by what they say, can be had by the asocial who partake in no conventions. Consider the “pure Robinson Crusoe case” discussed by Davidson, “a Robinson Crusoe who has never been in communication with others” living an asocial life of isolation (1992, p. 115). Such a Crusoe could know how to systematically talk and mean things to himself and so could have a language.

With a deep commitment to the view that language is essentially social (see Davidson 1984a), Davidson boldly takes the view that the pure Crusoe case is metaphysically impossible (1992, p. 115).28 But this is surely not the right verdict. A pure Crusoe case is at best a present technological impossibility. Future technologies might very well allow us to run scientific experiments in which human subjects, secretly monitored from afar by experimenters (with whom they do not socially relate), live their whole lives in pure Crusoe situations. They might be 'raised' by mindless machines, and acquire language through interaction with them. Engineering effort toward this end would not be thwarted by the essence of language. Lewis considers a Crusoe-style objection to his conventional account of language:

27 For Lewis, linguistic communication is made possible by conventions of truthfulness and trust. This is made clear in chapters 4 and 5 of Lewis 1969; for later restatements, see Lewis 1980 (p. 80), 1986 (p. 40), 1997 (p. 350).

Objection: A man isolated all his life from others might begin—through genius or a miracle—to use language, say to keep a diary. [...] In this case, at least, there would be no convention involved. (1975, pp. 181–82)

His reply is worth considering in full:

Reply: Taking the definition literally, there would be no convention. But there would be something very similar. The isolated man conforms to a certain regularity at many different times. He knows at each of these times that he has conformed to that regularity in the past, and he has an interest in uniformity over time, so he continues to conform to that regularity instead of to any of various alternative regularities that would have done about as well if he had started out using them. He knows at all times that this is so, knows that he knows at all times that this is so, and so on. We might think of the situation as one in which a convention prevails in the population of different time-slices of the same man. (1975, p. 182)

Here, Lewis relies on his account of conventions as collective, reasonable, arbitrary, commonly known regularities.29 He argues that Crusoe's time-slices collectively partake in such a regularity of communicating with a language (i.e. of being truthful and trusting in it), and thereby collectively do so by convention. So, Lewis concludes, the pure Crusoe case involves convention after all.

Set aside that it is dubious that all collective, reasonable, arbitrary, commonly known regularities are conventions.30 Even granting this, Lewis's reply fails. It is not essential to the pure Crusoe case that Crusoe's time-slices partake in a collective, reasonable, arbitrary, commonly known regularity of truthfulness and trust in a language. Does Crusoe's having a language require that he knows that he has conformed to certain regularities of language use in the past? Plausibly not; Crusoe might be an amnesiac with severe memory disorders. Does Crusoe's having a language require that he has an "interest" in uniform language use over time? Plausibly not; Crusoe's primary interest might be survival, and his island might be inhabited by predators with an appetite for those who speak uniformly. Does Crusoe's having a language require that his language use exhibits conformity across time? Plausibly not; Crusoe might be a linguistic innovator, rapidly evolving through multiple different languages per day. For these reasons, I find Lewis's reply unconvincing.31

28 Here Davidson is responding to a discussion in Chomsky 1986 (pp. 230–34, 240–41) in which the possibility of such a case is pressed against the Wittgensteinian social theory of rule-following advanced in part 3 of Kripke 1982 (pp. 55–113; see p. 110, and fn. 84 in particular, for Kripke's discussion of Crusoe cases).

29 See Lewis 1969 (pp. 52–82).

30 Every part of Lewis's definition of conventions is contested. For an influential critique, see Gilbert 1989 (pp. 315–407). See Rescorla 2015 (sec. 4) for an overview.

And there are other counterexamples to language's conventionality. Consider that a laboratory-fabricated creature could be biologically hard-wired to know how to linguistically communicate with a language, and thereby have it, entirely free of convention. Lewis considers this apparent counterexample to his view, the possibility of "creatures of instinct who are unable to use any language other than the one that is built into them" (1969, p. 195). His response is that such cases are so "bizarre" and "peculiar", so "different from language use as we know it", that we "will not want to classify them as clear cases under ordinary usage" of the word 'language'. Still, it is clear that such a creature could have a language, which is compatible with its being unclear that uttering 'Such a creature could have a language' counts as ordinary usage of 'language'. Indeed, it is biologically possible that our distant descendants lose the capacity to acquire any language other than English, perhaps due to genetic engineering, linguistic imperialism, or cyberization. It would be incorrect to describe such a future event as the death of language.

31 Additionally, the pure Crusoe case has been put forward as a counterexample to the claim that, necessarily, for any x, x has a language only if x partakes in some convention. Does Lewis's reply dispute that this generalization has been counterexemplified? It seems not. He does not deny that Crusoe himself partakes in no conventions. Even if Crusoe's time-slices do, that is neither here nor there.

1.3 The psycho-biology of language

Communicationism is compatible, I think, with the conception of language as a psychological and biological phenomenon. To argue for this, I will first lay out that conception precisely.

1.3.1 Cognitivism and Neurobiologicalism

Cognitivism is the view that tacitly knowing or 'cognizing' a grammar is necessary and sufficient for having a language. What is a grammar? George (1989) helpfully distinguishes grammars, psychogrammars, and physiogrammars. For our purposes, a grammar G for a language L is an abstract object modelable as a theory T—a set of axioms and rules—such that, for any expression e and meaning m, it is a theorem of T that e means m just in case e means m in L. So there can be distinct grammars for a single language.32 A psychogrammar is something like a state of cognizing a grammar. And a physiogrammar is a neurophysiological state underlying or realizing a psychogrammar, assuming there are such states. With this terminology in hand, we can say that Cognitivism is the view that someone has a language L just if they possess a psychogrammar for L.33
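The idea of a grammar as a finite theory whose theorems pair expressions with meanings can be sketched computationally. The following toy fragment is purely illustrative: the lexicon, the single composition rule, and the 'meaning' notation are invented for this sketch, not drawn from any grammar discussed in the text.

```python
# A toy 'grammar as theory T': finitely many axioms (the lexicon) and
# one rule, whose theorems pair expressions with meanings.

LEXICON = {            # axioms: word -> meaning
    "goats": "GOATS",
    "cans": "CANS",
    "eat": "EAT",
}

def meaning(expr):
    """Derive the meaning of an expression, if the theory proves one.

    Rule: the meaning of [NP V NP] is the verb's meaning applied to
    the meanings of its two arguments.
    """
    words = expr.split()
    if len(words) == 1:
        return LEXICON.get(words[0])
    if len(words) == 3:
        subj, verb, obj = (meaning(w) for w in words)
        if None not in (subj, verb, obj):
            return f"{verb}({subj}, {obj})"
    return None  # no theorem of the form 'expr means m'

print(meaning("goats eat cans"))   # prints EAT(GOATS, CANS)
```

Distinct axiom sets could prove exactly the same meaning-theorems, which is the sense in which there can be distinct grammars for a single language.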

Some clarifications are in order. In order to allay worries about aliens, robots, and cyborgs who might have languages without cognizing grammars, Cognitivism should be read as a thesis restricted to cases of humans having complex humanly possible natural languages like English. This is clear if we examine the most celebrated argument for Cognitivism,34 the argument from ‘productivity’ (or ‘creativity’) and understandability.35

32 For more detail, see Partee, Ter Meulen, and Wall 1993 (pp. 431–452).

33 This is the view defended in Chomsky 1980. In discussion of the question of what it is for someone's language to be L, his answer is that they have "a grammar determining L in [their] mind/brain" (p. 84). By 'a grammar' being 'in his mind/brain', he means that they cognize a grammar for L (pp. 69–70). A fuller, influential defense of Cognitivism is given in Chomsky 1986 (pp. 15–46). The label 'Cognitivism' is from K. Johnson and Lepore 2004, who call it "the received view in linguistics" (p. 709), and survey its mixed philosophical reception (pp. 708–714). More recent defenders include Ludlow 2011 (pp. 44–63) and Yalcin 2014 (pp. 36–39). Implicit endorsement of Cognitivism is found wherever it is argued that linguistic meaning is grounded in grammar-cognizing, i.e. wherever it is argued that 'the actual language relation' (of Schiffer 1993) is the 'cognizes a grammar for' relation. Something like this view is entertained (or put forward for serious consideration) by Loar 1976 (pp. 160–61), Loar 1981 (pp. 257–60), Larson and Segal 1995 (pp. 22–24, 126), Schiffer 1987 (pp. 253–55), Schiffer 1993 (pp. 242–44), Schiffer 2006 (p. 286), Schiffer 2015, Laurence 1996 (p. 284), and B. C. Smith 2008 (pp. 963–65).

34 Or rather, for half of it. (P1)–(P3) alone fall short of an argument for Cognitivism; they only entail that cognizing a grammar for L is necessary for having L. To close the gap, one must assume that cognizing a grammar for L is sufficient for having L. But it is unclear why we should assume this.

35 See Chomsky 2016 (pp. 14–16) for a quick recent statement. In philosophical contexts, these arguments are usually given as arguments for compositionality; see Pagin and Westerståhl 2011 (pp. 107–10) and Szabó 2017 (section 3).

Roughly, it involves the following three premises: Suppose someone h has a language L.

(P1) Then they have 'infinite competence': there are infinitely many sentences of L they can understand, parse, and pronounce (at least ideally or in principle).

(P2) If (P1), then something finite in them can 'generate' infinitely many understandings, parsings, and pronouncings of L-sentences (i.e. something in them must encode finite information on the basis of which one could 'deduce' infinitely many L-sentences' semantic, syntactic, and phonological forms).

(P3) If something finite in them can 'generate' infinitely many understandings, parsings, and pronouncings of L-sentences, then they must cognize a grammar for L.

(P1) need not be true on the supposition that h has L, unless 'L' ranges only over infinitary natural languages like English, for having a finite language does not require infinite competence. And arguably (P2) is false unless 'h' only ranges over humans, for a possible non-human creature might have infinite linguistic competence thanks to their infinite memory bank. And as for (P3), it is extremely controversial that cognizing a grammar for L is the only possible way for a finite being to have the cognitive capacity to understand, parse, and pronounce any of an infinity of sentences.36 But it is less controversial, and widely thought to be a scientific discovery, that that is the only way humans can do it. So, (P3) should be restricted to humans, for it seems best motivated by the empirical hypothesis that, as a matter of nomological necessity, humans can only possess infinite competence by cognizing grammars. Indeed, many of Cognitivism's defenders intend to be read as making a claim about human language, not language itself.37 A precise statement of Cognitivism, then, would be:

36 See Schiffer 1987 (pp. 179–210) for an argument that cognizing a grammar containing a compositional semantics for L is not necessary for infinite semantic competence. Matthews 2003 points out that Schiffer's argument can be re-run to show that cognizing a syntax for L is not necessary for infinite syntactic competence. And I think it can also be re-run to argue that infinite phonological competence does not require cognizing a grammar with a phonological component.

37 Chomsky intends to be read everywhere as meaning human language by 'language': "By 'language' I mean 'human language'." (Chomsky 1994, p. 155). See also Chomsky 2000a (p. 19) and Fodor 1981 (pp. 206–7, fn. 2).
This restriction potentially deflects certain objections to Cognitivism based on the multiple realizability of language in non-human non-grammar-cognizers, such as those made by Lewis 1975 (p. 22), Dummett 1976 (p. 37), Katz 1981 (pp. 89–90), Soames 1984 (p. 171), Devitt and Sterelny 1989 (p. 514), and Hanna 2006 (p. 50). Replies to these objections (and related ones) amounting to apparent denials that language is multiply

Cognitivism: It is nomologically necessary that, for any human h and natural language L, h has L just if h cognizes a grammar for L.38
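The generative idea behind (P2) can be made vivid with a small sketch. The two-rule grammar below is my own invention for illustration; the point is only that a finite rule set suffices to recognize an unbounded set of sentences.

```python
# Sketch of premise (P2): finitely many rules can 'generate' infinitely
# many sentences. Toy grammar (invented for this illustration):
#
#   S -> "goats eat cans"
#   S -> S "and" S

def is_sentence(words):
    """Recognize the toy language using just the two rules above."""
    if words == ["goats", "eat", "cans"]:
        return True
    for i, w in enumerate(words):
        # try every way of splitting the string around an 'and'
        if w == "and" and is_sentence(words[:i]) and is_sentence(words[i + 1:]):
            return True
    return False

# Two finite rules handle sentences of arbitrary length:
s = "goats eat cans"
for _ in range(4):
    assert is_sentence(s.split())
    s += " and " + s   # each pass roughly doubles the sentence
```

A finite encoding like this is what the cognitivist claims must be in the speaker; the controversial step, (P3), is that a cognized grammar is the only way a human could encode it.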

Suppose that Cognitivism is true. It seems prima facie compatible with Communicationism. So what argument is there for their incompatibility? Because Communicationism is the conjunction of two conditionals, if it is incompatible with Cognitivism, then Cognitivism must entail that one of these conditionals is false. In other words, to argue for their incompatibility, one must find a path from Cognitivism to (A) or (B):

(A) Possibly, someone knows how to communicate with a language they do not have. (B) Possibly, someone has a language that they do not know how to communicate with.

Arguably, there is no path from Cognitivism to (A). If Cognitivism is true, it is not plausible that a human could know how to communicate with a natural language L, in all of its infinite complexity, by means other than cognizing a grammar for L. Someone without a psychogrammar might learn how to communicate with some finite fragment of L, but a psychogrammar for L seems required for humans to possess the infinite competence manifested in communicating with L (or at least the cognitivist should think so).39 And if a human must cognize a grammar for L in order to know how to communicate with L, they must also have L (as per Cognitivism). In short, a human case that establishes (A) is unlikely given Cognitivism.

Does Cognitivism entail (B)? Well, suppose, as many who endorse Cognitivism do, that Neurobiologicalism is also true, the view that psychogrammars are realized by neurophysiological properties of the brain:

realizable in such creatures are made by Chomsky 1980 (p. 111), Chomsky 1994 (pp. 163–64), Chomsky 2000b (p. 147), D'Agostino 1986 (pp. 34–36), Laurence 2003 (pp. 91–100), J. Collins 2008 (pp. 143–48), J. Collins 2009b (pp. 182–92), and J. Collins 2018 (pp. 175–78). These rebuffs make sense if they take 'language' in the mouth of the cognitivist as picking out human language, which they presumably take to be a natural kind. For problems with this proposal, however, see 4.3.6.3 below.

38 Compare the similar statement of Cognitivism in Fodor (1981): "It is nomologically necessary that the grammar of a language is internally represented by speakers/hearers of that language; up to dialectical variants, the grammar of a language is what its speakers/hearers have in common by virtue of which they are speakers/hearers of the same language" (p. 199). Fodor attributes this view to Chomsky and Katz 1974 and J. D. Fodor, J. A. Fodor, and Garrett 1975; it is objected to in Devitt and Sterelny 1989, and in Devitt 2006 under the name "the Representational Thesis" (p. 4).

39 Assuming, that is, that we should accept Cognitivism because psychogrammars are required for humans to possess infinite competence.

Neurobiologicalism: It is nomologically necessary that, for any human h and grammar G, if h cognizes G, then there is some neurophysiological property N such that (i) h has N and (ii) h's having N realizes h's cognizing G.

This is the view of Chomsky (1986), in which we are told that for a human to know a grammar for L is for their "mind/brain to be in a certain state; more narrowly, for the language faculty, one module of this system, to be in a certain state", and that it is the "task of the brain sciences" to "discover the mechanisms that are the physical realization of [this] state", or "what it is about [their] brain by virtue of which" they know L's grammar (p. 22).40

This implies much more than the no doubt plausible view that the brain is somehow constitutively and causally related to language-having such that brain science is bound to illuminate language in various ways.41 Rather, his view is that psychogrammars are always realized by underlying physiogrammars.42 This view seems to be a foundational assumption of the thriving research industry of 'biolinguistics'.43

If we take Neurobiologicalism on board, we can then argue convincingly for (B), the possibility of someone having a language that they do not know how to communicate with: Given Cognitivism and Neurobiologicalism, it seems that there are neurophysiological properties the having of which by a human nomologically entails having a language, but which might be had in the absence of knowledge of how to communicate with that language. Take, for instance, the neurophysiological property, N1, my having of which realizes my cognizing a grammar for my language. Presumably, N1 could be had by a human in our world who does not know how to

40Here Chomsky writes about “knowledge of language” being realized by the brain, but in a context in which he has already made clear that knowledge of language is constituted by knowledge of grammar (1986, pp. 3–4). 41Consider how Chomsky regiments his account, analyzing what it is for a speaker h to have a language L as h’s standing in relation R to L: h has language L just if R(h, L) (R just is ‘the actual language relation’; see Schiffer 1993), and then reiterating (emphases mine): “one task of the brain sciences will be to explain what it is about h’s brain (in particular, its language faculty) that corresponds to h’s knowing L, that is, by virtue of which R(h, L) holds and the statement that R(h, L) is true” (1986, p. 22). His ‘R(h, L)’ is said to be “about structures of the brain formulated at a certain level of abstraction from mechanisms” (p. 23). 42On this point, see Ludlow 2011 (pp. 46–47), McGilvray 1998 (pp. 240–46), Chomsky 2003a, and Appendix I of Chomsky 2012. 43For an opinionated overview, revealing the influence of Chomsky’s work on this research, see Jenkins 2000, 2013. (But on the opinionatedness of the former, see Bickerton 2001.) See also any of the articles published in Biolinguistics (https://www.biolinguistics.eu/index.php/biolinguistics/). For an overview of this research, see the contributions to Boeckx and Grohmann 2013, especially McGilvray 2013; see also the very helpful Martins and Boeckx 2016. Lenneberg 1967 and Chomsky 1976 are loci classici.

communicate with a language; perhaps even by me! One might hold fixed the N1-instantiating regions of my brain, but remove my communicative know-how by disabling the other regions and organs which I require for inner and outer production and comprehension.44 Afterward, I

would still have N1, and so must cognize a grammar for my language—assuming that realizers nomologically suffice for what they realize—and so must have my language. That is, so long as we assume the following about the realization relation appealed to in Neurobiologicalism:

Realizers Suffice: If X realizes Y, then X is nomologically sufficient for Y.

If this is true, then it seems that, given Cognitivism and Neurobiologicalism, one can have a language without knowing how to communicate with it.

It looks, then, like the conjunction of Cognitivism and Neurobiologicalism is inconsistent with Communicationism because they entail (B).45 If their conjunction makes up part of the psycho-biological conception of language, then I must argue against this apparent inconsistency. I will do so next by arguing against Realizers Suffice. There is a plausible account of how psychogrammars are related to their underlying brain states which vindicates Neurobiologicalism while abandoning Realizers Suffice.

1.3.2 How psychogrammars are realized

1.3.2.1 Functional realization

Linguists’ ascriptions of psychogrammars to humans are made at the ‘computational level of description’.46 The claim that a human cognizes a grammar is like David Marr’s claim that

44If this is not nomologically possible, then the neurophysiological realizers of psychogrammars for L nomologically suffice for knowing how to communicate with L, and so (B) is false given Cognitivism and Neurobiologicalism. If so, Cognitivism and Communicationism are plausibly compatible. 45Might Cognitivism together with Neurobiologicalism entail (A)? They entail that having a brain is nomologically necessary for a human to have English. If it could be argued that it is nomologically possible for humans to know how to communicate with English without a brain, then we could argue our way to (A). I think this could be well-argued; perhaps we will one day know how to communicate with English brainlessly with cyber-brains. But I take it that this suggests that Neurobiologicalism has been formulated too strongly; it requires a restriction not just to humans, but to normal humans. I pass over the hard question of whether this normalcy condition can be spelled out without trivializing Neurobiologicalism, i.e. without building it into normalcy that one’s psychogrammar is realized by one’s brain. 46This is suggested in Marr 1982 (pp. 28–29, 357), Egan 2003, Rey 2003 (pp. 120–23), Devitt 2006 (pp. 66–71), K. Johnson 2014 (pp. 52–53), Berwick and Chomsky 2016 (pp. 128–33), and M. Johnson 2017. See 4.3.4 for more discussion. Peacocke 1986, 1989 disagrees that ascriptions of psychogrammars are made at the computational level, but discussion of his alternative account would take us too far afield here; I offer arguments against Peacocke’s account in 4.3.5.

the visual system computes a certain mathematical function in detecting edges. The property of cognizing a grammar is thus a ‘computational property’. A computational property can be thought of as a special kind of functional property, where F is a functional property just if there is some functional role R such that F just is the property of having some property that plays R.47 A functional role R is any (second-order) property such that a property H’s playing (i.e. having) R entails that H is causally related to certain other properties; functional roles are causal roles. A computational property, then, is a functional property with a special kind of defining functional/causal role: a computational role, Rc, a second-order property such that a property H’s having Rc entails that H is causally related to properties the having of which by a system consists in that system’s tokening ‘syntactic objects’, i.e. structured ‘strings’ inner tokenings of which are thought to enable machines or brains to carry out computations. All of this is just to recommend the following way of thinking about what it is for a neurophysiological state to ‘realize’ a psychogrammar:

Psychogrammar Computational Functionalism (PCF): There is some computational role Rc such that: the property of cognizing a grammar for a natural language = the property of having some property that plays Rc.48

Call this computational role ‘the psychogrammar-role’. What is the psychogrammar-role? It is likely that we do not yet know. But we know that it will be specified, implicitly, by whatever psycholinguistic theory is the true one.49 (Assuming,

47For a fuller defense of this conception of computational states or properties, see 4.3.4.1 through 4.3.4.4, where I argue, in effect, that computational properties are equivalent to functional properties even on non-functionalist accounts of computational implementation. 48PCF, and the assumption that computational properties are functional properties, are inessential for my argument. If one thinks that computational implementation is distinct from functional realization and is perhaps instead a kind of structural isomorphism (Chalmers 1994, 2012), then one might instead adopt ‘Psychogrammar Structuralism’: the view that to cognize a grammar G is to be in a state whose ‘causal organization’ is isomorphic with G’s formal structure. If this is your cup of tea, read my talk of a property ‘playing the psychogrammar role’ as shorthand for talk of a state’s ‘having a causal organization isomorphic with a natural language grammar’s formal structure’. Also, see section 4.3.5 for an argument that Psychogrammar Structuralism entails PCF. 49How? Perhaps in roughly the way Lewis 1970b, 1972 thinks folk psychology implicitly specifies the functional roles of belief, desire, etc. See Rey 1997 (pp. 165–203) for an overview, and Loar 1981 (pp. 44–56) and Schiffer 1987 (pp. 19–47) for critical discussion and refinements.

that is, that such a theory will postulate that we cognize psychogrammars.) For this theory will specify exactly how our psychogrammars are causally related to other sub-personal and personal mental and behavioral occurrences.50 Now, if PCF and Neurobiologicalism are true, then we can say that, whenever a human cognizes a grammar for a natural language, this is because some neurophysiological property of theirs plays the psychogrammar-role; this is how psychogrammars are realized.51
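The prose definitions above can be compressed into second-order notation. This is my gloss on the text, treating ‘plays’ as an unanalyzed relation between a property and a role; it is not a formalism the cognitivist is independently committed to:

```latex
% F is the functional property defined by role R: to have F is to
% have some property that plays R.
F \;=\; \lambda x.\ \exists H\,[\,\mathrm{plays}(H, R) \wedge H(x)\,]

% PCF: cognizing a grammar is the functional property defined by the
% psychogrammar-role R_c (a computational role).
\mathrm{Cognizes} \;=\; \lambda x.\ \exists H\,[\,\mathrm{plays}(H, R_c) \wedge H(x)\,]

% N realizes h's psychogrammar just if h has N and N plays R_c.
\mathrm{realizes}(N, h) \;\leftrightarrow\; N(h) \wedge \mathrm{plays}(N, R_c)
```

On this rendering, Realizers Suffice amounts to the claim that plays(N, Rc) holds with nomological necessity whenever it holds at all, which is precisely what is denied below.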

1.3.2.2 Realizers do not suffice

If humans’ psychogrammars are realized by states of their brains playing the psychogrammar-role, then Realizers Suffice is true of this realization relation—X realizes Y only if X is nomologically sufficient for Y—only if the following is true:

(1) If a neurophysiological property N plays the psychogrammar-role, then it is nomologically necessary that it does so.

For if N plays the psychogrammar-role but it is nomologically possible that it does not, then it is nomologically possible for a human to have N while having no property that plays the psychogrammar-role, and so (given PCF) while not cognizing any grammar; and if that is nomologically possible, then Realizers Suffice is false. But (1) is plausibly false. Arguably, no neurophysiological properties that play the psychogrammar-role do so with nomological necessity. First I want to show that this is true on Chomsky’s view of psychogrammars, which I will take to reflect the orthodox psycho-biological conception. For Chomsky, the neurophysiological states that realize psychogrammars—i.e., physiogrammars, or what he calls ‘I-languages’52—play the psychogrammar-role by virtue of their “integration” with independent “performance systems” (emphasis mine):

50I am thus recommending an a posteriori ‘psychofunctionalist’ account of psychogrammars. This is the view Lycan 2003 pushes on Chomsky (p. 24, fn. 4), but which he curiously rejects in Chomsky 2003b. 51In Shoemaker’s 1981 terms, a psychogrammar’s “core realizer” will be some neurophysiological state, whereas its “total realizer” will be that state together with the fact that it plays the psychogrammar-role. 52Here I follow the clear exposition of J. Collins 2008, who writes that ‘I-language’ refers to “an aspect of the mind/brain that subserves linguistic competence” (p. 152); an I-language is “simply a state of the mind/brain, albeit abstractly described” (p. 220). Confusingly, Chomsky also uses ‘I-language’ to refer to a language L, qua abstract object, that is had by a speaker by virtue of their being some brain state, i.e. by virtue of their ‘possession’ of an I-language (see George 1989, p. 95).

The I-language is a (narrowly described) property of the brain, a relatively stable element of transitory states of the language faculty. [...] It is only by virtue of its integration into such performance systems that this brain state qualifies as a lan- guage [...] [i.e.] performance systems that play a role in articulation, interpretation, expression of beliefs and desires, referring, telling stories, and so on. (Chomsky 2000b, p. 27)

By ‘qualifies as a language’, I read ‘qualifies as a realizer of a psychogrammar for a language’. This implies that the physiogrammars that realize humans’ psychogrammars do not play the psychogrammar-role with nomological necessity. This is made clear by Chomsky’s additional claim that an “organism might, in principle, have the same I-language (brain state) as” someone in whom it underpins their language, “but embedded in performance systems that use it for locomotion” (p. 27).53

So human physiogrammars might not play the psychogrammar-role; it is nomologically possible for a physiogrammar underlying a psychogrammar to be possessed while not integrated with the right performance systems, in which case it would not play the psychogrammar-role. If this is correct, or at least correct according to the psycho-biological view of language, then (1) is false on that view, and so Realizers Suffice is likewise false of the psychogrammar realization relation.

But why think Chomsky is correct on this point? Why think physiogrammars must be “integrated” in order to play the psychogrammar-role? Recall the main argument for Cognitivism and for belief in psychogrammars: that they undergird our infinite competence with natural languages, enabling us to ‘generate’ infinitely many understandings, parsings, and pronouncings. If doing that is part of the job description or theoretical role of a psychogrammar, then it is part of the psychogrammar-role and is something that an integrated physiogrammar must do. In order for a physiogrammar to do that—to undergird our infinite competence—it must be causally networked with the performance systems for understanding (“interpretation”), pronouncing (“articulation”), and so on. If playing the psychogrammar-role did not require a physiogrammar to be integrated in this way, then a psychogrammar would be insufficient for

53For further discussion, see Burton-Roberts and Carr 1999 (pp. 386–89), Egan 2003 (pp. 90–92), J. Collins 2004 (pp. 507–13).

infinite competence, and we would lose our main reason to believe in them. So, if one disagrees with Chomsky about physiogrammar integration, then one risks undermining support for the psycho-biological conception itself.

1.3.2.3 Are integrated physiogrammars neurophysiological?

One might worry that even if Realizers Suffice is false of the psychogrammar realization relation, Communicationism is still threatened. The threat is that possession of an integrated physiogrammar might simply be a matter of having some more complicated neurophysiological property which is nomologically sufficient for cognizing a grammar. If so, then one might hold fixed the right brain regions of someone with an integrated physiogrammar while removing their practical knowledge of how to linguistically communicate, and Communicationism would be refuted as before. More precisely, the threat is of the following two claims being true:

(2) There is a neurophysiological property N such that having N is nomologically sufficient for having an integrated physiogrammar that plays the psychogrammar-role.

(3) If (2) is true, then it is nomologically possible to have a psychogrammar for a language L while not knowing how to communicate with L.

If (2)–(3) are true, then Communicationism is false. And if they are true on the psycho-biological conception, then Communicationism is inconsistent with it. I will not dispute (3). So I will argue that (2) is implausible and not forced on us by the psycho-biological conception. On that conception, removing someone’s knowledge of how to linguistically communicate plausibly nomologically suffices for disintegrating their physiogrammar, and so (2) is plausibly false. The reason is that physiogrammars realize psychogrammars only when and because they are integrated with performance systems for linguistic communication, given that the human systems for articulation, interpretation, expression, referring, and so on enable us to systematically mean things (i.e. express things) and interpret others as doing so. Hauser, Chomsky, and Fitch (2002), in their seminal paper “The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?”, concur. What I call a ‘physiogrammar’

or ‘I-language’, they call the “faculty of language” in the “narrow sense”, the ‘FLN’; what I call an ‘integrated physiogrammar’, they call the faculty of language in the “broad sense”, the ‘FLB’ (pp. 1570–71). The FLB consists of the FLN together with “functional components that underlie communication”; it thus “serves the function of communication with admirable effectiveness” (p. 1572); it is “a communication system” (p. 1574).

So, physiogrammars realize psychogrammars only when and because they are integrated in such a way that their hosts know how to communicate with their language.54 So having an integrated physiogrammar requires having performance systems which constitute the human capacity for linguistic communication.55

What this means is that if a neurophysiological property does not nomologically suffice for knowing how to communicate with L, then it cannot likewise suffice for having an integrated physiogrammar for L. So (2) is false, or is anyway not an essential component of the psycho-biological picture of language.

Moreover, there are good reasons to think (2) is false, period. There is an argument from a weak form of externalism about linguistic meaning that nothing neurophysiological ever nomologically suffices for having an integrated physiogrammar.56 Suppose a human h has L, and so has a psychogrammar and an integrated physiogrammar for L. Let N be the neurophysiological

54This explains why possessing a psychogrammar for L suffices for having L; it suffices for knowing how to communicate with L, which, given Communicationism, is equivalent to having L. As I noted in fn. 33 above, it is unclear how else the cognitivist could explain this aspect of their view. 55This is confirmed by Chomsky 2002b, who claims that possession of “the language faculty”—an integrated I-language—is sufficient for “thinking as we do in inner speech” (p. 148), which I argue suffices for knowing how to linguistically communicate. The following passage from Chomsky 1997 is also suggestive (emphases mine): “A person whose mind incorporates the language English (meaning, a particular I-language that falls within what is informally called ‘English’) knows how to speak and understand a variety of sentences, knows that certain sounds have certain meanings, and so on. These are typical cases of knowing-how and knowing-that [...] It seems entirely reasonable then to think of the language as a system that is internalized in the mind/brain, yielding specific cases of propositional knowledge and knowledge how to do so and so” (p. 12). 56Because many who endorse Cognitivism, including Chomsky, express semantic internalist sentiments, this argument may not be dialectically effective against an internalist cognitivist advocate of (2). To such an opponent, I would stress that the thesis of semantic externalism that I employ in what follows does not state what Chomsky 1995a takes issue with in objecting to semantic externalists, namely, that linguistic meanings are externalia, and that semantics must be a science specifying which words hook up to which externalia. Rather, I employ only the less controversial thesis that, as a matter of nomological necessity, humans must be causal-historically connected to their external environment in order for expressions to have meanings for them. Chomsky does not seem to dispute this claim. Indeed, as J.
Collins 2009a argues, semantic externalism is contentious for Chomskyans as a claim about methodology, i.e. as the claim that semantic inquiry “targets”, “presupposes”, or is “about” externalia (pp. 60, 63, 65). My argument is consistent with the denial of this claim.

property specifying h’s total brain state. If having N is nomologically sufficient for having an integrated physiogrammar, then it suffices for having L, and so suffices for linguistic expressions having meanings for h (assuming that one’s having a language entails that some linguistic expressions are meaningful for one). Now, our weak thesis of semantic externalism states that if expressions have meanings for h, then h must stand in some causal-historical relations to their external environment. But note that having N does not nomologically require standing in any such relations; N could be surgically bestowed upon a human who has spent their entire life lying in bed unconscious in causal isolation from the outside world, or had by an envatted brain. Thus, having N does not nomologically suffice for possessing an integrated physiogrammar. And the same goes for any neurophysiological property. Therefore, (2) is false.

There is thus no worry that an integrated physiogrammar might be preserved while knowledge of how to linguistically communicate is eliminated. And so there is no threat to Communicationism from (3).

For these reasons, I take it that Cognitivism and Neurobiologicalism can be plausibly and faithfully implemented without Realizers Suffice, and without having to reject Communicationism.

1.3.3 The evolution of language

Communicationism is thought to not sit well with the following hypotheses, which many take to be plausible and empirically confirmed: (i) that the human language capacity did not evolve because it enables communication, (ii) that the function of language is not to enable, or to be used for, communication, and (iii) that human language is radically different from systems of animal communication. Berwick and Chomsky (2016) argue for (i)–(iii) at length, and on their basis reject Communicationism.57 They take (i)–(iii) to support the conclusion that the “modern doctrine” that language is a system of communication is “mistaken”; from a biologically informed evolutionary perspective, language is most plausibly seen as an “instrument of thought” (p. 102).

But I think it is relatively clear that Communicationism is compatible with (i)–(iii), and that

57See also Reboul 2015 for arguments for a similar conclusion.

they are not evidence against its truth. Take (i). Compare: Suppose we were to identify being musical with knowing how to play a musical instrument. It would be irrelevant to the truth of this identification that the human capacity for being musical—some set of biological traits of humans—did not evolve because it enables us to play musical instruments (i.e. because it is adaptive to play the guitar, piano, etc.), as it surely did not. The human capacity for musicality could have been selected for a completely different reason. For example, it is conceivable that genes strengthening our immune system, and selected for that reason, grant the capacity for musicality as an evolutionarily unintended side-effect. Likewise, it is irrelevant to the identification of language with linguistic communicative know-how that the human capacity for language did not evolve because it bestowed that know-how upon us.

As for (ii), Communicationism implies nothing whatsoever about what the evolutionary ‘function’ of language is. Suppose language’s evolutionary function is to enable thought in the form of inner speech, as Chomsky suggests. Communicationism only implies that having a language is related to the fulfillment of this function in the same way that knowing how to communicate with a language is related to its fulfillment. So, if language-having was selected for because it enables thought, then, if Communicationism is true, it must also be true that linguistic communicative know-how was selected for because it enables thought. This is not implausible. If a human knows how to communicate with L, then, ceteris paribus—i.e., barring any linguistic disabilities—they will possess the ability to think in L. And so it would make sense to think that the former could have been selected for on the basis of the evolutionary benefit of the latter.

And as for (iii), this is no problem because systems of human linguistic communication are also radically different from systems of animal communication.

Lastly, while in endorsing Communicationism I do take language to be a system of communication, this is consistent, as I point out above, with the view that human language evolved as an “instrument of thought”. And it is also consistent with the view I called Thoughtism in section 1.2.2, namely, the view that one’s language is L just if one thinks in L.

I say half of Thoughtism is correct: if you think in L, then you know how to communicate

with L with yourself in inner speech, and so you must have L. The other half of Thoughtism is dubious. Among us there may be those who know how to communicate with L by outward production or comprehension but cannot think with L in inner speech.58 And in children, inner speech develops later than knowledge of how to outwardly linguistically communicate; but young children are surely among the language-havers.59 The view of language as essentially a system of communication, rather than of thought, gets right the inclusivity of the linguistic community.

1.4 Conclusion

I have argued that language is essentially connected to communication, in that having a language is equivalent to knowing how to communicate with it. This view cannot be dismissed on the basis of the psycho-biological view of language. Indeed, this view seems to entail that humans have languages because the language faculty bestows on them knowledge of how to linguistically communicate. Chomsky is right that Communicationism is a “virtual dogma”, and philosophers can be faulted for uncritically assuming it. But some dogmas are true.

58See fn. 19 above. 59For an overview, see Geva and Fernyhough 2019.

Chapter 2

Meaning, use, and know-how

2.1 Meaning and use

Words have meanings. The science of semantics says so. Sometimes, what the meaning of a linguistic expression is said to be is no surprise. It is not surprising to be told that ‘Goats eat cans’ means that goats eat cans. But often it is very surprising what an expression’s meaning is said to be. For instance, some say the meaning of ‘the’ is something like a function, perhaps a function from functions from individuals to truth-values to functions from functions from individuals to truth-values to truth-values.1 If so, this demands explanation. When I utter ‘the’, why does it mean some function? In virtue of what do expressions have their meanings?

In virtue of use. Meaning ultimately is use, or so goes our tradition. I will not break with it. With Lewis I agree that “surely it is our use of language that somehow determines meaning”; the task is to “try to say how” (1992, p. 106). To say how, I think we need to look one level deeper than patterns of linguistic usage. Actual usage is a manifestation of our communicative competence or know-how. And that, I say, is what meaning is ultimately grounded in.
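Before proceeding, it may help to see the surprising meaning for ‘the’ written out. One standard Montagovian rendering treats ‘the’ as a Russellian determiner of the type described above, ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ (a textbook gloss; Montague’s own 1970 system differs in its details):

```latex
% [[the]] maps a predicate meaning P (type <e,t>) to a generalized
% quantifier (type <<e,t>,t>), so [[the]] has type <<e,t>,<<e,t>,t>>.
[\![\mathrm{the}]\!] \;=\;
  \lambda P_{\langle e,t\rangle}.\ \lambda Q_{\langle e,t\rangle}.\
  \exists x\,[\,\forall y\,(P(y) \leftrightarrow y = x) \wedge Q(x)\,]
```

So ⟦the goat⟧ is the function that maps a predicate meaning Q to truth just if there is exactly one goat and it satisfies Q.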

‘Meaning is fixed by the use of language’ is true, but this is subtly ambiguous. On a standard reading of this slogan, it says that the semantic facts are fixed by worldly patterns of linguistic usage. But on a non-standard reading—which I take to be the correct reading—it says that the semantic facts are fixed by our linguistic know-how. In more detail, the semantic facts about expressions in the mouth of a speaker (or in the mouths of a population) are fixed by the fact that

1This is suggested by Montague (1970).

they use a language, but in the sense of ‘uses’ on which ‘Bob uses a sextant’ is equivalent with ‘Bob knows how to use a sextant’, and not equivalent with some fact about Bob’s sextant-usage, or with a description, however complicated, of a history of Bob’s using a sextant in such-and-such ways. It is in this forgotten sense of ‘our use of ‘Goats eat cans’ ’ that it is true to say that the meaning of ‘Goats eat cans’ is fixed by our use of ‘Goats eat cans’.

By retrieving an understanding of use as know-how, we can solve Kripke’s (1982) puzzle. The puzzle is to explain how, say, our past use of ‘+’ (a historical pattern of usage) could possibly “compel” or “justify” us in using ‘+’ now or in the future to mean plus.2 If past usage cannot compel us forward, then it is hard to see how it could fix the meaning of ‘+’. Usage of ‘+’ can only determine that ‘+’ means plus if it rationalizes using ‘+’ to mean plus. (For if ‘+’ means plus, it is rational to use ‘+’ to mean plus.) But, as Kripke taught us, it is entirely unclear how historical patterns of usage could rationalize using ‘+’ in one way or another. Knowing the whole history of the use of ‘+’ does not by itself set a normative standard for how we should use ‘+’, and does not by itself rationalize using ‘+’ in any particular way or rationalize meaning what we mean by uttering it.3 As Nagel puts it, meaning cannot be “just use—unless we understand “use” in a normative sense” (1997, p. 41). So we need a sense of ‘use’ that rationalizes usage.4
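For readers who want Kripke’s deviant alternative on the page, his own definition of ‘quus’ (symbolized ‘⊕’, from p. 9 of the 1982 book) is:

```latex
% 'quus' agrees with addition whenever both arguments are below 57
% and returns 5 everywhere else, so it matches any finite history of
% past '+'-computations whose arguments were all under 57.
x \oplus y \;=\;
\begin{cases}
  x + y & \text{if } x, y < 57,\\[2pt]
  5 & \text{otherwise.}
\end{cases}
```

Any finite record of past usage with arguments under 57 is compatible with both the plus and the quus hypothesis about what ‘+’ meant, which is why the historical pattern alone cannot rationalize answering ‘125’ rather than ‘5’ to ‘68 + 57’.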

The know-how sense of ‘use’ fits the bill. Know-how rationalizes. We can make sense of ourselves when asked ‘Why did you do it that way?’ by answering ‘That’s the way I know how’.5 And so when asked ‘Why did you mean plus rather than quus by ‘+’?’ we can answer

2That is, as opposed to quus (p. 9). 3More specifically, neither patterns of usage nor the dispositions of which they are manifestations seem to satisfy what Soames 1997 helpfully calls the ‘normativity requirement’: “If the fact F determined that (in the past) one meant addition by ‘+’, then knowing that F would, in principle, provide one with a sufficient basis for concluding that one ought to give the answer ‘125’ to question What is 68 + 57?, provided one intends to use ‘+’ with the same meaning it had in the past” (pp. 220–21). 4Those in agreement with Kripke on this point include Blackburn 1984b (pp. 286–90), McDowell 1984 (pp. 336–37), Wright 1984 (pp. 771–75), Boghossian 1989 (pp. 511–14), Pettit 1990 (pp. 6–7), and Brandom 1994 (pp. 13–15, fn. 10). For an overview, see Kusch 2006 (pp. 50–93). Hattiangadi 2006 challenges the view that meaning is normative in any sense that makes trouble for metasemantics (see also Hattiangadi 2007, pp. 179–207; 2017), but see Whiting 2007, 2016, Connelly 2012, and Kiesselbach 2014 for ripostes. I do not here defend the thesis that meaning and whatever determines meaning must normatively constrain future usage. I only want to point out that if meaning is fixed by know-how, then there is a clear and plausible sense in which meaning is normative. 5Note that we do not make sense of ourselves in the same way, or, arguably, at all, by answering ‘I am able to do it that way’ or ‘That is how I did it yesterday’.

‘That’s how I use ‘+’ ’ in the sense that is equivalent with ‘That’s how I know how to use ‘+’ ’. So our linguistic know-how makes sense of what we mean by what we utter.

Imagine Euclid in the midst of calculation. Why is Euclid justified in using ‘+’ to mean plus as opposed to quus? The answer, roughly, is that plus is the only thing Euclid knows how to mean by ‘+’. Euclid’s linguistic agency does not include knowledge of how to mean quus by ‘+’. So, given that it is not a mystery that Euclid is justified in meaning something by ‘+’—or, that it is not a mystery that he is justified in practicing mathematics—it is no mystery that he is justified in meaning plus by ‘+’. Generalizing, our use of language—our linguistic know-how—rationalizes us in using expressions to (speaker-)mean what they mean. For almost always the only thing we know how to mean by uttering ‘Goats eat cans’ is that goats eat cans, and the only thing we know how to mean by uttering ‘Horses eat hay’ is that horses eat hay, and so on.

What remains to be argued is that our linguistic know-how fixes meaning. This is my task for the remainder of the chapter. I will argue that what an expression means for a speaker is fixed by which language they know how to communicate with. Call this view ‘Communicationism’. I know how to communicate with English, a language in which ‘Goats eat cans’ means that goats eat cans. That is why, according to Communicationism, ‘Goats eat cans’ means that goats eat cans for me.

Communicationism is closely related to the Gricean view that what it is for a sentence to have a meaning, or to express a thought, is for us to use it in such a way that it serves as a conventional device for expressing that thought.6 I think this view is nearly true, but false.7 Instead, I say that for a sentence to express a thought for us is for us to know how to use it to express or communicate that thought. Now, perhaps, as a matter of empirical fact, we know how to use sentences to express our thoughts because we enact conventions on which they are to be used as devices for such expression; and so perhaps, in the actual world, our communicative know-how is causally or constitutively dependent upon linguistic conventions. But there are

6For a rigorous presentation of such a view, see Schiffer 1982, developing ideas that trace back to Grice 1957, 1968, 1982 and Lewis 1969, 1975. (See fn. 13 for others.) And see Schiffer 2017b for a somewhat pessimistic retrospective on the whole Gricean metasemantic enterprise. 7See 1.2.5 for my argument.

possible worlds in which we know how to communicate by uttering sentences in the absence of convention; and in those worlds it seems that the sentences of our language have meanings, albeit non-conventional ones. So, although it is no doubt true of me that the words of my mouth are made meaningful with “help from my friends” (Lewis 1969, p. 177), this need not be true of, say, Robinson Crusoe. And so my view jettisons convention from the analysis of meaning.

Communicationism is also a ‘sentence-first’ theory of meaning. In the first instance, the linguistic entities that we know how to communicate with are sentences; of course, we also know how to communicate with words, but only insofar as those words are constituents of sentences we know how to use. We do not know how to communicate with ‘in’, ‘of’, ‘well’, ‘way’, or ‘who’ on their own, so to speak. There are some words, like proper names, that we know how to use in isolation in order to communicate. But it seems that for many words we are communicatively competent with them only in virtue of being communicatively competent with sentences. For these reasons, I take communicative competence to be fundamentally sentential. And so I say that the meanings of sentences are fixed ‘first’, and that words get their meanings ‘second’ in the order of determination. The sentence-first orientation of meaning-determination was once widely defended.8 But it seems to have been largely abandoned. Despite this, the appendix to this chapter is dedicated to defending the thesis that word-meaning supervenes on sentence-meaning. And so if our linguistic know-how can fix the latter, then it can fix the former.

The rest of this chapter motivates Communicationism. In section 2.2 I build up to a precise statement of Communicationism, and argue that it is an improvement over Lewis’s (1975) related account of sentence-meaning. Then in section 2.3 I offer an a priori argument for each direction of the necessitated biconditional component of Communicationism. Next, in section 2.4, I argue that knowing how to communicate with a language is also prior to having a language, in addition to being equivalent with it, and I also outline how semantic facts can be thought of as ‘arising’ out of facts about our linguistic know-how. Lastly, in section 2.5, I reply

8Davis 2003 contains a near-exhaustive list of those who have endorsed a sentence-first approach to meaning that reads as quite the who’s who (p. 175, fn. 16): Alston, Armstrong, Austin, Avramides, Bennett, Bentham, Blackburn, Brandom, Chierchia and McConnell-Ginet, Danto, Davidson, Dummett, Frege, Grice, Harrison, Hobbes, Hugly and Sayward, Lewis, Loar, McDowell, Neale, Peacocke, Ryle, Schiffer, Vanderveken, and Ziff.

to a series of objections to Communicationism.

2.2 Communicationism

To get an implementation of Communicationism on the table, the first step is to get clearer on the subject matter of metasemantic questions. When we ask, “In virtue of what does ‘goat’ mean goat?”, what are we asking? We are asking about what explains the fact that, for us, ‘goat’ means goat. For others, its meaning is different. If the etymologists are to be believed (and they are), ‘goat’ meant she-goat to our 13th century English forebears. For the subjects of Edward I, ‘bucca’ meant he-goat.9 So let us think of the subject matter of metasemantics as the totality of speaker-relative semantic facts.

2.2.1 A framework for metasemantics

How should we go about explaining these facts? Not in a vacuum. We should first adopt some fruitful theoretical framework as a background. I adopt the framework of Lewis (1975). Lewis’s metasemantics is one of a family of theories that attempt to explain speaker-relative semantic facts against the backdrop of two shared commitments: (i) a conception of semantics, and (ii) a conception of metasemantics that flows from (i). I will give voice to these commitments in turn.

(i) Semantics: What is semantics? It is a descriptive project aimed at characterizing a certain abstract object: a ‘semantic interpretation function’, or ‘language’, a function from expression-types to meanings.10 When the semanticist’s inquiry concerns the semantic facts that obtain relative to a particular population or relative to the idiolect of a particular speaker, we can say that their inquiry aims to characterize whichever language is the actual language of that population or speaker. So, when we propose semantic hypotheses, like “ ‘cow’ means cow”, in order to state a semantic fact about our language, we can think of ourselves as getting

9See: https://www.etymonline.com/word/goat.
10Early Lewis calls these functions ‘languages’ (1969, pp. 160–165; 1975, p. 163) or ‘grammars’, “abstract semantic systems whereby symbols are associated with aspects of the world” (1970, p. 190). Later Lewis opts for the arguably less misleading label ‘semantic interpretation’ (1986, p. 40; 1997, p. 337). But I will follow the earlier usage of ‘language’. In appendix 2.B to this chapter, I explore the relationship between ordinary languages, like English, and languages qua semantic interpretation functions.

at the fact that L(‘cow’) = cow, where L is our language.

(ii) Metasemantics: What is metasemantics? Metasemantics aims to specify whatever it is that ‘grounds’ facts about which languages ‘belong’ to which populations or speakers. A metasemantic theory is a theory of what makes a possible language a population’s or speaker’s actual language.11 We can think of such a theory as proposing an account of ‘the actual language relation’: the relation such that a person or population’s language is L because they stand in that relation to L.12

Thinking of semantics and metasemantics in these ways, Lewis’s recommendation is to analyze speaker-relative semantic facts in terms of a speaker’s standing in the actual language relation, R, to some language L. In other words, the idea is that there is some relation R such that:

Metasemantic Schema:

For any speaker x, linguistic expression e, and meaning m, e means m for x just if and because there is some L such that R(x, L) and L(e) = m.

I follow Lewis in taking Metasemantic Schema as the form of a metasemantic theory. The metasemantic question is thus reconceived: In virtue of what is a particular language L the actual language of a speaker x? What makes it the case that R(x, L)? If an answer can be given in non-semantic terms, then, via Metasemantic Schema, we can state the non-semantic basis of speaker-relative semantic facts.
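Lewis’s picture of a language as an abstract pairing of expression-types with meanings, and of Metasemantic Schema as a lookup in whichever language stands in R to a speaker, can be rendered as a toy computational model. The sketch below is purely illustrative and is no part of Lewis’s or this chapter’s apparatus: the names (`english`, `actual_language_of`, `means`) and the use of strings as stand-ins for meanings are assumptions of the illustration, not claims of the text.

```python
# Toy model: a 'language' in Lewis's sense is a function (here, a dict)
# from expression-types to meanings. Strings stand in for real contents.
english = {
    "Goats eat cans": "that goats eat cans",
    "Horses eat hay": "that horses eat hay",
}

# The actual language relation R, modeled as a bare assignment of a
# language to each speaker. The assignment is simply stipulated here;
# saying what grounds it is the metasemantic question itself.
actual_language_of = {"x": english}

def means(speaker, expression):
    """Metasemantic Schema as lookup: e means m for x just if there is
    some L such that R(x, L) and L(e) = m."""
    language = actual_language_of.get(speaker)
    if language is None:
        return None  # the speaker stands in R to no language
    return language.get(expression)
```

On this toy model, the semantic fact that ‘Goats eat cans’ means that goats eat cans for x falls out of two stipulations: that R(x, English) and that English(‘Goats eat cans’) = that goats eat cans. The philosophical work lies entirely in what fixes the first stipulation.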

2.2.2 Communication and speaker-meaning

To build up to an account of R, let us start with the idea that linguistic meaning must ultimately be derivative from meaningful or contentful psychological goings-on.13 Mental intentionality is ‘original’; linguistic intentionality is ‘derived’.14 But how exactly does this ‘derivation’ go?

11See Lewis 1969 (p. 162). Lewis also characterizes metasemantics as “the description of the [...] facts whereby a particular [...] abstract semantic [system] is the one used by a person or population” (1970, p. 190).
12Endorsement of an ‘actual language relation’ approach to metasemantics is given, implicitly or explicitly, by Cresswell 1994, Davies 1981, 2002, Kölbel 1998, 2002, Loar 1976, 1981, S. R. Miller 1986, Partee 2015, Peacocke 1974, 1976, Rattan 2006, Rescorla 2015, Schiffer 1982, 1987, 1993, 2003, 2006, 2017b, and B. C. Smith 2006, 2008. See also the references to Lewis’s metasemantics in Burgess and Sherman 2014.
13This stance is shared by Grice 1989, Stalnaker 1984, 1999, 2014, Strawson 1971, J. Bennett 1976, Lewis 1969, 1975, and Schiffer 1972. Sometimes it is called ‘intention-based semantics’; see Schiffer 1982 and Borg 2008. But this label misleadingly suggests that intentions must play the starring role in the derivation of meaning from the mind; this might be resisted. Also, this stance is closely related to, but should not be conflated with, the ‘naturalistic’ stance driving the project of ‘naturalizing meaning’: to reduce intentional or semantic notions to non-intentional, non-semantic, physicalistically acceptable notions. This project sees the reduction of linguistic meaning to the physical as possible only by way of an initial reduction to mental intentionality, given the viability of an end-phase reduction of the mental to the physical. See Schiffer 1987 (pp. 1–17) for an expression of this motivation for the Gricean stance. And see Loewer 1997 and Schulte 2017 for overviews of the naturalization project.
14For this distinction, see Searle 1983 (pp. 26–29).

My proposal is that linguistic intentionality is derivative from the intentionality of practical knowledge. Specifically, I think R can be understood in terms of knowledge of how to linguistically communicate. What is it to know how to linguistically communicate? When a speaker of a language L knows how to communicate with L, they typically know how to utter sentences of L and thereby mean the contents those sentences express in L, and how to do so in a systematic way, as if guided by rules; they have the practical capacity for systematic, meaningful linguistic production. They will also typically know how to interpret other speakers of L, when they utter sentences of L, as thereby meaning the contents those sentences express in L, and know how to do so systematically, as if guided by rules; they have the practical capacity for systematic, reliable linguistic comprehension. I say that a speaker knows how to communicate with a language just if they have either or both of these capacities, since one can fully partake in communication even if one only knows how to produce or only knows how to comprehend.

Let us turn this into an account of R. To start, I propose:

Communicationism:

For any speaker x and language L, R(x, L) just if x knows how to communicate with L.

Given the account of communication just sketched, Communicationism should be advanced alongside this account of what it is to communicate with a language:

Linguistic Communication:

For any speaker x, language L, and linguistic act Φ, x communicates with L in Φing just if in Φing, x follows the rule SpeakL or the rule ListenL.

SpeakL: Say S of L only if you thereby mean L(S)!

ListenL: If someone says S of L, interpret them as meaning L(S)!

Here I also follow Lewis in thinking that we communicate with our language by rule-following. But I disagree with him about two things. I deny that it is essential to communication with a language that it be by conventional rule-following; linguistic communication in the absence of convention seems to me to be perfectly possible.15 And I disagree with him about which rules we communicate by following. Lewis argues that we communicate with a language L by following these linguistic rules:16

Truthfulness: Say S of L only if you believe L(S)!

Trust: Believe L(S) if you witness someone utter S!

Here is his argument, as it is put in Lewis (1980), that communication proceeds by following Truthfulness and Trust:17

The foremost thing we do with words is to impart information, and this is how we do it. Suppose (1) that you do not know whether A or B or ...; and (2) that I do know; and (3) that I want you to know; and (4) that no extraneous reasons much constrain my choice of words; and (5) that we both know that the conditions (1)–(5) obtain. Then I will be truthful and you will be trusting and thereby you will come to share my knowledge. I will find something to say that depends for its truth on whether A or B or ... and that I take to be true. I will say it and you will hear it. You, trusting me to be willing and able to tell the truth, will then be in a position to infer whether A or B or ... . (p. 80)

There can be no doubt that Lewis has picked up on a genuine phenomenon here, and that it is intimately related with communication.18 Sometimes we do ‘impart information’ in just this

15See section 1.2.5 for an argument for this.
16Being truthful and trusting in some L is not often explained as an instance of rule-following. But it is clear that Lewis took it that way. See section 4, “Rules”, of part III of Lewis 1969 (pp. 100–107, especially p. 107). Also, Lewis admits his conventions of truthfulness are reformulations of the “rules of truthfulness” in Stenius 1967; see Lewis 1969 (p. 177, fn. 8), and then compare Truthfulness with Stenius’s semantic rule for declaratives “(R3)” (1967, pp. 267–268).
17Essentially the same account of communication is given in Lewis 1986 (p. 40) and in Lewis 1997 (p. 350).
18See MacFarlane 2014 for a recent deferral to this passage from Lewis in characterizing the essence of what we do when we converse (pp. 53–54).

way. So let us admit that Lewis has successfully defended the existence of the phenomenon of signaling with a language, defined as follows:

Signaling:

x signals with L in Φing just if in Φing, x follows TruthfulnessL or TrustL.

The question now is: What is the relationship between signaling and communicating?19 Signaling is neither necessary nor sufficient for communicating. It is not necessary because a communicative exchange between two people need not involve the listener forming a belief in the content of the speaker’s utterance; the listener might only come away with the belief that the speaker believes or meant the content of the sentence uttered. Sometimes, communication involves only a speaker’s revealing their mind, rather than their conforming the listener’s mind to theirs.

And signaling is not sufficient for communication because we can signal in conversationless contexts. Imagine a teacher who punishes a student with a special form of detention. The student must write out sentences on a chalkboard continuously for an hour. And they are to write S only if they believe the content expressed by S in English. If they were to comply, the student would thereby be following TruthfulnessEnglish. They would be signaling many propositions by the lines they draw on the board. But they would not be communicating. There would be no conversation happening, not even between the student and themselves. (We can imagine that the teacher has left the room.) And this is because, I say, the student would not mean anything by writing these lines.

So I think Linguistic Communication is an improvement upon Lewis’s account of communication. I will assume it going forward. But why should one accept Communicationism?

19Lewis himself, I think, would accept that conventional signaling entails communicating as I define it. For he offers an informal proof that if one conventionally signals that p by uttering S, then one thereby speaker-means that p by uttering S, and that the converse fails (1969, pp. 152–159). (His proof assumes the definition of speaker-meaning given by Grice 1957, on which x means that p in uttering S just if x utters S with an intention that their audience believe p by means of recognizing that intention.) I conjecture, then, that Lewis would accept that any linguistic act of conventionally signaling with L will involve a speaker uttering a sentence S of L and meaning L(S) thereby, and will be an instance of following SpeakL, and so will be an act of communicating with L. But he would not accept that every act of communicating with L is an act of conventional signaling.

2.3 An argument for communicationism

Communicationism can be motivated by the following two ideas. The first is that linguistic expressions can only have meanings for those who know how to communicate with languages. And the second is that one’s knowing how to communicate with a language suffices for linguistic expressions having meanings for one. In what follows, I argue that, if these two thoughts are correct, then, given the connection between speaker-relative semantic facts and facts about the actual language relation R, it follows that a speaker x knows how to communicate with a language L if and only if R(x, L), that is, just if x has L.

2.3.1 Having L entails knowing how to communicate with L

Suppose someone has L but, for some reason, does not know how to communicate with L. One reason for this might be that they do not know how to mean anything or how to interpret others as meaning things. Call beings like this ‘Meaningless’.

The Meaningless cannot have languages. If one Meaningless person could have a language, then it is possible for a Meaningless population to share a language.20 If so, then there could be a world in which the only language-havers are part of one Meaningless population. If so, sentences could have meanings for them; in other words, theirs could be a world containing linguistic intentionality, in which they produce linguistic tokens that bear contents, but in which no one ever means anything by uttering them, in which no one has ever meant anything or taken someone as meaning something, and in which no one knows how to.

This is difficult to make sense of. It is not because such a world would involve speakers producing contentful tokens without meaning anything thereby; we might do this by producing sentences for non-communicative ends, say, in practicing our handwriting. Such local examples of linguistic intentionality without speaker-meaning are unobjectionable. Rather, the difficulty is that the absence of speaker-meaning here is global. To bring this out, take an acceptable local

20One might deny this. Perhaps a Meaningless person could have a language L, but only when socially embedded in a population of ‘Meaningful’ people, a population mostly made up of people who know how to communicate with L. My reply is to deny that the Meaningless person would fully lack the knowledge of how to communicate with L; they would presumably know how to mean things by participation in communicative joint actions within their population, and so would count as having L. Consider that even someone with severe linguistic disabilities might know how to take part in a group statement.

case and make it global: Imagine a world of Meaningless people who only ever practice their handwriting by writing out English sentences. Could these sentences express contents that are, for their producers, thoughts that they have no idea how to express?

The local case is tolerable perhaps because there are clear facts of the matter about what we the Meaningful would mean by these sentences if we were to produce them in the course of linguistic communication, or perhaps because we know what those sentences mean in our language.21 But there is nothing determinate that the Meaningless would mean by the sentences they produce, and we cannot appeal to what those sentences mean in their language without begging the question. So we are left mystified by how their written inkblots could manage to bear contents ‘on their own’.22

I doubt, then, that sentences produced by the Meaningless could be meaningful, and so think that the Meaningless cannot have English as their language, or any other language. The source of my doubt is perhaps that, like many, I hear something deeply right in Strawson’s (1950) claim that referring “is not something an expression does; it is something that someone can use an expression to do” (p. 326). As for reference, so for meaning generally. If linguistic intentionality must bottom out in acts of speaker-meaning, then we should say that people who do not know how to communicate in a language due to being Meaningless cannot have a language.

One might object, however, that the Meaningless might know how to read or hear sentence-tokens as meaning or expressing contents, even though they cannot interpret speakers as meaning things or themselves mean anything. Perhaps the Meaningless can read text or listen to utterances as expressing raw authorless data. If so, perhaps there is no problem in thinking that those tokens bear those data as contents for their readers. And if so, perhaps the Meaningless are capable of having a language after all.

Two things in reply: First, contentless text can be ‘read’. One can systematically judge

21For given that we are Meaningful, we must have a language, as I argue below.
22If it is possible for words to have their meanings when tokened outside the context of a sentence, then we can bring out the oddness here by imagining that the Meaningless practice their handwriting by writing the English word ‘a’ over and over. Let ⟦a⟧ be its semantic value in English. Could every token of ‘a’ in their notebooks mean ⟦a⟧, even though no one means anything by them?

bits of meaningless text as meaning things; one’s judgments do not imbue that text with those meanings. This is clear if we imagine that the ‘text’ consists of tea leaves, and the ‘reader’ is a tasseomancer. So, even if the Meaningless can ‘read’ English sentences as meaning things, this is no reason to think that they actually mean those things for them.

Second, the capacity to form beliefs, even true beliefs, about what sentences of L mean is not sufficient for having L. So if that is what the capacity to ‘read L’ is, then possessing it does not entail having L. Consider: There are many ways I might ‘read’ German sentences printed in the German edition of The Critique of Pure Reason; I might consult the English edition, for instance. This would not suffice for having German as one of my languages.23

Another way someone might lack knowledge of how to communicate with L is for them not to know how to token sentences of that language or how to detect when others do so. Call beings like this ‘Sentenceless’. Like the Meaningless, the Sentenceless cannot have languages. If they could, then there could be linguistic intentionality in a world where talking is impossible and imperceivable. But there could not be. Sentences there would have no semantic significance for anyone, for they would be invisible. If the vehicle is nothing for someone, then so are its would-be contents. To see this, imagine I were to tell you that there is a secret genus of humanly imperceptible, unproducible entities that, for you, refer to Jupiter, in the very same way that ‘Jupiter’ does (assuming that it does).24 It would sound worse than magic.25

Yet another way someone might not know how to communicate with L is for them to know how to token sentences of L, know how to detect others as doing so, and know how to mean things and interpret others as doing so, but not know how to do these ‘in concert’

23Note, however, that if reading L requires knowing how to tell what the author of some S of L meant in writing S, as is plausible, then knowing how to read L does suffice for having L, on my view, because it amounts to knowing how to follow ListenL.
24Might this secret genus be expressions of Mentalese? No. When expressions of our language of thought have contents, they do not necessarily have contents for us in the sense defined, namely, as having those contents in our natural language, for our language of thought need not be our natural language. And if our language of thought is also our natural language, then expressions of our language of thought are on that account not secret and inaccessible to us.
25Now, particular sentences arguably can have meanings for speakers who do not know how to produce or recognize them. So I am not claiming that every sentence of a language must be producible and recognizable by speakers of that language. Rather, I am claiming that some sentences of a language must be producible or recognizable by speakers of that language.

in enough of a systematic way to count as knowing how to follow the rules constitutive of communicating with L. Call beings like this ‘L-Unsystematic’. Given Linguistic Communication, the L-Unsystematic do not know how to follow SpeakL and ListenL in production and comprehension.

Now, if being English-Unsystematic is compatible with having English as a language, then it should be compatible with having English monolingually. But these are arguably not compatible. To see why, consider how quick we are to judge that someone is not speaking English if what they mean by a sentence deviates from what that sentence means in English. Suppose we have just met Floyd, and we hear him say ‘Goats eat cans’, in what appears to be an attempt to communicate (and is), but then we come to learn that he meant thereby that horses eat hay. Next, we hear Floyd say ‘Horses eat hay’, and come to learn that he meant thereby that snow is white. Quickly, on the assumption that Floyd is acting rationally and that everything is right with him, we get the strong sense that English must not be his language.26

Why? Here is my rational reconstruction of our judgment: We know that communication relies on systematically meaning certain things, and not others, by saying certain sentences. Since we judge Floyd to be attempting to communicate, we judge that his meaning that horses eat hay by saying ‘Goats eat cans’ must be rule-governed in some way; we judge that he is following a rule in meaning that horses eat hay by saying ‘Goats eat cans’. This, we take it, entails that English is not his language.

But how? Only if having a language constrains one’s capacity to linguistically communicate. And only if having English, in particular, constrains one’s linguistic communicative know-how in such a way that, if English is one’s only language, then one does not know how to rule-followingly mean that horses eat hay by saying ‘Goats eat cans’. So, if we assume that our judgments about cases like Floyd’s are correct, then having English as your only language rules out knowing how to systematically say English sentences and thereby mean contents wildly different from what those sentences mean in English. But the English-Unsystematic will know how to do this to some significant degree, lest they not be that Unsystematic. So, if someone

26Assuming, that is, that we know him to be monolingual.

is English-Unsystematic, then English cannot be their only language, and so cannot be among their languages either.

To wrap up: In three salient ways in which one might not know how to communicate with L—by being Meaningless, Sentenceless, or L-Unsystematic—one’s language plausibly cannot be L. This argument—an admittedly non-deductive argument by cases—suggests to me that having L plausibly entails knowing how to communicate with L.

2.3.2 Knowing how to communicate with L entails having L

Suppose someone knows how to communicate with L without having L as their language. They must either have no language, or have some language but not L. Neither alternative makes sense.

If someone can know how to communicate with L but have no language at all, then it should be possible for there to be a world containing only people like this, but in which there is no linguistic intentionality. In a world like this, people know how to communicate with English, say, knowing how to systematically mean the same contents as we do in saying English sentences, even though none of these sentences have meanings for them; people would thus be completely unable to utter sentence-tokens that bear contents.27 This means that they would be able to engage in linguistic communication just as we do but by producing meaningless utterances. This is not possible. No world could be a near-duplicate of ours at the level of speakers’ linguistic communicative capacities but be empty of semantic facts.

This shows that if people can communicate with, say, English, then they must have some language L. Based on my arguments above, such people must then know how to communicate with L. If they can only communicate with a single language, then L = English. And if they

27An opponent might deny this by claiming that although knowing how to communicate with L does not suffice for having L, manifesting that know-how does suffice. If so, one might find room to deny my claim that lacking a language entails that one is unable to say a contentful sentence-token; perhaps by saying a sentence, one might come to acquire a language, thereby making it the case that the token one produces expresses a content. Of course, I accept that manifesting knowledge of how to communicate with L suffices for having a language, because it requires having that know-how, which I say suffices for having a language. Why does my opponent deny that possessing the know-how alone suffices? Presumably because they think that languages are necessarily had in virtue of a speaker’s actual linguistic activity. Such a view is mistaken, I think; when we ascribe languages to those who sleep, it is not in virtue of what they accomplished yesterday or before that our ascriptions are true.

can communicate with distinct languages English and L, then it is implausible to think that they do not have English as well as L. The reason is this: If of these two languages only L is had by them, then, if they were to communicate together with English by saying (or hearing) English sentences, the tokens they produced would, assuming that they must bear contents, have to have their contents wholly fixed by the fact that their language is L. But that fact seems too ‘remote’ from their communicative exchange to be relevant to the meanings of the tokens produced across its duration. Linguistic tokens produced in this round of communication should have their contents fixed ‘locally’, by the language the speakers are using in the conversation. L is not the language in use on this occasion; English is. So, only a speaker’s having English could plausibly be relevant to the meanings of tokens produced in the course of communication with English.28 Therefore, given that knowing how to communicate with English entails knowing how to produce or hear contentful sentences in the course of communicating with English, it also entails having English as one’s language.

For these reasons, I take it that anyone who knows how to communicate with some L must have L. And so we have now arrived at Communicationism.

2.4 Another argument for communicationism

I have argued that Communicationism is a priori plausible, at least when construed as a necessitated biconditional. But one might worry whether there is any reason to accept Communicationism as an analysis, explanation, or statement of the grounds of facts involving the actual language relation. In other words, why accept Communicationism+?

Communicationism+:

For any speaker x and language L, R(x, L) just if and because x knows how to communicate with L.

After all, for all I have said, it might be that speakers know how to communicate with their

28More abstractly, suppose x and y know how to communicate with languages L1 and L2. There is a significant range of sentences such that every S in that range is such that L1(S) = L2(S). Now suppose x and y communicate in L1, but only produce sentences in this overlapping range. If a token of S that they produce bears the content p, it is most natural to think that this is (in part) because L1(S) = p, not because L2(S) = p.

language because they have it, not vice versa, in which case it cannot be said that semantic facts, grounded as they are in R-facts, are grounded in facts about communicative know-how.

I accept the challenge. My argument for Communicationism+ is an argument for the following claim:

Sentence-Meaning:

For any speaker x, sentence S, and proposition p, S means p for x just if and because there is some L such that x knows how to communicate with L and L(S) = p.

For suppose Sentence-Meaning is true, along with Metasemantic Schema—which we have been assuming—as applied to sentences:

Metasemantic Schema for Sentences:

For any speaker x, sentence S, and proposition p, S means p for x just if and because there is some L such that R(x, L) and L(S) = p.

How then should we understand how these two different statements of the grounds of sentential semantic facts relate to each other? The most natural thing to say, I think, is that Sentence-Meaning is true because (i) Metasemantic Schema for Sentences is analytic, as Lewis takes it to be, and because (ii) R(x, L) holds because x knows how to communicate with L; that is, because Communicationism+ is true. So if Sentence-Meaning is true, then plausibly so is Communicationism+.

2.4.1 Sentence-meaning from know-how

Why accept Sentence-Meaning? Let us start by restricting Sentence-Meaning to cover only those sentences a speaker knows how to utter and those propositions a speaker knows how to mean. So ‘S’ will range over sentences of a speaker’s ‘sentential vocabulary’. And ‘p’ will range over propositions within a speaker’s ‘expressive range’. Now we can argue as follows.

Left-to-right: If S means p for x, given that x knows how to utter S and how to mean p, then x must at least know how to communicate with the single-sentence language L∗ = {⟨S, p⟩}.29 That is, x must know how to follow the rule SpeakL∗. And so x must then know how to communicate with some language L such that L(S) = p.

Suppose we were to create a new single-sentence language, Bingese, by stipulating that ‘Bing bing’ means that goats eat cans in Bingese. ‘Bing bing’ would then mean that goats eat cans for us. And we would know how to follow SpeakBingese, and so would know how to communicate with Bingese. Thus, arguably, if ‘Bing bing’ means that goats eat cans for us, then we know how to communicate with some language L such that L(‘Bing bing’) = that goats eat cans.
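Throughout, a language is modeled as a function from sentences to propositions. Purely as a toy illustration of that model (and not as part of the argument), here is a minimal Python sketch; representing a language as a dict, propositions as strings, and the rule-checking function are all my own assumptions:

```python
# A minimal sketch of the thesis's model: a language as a function
# (here, a dict) from sentences to propositions. Propositions are
# represented as strings purely for illustration.

# Bingese: the stipulated single-sentence language from the text.
bingese = {"Bing bing": "that goats eat cans"}

def complies_with_speak(language, sentence, meant):
    """Check one occasion of following Speak_L:
    say S of L only if you thereby mean L(S)."""
    if sentence not in language:
        return False  # S is not a sentence of L
    return language[sentence] == meant

# Saying 'Bing bing' while meaning that goats eat cans conforms
# to Speak_Bingese on this occasion:
assert complies_with_speak(bingese, "Bing bing", "that goats eat cans")

# Meaning something else by it would violate the rule:
assert not complies_with_speak(bingese, "Bing bing", "that bears tap dance")
```

The dict is a stand-in for Lewis-style interpretation functions; nothing in the sketch depends on the particular strings chosen.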

Right-to-left: Suppose a speaker knows how to communicate with some L such that L(‘Ziegen fressen Dosen’) = that goats eat cans. And suppose they know how to do this by knowing how to follow SpeakL.30 Now take this sentence-specific instance of SpeakL: Say ‘Ziegen fressen Dosen’ only if you mean thereby that goats eat cans! The speaker also knows how to follow this rule; they know how to perform speech acts that involve meaning that goats eat cans by saying ‘Ziegen fressen Dosen’ in a systematic, non-accidental way. Thus, the speaker has a way to mean that goats eat cans by saying ‘Ziegen fressen Dosen’. This makes ‘Ziegen fressen Dosen’ a device for meaning that goats eat cans, in the same way that a shovel is made a device for digging a hole for a worker by the fact that they know how to use it to dig a hole. And this, in turn, makes ‘Ziegen fressen Dosen’ itself mean that goats eat cans for the speaker, or so I submit, in the same way that a shovel is a hole-digging tool for a worker who knows how to use it to dig holes.

29One might worry that this entails that monolingualism is impossible. For if this is right, then anyone whose language is English and who can systematically mean that goats eat cans in saying ‘Goats eat cans’ will also have the distinct language L+ = {⟨‘goats eat cans’, that goats eat cans⟩}, and will so be multilingual. If so, then one might worry that my arguments for Communicationism in 2.3 that appeal to the possibility of speakers who have a single language must fail. The worry is understandable, but can be calmed. Strictly speaking, the arguments in 2.3 should be read, by the close reader anyway, as about having languages in the ordinary sense. In this sense, monolinguality is surely possible; I have one language and it is English. But the arguments in 2.4 should be read by the close reader as about having languages qua semantic interpretation functions. The notion of having a language in this sense—of having an interpretation—is a theoretical notion, and so it is not contrary to commonsense to think that we ‘have’ a multitude of interpretations. Why conflate these? Why conflate Communicationism read as a thesis about what it is to have an ordinary language with the distinct though similar thesis about what it is to have a semantic interpretation? Because I endorse both theses, and think that (ordinary) language-having supervenes on interpretation-having in such a way that these theses can be seen as standing or falling together; see appendix 2.B for an argument for this. See also Lewis 1975 on the distinction between our languages (i.e. the interpretations we use) and “the language” we have, or, “the most inclusive language” we have (pp. 185–86).
30The argument that follows can be reworked for a speaker who only knows how to follow ListenL.

This proposal about what makes ‘Ziegen fressen Dosen’ mean what it does for our speaker falls within the Gricean tradition of intention-based semantics. As Schiffer (1987) explains, it is “definitive” of intention-based semantics that for any sentence S of a speaker’s language,

there is some feature Φ, constitutive of the meaning of [S] [...] such that (a) Φ has a specification in wholly psychological terms [...] (b) by virtue of having Φ, [S] is an especially efficacious device [...] for performing acts of speaker-meaning of a certain type; (c) [S’s] having Φ entails that [S] has whatever meaning [S] happens to have (p. 244)

The feature or property of ‘Ziegen fressen Dosen’ that fixes its meaning for a speaker, I claim, is the property being such that that speaker knows how to, in a systematic way, mean that goats eat cans by saying it. This property—call it ‘F’—can be understood wholly in psychological terms, given that know-how is psychological. And F is also such that the fact that ‘Ziegen fressen Dosen’ has F explains why ‘Ziegen fressen Dosen’ is, for our speaker, an efficacious device for performing the act of speaker-meaning that goats eat cans. And F also entails, arguably, the semantic property meaning that goats eat cans for our speaker. A property like F, then, is just the kind of property that Griceans have been looking for.

So I think Sentence-Meaning is true when restricted to a speaker’s sentential vocabulary and expressive range. But is this uninteresting due to its restrictedness? Arguably not. For it is arguable that the semantic facts that hold relative to a speaker must involve sentences they know how to utter and things they know how to mean. Admittedly, sometimes sentences have meanings for us that we are unable to utter. But, given that know-how and ability are distinct, this is neither here nor there. At the very least, it is not easy to come up with a clear case of a sentence that has a meaning for a speaker to whom we would not attribute the capacity or the knowledge of how to utter that sentence, or to come up with a clear case of a content that a sentence expresses for a speaker that they do not know how to mean.

Still, I am open to the possibility that an incredibly complex sentence could express a proposition for someone even though they do not know how to utter it, or that a sentence could express an incredibly complex or ineffable proposition for someone even though they do not know how to mean it. If this is indeed possible, my proposal for handling this is as follows:

Our communicative know-how first fixes the semantic pairings between our sentential vocabulary and our expressive range. By fixing these pairings, our communicative know-how also fixes what the subsentential expressions of our language mean, either in the way sketched in 2.5.6 or in the way sketched in Appendix 2.A below. Then, once our communicative know-how fixes subsentential meaning, compositionality takes over to fix the semantic pairings between sentences and contents outside our grasp.

But even if this proposal cannot be made to work, and so even if Communicationism only succeeds in explaining the semantic facts about sentences within our grasp, this is nothing to shrug at. The explanandum here is not a gerrymandered totality of facts accounting for which is a shoo-in. For our knowledge of linguistic meaning is as worthy a target of inquiry as linguistic meaning writ large. If so, then the semantic facts that are the objects of our semantic knowledge are also a worthy target. But those will arguably all be pairings between sentences and contents within our grasp. Thus, an explanation of the totality of semantic pairings between our sentential vocabulary and expressive range is something we should take if we can get it.

2.5 Objections to communicationism

I have given two arguments for Communicationism. But I have not considered any objections to it, so that is what I will do next. The final objection that I consider—the sub-sentential meaning objection—will require an extended discussion to adequately address. That will take place in appendix 2.A.

2.5.1 Individuating languages

Objection 1: “Languages are causal-historically individuated in a way that Communicationism does not respect. Suppose two causally and historically isolated populations always mean the same things by the same sentences and are otherwise disposed in such a way that for some L they both know how to communicate with L. Still, we would treat their languages as distinct, despite this amazing coincidence.”

Reply: If languages are causal-historically individuated, then plausibly linguistic expressions are as well.31 Suppose the ancient Egyptians pronounced their pintail-duck-shaped hieroglyph coincidentally just like we pronounce ‘Justin Trudeau’. These would not be the same word; or at least no etymologist or historical linguist would classify them as such. So the objector’s isolated populations plausibly assign the same meanings to distinct expressions.

Objection 2: “Languages are syntactically, morphologically, and/or phonologically individuated in a way that Communicationism does not respect. It seems two speakers could share the knowledge of how to communicate with L while, say, a syntactic theory true of the language of one speaker was false of the other.”

Reply: If languages are individuated syntactically, phonologically, etc., then plausibly linguistic expressions are as well. If we agree to defer to, say, syntacticians when it comes to individuating languages, then we should do likewise when it comes to individuating expressions on the basis of their syntactic (or phonological, etc.) structure.32 If so, expressions have their syntactic or other structural properties necessarily, and the objector’s alleged counterexample is not possible.

2.5.2 Non-literal communication

Objection 3: “In defending Communicationism, you rely on an account of communication, Linguistic Communication, that is incorrect. It seems to imply that one can only count as communicating with L by uttering a sentence if one thereby means what that sentence ‘literally’ or ‘standardly’ means in L. But what about acts of speech in which we seem to, for instance, communicate with English by uttering S but mean something distinct from what S means in English? And what about acts of linguistic comprehension in which we communicate in English but correctly interpret our interlocutors as meaning things other than what the sentences they utter mean in English? These cases are commonplace: non-literal or loose speech, metaphor, malapropism, idiolectical word usage—the list goes on!”

Reply: There are many options for handling these cases:

31See Kripke 1980 (p. 8 fn. 9) and Kaplan 1990.
32For instance, in minimalist syntax a linguistic expression (i.e. a ‘lexical item’ or ‘syntactic object’) is modelled as a pair of a phonetic form and a syntactic or logical form: ⟨PF, LF⟩. See Chomsky 1995b for discussion (pp. 26–29, 201–202), J. Collins 2008 for an informal treatment (pp. 194–95), and C. Collins and Stabler 2016 for a formal treatment.

(A) I might distinguish ‘communicating-with-English’ from ‘communicating while speaking English’, taking the former to be a theoretical notion stipulatively defined by Linguistic Communication. We would then not question whether Linguistic Communication is true, but instead whether Communicationism can be advanced on its basis. Based on the considerations given above, I think it can; note that these considerations nowhere presuppose that all possible instances of linguistic communication, broadly construed, fall under my technical notion of ‘communication-with-a-language’.

(B) I might interpret the ‘Φ’ variable in Linguistic Communication not as ranging over individual speech acts, but rather as ranging over discourses, or perhaps stretched-out linguistic actions encompassing speakers’ entire talking-and-listening lives. Then the question would arise about whether someone might have a whole linguistic career that was, say, all non-literal English speech. In such a case, it is doubtful that they would be thereby communicating with English, or that they were speaking English at all.

(C) I might model a language like English not as a function that takes sentences to single contents but rather as a function taking a sentence to a set of contents, each such that it is feasible to mean one of those contents by uttering a sentence of English while still counting as speaking English. The rules SpeakL and ListenL could then be modified as follows:

SpeakL: Say S of L only if you thereby mean something in L(S)!

ListenL: If someone says S of L, interpret them as meaning something in L(S)!

(D) I might admit to massive unrealistic idealization. Perhaps it is fruitful to restrict attention to extremely simplified cases of ‘literal linguistic communication’, leaving non-literal communication to be explained either on its basis or in other terms altogether.
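Option (C)’s set-valued languages can be given the same toy treatment as before. The sketch below is illustrative only; the example sentence, its candidate contents, and the function name are my own assumptions, not claims about English:

```python
# A sketch of option (C): model a language as a function from
# sentences to *sets* of contents, any one of which a speaker may
# feasibly mean while still counting as speaking the language.
# Sentences and contents are strings for illustration only.

english_c = {
    "It's chilly in here": {
        "that it is chilly in here",                 # literal content
        "that the hearer should close the window",   # indirect content
    },
}

def complies_with_speak(language, sentence, meant):
    """Modified Speak_L: say S of L only if you thereby mean
    something in L(S)."""
    return meant in language.get(sentence, set())

# Either candidate content conforms to the modified rule:
assert complies_with_speak(english_c, "It's chilly in here",
                           "that the hearer should close the window")

# An unrelated content does not:
assert not complies_with_speak(english_c, "It's chilly in here",
                               "that goats eat cans")
```

The change from the single-content model is just the membership test `meant in L(S)` in place of the equality `L(S) == meant`.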

I find all of these solutions equally attractive.

Moreover, there is no risk that by ignoring, say, metaphorical speech we are ignoring part of the essence of language. I agree with Lewis that we may be excused for ignoring metaphor and other complications for the purposes of metasemantic inquiry because “the phenomenon of language would be not too different if these complications did not exist, so we cannot go far wrong by ignoring them” (1975, p. 171). As I have argued, having English as one’s language requires knowing how to (literally) communicate with English. But it does not require knowing how to speak metaphorically with English. Not every Englishman is a poet in waiting.

2.5.3 Speaker-meaning

Objection 4: “Your defense of Communicationism makes liberal appeal to Grice’s notion of ‘speaker-meaning’, and in advancing Linguistic Communication you take linguistic communication to consist in knowledge of how to follow rules governing acts of speaker-meaning and acts of interpreting others as performing acts of speaker-meaning. But no one has ever successfully said what speaker-meaning is!”

Reply: Speaker-meaning is not a proprietary notion of Grice’s. It is an off-the-shelf resource in the ordinary stock of concepts. So I deny that we need to have handy an analysis of speaker-meaning in order to fruitfully use the notion. We can safely rely on our first-person understanding of what it is to mean something in saying something.

Does my use of speaker-meaning nonetheless require the in-principle reductive analyzability of speaker-meaning in non-semantic terms? It does not. I take the semantic facts that hold relative to a speaker x to hold in virtue of facts about x’s having a particular language L. And I take the fact that x has L to be modally equivalent to, and to ‘wholly consist in’, the fact that x knows how to mean things or to discern that others mean things. The only constraint this places on me is that this practical knowledge is not had by x in virtue of the semantic facts that hold relative to x. One might worry that this constraint cannot be met; I bet it can be. But this is not a concern about the notion of speaker-meaning.

2.5.4 Semantic value versus content

Objection 5: “In defending Communicationism, you treat languages as functions mapping sentences to contents. But the meanings of sentences are not propositions. Falsely equating semantic values with contents is by now retrograde (Lewis 1980, Rabern 2012, and Yalcin 2014). Therefore, Communicationism is not a theory of language-having, if languages are properly thought of as pairings of sentences with their semantic values or meanings.”

Reply: Keeping with tradition, I model languages as functions from sentences to propositions for ease of exposition. I could instead model a language as something like a function from a sentence to something like a Kaplanian character (Kaplan 1989a,b; Braun 1995), a function from a context to a proposition, which is not as clearly distinct from a sentence’s meaning. Everything could have been done at this level of complexity, or higher, with only a loss of lucidity of prose. I had hoped to spare the reader the following:

Linguistic Communication+:

For any speaker x, language ⟦·⟧, and linguistic act Φ, x communicates with ⟦·⟧ in Φing just if in Φing, x follows the rule Speak⟦·⟧ or the rule Listen⟦·⟧.

Speak⟦·⟧: Say S of ⟦·⟧ in context c only if you thereby mean ⟦S⟧^c!

Listen⟦·⟧: If someone says S of ⟦·⟧ in context c, interpret them as meaning ⟦S⟧^c!
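The character-level refinement can also be pictured in the earlier toy style. Here a language assigns each sentence a character, a function from contexts to contents; the example sentence, the context representation, and all names are my own illustrative assumptions:

```python
# A sketch of the context-sensitive refinement: a language assigns
# each sentence a character, i.e. a function from contexts to
# contents. Contexts are dicts and contents strings, purely for
# illustration.

def character_i_am_here(context):
    """A toy character for 'I am here': its content at a context is
    fixed by the context's speaker and location."""
    return f"that {context['speaker']} is at {context['place']}"

language_plus = {"I am here": character_i_am_here}

def complies_with_speak(language, sentence, context, meant):
    """Character-level Speak rule: say S in context c only if you
    thereby mean [[S]]^c."""
    char = language.get(sentence)
    return char is not None and char(context) == meant

c1 = {"speaker": "David", "place": "Cambridge"}
assert complies_with_speak(language_plus, "I am here", c1,
                           "that David is at Cambridge")
```

The only structural change from the proposition-valued model is that `L(S)` must first be applied to the context before comparison, mirroring the superscript ‘c’ in Speak⟦·⟧.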

2.5.5 Meaning without use

Objection 6: “Remember the devastating problem for Lewis’s metasemantics, the ‘meaning-without-use’ problem? Unfortunately, it applies to Communicationism.33 Let English+ be the same as English, except it assigns crazy propositions to the sentences we never use. For instance, take ‘Goats are cute’, which we use. Now insert 10,000 ‘very’s in between ‘are’ and ‘cute’, forming the sentence ‘Goats are very, very, ... very cute’, which has never been used. English+ maps this sentence, and every other unused sentence, let us say, to the proposition that bears tap dance.

Now, it seems that an ordinary English speaker will know how to follow SpeakEnglish+. That is, once we display its logical form:

SpeakEnglish+: Make it the case that: you say S ⊃ you mean English+(S).

For it seems we do know how to make that the case. But if so, then we know how to communicate with English+. And so, given Communicationism, we stand in the actual language relation

33This worry is discussed by Hawthorne 1990, 1993, Lewis 1992, Schiffer 1993, and many others.

to English+, and so, incredibly, ‘Goats are very, very, ... very cute’ means that bears tap dance for us.”

Reply: The problem here stems wholly from the material conditional analysis of rules.34 You do not enforce the rule ‘Assert p only if you believe p!’ by sewing everyone’s mouth shut. You do not follow the rule ‘Wear a seatbelt if in a moving car!’ by bringing all cars to a halt.

A better analysis of the logical forms of Speak and Listen is called for. Perhaps what it is to follow a rule of the form ‘if p, Ψ!’ is in part to endorse a particular practical content, which amounts to being in a particular mental state involving conditional plans or commitments. Or perhaps it involves simply having the intention to Ψ if p. I am not sure what else it might involve. But, in any event, I argue that this problem is orthogonal to the viability of Communicationism as a theory of meaning. An account of rule-following and an accompanying theory of rules themselves should be wanted by all. Whatever the correct logical forms of Speak and Listen are, spelling them out is a matter of semantic, not metasemantic, inquiry.

2.5.6 The subsentential meaning problem

Objection 7: “Communicationism leaves subsentential meaning massively underdetermined. There are infinitely many languages that agree with English on the meanings of its sentences, but that disagree wildly on the assignment of semantic values to subsentential expressions. How is our communicative know-how supposed to ‘single out’ English from this crowd, given that knowing how to communicate with some L requires only that we know how to utter sentences of L and to thereby mean what they mean in L? There is no hope that the correct assignment of meanings to all of the sentences of someone’s language will uniquely fix the correct assignment of meanings to all of their language’s subsentential expressions, as Putnam (1980) clearly showed.”

Reply: It has not been shown that sentence-meaning never fixes expression-meaning. The arguments that expression-meaning is always underdetermined by sentence-meaning assume that sentence-meanings are possible-worlds propositions.35 And we have just admitted above that this is false. It remains to be seen whether, for some languages, an assignment of characters, or two-dimensional intensions, to all of its sentences does uniquely pin down an assignment of its expressions’ semantic values. I interrogate this question in an appendix.

In any event, even if sentence-meaning cannot fix word-meaning no matter how it is construed, we can get a start at accommodating expression-meaning by introducing additional rules the following of which, we might plausibly say, also constitutes communicating with L, such as:

Name-SpeakL: Use name N of L only if you thereby refer to L(N)!

Predicate-SpeakL: Use predicate F of L only if you thereby ascribe L(F)!

(Where L(N) is the individual N refers to in L, and L(F) is the property F expresses in L.) More complicated, recursive rules might be lifted from the unjustly neglected speaker-meaning-based accounts of expression-meaning sketched by Schiffer (1972, pp. 156–66) and Davis (2003, pp. 229–64).36 I do not deny there is a lot more work to be done.37

35The argument given by Putnam 1981 is that an assignment of truth conditions to all of a language’s sentences fails to fix a unique interpretation of its words.
36Though see Szabó 2008 for a penetrating critique of the account in Davis 2003.
37For a selection of serious problems facing neo-Gricean meaning-based accounts of expression-meaning, see Schiffer 1987 (pp. 249–61). For a thorough overview of approaches to word-meaning in linguistics, see Geeraerts 2010.

Appendix

2.A Word-meaning from sentence-meaning

Although fixing the meanings of a language’s sentences does not in general thereby fix the meanings of its expressions, this leaves open whether there might be some languages such that, whenever they are had by a speaker, the word-meaning facts that hold relative to that speaker do supervene on the sentence-meaning facts that hold relative to that speaker. That is, there may be a language L and a speaker x such that there are truths of the form ‘e means m for x’ (for any expression e of L) all of which are entailed by all of the truths about what the sentences of L mean for x.

In fact, there are such languages. Word-meaning does supervene on sentence-meaning in certain conditions. Take a speaker x who has the language L. And suppose the expressions of L have meanings for x. What expressions of L mean for x supervenes on what sentences of L mean for x if two constraints are met.

To state these constraints, we will need to employ the familiar apparatus of two-dimensional possible-worlds semantics.38 Let us say that the proposition horizontally expressed by S—‘the horizontal’—is the set of possible worlds at which S, given its actual meaning, is true. And let us say that the proposition diagonally expressed by S—‘the diagonal’—is the set containing a possible world w just if the proposition horizontally expressed by S in w is true at w. By ‘the horizontal proposition expressed by S in w’, I mean (roughly) the proposition that would be expressed by S if it had its actual meaning while uttered in a context in w. A sentence expresses a necessary diagonal, then, just if it

38Within the taxonomy of Chalmers 2006, I will be presupposing “The Contextual Understanding” of two-dimensionalism, and taking two-dimensional intensions to be what he calls “linguistic contextual intensions” (pp. 67–68), which are similar to Kaplanian characters.

“expresses a truth in every context” (Stalnaker 1978, p. 83). Here are the constraints the meeting of which by x and L guarantees that sentence-meaning fixes word-meaning for x:

2D Constraint: The meanings assigned to sentences by L determine 2D intensions, i.e. L(S) fixes the propositions horizontally and diagonally expressed by S.

Metalinguistic Constraint: For every expression e of L, there is a sentence S of L such that (i) the horizontal proposition expressed by S is the set of worlds {w : in w, e means m(e) for x}, and (ii) the diagonal proposition expressed by S is necessary.

(Where m is a function from expressions of L to their meanings for x.)

The 2D Constraint is satisfied just if the sentence-meanings for x determine which propositions are horizontally and diagonally expressed by sentences for x. And the Metalinguistic Constraint is satisfied just if all the word-meaning truths that hold relative to x are expressed by sentences of L with necessary diagonals. And if both are met, then: the truths of the form ‘e means m for x’ supervene on the truths about what sentences of L mean for x.

Proof: Suppose the 2D Constraint and the Metalinguistic Constraint are met by a speaker x whose language is L. By part (i) of the Metalinguistic Constraint, every word-meaning proposition true for x is expressed by a sentence with a necessary diagonal for x. Sentences with necessary diagonals must express true horizontals in every context. So they express true horizontals in our context, and so express truths in the actual world. So, the truth of those word-meaning propositions is modally guaranteed by the assignment of diagonals to the sentences of L which have them as their horizontals. By the 2D Constraint, the assignment of diagonals to the sentences of L is fixed by what they mean for x. Therefore, what the L-expressions mean for x is entailed by and so supervenes on what the L-sentences mean for x.

But do human speakers with natural languages satisfy 2D Constraint and Metalinguistic Constraint? Many have thought that natural language sentences have meanings that determine or are their two-dimensional intensions.39 So it is not implausible to think that we and our

39See Lewis 1980 (p. 38), Lewis 1994 (p. 299), Lewis 2002 (p. 96), Lewis 2009 (p. 222). See also Kaplan 1989a,b and Chalmers 2006.

languages satisfy 2D Constraint.

And we also seem to satisfy Metalinguistic Constraint. For any expression of English, say, ‘goat’, there is a sentence of English stating its meaning that has a necessary diagonal, namely, ‘ ‘goat’ means goat’. English contains disquotational specifications of word-meaning that have a status akin to apriority; they are ‘Stalnaker a priori’ or true at every context.
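The combination claimed for such disquotational sentences—a contingent horizontal paired with a necessary diagonal—can be pictured in a toy two-dimensional model. The three worlds and the truth-value assignment below are invented purely for illustration; the function names are mine:

```python
# A toy two-dimensional model. A 2D intension maps a pair
# (context world, evaluation world) to a truth value. The horizontal
# at a context is the set of worlds where the sentence, as used in
# that context, is true; the diagonal collects worlds w where the
# sentence, as used in w, is true at w.

WORLDS = ["w0", "w1", "w2"]  # w0 plays the role of the actual world

def horizontal(intension_2d, context_world):
    """Worlds where S, with the meaning it has in context_world, is true."""
    return {w for w in WORLDS if intension_2d[(context_world, w)]}

def diagonal(intension_2d):
    """Worlds w at which S, as used in w, is true at w."""
    return {w for w in WORLDS if intension_2d[(w, w)]}

# An invented 2D intension for a metalinguistic sentence in the
# style of `'goat' means goat`: true whenever used (every diagonal
# cell is True), though its horizontal at w0 is contingent, since
# the word could have meant something else.
meaning_sentence = {
    ("w0", "w0"): True,  ("w0", "w1"): True,  ("w0", "w2"): False,
    ("w1", "w0"): False, ("w1", "w1"): True,  ("w1", "w2"): True,
    ("w2", "w0"): True,  ("w2", "w1"): False, ("w2", "w2"): True,
}

assert diagonal(meaning_sentence) == set(WORLDS)  # necessary diagonal
# As the supervenience proof notes: a necessary diagonal guarantees
# truth at the actual world, so w0 lies in the actual-context horizontal.
assert "w0" in horizontal(meaning_sentence, "w0")
```

The second assertion is the formal heart of the proof above: diagonal necessity forces actual-world truth of the horizontal, whatever the rest of the matrix looks like.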

More familiar sentences of this kind include: ‘ ‘no’ means no’, ‘ ‘forever’ means forever’, ‘ ‘never’ means never’, ‘ ‘stop’ means stop’, ‘ ‘Brexit’ means Brexit’, and so on. These examples owe their familiarity to the fact that they are often used to serve further pragmatic, expressive, and normative ends. But it seems to me that they are able to serve these ends at least in part because the contents literally expressed by these sentences are, on the one hand, clearly (and often importantly) true, and, on the other hand, clearly correctly ascribe the meanings of ‘no’, ‘forever’, ‘never’, ‘stop’, and ‘Brexit’.

Admittedly, sentences like ‘ ‘goat’ means goat’, ‘ ‘the’ means the’, and ‘ ‘if’ means if ’ are quite unfamiliar. But it seems to me that they are syntactically well-formed and true nonetheless, and express contents similar to those expressed by the more familiar examples. I go so far as to say that English contains its own disquotational lexicon.40

How do I intend italicized expressions like ‘goat’ to be understood? In the most natural way such that the meaning of ‘ ‘goat’ means goat’ is as I have described. What comes to mind is that ‘goat’ is associated with something like a metalinguistic reference-fixing description, such that ‘goat’ has its reference fixed by the description ‘the meaning of ‘goat’ ’.41 This would explain why ‘ ‘goat’ means goat’ is a contingent a priori truth: a true sentence horizontally expressing a true contingent proposition while diagonally expressing a necessary proposition. In other words, it would show that ‘ ‘goat’ means goat’ is true at every context. For it is everywhere true that ‘goat’ means the meaning of ‘goat’, or at least at worlds in which ‘goat’ exists and has some meaning or other.

There is much more to say to defend this account of word-meaning. But let us move on

40A similar idea is the disquotational lexicon for Mentalese defended by Fodor and Lepore 1998. Perhaps they are one and the same if our language of thought just is our natural language.
41On reference-fixing descriptions, see Kripke 1972.

in order to state the view more clearly. Restricting attention to speakers of natural languages satisfying 2D Constraint and Metalinguistic Constraint, the view of expression-meaning I propose is:

Lexical Disquotationalism

For any speaker x, expression e, and meaning m, e means m for x just if there is some language L such that the disquotational lexicon of x determines L and L(e) = m.

where the disquotational lexicon of x determines the language L just if: L(e) = m just if there is a sentence S such that for x (i) the horizontal proposition expressed by S is the set of worlds {w : in w, e means m for x}, and (ii) the diagonal proposition expressed by S is necessary.

I have already argued for the right-to-left direction of Lexical Disquotationalism. Here is why we should accept the left-to-right half also. Suppose e means m for x, and x has a natural language satisfying 2D Constraint and Metalinguistic Constraint. And suppose that the right-hand side of Lexical Disquotationalism is false. If so, it must be the case that no sentence of x’s language with a necessary diagonal expresses the proposition that e means m. But this violates the Metalinguistic Constraint. Thus, x’s language would not be a natural language, contradicting our supposition.

The thought is then that the full language or complete semantic interpretation of a speaker just is the function determined by their disquotational lexicon, which is fixed by the facts of sentence-meaning that hold for a particular speaker.

Now, I have taken as paradigmatic truths of word-meaning those truths expressed by true sentences of the form ‘ ‘cow’ means cow’, ‘ ‘goat’ means goat’, etc. But what of semantic truths that bear no resemblance to these? For instance, what of a truth like the following (where y ranges over possibilia and w over possible worlds):

The semantic value of ‘cow’ = {y : ∃w(y is a cow in w)}

Let us suppose that it is indeed an empirical discovery of intensional natural language semantics that ‘cow’ has a semantic value and that it is the set of possible cows. Can my metasemantic account of word-meaning explain what grounds this contingent a posteriori truth?

Yes. Either a word’s semantic value is its meaning, or it is not. If it is, then, given that ‘cow’ means cow, it must be that cow = {y : ∃w(y is a cow in w)}. What we seem to have here, then, is the familiar phenomenon of the same contingent truth expressed in two ways. Under one sentential guise, ‘ ‘cow’ means cow’, the truth is a priori. Under another, ‘ ‘cow’ means {y : ∃w(y is a cow in w)}’, the truth is a posteriori. And the truth beneath the guises—that ‘cow’ means cow—is completely accounted for on my account. This accords well with a conception of semantics as “part of a systematic restatement of our common knowledge” (Lewis 1980, p. 21), knowledge that is also a priori.

On the other hand, if a word’s semantic value is not its meaning, then semantic values must earn their relevance to metasemantics (conceived of as the theory of meaning) some other way. There seem to be two options: either (i) semantic values are somehow determined by meanings, or (ii) they are not. If (i), then semantic values are fixed by sentence-meaning because meanings are. If (ii), the semantic valuation of words must be explained some other way, but it is not the job of the theorist of meaning to do so.

2.B Languages and semantic interpretations

The subject matter of the last two chapters is what it is to have a language. But what is a language? I have used ‘a language’ in the ordinary way, on which it is true to say that my language is English, and that this language is literally shared by many others. Languages like English are things had by speakers, including people and populations. Speakers speak, write, or sign, and they understand and know, the languages they have.

But many have argued that our ordinary notion of a language is defective.42 They hold that ‘languages’, like English, cannot be fruitfully theorized about, or that they are a ‘myth’. Likewise, presumably, for alleged properties such as having English as one’s language. This position and the arguments for it have been thoroughly criticized.43 But in this appendix, I take

42One influential strand of argument traces back to Chomsky 1986, where the notion of an "E-language", like English, is said to be a "dubious one" that plays "no role in the theory of language" (pp. 26, 19–21). Echoes of this argument are found in Ludlow 1999 (pp. 17–18), Ludlow 2006, J. Collins 2008 (pp. 146–48), and Pietroski 2018 (pp. 52–8). And another argument goes back to Davidson's 1986 shocker: "I conclude that there is no such thing as a language" (p. 174). 43Defenses of the reality and scientific legitimacy of languages are given by Wiggins 1997, Lin 1999, Stainton

a different approach to defending the ordinary notion of a language. I will argue that a speaker's language supervenes on their semantic interpretation function, and that this supervenience is such that if the notion of a speaker having or 'using' a semantic interpretation function is in good working order, then so is the ordinary notion of a language. Those who critique the ordinary notion of a language typically take the notion of a speaker's semantic interpretation function to be worthy of theorizing about. So, if I am right, then their dismissals of ordinary languages are out of place.

For example, Chomsky claims that every "serious approach to the study of language departs from common sense usage" of 'language', "replacing it by some technical concept" (1997, p. 5). This is accurate; by 'a language', many, including myself in the last two chapters, follow Lewis in meaning a semantic interpretation function, a function from expressions to meanings. And this includes Chomsky: "[Lewis's] characterization of a "language" as a pairing of sound and meaning over an infinite domain is traditional and reasonable as a point of departure" (1980, p. 82).

But, strictly speaking, languages like English, German, and French are not identical to or one-one with semantic interpretation functions. What means what in English, as well as which expressions are included in English, changes across time. And so two speakers might share English as their language while no single function represents how they 'pair' expressions with meanings. But so long as Chomsky, Lewis, and their followers—including skeptics about ordinary, public languages—accept that speakers have or use particular semantic interpretations, then they are committed to accepting that speakers also have and use particular languages in the ordinary sense. I establish this by showing that, and how, language-having supervenes on interpretation-use.
An account of what fixes interpretation-use will be an account of what fixes language-having. The argument has two main premises:

(1) Facts about speaker-relative meaning supervene on facts about interpretation-use.

(2) Facts about language-having supervene on speaker-relative semantic facts.

2011, 2016, and Pereplyotchik 2017 (pp. 45–68).

If (1) and (2) are true, then, by the transitivity of supervenience, language-having supervenes on interpretation-use.

2.B.1 Speaker-relative meaning supervenes on interpretation-use

Start with the notion of a semantic interpretation (hereafter, an ‘interpretation’) as introduced above, a pairing of expressions with meanings. An interpretation is a function, I, mapping linguistic expressions to meanings or semantic values. I interprets expression e as having the meaning m just if I(e) = m. Speaker-relative semantic facts can be analyzed in terms of speakers having, using, or standing in ‘the actual interpretation relation’ to, interpretations. Following Lewis (1975), I take the following as analytic:

(L) For any expression e, meaning m, and speaker x, e means m for x just if x uses some I such that I(e) = m.

Given (L), there can be no change in the speaker-relative semantic facts without a change in the facts about which speakers use which interpretations. Suppose two possible speakers are twins with respect to which interpretations they use. By (L), whichever expressions have whichever meanings for the one they will also have for the other, and vice versa. (L) entails (1): that facts about speaker-relative meaning supervene on facts about interpretation-use. With greater precision: Let S be the set of speaker-relative semantic properties, properties of the form ⌜λx(e means m for x)⌝. Let U be the set of interpretation-use properties, properties of the form ⌜λx(x uses I)⌝. For a set of properties X, x and y are X-twins just if ∀F ∈ X : F(x) just if F(y). Now, if (L) is true, this follows:

(1) For any possible worlds w and w′, and for any speakers x and y, if x in w is a U-twin of y in w′, then x in w is an S-twin of y in w′.

In other words, S strongly supervenes on U. Here is why (omitting world-variables for simplicity): Suppose x and y are U-twins. So, for any interpretation I, x uses I just if y uses I. If some e means m for x, then, by (L), x must use some I mapping e to m; but then, being a U-twin of x, y must also use some I mapping e to m. So, by (L), e means m for y as well. Thus, if e means m

for x, then e means m for y; and, by the same reasoning, vice versa. Thus, x and y are S-twins. Therefore, (L) entails (1).

Having adopted (L) as an analytic truth, I likewise adopt (1).
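The derivation of (1) from (L) can also be checked mechanically on a toy model. In the Python sketch below, interpretations are finite expression-to-meaning tables and the speakers and their interpretation-use facts are hypothetical invented data; `means_for` encodes (L), and the two predicates correspond to U-twins and S-twins (world-variables omitted, as above).

```python
# Interpretations are functions from expressions to meanings; model them
# as hashable tuples of (expression, meaning) pairs. (Toy data only.)
I1 = (("chevre", "goat"), ("barbiche", "goatee"))
I2 = (("cow", "cow-meaning"),)

# Which interpretations each (hypothetical) speaker uses.
uses = {"x": {I1, I2}, "y": {I1, I2}, "z": {I2}}

def means_for(e, m, speaker):
    """(L): e means m for a speaker just if they use some I with I(e) = m."""
    return any(dict(i).get(e) == m for i in uses[speaker])

def u_twins(a, b):
    """a and b use exactly the same interpretations."""
    return uses[a] == uses[b]

def s_twins(a, b, expressions, meanings):
    """Every e means m for a just if e means m for b."""
    return all(means_for(e, m, a) == means_for(e, m, b)
               for e in expressions for m in meanings)
```

Any two U-twins come out as S-twins, as the derivation from (L) predicts, while speakers who differ in interpretation-use (like z below) may differ semantically.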

2.B.2 Language supervenes on speaker-relative meaning

There can also be no change in which speakers have which languages without a change in the speaker-relative semantic facts. So language-having supervenes on the semantic facts.

It is not possible for two speakers to have different languages (or for one to have a language while the other lacks any language) while perfectly 'agreeing' about what means what, i.e. while e means m for the one just if e means m for the other, for every e and m; that is, while they are 'semantic twins'. In other words, if a possible speaker has a language distinct from English, then the semantic facts that hold relative to us English-speakers cannot be exactly the same as the semantic facts that hold relative to them. For if they were the same, in what could the distinction between our languages consist?

Suppose, for example, that Phil and Jim are semantic twins. Now suppose, for reductio, that they are not 'language twins'. There must then be some language that one has but that the other lacks. Suppose Phil's language is French but Jim's is not. So 'chèvre' means goat for both of them; for both, 'chevrière' means goatherd, 'barbiche' means goatee, and so on through the entire French lexicon. They have the same vocabulary and the same 'dictionary', so to speak, and yet somehow have different languages. This, I submit, is absurd. How could French not be Jim's language?44 This alleged possibility seems like a skeptical one. Is there a far-out possible world in which your vocabulary and its meanings are held fixed, but in which your language is not English? I suspect not. So I do not acknowledge such worlds, and I endorse (2): that facts about language-having supervene on speaker-relative semantic facts.

In more detail, and letting H be the set of language-having properties of the form pλx(x has L)q, I endorse this second strong supervenience thesis:

44One might think that Jim and Phil might be causal-historically disconnected in such a way that we could not count their languages as the same, languages being historically individuated. But I think this is a mistake, for if languages are individuated in this way, then expressions are too, such that no two people so disconnected could assign the same meanings to the same expressions. See 2.5.1.

(2) For any possible worlds w and w′, and for any speakers x and y, if x in w is an S-twin of y in w′, then x in w is an H-twin of y in w′.

2.B.3 Language supervenes on interpretation-use

By the transitivity of supervenience, (1) and (2) entail that language-having supervenes on interpretation-use:

(3) For any possible worlds w and w′, and for any speakers x and y, if x in w is a U-twin of y in w′, then x in w is an H-twin of y in w′.

There can be no change in your language without a change in which interpretation you use. And we can say more. Having a language requires that some expression has some meaning for the language-haver; you cannot have a language if no linguistic expressions mean anything to you. If so, then, by (L), it follows that having a language requires using some interpretation. From this plus (3), a different supervenience thesis connecting language-having and interpretation-use follows:

(4) Necessarily, if someone has a language L, then there is some interpretation I such that (i) they use I, and (ii) that they use I entails that they have L.

Here is how we get from (3) to (4): Suppose that (4) is false; so suppose that there is a possible world w in which a speaker x has an H-property, F, but does not have any U-property that entails F. We have agreed that x must have some U-property in w. So we must suppose that x has some U-property G that does not entail F; if so, then there is some world w′ in which some speaker y has G but not F. But this is inconsistent with (3): x in w is a U-twin of y in w′, but they are not H-twins. So (4) must be true, given (3) and the claim that having a language requires that some expression is meaningful for one. Here I mimic Kim's (1987) argument that the (3)-type supervenience of a set of properties A on the set B entails the (4)-type supervenience of A on B (p. 317). McLaughlin 1995 rightly denies that this holds generally, for the (3)-type supervenience of A on B does not imply that necessarily "if something has an A-property [...] then it has some B-property", as is required by

(4)-type supervenience (p. 27). But having an H-property does require having some U-property, so Kim's argument goes through in our case.

I have shown, by arguing for (4), that language-having strongly supervenes on interpretation-use.45 Strong supervenience relations have interesting metaphysical upshots. For instance, (4) entails that all instances of language-having supervene on instances of interpretation-use. It also entails that having a language L is modally equivalent to using some I from a particular set of interpretations. This means that for every language L, there is a set of interpretations the use of which realizes having L; call these the 'L-interpretations'. So, there is a set of interpretations the use of which realizes having English as one's language: the English-interpretations. Generally, for any ordinary language L, having L is modally equivalent to using some L-interpretation.

2.B.4 Explaining the supervenience of language-having

Taken on their own, supervenience relations arguably raise more interesting metaphysical questions than they answer. In particular, whenever one set of properties supervenes on another, this cries out for explanation.46 So why does language-having supervene on interpretation-use? More specifically, can we explain why using an L-interpretation entails having L? Can anything else be said about the L-interpretations for a language L? Is there anything that makes them relevantly similar other than the fact that using them entails having L? Yes.

If Communicationism is true, an interpretation I is an English-interpretation if and only if using I entails knowing how to communicate in English. And so we might say that the use of an English-interpretation realizes having English because using an English-interpretation entails knowing how to communicate in English, or, rather, because of some fact 'behind' that entailment. This entailment—and entailments between interpretation-use and communicative know-how generally—is most straightforwardly explicable, I think, if we maintain that interpretation-use is also a matter of communicative know-how; that is, if we maintain that:

Interpretation Communicationism:

45This is what Kim 1984a calls "strong supervenience" (p. 165), and what McLaughlin 1995 calls "modal-operator strong supervenience" (p. 25). 46See 4.3.2, particularly the material leading up to and including fn. 18.

x uses I just if x knows how to communicate with I,

where what it is to communicate with an interpretation is understood in the now familiar way:

Communication with an Interpretation:

x communicates with I in Φing just if, in Φing, x follows SpeakI or ListenI.

SpeakI: Say S of I only if you thereby mean I(S)!

ListenI: If someone says S of I, interpret them as meaning I(S)!

For then we can say that whenever knowing how to communicate with I entails knowing how to communicate with L, I is an L-interpretation. But when and why does knowing how to communicate with a particular interpretation entail knowing how to communicate with a particular language? The answer to this question will give us the full explanation of what it is that the various L-interpretations have in common and in virtue of which they are each such that using them entails having L. To get a feel for a potential answer, suppose I∗ is an English-interpretation. And suppose Floyd knows how to communicate with I∗ by virtue of knowing how to follow this rule:

SpeakI∗: Say S of I∗ only if you thereby mean I∗(S)!

Because I∗ is an English-interpretation, the fact that Floyd knows how to follow SpeakI∗ must entail that he knows how to follow SpeakEnglish. So what we have now is an entailment between the following two properties:

KI∗ = knowing how to follow SpeakI∗

KEnglish = knowing how to follow SpeakEnglish

So KI∗ entails KEnglish. But why?

Might this be because the rule SpeakI∗ itself entails the rule SpeakEnglish? Or because following one entails following the other? No. The rule 'Pick up all rocks!' entails 'Pick up that rock!', even though one might know how to follow the former but not the latter, and even though one might be following the former while not following the latter.

Alternatively, we might try to explain why KI∗ entails KEnglish by helping ourselves to the account of the truth conditions of know-how ascriptions defended in Stanley and T. Williamson 2001, 2017. On their view, roughly, knowing how to ϕ is equivalent to knowing of some 'way' W that W is a way to ϕ. If this is right, then our two properties might be redescribed as follows:

KI∗ = knowing that W is a way to follow SpeakI∗ for some way W

KEnglish = knowing that W′ is a way to follow SpeakEnglish for some way W′

And then one might try to explain why KI∗ entails KEnglish by saying that, necessarily, every way to follow SpeakI∗ is a way to follow SpeakEnglish. But this is implausible when extended into a general account of why using an L-interpretation entails having L. This is because, for a language L, some L-interpretation I will be such that there are ways to follow SpeakI that are not ways to follow SpeakL, i.e. by uttering sentences that differ in meaning on I versus L.

Another proposal is that KI∗ entails KEnglish because having the former bit of know-how requires having the latter in the way that knowing how to play chess requires knowing how to move a pawn, where knowing how to move a pawn is somehow partly ‘constitutive’ of knowing how to play chess. Unfortunately, this also seems to be the wrong kind of explanation. Knowing how to communicate with English is not in some sense ‘part of’, or a ‘prior’ requirement of, knowing how to communicate with a particular English-interpretation.

In light of the failures of these proposals, what should we say? Well, it might have seemed plausible that if KI∗ entails KEnglish, this must be because the ‘object’ of the former state of know-how entails the object of the latter. But we have seen that this does not hold on the standard ‘intellectualist’ account of know-how as having propositional objects. And we have also seen that this does not hold on any account on which the object of know-how is something like an act-type.47

But another strategy for explaining why knowing how to φ entails knowing how to ψ is to point out some similarity between φing and ψing. For example, let ‘Trump-1’ and ‘Trump-2’ refer to indiscernible trumpets. Plausibly, knowing how to play Trump-1 entails knowing how

47For communicating with I∗ does not entail communicating with English.

to play Trump-2, due to their exact similarity.48 And there is a range of modifications we could make to Trump-2 while preserving this entailment. But obviously we cannot modify Trump-2 too much. What makes the difference? Where is the cut-off? It will presumably be indeterminate. But it will be fixed by the character of whatever it is in virtue of which one knows how to play Trump-1. The basis of this know-how will 'extend' to Trump-2 even if it is modified in a variety of ways.

A similar story should be told, I think, about the entailment between KI∗ and KEnglish. Whatever it is in virtue of which one has KI∗, that ground will also make it the case that one has KEnglish. In other words, and more generally, the basis of knowing how to communicate with an English-interpretation I will also be the basis for knowing how to communicate with English. These practical capacities will have 'one source'. The basis of knowing how to communicate with an English-interpretation will 'extend' to knowing how to communicate with English even if it is modified in a variety of ways. But if this basis or source is varied too much, such that it gives rise to knowledge of how to communicate with some interpretation too wildly different from English, then it will cease to ground knowing how to communicate with English. What makes the difference? This will be a thoroughly indeterminate affair. And that is how it should be. For it is not determinate what means what in English, after all.

48Or at least knowing how to play Trump-1 entails knowing how to play Trump-2 if Trump-2 exists.

Chapter 3

Is meaning cognized?

3.1 Introduction

We understand. We often encounter a sentence of our language and instantly come to know what it means. Upon reading 'Bears fly' we know it means that bears fly. But how? One view says we unconsciously deduce its meaning, starting from knowledge of what 'bears' and 'fly' mean, and of how the meanings of sentential wholes are determined by their parts' meanings. In more detail, this view says knowledge of sentence-meaning is based on knowledge of semantics: knowledge of a finite base of axioms and composition rules the theorems of which assign sentences meanings. This knowledge is tacit; we unconsciously 'cognize' the semantics of our language.1 So, on this view, meaning is cognized: we cognize the meanings of basic expressions, on the basis of which we can know the meanings of complex expressions. Call this view 'Cognitivism'.

I propose an alternative view, Disquotationalism, on which we know what sentences mean as a result of performing a mental operation similar to disquotation.2 On this view, we interpret a perceived sentence, like ‘Bears fly’, by using it to mentally fill in an incomplete belief of the form ‘ ‘Bears fly’ means that ’. A belief of the form ‘ ‘Bears fly’ means that bears fly’ results, which is guaranteed to be true, and so is formed in a reliable way, and so makes for knowledge. We thus come to know what ‘Bears fly’ means, on this view, without deducing it

1This is Chomsky's 1980 preferred term for tacit linguistic knowledge. 2This thesis about the nature of linguistic understanding should not be confused with any other thesis that goes by the name 'disquotationalism'.

from cognized meanings. The disquotationalist can explain how we do this by claiming (i) that there is a language of thought M (for 'Mentalese') tokens of which realize beliefs and (ii) that our minds can map our language into M while preserving meaning.3,4 For then we can say that, when we perceive 'Bears fly', a token of the M-correlate of ' 'Bears fly' means that ' is filled in with what we get by mapping 'Bears fly' into M. The result is a token of the M-correlate of ' 'Bears fly' means that bears fly', which realizes in us the belief that 'Bears fly' means that bears fly.

Defending the disquotationalist account of our knowledge of sentence-meaning is the aim of this chapter. Here is the plan. In section 3.2, I explain why Disquotationalism's main rival, Cognitivism, is held to be true. In section 3.3, I argue that the case for Cognitivism is undermined by the viability of Disquotationalism. In section 3.4, I argue that Disquotationalism is superior to alternative forms of non-cognitivism. In section 3.5, I go through some advantages that Disquotationalism has over Cognitivism. And then lastly, in section 3.6, I consider and respond to objections to Disquotationalism.

3.2 Why meaning is said to be cognized

Why think meaning is cognized? Equivalently, why believe in ‘semantic cognizing’? The answer is that semantic cognizing is posited to explain how we systematically and reliably know the meanings of novel sentences we encounter. But how does it explain this? And what exactly is the target explanandum? Recall the following sentence, which I trust you had never encountered before:

(1) Bears fly.

Upon reading (1) you immediately know that (1) means that bears fly. In general, letting the function L map English sentences to their meanings, the following is an amazing fact about

3Either because M just is our natural language, and so the mapping is just a rewriting procedure, or, if M is not our native natural language, because our mind can ‘translate’ our natural language into M. 4Disquotationalism is a descendant of another form of non-cognitivism, translationism (or ‘transductionism’), proposed and developed in detail by Schiffer 1987, which also aims to explain our capacity to know what sentences mean in terms of (i) and (ii). I compare disquotationalism with Schiffer’s proposal in section 3. A similar proposal can be found in Fodor 1983, 1987, Devitt and Sterelny 1999, and Devitt 2006. See Lepore 1997 and K. Johnson and Lepore 2004 for discussion.

normal English speakers:5

Understanding: Normally, when we perceive S, we know that S means L(S).

Understanding can be explained if we all cognize a compositional semantics for English, that is, if Cognitivism is true:

Cognitivism: A human has a language L just if they cognize a grammar for L.

That is, assuming that a grammar contains a compositional semantic theory as its ‘semantic component’.6

For if Cognitivism is true, we have psychological access to information on the basis of which we can, by (sub-personal) deduction, know the meaning of any English sentence. And if we have such epistemic access to the meaning of any S , then, normally, when we perceive S , we will be in a position to know what S means.

So semantic cognizing is an explanation of Understanding. But we should believe in semantic cognizing only if it best explains it. For if the best explanation does not appeal to semantic cognizing, then we lack sufficient evidence to believe in it. After all, there is no direct, non-abductive reason to believe in semantic cognizing. If there is such a thing, it is invisible to introspection. And it is no part of commonsense 'folk psychology'.

To get a sense of why semantic cognizing is thought to best explain Understanding, consider this alternative explanation: We always already know what each sentence we witness means, without ever compositionally deducing this knowledge. On this view, we cognize a 'listiform' semantic theory with an axiom for each sentence we can understand specifying its meaning.7 This view may be doomed if there are infinitely many sentences we can understand,

5For now, for ease of exposition, let 'English' refer to a simplified fragment of English that lacks the complexities of ambiguity, indexicality, context-sensitivity, non-declarative moods, and the like; and let us assume the meanings of English sentences are propositions. 6Advocates of the conjunction of Cognitivism and the view that grammars contain compositional semantic theories, or of something close enough to this to be guilty by association, include Higginbotham 1983, 1991, Davies 1987, Peacocke 1989, Larson and Segal 1995, Heim and Kratzer 1997, Lepore 1997, Platts 1997, Ludlow 1999, 2011, Segal 2006, Lepore and Ludwig 2005, 2007 (arguably; see Pagin 2012 (pp. 53–55)), Yalcin 2014, Glanzberg 2014, and Napoletano 2017. 7See Davies 1987 for the distinction between "listiform" and "structured" semantic theories (pp. 441–42).

for it is doubtful that finite beings could cognize or come to cognize an infinitary listiform semantics. But even if our linguistic comprehension only extends to finitely many sentences, this view implausibly predicts that, for any two sentences of our language S and S′, our capacity to understand S is, in a sense, independent from our capacity to understand S′. Consider:

(I) Goats eat cans.

(II) Goats eat cans and clothes.

If semantic knowledge is based in cognizing a listiform semantics, we could 'erase' the axiom for (I) from our minds while leaving the axiom for (II). But our knowledge of the meaning of (II) is not independent in this way from our knowledge of the meaning of (I). Our capacities for understanding (I) and (II) are intertwined and plausibly flow partly from the same source. Our linguistic comprehension is 'structured'.8

Unlike the listiform theory, the best explanation of Understanding must accommodate the structure of our linguistic comprehension and its finitely grounded yet (potentially) infinite scope. Semantic cognizing not only accommodates these phenomena, it also explains them. It accommodates the structure of comprehension because, if we know the meaning of (II), say, by virtue of cognizing the meanings of its constituents, we thereby cognize the meanings of the constituents of (I) as well; and so we must also know the meaning of (I). In this way, semantic cognizing explains why our capacities to understand sentences with overlapping constituents are intertwined. And because we finite beings can in principle cognize finitely specifiable compositional semantic theories from which infinitely many semantic facts follow, semantic cognizing accommodates the finitely grounded yet infinite scope of comprehension.9 So, if we cognize such a semantics for English, this explains why we can know a potential infinity of semantic propositions. For these reasons, semantic cognizing is said to best explain Understanding.10
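The contrast between listiform and structured comprehension can be made vivid with a toy model. In this Python sketch (the meanings are stand-in strings and the mini-lexicon is invented), a listiform semantics is a sentence-indexed table whose entries can be erased one by one, while a compositional semantics derives both (I) and (II) from one shared lexicon, so the two capacities cannot come apart in the same way.

```python
# Listiform semantics: one independent axiom per sentence (toy data).
listiform = {
    "Goats eat cans": "GOATS-EAT-CANS",
    "Goats eat cans and clothes": "GOATS-EAT-CANS-AND-CLOTHES",
}
# 'Erasing' the axiom for (I) leaves the axiom for (II) untouched:
# the implausible independence objected to above.
del listiform["Goats eat cans"]

# Compositional semantics: sentence meanings derive from a finite lexicon.
lexicon = {"goats": "GOATS", "eat": "EAT", "cans": "CANS",
           "and": "AND", "clothes": "CLOTHES"}

def interpret(sentence):
    """Compose word meanings; fails if any constituent is uncognized."""
    return [lexicon[w] for w in sentence.lower().split()]
```

Here understanding (I) and (II) flows from the same lexical source: deleting 'goats' from the lexicon would disrupt `interpret` for both sentences at once, mirroring the intertwining of our comprehension capacities.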

8See Davies 1981, p. 53–57. 9As Chomsky has repeatedly emphasized, language seems to make “infinite use of finite means” (1965, p. v). By ‘infinite use’, what he has in mind seems to encompass Understanding: “What [is] the “infinite use”? People who know a language can produce and understand sentences that they have never heard and that do not closely resemble any they have heard, and this capacity has no bounds” (1982, p. 15). 10For more on the case for Cognitivism, see 1.3.1.

3.3 Against semantic cognizing

But semantic cognizing is not the best explanation of Understanding. And so we are not forced to postulate it. There is a rival explanation that makes no appeal to semantic cognizing, namely:

Disquotationalism: When we perceive S , we know what S means by disquoting it,

where to disquote S is to form a disquotational belief about what S means, i.e. a belief realized

by a mental representation of the form ⌜'S' means that S⌝ that results from a mental process analogous to removing quote-marks. We can mimic this process on paper. Without knowing what 'Goats eat cans' means, one can still correctly fill in ' 'Goats eat cans' means that ', writing a sentence expressing what 'Goats eat cans' means, if one follows a simple disquoting rule: take the linguistic material inside the quote-marks and rewrite it in the blank.

Disquotationalism says, roughly, that this is how our brain executes linguistic comprehension. Without 'knowing' what sentences mean, the brain can still form true semantic beliefs by following a disquoting rule; that is, so long as the brain's task of belief-formation is analogous enough to sentence-formation.

I will assume that they are analogous, spelling out Disquotationalism with help from the language of thought hypothesis. For my purposes, a minimal version of this theory will do. It states that certain of our propositional attitudes, like belief, are realized by structured mental representations or sentence-like neural formulae in our heads. More precisely, on this view, when someone believes that p, there is a neural sentence-token t such that (i) t is tokened somewhere in their head/mind (in their 'belief-box'), (ii) t bears the content that p, and (iii) the tokening of t in their belief-box realizes their belief that p.

I will delay exploring how to implement Disquotationalism without appeal to the language of thought until towards the end of the chapter.11 But for now it is worth pointing out that even if the disquotationalist is wedded to the language of thought hypothesis, this is a commitment plausibly shared by the cognitivist. For it is most natural to make sense of

11I consider such an implementation below, in 3.6.7, on the assumption of an information-theoretic, causal- pragmatic account of belief like the one defended in Stalnaker 1984, 1999.

semantic beliefs resulting from sub-personal deductions by envisioning this as unfolding at the level of computations implemented by tokenings of Mentalese.

I will also assume, just at the start, that a human’s language of thought incorporates or is the same language as their native natural language.12

3.3.1 Forming semantic beliefs by disquotation

The disquotationalist can explain Understanding by saying that we possess a belief-forming mechanism that follows this 'writing rule': If a sentence containing a token of the form ⌜'S'⌝—a quote-name of a sentence S of our language—is written in the belief-box, then write in the belief-box the sentence ⌜'S' means that S⌝! Call this rule 'Disquote':

Disquote: If ⌜... 'S' ...⌝ is written, then write ⌜'S' means that S⌝!

If we possess a belief-forming mechanism following Disquote, then we can explain how we can know what any sentence of our language means.
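The paper-and-pencil analogy above can be simulated directly. The Python sketch below is a toy stand-in for the Mentalese story, not part of the theory itself: the 'belief-box' is a list of sentences, quote-names are marked with single quotes, and `disquote` implements the writing rule by copying the quoted material into the that-clause.

```python
import re

def disquote(beliefs):
    """Follow Disquote: for each belief containing a quote-name 'S',
    write the belief "'S' means that S" (decapitalizing S in the
    that-clause)."""
    written = list(beliefs)
    for belief in beliefs:
        # Find every quote-name 'S' occurring in the belief.
        for s in re.findall(r"'([^']+)'", belief):
            written.append(f"'{s}' means that {s[0].lower()}{s[1:]}")
    return written

# The belief-box after perceiving the encyclopedia entry:
belief_box = ["'Goats eat cans' is written in the Encyclopedia Britannica"]
belief_box = disquote(belief_box)
```

Fed the belief that 'Goats eat cans' is written in the Encyclopedia Britannica, the mechanism writes the new belief ' 'Goats eat cans' means that goats eat cans', which is guaranteed true by construction.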

Let me explain with an example. Imagine John. He perceives that ‘Goats eat cans’ is written in the Encyclopedia Britannica, causing him to know and so believe that ‘Goats eat cans’ is written there. His perceiving causes this belief by causing a tokening of this sentence

12This view is argued for, or at least taken quite seriously, by Harman 1973b (pp. 84–111), 1970, 1975; Carruthers 1996 (pp. 40–72), at least for "conscious thinking" (p. 72); Ludlow 1999 (see Appendix P1, pp. 165–69); Devitt 1981 (pp. 75–80), 1996 (p. 158, fn. 13), 2006 (pp. 149–152); and Devitt and Sterelny 1999 (pp. 140–46). See also Dupre 2020 for recent discussion, particularly of this view's implementation by Hinzen 2006, 2011 and Chomsky 2015, 2017. It is admittedly unclear what it means to say that our language of thought 'is the same language as' our natural language. One interpretation of this requires identifying natural language sentence-types with abstract objects to which sounds, marks, and neural events can stand in the 'is a token of' relation. We might follow Chomsky 1995b and take a sentence-type to be a phonetic form, logical form pair, ⟨PF, LF⟩. We might then say that an entity t—of any substrate—is a token of type ⟨PF, LF⟩ just if t's logical form is LF and its phonetic form is PF. Then if we can make sense of neural goings-on having LFs and PFs, we can make sense of how our language of thought is the same as our natural language; more specifically, we can say: if x's language is L, then x's language of thought is L only if x's mental events/states are realized by tokens of sentence-types of L, i.e. by neural events/states with LFs and PFs corresponding to sentence-types of L. Can neural goings-on have natural language LFs and PFs? This turns on the metaphysics of syntax and the under-studied metaphysics of phonology. In virtue of what does a sentence-token t have its LF and its PF? The answer for sounds and marks might differ from the answer for neural goings-on. Perhaps a Mentalese sentence-token, or sentence-tokening, will have its LF in virtue of its causal powers vis-à-vis other Mentalese sentence-tokens (other neural events). And perhaps a Mentalese sentence-token will have its PF in virtue of phonological or phonetic facts about how the mental state to which it gives rise would be externalized by the subject in speech. At the end of the day, these issues do not need to be settled here; in section 2.2.1, everything will be restated assuming the alternative view that our language of thought is not our native natural language.

in his belief-box: ‘ ‘Goats eat cans’ is written in the Encyclopedia Britannica’. So a token t of this sentence somewhere in John’s head realizes in him the belief that ‘Goats eat cans’ is written in the Encyclopedia Britannica.

Next, John’s Disquote-following mechanism detects t, and that it contains a quote-name of a sentence of John’s language, namely, ‘ ‘Goats eat cans’ ’. The mechanism then writes in John’s belief-box a token of the sentence I will call Sent:

Sent = ‘ ‘Goats eat cans’ means that goats eat cans’

The mechanism’s detection of t can be achieved in a ‘brute-causal’ way. John might be hardwired such that t’s being written directly causes a tokening of Sent. So someone might count as having a belief-forming mechanism that ‘follows’ Disquote because they are hardwired such that a Mentalese tokening in them containing a quote-name of S directly causes in them a

tokening of the form ⌜‘S’ means that S⌝. Now, at the end of this process, the tokening of Sent realizes in John the true belief that ‘Goats eat cans’ means that goats eat cans. And this belief plausibly counts as knowledge. For it is arguable that it is epistemically safe: it could not easily have been false, as I will argue.13 And safe true belief plausibly suffices for knowledge.
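For concreteness only, here is a toy simulation of such a brute-causal Disquote-following mechanism. It is my own illustrative sketch, not anything proposed in the text: Mentalese sentences are modeled as strings, the belief-box as a list of them, and quote-name detection as a simple pattern match.

```python
import re

# Toy model of a brute-causal Disquote-following mechanism (my own
# illustrative sketch). Mentalese sentences are modeled as strings and
# the belief-box as a list of them.
class Believer:
    def __init__(self):
        self.belief_box = []

    def write(self, token):
        """Token a Mentalese sentence in the belief-box; the Disquote
        mechanism then fires 'brute-causally' on the new token."""
        self.belief_box.append(token)
        self._disquote(token)

    def _disquote(self, token):
        # Detect a quote-name of a sentence S inside the token, and
        # directly cause a tokening of the form: 'S' means that S.
        match = re.search(r"'([^']+)'", token)
        if match:
            s = match.group(1)
            # Lowercasing the first letter of the disquoted side mimics
            # sentence-initial capitalization; nothing hangs on it.
            self.belief_box.append(f"'{s}' means that {s[0].lower() + s[1:]}")

john = Believer()
john.write("'Goats eat cans' is written in the Encyclopedia Britannica")
print(john.belief_box[-1])  # 'Goats eat cans' means that goats eat cans
```

The point of the sketch is that nothing in `_disquote` consults a semantic theory: the transition from detecting a quote-name to writing the meaning-belief is purely formal.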

I will not argue for this reliabilist principle.14 But it is safe to assume. It is no less of a mystery than Understanding that we are always immediately able to form safe true beliefs about what sentences of our language mean. So even if safe true belief does not suffice for knowledge, I have still offered an explanation of a no doubt important aspect of our linguistic comprehension.

3.3.2 Semantic beliefs formed by disquotation are epistemically safe

Why think that John’s semantic belief is safe? Consider the possible worlds near John’s scenario. At all of them, (2) is true, as caused by John’s Disquote-following mechanism:

13This safety-based strategy for arguing that forming beliefs by following a certain rule results in knowledge is inspired by Byrne 2005, who argues that beliefs about what we believe formed by following the world-to-mind inference rule ‘If p, then I believe p’ count as knowledge (pp. 96–98). See also Byrne 2018 (pp. 109–12, 116–17). 14I lack the space to argue adequately for it here, as the literature on epistemic safety is now vast. Loci classici arguing that safe true belief is equivalent to knowledge are Sosa 1999a,b and T. Williamson 2000.

(2) John believes that ‘Goats eat cans’ means that goats eat cans.

And so, at all nearby worlds, the following explanation of (2) holds:

(3) (2) is true because there is a token of Sent in John’s belief-box that means that ‘Goats eat cans’ means that goats eat cans.

And if (3) is true at all nearby worlds, then so is (4):15

(4) Sent means that ‘Goats eat cans’ means that goats eat cans.

And if (4) is true at all nearby worlds, then so is (5):

(5) ‘Goats eat cans’ means that goats eat cans.

Here is why. Let w be a nearby world, at which (4) is true. Now suppose for reductio that (5)

is false at w. Then, at w, ‘Goats eat cans’ must mean some proposition q ≠ the proposition that goats eat cans.16 But then, at w, Sent must mean that ‘Goats eat cans’ means q!17 But if so, then (4) is false at w—contradiction. So (5) is true at w if (4) is. And so (5) is true at all nearby worlds. Therefore, the belief John forms by following Disquote is safe, and so plausibly counts as knowledge.
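The shape of this safety argument can be summarized in notation (mine, for summary only), writing N for the set of worlds near John’s scenario:

```latex
% Summary of the safety argument (my own notation).
% Premise: the mechanism guarantees (4) throughout the nearby worlds N.
\forall w \in N : \text{(4) is true at } w
% Reductio at an arbitrary w in N: if (5) failed at w, `Goats eat cans'
% would mean some q distinct from the proposition that goats eat cans,
% so Sent would mean that `Goats eat cans' means q, contradicting (4).
\forall w \in N : \big( \text{(4) is true at } w \Rightarrow \text{(5) is true at } w \big)
% Hence (5) holds throughout N: John's belief could not easily be false.
\forall w \in N : \text{(5) is true at } w
```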

3.3.3 Semantic knowledge by disquotation

It is arguable, then, that one can come to know what a perceived sentence means by possessing a belief-forming mechanism that follows Disquote. That is, so long as in every situation in which we know the meaning of a perceived sentence, we have some ‘prior’ belief or thought about that sentence realized by a Mentalese sentence containing a Mentalese name of that sentence. For without such a belief, the Disquote-following mechanism would not be triggered. This is what I stipulated to happen in John’s case, and I propose that this is what happens generally.

15That is, assuming that Mentalese tokens of Sent mean just what Sent means. 16I assume that, at all nearby worlds, ‘Goats eat cans’ expresses some proposition. 17This is because, at every nearby world, a sentence of John’s language of the form ⌜‘S’ means that S⌝ means that the sentence denoted by its ⌜‘S’⌝ component means whichever proposition is expressed by its ⌜S⌝ component. So long as John’s language is English or very English-like at every nearby world, this assumption seems safe.

Given that possessing a Disquote-following mechanism does not require a speaker to cognize a semantic theory for their language, I have offered a viable explanation of Understanding that dispenses with semantic cognizing. And its viability throws the abductive inference to cognitivism into question.

Recall that that inference is based on cognitivism’s accounting not only for Understanding, but also for the structure of our linguistic comprehension and its finitely grounded yet infinite scope. Well, Disquotationalism accounts for these as well. For the disquotationalist, the structure and infinite scope of our linguistic comprehension follow from the structure and infinite scope of thought. If ‘Goats eat cans and clothes’ is a sentence of our language of thought, then so is ‘Goats eat cans’. Therefore, if a Disquote-following mechanism puts us in a position to know the meaning of the former, it will also put us in a position to know the meaning of the latter; our capacities to understand these sentences are not separable. And if there are potentially infinitely many sentences of Mentalese, then there are infinitely many semantic facts that following Disquote will allow us to know.

For these reasons, it seems to me that Disquotationalism is at least as good an explanation of Understanding as cognitivism is. (Later, in section 4, I will offer an objection to cognitivism in light of which I think Disquotationalism is a better explanation.)

3.3.4 Non-natural languages of thought

But what if our language of thought is not our native natural language?18 The above account can then be rerun by adding a few bells and whistles.

To do so, I will follow Schiffer (1987) and assume the existence of a certain function f from English sentences to their Mentalese translations (pp. 196–97). To characterize f in more detail, let ‘δ’ be a variable over Mentalese symbols for sentences of English.19 And let ‘µ’ be a variable over Mentalese sentences. I then assume that there is a recursive function f from Mentalese symbols of English sentences to Mentalese sentences such that (i) f can be defined

18This is perhaps the standard view of the language of thought, as articulated by Fodor 1975. 19It should be easy to grant that there are such symbols, given that human semantic competence requires the capacity to think about particular sentences of one’s natural language, and that, on the language of thought hy- pothesis, we require mental names for any particular things about which we form thoughts.

89 purely syntactically, i.e. without reference to the semantic properties of the expressions in its domain and range; and (ii) whenever f (δ) = µ, if the English sentence denoted by δ means that p, then µ means that p (and so a tokening of µ in the belief-box realizes the belief that p). With these resources, the disquotationalist can explain Understanding by saying we have a belief-forming mechanism that follows this rule (using expressions like ‘COW’ to denote the Mentalese words corresponding to English words like ‘cow’):

Disquote*: If ⌜... δ ...⌝ is written and f (δ) = µ, then write ⌜δ MEANS THAT µ⌝!

Following Disquote* requires an additional capacity beyond what is required to follow Disquote. It requires the capacity to map English sentences onto their Mentalese counterparts, which could be done by implementing a function like f. As per (i) above, f could be implemented purely syntactically, by a mechanism that does not compute the meanings of English sentences or their parts.20 Indeed, f could be implemented via an ‘embodied’ or otherwise hardwired translation manual from English into Mentalese, rather than via a cognized semantic theory.21 The upshot is that positing an implementation of f in explaining Understanding does not require positing semantic cognizing.22 If we possess a belief-forming mechanism following Disquote*, then we can explain how we can know what any sentence of our language means. The argument for this is parallel to the argument for the same conclusion about

20On this point, here is Fodor (in reviewing Schiffer 1987 and in agreement with him on this very point): it’s far from obvious that you have to know the semantics of an English expression to determine its [Mentalese translation]; on the contrary, the translation algorithm might well consist of operations that deliver Mentalese expressions under syntactic description as output given English expressions under syntactic description as input with no semantics coming in anywhere except, of course, that if it’s a good translation, then semantic properties will be preserved. That purely syntactic operations can be devised to preserve semantic properties is the philosophical moral of proof theory. (1990, pp. 186–87) 21For more on how a recursive function like f might be implemented by a hardwired translation manual between English and Mentalese, see Schiffer 1987 (pp. 204–7) and 1993 (pp. 243–47). 22Grandy 1990 argues that there is a recursive function f satisfying (i) and (ii) if and only if there is a correct compositional semantic theory for English and one for Mentalese (pp. 562–63). This is a problem for Schiffer, who argues that the mere possibility of Understanding being explained by our possession of a translation manual implementing f undercuts what he takes to be the main argument that there are true compositional semantic theories for natural languages (Schiffer 1987, pp. 177–209). But I neither affirm nor deny that there are such theories. So Grandy’s objection is no problem for me. My aim is simply to argue that it is not the case that we should believe that we cognize compositional semantic theories. For this reason, it is also no problem for me if Pagin 2003 is right that any explanation of Understanding is ultimately ‘incomplete’ until it is combined with the true theory of content for Mentalese which will inevitably, Pagin argues, entail that our natural language has a “systematic semantics” (pp. 18–20).

Disquote.

Recall John, who sees ‘Goats eat cans’ written in an encyclopedia, and then believes it is written there. He has a sentence of Mentalese containing ‘GOATS EAT CANS’—the Mentalese counterpart of a quote-name for the sentence ‘Goats eat cans’—in his belief-box. The Disquote*-following mechanism then kicks in, and it writes ‘GOATS EAT CANS’ MEANS THAT GOATS EAT CANS (call this Mentalese sentence ‘Sent*’). It writes Sent* because John’s hardwired translation manual implements a function f such that f (‘GOATS EAT CANS’) = GOATS EAT CANS. And the newly written token of Sent* realizes in John the true belief that ‘Goats eat cans’ means that goats eat cans.
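Again purely for illustration (my own sketch, not Schiffer’s or anything in the text), the translation function f and the Disquote*-following step can be modeled so that translation proceeds word by word over syntactic form, with no meanings computed anywhere:

```python
# Toy model of Disquote* (my own illustrative sketch). English-to-
# Mentalese translation f is a purely syntactic, word-by-word recursive
# mapping; no semantic properties are consulted at any point.
LEXICON = {'goats': 'GOATS', 'eat': 'EAT', 'cans': 'CANS'}

def f(english_sentence):
    """Recursively map a named English sentence to its Mentalese
    counterpart, one word at a time (a stand-in for a hardwired
    translation manual)."""
    words = english_sentence.split()
    head = LEXICON[words[0].lower()]
    return head if len(words) == 1 else head + ' ' + f(' '.join(words[1:]))

def disquote_star(belief_box, delta):
    """If some belief in the box contains the quote-name delta, write a
    Mentalese meaning-belief of the form: delta MEANS THAT f(delta)."""
    if any(delta in token for token in belief_box):
        belief_box.append(f"'{delta}' MEANS THAT {f(delta)}")

box = ["'Goats eat cans' IS WRITTEN IN THE ENCYCLOPEDIA"]
disquote_star(box, 'Goats eat cans')
print(box[-1])  # 'Goats eat cans' MEANS THAT GOATS EAT CANS
```

Per condition (i), `f` is defined with no reference to meanings; per condition (ii), if the mapping is a good translation manual, the tokened belief will be true.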

This belief is also plausibly epistemically safe. At all nearby worlds, John’s belief has the same content and is realized by a tokening of Sent* with that content, and so at all nearby worlds (6) is true:

(6) Sent* means that ‘Goats eat cans’ means that goats eat cans.

It follows that, at all nearby worlds, (7) is true:

(7) ‘Goats eat cans’ means that goats eat cans.

Here is why. Let w be a nearby world. So (6) is true at w. Now suppose for reductio that

(7) is false at w. Then, at w, ‘Goats eat cans’ must mean some proposition q ≠ the proposition that goats eat cans. But then, at w, Sent* must mean that ‘Goats eat cans’ means q. To see this, recall that Sent* = ‘GOATS EAT CANS’ MEANS THAT GOATS EAT CANS, and f (‘GOATS EAT CANS’) = GOATS EAT CANS. So, if ‘Goats eat cans’ means q, then GOATS EAT CANS means q, and so ‘GOATS EAT CANS’ MEANS THAT GOATS EAT CANS means that ‘Goats eat cans’ means q. But then (6) is false at w—contradiction. So (7) is true at w if (6) is. And so (7) is true at all nearby worlds. Therefore, John’s belief is safe.

Generalizing, beliefs formed by following Disquote* are guaranteed to be true and safe, and so plausibly constitute knowledge. Possessing a Disquote*-following belief-forming mechanism, and the requisite translation manual, suffices for putting one in a position to know the

meaning of any sentence of one’s language. And, given that following Disquote* does not require cognizing a semantics for one’s language, this is an explanation of Understanding on which meaning is not cognized.

3.4 Sentence disquotationalism vs. speaker disquotationalism

My disquotationalist explanation of Understanding is a descendant of Schiffer’s (1987) proposal of how linguistic understanding might work by translating natural language into Mentalese without semantic cognizing. His proposal is ‘disquotationalistic’ in spirit, but is importantly different from mine. In this section, I distinguish our views and argue that Schiffer’s proposal is not a rival explanation of Understanding.

I say understanding a sentence S results from forming a belief of the form ⌜‘S’ means that S⌝. Schiffer proposes, on the other hand, that understanding a speaker’s utterance of S results from forming a belief of the form ⌜N said that S⌝ (where N is any name). In more detail, he proposes, in effect, that we form beliefs in accordance with this rule:23

Said-That: If ⌜N uttered ‘S’⌝ is written, then write ⌜N said that S⌝!24

This rule is clearly disquotational in the same way that Disquote is. We might then distinguish two forms of Disquotationalism, informally stated as follows:

Sentence Disquotationalism: We know what sentences mean by following Disquote. Speaker Disquotationalism: We know what speakers say by following Said-That.

These views aim to explain different things. One targets semantic knowledge, and the other targets what we might call ‘pragmatic knowledge’, i.e. knowledge of what is said or of the contents of speech acts.

23On the supposition that our language of thought is not our native natural language, Schiffer’s proposal can be formulated as the claim that we form beliefs in accordance with this rule instead: Said-That*: If ⌜N UTTERED δ⌝ is written and f (δ) = µ, then write ⌜N SAID THAT µ⌝! where ‘N’ ranges over Mentalese names, ‘δ’ ranges over Mentalese symbols of English sentences, and ‘µ’ ranges over Mentalese sentences. 24Here I have taken some interpolative license in reformulating Schiffer’s proposal to bring it into sharper contrast with my formulation of Disquotationalism; the rule Said-That is my creation. In its original formulation, Schiffer’s proposal is that the conceptual role of SAID THAT is such that if ⌜N UTTERED δ⌝ is written in the belief-box and f (δ) = µ, then, ceteris paribus, so is ⌜N SAID THAT µ⌝ (1987, pp. 196–200).

But one might think that Speaker Disquotationalism—if true—leaves Sentence Disquotationalism with no work to do. For perhaps pragmatic knowledge has epistemic priority relative to semantic knowledge. Perhaps our knowledge of what speakers say explains our knowledge of what sentences mean. One might think that we are often enough in a position to knowingly infer that S means that p whenever we know that someone said that p by uttering S, assuming, that is, that people usually say what the sentences they utter mean. But even leaving problems with this idealizing assumption to one side, the strategy of explaining semantic knowledge with pragmatic knowledge has severe limitations.

3.4.1 Pragmatic knowledge is insufficient for semantic knowledge

Large swathes of our semantic knowledge cannot be accounted for in terms of Said-That-following belief-forming mechanisms. To see this, consider that someone whose belief-forming mechanisms followed only Said-That would not be in a position to know the meanings of grammatical sentences formed accidentally by randomly dropping words snipped out from magazines on the floor. They would not believe that anyone said anything in uttering them, and so would be clueless as to their meanings.

And even if following Said-That is what allows us to form correct beliefs about what someone would say if they were to utter a sentence S (insofar as we are able to do this), this would not close the gap. For we know the meanings of sentences that are such that we have no idea what someone would say if they were to utter them. Take this sentence: ‘Goats eat cans and it’s not true that goats eat cans.’ We know that it means that goats eat cans and it’s not true that goats eat cans. But do we know what one would say if one were to utter it? In the nearest worlds in which I utter it, what do I say? It is far from clear that the answer is that goats eat cans and it’s not true that goats eat cans. Even if we hold fixed that we normally say what the sentences we utter mean, the nearby worlds in which I utter ‘Goats eat cans and it’s not true that goats eat cans’ are surely abnormal speech contexts. And so for all we know these are worlds in which there is nothing I say in uttering this sentence. Indeed, to speak for myself, I do seem disposed in such a way that I would utter that sentence (or any other contradiction) only if I would not thereby say its absurd content.

For these reasons, I think someone could be in a position to know what any person says (or would say) in uttering a sentence of one’s language without being in a position to know the meaning of any sentence of one’s language. So pragmatic knowledge is insufficient for semantic knowledge. Which is not to say that pragmatic knowledge never issues in semantic knowledge. My point is compatible with saying that, as a matter of fact, we often knowingly infer that S means that p from the fact that someone said that p in uttering S. But this generalization, if true, is no explanation of Understanding.

So I think there is no threat that Speaker Disquotationalism might leave Sentence Disquotationalism with no work to do.25

3.5 Disquotationalism versus cognitivism

So far, I have argued that (Sentence) Disquotationalism rivals cognitivism for the status of best explaining Understanding, and that it beats out the non-cognitivist explanation offered by Schiffer. But why think Disquotationalism is a better explanation than cognitivism?

The explanation of Understanding that appeals to cognitivism incorrectly predicts that we are in a position to know which meanings are assigned to which expressions by the semantics we cognize. But we are not in such a position; we do not know the semantic values of most of the subsentential expressions that we encounter. An explanation by appeal to Disquotationalism avoids this difficulty because it entails only that we are positioned to know what sentences mean.

Why think we are mostly ignorant about subsentential meaning? Well, consider that, if we do cognize a grammar, most of us are unable to correctly state the meanings it assigns to even the most commonly used subsentential expressions of our language, such as ‘the’, ‘if’, ‘like’, and ‘in’.26 This is puzzling if we have epistemic access to the contents of theorems

25It may be even less of a threat in light of a problem with Speaker Disquotationalism: that it is not clear that beliefs formed by following Said-That are epistemically safe. They will be false as often as what is said in uttering a sentence differs from what it means—which is often—as well as whenever we mishear which sentence is uttered. And so whenever following Said-That does result in a true belief—when we correctly parse the speech of a speaker saying that p by uttering a sentence that means that p—it might be by luck. 26One might think that we can state the meaning of ‘the’, correctly but uninformatively, via lexical disquotation, like this: ‘ ‘the’ means the’. Indeed, elsewhere I defend the view that this does state the meaning of ‘the’ (see appendix 2.A). But it seems that even if we know the meaning of ‘the’ by virtue of knowing that ‘the’ means the,

specifying their meanings of the same kind that we have to the contents of theorems specifying the meanings of sentences of which they are parts, which we do on cognitivism. Of course, in a loose sense, we ‘know what these expressions mean’ in that we know how to use them correctly, and in that we know what sentences containing them mean. But we do not know that their semantic values are such-and-such. Or at least we do not seem to be positioned to know the semantic value of any subsentential expression of our language that we might happen across. That this is so is clearer in the case of non-sentential composite expressions. Most English speakers would not claim to understand the following bits of language if they were handed a card on which they were written, even if they would claim to understand their constituents:

(A) and dances
(B) which is empty
(C) snake next to him

But (A)–(C) are well-formed expressions of our language. They each have syntactic structure and serve as syntactic units of larger sentences, and so must be assigned meanings by the correct compositional semantic theory for English.27 This means that if cognitivism is true—if semantic cognizing positions us to know the meaning of any sentence of our language—then semantic cognizing should also position us to know what (A)–(C) mean. For the meanings of (A)–(C) are deducible from the semantics of English just as the meanings of English sentences are. But we have no idea what (A)–(C) mean. Or, more precisely, we have no idea what our cognized semantics says that they mean (on the assumption that we do cognize it).

It does not matter if the learned are equipped to figure out what (A)–(C) mean after a bit of thought, perhaps by trying to come up with sentences in which (A)–(C) occur. This is for two reasons. First, if someone has the capacity to successfully interpret (A)–(C) in this way, this might show that they are positioned to know what subsentential expressions mean in virtue

this is inconsistent with a cognitivist explanation of knowledge of meaning, for we do not know that ‘the’ means the by deducing this from a tacitly known semantics. Rather, it seems a priori that ‘the’ means the. 27Indeed, these examples are pulled from Heim and Kratzer 1997, where their syntax is displayed and standard proposals about their semantic values are canvassed: (A) p. 52, (B) p. 88, (C) p. 201.

of being positioned to know what sentences containing them mean. But this would not show that they are so positioned in virtue of semantic cognizing, which is what the cognitivist is committed to claiming. And second, it is equally a problem if it is (nomologically) possible that some English speaker is not positioned to know what (A)–(C) mean. (The case of young children competent with the constituents of (A)–(C) strongly suggests that this is possible.) This should puzzle the cognitivist but not the disquotationalist.

In response, the cognitivist might insist that we do know what (A)–(C) mean. But then they would need to explain why semantic cognizing grants us ordinary, accessible, expressible knowledge of sentence-meaning, but apparently inaccessible, inexpressible knowledge of subsentential meaning. Alternatively, the cognitivist might accept that we do not know what (A)–(C) mean, but deny that their explanation of Understanding entails that we do. Both replies require modifying the cognitivist explanation of sentence comprehension by specifying some further condition Φ such that:

(8) When we encounter an expression e and can immediately and accessibly know that it means m, that is because we cognize a semantics on which e means m and Φ.

And both replies require that Φ is satisfied only if e is a sentence. A natural suggestion is that Φ states that e is the type of expression that is an acceptable input into a specialized mental process of semantic interpretation, a process that terminates in consciously accessible semantic knowledge. If we imagine that this process is the activity of a ‘semantics module’ in the mind, or of a distinct ‘semantic component’ of the language faculty, we can think of it crudely as a device taking linguistic expressions (or perhaps representations of expressions) as inputs and outputting ‘interpretations’ of them. If this device is the source of our automatically formed occurrent semantic beliefs, then it is natural to think of these outputted interpretations as (or as causing tokenings of) sentences of Mentalese that express the information that linguistic expressions have particular meanings. The suggestion, then, is that Φ states that e served as input into our semantic-belief-forming device. And if so, then its acceptable inputs must only be sentences.28

28This paragraph presents a psychologization of a standard view in linguistics on which the inputs to the semantic component of a generative grammar are syntactic structures of sentences (perhaps ‘LF representations’), themselves the output of the grammar’s syntactic component.

I recommend to the cognitivist this speculative proposal about our cognitive architecture. But it is tantamount to taking the cognitivist’s explanation of Understanding and tacking on a mental mechanism that is supposed to do the very same job as the disquotationalist’s Disquote-following mechanism. The job of the cognitivist’s semantics module is to take sentences as inputs and deliver ‘into thought’ accessible knowledge of what those sentences mean. This job can be done by a mechanism that follows Disquote. Hence the cognitivist’s need for such a device in explaining our ordinary, accessible, expressible knowledge of sentence-meaning frees them of the need for semantic cognizing. For Understanding can be explained just by appeal to a Disquote-following mechanism.

Now, the cognitivist might not take themselves to be free to abandon semantic cognizing if they have taken the route of insisting that we inaccessibly know the meanings (as specified by our grammars) of all subsentential expressions of our language. For they might say that we need to explain this by appeal to semantic cognizing. But I say such a cognitivist is free to abandon semantic cognizing, for their case for it is now weak. It is weak because, unlike knowledge of sentence-meaning, inexpressible knowledge of subsentential meaning seems to be a theoretical posit. Belief in it is a commitment incurred by the cognitivist. To put this another way, if we are positioned to know the meanings of all composite subsentential expressions of our language, this knowledge is presumably tacit in the very same way that knowledge of semantics (if there is such a thing) is tacit. But then the cognitivist’s case for semantic cognizing is unpersuasive; it amounts to the claim that we need to appeal to semantic cognizing in order to explain semantic cognizing, or, more specifically, that we need to explain how it is that we cognize the meanings of composite subsentential expressions by saying that we cognize the meanings of basic expressions (and rules of composition). The disquotationalist is free to deny the alleged explanandum.

For these reasons, Disquotationalism outdoes cognitivism.

3.6 Objections and replies

3.6.1 The ‘there’s more to semantic competence’ objection

In defending cognitivism against non-cognitivist, alternative explanations of Understanding, Higginbotham (1987) and Chomsky (2000b) point out, correctly, that there is more to our semantic competence than being positioned to know that sentences mean that such-and-such. We also know facts about entailment relations between sentences. Higginbotham and Chomsky challenge anyone with a view on which linguistic comprehension consists in translating sentences into Mentalese without the help of semantic cognizing to explain this aspect of semantic competence.29

The cognitivist’s explanation is, without getting into too much detail, that semantic cognizing affords not only tacit knowledge of meaning but also tacit knowledge of semantic form;30 and that it is our knowledge of semantic form that explains our (ordinary, consciously accessible) knowledge of entailment relations. Here is an artificially simple example:31 a cognitivist might argue that we know that ‘Bob smokes’ entails ‘Someone smokes’ because (i) by cognizing our language’s semantics, we know that ‘Bob smokes’ has the logical form F(a) and that ‘Someone smokes’ has the logical form ∃x : F(x), and because (ii) we have enough tacit logical competence to tacitly know that if a sentence of the form F(a) is true, then the correlative sentence of the form ∃x : F(x) must be true. If we assume that this cognitivistic explanation is satisfactory, it is a problem for the disquotationalist if they cannot offer their own explanation.

I think the disquotationalist can rise to the challenge. Let us start with Chomsky. He expresses doubt that anyone with a view like the disquotationalist’s will be able to account for our knowledge of the following entailment relations between sentences (or between sentences of Mentalese32) without adding an “extra layer of complexity” to their account and raising new

29They both aim the challenge at Schiffer 1987; Higginbotham also challenges Stich 1983, and Chomsky challenges Fodor 1989. All three targets can be read as arguing against semantic cognizing. 30By ‘semantic form’ I mean what linguists usually call ‘logical form’; but I agree with Szabó 2012 (p. 105) that ‘semantic form’ is a less misleading label. 31Here I assume, for ease of exposition, that semantic form is exhausted by an assignment of (what philosophers would call) logical form displayed in a first-order meta-language. Semantic form is typically thought to encode much more than a sentence’s ‘first-order’ syntactic structure. See Neale 1993, 1994. 32Or, as Chomsky calls them, “regions of S-Mentalese”, where the ‘S-’ prefix means that these “regions” have

problems (2000, pp. 176–77):33

(9) a. Tom chased Bill.
    b. Tom followed Bill with a certain intention.

(10) a. John persuaded Mary to take her medicine.
    b. Mary came to intend to take her medicine.

Let us assume that Chomsky is right that (9a) entails (9b) and (10a) entails (10b).34 And let us assume that the semantic forms of these sentences are such that tacit knowledge of those forms, together with tacit logical competence, suffices for knowledge of these entailments.35

Even so, the disquotationalist can explain our knowledge of these entailments without adding extra layers of complexity to their view. Possessing a Disquote-following mechanism can explain how it is that we know that (9a) means that Tom chased Bill, and that (9b) means that Tom followed Bill with a certain intention. These pieces of knowledge, together with ordinary knowledge about chasing—namely, our knowledge (again, following Chomsky in taking this to be a truth) that if someone chases someone, then they must follow them with a certain intention—yield our knowledge that (9a) entails (9b).36

To argue that this explanation is not available to the disquotationalist, the cognitivist would have to argue that, unless we cognize a semantics for English, we cannot know truths about chasing and persuasion like the following:37

“semantic interpretations” (p. 176). 33McGilvray 2001 (pp. 20–22) echoes Chomsky’s doubts. 34For what it is worth, I doubt Chomsky is right. With regard to (9), imagine Tom and Bill are running a race in which Tom is in second place and Bill is in first place. Then Tom might be chasing Bill, but with no intention to follow Bill; if Bill were to veer off track, Tom would not follow. And with regard to (10), imagine John persuades Mary to take her medicine in 50 years. It is unclear whether this requires that Mary now has an intention to take her medicine. She might only believe that she must take her medicine in 50 years. 35This assumption is also controversial. In making it, Chomsky seems to be thinking that the semantic form of (9a), for instance, includes something like a semantic decomposition of ‘chased’ according to which ‘x followed y’ is part of the internal structure of ‘x chased y’. For a quick overview of work in lexical and structural semantics that adopts a conception of semantic form (and of the lexicon) rich enough to potentially accommodate this assumption, see Gasparri and Marconi 2019 (sec. 4.2–4.3); for lengthier overviews, see Bierwisch 2011 and Engelberg 2011.

(11) Necessarily, if x chases y, then x follows y with a certain intention.
(12) Necessarily, if x persuaded y to take their medicine, then y came to intend to take their medicine.

But surely no argument for this conclusion is forthcoming. If we know (11) and (12), then surely monolingual German speakers, who cognize no semantics for English, can as well. The cognitivist might reply that although knowledge of (11) and (12) can be had without cognizing a semantics for English, it is nevertheless true that the best explanation of how it is that we English speakers know (11) and (12) is that we cognize a semantics for English. I have no idea how such a case could be made. Suffice it to say that if there is an abductive argument from our knowledge of chasing (or of persuasion, or of, presumably, anything else denoted by an English word) to semantic cognizing, the cognitivist has seriously underplayed their hand! Now to Higginbotham. He presents the following pair of sentence-pairs:

(13) a. John didn’t leave before anyone else.
     b. John left before anyone else.
(14) a. John didn’t leave after anyone else.
     b. John left after anyone else.

He points out that (13a) “is the denial of” (13b), and that while (14a) is also a denial, “it is not the denial of” (14b), “which is not an English sentence” (p. 224). “Why is this?”, Higginbotham asks; given that “ ‘before’ and ‘after’ are naturally understood as expressing converse relations”, their “asymmetry with respect to ‘any’ therefore requires explanation” (p. 222). He then proposes an explanation, offering an interesting semantic analysis of ‘any’ (and so of ‘anyone’, ‘anything’, etc.) on which (14b) is not semantically well-formed, but on which (13a–b) and (14a) are. But the details of his proposal are not important. Higginbotham’s challenge is to explain how it is that we know that (13a) is the denial of (13b) but that (14a) is not the denial of (14b) without attributing to us tacit knowledge of the semantic forms he proposes that these sentences have.38 For (13), I repeat the explanation I

38He is therefore assuming, with Chomsky, that knowledge of their semantic forms would suffice for knowing that they are semantically related in these ways.

gave above. Our Disquote-following mechanism generates knowledge of what (13a) and (13b) mean which, together with our ordinary worldly knowledge of the following truth, (15), grants us knowledge that (13a) entails the negation of (13b); that is, knowledge of the following:

(15) Necessarily, if John didn’t leave before anyone else, then it is not the case that John left before anyone else.

And our capacity to know (15) does not plausibly require—and it is not plausibly best explained by—semantic cognizing.

As for (14), I take it that what needs explaining is our knowledge that (14b) is not a sentence of our language. For if we explain how we know that, then we can explain how we are positioned to know that it is not the case that (14a) entails (14b); that is, so long as we spot ourselves knowledge that a sentence of our language cannot entail an ungrammatical non-sentence. So Higginbotham’s challenge is really to explain our knowledge of non-sentencehood and so of sentencehood.

The first thing to say is that it is unfair to demand that the disquotationalist explain this. The fact that knowledge of non-sentencehood can be a basis for knowledge of entailment (or, more specifically, of non-entailment) does not mean that the correct explanation of our ordinary knowledge of entailment must explain our knowledge of non-sentencehood. Compare: knowledge of linguists’ testimony might also serve as a basis for knowledge of entailment (or non-entailment). But this does not mean that the disquotationalist must explain our knowledge of linguists’ testimony.

The second thing to say is that the unfair demand can be met. The disquotationalist can explain our knowledge of non-sentencehood as follows: Normally, when we have the belief that a linguistic construction S is a sentence of our language or that S is not a sentence of our language, those beliefs are realized by sentences of Mentalese written by a belief-forming mechanism that follows these writing-rules:

Sentence: If ⌜‘S’ means that S⌝ is written, write ⌜‘S’ is a sentence of my language⌝!

Non-Sentence: If ⌜‘S’ means that S⌝ is not written, write:

⌜‘S’ is not a sentence of my language⌝!

Given that we have a Disquote-following mechanism, following Sentence and Non-Sentence will reliably result in safe true beliefs about whether an encountered string is a sentence or not. For then, when we encounter S, Sentence will be triggered just if S is a sentence of our language, and Non-Sentence will be triggered just if S is not a sentence of our language. This proposal is speculative. It is, however, put forward in the spirit of disquotationalism. And its viability shows that the disquotationalist is not forced to appeal to semantic cognizing to explain our knowledge of non-/sentencehood or the knowledge of non-/entailment we base upon it. So I think that Higginbotham’s challenge can be met.
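Purely for illustration, the Disquote / Sentence / Non-Sentence machinery can be sketched as a toy program. Every name here (LEXICON, belief_box, the string format standing in for Mentalese) is a hypothetical simplification of my own, not part of the proposal itself; the point is only that writing-rules keyed to whether a Disquote-output was written, rather than to any represented semantics, can reliably sort sentences from non-sentences:

```python
# Toy model of a Disquote-following belief-forming mechanism.
# LEXICON is a hypothetical stand-in for the strings the speaker's
# parser accepts; belief_box is a crude stand-in for Mentalese.

LEXICON = {"Goats eat cans", "Tom chased Bill"}

def disquote(s, belief_box):
    """Disquote: on parsing S, write "'S' means that S"."""
    if s in LEXICON:  # the mechanism fires only on strings it can parse
        belief_box.add(f"'{s}' means that {s}")

def sentence_rules(s, belief_box):
    """Sentence / Non-Sentence: keyed solely to whether Disquote wrote."""
    if f"'{s}' means that {s}" in belief_box:
        belief_box.add(f"'{s}' is a sentence of my language")
    else:
        belief_box.add(f"'{s}' is not a sentence of my language")

beliefs = set()
disquote("Goats eat cans", beliefs)               # Disquote fires
sentence_rules("Goats eat cans", beliefs)         # Sentence triggered
disquote("John left after anyone else", beliefs)  # never fires: not parsed
sentence_rules("John left after anyone else", beliefs)  # Non-Sentence triggered
```

Note that nothing in the toy mechanism represents what any expression means; its reliability is inherited from the upstream sensitivity (here, crudely, set membership) to sentencehood.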

3.6.2 The ‘no evidence’ objection

Matthews (2003) argues that even if the non-cognitivist (i.e. the disquotationalist39) is

right that linguistic understanding does not require knowledge of a semantic theory, if such knowledge is, as a matter of empirical fact, a crucial causal constituent of linguistic competence, then claims both for the role of such knowledge in language understanding and for the psychological import of semantic theory will have been vindicated. It will be of little consequence that knowledge of semantic theory might have been irrelevant, if in point of fact it is not. If [the non-cognitivist is] going to make the case against the claim that knowledge of semantic theory plays a role in language understanding, then they are going to have to make a case for the stronger claim that semantic knowledge, specifically knowledge of semantic theory, is in fact not used in the course of language processing. But, so far as I can see, they offer no argument or, more pertinently, no empirical evidence for this stronger claim. (pp. 194–95)

I admit that I have offered no empirical evidence that we, as a matter of fact, possess a Disquote-following mechanism rather than cognize a semantic theory. But it is not true that the cognitivist can sit tight and “be [not] at all moved” until such evidence is presented. The cognitivist takes Understanding, as well as the structure and potentially infinite scope of our linguistic

39Schiffer 1987 and Fodor 1989 are Matthews’s targets, but it seems he would also want to target the disquotationalist.

comprehension, to be evidence for semantic cognizing. But if these can be just as well or better explained by positing a Disquote-following mechanism, then the case for cognitivism is severely weakened if not undermined entirely.

Matthews might be presupposing, though, that there is additional ‘empirical evidence’ for semantic cognizing, evidence not undermined by the mere viability of disquotationalism. But he does not clearly state what this evidence is. He does say that the “argument for the claim that speakers tacitly know (or cognize) the rules and principles of their language is perhaps most explicit in the early paper” of Graves et al. (1973). However, when we examine closely the argument of that paper, what we find is the following claim: that we should believe in semantic cognizing because, when it comes to explaining our knowledge of the semantic properties of sentences, “there appears to be no plausible paradigm of explanation available that avoids the postulation of tacit knowledge” (p. 326).40 But the disquotationalist’s explanation does avoid this. So it is unclear what this additional ‘empirical evidence’ for cognitivism might be.41

3.6.3 The ‘no semantics-free translation’ objection

Matthews (2003) also challenges the claim that an implementation of the recursive function f from natural language sentences to their Mentalese translations “could be wholly syntactic and not at all semantic” (p. 197), and he asks “on what basis” the disquotationalist could “conclude this” (p. 197). In other words, he challenges the claim, made above (and originally by Schiffer (1987), pp. 196–97), that someone could possess a translation manual implementing f without computing the meanings of natural language expressions, i.e. without cognizing a semantics

40 In more detail, Graves et al. (1973) considers a “field linguist” who comes to know the grammatical properties of sentences of an “exotic language” by “the explicit use of more general rules of grammar”, in such a way that we would “explain the linguist’s [...] knowledge by saying that the linguist deduces the relevant statements from the rules (and definitions) of the grammar” (pp. 324–25). They then ask: “Why not extend the paradigm of explanation to the [untutored] speaker, by assuming that the speaker performs the same deduction as the linguist, only tacitly, and thus that the principles in the deduction are tacitly known?” (p. 325). Their answer is that there is no reason why we should not: “The argument that we should extend the explanation is simply that by doing so we explain this explicit knowledge” (emphasis mine) (p. 325). So, to be clear, Graves et al. 1973 offers no ‘empirical evidence’ for semantic cognizing beyond pointing out that we can explain our knowledge of meaning if we posit it. But this is just the argument that the viability of Disquotationalism blocks.

41 Another argument given by Graves et al. 1973 is, in Matthews’s words, that “intentional explananda [demand] for their explanation intentional explanantia” (p. 190); so knowledge of meaning must be explained by some other intentional state, i.e. semantic cognizing. But Matthews admits that this argument has been shown to be “clearly unsound” (p. 190), citing Egan 1995 as pointing out that its main premise is inconsistent with the possibility of reductive explanation.

for one’s language. It is worth pointing out that this claim is not immodest. It amounts to nothing more than the following possibility claim:

(16) It is possible for someone to possess a mechanism implementing a recursive function f from sentences of their language L to their translations in M (Mentalese) while not cognizing a semantics for L.

In support of (16), we can compare it with (17):

(17) It is possible for someone to possess a mechanism implementing a recursive function f from sentences of their language L to their translations in French while not cognizing a semantics for L.
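To make (17) vivid, here is a toy sketch of such a device for an invented two-rule fragment of English. The fragment, the dictionary, and all names are my own illustrative assumptions; the device consults only a symbol table and word order, and nowhere computes or represents a meaning:

```python
# Purely syntactic recursive translation: English fragment -> French.
# DICT and the two recursive clauses are invented for illustration.

DICT = {"John": "Jean", "Mary": "Marie", "sleeps": "dort"}

def translate(sentence):
    """Recursively rewrite English symbols as French symbols."""
    # Recursive clause: "X and Y" -> translate(X) + "et" + translate(Y)
    if " and " in sentence:
        left, right = sentence.split(" and ", 1)
        return translate(left) + " et " + translate(right)
    # Recursive clause: "X knows that Y" -> translate(X) + "sait que" + translate(Y)
    if " knows that " in sentence:
        left, right = sentence.split(" knows that ", 1)
        return translate(left) + " sait que " + translate(right)
    # Base clause: word-by-word symbol lookup
    return " ".join(DICT[w] for w in sentence.split())

translate("John knows that Mary sleeps")  # -> "Jean sait que Marie dort"
```

The toy device handles an unbounded set of sentences by recursion on syntactic form alone, which is all (17) requires.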

(17) is plausible, so it is unclear why (16) would not also be plausible.42 Why think (17) is plausible? Well, if (17) is false, then we should expect it to be impossible to design a device that takes a sentence of English as input and, by purely syntactic recursive symbol manipulation, outputs a French sentence with the same meaning. But there could be such a device. In fact, there one day will be such a device, so long as we assume that the successful operation of statistical machine translation devices does not rely on (or give rise to) semantic cognizing.

But Matthews (2003) not only asks why one should accept (16). He also motivates the view that (16) is false. In particular, he argues that it is plausible that any process implementing f is “semantics-involving” in that “it effects the mapping specified by a semantic theory, and hence it is a computational implementation of the speaker’s knowledge of [that] semantic theory” (p. 202), where “the mapping specified by a semantic theory” T is a pairing of sentences with the meanings assigned to them by T (p. 203).43 His argument for this claim is complex.44 But its conclusion is demonstrably false. The translation function f does not immediately “effect” or determine a mapping g from sentences to their meanings; f(S) is not the meaning of the sentence S. One might give a ‘definition’ of g partly in terms of f as follows: g(S) = p just if f(S) realizes the belief in p. But this does not mean that g is computed wherever f is implemented. And even if implementing f does entail implementing g, we cannot credit a speaker who implements f (and so g) with tacit knowledge of a semantic theory T because T specifies g. For there are infinitely many semantic theories specifying g that wildly differ in their assignments of semantic values to subsentential expressions. Implementers of f cannot cognize all of them.

42 Schiffer 1987 makes this point more quickly, arguing, in effect, that (16) should be unproblematic “in the same way that it is unproblematic that there should be a recursive mapping of French sentences onto English sentences that is statable without reference to any semantic features of those sentences but yet maps each French sentence onto its English translation” (p. 197).

43 On this last point, here is Matthews: “the function f can be reasonably construed as effecting the pairing specified by the M-sentences or T-sentences of the semantic theory” (p. 203).

44 It involves the claim that the parallel principle about what it takes for a process to be “syntax-involving” is plausible, namely, the principle that a process involves tacit knowledge of syntax if it effects a mapping from sentences to syntactic forms specified by a syntactic theory (pp. 200–4). But this syntax-involvement principle is implausible for the same reasons I go on to give that the semantics-involvement principle is implausible.

3.6.4 The indexicality objection

One might object that Disquote is bound to result in error for indexical languages. If I perceive Trump utter ‘I tweet’ and follow Disquote, I will end up believing the following:

(A) ‘I tweet’ means that I tweet.

But (A) seems false. The ‘I’ outside the quotes in (A) will refer to me. But surely it is false that ‘I tweet’, in Trump’s mouth, means that David Balcarras tweets; in the present context, ‘I tweet’ means that Trump tweets, or so goes the objection. I want to explore three different replies to this objection. The first is dialectical. Even if Disquotationalism cannot account for our knowledge of the meanings of indexical sentences, this is no occasion for cognitivists to clink champagne glasses. For if only a small fragment of a natural language is indexical and context-sensitive, as Cappelen and Lepore (2005) argue, then Disquotationalism does not go far wrong. It accounts for almost all knowledge of sentence-meaning, and adjusting it to account for knowledge of the meanings of indexical and context-sensitive sentences is simply a matter of adding some additional knobs and dials to the account. This is nothing for the disquotationalist to be embarrassed about, for additional control panels must also be added to the cognitivist’s explanation of Understanding. The a posteriori worldly knowledge of context required to recover the contents of indexical sentences is surely not built

into the grammar. So the path from semantic cognizing to Understanding must take some detour or other. For this reason, the fact that natural languages contain indexicals is not a point in favor of Cognitivism. But suppose indexicality and context-sensitivity are pervasive throughout natural languages. The disquotationalist might then admit that Disquotationalism does go far wrong, conceding that Disquote mishandles indexicals. They might then say that we know what sentences mean by following a different writing rule, namely:

Indexical-Disquote: If ⌜... ‘S’ ...⌝ is written and the context of S’s utterance is c, then write ⌜‘S’ means that h(⟨S, c⟩)⌝!

where h is a function from a sentence-context pair to that sentence’s de-indexicalized translation in c, or, roughly, the non-indexical sentence of our language (of thought) that means what S means as uttered in c. For example, h will map an indexical sentence like ‘I quit tweeting’ and my present context to the sentence ‘David Balcarras quit tweeting’. And h will map any pair involving a non-indexical sentence to that same sentence. In order to follow Indexical-Disquote, we need the capacity to compute or implement h. But although this is arguably not to burden our minds with anything for which semantic cognizing is required,45 it is no easy task to explain how it is that we might do this. ‘De-indexicalizing’ makes up much of the work of our pragmatic competence; it can be thought of as the mental task of figuring out, at the level of Mentalese, what is said by utterances containing ‘I’, ‘here’, ‘now’, and so on. And so a theory of how we implement h would go no small way towards explaining our general communicative powers of comprehending others. But I have insisted on separating semantic and pragmatic competence and their respective explanations. And so, for this reason, I want to explore a reply to the indexicality objection that does not require the disquotationalist to explain our pragmatic competence.

That reply is to argue that the indexicality objection rests on a mistake. To tease out the mistake, consider the parallel objection that following Disquote results in error for sentences

45One proposal about how we might implement h without semantic cognizing (although not pitched exactly as such) is given by Schiffer 1987 (pp. 200–203).

containing context-sensitive expressions: Suppose ‘rich’ is context-sensitive. And suppose I hear someone utter ‘Trump is rich’, follow Disquote, and end up believing:

(B) ‘Trump is rich’ means that Trump is rich.

Now, let ‘rich∗’ express the property expressed by ‘rich’ in (B) in the relevant context of utterance. If (B) is true, then (C) is true:

(C) ‘Trump is rich’ means that Trump is rich∗.

But (C) is false! If (C) were true, then ‘Trump is rich’ and ‘Trump is rich∗’ would mean the same thing, for ‘Trump is rich∗’ means that Trump is rich∗. But they do not mean the same thing because ‘rich’ and ‘rich∗’ differ in meaning. Thus, (B) is false; Disquote led to error. Now, I take it that any argument that recommends denying something so obvious as (B) must go wrong somewhere.46 The misstep is the thought that (B) entails (C). But one might wonder how this could possibly be wrong given that they only differ in that ‘rich’ in (B) is replaced with ‘rich∗’ in (C), and these express the same property. And so aren’t (B) and (C) equivalent? This puzzlement reveals the source of our mistake: the deeper mistake of thinking that co-referential terms can be intersubstituted salva veritate in ‘means that’-contexts. That this is a mistake is clear if properties are individuated coarsely, in such a way that ‘is rich’ and ‘is rich and such that 4+3=7’ express the same property, for (D) is clearly false while (B) is true:

(B) ‘Trump is rich’ means that Trump is rich.
(D) ‘Trump is rich’ means that Trump is rich and such that 4+3=7.

And it is also clear if we compare (B) with the dubious (E), assuming that ‘Trump’ and ‘the actual current existent POTUS’ co-refer:

(E) ‘Trump is rich’ means that the actual current existent POTUS is rich.

46Though see Schiffer 2017a, 2019 for arguments for the stunning conclusion that (B) is indeed false, for reasons having to do with vagueness. I lack the space to fully address the nihilism Schiffer expresses here, a welcome throwback to the nihilistic conclusion of Schiffer 1987.

It is a mistake, then, to think that following Disquote goes wrong with context-sensitive language. And for the same reason it is a mistake to think that it goes wrong with indexical language. One cannot argue, about our original case, that (A) is false because ‘I tweet’ does not mean that David Balcarras tweets, even though the ‘I’ in (A) and ‘David Balcarras’ co-refer:

(A) ‘I tweet’ means that I tweet.

So what following Disquote leads me to believe, namely (A), about the meaning of ‘I tweet’, upon hearing Trump utter it, is not demonstrably false. Indeed, I think (A) deserves to be taken seriously as a fact of indexical meaning. This is in line with a suggestion from Ludlow (1999) that the meanings of sentences containing indexicals should be ‘displayed’ disquotationally using those very indexicals, and so are not correctly ‘displayed’ in non-indexical terms (pp. 62–3).47 Of course, this requires that we abandon (finally) the assumption that the meanings of sentences are propositions. Or, more specifically, it requires that we abandon the idea that ⌜‘S’ means that S⌝ says that S stands in the meaning-relation to the proposition denoted by ⌜that S⌝. For ‘that I tweet’ in (A) and ‘that David Balcarras tweets’ denote the same (possible-worlds or structured) proposition, but (A) and ‘ ‘I tweet’ means that David Balcarras tweets’ mean different things. But this assumption was bound to be relaxed no matter what, at least if propositions are possible-worlds propositions. It is too implausible to think ‘Socrates is human’ means that {Socrates} exists.48 Or that ‘Trump will win’ means that everyone who does not compete, or loses, will have done something Trump will not have done (Cresswell 1985, p. 4).

Now, what about the thought that what I should end up believing is that Trump’s utterance of ‘I tweet’ means that Trump tweets? There is something right to this thought. But talk of the ‘meaning of Trump’s utterance’ is imprecise; it can be construed as talk of the content of his speech act, or as talk of the meaning of the sentence he uttered in performing that speech act. Here, again, we need to carefully distinguish our knowledge of sentence-meaning from our knowledge of what is said. In the envisioned speech context, there are two salient pieces of knowledge we acquire: the semantic knowledge that ‘I tweet’ means that I tweet, and the pragmatic knowledge that Trump said that Trump tweets in uttering ‘I tweet’. No account of how we acquire the former can be faulted for not fully accounting for how we acquire the latter.

47 See also Ludlow 2007 (pp. 166–67). Rumfitt 1993 makes a similar suggestion. And the inspiration for this idea goes back to McDowell 1977, who argues that the “sense” of ‘Hesperus’ is “displayed” by ‘ ‘Hesperus’ stands for Hesperus’ but not by ‘ ‘Hesperus’ stands for Phosphorus’, even though ‘Hesperus’ and ‘Phosphorus’ co-refer (p. 164). But it is unclear whether McDowell (or Ludlow) would want to say, for instance, that the truth stated by ‘ ‘Hesperus is a star’ means that Hesperus is a star’ is distinct from the truth stated by ‘ ‘Hesperus is a star’ means that Phosphorus is a star’. Given that I take the former sentence to be true and the latter false, I take these to state different things, in addition to differently ‘displaying sense’, whatever that might be.

48 Plausibly, ‘that Socrates is human’ and ‘that {Socrates} exists’ refer to the same possible-worlds proposition, assuming that Socrates is essentially human.

3.6.5 The ‘anti-reliabilism’ objection

Every objection to the sufficiency of safe true belief for knowledge, or to reliabilism more generally, is a problem for my account of Understanding. But in addition to pointing out that our capacity to form safe true beliefs about sentence-meaning is not a less worthy explanandum than our capacity for knowledge of sentence-meaning, there is more to say in defense of assuming some form of reliabilism in this context. This assumption is warranted, I think, because standard arguments against reliabilism are thrown into question by the very case of semantic knowledge.

Take BonJour’s (1985) argument that because knowledge requires “having a reason to think that one’s belief is true” (p. 235) but safe true belief does not, the latter cannot suffice for the former (pp. 34–57). Our knowledge of sentence-meaning suggests that this argument is unsound, for we often know what a sentence means while lacking any rationale for thinking we are right. (I argue for this in the next section, in reply to an objection from Lepore and Fricker that employs an anti-reliabilist, BonJourian premise.)

Or take Vogel’s (2000) argument that if reliable true belief suffices for knowledge, then bootstrapping is licensed—leveraging knowledge that q into knowledge that one’s belief that q is reliable—which is always “illegitimate” (pp. 613–15). The trouble with this argument is that bootstrapping is arguably legitimate in some cases of semantic knowledge. To see this, consider Davidson’s “pure Robinson Crusoe case” of an eternally isolated linguistic being. Crusoe could be in a position to know that he has reliable beliefs about what sentences of his language mean. This knowledge would have to be a posteriori. And it would have to be based on knowledge

wholly about what his utterances mean, for he is exposed to no other sentences of L. Assuming Crusoe is like us, when he utters some S of L and then perceives S, he immediately comes to know that S means p, say, and normally will knowingly believe that he believes that S means p. We can suppose knowledge like this exhausts Crusoe’s evidence about what sentences of his language mean; after all, all ‘external checks’ of the reliability of his semantic beliefs are foreclosed to him. But if this is all that Crusoe has to go on to base his knowledge of the reliability of his semantic beliefs, it seems the basing must go by way of what Vogel calls ‘bootstrapping’. Crusoe can combine his beliefs and come to truly and safely believe (and so know) that S means p and that he believes that S means p, and then deduce from this that his semantic belief is true, and then, by doing this every time he encounters a sentence, eventually be in a position to know by induction that his semantic beliefs are always true and so know (by abduction) that they are reliable/safe. How else could he do it?

There are of course many other objections to reliabilism. But I take it I have already said enough to justify not grappling with all of them.

3.6.6 The ‘baseless semantic beliefs’ objection

There is a recurring objection to Speaker Disquotationalism that can be rerun against Sentence Disquotationalism with equal effect. Simply put, the (rerun) objection is that semantic beliefs formed by following Disquote would be baseless, irrational, or somehow unreasonable, and so could not count as knowledge.

The original objection is raised by Lepore (1997), who objects to Speaker Disquotationalism on the grounds that a speaker who forms beliefs about what is said by following Said-That would lack any reason for his belief; he would be “clueless about why he believes” that so-and-so said that such-and-such, for “nothing in his head justifies his belief” (1997, p. 52). Fricker (2003) raises the same worry. She argues that if a speaker follows Said-That:

nothing within her own cognitive perspective connects that inclination to believe that such-and-such has been said with such-and-such indeed having been said. She hears some noise made by another person, and she finds herself thinking that he said some specific thing. But she did not hear him doing so. So she lacks any

internal rationale for forming a belief. She has no reason to think that what she finds herself inclined to believe is likely to be true. (p. 339)

Clearly, a similar problem arises for Sentence Disquotationalism. Lepore and Fricker could complain that a speaker whose semantic beliefs were formed by following Disquote would be similarly “clueless” about why they believe, say, that ‘Goats eat cans’ means that goats eat cans upon perceiving a token of ‘Goats eat cans’; they would lack an “internal rationale” or something “in their head” that justifies their belief; they would have no reason to think that their belief is true.49 And so it would not amount to knowledge. Let us label the constraint Lepore and Fricker place on an explanation of Understanding:

Reason: The explanation of Understanding must account for the fact that our semantic beliefs are rationalized by our having reasons to think our semantic beliefs are true.

3.6.6.1 Against Reason

In reply, I offer two arguments that when we know what sentences mean we do not usually have reasons to think that our respective beliefs about what those sentences mean are true, and so Reason should be denied. The first argument is a phenomenological appeal to pre-theoretic appearances. The nearest book to me at the moment is my copy of Quine’s Word and Object. Opening to page 233, I read at the top of the page a token of the following sentence:

One finds or can imagine disagreement on whether there are wombats, unicorns, angels, neutrinos, classes, points, miles, propositions.

I can immediately form a true belief about what this sentence means. I know what it means. But if someone were to ask me ‘Why do you believe that?’, demanding that I offer some reason to think I am right internal to my psychology, I would be at a loss for words. I would be unable to cite any reason for thinking that my belief is true beyond generalizations about the reliability of my semantic-belief-forming faculties that might occur to me after a bit of thought. But Lepore

49Fricker explicitly endorses BonJour’s view that knowledge requires having some reason to think that one’s belief is true (p. 339, fn. 21).

and Fricker both eschew (perhaps correctly) these generalizations as non-rationalizing.50 So it seems, on the face of it, that we ordinarily lack reasons to think our semantic beliefs are right. (The reasons Lepore and Fricker think we have in such cases will be considered below.)

Here is a second argument for this same conclusion. To maintain that semantic knowledge requires having reasons to think that our respective semantic beliefs are true, Lepore and Fricker must also maintain that semantic knowledge requires that we have reasons to think that we have semantic beliefs. For instance, one cannot have a reason to think one’s belief that ‘Goats eat cans’ means that goats eat cans is true that is not also a reason to think that one believes that ‘Goats eat cans’ means that goats eat cans. And if this reason must be “internal” or “in one’s head”, then, presumably, it must be some other belief.51 So Lepore and Fricker must maintain, then, that knowing that ‘Goats eat cans’ means that goats eat cans requires having a belief the having of which rationalizes (or would rationalize) believing that one believes that ‘Goats eat cans’ means that goats eat cans.

But this is no requirement of semantic knowledge! Often the simple-minded possess semantic knowledge and yet lack any beliefs that would rationalize their believing that they believe that sentences mean that such-and-such. They might be unable to believe that they believe that sentences mean things, in which case nothing they believe would rationalize them in having those beliefs.

Of course, Fricker acknowledges that there are simple-minded knowers; and so on her view it is only “knowledge by reflective agents” (my emphasis) that requires being “in a position to construct an epistemically rationalizing explanation of it” (2003, p. 340). But this restriction is no help. There are non-reflective agents that have semantic knowledge and of whom Understanding is true. For instance, Understanding is true of many very young children and of those with various mental disabilities. Could Lepore or Fricker insist that the account of Understanding must be different for reflective agents? No. It is implausible that Understanding is satisfied by the reflective differently than how it is satisfied by the non-reflective; adult comprehension and child comprehension, for instance, are not distinct natural kinds.

50 See Fricker 2003 (pp. 338–341) and Lepore 1997 (pp. 50–53).

51 Lepore 1997 is explicit about this (p. 53).

What this means is that any account of Understanding on which speakers must possess internal reasons for their semantic knowledge is a false account. Sentence disquotationalism cannot be faulted for failing to be such a false account. Reason must be denied.

3.6.6.2 Cognitivism fails to satisfy Reason

Moreover, it is arguable that cognitivism—Lepore’s preferred view—also fails to satisfy Reason.52 If cognitivism is true, our tacit knowledge of semantics puts us in a position to know semantic truths about our language. But does it also give us reasons to think that our semantic beliefs are true? Not if those reasons must be available to us in such a way that would cure us of being ‘clueless’ as to why we should think our semantic beliefs are true. For on the standard cognitivist picture, the semantic axioms and composition rules from which our minds ‘derive’ our semantic beliefs are not available to introspection. And so most of us are entirely clueless about the contents of the semantic theories we cognize.

For this reason, tacit knowledge of semantics would give us nothing to say and nothing to (occurrently) think that would rationalize our semantic beliefs, at least in the sense of giving us reasons to think they are true.53 And so it seems that cognitivism flouts Reason. Lepore cannot fairly complain that sentence disquotationalism does too.

3.6.6.3 Fricker on the perception of meaning

Fricker’s account of linguistic understanding is, roughly, that we perceive (or “quasi-perceive”) what things mean and that our perception of meaning provides us with reasons to think that our perceptually-based semantic beliefs are true.54 But if this view satisfies Reason, then disquotationalism does just as well, or so I will argue. This, plus the fact that Fricker’s account is independently implausible (as I will also argue), shows that it is no rival to sentence disquotationalism even if we grant Reason.

52Lepore thinks the “reasons” we have to think our semantic beliefs are true consist in tacit beliefs “about the sounds and shapes of the language itself”, ultimately based in tacit knowledge of a compositional semantics (of the Davidsonian variety) for our language (1997, pp. 52–53). 53It is granted that tacitly knowing a semantics would epistemically justify our beliefs in its semantic theorems (although even this might be questioned), but Lepore rightly distinguishes this from rationalization: “Someone’s belief being justified and his having another belief which rationalizes his belief are distinct” (1997, p. 51). 54McDowell 1981 (pp. 239–42) and 1987 (pp. 68–70) defend a similar view.

Making this case requires stating Fricker’s view in detail. She argues, in the first instance, that we perceive what people say; that our knowledge that someone said that p in uttering S is often based on, and rationalized by, seeing (or hearing) that they said that p in uttering S. So, like Schiffer, Fricker focuses primarily on our pragmatic knowledge of what is said. But it seems that Fricker would also want to say, in order to explain our semantic knowledge, that our beliefs about sentences’ meanings are rationalized by perceiving what they mean.55

Focusing on the account of semantic knowledge, then, let us ask: In virtue of what, for Fricker, does perception rationalize semantic beliefs formed on its basis? The answer is: in virtue of its phenomenology.56 For Fricker, it is because perceiving that p phenomenally “[presents] itself intrinsically to its subject as a confrontation with the fact that” p that it provides “by its intrinsic nature, a prima facie though defeasible ground for [the] belief” that p (2003, p. 341). But this is not the whole story. Let us ask next: Why, for Fricker, does the phenomenology of perceiving that p give us a reason to think that our perceptually-based belief that p is true? The answer is: because the phenomenology of perception is such that when we perceive that p, we know that we are perceiving that p; our knowledge (or belief) that we perceived that p then serves as our reason to think that our belief that p is true, enabling us to know that p.57 In the end, then, Fricker’s view is that, normally, when we know that S means that p, our belief that S means that p is rationalized by our knowing that we perceived that S means that p.

55Here is her description of the comprehending hearer (emphases mine): “She hears [the utterance’s] meaning in the utterance itself, experiences it as a semantically laden event. It is for her as if, in perceiving the utterance, she perceives not just the sounds, but equally perceives their meaning. It is a fact of phenomenology that we enjoy such understanding-experiences, quasi-perceptions of meaning” (2003, p. 324). 56Fricker claims that the necessary condition for semantic knowledge of having “epistemic means available to [one] to see how [one’s] belief is appropriately linked to the fact putatively believed in” is “satisfied in normal language use” partly “in virtue of the phenomenology of understanding” (p. 341); the “characteristic phenomenology of understanding is crucial to the epistemology of a hearer’s knowledge of what is said” (p. 342); the “very manner” of perception’s “presentation of its content means that it is, unlike a mere yen to believe, intrinsically a ground for belief” (2003, p. 342). 57This is made clear in Fricker’s discussion of “epistemic links” and the case of Ella and Petra (pp. 343–44). Here, Fricker argues that, ordinarily, a perceptually-based belief that p is rationalized by “citing the fact that one has been the subject of a known epistemic link, one which gives access to the kind of state of affairs one is claiming to know about”, such as the ‘epistemic link’ of having seen that p (p. 344). How do we know we stand in epistemic links? Because standing in one is luminous in virtue of its phenomenology: “Because being a subject of [an epistemic link] is something [one] is consciously aware of, when [one] is, [one] can know that [one] is, when [one] is” (p. 343).

Now, here is why this view satisfies Reason only if disquotationalism does too. Granted, knowing that one perceives that p is a reason to think that one’s belief that p is true. But I suspect that the best account of why this is so is that perception issues directly in knowledge; perceiving is a way of knowing.58 If this is right, then knowing that one perceives that p rationalizes believing that p because it suffices for knowing that one knows that p. And if so, anything sufficient for knowing that one knows p is sufficient for having a reason to think that one’s belief that p is true.

But it is not implausible that knowing that p suffices for knowing that one knows that p.59 And this means that Fricker’s view satisfies Reason as well as disquotationalism. For if, as I have argued, following Disquote issues in semantic knowledge, and if it thereby issues in knowledge of semantic knowledge, then speakers who form semantic beliefs by following Disquote will have reasons to think that those beliefs are true (i.e., they will know that they know their contents).

For this argument to go through, I do not need to establish the KK principle. I only need to point out that the following principles are nearly equally plausible:

(PK) If x perceives that p, then x knows that x knows that p.
(KK) If x knows that p, then x knows that x knows that p.

If Fricker’s view satisfies Reason, it does so in part because (PK) is true. And if disquota- tionalism satisfies Reason, it does so in part because (KK) is true. So, Fricker’s view has the advantage only if (PK) is in better standing than (KK). But it is not. For counterexamples to (KK) can be modified into counterexamples to (PK). If we can imagine a creature who knows that q but does not know that they know that q, then we should be able to imagine a creature (perhaps the same creature) who knows that q on the basis of perception and does not know that they know that q. So it is doubtful that (KK) is significantly less plausible than (PK). Reason-based considerations do not pull in favor of Fricker’s view over disquotationalism.

58Seeing is a determinate of knowing, as T. Williamson 2000 argues (pp. 37–39), or it at least suffices for it. See also Dretske 1969 (pp. 81, 87, 119 fn. 1, 124). 59See Greco 2014, Stalnaker 2015, Das and Salow 2018, and Dorst 2019.

But even if they did, it would not matter. For Fricker’s view that we know what sentences mean by perceiving what they mean is too incredible to rival sentence disquotationalism. The problem with her view is that if sentence-meaning can be perceived, then perceptual contents can be as rich as natural language sentences. For to any word of a humanly learnable natural language, like ‘soothing’, there will correspond, on Fricker’s view, a possible perceptual content, like that ‘Tea is soothing’ means that tea is soothing. If this proposition can figure as a content of a perceptual state, then properties as specific and ‘high-level’ as soothingness can be perceptually represented. But this means that almost any linguistically expressible content can, in principle, be the content of a perceptual state. If there is a perceptible, interpretable sentence S that expresses the content p, then the proposition that S means p can be perceptually represented, and so it can be perceptually represented that p. So, for Fricker, the ‘representational power’ of perception subsumes the expressive power of language. This is implausible. It is hard enough to imagine that there could be a perceptual state with (8) as its content, let alone imagine that there could be distinct perceptual states with (9) and (10) as their contents:

(8) The tree is stupendous.
(9) The tree is humongous.
(10) The tree is gargantuan.

Perception’s grain is not so fine.

One cannot object that (8) and (9), say, express the same perceptual content because ‘stupendous’ and ‘humongous’ are synonyms (or perhaps ‘perceptually synonymous’). First, it is not at all clear that they are synonyms. And second, if meaning is perceived, we will need to distinguish perceiving (11) from perceiving (12),

(11) ‘Everything stupendous is humongous’ means that everything stupendous is humongous.
(12) ‘Everything stupendous is humongous’ means that everything humongous is humongous.

for the reason that one can immediately know (11) upon perceiving ‘Everything stupendous is humongous’ without knowing (12). I at least do not know that ‘stupendous’ and ‘humongous’

are synonyms, and so know (11) but not (12). Thus we cannot say that (8) and (9) express the same perceptual content.

Lastly, Fricker’s account also exacerbates the problem of non-reflective agents. We have seen that it is implausible to require speakers of whom Understanding is true to have rationalizations for their semantic beliefs (in advancing Reason); this would require them all to have beliefs about their semantic beliefs, and the non-reflective among them do not. But Fricker now requires that these speakers know that they perceive the contents of those semantic beliefs. This requires that someone who comes to know that ‘Bears fly’ means that bears fly, in the ordinary way after hearing ‘Bears fly’, knows that they heard that ‘Bears fly’ means that bears fly. But it seems that most of us, especially the non-reflective, do not think that we hear that sentences mean things.

In light of all this, I think it safe to say that Fricker’s view poses no challenge to sentence disquotationalism.

3.6.7 The indication objection

The final objection I wish to consider challenges the disquotationalist to implement their view without presupposing the controversial doctrine of the language of thought. More precisely, the challenge is to implement Disquotationalism without assuming the following account of belief:

Mentalese: x believes p just if there is a token of x’s language of thought in x’s belief-box that means p.

The most dialectically salient reason to reject Mentalese is that one finds plausible something like the causal-pragmatic account of intentionality defended by Stalnaker 1984, according to which:60

Indicationism: x believes p just if x is in some state that indicates p.

60Stalnaker 1986 replies to objections to Indicationism from Schiffer 1986 and Field 1986, both of whom lean more towards Mentalese. See also Stalnaker 1999 and Stalnaker 2010, the latter a defense of Indicationism in light of objections from Stanley 2010.

where indication is a so-called ‘coarse-grained’ relation, or, a relation such that, necessarily, if a state indicates p and p entails q, then that state also indicates q; indication is ‘closed under entailment’.61

Mentalese and Indicationism are inconsistent. That is, they are inconsistent given the plausible assumption that it is possible for a finite being to be in a state indicating some p that entails infinitely many distinct propositions. It is presumably not possible for a finite being to have infinitely many sentences of their language of thought tokened in their belief-box. So, given that assumption, if Indicationism is true, then Mentalese is false.

So let us take Indicationism for granted. In a sense, this makes explaining Understanding too easy. Imagine an ordinary case in which you perceive Bill utter ‘Goats eat cans’. Ordinarily, you will believe that Bill is speaking English, and so you will believe (13):

(13) Bill utters ‘Goats eat cans’ while speaking English.

But (13) entails (14) which entails (15):

(14) The sentence Bill utters means what it means in English.
(15) ‘Goats eat cans’ means that goats eat cans.

So, in any case in which you are in a state of believing, and thus indicating, (13), you will also indicate (15), and so will believe (15). Moreover, this belief will plausibly count as knowledge, assuming that one’s perception-based belief in (13) also counts as knowledge.

In short, if Indicationism is true, then knowing that an English-speaker uttered a sentence of English is sufficient for knowing what that sentence means. But for this very reason, if Indicationism is true, we might think that Understanding as stated above does not cry out for explanation. Rather, what cries out for explanation, given Indicationism, is something like this:

61The account of indication defended in Stalnaker 1984 can be stated as follows, where ‘C’ abbreviates ‘optimal conditions obtain’ (pp. 13–14, 18–21): state s of x indicates p just if: if C were true, then (i) x would be in s only if p, and (ii) either x would be in s because p is true, or there would be some proposition q such that q entails p and such that x would be in s because q is true. The ‘because’ in this definition expresses something like causal explanation or causal dependence. But note that this definition of indication entails that indication is closed under entailment only if causal dependence is coarse-grained, or, a matter of counterfactual dependence.

Understanding+: Normally, when we perceive S, we know what S means and can access this information.

For although knowing (13) suffices for knowing (15), it does not seem that accessing the information that (13) is true suffices for accessing the information that (15) is true.

Indeed, anyone who endorses Indicationism needs an account of how a subject can be in a state that indicates some p that entails q while accessing p but not q, where accessing q is something like consciously thinking about q or being able to act on the basis of q in thought and outwardly. For we can surely think about the fact that Fido is a dog without thinking about the fact that there are mammals, even if, strictly speaking, believing the former requires believing the latter.

I recommend, then, that Disquotationalism should be reformulated as the thesis that the mechanisms that are causally responsible for which information we access when—our ‘information-access mechanisms’—follow this rule: When the information expressed by ⌜... ‘S’ ...⌝ is accessed, access the information expressed by ⌜‘S’ means that S⌝! Letting ⌜[S]⌝ abbreviate ⌜the mental act of accessing the information expressed by the sentence S⌝, we can express the rule as follows:

Disquote**: If you perform [⌜... ‘S’ ...⌝], then perform [⌜‘S’ means that S⌝]!

Now, in order to follow Disquote**, we must be such that, for any sentence S of our language, whenever we are in a state indicating the information expressed by ⌜... ‘S’ ...⌝, we are in a state indicating the information expressed by ⌜‘S’ means that S⌝. But we have seen that often enough we satisfy this constraint, whenever we know which language we or our interlocutor is speaking.

Following Disquote** also requires, of course, the capacity to perform mental actions like [‘Goats eat cans’ means that goats eat cans]. But the problem of accounting for this capacity is just part of the project of spelling out Indicationism. How are we able to ‘raise to consciousness’, so to speak, the information that our inner states indicate? It is not the disquotationalist’s job to propose an answer.62

But the disquotationalist is perhaps obliged to argue that the correct answer does not presuppose semantic cognizing. Re-enter the language of thought! Even if Mentalese is false—if tokens of Mentalese are not the realizers of our beliefs—they might still be realizers of states of information-access. Perhaps we have a ‘thought-box’ in which our mind tokens sentences of Mentalese in order to thereby access the information they express. If so, we might propose the following on behalf of the advocate of Indicationism:

Access: x performs [S] just if x’s thought-box contains a token of S.

We can then explain how we can perform [‘Goats eat cans’ means that goats eat cans]: we have the capacity to token ‘‘Goats eat cans’ means that goats eat cans’ in our thought-box. This capacity, I take it, does not plausibly require semantic cognizing, at least not unless the language of thought itself does. So it seems to me that the disquotationalist can maintain their position even without endorsing Mentalese.

62See Elga and Rayo 2019 for a proposal.

Chapter 4

What might knowledge of grammar be?

4.1 The trouble with psychogrammars

In the 1960s, Chomsky proposed an explanation of the semantic and syntactic facts about our language, and of our linguistic capabilities, that attributed tacit knowledge of grammar to speakers.1 Many philosophers were mystified, and reacted with skepticism. If one were to list those who have argued or intimated that something is amiss with the notion of tacit knowledge of grammar—or with the type of explanations in which it figures—it would read like a who’s who of the philosophy of language.2

Perhaps unsurprisingly, Chomsky prevailed, and birthed an enterprise. And by now, philosophers have changed their tune; many have proposed or gestured at accounts of what it is to tacitly know a grammar, accounts they take to demystify psychogrammars and to blunt the force of philosophical objections to the explanations of linguistic facts in terms of such knowledge.

In this chapter, I argue that none of these accounts are satisfactory, siding in many ways with the ‘first wave’ of Chomsky’s philosophical reception. Specifically, I will argue that the theoretical role that states of tacit knowledge of grammar, or ‘psychogrammars’, are supposed to play—the role they are said to play according to established psycholinguistic theories—cannot

1For these early proposals, see Chomsky 1964 (pp. 7–27), Chomsky and Halle 1965 (pp. 99–106), Chomsky 1965 (pp. 3–9), and Chomsky and Halle 1968 (pp. 3–4). 2A mere sampling (ignoring this century and most of the 80s and 90s): Blackburn 1984a (pp. 26–38), Cohen 1970, Davidson 1970, Devitt 1981 (pp. 70, 95–110), Dretske 1974, Dummett 1981b, Goodman 1967, 1969, Grandy 1972, Hacking 1975 (pp. 57–69), 1980, Harman 1967, 1969, Kripke 1982 (p. 30, fn. 22), Lewis 1969, Putnam 1967, Quine 1969, 1972, Ryle 1974, Searle 1972, and Stich 1971, 1980.

be played by the kinds of states with which they have been identified. The job description of a psychogrammar, or, ‘the psychogrammar-role’, includes figuring in how language is grounded and used. The received scientific conception of the linguistic says: What constitutes the fact that English is my language (or that I ‘know’ English) is the fact that I tacitly know or ‘cognize’ a grammar for English.3 And what enables and constrains me to use English in the ways that I do, in both outer communication and inner thought, is the grammar for it that I cognize.4 So, an adequate theory of psychogrammars must satisfy the following desiderata:

Ground: A speaker has a language in virtue of cognizing a grammar for it.
Use: A speaker’s capacities for comprehending and/or producing expressions of a language are enabled and constrained by which grammar they cognize.

But no candidate theory of psychogrammars satisfies these, or so I will argue. Although I will discuss these proposals one-by-one and object to them in a piecemeal fashion, there is something of a recurring problem that arises for all of them. They either make psychogrammars too dependent on prior facts about linguistic performance, in apparent tension with Use, or they make psychogrammars too dependent on prior facts about speakers’ languages, in apparent tension with Ground. In short, no plausible account of psychogrammars seems to at once render them ‘pre-linguistic’ and ‘pre-performative’.

If my arguments go through, we are left with a puzzle. We seem to have good reason to think that the psychogrammar-role is played. Our best psycholinguistic theories postulate that psychogrammars are the doers of such-and-such such that Ground and Use are true. And so we should accept that there are doers of such-and-such. But it seems that we should also be confident that if the psychogrammar-role is played, it is played by one of the many kinds of things psychogrammars have been taken to be. Or at least we should think so insofar as we are

3Chomsky 1986 asks “What constitutes knowledge of language? [...] The answer [...] is given by a particular generative grammar, a theory concerned with the state of the mind/brain of the person who knows a particular language,” that “state” being a psychogrammar (pp. 3–4). 4On this point, Chomsky writes that it is a speaker’s “internalized system of rules that determines sound- meaning connections for indefinitely many sentences” that “enables him to produce and interpret sentences that he has never before encountered” (1968, p. 3). For textbookifications of this, see Radford 1988 (pp. 29–30) and Harley 2014 (pp. 36–7). See also Chomsky 1968 (pp. 26–27), Chomsky 1980 (pp. 200–1), Chomsky 1995a (pp. 26–27), and Chomsky 2012 (p. 69).

confident that the theorists of the psychogrammatical are not collectively clueless about what psychogrammars are (if they exist).5 And yet I say none of those kinds of things are up to the task of satisfying Ground and Use. And so we seem to have good reason to think that the psychogrammar-role is unplayed.

This puzzle will be left not fully resolved. But in the last section I will lay out a path forward. To spoil the ending, I suggest that the psychogrammar-role may not be played by one thing; the role is divided. There is no one ‘pre-linguistic’ basis determining our language while simultaneously enabling and constraining performance. Rather, to have a language just is to have a particular performance capacity. And that capacity is not enabled by anything involving a psychogrammar.

4.2 Desiderata for a theory of psychogrammars

First, some stage-setting: let us unpack Ground and Use a bit more, to get clearer on what it would take for an account of psychogrammars to accommodate them.

Ground says that psychogrammars explain speaker-relative linguistic facts. These explanations are synchronic and constitutive. More specifically, which grammar a speaker cognizes grounds which language that speaker has:

Language: For any speaker x and language L, x has L just if and because x cognizes a grammar G for L.

By grounding a speaker’s language, that speaker’s psychogrammar also constitutes the expression-specific semantic and structural linguistic facts about that speaker’s language. If we think of a language L as a function from expressions to semantic values, then we can appeal to language-having to explain speaker-relative semantic facts as follows:

5In other words, it seems safe to say that we should not think that psychogrammars exist and that nobody has any idea what they might be. To think otherwise would be to classify psychogrammars along with the noumenal theoretical posits of fundamental physics. But these theoretical entities are such that if they exist, then there is a good reason why experts on the physical have no idea what they are, i.e. have no idea what it is to have mass. That reason is that these notions are plausibly fundamental, and do not admit of explanation or analysis in other terms. Still, I suppose that ‘mysterianism’ about tacit knowledge of grammar is a live option; I have no argument against going that route. (Perhaps Chomsky 2009 can be read as leaving open such a position.)

Semantics: For any speaker x, expression e, and meaning m, e means m for x just if and because x has some L such that L(e) = m.

And if we think of expressions as individuated by their linguistic structure—modeling a linguistic expression e as, say, a pair of a phonological (or phonetic) form and a logical form, ⟨PF, LF⟩, to follow Chomsky (1995b)—we can appeal to language-having to explain speaker-relative syntactic and phonological facts in a similar way:

Syntax:6 For any speaker x, expression e, and logical form LF, e has logical form LF for x just if and because x has some L such that e is an expression of L and LF is the second member of e.7
Phonology:8 For any speaker x, expression e, and phonological form PF, e has phonological form PF for x just if and because x has some L such that e is an expression of L and PF is the first member of e.

So I take Ground to entail (at least) the conjunction of Language, Semantics, Syntax, and Phonology. So, if Ground is true, it follows that expressions have semantic values and logical

6The “Logical Form” of a sentence S, in the sense of Chomsky 1995b, is a structured abstract object somehow encoding or representing exactly the syntactic information about S that is relevant to its semantic interpretation; it would be better to call it ‘semantic form’ (Szabó 2012, p. 105). For helpful philosophical discussions of LF, see Neale 1993, Ludlow 2002, and King 2002. For an argument that LF “should be construed as instantiating the properties that philosophers have traditionally ascribed to logical form,” see Neale 1994 (p. 583). 7An LF is the second member of e just if {LF, 2} ∈ e. 8In Chomsky 1995b, ‘PF’ is actually short for ‘Phonetic Form’, but in contexts in which his minimalist account of syntax is discussed, PF is often also called ‘Phonological Form’. For an argument that Chomsky’s use of ‘PF’ is ambiguous between phonetic and phonological form, see Scheer 2010 (pp. 616–18). On my non-expert understanding of the phonetics/phonology distinction, a sentence S’s phonological form is an abstract object representing the structure of the sound associated with S as it is mentally represented by the speaker, whereas S’s phonetic form is an abstract object representing the physical structure of the sound associated with S; here I follow Myers 2000 (pp. 245–46). But I also trust the assessment of Carr 2012 that there “is no consensus in the phonological literature as to whether it is possible to adopt a clear distinction (or indeed, any distinction) between phonetics and phonology,” and that “there is no clear sense in the phonological community of what the relationship between the two might be” (p. 403). For relatively accessible-to-philosophers discussion of the (alleged) distinction between phonetics and phonology, see Bromberger and Halle 1986 (pp. 139–43), 1989 (pp. 51–3), 2000 (pp. 17–21, 30–7), Bromberger 2012 (pp. 83, 88–92) and Carr 2012 (pp. 403–12).

and phonological forms for speakers because of the grammar they cognize.9

Now to Use. It is implicit in Use that which grammar we cognize enables and constrains our capacities to use language in such a way that this grammar-cognizing (our ‘linguistic competence’) is in an important sense prior to and distinct from these practical capacities (our ‘linguistic performance’). I will take it that this imposes the following constraint: If a speaker x cognizes a grammar G for L, then their cognizing G is not a performance state. The notion of a performance state can be defined as follows:

Performance State: For any speaker x and language L, if x’s language is L, then: state S of x is a performance state just if x is in S either (i) partly in virtue of facts about how they use or are able (or know how) to use L in thought or speech, or (ii) partly in virtue of facts about the physiological bases of those abilities (or that know-how).

So, the state I am in now of having once uttered ‘Goats eat cans’ is a performance state, as is my state of being able to utter ‘Goats eat cans’. And the state of my speech perception and production systems—my ‘articulatory-perceptual’ or ‘sensory-motor’ interface or perhaps rather my ‘parser’ and ‘producer’—that grounds my capacity to utter ‘Goats eat cans’ is also a performance state.10

So Use entails that psychogrammars are not performance states but that they do somehow causally enable speakers to enter performance states; and to say that a psychogrammar is never a performance state is just to say that, when a speaker cognizes a grammar, this is not because

9Many have adopted the schema Semantics, taking it that an account of ‘the actual-language relation’ (Schiffer 1993) will do good metasemantic work. But none have to my knowledge adopted the schemata Syntax and Phonology. This is likely because linguistic expressions, or, things in the domain of a language L, have been thought of as types or kinds of worldly entities, ‘sounds or marks’; a linguistic expression is often said to be any “finite sequence of types of vocal sounds or types of marks,” following Lewis 1969 (p. 142) and Quine 1960 (p. 194–95). But this conception of linguistic expressions is, first, not argued for by Lewis or Quine and was advanced by stipulation, and, second, seems impoverished and by now quite unscientific. So I suggest that we follow the linguists in at least taking expressions to be one-one with tuples of structures. I admit that the issue of how to individuate expressions may be a mere book-keeping issue. But by keeping the books my way, we can put the actual language relation to work in ‘metasyntax’ and ‘metaphonology’, as well as in metasemantics, and unify the three enterprises. 10For details on the posited relation between psychogrammars and the sensory-motor interface, see Berwick, Friederici, et al. 2013 and Friederici 2017 (pp. 85–99).

of facts about their sensory-motor systems, the capacities they realize, or the manifestations of those capacities. I take it that this is part of what sharply drawing the competence/performance distinction amounts to.11

Next, a word on terminology. I will be assuming that a grammar is tacitly known by a subject x just if it is cognized by x, and just if it is psychologically real for x, and just if it is ‘internally represented’ in x; or, in other words, just if x possesses a psychogrammar (George 1989). This is somewhat idiosyncratic. One might say that grammars could be psychologically real while not tacitly known, by arguing that necessary conditions for knowledge are not met by the psychological relation in which we stand to grammars. Or one might say that grammars could be cognized while not internally represented, by arguing that cognizing something does not require that one carries around a literal representation of it inside of one. And so one might want to draw important distinctions between notions I take to be equivalent.

The reason I ignore these distinctions is that I treat ‘tacitly known grammar’, ‘cognized grammar’, ‘internally represented grammar’, ‘psychologically real grammar’, and the like as terms of art. They are theoretical terms whose meaning-determining use is in the mouths of linguists and cognitive scientists in neighboring disciplines. They are used with the intention of picking out a mental state that they take to fulfill a job description implicitly specified by the place of these terms in the true psycholinguistic theory. As such, it may be that ‘tacit knowledge of grammar’ does not refer to a kind of knowledge, and it may be that ‘internally represented grammars’ are not grammars literally represented inside us.

Next we will survey some theories of what psychogrammars are, or, more specifically, of what it is to cognize a grammar or that in virtue of which one cognizes a grammar.

11Or, at least, I take it that this is what one salient way of drawing the distinction amounts to, a way that is faithful to how it has been characterized by Chomskyans. I am wary, however, that even by 1975 “up to eight different versions of the distinction” could be identified in the literature “without even trying very hard” (G. A. Miller 1975, p. 201).

4.3 Theories of psychogrammars

4.3.1 Working theories

To get things rolling, I want to consider a few theories of what it is to cognize a grammar that are given as ‘working theories’ in the literature, theories advanced to give the reader enough of a handle on the notion of a psychogrammar so as to make it seem non-mysterious. Not surprisingly, all of these theories are stated much too quickly, and wind up being inadequate. It will be instructive, I think, to see why these accounts fail to satisfy our desiderata.

In an early paper, Harman (1973a) suggests a fairly simple, straightforward way of under- standing what it is to cognize a grammar:

there is a trivial sense in which, in learning a language, one forms a representation of the rules of the grammar of the language. We can trivially let the principle of representation be this: a person p at time t represents grammar g if and only if, at t, p knows the language for which g is the grammar. (p. 462)

Letting l be a function from a grammar G to the language it is a grammar for, l(G), the proposal is that a speaker cognizes G just if they know l(G).

But it cannot be that simple. One problem is that there are too many different grammars for any given language. So if English is l(G1) = l(G2) = ... = l(Gn), then Harman’s proposal entails that I cognize each of G1, G2, ..., Gn. This is an excrescence.

A worse problem is that Harman explains what it is to cognize a grammar in terms of knowledge of language. But the direction of explanatory priority is supposed to run in the opposite direction, as per Ground. If the reason why I cognize a grammar for English is that my language is English, then it cannot be that the reason why my language is English is that I cognize a grammar for it. So Harman’s proposal fails to satisfy Ground.
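Harman's proposal and the overgeneration objection can be put schematically; the predicates Cog and Know here are my shorthand, not Harman's:

```latex
% Harman's proposal: to cognize a grammar is to know the language it generates.
\mathrm{Cog}(x, G) \;\leftrightarrow\; \mathrm{Know}(x, l(G))

% Overgeneration: distinct grammars can generate one and the same language,
l(G_1) = l(G_2) = \cdots = l(G_n) = \text{English},
% so any speaker who knows English thereby counts, by the biconditional,
% as cognizing each of G_1, G_2, \dots, G_n.
```

The second objection then falls out of reading the biconditional right-to-left as an explanation: it makes knowing l(G) explanatorily prior to cognizing G, which is the reverse of the direction Ground requires.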

Kim (1984b) suggests an alternative yet equally simple way we might understand attributions of psychogrammars, on which a "speaker of [a] language "knows" [a grammar], at least implicitly—in the following rough sense: we can explain his speech by positing that he knows

[this grammar]" (p. 311).12 The idea, I take it, is something like the following: a speaker cognizes G just if their linguistic performance is explainable by positing that they cognize G.

Now, the first thing to note is that, if this account has any hope at all of specifying a sufficient condition for cognizing a grammar, then the ‘explainable’ here arguably ought to be ‘correctly explainable’. For anyone’s linguistic performance might be explained, albeit incorrectly, by positing that they cognize any grammar. But once the account is amended it is clearly circular. If a speaker’s linguistic performance is correctly explainable by positing that they cognize G, then that must be because the reason why their performance is thus-and-so is that they cognize G. So the account entails that we cognize a grammar in part because we cognize a grammar. This is unacceptable.

But even if this circularity could be explained away, Kim's proposal would still be inadequate because it fails to satisfy Use. For if his proposal is true, it is the facts about a speaker's linguistic performance that determine which grammar they cognize. And so, on Kim's proposal, psychogrammars turn out to be performance states.

The last too-quick account I want to consider comes from a suggestion of Chomsky's (1980). In answering those who have challenged whether he has offered any reason to believe that grammars are psychologically real, he considers the related question of what it is for a grammar to be psychologically real. Chomsky then questions whether there is any distinction between "psychological reality" and "truth, in a certain domain", namely, the domain of psychology (pp. 106–7). He then says that he is "not convinced that there is any such distinction," and that he can "see no reason not to take our theories" that postulate that we cognize grammars "tentatively to be true at the level of description at which we are working," where that level of description is, presumably, the psychological level of description (p. 107).

I read Chomsky here as saying, correctly, that a speaker x cognizes grammar G just if the true and complete psychological theory says that x cognizes G. And he is also correct in arguing that he has offered reasons to believe the left-hand side of this biconditional insofar

12Kim is here giving an account of tacit knowledge of a “semantical theory”, or, of the semantic component of a grammar (1984, p. 311).

as he has offered reasons to believe the right-hand side.13 But this biconditional is no account of that in virtue of which grammars are cognized. If the true psychological theory T says that x cognizes G, that is because (i) T says that x cognizes G and because (ii) x cognizes G. So psychogrammars are presupposed by, and not explained by, the fact that the true psychology posits psychogrammars. There is a distinction, then, between the psychological reality of a grammar and the truth of a theory positing it.14 And we have yet to see in what the former might consist.

4.3.2 Schematic theories

Many endorse the view that speakers' psychogrammars metaphysically depend in some unspecific way upon some underlying physical, usually neurophysiological, state(s) of speakers. Unfortunately, these vague schematic statements about speakers having psychogrammars somehow in virtue of their neurophysiology almost always exhaust what these authors say about what it is to have a psychogrammar. At the end of the day, these schematic theories must give way to some more specific account of how psychogrammars are realized.

Here are some examples of theorists who believe in psychogrammars and endorse that they in some generic sense depend upon underlying physical or neurophysiological states of speakers: Higginbotham (1983) speaks schematically about a speaker's grammar being "realized in his physical or psychological state," about its "physical or psychological basis" (p. 177), and elsewhere about how the truths about a speaker's psychogrammar will "supervene upon" their "mental and physical states" (1991, pp. 559–65); Ludlow (2011) refers to the "lower level physical processes" and "the low level biophysical state upon which the psychogrammar supervenes" (pp. xvii, 63);15 Pietroski (2018) speaks about psychogrammars as "biochemically realized" and as involving "biologically implemented generative procedures" (pp. 69, 8); Hornstein (2009) says that "grammatical process[es]" must have an "implementation in brain like

13But as Cummins and Harnish 1980 rightly point out, Chomsky does not adequately argue that his linguistic evidence for his linguistic theory really is evidence to believe the right-hand side. A well-confirmed linguistic theory containing the sentence ‘We cognize grammars’ is a reason to believe that the true psychological theory says that we cognize grammars only if the subject matter of that linguistic theory is the mind. But, Cummins and Harnish point out, “linguistic evidence itself can’t tell us whether linguistics is about the mind” (p. 18). 14See also Harman 1980 (pp. 21–22) and Rey 2003 (pp. 108–9). 15Ludlow takes the physical subvenient state to be ‘widely’ not ‘narrowly’ individuated (pp. 117–18, 140).

material," and "must ultimately be embedded in brain circuitry" (p. 3); a psychogrammar must be "realized" in "neural circuitry" and "supervene" upon "brain structure" or "the computational circuitry and wiring that the brain embodies" (pp. 156, 9); and we find similar generic appeals to the 'physical realization' of psychogrammars, and therein appeals to some unspecific relation of asymmetric dependence, throughout Chomsky.16

None of these authors, however, explicate which 'supervenience', 'realization', or 'implementation' relations they have in mind. In fact, it is difficult to catch anyone anywhere in the philosophy of linguistics literature offering an explicit account of the specific dependence relation in which psychogrammars stand to their subvening states.17 This is striking, given the huge amount of attention that so-called 'inter-theoretic relations' (or 'small-g' grounding relations, as Wilson (2014) helpfully calls them) have been given in the philosophy of mind and cognitive science more broadly.

But perhaps there is a reason for this. For Chomsky also argues that "the relation of brain and mind," and so of physiogrammars and psychogrammars, "is a problem of the natural sciences," and so, presumably, not a matter for armchair speculation (Chomsky 1986, p. 40). If so, is it not the scientist's job to eventually fill in these schematic appeals to 'realization' or what have you? No. Insofar as Chomsky is correct on this point, what is an empirical matter is which mental goings-on are dependent upon which neural goings-on. This is in accord with his hypothetical case of an empirical discovery that there is a "complex of neural mechanisms" N that "corresponds" to a grammar with principles P1, ..., Pn but not to one with "equivalent" principles Q1, ..., Qn, which allows us to safely infer that a psychogrammar involving P1, ..., Pn,

16A psychogrammar (emphases all mine) must be "physically realized in a finite human brain" (Chomsky and Halle 1968, p. 6), "must structurally correspond to some features of brain mechanism" (Chomsky and Katz 1974, p. 364), has a "neural basis" (Chomsky 1975b, pp. 8, 40), is "coded" and "realized in some arrangement of physical mechanisms" (1982, p. 15), must somehow "arise in the mind/brain" (1988, p. 3), is "determined by the nature of the mind/brain" (p. 36), has "physical structures of the brain" as its "basis" (p. 185), is, again, "realized in mechanisms of the brain" (1995, p. 15), and has a neural "basis" and "implementation" (2016, pp. 50, 110); we also read that "one task of the brain sciences is to determine what it is about [the] brain by virtue of which" speakers have psychogrammars, or to "discover the mechanisms that are [its] physical realization" (1986, p. 22), and that facts about the brain "explain" and are "responsible for" the character of a particular psychogrammar (p. 39), and that the "mind/brain" somehow "yields" a psychogrammar (p. xxvi). See also Chomsky 2002a. 17The one exception is the account offered by Evans 1981, Davies 1987, and Peacocke 1989. But even these authors are less than explicit about what their accounts entail regarding the metaphysical status of psychogrammars vis-à-vis the brain. I address their view in 4.3.5.

and not the other principles, is realized by N (p. 40). Suppose Chomsky is right that in such a case we might empirically discover that a certain grammar G is realized by N. Still, the nature of this realization (or 'correspondence') relation is up for grabs. The following is a philosophical question: How is it that possessing neural mechanism N is metaphysically sufficient for cognizing a grammar with principles P1, ..., Pn? Or how can we make sense of this? To put this another way, appeals to psychogrammatical-neural supervenience are plausibly explanatorily empty when taken on their own. It is by now a cliché that mere modal dependence is insufficient for whatever relation of explanatory, metaphysical dependence is signaled by 'in virtue of' talk.18

18Standard citation practices suggest that this point was not fully appreciated until the arguments of Horgan 1993, Fine 2001 (pp. 10–11), and Wilson 2005 took hold, and that it did not become a piece of 'metaphysical commonsense' until the later explicitly non-modal accounts of generic metaphysical dependence (i.e. 'grounding') of Schaffer 2009, Rosen 2010, K. Bennett 2011, and Fine 2012 became canonized. But this is misleading. Early champions of supervenience, like Lewis and Kim, never endorsed psychophysical supervenience on its own as exhausting their view of the sense in which the mental is 'nothing over and above' the physical. Lewis took supervenience to be a result of semantic relations between the terms expressing supervening properties and the terms expressing subvening properties, and held that it is ultimately because of this semantic dependence that the mental truths are 'made true' by physical truths (as Horgan 1984 explains (pp. 31–35): "[Lewis] tells me that he agrees with the thesis that the principles whereby all truths supervene upon microphysical truths can only be principles of meaning" (p. 38 fn. 18); for details, see Lewis 1972, 1974, and see Kim 1990 (pp. 25–26) and Horgan and Timmons 1992 (pp. 234–240) for acknowledgments of this). As for Kim, in his first paper on supervenience (Kim 1978) it is only taken to establish 'dependence' in the very weak sense of backing Nagelian reduction (E. Nagel 1961), and even then only "under certain conditions" (p. 154); and he also takes supervenience to issue in nothing-over-and-aboveness only when the set of subvenient properties are "micro-reducible physical properties" (p. 155), or, physically reducible à la Nagel (p. 152). Kim offers no argument here that supervenience alone suffices for nothing-over-and-aboveness. And even if he can be read as arguing that supervenience always suffices for Nagelian reduction, it is not as if the latter was ever widely confused with metaphysical dependence.
Tellingly, ‘metaphysics’ and its cognates only appear in E. Nagel 1961 in square quotes, as does ‘more ultimate than’ and ‘metaphysically prior to’ (p. 315). Despite these anti-metaphysical gestures, Nagel worries that questions about what he means by ‘reduction’—which he takes to be a matter of “the organization of knowledge,” “the strategy of research,” and of “logical relations between sciences constituted at a certain time”—will be misconstrued “as if they were about some ultimate and immutable structure of the universe” (p. 361, 364). Those who take Kim as attempting to limn the structure of reality with supervenience are guilty of this misconstrual; for as Kim 1993 admits, he takes questions about what “really” exists, about what is “ontologically prior” to what, and about the “metaphysical nature” of things to not have “true answers,” or, “answers that are true because they correctly depict some pre-existing metaphysical order of the world” (p. ix). Although sections of Kim 1982a,b, 1984a, 1987 can admittedly be read as advancing the thesis that certain varieties of asymmetric supervenience suffice for metaphysical dependence, this proposal was hedged and tentative (1984, p. 166) and anyway short-lived, retracted by Kim 1989 (p. 40–42). (See also Kim 1990, in which dependence is claimed to be “metaphysically deeper and richer than what can be captured by” purely modal notions (p. 16).) 
For the historical record, I speculate that the fact that metaphysical dependence is not just supervenience was acknowledged by the early 1970s, perhaps first by Horgan in his 1974 dissertation, “Microreduction and the Mind–Body Problem,” (and so, presumably, it had also occurred by then to his advisor, Kim); Horgan cashes out mind–body dependence in terms of supervenience relations backed by “one-way property correlation laws” or “microreductive connecting principles” linking up supervening properties to subvening properties, principles he takes to establish a “very intimate” relation of “constitution” that holds between mental and physical events, a relation weaker than identity but that suffices for one event’s being “nothing over and above” the other and is

So any supervenience-based account of psychogrammars is schematic and incomplete. One might naturally reach for something like a functionalist account of psychogrammars here. For an account on which psychogrammars are functionally realized by neural states might explain psychogrammatical-neural supervenience. Indeed, it is not implausible to think that it might best explain it, or at least best explain it in physicalistically acceptable terms.19

And it would also most straightforwardly make sense of the above schematic talk of 'realization'. So I will examine a functionalist account of psychogrammars in the next section.

The schematic talk of 'implementation', on the other hand, suggests that psychogrammars are computational states, perhaps states computationally implementing some program or set of functions corresponding to a grammar. And so I will consider computationalism about psychogrammars afterwards.

4.3.3 Functionalist theories

4.3.3.1 Psychofunctionalism

Many have advocated the view that mental states should be given a 'psychofunctionalist' treatment.20 On this view, a theoretical term τ included in the true and complete psychological theory T, if it picks out some mental property of states (like being a belief or being a psychogrammar), is associated with a very complicated causal-functional role, the τ-role, corresponding to the theoretical role or 'place' of τ in T. As this view is standardly formulated, the τ-role is a complicated property of state-types, Fτ, such that: a state-type (or property) H has Fτ (or 'plays' Fτ) only if H stands in the causal relations or relations of counterfactual dependence to exactly the inputs, outputs, and other state-types that T says the referent of τ is

"transitive, irreflexive, and asymmetric" (i.e., has the formal properties of 'grounding') (pp. 174–82). And even further back, the non-explanatory nature of supervenience was arguably anticipated by Dennett as early as 1965 (when he submitted his 1969) in his discussion of how a mapping that returns the mental truths about a person when fed their "entire physical state" is no "explanatory correlation" (pp. 11–12). For other early acknowledgments of this point, see McGinn 1980 (pp. 197–98), Foster 1982 (pp. 5–7), and Schiffer 1987 (pp. 153–54). 19For arguments that the best physicalistic explanation of psychophysical supervenience is in terms of functional realization, see Melnyk 2003 on how realizationism—his thesis "(R)" (p. 26)—entails and explains the truth of the supervenience claim that any possible world in which (R) is true and that is physically indiscernible from the actual world is in all ways indiscernible from it (pp. 55–7); see also Melnyk 2018. Also, as Tiehen 2018 makes clear, functional realization's only serious contender is grounding. But see Wilson 2014, 2018 and Melnyk 2016 for arguments that it is no strong contender. 20Highlights include A. Clark 1986, Field 1978, Lycan 1981, 1987, 1996, 2003, Melnyk 2003, and Rey 1997.

so related.21

The thesis of psychofunctionalism about the mental state that τ picks out is that the τ-role can be used to give an account of what it is for someone to be in that state, as follows: someone is

in a τ-state just if they have some property H that has Fτ.22 Those who advocate psychofunctionalism across the board, or hold the view that all psychological properties should be given the above treatment, are committed to giving a functionalist treatment of psychogrammars (if they believe in them). To see how this would go, let us assume that the true and complete psychological theory T includes a psycholinguistic component that postulates that speakers have psychogrammars. And let us say that the theoretical role of 'psychogrammar' in T corresponds to the causal-functional role F‘psychogrammar’, or, the psychogrammar-role. Then we can state the following view:

Psychogrammar Functionalism: For any speaker x, x has a psychogrammar just if x has some property that plays the psychogrammar-role.

How plausible is Psychogrammar Functionalism?

Well, there is no shortage of problems with functionalism as a general thesis about the mental, and these problems will in many cases carry over to this specific thesis.23 But these 'global' objections to functionalism, objections that apply independently of how the functionalist analysis is applied to a particular psychological notion, are not decisive enough to rely on here. So instead I will argue that Psychogrammar Functionalism is especially untenable.

4.3.3.2 Against psychofunctionalism

Psychogrammar Functionalism is not an adequate account of psychogrammars because what it is for a property to play the psychogrammar-role cannot be cashed out in non-linguistic and non-semantic terms. And so neither can psychogrammars themselves if Psychogrammar

21For a particularly lucid summary of how this is supposed to work, see Horgan and Timmons 1992 (pp. 234– 40). For details, see Lewis 1970b and then Loar 1981 (pp. 44–56) for necessary improvements. 22On an alternative formulation, the property of being in a τ-state is identified with the property of having some property that plays the τ-role. But I will not be assuming here that functional analyses issue in property identities. 23For an overview, see Levin 2018 (sec. 5).

Functionalism is right. But this is inconsistent with Ground. For if the fact that a speaker cognizes a grammar depends on, say, facts about what the words of their language mean, then we cannot go on to explain these semantic facts in terms of which grammar they cognize in the manner of Ground.

To make this case, first we need to say a bit more about what the psychogrammar-role involves. As per Use, psychogrammars are supposed to do causal explanatory work. A speaker's psychogrammar is supposed to causally enable them to comprehend and produce sentences in the way that they do. Simplifying considerably, the psychogrammar of an English speaker, for example, enables the event of their perceiving 'Goats eat cans' to cause in them an event of understanding 'Goats eat cans', or the onset of a state of knowing that 'Goats eat cans' means that goats eat cans. And their psychogrammar will also enable a mental event in them of deciding to say something that means that goats eat cans (in order to thereby say or mean that goats eat cans) to cause them to utter a sentence that means that goats eat cans for them. So, if a speaker has some property H that plays the psychogrammar-role, then the state of their having H must at least causally enable comprehension and production in these ways; H must enable certain transitions between certain linguistic inputs and linguistic outputs, because that is (part of) what makes it the case that H plays the psychogrammar-role.
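The two enabling transitions just described can be compressed into a schematic picture; the arrow notation is mine, with H the property playing the psychogrammar-role:

```latex
% Comprehension: perceiving a sentence causes knowledge of its meaning.
\text{perceive}(\text{`Goats eat cans'})
  \;\xrightarrow{\,H\,}\;
\text{know}\big(\text{`Goats eat cans' means that goats eat cans}\big)

% Production: deciding to mean that p causes uttering a sentence meaning that p.
\text{decide}\big(\text{to say something meaning that goats eat cans}\big)
  \;\xrightarrow{\,H\,}\;
\text{utter}\big(\text{a sentence that means, for the speaker, that goats eat cans}\big)
```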

Let us focus on the comprehension case. Playing the psychogrammar-role in comprehension involves causally enabling transitions between events of linguistic perception (inputs) and events of linguistic understanding (outputs). This means that a property's playing the psychogrammar-role depends on facts about linguistic perception and linguistic understanding. And here is the problem for the adherent of Psychogrammar Functionalism: linguistic perception and understanding must be understood in terms of facts about linguistic structure and meaning.

In the ordinary case, to comprehend a sentence-token of our language is to, first, perceive it as having a particular linguistic form. In Austinian jargon, when we perceive an utterance in our language (written or spoken), the input for which understanding is the output is not perception of the “phonetic act,” the “[mere] act of uttering certain noises,” but rather of the “phatic act,”

the "uttering of certain vocables or words" as "conforming to a certain grammar" (Austin 1962, p. 95).24 We then come to know that that token has a particular meaning. But we cannot perceive that a sentence-token has a particular linguistic (phonological or morphosyntactic) form unless it already does. And unless a sentence-token already has a particular meaning, we cannot know that it does. In short, when linguistic comprehension and production are successful, this is partly in virtue of facts about linguistic structure and meaning. But this means that, if Psychogrammar Functionalism is true, then speakers have psychogrammars partly in virtue of facts about linguistic structure and meaning. And if this is true, then psychogrammars cannot be non-circularly invoked as constitutive bases for these structural and semantic facts, and so Semantics and Syntax and/or Phonology cannot be sustained, and so Ground cannot be satisfied.

This problem is similar to a familiar problem with functionalist analyses of folk psychological notions like belief and desire.25 On these views, roughly, to have a belief is to have some property that plays the belief-role. And to play the belief-role is to play a certain causal role in a creature's transitions from sensory inputs to behavioral outputs. But there is a problem: if the behavioral outputs of belief are things like ordinary intentional actions, they are psychologically loaded; they do not seem easily analyzed in belief-free terms.

But the problem for the adherent of Psychogrammar Functionalism is worse. They say the psychogrammar-role involves playing a certain causal role in a creature's transitioning from linguistic perceptual inputs to outputs of comprehension. But if an output of comprehension is ordinary semantic knowledge, outputs are undeniably semantically loaded; they do not just seem to be. And if an input of linguistic perception is an act of perceiving linguistic form, then inputs are phonologically loaded. So these inputs and outputs cannot be understood in non-phonological and non-semantic terms, respectively, and so—assuming that psychogrammars

24For example, monolingual English-speakers have the capacity to perceive an English sentence-token as having a particular phonological or morphosyntactic form, but they lack this capacity when it comes to perceiving German sentence-tokens. A speaker’s linguistic perception of their native tongue is not mere perception of linguistic entities, but involves perception of tokens ‘as’ linguistically structured, or, perceiving that tokens are structured in such-and-such a way. Or at least this is what the phenomenology of linguistic perception strongly suggests, as Peacocke 1992 points out (pp. 89–90). See Siegel 2006 (pp. 490–91) and O’Callaghan 2010 for discussion. 25See Lewis 1994 (pp. 299–301).

ground meaning, as per Ground—they cannot be understood in psychogrammar-free terms. So Psychogrammar Functionalism is inadequate.

4.3.3.3 De-semanticalizing understanding

In response to this problem, the friend of Psychogrammar Functionalism must argue that what it is for a property to play the psychogrammar-role can be understood in non-semantic terms.26 They need to de-semanticalize and physicalize, broadly speaking, the psychogrammar-role’s outputs of linguistic understanding.

But it is arguable that, in principle, no such strategy could possibly work so as to save Psychogrammar Functionalism.27 The argument, in short, is that a de-semanticalizing account of understanding would have to be no less than a naturalistic theory of linguistic meaning in its own right. Such an account would then compete with and threaten to replace the psychogrammar-based theory of meaning that is part of Ground, and so would be at odds with Ground. De-semanticalization cannot, then, resolve the tension between Psychogrammar Functionalism and Ground, for if we can de-semanticalize understanding we should deny Ground.

In more detail now: Suppose that we could, in principle, come up with a non-semantic account of the outputs of linguistic comprehension.28 If so, then it should be in principle possible to reconstruct the true psycholinguistic theory T into a de-semanticalized theory T′, on which psychogrammars enable transitions into what I will call pre-semantic comprehension states ('PSC-states', hereafter). A PSC-state S has to be such that if x is in S, then that is not because of any semantic facts. Additionally, if T′ is to explain how psychogrammars enable

26For simplicity, I will focus just on the viability of de-semanticalizing the outputs of the psychogrammar-role. But my argument that this is not viable can be rerun to show that de-phonologicalizing the inputs of the psychogrammar-role is also not viable. 27And so we can ignore the other problems that the need to physicalize inputs and outputs raises, such as the problem, as posed by Block 1978, that the resulting view will either be too "liberal" in expanding, or too "chauvinistic" in restricting, the kinds of properties that might play the relevant functional-role. See also Block 2007b. 28I do not deny that we could do this. If we accept that psychogrammars strongly supervene on the physical, then perhaps it can be shown that some such physical and hence non-semantic account is out there. (Take the infinitely disjunctive physical property that has each maximal physical property of each possible linguistic comprehender as a disjunct, and then ... .) But I do deny, as we will see, that the availability of such an account can save Psychogrammar Functionalism from being an inadequate account of psychogrammars.

transitions into ordinary comprehension states, then it must be that whenever a speaker x is in an ordinary comprehension state S (i.e., a state of knowledge of meaning), there is some PSC-state S′ such that x is in S because x is in S′. In short, PSC-states must be necessary and explanatorily sufficient for their associated comprehension states. For if they were not explanatorily sufficient, then T′ would not explain any instance of ordinary comprehension; and if they were not necessary, then there could be instances of ordinary comprehension that T′ would not explain. In either case, we would not want to call T′ 'the true psychology', unless ordinary semantically-loaded comprehension has no psychological explanation.
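Schematically, the two constraints just stated come to this (the notation is mine): for each ordinary comprehension state S, the de-semanticalized theory must posit a PSC-state S′ such that

```latex
% S' is necessary and explanatorily sufficient for S:
\forall x \,\big[\, x \text{ is in } S \;\rightarrow\;
  \exists S' \,\big( x \text{ is in } S' \;\wedge\;
  x \text{ is in } S \text{ because } x \text{ is in } S' \big) \big]
```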

This means that an account of PSC-states would have to specify some three-place relation R such that what it is for a speaker x to be in a PSC-state of ‘pre-semantically understanding’ a linguistic expression e as having m as its meaning just is for R(e, m, x) to hold, and such that:

The PSC-State Relation x knows that e means m for x just if and because R(e, m, x).

Now, there are good reasons to think that no straightforward non-semantic account of R could ever be devised. For if we had such an account, we would have the long-sought non-semantic specification of explanatorily sufficient conditions, or full grounds, for facts about linguistic meaning. The reason for this is that nothing can fully ground the fact that x knows p that does not 'mention' either p or that in virtue of which p holds.29 Now, because the fact that R(e, m, x) is supposed to be wholly non-semantic, it cannot 'mention' the fact that e means m for x. So it must mention, in some way, whatever it is in virtue of which e means m for x.

From this it follows that if we had an account of R, we would have in hand nothing less than a true theory of meaning. Now, that theory of meaning, call it ‘ToM’, built from the resources in terms of which R is analyzed, will either explain meaning in terms of psychogrammars or not. If ToM explains meaning in terms of psychogrammars, then circularity is reintroduced; R, it turns

29In other words, p is always at least a reason p is known whenever it is, or, p is always at least a partial ground of the fact that x knows p. This is not an abandonment of ‘knowledge first’ (T. Williamson 2000). That x knows p partly in virtue of p does not entail that there is some way of non-epistemically filling in the following to produce a conceptual analysis or definition of knowledge: x knows p =def ... and p.

out, must be analyzed in terms of psychogrammars, and so cannot be used to non-circularly, non-semantically specify the outputs of comprehension figuring in the psychogrammar-role.

But if ToM does not explain meaning in terms of psychogrammars—if the true theory of meaning built from R’s analysans does not mention psychogrammars—then it would seem to be in tension with any theory of meaning that does. In other words, if e means m for x in virtue of some fact that makes no (tacit or explicit) mention of psychogrammars, then it would seem to be false that e means m for x in virtue of x’s cognizing a grammar for a language L such that L(e) = m. There is room for my opponent to maneuver here, as there always is.30 But at the very least, the availability of a wholly non-psychogrammatical theory of meaning would certainly free us to dispense with psychogrammars for the purposes of metasemantics. And if Psychogrammar Functionalism gives us this liberty, it does not sit nicely with Ground.

So Psychogrammar Functionalism will just not do as an account of psychogrammars. What type of account might we reach for next?

4.3.4 Computational theories

Perhaps a computationalist account of psychogrammars will fare better than a functionalist account. Computationalism about psychogrammars is motivated by thinking of ascriptions of psychogrammars to speakers as made at ‘the computational level of description’, such that

30Let us consider some of these maneuvers. (The following will be extremely compressed.) Suppose ToM says that e means m for x because Φ. One might endorse ToM while maintaining that (for some grammar G) e means m for x because x cognizes G by saying either that (i) x cognizes G because Φ, relying on the transitivity of ‘because’, or, that (ii) Φ and the fact that x cognizes G are distinct but equally complete grounds for the fact that e means m for x. Both options are odd, unmotivated, and ad hoc. And they are also difficult to assess at this level of abstraction at which we have no idea what fact Φ might be. Nonetheless, it is worth pointing out that option (i)—the least odd option—does not seem viable. If Psychogrammar Functionalism is true and x cognizes G in virtue of Φ, then it would seem inevitable that either (a) Φ just says that x has a property that plays the psychogrammar-role, or, (b) x has a property that plays the psychogrammar-role because Φ. We can rule (a) out: if (a) is true, then, because Φ involves notions in the analysans of R, those notions include the notion of the psychogrammar-role; therefore, R cannot after all be used to non-circularly state what it is to play the psychogrammar-role. That leaves option (b). But if (b) is true, then it seems Φ must say, in part, that x has a property causally related to PSC-states. But because PSC-states are analyzed in terms of R, it turns out that Φ must also be analyzed in terms of R, in which case R’s non-semantic analysans (with the resources of which Φ is specified) makes mention of R itself, and so is circular. (An alternative, I suppose, is to accept (b) but to think of Φ as formulated in the terms of fundamental physics, specifying the fundamental physical full ground of the fact that x has a property that plays the psychogrammar-role. But this would have the unfortunate result that the de-semanticalization of linguistic comprehension can only be couched in fundamental physical terms, and it would have the implausible result that the true psychology T′ must be stated in fundamental physical terms. Hardly!)

cognizing a grammar should be thought of as implementing a computation of some function or other abstract entity corresponding to that grammar.31 This view can be encapsulated as follows:

Psychogrammar Computationalism: For any speaker x and grammar G, x cognizes G just if some system of the appropriate kind within x computes or implements G.

On this view, believing that grammars are psychologically real is similar to believing in the psychological reality of the computations that Marr (1982) took edge detection to consist in, namely, the visual system’s computing the Laplacian of a two-dimensional Gaussian distribution of the retinal input (pp. 54–61). Many would say this puts psychogrammars in good, scientifically respectable company.
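To fix ideas, Marr’s proposed computation can be given a concrete (if toy) form. The following sketch is mine, not Marr’s implementation: it builds a Laplacian-of-Gaussian kernel, applies it to a small step-edge image, and exhibits the sign change (‘zero-crossing’) that marks an edge. The kernel size, σ value, and test image are all illustrative choices.

```python
import numpy as np

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel (the 'del-squared G' operator).

    The mean is subtracted so the kernel sums to zero, giving no response
    to uniform regions of the image.
    """
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = ((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d(img, kern):
    """Naive 'valid'-mode 2D convolution (the kernel is symmetric, so
    cross-correlation and convolution coincide here)."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# A vertical step edge: dark on the left, bright on the right.
img = np.zeros((9, 9))
img[:, 5:] = 1.0

# The response changes sign across the edge: a zero-crossing.
resp = convolve2d(img, log_kernel(5, 1.0))
```

The philosophical point is only that ‘computing the Laplacian of a Gaussian’ picks out a definite mathematical operation that a physical system can implement, which is the sense of psychological reality at issue.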

But worries immediately come to mind. One arises from the fact that the ‘of the appropriate kind’ bit is essential to this view’s formulation. Without it, the account would have the bizarre result that if someone had a microcomputer computing a grammar for French implanted in their foot, then they would thereby cognize that grammar, and, as per Ground, would thereby have French as a language. Language acquisition is not so easy!

The problem with this, however, is that it is unclear whether or how ‘of the appropriate kind’ could be understood non-circularly in non-psychogrammatical and non-linguistic terms. A natural suggestion is that a system of the ‘appropriate kind’ must be one that plays the psychogrammar-role. But then Psychogrammar Computationalism would be objectionable in exactly the ways that we found Psychogrammar Functionalism to be objectionable, and it would not square nicely with Ground.

There is also a felt worry that Psychogrammar Computationalism is bound to be in tension with Use, and so will be an inadequate account of psychogrammars. For it seems plausible

31That psychogrammars are states of function-computation is suggested by Marr 1982 (pp. 28–29, 357), Boden 1984 (pp. 26–8), Egan 2003, Rey 2003 (pp. 120–23), J. Collins 2000 (pp. 469–70), 2004 (pp. 525, 529–30), and 2007 (pp. 634–36), Devitt 2006 (pp. 66–71), Hornstein and Pietroski 2009 (pp. 114, 123), Pietroski 2010 (pp. 250–52), 2011 (pp. 473–75, 474 fn. 4), K. Johnson 2014 (pp. 52–3), Berwick and Chomsky 2016 (pp. 128–39), M. Johnson 2017, Poeppel 2017 (pp. 156–63), and Chomsky 2018 (pp. 34–35).

that if grammars are computed, their computation is the result of the real-time operation of the language processor, or, of the performance state(s) underlying our capacity for linguistic comprehension and production.32 But this would turn psychogrammars into performance states, contra Use, blurring competence and performance together.33 But let us set this problem to the side, as it ultimately turns on empirical disputes about the computational architecture of the mind.

Instead, I want to press the worry that if one accepts Psychogrammar Computationalism, then one should probably endorse Psychogrammar Functionalism too, and so will wind up positioned to flout Ground, as I argued above.

4.3.4.1 Computational functionalism

Computationalism about a particular mental state arguably entails psychofunctionalism about that state. This is because states of function-computation are arguably functional states, states one is in just if one has some property that plays a certain functional role. A functional role corresponding to a computational state, or a ‘computational role’, is just a special kind of causal role; a computational role is a causal role that a property plays just if it is causally and counterfactually related to special kinds of computational inputs and outputs, such as tokenings of a machine code (in the case of a computer) or of the language of thought or the neural machine code (in the case of minds).34 If a psychogrammar is a function-computation state, then its associated computational role is specified by the place of ‘psychogrammar’ in the true (computational) psychology T; and so having a psychogrammar just is having some property that plays the computational role that is the psychogrammar-role, and so Psychogrammar Functionalism is true.35

32This natural conjunction of Psychogrammar Computationalism and the view that grammars are computed by performance mechanisms is estimated by Laurence 1996 to be accepted “in rough outline” by “the majority of linguists and psycholinguists working within Chomsky’s broad theoretical framework” (p. 283). If this is right, this majority seems to have overlooked that their view stands in tension with Use.
33This worry, that Psychogrammar Computationalism plausibly turns psychogrammars into performance states, is shared by Ludlow 2011 (p. 50).
34As Mellor 1984 points out (p. 44). Though I admit that the question of the relationship between computationalism and functionalism is perhaps vexed; see the discussion of Piccinini 2010 below. For my purposes, all I need to claim is that Psychogrammar Computationalism is not clearly an alternative to Psychogrammar Functionalism.
35Interestingly, some who accept some form of computationalism about psychogrammars are explicitly suspicious of functionalism. Chomsky has repeatedly criticized functionalism, accusing it of retaining some problematic ‘dualistic’ aspect of Cartesianism; see the curious rejection of psychofunctionalism in Chomsky 2003b, in reply to Lycan 2003. (And note that Psychogrammar Functionalism is silent on the question of property dualism.) In a tone similar to Chomsky’s, Hinzen and Sheehan (2013) dismiss functionalism as somehow wrong-headed in the context of psycholinguistics for reasons that I cannot make clear (pp. 46, 253–54, 298–99). Meanwhile, however, they freely speak of “the implementation of language in the brain” and its “biological foundations”, by which they mean a biological system implementing or computing some psychogrammar for a language (pp. 292, 294), but they do not hint at which non-functionalistic account of computational implementation they prefer.

I suspect that I am bound to be accused of conflating computationalism and functionalism here. This is in light of the admission in Fodor (2000) that, from a fresh 21st-century perspective, it is “striking” how there has been a “widespread failure to distinguish the computational program in psychology from the functionalist program in metaphysics”, typified by how these are “run together” in Fodor (1968) (2000, p. 105, fn. 4). Piccinini (2010) makes the most thoroughgoing attempt to stem this tide of “failure”. He argues, persuasively, that the view he calls “computationalism”, or, the view that the “functional organization of the brain is computational”, does not entail the view he calls “functionalism”, that “the mind is the functional organization of the brain” (p. 301).36 This is because the former view is compatible with identifying the mind with “some non-functional property of the brain, such as its physical composition, the speed of its action, its color, or more plausibly, the intentional content or phenomenal qualities of its states” (p. 301). I think this is correct, but it is irrelevant to my argument.

My argument is that if psychogrammars are computational states, then they are functional states. For if psychogrammars are computational states, then, assuming that they are somehow ‘realized’ by the brain (as seems to be common ground), they must be “some aspect of” the “computational organization of the brain,” as Piccinini would put it (pp. 300–1). And so the brain must have a computational organization. As Piccinini says, a “computational organization is the functional organization of a computing mechanism” (emphasis mine) (p. 296). And so the brain’s functional organization must be computational. Psychogrammars, then, must be “some aspect of” this functional organization. And this is tantamount to what Piccinini calls “functionalism” as applied to psychogrammars, which seems best formulated as some such thesis as Psychogrammar Functionalism. So Psychogrammar Computationalism entails Psychogrammar Functionalism after all.

36I also accept Piccinini’s arguments that what he calls “functionalism” does not entail his “computationalism” (2010, pp. 269–72, 296–98). See also Piccinini 2004.

4.3.4.2 Computational structuralism

At this stage, the adherent of Psychogrammar Computationalism might hold out for some non-functionalist, alternative account of computational implementation that does not have this untoward result. Perhaps the structuralist account of computational implementation is such an account. On this view, roughly, what it is to compute an abstract object is to be in a set of physical states the causal organization of which is isomorphic to that abstract object.
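The structuralist’s notion of a causal organization ‘isomorphic to’ an abstract object can be made vivid with a toy sketch. This is my illustration, not any structuralist’s official formalism: a brute-force check of whether a system’s causal transition table mirrors an abstract automaton’s transition table under some one-one relabeling of states. The state names and transition tables are stipulated.

```python
from itertools import permutations

# Abstract object: a tiny deterministic transition structure, standing in
# for a combinatorial state automaton corresponding to a grammar.
abstract = {'q0': 'q1', 'q1': 'q0'}

# Physical system: causal transitions among (hypothetical) physical states.
physical = {'p_a': 'p_b', 'p_b': 'p_a'}

def isomorphic(abs_t, phys_t):
    """Is there a bijection f from physical to abstract states such that
    causal transitions commute with abstract transitions, i.e.
    f(next(p)) == next(f(p)) for every physical state p?"""
    ps, qs = list(phys_t), list(abs_t)
    if len(ps) != len(qs):
        return False
    for perm in permutations(qs):
        f = dict(zip(ps, perm))
        if all(f[phys_t[p]] == abs_t[f[p]] for p in ps):
            return True
    return False
```

On the structuralist view, for the system to compute the abstract object just is for such a structure-preserving mapping to exist; nothing over and above the causal organization is required.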

As applied to psychogrammars, this view says that to compute a grammar G is to be in a suite of physical states the causal organization of which is isomorphic with some abstract object corresponding to G.37 Perhaps this abstract object will be a ‘combinatorial state automaton’ representing G.38 This view—call it ‘Psychogrammar Structuralism’—although importantly distinct from a functionalist account of grammar-computation, nonetheless entails Psychogrammar Functionalism, or so I argue.

To see this, let ‘is isomorphic to a grammar’ abbreviate ‘is in a suite of physical states the causal organization of which is isomorphic with some abstract object corresponding to a grammar’. Arguably, Psychogrammar Structuralism entails: x is isomorphic to a grammar just if x has some property that plays the psychogrammar-role. Left-to-right: If Psychogrammar Structuralism is true, then: if x is isomorphic to a grammar, then x has a psychogrammar, and so must have some property that plays the psychogrammar-role, for that role was abstracted from the true psychology T. Right-to-left: Suppose Psychogrammar Structuralism is true. Might x have some property that plays the psychogrammar-role while not being isomorphic to a grammar? If ‘Yes’, then the true psychology T is forbidden from saying that speakers are caused to have psychogrammars by events that also cause speakers to become isomorphic to a grammar. For if T says this, then a state plays the psychogrammar-role only if it is caused by an event that causes the speaker to become isomorphic to a grammar, and so anyone in a state that

37A structuralistic account of computational implementation is defended in Chalmers 1994, 1996, 2012, and variants of it in Copeland 1996, Dresner 2010, Egan 1992, Godfrey-Smith 2009, and Scheutz 2001. For a discussion of this view in the context of providing a theory of how grammars might be computed, see Ludlow 2011 (pp. 119–20). Problems for structuralism are raised by Rescorla 2013 and Piccinini 2015 (pp. 16–25). 38There will be a combinatorial state automaton representing G just if G is Turing computable, which all generative grammars are. See Chalmers 1994 (pp. 393–96) for more on the notion of a combinatorial state automaton.

plays the psychogrammar-role must be isomorphic to a grammar. But, given Psychogrammar Structuralism, T had better say that psychogrammars are caused by whatever causes speakers to become isomorphic to a grammar, for that is what it is to come to have a psychogrammar. Thus, if x has some property that plays the psychogrammar-role, then they must be isomorphic to a grammar. And so, as promised, x is isomorphic to a grammar just if x has some property that plays the psychogrammar-role.

Moreover, this biconditional seems like it could be strengthened into a property identity claim without loss of too much plausibility, since the property of being isomorphic to a grammar and the property of having a property that plays the psychogrammar-role are properties of the exact same kind. They are properties had by a speaker wholly in virtue of their causal organization. So we can safely infer, I think, that to be isomorphic to a grammar just is to have some property that plays the psychogrammar-role. Therefore, Psychogrammar Structuralism entails Psychogrammar Functionalism.

Psychogrammar Functionalism seems then to be a commitment of many computationalists about psychogrammars. And so much the worse for Psychogrammar Computationalism.

Now, there are yet more alternative accounts of computational implementation that might escape this problem. One is computational descriptivism, as defended by Rescorla (2014), and the other is the mechanistic account defended at length by Piccinini (2015). Unfortunately, although these views may not straightforwardly entail Psychogrammar Functionalism, they nonetheless fall prey to the same objections that Psychogrammar Functionalism faces.

4.3.4.3 Computational descriptivism

Rescorla (2014) defends a detailed and admittedly theoretically virtuous account of computational implementation glossed as follows: “Physical system P realizes/implements computational model M just in case computational model M accurately describes physical system P” (p. 1278), where M accurately describes P just if P “reliably moves through state space according to mechanical instructions encoded by M” (p. 1280). Now, assuming that grammars can be put into an appropriate one-one correspondence with computational models,39 one might propose that a system computes a grammar G just if it is accurately described by the computational model corresponding to G. And then one might say that a speaker cognizes a grammar G just if a system of the appropriate kind in them is accurately described by the computational model corresponding to G. Let us label this view ‘Psychogrammar Descriptivism’ (a potentially viable, non-functionalist formulation of Psychogrammar Computationalism).

Psychogrammar Descriptivism is inadequate for three main reasons. First, as Rescorla admits, as an account of computational implementation, his account is circular (pp. 1305–6). It specifies necessary and sufficient conditions for a computational model M to accurately describe a physical system P spelled out in terms of the notion of computational implementation. Rescorla argues that this is no problem, however, because “most philosophically interesting concepts resist non-circular reduction” and so there is no reason to blush at failing to reductively analyze implementation (p. 1278). But the circularity of a proposed analysans is not bad just because it rules out the possibility of its being reductive. It is also bad because it seems to rule out the possibility that that analysans states the ground or metaphysical dependence base of the analysandum. So, due to the circularity of Rescorla’s account, although it might “illuminate,” as he says, the concept of computing a grammar, it plausibly does not specify that in virtue of which grammars are computed. If so, then if Psychogrammar Computationalism is true, it also does not specify that in virtue of which we cognize grammars. And so Psychogrammar Descriptivism is not a formulation of Psychogrammar Computationalism that will serve as the account of psychogrammars we are looking for.

Second, Psychogrammar Descriptivism arguably entails that to be accurately described by some computational model corresponding to a grammar just is to have some property that plays the psychogrammar-role. And if so, it too entails Psychogrammar Functionalism. To show this, let ‘is described by a grammar’ abbreviate ‘is accurately described by some computational model corresponding to a grammar’. (The arguments will be familiar.) Left-to-right: If Psychogrammar Descriptivism is true, then: if x is described by a grammar, then x has a

39Rescorla uses ‘computational model’ so that it applies to Turing machines, so this should not be controversial (p. 1279).

psychogrammar, and so must have some property that plays the psychogrammar-role, for that role was abstracted from the true psychology T. Right-to-left: Suppose Psychogrammar Descriptivism is true. Now ask: Might x have some property that plays the psychogrammar-role while not being described by a grammar? If ‘Yes’, then the true psychology T is forbidden from saying that speakers are caused to have psychogrammars by events that also cause speakers to become described by a grammar. For if T says this, then a property plays the psychogrammar-role only if it is caused by an event that causes a speaker to become described by a grammar, and so anyone with a property that plays the psychogrammar-role must be described by a grammar. But, given Psychogrammar Descriptivism, T had better say that psychogrammars are caused by whatever causes speakers to become described by a grammar, for that is equivalent to having a psychogrammar. Thus, if x has some property that plays the psychogrammar-role, then they must be described by a grammar. So, x is described by a grammar just if x has some property that plays the psychogrammar-role.

Here we go again. It looks like Psychogrammar Descriptivism threatens to collapse into Psychogrammar Functionalism.

Rescorla might reply, however, by insisting that being described by a grammar is not simply a matter of one’s causal-functional organization, and so that even if it is equivalent to having a property that plays the psychogrammar-role, it should not be identified with this. As Rescorla might say, being accurately described by a computational model corresponding to a grammar is a matter of “conforming to [the] instructions” encoded in that model, which “requires doing what the instructions say,” which often “requires instantiating properties that do not reduce to any relevant pattern of causal organization” (p. 1288). What he has in mind is the instantiation of properties which require having “an ability to instantiate states specified by the model” (p. 1288). Such a state, in the case of a grammar, will be a possible state of a speaker corresponding to a semantic theorem of the grammar. But this will presumably be a state of linguistic comprehension, i.e. a state of knowing the content of a semantic theorem of the grammar. Putting this all together, the reply I anticipate on Rescorla’s behalf is that being described by a grammar is not merely a matter of causal-functional organization, for speakers

are described by a grammar partly because they have the capacity to enter states of linguistic comprehension.

But there are two problems with this. First, this suggests that Psychogrammar Descriptivism will be inconsistent with Use, for psychogrammars are performance states if it is true; they are states that speakers are in due to their performance capacities. And second, this opens up Psychogrammar Descriptivism to the very same problem that afflicts Psychogrammar Functionalism. For a speaker’s capacity for understanding seems bound to depend at least partly on semantic facts. If so, then if Psychogrammar Descriptivism is true, it seems in tension with Ground.

4.3.4.4 Computational mechanicalism

On a mechanistic theory of computation, a computational explanation—an explanation at Marr’s computational level of description—should be thought of as a “sketch” of a mechanistic explanation, to use the metaphor of Machamer, Darden and Craver (2000, p. 18).40 In other words, a computational explanation, or, an explanation that appeals to a computational state, should be thought of as something like a temporary placeholder for an explanation in terms of the activity of some yet undiscovered mechanism M. And this is because the computational state to which the computational explanation appeals is ultimately nothing but an ‘abstraction’ from the activity of M, or so the view goes; the computational state itself is mechanistically realized or constituted by M’s activity, and so is ‘mechanistically explained’.

To see how this is supposed to go, consider the helpful diagram from Povich and Craver (2018) of an instance of multi-level mechanistic realization:

40See also Piccinini and Craver 2011 and Piccinini 2015 (pp. 96–99).

Let us focus just on the relation between the topmost level and the level below it. The ‘S’ in the topmost oval refers to some “mechanism as a whole” (p. 186). And the ‘Ψ-ing’ refers to whatever that mechanism does. What we see, then, is that there are components of S or its sub-mechanisms X1...X4 engaged in activities φ1...φ4 and organized “such that together they Ψ”; “collectively” the φ-ings of the Xs “exhaustively constitute” or “realize” S’s Ψ-ing (p. 186).

As applied to psychogrammars, then, a mechanistic account of computation will take Psychogrammar Computationalism as a ‘sketch’ of something like the following view: if a speaker x cognizes a grammar G, then there is in x a set of components, Y1...Yn, and a set of activities, ϕ1...ϕn, such that: Y1’s ϕ1-ing, ... and Yn’s ϕn-ing collectively exhaustively constitute x’s cognizing G. Call this view ‘Psychogrammar Mechanicalism’.41

Now, I am not sure that Psychogrammar Computationalism—the view that to cognize a grammar is to have in one a system of the appropriate kind that computes a grammar—is best thought of as a “sketch” of Psychogrammar Mechanicalism. That would seem to depend on

41I take it that this is in accord with Piccinini 2010, who claims: “A mechanism M with capacities C is a set of spatiotemporal components A1, ... An, their functions F, and F’s relevant causal and spatiotemporal relations R, such that M possesses C because (i) M contains A1, ... An, (ii) A1, ... An have functions F organized in a way R, and (iii) [A1, ... An], when organized in way R, constitute C.” But, to maximize accordance, an ‘activity’ in my formulation should be thought of as a ‘way of functioning’. Given Piccinini’s account of mechanisms, Psychogrammar Mechanicalism entails that a speaker with a psychogrammar is (or contains) a mechanism with the capacity to cognize a grammar.

whether those who assert Psychogrammar Computationalism intend to be interpreted that way. But it seems plausible that Psychogrammar Computationalism entails Psychogrammar Mechanicalism. After all, we are physical beings. And if we compute grammars, surely this is in some sense ‘constituted by’ bits of our physical makeup collectively doing some such things or functioning in some such ways. Moreover, it seems to me that Psychogrammar Mechanicalism is entailed by the conjunction of physicalism and the proposition that we finite composite physical beings have psychogrammars. But this is just to say that Psychogrammar Mechanicalism is quite weak. I do not see how it could be false, unless the quantification over ‘components’ and ‘activities’ is radically restricted.

Does this mean we have finally arrived at an account of psychogrammars? No. Psychogrammar Mechanicalism seems no better, despite the added technicalia, than the unspecific, schematic accounts considered above. To see this, let us ask: what is this relation R∗ of ‘exhaustive constitution’ to which the account of mechanistic realization appeals? About R∗, Povich and Craver (2018) say: it is “more intimate” than “correlational, causal, and nomological” relations (p. 192); the “intimacy” between R∗’s relata is “stronger, more metaphysically necessary, than a simple matter of regularity, causation or law” (p. 193); when R∗(x, y) (pretending for simplicity that it is a binary relation between individuals), x “contains” y, x “exhaustively constitutes” y, and x is “exhaustively explained (in an ontic sense)” by y (p. 193). They stop short of saying that R∗(x, y) entails x = y, however, for mechanistically realized things are often multiply realizable.

To me, these indirect characterizations fail to shine much light on R∗. And if some positive account of R∗ can be given, I suspect it will be in terms of functional realization, such that Psychogrammar Mechanicalism entails Psychogrammar Functionalism, inheriting its flaws. To see this, consider how two mechanisms might perform the same function while being ‘exhaustively constituted’ by different subcomponents engaging in different activities. For instance, take a pair of watches, W and W′. W is a mechanical watch, a paradigm mechanism. W’s telling the time is ‘exhaustively constituted’ by the mechanizing activities of its mainspring, hairspring, jewel, balance wheel, and so on. But W′ is a digital watch. W′’s telling the time is ‘exhaustively

148 constituted’ by the digitalizing activities of its battery, quartz oscillator, microprocessor, circuit board, and so one. So telling the time is multiply realizable.

But what explains this? In virtue of what can two differently built watches tell the time in the exact same way? The mechanistic explanations of the watches offer no answer, that is, so long as R∗ is unspecified. But suppose R∗ is explained in terms of functional realization. We could then say that the mechanizing of W’s parts ‘exhaustively constitutes’ (or, is R∗-related to) W’s telling the time because the property of having mechanizing parts plays the time-telling-role; that is, because (i) there is some functional-role r such that x tells the time just if x has a property that plays r, and (ii) the property of having mechanizing parts plays r. This seems like just what we should say about how watches are realized. And so I suspect that mechanistic realization must be explained at least partly in terms of functional realization. So Psychogrammar Mechanicalism naturally gives way to Psychogrammar Functionalism.42 But Psychogrammar Functionalism is an inadequate account of psychogrammars.

4.3.5 The Evans-Davies-Peacocke theory

So far, the theories of psychogrammars I have considered have been instances or applications of broader views about the nature of psychological states. But perhaps psychogrammars deserve special treatment. Evans (1981), Davies (1987), and Peacocke (1989) give them special treatment, developing an account tailor-made for psychogrammars. On their view, roughly, to cognize a grammar is a matter of having one’s mind or brain causally structured in a way that mirrors the formal, derivational structure of the grammar. More specifically, to cognize a grammar is to be in a set of suitable mental states corresponding to the rules of that grammar that have causal power to move one into mental states corresponding to the theorems of that grammar.

So far, this all sounds familiar. At the slogan level, it is reminiscent of Psychogrammar

42“How can you say that Psychogrammar Mechanicalism is true and that it entails Psychogrammar Function- alism but deny that Psychogrammar Functionalism is true?” Strictly speaking, I do not claim that Psychogrammar Functionalism is false. I claim that it is an inadequate account of psychogrammars. That is, I claim that if there are psychogrammars and Ground and Use are true, then Psychogrammar Functionalism is false. Likewise, I do not claim outright that Psychogrammar Mechanicalism is true; I claim that if there are psychogrammars, then Psychogrammar Mechanicalism is true.

Structuralism. But it is different enough to deserve its own discussion, as we will see.43 The formulation of this view laid out by Peacocke (1989) is the most detailed, so I will focus on it.44 To keep things simple, let us consider what Peacocke’s account says about what it would take to cognize a toy grammar for an extremely simple language I will call ‘Simplish’. It contains only two words and one sentence: the name ‘Bill’, the predicate ‘smokes’, and the sentence ‘Bill smokes’. An artificially simple semantic theory for Simplish might contain two semantic valuation axioms, specifying the meanings of ‘Bill’ and ‘smokes’, and one semantic composition axiom:

(A1) ‘Bill’ refers to Bill. (A2) ‘smokes’ is true of x iff x smokes. (A3) If, for some name N and verb phrase VP, N refers to x and, for all y, VP is true of y iff y φs, then N_VP is true iff x φs.

Call this semantic theory, a partial grammar, G3, for its three axioms. It will have as a theorem a specification of truth conditions for the sentence ‘Bill smokes’:

(T1) ‘Bill smokes’ is true iff Bill smokes.
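For concreteness, the derivation of (T1) from G3’s axioms can be sketched as follows; the layout and line labels here are mine, not Peacocke’s:

```latex
% Sketch of the derivation of (T1) in G3 (layout mine, not Peacocke's).
\begin{align*}
&\text{1. `Bill' refers to Bill.}                               &&\text{(A1)}\\
&\text{2. For all $y$, `smokes' is true of $y$ iff $y$ smokes.} &&\text{(A2)}\\
&\text{3. `Bill smokes' is true iff Bill smokes.}               &&\text{from 1, 2 by (A3), with $x=$ Bill, $\varphi=$ smokes}
\end{align*}
```

Line 3 is just (T1): (A3) takes the referent supplied by (A1) and the satisfaction condition supplied by (A2) and composes them into a truth condition for the one sentence of Simplish.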

And let us pretend that (T1) says or entails that ‘Bill smokes’ means that Bill smokes.45 We can now imagine a creature, Floyd, whose only language is Simplish. And we can imagine that Floyd cognizes a generative grammar for Simplish that has G3 as its semantic component.46 So Floyd cognizes G3. For Peacocke, in virtue of what does Floyd cognize G3? The story is very complicated. The next section unpacks Peacocke’s proposal at a high level of detail, in an attempt to arrive at a simplified statement of what it entails about what it is

43 And this time I will not try to show that the view under consideration entails Psychogrammar Functionalism.
44 I trust Peacocke 1989 in his assessment that his account “can be shown to be equivalent” (p. 128, fn. 2) to that of Davies 1987, and so take it that my assessment of Peacocke’s account can be transposed into an assessment of Davies’s. I also trust Davies’s assessment that his account is an improvement over that of Evans 1981, refined in light of objections raised by Wright 1986, 1987, and so take it that Evans’s account can be no more plausible than Peacocke’s.
45 It does not, not even if ‘iff’ is read as tacitly necessitated.
46 I will be ignoring the syntactic and phonological components of generative grammars. But much that I say in problematizing the notion of tacit knowledge of semantics will apply just as well to the notions of tacit knowledge of syntax and tacit knowledge of phonology.

to cognize G3. And the following section argues that Peacocke’s account of what it takes to cognize G3 is inadequate. If my arguments go through, then surely his account will not be able to adequately characterize what it is to cognize a complex grammar for a natural language like English.

4.3.5.1 Peacocke’s account of psychogrammars

On Peacocke’s view, what it is for a subject to cognize a grammar is for the rules of that grammar “to specify information drawn upon by the relevant mechanisms or algorithms in that subject” (p. 114), where the “relevant mechanisms” are the ones causally responsible for the grammatical facts—the syntactic, semantic, and presumably also phonological facts—about that subject’s language. In more detail, his account states that

(P1) For any speaker x and grammar G, x cognizes G just if for every fact of grammar q (of a certain canonical form) derivable in G from rules R1 ... Rn, there is a mechanism M in x such that q is a fact about x’s language because M draws upon the information expressed by R1 ... Rn,

where the ‘because’ is that of causal explanation. For comparison, here is Peacocke’s own statement of his view:

Suppose we have rules R1 ... Rn of a grammar G, rules which state that p1 ... pn respectively. Then, to a first approximation, for R1 ... Rn in G to be psychologically real for a subject is for this to be true: Take any statement q of grammar which is derivable in G from rules R1 ... Rn: then not merely is it true that q, but the fact that q holds for the subject’s language has a corresponding explanation at level 1.5 by some algorithm or mechanism in the subject, an algorithm or mechanism which draws upon the information that p1, upon the information that p2 ... and upon the information that pn. (p. 115)

My formulation, (P1), elides the distinction between mechanisms and algorithms. For I take it that talk of ‘algorithms in subjects’ is to be understood as talk of physiological implementations of algorithms in subjects, or, as talk of ‘physioalgorithms’, to use George’s (1989) helpful term (p. 94). Strictly speaking, algorithms are abstract objects that cannot be ‘in’ subjects and cannot causally explain anything. And physioalgorithms are presumably mechanisms of some kind, so long as we are not using ‘mechanism’ in line with some restrictive philosophical theory of what it is to be a ‘genuine mechanism’.47 My (P1) also eliminates mention of the “explanation at level 1.5”, a level of description or analysis (or whatever) to be thought of as ‘in between’ Marr’s computational level 1 and algorithmic level 2.48 As Peacocke makes clear, an explanation at level 1.5 just is a causal explanation that appeals to what Peacocke calls “facts stated at level 1.5” (1989, p. 112), which just are “facts about the information drawn upon by algorithms or mechanisms in the” mind or brain of the individual (p. 113). So, for Peacocke, a psychological fact p about x has an explanation at level 1.5 just if there is some mechanism M in x and some information i such that the fact that p is causally explained by M’s drawing on i. So, when Peacocke talks of facts about a subject’s language being explained at level 1.5, he means nothing more than that those facts are causally explained by a particular mechanism within the subject drawing on some information. This allows his view to be stated more simply, as in (P1). Now, what is it for a mechanism to draw on information? Or, what is it for a speaker’s language to be a certain way because a mechanism in them draws upon some information expressed by a grammar’s rules? Peacocke’s answer, as I understand it, is as follows (p. 116):

(P2) When a mechanism M in x draws upon the information expressed by rules R1 ... Rn and thereby causes q to be a fact of x’s language, this is because (i) M has the power to cause x to be in an informational state with the content q if x is in states with the contents expressed by R1 ... Rn, and (ii) x is in states with the contents expressed by R1 ... Rn.

Let us see how this all applies to Floyd. Recall that Floyd’s language is Simplish and he cognizes G3. Peacocke’s view, as applied to Floyd, says that he cognizes G3 just if the semantic

47 On this point, I take the lesson of Illari and J. Williamson 2012 to be that, in order for a theory of mechanisms to correctly classify as mechanisms the wide variety of things classified as mechanisms across the sciences, that theory will need to be as unrestrictive as saying that M is a mechanism just if M “consists of entities and activities organized in such a way that they are responsible for [some] phenomenon” (p. 132). Physioalgorithms must consist of some things organized such that they are responsible for the implementation of an algorithm. So they are mechanisms.
48 See Marr 1982 (pp. 24–27) for his original distinction between the computational level 1, algorithmic level 2, and implementation level 3. And see Peacocke 1986 for a defense of 1.5-level psychological explanations.

theorem (T1) holds of his language because he possesses a mechanism in his mind/brain with a certain causal power: the power to cause Floyd to be in an informational state with the content (T1) if he is in informational states with the contents (A1)–(A3).49 To make this even clearer, Peacocke says that there is a mechanism M∗ in Floyd such that (A) is causally explained by (B):

(A) (T1) is a fact about Floyd’s language.

(B) M∗ has the power to cause Floyd to be in an informational state with the content (T1) if Floyd is in informational states with the contents (A1)–(A3).

And this is what makes it the case that Floyd cognizes G3. So, if (P1) is true, then the following should state necessary and sufficient conditions for cognizing G3:

(P3) For any speaker x, x cognizes G3 just if there is a mechanism M in x such that (T1) is true of x’s language because M has a power to cause x to be in an informational state with the content (T1) if x is in informational states with the contents (A1)–(A3).

I will evaluate (P3) next.

4.3.5.2 Against Peacocke’s account

Does an account on which (P3) states necessary and sufficient conditions for cognizing G3 satisfy our desiderata Ground and Use? I think not. First I will argue that Peacocke’s account does not satisfy Use, and then I will argue that it does not satisfy Ground.

Peacocke’s account and Use: I argue that, on Peacocke’s view, psychogrammars are performance states, in violation of Use. On his view, Floyd cognizes G3 in part because his mechanism M has a causal power that is activated by a certain set of “informational states”. But can these

49 Here I avoid unnecessary complexity by ignoring the fact that Peacocke thinks that axioms like (A1) and (A2) play a different role in the account of what it is to cognize G3 than does the inference rule (A3). A more accurate statement of his view, then, is that Floyd cognizes G3 just if (T1) holds of his language because he possesses a mechanism M in his mind/brain with the power to cause Floyd to be in an informational state with the content (T1) if he is in informational states with the contents (A1) and (A2), which is to say that M “uses the transition-type expressed by” (A3); by using this transition-type—by having this causal power—M “draws upon the information” encoded in (A1) and (A2) (but not on the information encoded in (A3)) (p. 116).

be informational states of any kind? No. In Peacocke’s words, they must be a “suitable” set of informational states (p. 116). But, as it turns out, an informational state’s suitability requires that it be a performance state, which has the consequence, or so I will argue, that psychogrammars are performance states.

To see why not just any informational states are suitable, imagine Gus. (T1) is a fact about Gus’s language, but this is because of a mechanism MGus within him that we can think of as implementing a ‘listiform’ semantic theory; Gus’s competence is unstructured.50 If he cognizes a semantic theory at all, it is the listiform semantic theory with (T1) as its sole axiom. But Gus is wired in such a way that MGus’s operation is activated by Gus’s being in states bearing (A1)–(A3) as contents. But these states are desires. Gus oddly desires that the world is such that (A1)–(A3) are true. And Gus is wired such that if he were to lack these desires, MGus would not operate, and such that these desires are the stimulus condition for MGus’s causally shaping Gus’s language.

Pretty clearly, Gus does not cognize G3, even though it might appear that his mind/brain has the structure Peacocke takes to be sufficient for cognizing G3. This appearance must be mistaken; Gus’s desires, although they bear the right contents, are not informational states suitable for determining a speaker’s grammar. For the same reasons, I think ordinary beliefs are also unsuitable.51

What, then, must a suitable informational state—an ‘S-I state’, hereafter—be like? Peacocke speculates that an S-I state with, for example, the content specifying the meaning of ‘man’—i.e., a state suitable for (partially) determining that its speaker’s grammar contains an axiom specifying the meaning of ‘man’—must “be connected with [its] subject’s possession of the concept man, with the ground of his ability to recognize the word ‘man’, and with his general grasp of predication” (p. 113). Clearly, desiring or simply believing that ‘man’ means man (or whatever) fails to meet these requirements. Someone can desire or believe that ‘man’ means man without grasping the concept man and without being able to recognize tokens of

50 See Davies 1981, pp. 53–57.
51 To see this, imagine Gus is wired in such a way that MGus’s operation is triggered by explicit beliefs in (A1)–(A3). Still, I think we would not want to say Gus cognizes G3.

‘man’.

But now it seems that in order for a state to be an S-I state it must be “connected” with a subject’s performance states, or, with states that ground a speaker’s capacities for understanding and thought (concept possession) and linguistic perception (word recognition). What this strongly suggests is that, for Peacocke, S-I states with (A1)–(A3) as their contents are performance states: they are states a speaker is in partly because of facts about that speaker’s sensory-motor systems. Indeed, Peacocke suggests that S-I states “function” to “allow the subject to participate, now as perceiver, now as producer,” playing a causal role “in the perception and the production of sentences” (p. 118).

Now, I say ‘strongly suggests’ because Peacocke does not say what the nature of the connection relation is between S-I states and their respective performance states. He says that it “is a fascinating and difficult question what the nature of the required connections is,” but that it must be such that “no state which can exist without [connected performance states] can be identified with an” S-I state (p. 115). But if this is right, then there must be a necessary connection between S-I states and their connected performance states. Performance states are metaphysically necessary for S-I states. But how could we account for this other than by saying that S-I states either always obtain partly in virtue of performance states or are in some other way ‘not wholly distinct from’ performance states?52 Either way, Peacocke’s account of psychogrammars fails to satisfy Use.

Moreover, it is unclear how, for Peacocke, the grammar we cognize enables and constrains our linguistic performance. If the bases of our practical linguistic capacities figure as partial determinants of which grammar we cognize, then it would seem those bases and their capacities are ‘pre-psychogrammatical’. It would seem that, on Peacocke’s view, it is more accurate to say, for example, that Floyd cognizes G3 because he has the capacities to use ‘smokes’ and ‘Bill’ in the ways he does (or because he has bases for those capacities) than it is to run the explanation in the opposite direction. Floyd’s performance capacities to use, grasp the meanings of, and recognize ‘Bill’ and ‘smokes’ determine and constrain which grammar he cognizes and enable him to cognize it, not vice versa. Again, this is inconsistent with Use.

52 Other candidate explanations for a necessary connection between states S and S′ cannot be appealed to here. For instance, we cannot say that there is an analytic or logical or conceptual connection between stative predicates associated with S and S′ or between the concepts they express. In other words, it does not seem to be a priori or analytic that an S-I state must be connected to such-and-such performance states.

Peacocke’s account and Ground: As it is formulated above in (P3), Peacocke’s account is arguably incomplete. To complete it, one needs to give an account of how a subject’s possession of a mechanism with a certain causal power is supposed to explain the facts about their language. Such an account will have to say that a subject’s possessing a mechanism of the right kind is sufficient for that subject to be in (or to enter) a multitude of what we might call linguistic states, one for each fact p that their mechanism causes to be true of their language. A linguistic state must be a type of state such that, necessarily, if a speaker is in a linguistic state ‘associated with’ or ‘for’ the proposition p, then p is a fact about that speaker’s language. And a linguistic state S for p must be such that whenever a mechanism M in a speaker x causes x to be in (or to enter) S, M can be correctly cited to causally explain why p is a fact about x’s language.

Now, the viability of Peacocke’s account turns on whether an adequate account of linguistic states can be given. And this will be no straightforward task. For linguistic states are plausibly not anything like belief-like content-bearing informational states; they are plausibly not states that speakers are in in virtue of the tokening of particular contentful mental representations within them. To see why, imagine Peacocke’s proposal applied to a complex natural language like English. For Peacocke, if a speaker x cognizes a grammar for English, then x must possess a mechanism that causally explains an infinity of facts of the form ‘p is a fact of x’s language’, i.e. the mechanism must cause x to be in an infinity of linguistic states, at least one for each of the infinitely many semantic facts about English expressions. But x cannot token a mental representation for each of those semantic facts, for x’s mind/brain is finite.

So it is not easy to anticipate what an account of linguistic states would look like. But we can set out some general criteria it must meet. In owing us such an account of linguistic states, Peacocke owes us a specification of the linguistic state relation, or the relation RL such that a state S is a linguistic state for the proposition p just if and because RL(S, p). This means RL must be such that:

(R1) Necessarily, if a speaker x is in a state S such that RL(S, p), then (i) p is a fact of x’s language, and (ii) (i) holds because x is in S.

Additionally, RL must also be such that, for any semantic fact q (or any “statement of grammar”, as Peacocke puts it (p. 115)), a speaker x’s being in a state S such that RL(S, q) is a necessary condition for q to be true of x’s language. The reason for this is that all of the semantic facts that hold for a speaker’s language are supposed to be explained, if Peacocke’s account is true, by the mechanism underlying their psychogrammar. But if some semantic fact q is true of x even though x is in no state S such that RL(S, q), then the mechanism underlying x’s psychogrammar cannot explain why q is true of x’s language. So RL must be such that:

(R2) Necessarily, for any semantic fact q, if q is a truth of x’s language, then x must be in some state S such that RL(S, q).

‘q’ may, though, need to be restricted to semantic facts about complex expressions, or to ‘non-atomic’ semantic facts. For it may be that the atomic semantic facts about a speaker’s language—those corresponding to the semantic axioms of the grammar of their language—are not apt to be explained by the speaker’s being in certain linguistic states as a result of the mechanism underlying their psychogrammar. Indeed, Peacocke takes his account to be only “a first approximation” because it is “not meant to apply when q has the same content as one of the [grammar’s rules],” since “nothing explains itself” (p. 115). But, arguably, this hedge is based on a mistake. If we explain why (A1) is true of Floyd’s language by saying he is in a linguistic state S such that RL(S, that ‘Bill’ refers to Bill) as a result of a mechanism within him that draws on a suitable informational state with the content that ‘Bill’ refers to Bill, we are not explaining anything in terms of itself. Linguistic states for atomic semantic propositions need to be distinguished from their associated S-I states with those propositions as contents, for two reasons. First, as I argued above, we are in far too many linguistic states for them to be thought of as informational states. And, second, if linguistic states just are S-I states, then, unfortunately, Peacocke’s account forbids us from appealing to psychogrammars to explain in any way the atomic semantic facts about speakers’ languages; for then psychogrammars will be partly realized by linguistic states (S-I states) for those atomic semantic facts, i.e. psychogrammars will be dependent upon those atomic semantic facts already being facts of the speaker’s language. So Peacocke’s hedge seems unnecessary. We should say that even the atomic semantic facts about a speaker’s language are reflected in linguistic states associated with those facts that the speaker is in because of the mechanism underlying their psychogrammar.

Now, here is the problem: No matter what account of the linguistic state relation RL is given, if that account satisfies (R1) and (R2), then the availability of such an account threatens to undermine the entire project of using grammars to explain facts about our language. In other words, if we had in our grasp some relation RL satisfying (R1) and (R2), we could dispense with psychogrammars. Instead of appealing to psychogrammars to explain the semantic facts about the languages of speakers, as in (1) and in line with Ground, we can explain them by appeal to the relation RL, as in (2):

(1) An expression e means m for speaker x just if and because x cognizes some grammar G that assigns m to e as its meaning.

(2) An expression e means m for speaker x just if and because x is in a state S such that RL(S, that e means m).
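Schematically, the biconditional in (2) falls out of (R1) and (R2) as follows; the notation In(x, S) for ‘x is in S’ and Factx(p) for ‘p is a fact of x’s language’ is mine, introduced only for this display:

```latex
% Why (2) has no counterexamples, given (R1) and (R2) (notation mine).
\begin{align*}
\text{(R1)}\quad & \mathrm{In}(x,S) \wedge R_L(S,p) \;\Rightarrow\; \mathrm{Fact}_x(p)
  && \text{yields the right-to-left direction of (2)}\\
\text{(R2)}\quad & \mathrm{Fact}_x(q) \;\Rightarrow\; \exists S\,\bigl(\mathrm{In}(x,S) \wedge R_L(S,q)\bigr)
  && \text{yields the left-to-right direction of (2)}\\
\text{Hence}\quad & \mathrm{Fact}_x(e \text{ means } m) \;\Leftrightarrow\; \exists S\,\bigl(\mathrm{In}(x,S) \wedge R_L(S,\, e \text{ means } m)\bigr)
\end{align*}
```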

The right-to-left direction of (2) follows immediately from (R1), and the left-to-right direction from (R2). So, given (R1) and (R2), there can be no counterexamples to (2). The upshot is that a resource needed to complete Peacocke’s account, once made available, can be used to construct explanations that rival and are simpler than the psychogrammatical explanations that Peacocke’s account aims to back. We might even go further, and use RL to replace Peacocke’s account directly, offering the following account of what it is to cognize a grammar in terms of RL:

(3) A speaker x cognizes grammar G just if for every proposition p expressed by an axiom or theorem of G, x is in a state S such that RL(S, p).

Again, it is unclear how there could be any counterexample to (3).53

Even more threateningly, we might use RL to give an account of what it is to have a language, perhaps eliminating the need for psychogrammars entirely:

(4) A speaker x has language L iff for every linguistic fact p of L, x is in a state S such that RL(S, p).

In other words, an adequate account of the linguistic state relation RL would allow us to construct a seemingly adequate account of the actual language relation in fully non-psychogrammatical terms.54 And we could then go on to explain everything we thought we needed psychogrammars to explain just in terms of this relation. For example, with (4) in our back pocket, we could say that an expression e means m for speaker x just if and because x has a language L such that e means m in L. And we would then be perfectly positioned to deny Ground.

In sum, if we go in for Peacocke’s account, we will find ourselves in need of a supplementary theory (of RL) that will inevitably throw Ground into question. It would seem, then, that Peacocke’s account cannot satisfy Ground.

Moreover, and finally, it is unclear whether Peacocke’s account is even compatible with Ground. For if, as on Peacocke’s view, we cognize grammars partly in virtue of being in linguistic states, and if to be in a linguistic state just is for something to be true of one’s language, then it seems we cannot go on to say that our language is the way that it is in virtue of the grammar we cognize. For it would seem that the order of explanatory priority between language and psychogrammar is, for Peacocke, the reverse of that implied by Ground.

For these reasons, and for others,55 the Evans-Davies-Peacocke view is an inadequate account of psychogrammars.

53 Or at least this is unclear when ‘p’ is restricted to semantic axioms and theorems. All along we have been ignoring grammars’ syntactic and phonological components, as well as the syntactic and phonological facts about speakers’ languages that psychogrammars are also supposed to explain. But it is clear that if Peacocke’s account is extended to syntax and phonology, then he will owe us an account of RL such that (3) is bound to have no counterexamples. In particular, RL will have to be such that, necessarily, for any semantic, syntactic, or phonological fact p, p is a truth of x’s language just if x is in some state S such that RL(S, p).
54 There can be no objection that (4) will inevitably lead to a reintroduction of psychogrammars somewhere further ‘down the chain’ of metaphysical dependence. For if an analysis of language-having in terms of RL makes some hidden appeal to psychogrammars, then Peacocke cannot offer a non-circular analysis of psychogrammars in terms of RL.
55 For other problems, see Barber 2007.

4.3.6 Biological theories

The last theory of psychogrammars that I will consider is a reductive, biological account on which to have a psychogrammar just is to have a particular neurophysiological property instantiated by one’s brain, body, or nervous system. This view will involve at least the following claim:

Psychogrammar Reductionism56 For any speaker x and cognizable grammar G, there is some neurophysiological property N such that: x cognizes G just if x has N.

Before I consider some problems with Psychogrammar Reductionism, I will explain why I think it is something like a methodological assumption of significant strands of contemporary work in the foundations of linguistics.

4.3.6.1 The biolinguistic conception of psychogrammars

Something like Psychogrammar Reductionism is widely hinted at in Chomsky, as well as in recent research falling under the heading of ‘biolinguistics’.57

A first hint is the repeated insistence by Chomsky and others that the linguistic and psycholinguistic predicates that they apply to speakers in stating their theories are to be interpreted as being used to refer to as yet unknown neurophysiological properties of physical mechanisms in the brain. Here is Chomsky (1980) making this point:58

56 I remain neutral on whether Psychogrammar Reductionism is a thesis of ‘type-identity’, or on whether it entails that psychogrammars qua state-types (or properties) are identical to neural state-types. For this reason I use ‘just if’ and not ‘just if and because’, taking ‘because’ to be asymmetric. (If properties are coarse-grained, then the entailment goes through.) And I call this account ‘reductive’ just because the label is associated with views that tie the mental closely to the neurophysiological, not because I have some particular theoretical notion of reduction in mind at this stage.
57 By ‘biolinguistics’, I mean here what Martins and Boeckx 2016 call “biolinguistics as the study of the uniquely human and linguistic,” a research program that “assumes there is something biologically unique to language” (p. 4). Psychogrammar Reductionism, if true and combined with Ground, would surely vindicate the view that language is, as a matter of fact, unique to humans—that is, assuming that the neurophysiological states associated with psychogrammars are too complex for non-human brains to enter.
58 And in fewer words: “we may think of the study of mental faculties as actually being a study of the body—specifically the brain—conducted at a certain level of abstraction” (p. 31). There are many other passages throughout Chomsky 1980 advocating the view that psycholinguistic descriptions are abstract descriptions of unknown brain mechanisms; we read that linguistics is “the abstract study of certain mechanisms, their growth and maturation” (pp. 187–188), that mental state ascription is the “abstract characterization of perhaps unknown mechanisms”

When I use such terms as “mind,” “mental representation,” “mental computation,” and the like, I am keeping to the level of abstract characterization of the properties of certain physical mechanisms, as yet almost entirely unknown. There is no further ontological import to such references to mind or mental representations and acts. (p. 5)

If there is no ‘further ontological import’ to talk of psychogrammars than the instantiation of neurophysiological properties, then, presumably, Psychogrammar Reductionism is true.

This theme is picked up again and again by Chomsky’s expositors and defenders. J. Collins (2008) is clear on this: he takes a psychogrammar to be “simply a steady state of the mind/brain of the speaker/hearer” (p. 134); we can “identify” it with an “aspect of the human mind/brain” (p. 140). And he takes “the very inquiry linguistics is pursuing” to be “one into the character of the mind/brain” (p. 144). And he echoes Chomsky’s characterization of linguistics as engaged in abstract description, clarifying that a psychogrammar “is simply a state of the mind/brain, albeit abstractly described” (p. 220). States of the ‘mind/brain’ are, after all, states of the brain. And so some form of ‘psychogrammatical-biological reduction’ seems to be advocated here.59

N. Smith and Allott (2016) make similar claims. They claim that given our “ignorance” of “how language is neurophysiologically implemented in the brain”, theories involving the ascription of psychogrammars “are necessarily abstract characterizations of the properties of certain physical (mostly mind-brain) systems” (p. 129). What is implied is that there would be no need to resort to such abstract notions if we were more knowledgeable of our neurophysiological nature. This strongly suggests that there is nothing for linguistics to be about that is over and above the brain.

(pp. 103–105), and that inquiry into psychogrammars is “inquiry at a certain level of abstraction into the nature of certain mechanisms, presumably neural mechanisms, now largely unknown” (pp. 89–91). On one reading of these passages, Chomsky has in mind something like the “mechanism sketch” view of psychogrammatical explanation considered above. But on the reading I prefer, Chomsky is admitting that he takes predicates like ‘has a psychogrammar’ to indirectly refer to neurophysiological properties, in which case Psychogrammar Reductionism will be true.
59 Curiously, Collins seems to deny that Chomskyan biolinguistics has ‘reductive’ aspirations (2008, pp. 17–18). But this is easily cleared up. The notion of ‘reduction’ he has in mind, and rejects as anti-Chomskyan in spirit, is a form of inter-theoretic reduction such that, if it were achieved, it would eliminate the need for any science other than physics. But this is not implied by Psychogrammar Reductionism. Katz 1981 was early in attributing to Chomsky 1975b, 1976 the aspiration that linguistic theories postulating psychogrammars will be “reducible to biological theories” (p. 54). But Chomsky does not discuss ‘reduction’ in these works. In Chomsky 2002a, however, he is clear that he believes that an integration of linguistics with neuroscience based on some form of reduction is in principle possible.

However, elsewhere Smith and Allott suggest that the connection between language and the brain is metaphysically looser than Psychogrammar Reductionism would have it:

[. . . ] we do not yet know in any detail how linguistic theories at this level relate to facts about brains: how the brains of English speakers differ from those of speakers of Japanese, or what it takes for a configuration of brain tissue to be a noun-state. Of course, linguistic knowledge is manifested in psychological processes which are presumably carried out by physiological mechanisms that are somehow instantiated in physical systems. But it doesn’t follow that the “higher-level” linguistic generalizations can be stated in physical terms: e.g. in the vocabulary of neuroscience. At present, all the explanatory power is at the abstract (or “linguistic”) level. (p. 199–200)

But this is somewhat misleading. Smith and Allott refer to N. Smith (2005) for elaboration on this point. But Smith blames the separateness of psycholinguistic and neurophysiological notions on our “need”, qua scientists, for maintaining separate psycholinguistic and neurophysiological theories (pp. 95–96). But this need on our part is compatible with Psychogrammar Reductionism. Indeed, as Smith goes on to admit, he takes it that something like a bona fide psychogrammatical-biological reduction may be in principle possible, arguing that psychogrammars may be reducible to “cortical or sub-cortical processes” in the same way that “physics and chemistry were unified in the last century” (p. 96).

Another hint that something like Psychogrammar Reductionism is a foundational assump- tion of biolinguistics is that the implausibility (or impossibility) or any straightforward, term- by-term reduction of linguistics to neurophysiology has been widely acknowledged as a ‘prob- lem’. This is largely due to the reception of what Poeppel and Embick (2005) call the “On- tological Incommensurability Problem” (p. 105). The ‘problem’—which will surprise exactly zero philosophers of mind—is that, given the contemporary state of linguistics on one hand and of neuroscience on the other, “the fundamental elements of linguistic theory cannot be reduced or matched up with the fundamental biological units identified by neuroscience,” such that no “direct reduction” seems possible (p. 105–6). That is, we have no idea how we might directly ‘match up’ psychogrammatical properties to neural properties. The recognition of this fact is perhaps uninteresting. But what is very interesting is that this barrier to reduction is viewed as

a problem to be solved.

I take these hints to suggest that an intimate relationship between psychogrammars and the brain is assumed widely enough that Psychogrammar Reductionism is a view to consider seriously.60 Unfortunately, it cannot be true.

I have two objections to Psychogrammar Reductionism. The first concerns its tension with Use and Ground; the second, its tension with the apparent multiple realizability of language and meaning.

4.3.6.2 Limitations of the biological theory

Suppose Psychogrammar Reductionism is true. I cognize, then, some grammar for English. Let ‘G’ refer to the grammar I cognize. And let ‘N’ refer to the neurophysiological property that is equivalent to the property of cognizing G. So I have N. Now, let us ask the question: Is it possible to have N and yet completely lack the capacities for using language in thought and speech?

If the answer is ‘No’, then it would seem that N must be instantiated at least partly by the neural basis of at least one of my language performance systems: the articulatory-perceptual interface, my sensory-motor system, or my parser or producer. For if it were not partly instantiated by any of the neural mechanisms underlying these systems, then it would be possible for me to have these ‘removed’ while still instantiating N, in which case it would be possible for me to have N and yet lack the capacity to use language. But if my having N does essentially involve the neural bases of my performance systems, then it would seem that N is a performance state. And so cognizing G is a performance state, contra Use.

If the answer is ‘Yes’, then a being that lacked all capacities for using language could nevertheless cognize G. Given Ground, this would entail that such a being could have a language and be such that expressions are meaningful for them. But this is implausible. Take the extreme

60Yet another hint is the suggestion by Chomsky 2002a that it would be an “error” to say that psychogrammars “supervene on physical properties, but are not reducible to them,” and that their relation should be thought of as akin to the alleged reduction of chemical to physical goings-on (p. 72). (I say ‘alleged’ because, in light of arguments by Hendry 2011, it seems an open question whether the chemical is emergent from or reducible to the physical.) See also Chomsky 2012 (pp. 129–30) for his negative assessment of the thesis of “weak conceptualism”—a non-reductive, supervenience-based account of psychogrammars—as defended by Higginbotham 1991.

case of the ‘smallest’ biological entity that can instantiate N. Perhaps this is a disembodied brain, its life sustained by suspension in fluid. And we can imagine it is kept in a state of ‘sleep’; it never gives rise to anything like waking consciousness. If Psychogrammar Reductionism is true, this brain cognizes G and so knows the English language. It is a linguistic being. It and I share a language. And what words mean for me they also mean for it. This is incredible.

I take it, then, that Psychogrammar Reductionism renders either Use or Ground implausible. And so it is not an adequate account of psychogrammars.

4.3.6.3 The multiple realizability of language

If Psychogrammar Reductionism is true, then psychogrammars are not realizable by non-neural goings-on; that is, non-human creatures lacking brains cannot cognize grammars. This has radical consequences.61

If psychogrammars are not multiply, non-neurally realizable, then, given Ground, neither are language and meaning. And neither is the utterance of truths, given that utterances are always true partly in virtue of their meanings. And neither are all of the sociological phenomena that essentially depend on language or linguistic meaning, including writing, publishing, editing, poetry, marriage, and song. Surely we do not want to say that brains are metaphysically necessary for all of these phenomena.

Now, so-called ‘type-identity physicalists’ about the mind have well-developed dialectical maneuvers for resisting claims of multiple realizability. And they have persuasively argued that showing that pain can in fact be multiply realized in non-human creatures lacking ‘C-fibres’ requires more evidence than merely conceiving of, say, a non-human instance of pain. Granted. Whether octopi feel what we humans mean by ‘pain’ cannot be settled from the armchair.

But it seems to me that while these arguments might succeed as applied to sensory or phenomenal states like pain, or even to perceptual phenomena like vision, wheeling them out to

61Similar multiple-realizability-based objections to theses similar to Psychogrammar Reductionism are advanced, perhaps too quickly, by Lewis 1975 (p. 22), Dummett 1976 (p. 37), Katz 1981 (pp. 89–90), Soames 1984 (p. 171), Devitt and Sterelny 1989 (p. 514), and Hanna 2006 (p. 50). None of these authors, however, draw out the consequences of denying the multiple realizability of psychogrammars that I outline below.

defend Psychogrammar Reductionism seems like an overreach. For if any aspects of our humanity are multiply realizable, this will surely include the ‘high-level’ phenomena of language, meaning, true speech, and the sociological goings-on dependent upon them. If not these, then what? Does everything we do require a brain?

In any event, I find it difficult to accept that it is metaphysically impossible for brainless aliens to have languages. And if that is impossible, such that extraterrestrials could also not speak truly (or falsely), then let us defund S.E.T.I.

Admittedly, I rely on an ‘intuition’ that language is non-neurally realizable. With Katz (1981), I say it is too “counterintuitive” to think that if we were “visited by intelligent aliens from outer space” who were “indistinguishable from us in any Turing imitation game,” then they would “not speak English” because psychogrammars are only “realizable within the human mind or brain” (pp. 89–90). And so I am vulnerable to an argument by J. Collins (2008) that trusting our intuitions about cases of non-human language, say, of Martian language, “amounts to nothing more than an insistence that linguistics should answer to a commonsense conception of language”, which he vehemently rejects (p. 148).62 But I say that we can hold that the commonsense conception of language is correct, and that the science of linguistics need not codify this conception. That is, linguistics may well have nothing to say about Martian language. For linguistics may not include the theory of language itself. Indeed, Collins admits as much:

Do the Martians speak English or not? The decision is of no scientific interest, for we have never been concerned with what language populations speak but only with what explains the linguistic phenomena of the populations. (p. 147)

Perhaps then it is ‘of no scientific interest’ whether a Martian could share our language. But then it will be of no scientific interest whether the conjunction of Ground and Psychogrammar Reductionism is true. If so, then we are free to insist, partly on the basis of intuition, that this conjunction is false, all without insisting that working linguists should take heed. Though I

62For related discussion, see J. Collins 2008 (pp. 19–20, 142–48), 2009 (pp. 182–92), and 2018 (pp. 175–78). See also D’Agostino 1986 (pp. 34–36) for an objection to the alien case from Katz 1981. (But D’Agostino does not defend Psychogrammar Reductionism from this objection; rather, he defends the thesis he calls “linguistic individualism,” or, the view that linguistic theories are psychological theories (pp. 29, 35).)

suspect that most would take it to be of scientific interest whether Psychogrammar Reductionism can be held together with Ground. For those so interested, then, something else must be said about language’s multiple realizability.

Perhaps it could be argued that my claim that language is multiply realizable, or, that Martians could speak English, “merely begs the question,” as Laurence (2003) argues, for one “might respond that the language that the Martians speak, despite sounding an awful lot like English, is not English” (p. 91). As he explains, the adherent of Psychogrammar Reductionism:

[...] must grant that it is logically possible, perhaps nomologically possible (and, for all we know, even actual), that there are beings with a different competence that otherwise resemble English-speakers as much as you like. But this is just as the philosopher of chemistry must grant that it is logically possible, perhaps nomologically possible (and, for all we know, even actual), that there exists a substance which is not H2O, but otherwise resembles water (in appearance and behavior) as much as you like. The logical possibility of such a substance, however, should not require the philosopher of chemistry to hold that water is not (identical to) H2O.

This reply requires taking ‘language’ to designate a natural kind K, where K is something we might call ‘human language’, in line with Chomsky’s clarification that by ‘language’ he means “human language” (1994, p. 155). And it requires that K is identical to (or equivalent with) some unspecific determinable (or disjunctive) neural property, with more specific neural properties equivalent to psychogrammatical properties as determinates.

We can consider how plausible this view is by restricting Psychogrammar Reductionism and Ground so that the speaker-variable ‘x’ ranges only over human speakers. But the resulting theses, to me, are not that much more plausible than their unrestricted ancestors. For human language does not seem essentially neural either. English is a human language. But humans might preserve English as their language even if in the future parts of their brains are slowly replaced with cybernetic implants, so that eventually they become brainless recipients of full cyberbrains. So a human might have a language without having a brain, contra the restricted version of Psychogrammar Reductionism. And so human language is not a neural kind.

At this point, further restrictions might be offered. Perhaps the speaker-variable ‘x’ should range only over normal human speakers, or wholly biological humans. And so rather than

saying ‘language’ means human language, my opponent must now say ‘language’ picks out the more specific natural kind K′ that we might call biological human language. The problem with this strategy, however, is that it threatens to trivialize Psychogrammar Reductionism. For what is it to be a wholly biological human? If this is understood partly in terms of being a human whose language is biologically realized—and I do not see how this can be avoided—then Psychogrammar Reductionism is true by definition. In which case, it can hardly be of much use to us in satisfying our desire for a substantive account of psychogrammars.

Alternatively, one might try to reformulate Psychogrammar Reductionism in the style of theses of ‘species-relative reduction’ or ‘local reduction’, as advocated by Enç (1983), Churchland (1986) (pp. 356–58), and Kim (1992), as follows: for any cognizable grammar G, there is some kind K and neurophysiological property N such that: for Ks, to cognize G just is to have N. This path is a hard row to hoe. If it is walked in the careful steps of Kim (1992), Loar (1981), or Jackson, Pargetter, and Prior (1982), we wind up forced to reduce mental states relative to individual-time-indexed kinds. If this is where my opponent wants to go, then Psychogrammar Reductionism ends up saying that my psychogrammar is identical (or equivalent) to one of my neural states relative to the ‘species’ David Balcarras-at-7:31PM-on-15/07/2020. Linnaeus was surely right to exclude this ‘species’ from the Systema Naturae.63

All of this suggests that Psychogrammar Reductionism is too implausible to be an adequate account of psychogrammars.

4.4 Competence as performance

We have landed in a quagmire. The evidence suggests that whatever plays the psychogrammar role also does the explanatory work assigned by Ground and Use. But psychogrammars are not up to the task, no matter how they are accounted for. What to make of this?

I say we should lay the blame on the assumption that there must be a single psychological state responsible both for constituting language and for enabling performance. It is a mistake to think that the psychological ground of language and meaning must also be prior to and enable linguistic

63See Endicott 1993 for further critical discussion.

performance, and so must not be a performance state. The work traditionally assigned to a single psychogrammar can instead be divided between two other states. Or, at the very least, it is worth exploring the possibility that there are distinct psychological states that do the jobs corresponding to Ground and Use separately.

On my view, the psychological ground of language and meaning is practical knowledge, the state of knowing how to partake in communication with a language (see chapters 1 and 2). The psychological state that enables and constrains linguistic performance and comprehension is distinct. It is the mind’s capacity for transducing natural language into thought, which does not rely on tacit knowledge of grammar (see chapter 3). As I see it, the traditional competence/performance distinction is to be abandoned. For to have or know a language just is to have a particular performance capacity. Knowing a language is knowing how to use it.

Bibliography

Austin, J. L. (1962). How to do things with Words. Oxford University Press.
Barber, Alex (2007). “Linguistic Structure and the Brain.” Croatian Journal of Philosophy 21. 317–341.
Bennett, Jonathan (1976). Linguistic Behaviour. Cambridge University Press.
Bennett, Karen (2011). “Construction area (no hard hat required).” Philosophical Studies 154. 79–104.
Berwick, Robert C. and Noam Chomsky (2016). Why Only Us: Language and Evolution. The MIT Press.
Berwick, Robert C., Angela D. Friederici, et al. (2013). “Evolution, brain, and the nature of language.” Trends in Cognitive Science 17.2. 89–98.
Bickerton, Derek (2001). “Linguists Play Catchup with Evolution.” Journal of Linguistics 37.3. 581–591.
Bierwisch, Manfred (2011). “Semantic features and primes.” In: Semantics: An International Handbook of Natural Language Meaning, Volume 1. Ed. by Klaus von Heusinger, Claudia Maienborn, and Paul Portner. de Gruyter, 322–357.
Blackburn, Simon (1984a). Spreading the Word: Groundings in the Philosophy of Language. Oxford University Press.
— (1984b). “The Individual Strikes Back.” Synthese 58.3. 281–301. Reprinted in Blackburn (1993).
— (1993). Essays in Quasi-Realism. Oxford University Press.
Block, Ned (1978). “Troubles with Functionalism.” In: Perception and Cognition: Issues in the Foundations of Psychology. Ed. by C. W. Savage. University of Minnesota Press, 261–325. Reprinted in Block (2007a).
— (2007a). Consciousness, Function, and Representation: Collected Papers, Volume I. The MIT Press.
— (2007b). “Remarks on Chauvinism and the Mind-Body Problem.” In: Consciousness, Function, and Representation: Collected Papers, Volume I. The MIT Press, 1–12.
Boden, Margaret A. (1984). “What Is Computational Psychology? (I).” Proceedings of the Aristotelian Society 58. 17–35.
Boeckx, Cedric and Kleanthes K. Grohmann, eds. (2013). The Cambridge Handbook of Biolinguistics. Cambridge University Press.
Boghossian, Paul A. (1989). “The Rule-Following Considerations.” Mind 98.392. 507–549.
Borg, Emma (2008).
“Intention-Based Semantics.” In: The Oxford Handbook of Philosophy of Language. Ed. by Ernie Lepore and Barry C. Smith. Oxford University Press, 250–266.
Brandom, Robert B. (1994). Making It Explicit: Reasoning, Representing, and Discursive Commitment. Harvard University Press.
Braun, David (1995). “What is character?” Journal of Philosophical Logic 24.3. 227–240.

Bromberger, Sylvain (2012). “Vagueness, Ambiguity, and the “Sound” of Meaning.” In: Analysis and Interpretation in the Exact Sciences: Essays in Honour of William Demopoulos. Ed. by Melanie Frappier, Derek Brown, and Robert DiSalle. Springer, 75–93.
Bromberger, Sylvain and Morris Halle (1986). “On the Relationship of Phonology and Phonetics: Comments on B. Lindblom ‘On the Origin and Purpose of Discreteness and Invariance in Sound Patterns’.” In: Invariance and Variability in Speech Processes. Ed. by J. S. Perkell and D. H. Klatt. L. Erlbaum Associates, 510–520. Reprinted in Halle (2002).
— (1989). “Why Phonology is Different.” Linguistic Inquiry 20.1. 51–70. Reprinted in Halle (2002).
— (2000). “The Ontology of Phonology (Revised).” In: Phonological Knowledge: Conceptual and Empirical Issues. Ed. by Noel Burton-Roberts, Philip Carr, and Gerard Docherty. Oxford University Press, 19–37.
Burgess, Alexis and Brett Sherman, eds. (2014). Metasemantics: New Essays on the Foundations of Meaning. Oxford University Press.
Burton-Roberts, Noel and Philip Carr (1999). “On speech and natural language.” Language Sciences 21.4. 371–406.
Byrne, Alex (2005). “Introspection.” Philosophical Topics 33.1. 79–104.
— (2018). Transparency and Self-Knowledge. Oxford University Press.
Cappelen, Herman and Ernie Lepore (2005). Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism. Oxford University Press.
Carr, Philip (2012). “The Philosophy of Phonology.” In: Philosophy of Linguistics. Ed. by Ruth Kempson, Tim Fernando, and Nicholas Asher. Elsevier, 403–444.
Carruthers, Peter (1996). Language, Thought and Consciousness: An Essay in Philosophical Psychology. Cambridge University Press.
Chalmers, David J. (1994). “On implementing a computation.” Minds and Machines 4.4. 391–402.
— (1996). “Does a Rock Implement Every Finite-State Automaton?” Synthese 108.3. 309–333.
— (2006).
“The Foundations of Two-Dimensional Semantics.” In: Two-Dimensional Semantics. Ed. by Manuel García-Carpintero and Josep Macià. Oxford University Press, 55–140.
— (2012). “The Varieties of Computation: A Reply.” Journal of Cognitive Science 3. 211–48.
Chomsky, Noam (1964). Current Issues in Linguistic Theory. Mouton.
— (1965). Aspects of the Theory of Syntax. The MIT Press.
— (1968). Language and Mind. Harcourt Brace Jovanovich, Inc.
— (1975a). “Knowledge of Language.” In: Language, Mind, and Knowledge. Ed. by Keith Gunderson. Minnesota Studies in the Philosophy of Science, Vol. VII. University of Minnesota Press, 299–320.
— (1975b). Reflections on Language. Pantheon.
— (1976). “On the Biological Basis of Language Capacities.” In: The Neuropsychology of Language: Essays in Honor of Eric Lenneberg. Ed. by R. W. Rieber. Springer, 1–24.
— (1980). Rules and Representations. Columbia University Press.
— (1982). “Mental Representations.” Syracuse Scholar 4.2. 5–21.
— (1984). Modular Approaches to the Study of the Mind. San Diego State University Press.
— (1986). Knowledge of Language: Its Nature, Origin, and Use. Praeger.
— (1988). Language and Problems of Knowledge: The Managua Lectures. The MIT Press.

— (1992). “Language and Interpretation: Philosophical Reflections and Empirical Inquiry.” In: Inference, Explanation, and Other Frustrations: Essays in the Philosophy of Science. Ed. by John Earman. University of California Press, 99–128. Reprinted in Chomsky (2000b).
— (1994). “Noam Chomsky.” In: A Companion to the Philosophy of Mind. Ed. by Samuel D. Guttenplan. Blackwell, 153–167.
— (1995a). “Language and Nature.” Mind 104.413. 1–61.
— (1995b). The Minimalist Program. The MIT Press.
— (1997). “Language and Problems of Knowledge.” Teorema 16.2. 5–33.
— (2000a). “Linguistics and Brain Science.” In: Image, Language, Brain: Papers from the First Mind Articulation Project Symposium. Ed. by Alec Marantz, Yasushi Miyashita, and Wayne O’Neil. The MIT Press, 13–28.
— (2000b). New Horizons in the Study of Language and Mind. Cambridge University Press.
— (2002a). “Language and the brain.” In: On Nature and Language. Ed. by Adriana Belletti and Luigi Rizzi. Cambridge University Press, 61–91.
— (2002b). On Nature and Language. Ed. by Adriana Belletti and Luigi Rizzi. Cambridge University Press.
— (2003a). “Internalist Explorations.” In: Reflections and Replies: Essays on the Philosophy of Tyler Burge. Ed. by Martin Hahn and Bjørn Ramberg. The MIT Press, 259–288.
— (2003b). “Reply to Lycan.” In: Chomsky and His Critics. Ed. by Louise M. Antony and Norbert Hornstein. Blackwell, 255–63.
— (2009). “The Mysteries of Nature: How Deeply Hidden?” The Journal of Philosophy 106.4. 167–200.
— (2012). The Science of Language: Interviews with James McGilvray. Cambridge University Press.
— (2015). What Kind of Creatures Are We? Columbia University Press.
— (2016). “Minimal Computation and the Architecture of Language.” Chinese Semiotic Studies 12.1. 13–24.
— (2017). “The Language Capacity: Architecture and Evolution.” Psychonomic Bulletin and Review 24.1. 200–203.
— (2018). “Mentality Beyond Consciousness.” In: Ted Honderich on Consciousness, Determinism, and Humanity. Ed.
by Gregg D. Caruso. Springer, 33–46.
Chomsky, Noam and Morris Halle (1965). “Some Controversial Questions in Phonological Theory.” Journal of Linguistics 1.2. 97–138.
— (1968). The Sound Pattern of English. Harper & Row.
Chomsky, Noam and Jerrold J. Katz (1974). “What the Linguist is Talking About.” The Journal of Philosophy 71.12. 347–367.
Churchland, Patricia Smith (1986). Neurophilosophy: Toward a Unified Science of the Mind-Brain. The MIT Press.
Clark, Austen (1986). “Psychofunctionalism and Chauvinism.” Philosophy of Science 53.4. 535–559.
Clark, David Glenn and Jeffrey L. Cummings (2003). “Aphasia.” In: Neurological Disorders: Course and Treatment. Ed. by Thomas Brandt et al. Academic Press, 265–275.
Clark, Herbert H. (1992). Arenas of Language Use. The University of Chicago Press.
— (1996). Using Language. Cambridge University Press.
Cohen, L. Jonathan (1970). “Some Applications of Inductive Logic to the Theory of Language.” American Philosophical Quarterly 7.4. 299–310.

Collins, Chris and Edward Stabler (2016). “A Formalization of Minimalist Syntax.” Syntax 19.1. 43–78.
Collins, John (2000). “Theory of Mind, Logical Form and Eliminativism.” Philosophical Psychology 13.4. 465–490.
— (2004). “Faculty Disputes.” Mind and Language 19.5. 503–533.
— (2007). “Review of Ignorance of Language by Michael Devitt.” Mind 116.462. 416–423.
— (2008). Chomsky: A Guide for the Perplexed. Continuum.
— (2009a). “Methodology, Not Metaphysics: Against Semantic Externalism.” Aristotelian Society Supplementary Volume 83.1. 53–69.
— (2009b). “The Limits of Conceivability: Logical Cognitivism and The Language Faculty.” Synthese 171.1. 175–194.
— (2018). “Perceiving Language: Issues between Travis and Chomsky.” In: The Philosophy of Charles Travis: Language, Thought, and Perception. Ed. by John Collins and Tamara Dobler. Oxford University Press, 155–180.
Connelly, James (2012). “Meaning is Normative: A Response to Hattiangadi.” Acta Analytica 27.1. 55–71.
Copeland, B. Jack (1996). “What is computation?” Synthese 108. 335–359.
Cresswell, Maxwell J. (1985). Structured Meanings: The Semantics of Propositional Attitudes. The MIT Press.
— (1994). Language in the world: A philosophical enquiry. Cambridge University Press.
Cummins, Robert and Robert M. Harnish (1980). “The language faculty and the interpretation of linguistics.” The Behavioral and Brain Sciences 3. 18–19.
D’Agostino, Fred (1986). Chomsky’s System of Ideas. Clarendon Press.
Das, Nilanjan and Bernhard Salow (2018). “Transparency and the KK Principle.” Noûs 52.1. 3–23.
Davidson, Donald (1970). “Semantics for Natural Languages.” Linguaggi nella Società e nella Tecnica. 177–188. Reprinted in Davidson (1984b).
— (1984a). “Communication and Convention.” Synthese 59.1. 3–17. Reprinted in Davidson (1984b).
— (1984b). Inquiries into Truth and Interpretation. Oxford University Press.
— (1986).
“A Nice Derangement of Epitaphs.” In: Philosophical Grounds of Rationality: Intentions, Categories, Ends. Ed. by Richard E. Grandy and Richard Warner. Oxford University Press, 157–174.
— (1992). “The Second Person.” Midwest Studies in Philosophy 17.1. 255–267. Reprinted in Davidson (2001).
— (2001). Subjective, Intersubjective, Objective. Oxford University Press.
Davies, Martin (1981). Meaning, Quantification, Necessity: Themes in philosophical logic. Routledge.
— (1987). “Tacit Knowledge and Semantic Theory: Can a Five per cent Difference Matter?” Mind 96.384. 441–462.
— (2002). “Philosophy of Language.” In: The Blackwell Companion to Philosophy. Ed. by Nicholas Bunnin and E. P. Tsui-James. Second Edition. Blackwell, 90–146.
Davis, Wayne A. (1992). “Speaker Meaning.” Linguistics and Philosophy 15.3. 223–253.
— (2003). Meaning, Expression and Thought. Cambridge University Press.
Dennett, Daniel C. (1969). Content and Consciousness. Routledge & Kegan Paul.
Devitt, Michael (1981). Designation. Columbia University Press.

— (1996). Coming to Our Senses: A Naturalistic Program for Semantic Localism. Cambridge University Press.
— (2006). Ignorance of Language. Oxford University Press.
— (2011). “Linguistic Knowledge.” In: Knowing How: Essays on Knowledge, Mind, and Action. Ed. by John Bengson and Marc A. Moffett. Oxford University Press, 314–333.
Devitt, Michael and Kim Sterelny (1989). “Linguistics: What’s Wrong with “The Right View”.” Philosophical Perspectives 3. 497–531.
— (1999). Language and Reality: An Introduction to the Philosophy of Language. Second Edition. The MIT Press.
Dorst, Kevin (2019). “Abominable KK Failures.” Mind 128.512. 1227–1259.
Dresner, Eli (2010). “Measurement-Theoretic Representation and Computation-Theoretic Realization.” The Journal of Philosophy 107.6. 275–292.
Dretske, Fred (1969). Seeing and Knowing. Routledge.
— (1974). “Explanation in Linguistics.” In: Explaining Linguistic Phenomena. Ed. by David Cohen. John Wiley & Sons, 21–42.
Dummett, Michael (1975). “What is a Theory of Meaning?” In: Mind and Language: Wolfson College Lectures 1974. Ed. by Samuel D. Guttenplan. Oxford University Press, 97–138. Reprinted in Dummett (1993).
— (1976). “What is a Theory of Meaning? (II).” In: Truth and Meaning: Essays in Semantics. Ed. by Gareth Evans and John McDowell. Oxford University Press, 67–137. Reprinted in Dummett (1993).
— (1981a). Frege: Philosophy of Language. Second Edition. Harvard University Press.
— (1981b). “Objections to Chomsky.” London Review of Books 3.16. 5–6.
— (1989). “Language and Communication.” In: Reflections on Chomsky. Ed. by Alexander George. Blackwell, 192–212. Reprinted in Dummett (1993).
— (1993). The Seas of Language. Oxford University Press.
Dupre, Gabe (2020). “What would it mean for natural language to be the language of thought?” Linguistics and Philosophy.
Egan, Frances (1992). “Individualism, Computation, and Perceptual Content.” Mind 101.403. 443–459.
— (1995).
“Computation and Content.” The Philosophical Review 104.2. 181–203.
— (2003). “Naturalistic Inquiry: Where does Mental Representation Fit in?” In: Chomsky and His Critics. Ed. by Louise M. Antony and Norbert Hornstein. Blackwell, 89–104.
Elga, Adam and Agustín Rayo (Oct. 2019). “Fragmentation and Information Access.” Draft.
Enç, Berent (1983). “In Defense of the Identity Theory.” The Journal of Philosophy 80.5. 279–298.
Endicott, Ronald P. (1993). “Species-Specific Properties and More Narrow Reductive Strategies.” Erkenntnis 38.3. 303–321.
Engelberg, Stefan (2011). “Frameworks of lexical decomposition of verbs.” In: Semantics: An International Handbook of Natural Language Meaning, Volume 1. Ed. by Klaus von Heusinger, Claudia Maienborn, and Paul Portner. de Gruyter, 358–399.
Evans, Gareth (1981). “Semantic Theory and Tacit Knowledge.” In: Wittgenstein: To Follow a Rule. Ed. by Steven H. Holtzman and Christopher M. Leich. Routledge, 118–137. Reprinted in Evans (1985).
— (1982). The Varieties of Reference. Ed. by John McDowell. Oxford University Press.
— (1985). Collected Papers. Oxford University Press.

Fantl, Jeremy (2008). “Knowing-How and Knowing-That.” Philosophy Compass 3.3. 451–470.
Field, Hartry (1978). “Mental Representation.” Erkenntnis 13.1. 9–61. Reprinted in Field (2001).
— (1986). “Stalnaker on Intentionality.” Pacific Philosophical Quarterly 67. 98–112.
— (2001). Truth and the Absence of Fact. Oxford University Press.
Fine, Kit (2001). “The Question of Realism.” Philosophers’ Imprint 1.2. 1–30.
— (2012). “Guide to Ground.” In: Metaphysical Grounding: Understanding the Structure of Reality. Ed. by Fabrice Correia and Benjamin Schnieder. Cambridge University Press, 37–80.
Fodor, Janet D., Jerry A. Fodor, and M. F. Garrett (1975). “The Psychological Unreality of Semantic Interpretations.” Linguistic Inquiry 6.4. 515–531.
Fodor, Jerry A. (1968). Psychological Explanation: An Introduction to the Philosophy of Psychology. Random House.
— (1975). The Language of Thought. Harvard University Press.
— (1981). “Some Notes on What Linguistics Is About.” In: Readings in Philosophy of Psychology. Ed. by Ned Block. Harvard University Press, 197–207.
— (1983). The Modularity of Mind: An Essay on Faculty Psychology. The MIT Press.
— (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. The MIT Press.
— (1989). “Stephen Schiffer’s Dark Night of The Soul: A Review of Remnants of Meaning.” Philosophy and Phenomenological Research 50.2. 409–423. Reprinted in J. A. Fodor (1990).
— (1990). A Theory of Content and Other Essays. The MIT Press.
— (1998). In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind. The MIT Press.
— (2000). The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. The MIT Press.
— (2008). LOT 2: The Language of Thought Revisited. Clarendon Press.
Fodor, Jerry A. and Ernie Lepore (1998). “The Emptiness of the Lexicon: Reflections on James Pustejovsky’s The Generative Lexicon.” Linguistic Inquiry 29.2. 269–288.
Foster, John (1982). The Case for Idealism.
Routledge & Kegan Paul.
Fricker, Elizabeth (2003). “Understanding and Knowledge of What Is Said.” In: Epistemology of Language. Ed. by Alex Barber. Oxford University Press, 325–366.
Friederici, Angela D. (2017). Language in Our Brain: The Origins of a Uniquely Human Capacity. The MIT Press.
Gasparri, Luca and Diego Marconi (2019). “Word Meaning.” In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Fall 2019. Stanford University.
Geeraerts, Dirk (2010). Theories of Lexical Semantics. Oxford University Press.
George, Alexander (1989). “How Not to Become Confused about Linguistics.” In: Reflections on Chomsky. Ed. by Alexander George. Blackwell, 90–110.
Geva, Sharon, Sophie Bennett, et al. (2011). “Discrepancy between inner and overt speech: Implications for post-stroke aphasia and normal language processing.” Aphasiology 25.3. 323–343.
Geva, Sharon and Charles Fernyhough (2019). “A Penny for Your Thoughts: Children’s Inner Speech and Its Neuro-Development.” Frontiers in Psychology 10.1708. 1–12.
Gilbert, Margaret (1989). On Social Facts. Princeton University Press.

Glanzberg, Michael (2014). “Explanation and Partiality in Semantic Theory.” In: Metasemantics: New Essays on the Foundations of Meaning. Ed. by Alexis Burgess and Brett Sherman. Oxford University Press, 259–292.
Godfrey-Smith, Peter (2009). “Triviality Arguments against Functionalism.” Philosophical Studies 145.2. 273–295.
Goodman, Nelson (1967). “The Epistemological Argument.” In: A Portrait of Twenty-Five Years: Boston Colloquium for the Philosophy of Science 1960–1985. Ed. by Robert S. Cohen and Marx W. Wartofsky. Kluwer, 52–57.
— (1969). “The Emperor’s New Ideas.” In: Language and Philosophy: A Symposium. Ed. by Sidney Hook. New York University Press, 138–142.
Grandy, Richard E. (1972). “Grammatical Knowledge and States of Mind.” Behaviorism 1.1. 16–21.
— (1990). “Understanding and the Principle of Compositionality.” Philosophical Perspectives 4. 557–572.
Graves, Christina et al. (1973). “Tacit Knowledge.” The Journal of Philosophy 70.11. 318–330.
Greco, Daniel (2014). “Could KK Be OK?” The Journal of Philosophy 111.4. 169–197.
Green, Mitchell S. (2018). “How Much Mentality is Needed for Mentality?” In: The Routledge Handbook of Philosophy of Animal Minds. Ed. by Kristin Andrews and Jacob Beck. Routledge, 313–323.
Gregory, Daniel (2016). “Inner Speech, Imagined Speech, and Auditory Verbal Hallucinations.” Review of Philosophy and Psychology 7.3. 653–673.
— (2017). “Inner Speech: A Philosophical Analysis.” PhD thesis. Australian National University.
— (2018). “The Feeling of Sincerity: Inner Speech and the Phenomenology of Assertion.” Thought 7.4. 225–236.
Grice, H. Paul (1957). “Meaning.” The Philosophical Review 66.3. 377–388.
— (1968). “Utterer’s Meaning, Sentence-Meaning, and Word-Meaning.” In: Philosophy, Language, and Artificial Intelligence. Ed. by Jack Kulas, James H. Fetzer, and Terry L. Rankin. Springer, 49–66.
— (1982). “Meaning Revisited.” In: Mutual Knowledge. Ed. by N. V. Smith. Academic Press, 223–243.
— (1989).
Studies in the Way of Words. Harvard University Press. Habermas, Jurgen¨ (1970). “Towards a theory of communicative competence.” Inquiry 13.1–4. 360–375. — (1998). On the Pragmatics of Communication. The MIT Press. Hacking, Ian (1975). Why Does Language Matter to Philosophy? Cambridge University Press. — (1980). “Chomsky and his critics.” New York Review of Books 27.16. 47–50. Halle, Morris (2002). From Memory to Speech and Back: Papers on Phonetics and Phonology 1954-2002. De Gruyter. Hanna, Robert (2006). Rationality and Logic. The MIT Press. Harley, Trevor A. (2014). The Psychology of Language: From Data to Theory. Fourth Edition. Psychology Press. Harman, Gilbert (1967). “Psychological Aspects of the Theory of Syntax.” The Journal of Philosophy 64.2. 75–87. — (1969). “Linguistic Competence and Empiricism.” In: Language and Philosophy: A Sym- posium. Ed. by Sidney Hook. New York University Press, 143–151.

Harman, Gilbert (1970). "Language Learning." Noûs 4.1. 33–43. Reprinted in Harman (1999).
— (1973a). "Review of Language and Mind by Noam Chomsky." Language 49.2. 453–464.
— (1973b). Thought. Princeton University Press.
— (1975). "Language, Thought, and Communication." In: Language, Mind, and Knowledge. Ed. by Keith Gunderson. Minnesota Studies in the Philosophy of Science, Vol. VII. University of Minnesota Press, 270–298. Reprinted in Harman (1999).
— (1999). Reasoning, Meaning and Mind. Oxford University Press.
Hattiangadi, Anandi (2006). "Is Meaning Normative?" Mind and Language 21.2. 220–240.
— (2007). Oughts and Thoughts: Rule-Following and the Normativity of Content. Oxford University Press.
— (2017). "The Normativity of Meaning." In: A Companion to the Philosophy of Language. Ed. by Bob Hale, Crispin Wright, and Alexander Miller. Second Edition. Blackwell, 649–669.
Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch (2002). "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?" Science 298. 1569–79.
Hawthorne, John (1990). "A note on 'Languages and Language'." Australasian Journal of Philosophy 68.1. 116–118.
— (1993). "Meaning and Evidence: A Reply to Lewis." Australasian Journal of Philosophy 71.2. 206–211.
Heim, Irene and Angelika Kratzer (1997). Semantics in Generative Grammar. Wiley-Blackwell.
Hendry, Robin Findlay (2011). "Reduction, Emergence and Physicalism." In: Philosophy of Chemistry. Ed. by Andrea I. Woody, Robin Findlay Hendry, and Paul Needham. Elsevier, 367–386.
Higginbotham, James (1983). "Is Grammar Psychological?" In: How Many Questions?: Essays in Honor of Sidney Morgenbesser. Ed. by Leigh S. Cauman et al. Hackett, 170–179.
— (1987). "Is Semantics Necessary?" Proceedings of the Aristotelian Society 88. 219–241.
— (1991). "Remarks on the Metaphysics of Linguistics." Linguistics and Philosophy 14.5. 555–566.
Hinzen, Wolfram (2006). Mind Design and Minimal Syntax. Oxford University Press.
— (2011). "Language and Thought." In: The Oxford Handbook of Linguistic Minimalism. Ed. by Cedric Boeckx. Oxford University Press, 499–522.
Hinzen, Wolfram and Michelle Sheehan (2013). The Philosophy of Universal Grammar. Oxford University Press.
Horgan, Terence (1974). "Microreduction and the Mind–Body Problem." PhD thesis. The University of Michigan.
— (1984). "Supervenience and Cosmic Hermeneutics." The Southern Journal of Philosophy 22.S1. 19–38.
— (1993). "From Supervenience to Superdupervenience: Meeting the Demands of a Material World." Mind 102.408. 555–586.
Horgan, Terence and Mark Timmons (1992). "Troubles on Moral Twin Earth: Moral Queerness Revived." Synthese 92. 221–60.
Hornstein, Norbert (1984). Logic as Grammar. The MIT Press.
— (2009). A Theory of Syntax: Minimal Operations and Universal Grammar. Cambridge University Press.
Hornstein, Norbert and Paul Pietroski (2009). "Basic operations: Minimal syntax-semantics." Catalan Journal of Linguistics 8. 113–139.

Hymes, Dell (1972). "On Communicative Competence." In: Sociolinguistics: Selected Readings. Ed. by John B. Pride and Janet Holmes. Penguin Books, 269–293.
Illari, Phyllis McKay and Jon Williamson (2012). "What is a mechanism? Thinking about mechanisms across the sciences." European Journal for Philosophy of Science 2. 119–135.
Isac, Daniela and Charles Reiss (2008). I-Language: An Introduction to Linguistics as Cognitive Science. Oxford University Press.
Jackson, Frank, Robert Pargetter, and Elizabeth W. Prior (1982). "Functionalism and Type-Type Identity Theories." Philosophical Studies 42.2. 209–225.
Jenkins, Lyle (2000). Biolinguistics: Exploring the Biology of Language. Cambridge University Press.
— (2013). "Biolinguistics: A historical perspective." In: The Cambridge Handbook of Biolinguistics. Ed. by Cedric Boeckx and Kleanthes K. Grohmann. Cambridge University Press, 4–11.
Johnson, Kent (2014). "Methodology in generative linguistics: Scratching below the surface." Language Sciences 44. 47–59.
Johnson, Kent and Ernie Lepore (2004). "Knowledge and Semantic Competence." In: Handbook of Epistemology. Ed. by Ilkka Niiniluoto, Matti Sintonen, and Jan Woleński. Springer, 707–731.
Johnson, Mark (2017). "Marr's levels and the minimalist program." Psychonomic Bulletin and Review 24.1. 171–74.
Kamp, Hans and Uwe Reyle (1993). From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Springer.
Kaplan, David (1989a). "Afterthoughts." In: Themes from Kaplan. Ed. by Joseph Almog, John Perry, and Howard Wettstein. Oxford University Press, 565–614.
— (1989b). "Demonstratives: An Essay on the Semantics, Logic, Metaphysics, and Epistemology of Demonstratives and Other Indexicals." In: Themes from Kaplan. Ed. by Joseph Almog, John Perry, and Howard Wettstein. Oxford University Press, 481–563.
— (1990). "Words." Proceedings of the Aristotelian Society, Supplementary Volumes 64. 93–119.
Katz, Jerrold J. (1981). Language and Other Abstract Objects. Rowman and Littlefield.
Kiesselbach, Matthias (2014). "The Normativity of Meaning: From Constitutive Norms to Prescriptions." Acta Analytica 29.4. 427–440.
Kim, Jaegwon (1978). "Supervenience and Nomological Incommensurables." American Philosophical Quarterly 15.2. 149–156.
— (1982a). "Psychophysical Supervenience." Philosophical Studies 41.1. 51–70.
— (1982b). "Psychophysical Supervenience as a Mind-Body Theory." Cognition and Brain Theory 5.2. 129–147.
— (1984a). "Concepts of Supervenience." Philosophy and Phenomenological Research 45.2. 153–176. Reprinted in Kim (1993).
— (1984b). "Reference: Some Recent Philosophical Issues." Language Research 20.4. 311–320.
— (1987). "'Strong' and 'Global' Supervenience Revisited." Philosophy and Phenomenological Research 48.2. 315–326.
— (1989). "The Myth of Nonreductive Materialism." Proceedings and Addresses of the American Philosophical Association 63.3. 31–47.
— (1990). "Supervenience as a Philosophical Concept." Metaphilosophy 21.1/2. 1–27. Reprinted in Kim (1993).

Kim, Jaegwon (1992). "Multiple Realization and the Metaphysics of Reduction." Philosophy and Phenomenological Research 52.1. 1–26.
— (1993). Supervenience and Mind: Selected Philosophical Essays. Cambridge University Press.
King, Jeffrey C. (2002). "Two Sorts of Claim about 'Logical Form'." In: Logical Form and Language. Ed. by Gerhard Preyer and Georg Peter. Oxford University Press, 118–131.
Kölbel, Max (1998). "Lewis, Language, Lust and Lies." Inquiry 41.3. 301–315.
— (2002). Truth Without Objectivity. Routledge.
Kripke, Saul A. (1972). "Naming and Necessity." In: Semantics of Natural Language. Ed. by Donald Davidson and Gilbert Harman. Reidel, 253–355, 763–769. Republished as Kripke (1980).
— (1980). Naming and Necessity. Harvard University Press.
— (1982). Wittgenstein on Rules and Private Language. Harvard University Press.
— (2011). "The First Person." In: Philosophical Troubles: Collected Papers, Volume I. Oxford University Press.
Kusch, Martin (2006). A Sceptical Guide to Meaning and Rules: Defending Kripke's Wittgenstein. Acumen.
Langland-Hassan, Peter et al. (2015). "Inner speech deficits in people with aphasia." Frontiers in Psychology 6.528. 1–10.
Larson, Richard K. and Gabriel M. A. Segal (1995). Knowledge of Meaning: An Introduction to Semantic Theory. The MIT Press.
Laurence, Stephen (1996). "A Chomskian Alternative to Convention-Based Semantics." Mind 105.418. 269–301.
— (2003). "Is Linguistics a Branch of Psychology?" In: Epistemology of Language. Ed. by Alex Barber. Oxford University Press, 69–106.
Lenneberg, Eric H. (1967). Biological Foundations of Language. John Wiley & Sons.
Lepore, Ernie (1997). "Conditions on Understanding Language." Proceedings of the Aristotelian Society 97.1. 41–60. Reprinted in Lepore and Loewer (2011).
Lepore, Ernie and Barry Loewer (2011). Meaning, Mind, and Matter: Philosophical Essays. Oxford University Press.
Lepore, Ernie and Kirk Ludwig (2005). Donald Davidson: Meaning, Truth, Language, and Reality. Oxford University Press.
— (2007). Donald Davidson's Truth-Theoretic Semantics. Oxford University Press.
Levin, Janet (2018). "Functionalism." In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Fall 2018. Stanford University.
Levine, David N., Ronald Calvanio, and Alice Popovics (1982). "Language in the absence of inner speech." Neuropsychologia 20.4. 391–409.
Lewis, David K. (1969). Convention: A Philosophical Study. Harvard University Press.
— (1970a). "General Semantics." Synthese 22.1–2. 18–67. Reprinted in Lewis (1983).
— (1970b). "How to Define Theoretical Terms." The Journal of Philosophy 67.13. 427–446. Reprinted in Lewis (1983).
— (1972). "Psychophysical and Theoretical Identifications." Australasian Journal of Philosophy 50.3. 249–258. Reprinted in Lewis (1999).
— (1974). "Radical Interpretation." Synthese 27.3–4. 331–344. Reprinted in Lewis (1983).

— (1975). "Languages and Language." In: Language, Mind, and Knowledge. Ed. by Keith Gunderson. Minnesota Studies in the Philosophy of Science, Vol. VII. University of Minnesota Press, 3–35. Reprinted in Lewis (1983).
— (1980). "Index, Context, and Content." In: Philosophy and Grammar. Ed. by Stig Kanger and Sven Öhman. Springer, 79–100. Reprinted in Lewis (1998).
— (1983). Philosophical Papers: Volume I. Oxford University Press.
— (1986). On the Plurality of Worlds. Blackwell.
— (1992). "Meaning without use: Reply to Hawthorne." Australasian Journal of Philosophy 70.1. 106–110. Reprinted in Lewis (2000).
— (1994). "Reduction of Mind." In: A Companion to the Philosophy of Mind. Ed. by Samuel D. Guttenplan. Blackwell, 412–431. Reprinted in Lewis (1999).
— (1997). "Naming the Colours." Australasian Journal of Philosophy 75.3. 325–342. Reprinted in Lewis (1999).
— (1998). Papers in Philosophical Logic. Cambridge University Press.
— (1999). Papers in Metaphysics and Epistemology. Cambridge University Press.
— (2000). Papers in Ethics and Social Philosophy. Cambridge University Press.
— (2002). "Tharp's third theorem." Analysis 62.2. 95–97.
— (2009). "Ramseyan Humility." In: Conceptual Analysis and Philosophical Naturalism. Ed. by David Braddon-Mitchell and Robert Nola. The MIT Press, 203–222.
Lin, Francis Y. (1999). "Chomsky on the 'Ordinary Language' View of Language." Synthese 120. 151–191.
Loar, Brian (1976). "Two Theories of Meaning." In: Truth and Meaning: Essays in Semantics. Ed. by Gareth Evans and John McDowell. Oxford University Press, 138–161. Reprinted in Loar (2017).
— (1981). Mind and Meaning. Cambridge University Press.
— (2017). Consciousness and Meaning: Selected Essays. Ed. by Katalin Balog and Stephanie Beardman. Oxford University Press.
Loewer, Barry (1997). "A guide to naturalizing semantics." In: A Companion to the Philosophy of Language. Ed. by Bob Hale and Crispin Wright. Blackwell, 108–126.
Ludlow, Peter (1999). Semantics, Tense, and Time: An Essay in the Metaphysics of Natural Language. The MIT Press.
— (2002). "LF and Natural Logic." In: Logical Form and Language. Ed. by Gerhard Preyer and Georg Peter. Oxford University Press, 132–168.
— (2006). "The Myth of Human Language." Croatian Journal of Philosophy VI.18. 385–400.
— (2007). "Understanding Temporal Indexicals." In: Situating Semantics: Essays on the Philosophy of John Perry. Ed. by Michael O'Rourke and Corey Washington. The MIT Press, 155–177.
— (2011). The Philosophy of Generative Linguistics. Oxford University Press.
Lycan, William G. (1981). "Form, Function, and Feel." The Journal of Philosophy 78.1. 24–50.
— (1987). Consciousness. The MIT Press.
— (1996). Consciousness and Experience. The MIT Press.
— (2003). "Chomsky on the Mind–Body Problem." In: Chomsky and His Critics. Ed. by Louise M. Antony and Norbert Hornstein. Blackwell, 11–28.
MacFarlane, John (2014). Assessment Sensitivity: Relative Truth and its Applications. Oxford University Press.

Machamer, Peter, Lindley Darden, and Carl F. Craver (2000). "Thinking about Mechanisms." Philosophy of Science 67.1. 1–25.
Marr, David (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman.
Martins, Pedro Tiago and Cedric Boeckx (2016). "What we talk about when we talk about biolinguistics." Linguistics Vanguard 2.1. 1–15.
Matthews, Robert J. (2003). "Does Linguistic Competence Require Knowledge of Language?" In: Epistemology of Language. Ed. by Alex Barber. Oxford University Press, 187–213.
McDowell, John (1977). "On the Sense and Reference of a Proper Name." Mind 86.342. 159–185.
— (1980). "Meaning, Communication, and Knowledge." In: Philosophical Subjects: Essays Presented to P. F. Strawson. Ed. by Zak van Straaten. Oxford University Press, 117–139. Reprinted in McDowell (1998a).
— (1981). "Anti-realism and the epistemology of understanding." In: Meaning and Understanding. Ed. by Herman Parret and Jacques Bouveresse. de Gruyter, 225–248. Reprinted in McDowell (1998a).
— (1984). "Wittgenstein on Following a Rule." Synthese 58.3. 325–363. Reprinted in McDowell (1998b).
— (1987). "In Defence of Modesty." In: Michael Dummett: Contributions to Philosophy. Ed. by Barry M. Taylor. Martinus Nijhoff, 59–80. Reprinted in McDowell (1998a).
— (1998a). Meaning, Knowledge, and Reality. Harvard University Press.
— (1998b). Mind, Value, and Reality. Harvard University Press.
McGilvray, James (1998). "Meanings Are Syntactically Individuated and Found in the Head." Mind and Language 13.2. 225–280.
— (2001). "Chomsky on the Creative Aspect of Language Use and Its Implications for Lexical Semantic Studies." In: The Language of Word Meaning. Cambridge University Press, 5–27.
— (2013). "The philosophical foundations of biolinguistics." In: The Cambridge Handbook of Biolinguistics. Ed. by Cedric Boeckx and Kleanthes K. Grohmann. Cambridge University Press, 22–46.
McGinn, Colin (1980). "Philosophical Materialism." Synthese 44.2. 173–206.
McLaughlin, Brian P. (1995). "Varieties of Supervenience." In: Supervenience: New Essays. Ed. by Elias E. Savellos and Ümit D. Yalçın. Cambridge University Press, 16–59.
Mellor, D. H. (1984). "What Is Computational Psychology? (II)." Proceedings of the Aristotelian Society 58. 37–53.
Melnyk, Andrew (2003). A Physicalist Manifesto: Thoroughly Modern Materialism. Cambridge University Press.
— (2016). "Grounding and the Formulation of Physicalism." In: Scientific Composition and Metaphysical Ground. Ed. by Kenneth Aizawa and Carl Gillett. Palgrave Macmillan, 249–269.
— (2018). "In Defense of a Realization Formulation of Physicalism." Topoi 37. 483–493.
Miller, George A. (1975). "Some Comments on Competence and Performance." Annals of the New York Academy of Sciences 263. 201–204.
Miller, S. R. (1986). "Truth-Telling and the Actual-Language Relation." Philosophical Studies 49.2. 281–294.
Montague, Richard (1970). "English as a formal language." In: Linguaggi nella Società e nella Tecnica. Ed. by Bruno Visentini. Edizioni di Comunità, 189–224. Reprinted in Montague (1974).

— (1974). Formal Philosophy: Selected Papers of Richard Montague. Ed. by Richmond H. Thomason. Yale University Press.
Myers, Scott (2000). "Boundary Disputes: The Distinction between Phonetic and Phonological Sound Patterns." In: Phonological Knowledge: Conceptual and Empirical Issues. Ed. by Noel Burton-Roberts, Philip Carr, and Gerard Docherty. Oxford University Press, 245–272.
Nagel, Ernest (1961). The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett.
Nagel, Thomas (1997). The Last Word. Oxford University Press.
Napoletano, Toby (2017). "Why Truth-Conditional Semantics in Generative Linguistics is Still the Better Bet." Erkenntnis 83.3. 673–692.
Neale, Stephen (1993). "Logical Form and LF." In: Noam Chomsky: Critical Assessments. Ed. by C. Otero. Routledge, 788–838.
— (1994). "What is Logical Form?" In: Logic and Philosophy of Science in Uppsala. Ed. by Dag Prawitz and Dag Westerståhl. Springer, 583–598.
Noë, Alva (2005). "Against Intellectualism." Analysis 65.4. 278–290.
O'Callaghan, Casey (2010). "Experiencing Speech." Philosophical Issues 20. 305–332.
Pagin, Peter (2003). "Schiffer on Communication." Facta Philosophica 5. 25–48.
— (2012). "Truth theories, competence, and semantic computation." In: Donald Davidson on Truth, Meaning, and the Mental. Ed. by Gerhard Preyer. Oxford University Press, 49–75.
Pagin, Peter and Dag Westerståhl (2011). "Compositionality." In: Semantics: An International Handbook of Natural Language Meaning, Volume 1. Ed. by Klaus von Heusinger, Claudia Maienborn, and Paul Portner. de Gruyter, 96–123.
Partee, Barbara Hall (2015). "Asking What Meaning Does: David Lewis's Contributions to Semantics." In: A Companion to David Lewis. Ed. by Barry Loewer and Jonathan Schaffer. Blackwell, 328–344.
Partee, Barbara Hall, Alice Ter Meulen, and Robert E. Wall (1993). Mathematical Methods in Linguistics. Springer.
Peacocke, Christopher (1974). "Finiteness and the Actual Language Relation." Proceedings of the Aristotelian Society 75. 147–165.
— (1976). "Truth Definitions and Actual Languages." In: Truth and Meaning: Essays in Semantics. Ed. by Gareth Evans and John McDowell. Oxford University Press, 162–188.
— (1986). "Explanation in Computational Psychology: Language, Perception and Level 1.5." Mind and Language 1.2. 101–123.
— (1989). "When is a Grammar Psychologically Real?" In: Reflections on Chomsky. Ed. by Alexander George. Blackwell, 111–130.
— (1992). A Study of Concepts. The MIT Press.
Pereplyotchik, David (2017). Psychosyntax: The Nature of Grammar and its Place in the Mind. Springer.
Pettit, Philip (1990). "The Reality of Rule-Following." Mind 99.393. 1–21.
Piccinini, Gualtiero (2004). "Functionalism, Computationalism, and Mental States." Studies in History and Philosophy of Science 35.4. 811–833.
— (2010). "The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism." Philosophy and Phenomenological Research 81.2. 269–311.
— (2015). Physical Computation: A Mechanistic Account. Oxford University Press.

Piccinini, Gualtiero and Carl F. Craver (2011). "Integrating psychology and neuroscience: Functional analyses as mechanism sketches." Synthese 183. 283–311.
Pietroski, Paul (2010). "Concepts, Meanings and Truth: First Nature, Second Nature and Hard Work." Mind and Language 25.3. 247–278.
— (2011). "Minimal Semantic Instructions." In: The Oxford Handbook of Linguistic Minimalism. Ed. by Cedric Boeckx. Oxford University Press, 472–498.
— (2018). Conjoining Meanings: Semantics Without Truth Values. Oxford University Press.
Platts, Mark (1997). Ways of Meaning: An Introduction to a Philosophy of Language. Second Edition. The MIT Press.
Poeppel, David (2017). "The Influence of Chomsky on the Neuroscience of Language." In: The Cambridge Companion to Chomsky. Ed. by James McGilvray. Cambridge University Press, 155–174.
Poeppel, David and David Embick (2005). "Defining the Relation Between Linguistics and Neuroscience." In: Twenty-First Century Psycholinguistics: Four Cornerstones. Ed. by Anne Cutler. Lawrence Erlbaum Associates, 103–118.
Povich, Mark and Carl F. Craver (2018). "Mechanistic Levels, Reduction, and Emergence." In: The Routledge Handbook of Mechanisms and Mechanical Philosophy. Ed. by Stuart Glennan and Phyllis Illari. Routledge, 185–197.
Putnam, Hilary (1967). "The Nature of Mental States." In: Art, Mind, and Religion: Proceedings of the 1965 Oberlin Colloquium in Philosophy. Ed. by W. H. Capitan and D. D. Merrill. University of Pittsburgh Press, 37–48. Originally titled "Psychological Predicates". Reprinted in Putnam (1979).
— (1979). Mind, Language and Reality: Philosophical Papers, Volume II. Cambridge University Press.
— (1980). "Models and Reality." The Journal of Symbolic Logic 45.3. 464–482.
— (1981). Reason, Truth and History. Cambridge University Press.
Quine, Willard Van Orman (1960). Word and Object. The MIT Press.
— (1969). "Linguistics and Philosophy." In: Language and Philosophy: A Symposium. Ed. by Sidney Hook. New York University Press, 95–98.
— (1972). "Methodological Reflection on Current Linguistic Theory." In: Semantics of Natural Language. Ed. by Donald Davidson and Gilbert Harman. Springer, 442–454.
Rabern, Brian (2012). "Against the identification of assertoric content with compositional value." Synthese 189.1. 75–96.
Radford, Andrew (1988). Transformational Grammar: A First Course. Cambridge University Press.
Rattan, Gurpreet S. (2006). "E-Language versus I-Language." In: Encyclopedia of Language and Linguistics. Ed. by Keith Brown. Second Edition. Elsevier.
Reboul, Anne C. (2015). "Why language really is not a communication system: A cognitive view of language evolution." Frontiers in Psychology 6.1434. 1–12.
Rescorla, Michael (2013). "Against Structuralist Theories of Computational Implementation." The British Journal for the Philosophy of Science 64.4. 681–707.
— (2014). "A theory of computational implementation." Synthese 191.6. 1277–1307.
— (2015). "Convention." In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Summer 2017. Stanford University.
— (2019). "The Language of Thought Hypothesis." In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Summer 2019. Stanford University.
Rey, Georges (1997). Contemporary Philosophy of Mind: A Contentiously Classical Approach. Blackwell.

— (2003). "Chomsky, Intentionality, and a CRTT." In: Chomsky and His Critics. Ed. by Louise M. Antony and Norbert Hornstein. Blackwell, 103–139.
Rickheit, Gert, Hans Strohner, and Constanze Vorwerg (2008). "The concept of communicative competence." In: Handbook of Communication Competence. Ed. by Karlfried Knapp and Gerd Antos. De Gruyter, 15–62.
Rosen, Gideon (2010). "Metaphysical Dependence: Grounding and Reduction." In: Modality: Metaphysics, Logic, and Epistemology. Ed. by Bob Hale and Aviv Hoffmann. Oxford University Press, 109–136.
Rumfitt, Ian (1993). "Content and Context: The Paratactic Theory Revisited and Revised." Mind 102.407. 429–454.
Ryle, Gilbert (1949). The Concept of Mind. Hutchinson.
— (1974). "Mowgli in Babel." Philosophy 49.187. 5–11.
Schaffer, Jonathan (2009). "On What Grounds What." In: Metametaphysics: New Essays on the Foundations of Ontology. Ed. by David Chalmers, David Manley, and Ryan Wasserman. Oxford University Press, 347–383.
Scheer, Tobias (2010). A Guide to Morphosyntax-Phonology Interface Theories: How Extra-Phonological Information is Treated in Phonology since Trubetzkoy's Grenzsignale. De Gruyter.
Scheutz, Matthias (2001). "Computational versus Causal Complexity." Minds and Machines 11. 543–566.
Schiffer, Stephen (1972). Meaning. Oxford University Press.
— (1982). "Intention-Based Semantics." Notre Dame Journal of Formal Logic 23.2. 119–156.
— (1986). "Stalnaker's Problem of Intentionality." Pacific Philosophical Quarterly 67. 87–97.
— (1987). Remnants of Meaning. The MIT Press.
— (1993). "Actual-Language Relations." Philosophical Perspectives 7. 231–258.
— (2003). The Things We Mean. Oxford University Press.
— (2006). "Two Perspectives on Knowledge of Language." Philosophical Issues 16.1. 275–287.
— (2015). "Meaning and Formal Semantics in Generative Grammar." Erkenntnis 80.1. 61–87.
— (2017a). "Gricean Semantics and Vague Speaker-Meaning." Croatian Journal of Philosophy 17.3. 293–317.
— (2017b). "Intention and Convention in the Theory of Meaning." In: A Companion to the Philosophy of Language. Ed. by Bob Hale, Crispin Wright, and Alexander Miller. Second Edition. Blackwell, 49–72.
— (2019). "Vague Speaker-Meaning." In: Further Advances in Pragmatics and Philosophy: Part 2 Theories and Applications. Ed. by Alessandro Capone, Marco Carapezza, and Franco Lo Piparo. Springer, 3–23.
Schulte, Peter (2017). "Postscript [to "A guide to naturalizing semantics"]." In: A Companion to the Philosophy of Language. Ed. by Bob Hale, Crispin Wright, and Alexander Miller. Blackwell, 190–196.
Searle, John R. (1972). "Chomsky's Revolution in Linguistics." The New York Review of Books 18.12. 16–24.
— (1983). Intentionality: An essay in the philosophy of mind. Cambridge University Press.
Segal, Gabriel M. A. (2006). "Truth and Meaning." In: The Oxford Handbook of Philosophy of Language. Ed. by Ernie Lepore and Barry C. Smith. Oxford University Press, 189–212.
Shoemaker, Sydney (1981). "Some Varieties of Functionalism." Philosophical Topics 12.1. 93–119. Reprinted in Shoemaker (2003).

Shoemaker, Sydney (2003). Identity, Cause, and Mind: Philosophical Essays. Expanded Edition. Oxford University Press.
Siegel, Susanna (2006). "Which Properties are Represented in Perception?" In: Perceptual Experience. Ed. by Tamar Szabó Gendler and John Hawthorne. Oxford University Press, 481–503.
Smith, Barry C. (2006). "Why We Still Need Knowledge of Language." Croatian Journal of Philosophy VI.18. 431–456.
— (2008). "What I Know When I Know a Language." In: The Oxford Handbook of Philosophy of Language. Ed. by Ernie Lepore and Barry C. Smith. Oxford University Press.
Smith, Neil (2005). Language, Frogs and Savants: More Linguistic Problems, Puzzles and Polemics. Blackwell.
Smith, Neil and Nicholas Allott (2016). Chomsky: Ideas and Ideals. Third Edition. Cambridge University Press.
Soames, Scott (1984). "Linguistics and Psychology." Linguistics and Philosophy 7.2. 155–179. Reprinted in Soames (2009a).
— (1997). "Skepticism about Meaning: Indeterminacy, Normativity, and the Rule-Following Paradox." Canadian Journal of Philosophy 27.1. 211–249. Reprinted in Soames (2009b).
— (2009a). Philosophical Essays, Volume 1: Natural Language: What It Means and How We Use It. Princeton University Press.
— (2009b). Philosophical Essays, Volume 2: The Philosophical Significance of Language. Princeton University Press.
Sosa, Ernest (1999a). "How Must Knowledge Be Modally Related to What Is Known?" Philosophical Topics 26.1/2. 373–384.
— (1999b). "How to Defeat Opposition to Moore." Philosophical Perspectives 13. 141–153.
Stainton, Robert J. (2011). "In Defense of Public Languages." Linguistics and Philosophy 34.5. 479–488.
— (2016). "A Deranged Argument Against Public Languages." Inquiry 59.1. 6–32.
Stalnaker, Robert C. (1978). "Assertion." In: Syntax and Semantics, Vol. 9: Pragmatics. Ed. by Peter Cole. Academic Press, 315–332. Reprinted in Stalnaker (1999).
— (1984). Inquiry. Cambridge University Press.
— (1986). "Replies to Schiffer and Field." Pacific Philosophical Quarterly 67. 113–123.
— (1999). Context and Content: Essays on Intentionality in Speech and Thought. Oxford University Press.
— (2010). "Responses to Stanley and Schlenker." Philosophical Studies 151.1. 143–157.
— (2014). Context. Oxford University Press.
— (2015). "Luminosity and the KK thesis." In: Externalism, Self-Knowledge, and Skepticism: New Essays. Ed. by Sanford C. Goldberg. Oxford University Press, 19–40. Reprinted in Stalnaker (2019).
— (2019). Knowledge and Conditionals: Essays on the Structure of Inquiry. Oxford University Press.
Stanley, Jason (2010). "'Assertion' and Intentionality." Philosophical Studies 151.1. 87–113.
— (2011). "Knowing (How)." Noûs 45.2. 207–238.
Stanley, Jason and Timothy Williamson (2001). "Knowing How." The Journal of Philosophy 98.8. 411–444.
— (2017). "Skill." Noûs 51.4. 713–726.

Stark, Brielle C., Sharon Geva, and Elizabeth A. Warburton (2017). "Inner Speech's Relationship With Overt Speech in Poststroke Aphasia." Journal of Speech, Language, and Hearing Research 60. 2406–2415.
Stenius, Erik (1967). "Mood and Language-Game." Synthese 17.3. 254–274.
Stich, Stephen P. (1971). "What Every Speaker Knows." The Philosophical Review 80.4. 476–496.
— (1980). "What every speaker cognizes." The Behavioral and Brain Sciences 3. 38–39.
— (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. The MIT Press.
Strawson, Peter F. (1950). "On Referring." Mind 59.235. 320–344. Reprinted in Strawson (1971).
— (1971). Logico-Linguistic Papers. Methuen.
Szabó, Zoltán Gendler (2008). "Structure and Conventions." Philosophical Studies 137.3. 399–408.
— (2012). "Against Logical Form." In: Donald Davidson on Truth, Meaning, and the Mental. Ed. by Gerhard Preyer. Oxford University Press, 105–126.
— (2017). "Compositionality." In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Summer 2017. Stanford University.
Tiehen, Justin (2018). "Recent Work on Physicalism." Analysis 79.3. 537–551.
Tomasello, Michael (2003). Constructing a Language: A Usage-Based Theory of Language Acquisition. Harvard University Press.
— (2008). Origins of Human Communication. The MIT Press.
Vicente, Agustín and Fernando Martínez Manrique (2011). "Inner Speech: Nature and Functions." Philosophy Compass 6.3. 209–219.
Whiting, Daniel (2007). "The Normativity of Meaning Defended." Analysis 67.2. 133–140.
— (2016). "What is the Normativity of Meaning?" Inquiry 59.3. 219–238.
Wiggins, David (1997). "Languages as Social Objects." Philosophy 72.282. 499–524.
Williamson, Timothy (2000). Knowledge and Its Limits. Oxford University Press.
Wilson, Jessica M. (2005). "Supervenience-based Formulations of Physicalism." Noûs 39.3. 426–459.
— (2014). "No Work for a Theory of Grounding." Inquiry 57.5–6. 535–579.
— (2018). "Grounding-Based Formulations of Physicalism." Topoi 37. 495–512.
Wright, Crispin (1984). "Kripke's Account of the Argument Against Private Language." The Journal of Philosophy 81.12. 759–778. Reprinted in Wright (2001).
— (1986). "How Can the Theory of Meaning be a Philosophical Project?" Mind and Language 1.1. 31–44.
— (1987). "Theories of Meaning and Speakers' Knowledge." In: Realism, Meaning and Truth. Oxford University Press, 204–238.
— (2001). Rails to Infinity: Essays on Themes from Wittgenstein's Philosophical Investigations. Harvard University Press.
Yalcin, Seth (2014). "Semantics and Metasemantics in the Context of Generative Grammar." In: Metasemantics: New Essays on the Foundations of Meaning. Ed. by Alexis Burgess and Brett Sherman. Oxford University Press, 17–54.
Ziff, Paul (1984). Epistemic Analysis: A Coherence Theory of Knowledge. Springer.
