
The Determinacy of Arithmetic: The Puzzle and Its Solution

Jared Warren

Abstract

It seems that our arithmetical notions are determinate — that there is a fact of the matter about every single arithmetical question whether or not we can discover it. But the determinacy of arithmetic is extremely puzzling given the algorithmic nature of human cognition and famous limiting results in mathematical logic, such as Gödel’s incompleteness theorems, which seem to show that no theory of arithmetic learnable by algorithmic creatures like us could possibly be determinate. I call this the puzzle of arithmetical determinacy. This paper has three central goals: (i) to vividly present this puzzle and impress upon the reader that it isn’t a mere idle curiosity (section 1); (ii) to argue that extant attempts to solve the puzzle, including those using the open-endedness of our arithmetical notions, have thus far failed (sections 2 and 4); and (iii) to clearly explain the notion of open-endedness in a philosophically illuminating fashion and show how the notion can be used to solve the puzzle when coupled with the externalist lessons of contemporary metasemantics (sections 3, 5, and 6).

Keywords: Arithmetic, Indeterminacy, Open-Endedness, Incompleteness, Metasemantics, Externalism

1 The Puzzle

Consider the following three claims:

1. ARITHMETIC: we have a determinate conception of arithmetic

2. COGNITION: human cognition is algorithmic

3. FAILURE: no determinate theory of arithmetic is algorithmic


Here I’m going to briefly explain each of these claims and argue that they’re individually plausible before showing that jointly they give rise to an important puzzle about our grasp of arithmetic. I call this The Puzzle of Arithmetical Determinacy. The puzzle is perhaps best classified as metasemantic or metaconceptual—it resides in the borderlands of the philosophy of mind, language, and mathematics. As such, this puzzle should be of interest to a wide assortment of philosophers, but it isn’t as widely appreciated as it should be.

(1) ARITHMETIC: we have a determinate conception of arithmetic. Roughly, this means that there is a fact of the matter about any given arithmetical question that can be asked. To illustrate: consider Goldbach’s conjecture, a famous, unproven arithmetical conjecture saying that every even natural number greater than two is the sum of two, possibly repeated, prime numbers. Despite centuries of effort by mathematicians, this conjecture has been neither proven nor refuted. According to ARITHMETIC, there is a fact about Goldbach’s conjecture even though we haven’t found it. What’s more: even if we never prove or refute Goldbach’s conjecture, according to ARITHMETIC, the conjecture is currently either true or false, not both, not neither.

ARITHMETIC encodes the plausible idea that arithmetical truth is bivalent: every single arithmetical sentence φ is either true or false, not both, not neither.1 This is intimately related to, and often taken to be explained by, the fact that every single interpretation of our arithmetical language that isn’t somehow ruled out by our practice has the very same structure. Say that an interpretation I of our arithmetical language is “admissible” if it isn’t ruled out by our practice, “inadmissible” otherwise. If every admissible interpretation of arithmetic has the very same structure, then any sentence φ will be true in an admissible interpretation of arithmetic I just in case it is true in every admissible interpretation of arithmetic.
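The epistemic situation with Goldbach’s conjecture can be made vivid computationally: any particular even number can be mechanically checked, even though no finite search settles the universal claim. A minimal Python sketch (the function names and bounds are mine, purely illustrative):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n: int):
    """Return a pair of (possibly equal) primes summing to the even n > 2,
    or None if no such pair exists (none has ever been found)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 1000 has a witness here -- but this finite
# search settles nothing about the universal conjecture itself.
assert all(goldbach_witness(n) is not None for n in range(4, 1001, 2))
print(goldbach_witness(28))  # prints one witness pair: (5, 23)
```

Each instance is decidable by this mindless procedure; ARITHMETIC is the further claim that the undecided universal generalization nevertheless has a determinate truth value.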
Against this background, ARITHMETIC claims that in every admissible interpretation of our arithmetical language, the “numbers” form an ω-sequence. This is a claim that platonists and structuralists alike can endorse.2 In fact, even nominalists—those who reject the existence of abstract objects like numbers—will need to provide

1. Given this, it may be wondered why I don’t simply characterize determinacy using truth and falsity predicates (“T”, “F”) and a name-forming device (“⌜ ⌝”) as follows: T⌜φ⌝ ∨ F⌜φ⌝ (or perhaps this conjoined with “¬(T⌜φ⌝ ∧ F⌜φ⌝)” and “¬(¬T⌜φ⌝ ∧ ¬F⌜φ⌝)”)? I don’t because there are subtle issues concerning the relationship between claims like these “internal” to our formal truth theory and “external” claims like bivalence. In particular, I’m skeptical that this disjunction states bivalence in the sense we want unless certain assumptions about the background logic and proof system are also made. In addition, note that the “not neither” conjunct of the more involved statement of bivalence is simply equivalent to the simpler, disjunctive statement if the truth theory is formulated in a logic with the DeMorgan laws and double negation elimination.

2. Platonism is endorsed in, e.g., Gödel (1964), Maddy (1990), and Woodin (2004); structuralism is endorsed in, e.g., Benacerraf (1965), Resnik (1997), and Shapiro (1997).

some surrogate theory to handle the work that arithmetic does in science and daily life, so nominalists can endorse a version of ARITHMETIC couched in terms of the primitive notions of their favored surrogate theory.3 The key point is that ARITHMETIC isn’t a philosophically controversial claim accepted only by those who favor a particular philosophy of mathematics; rather it is a key point of agreement that cuts across most other issues in the philosophy of mathematics.

ARITHMETIC is intuitively appealing—it sure seems like there is a fact of the matter about Goldbach’s conjecture that is independent of our ability to discover it. But intuitively appealing claims are sometimes false. Why should we think that ARITHMETIC is true?

ARITHMETIC is so fundamental and ingrained in our way of thinking that it’s difficult to argue for it in a non-question-begging way. Perhaps the best we can do is to point to the difficulty of conceiving of its falsity. Goldbach’s conjecture is false if there is an even number greater than two that is not the sum of two primes, true otherwise; what possible borderline case could there be? The difficulty of even imagining ARITHMETIC’s falsity is itself some evidence that ARITHMETIC is true. The informal inference rule being used here is: if D is some state of affairs, from the inconceivability of D’s obtaining, we can conclude that D doesn’t, in fact, obtain:

(Incon)
    D is inconceivable
    ∴ ¬D

It’s plausible that (Incon) is highly reliable, but it’s almost certainly invalid. There are more things on Heaven and Earth than are dreamt of in our philosophy.4 Still, the difficulty of conceiving of the falsity of ARITHMETIC is some reason to accept its truth, however weak. In addition, as far as we can conceive of the falsity of ARITHMETIC, its falsity is implausible.5 Combined with its intuitive plausibility, these considerations give us serious (but perhaps not independent) reason to accept ARITHMETIC.

(2) COGNITION: human cognition is algorithmic. An algorithm is a recipe for solving a problem using a sequence of small, mindless steps. To say that some process is algorithmic is to say that it results from the execution of an algorithm. The general notion of an algorithm abstracts away from issues of memory, lifespan, and computational speed. This means that a process can be algorithmic without being computationally feasible. Logicians have proposed a number of formalized mathematical analyses

3. Nominalism is endorsed in, e.g., Azzouni (2005), Chihara (1990), and Field (1980).

4. Consider the converse inference, from D to “D is conceivable”: there is no reason to think this is valid, unless our minds were somehow specially designed for universal fact finding. And in many contexts these two inference rules will be interderivable, e.g., if we start with (Incon) we can derive the validity of its converse using conditional proof and a contrapositive rule (¬φ → ¬ψ ⊢ ψ → φ).

5. Some countervailing considerations are adduced in Field (1994).

of this informal notion, e.g., Turing machines, register machines, recursive functions, etc.6 In perhaps the most remarkable convergence in the history of logic, virtually all formal analyses of the notion of an algorithm that have been seriously proposed turn out to agree—a function is computable by a Turing machine just in case it is computable by a register machine just in case it is recursive and so forth and so on. In addition, everything that is intuitively algorithmic is Turing computable, recursive, etc. This alignment of our formal and informal notions gives rise to Church’s Thesis: a function is computable by an algorithm just in case it is computable by a Turing machine just in case it is computable by a register machine just in case it is recursive, etc. Although the thesis links an informal notion (algorithmic computability) to formal notions (e.g., Turing computability, recursiveness) and so isn’t open to a rigorous mathematical demonstration, no serious threat to Church’s thesis has ever been proposed.7

So far we’ve just been discussing numerical functions, but as every modern computer user knows, algorithms can do much more than crunch numbers when hooked up to action-performing input/output machines. In this way, our modern digital computers play movies, process documents, analyze visual data and so on and so forth. This opens up the possibility that anything the human mind can do is done via some algorithmic process. Given Church’s thesis, to say that human cognition is algorithmic is to say that the mind can’t do anything that can’t be done, in principle, by a Turing machine.8 Although the human mind isn’t perfectly understood, one central feature of any scientific approach to the mind is that human cognition results from brain activity; the relevant kind of brain activities consist of neurons firing in various complicated patterns and thereby executing various complicated algorithms.
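To make one of these formal analyses concrete, here is a toy single-tape Turing machine simulator in Python (entirely my own illustrative sketch; the encoding conventions are assumptions, not anything from the text). It runs a two-rule machine computing the successor function on unary numerals:

```python
# A minimal one-tape Turing machine simulator. delta maps
# (state, scanned_symbol) -> (written_symbol, move, next_state).
def run_tm(delta, tape, state="q0", halt="qH", steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        sym = tape.get(head, "_")
        new_sym, move, state = delta[(state, sym)]
        tape[head] = new_sym
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Successor on unary numerals: scan right past the 1s, append one more 1.
succ = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "qH"),
}
print(run_tm(succ, "111"))  # prints "1111": the successor of 3 in unary
```

The same function is trivially register-machine computable and primitive recursive; the convergence behind Church’s Thesis is that this agreement holds for every function any seriously proposed formalism computes.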
The idea that minds either are or result from the actions of brains lends strong support to COGNITION. While the consensus here is overwhelming, it is not universal. Most notoriously, J.R. Lucas and Roger Penrose have argued from, in essence, ARITHMETIC and FAILURE to the rejection of COGNITION.9 Other philosophers have rejected COGNITION for more general reasons: Hubert Dreyfus and John Searle seem to reject COGNITION in their criticisms of artificial intelligence; Paul Boghossian has recently flirted with rejecting COGNITION to account for linguistic rule-following; and B.J. Copeland has explored the idea that the analog nature of neurons puts the brain beyond the computational powers of any digital, discrete computer—including Turing Machines.10 I won’t

6. See any computability theory textbook for details, e.g., Cutland (1980) and Odifreddi (1989).

7. Although Smith (2013), chapter 45 discusses attempts to establish the thesis using squeezing arguments.

8. Section I.8 of Odifreddi (1989) contains a nice general discussion of Church’s thesis.

9. More precisely: Lucas and Penrose have argued from Gödel’s Incompleteness Theorems to the rejection of something like COGNITION; see Lucas (1961) and Penrose (1989) and (1994) for these arguments and Benacerraf (1967) and Shapiro (1998) for important responses.

10. See Dreyfus (1972), Searle (1980), Boghossian (forthcoming), and Copeland and Proudfoot (1999).

discuss any of these views in detail here, but they are all controversial positions that go against the mainstream picture of human cognition endorsed in cognitive science, psychology, neuroscience, artificial intelligence, and elsewhere. Accepting the mainstream scientific view of the human mind, even in broadest outline, involves accepting COGNITION.

(3) FAILURE: no determinate theory of arithmetic is algorithmic. Let’s say that a sentence in the language of arithmetic is determinately true just in case it is true in every admissible interpretation of our arithmetical practice. A determinate theory of arithmetic is then one where every sentence in the language of the theory is either determinately true or its negation is determinately true. Assuming that admissibility is simply a matter of making true every sentence in the theory, then a theory T will be determinate just in case it is negation complete, i.e., just in case for every sentence φ in the language of T, either φ is in T or ¬φ is in T; and say that a theory T is a theory of “arithmetic” just in case it extends the weak arithmetical theory of Robinson arithmetic.11 Against this background, and together with Church’s thesis, celebrated metamathematical results show that an arithmetical theory cannot be both determinate and algorithmic.

Gödel’s first incompleteness theorem (with the strengthening due to Rosser) entails that for any consistent, recursive arithmetical theory A, there is a sentence G_A in the language of the theory such that neither G_A nor ¬G_A is provable in A.12 This means that both the theory A + G_A and the theory A + ¬G_A are consistent and—by a version of the completeness theorem—each of these theories has a model, an interpretation in which it is true. So, assuming Church’s thesis, if the admissibility of an interpretation of arithmetic merely requires that the interpretation vindicate all of the claims and inferences in our theory, no determinate theory is algorithmic.
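The classical argument behind FAILURE can be sketched in code: a consistent, negation complete theory whose theorems are algorithmically enumerable would be decidable, since we could enumerate theorems until either φ or ¬φ turns up. The Gödel–Rosser theorem shows that no such theory of arithmetic exists, so the toy “theory” below is deliberately non-arithmetical: a trivially complete stock of sentences about evenness, my own illustrative stand-in.

```python
from itertools import count

# Toy stand-in for a consistent, negation complete, algorithmically
# enumerable theory: all true sentences "n is even" / "not (n is even)".
def enumerate_theorems():
    for n in count():
        yield f"{n} is even" if n % 2 == 0 else f"not ({n} is even)"

def neg(phi: str) -> str:
    """Negate a sentence of the toy language."""
    return phi[5:-1] if phi.startswith("not (") else f"not ({phi})"

def decide(phi: str) -> bool:
    """Search the theorem stream; completeness guarantees termination,
    since either phi or its negation must eventually be emitted."""
    for theorem in enumerate_theorems():
        if theorem == phi:
            return True   # the theory proves phi
        if theorem == neg(phi):
            return False  # the theory proves not-phi

print(decide("14 is even"), decide("9 is even"))  # prints: True False
```

For arithmetic, this recipe is exactly what incompleteness blocks: any consistent, recursive extension of Robinson arithmetic leaves some φ with neither φ nor ¬φ in the stream, so `decide` would run forever.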
Related results plague any attempts to explain how an algorithmic theory of arithmetic could be determinate. It is simply a mathematical fact that no consistent, negation complete theory extending Robinson arithmetic is recursive. Together with Church’s thesis, this mathematical fact entails that no consistent, negation complete theory extending Robinson arithmetic is algorithmic. And given the definitions of “determinateness” and “arithmetical theory”, FAILURE is a straightforward consequence of Gödel’s theorem together with Church’s thesis.

The Puzzle. The triad of ARITHMETIC, COGNITION, and FAILURE is not inconsistent or straightforwardly incoherent, but it is close. If we add a further principle, we end up with an incoherent quadrad. That principle is:

11. Robinson arithmetic is introduced and defined in Tarski, et al. (1953).

12. Gödel (1931) and Rosser (1936).


THEORY: our conception of arithmetic comes from acceptance of a theory of arithmetic

If we accept THEORY along with ARITHMETIC, COGNITION, and FAILURE, we are mired in inconsistency as follows: THEORY explains our grasp of arithmetical notions in terms of accepting a theory, so if our arithmetical notions are determinate, it must be because we accept a theory of arithmetic A that is determinate. Assuming that A is consistent and arithmetical, then by COGNITION and Church’s thesis, A must be algorithmic, for if otherwise, human brains could do something that Turing machines could not, and that is ruled out by COGNITION. Thus, A is an algorithmic, consistent, and determinate theory of arithmetic, something that FAILURE assures us does not exist. Reductio.

So the quadrad of ARITHMETIC, COGNITION, FAILURE, and THEORY, together with trivial supplementation, is incoherent. Above I’ve argued that the first three of these are all extremely plausible. On the other hand THEORY, while somewhat appealing at first glance, is less secure.

With this background in place, the puzzle of arithmetical determinacy can now be put simply: how is it that ARITHMETIC, COGNITION, and FAILURE all manage to be true together? We have already seen that there is no inconsistency between these three, as long as we don’t also accept THEORY. And given that THEORY is much less plausible than any of the other three, I think THEORY will need to be rejected to solve the puzzle. But merely rejecting THEORY does not suffice for solving the puzzle. While rejecting THEORY allows for us to consistently accept all of ARITHMETIC, COGNITION, and FAILURE, we still have no idea how all of these three can be true together. To explain this, we can’t just reject THEORY and call it a day; we must also provide an alternative account of our arithmetical competence that is consistent with COGNITION and leads to ARITHMETIC without running afoul of FAILURE.
In essence, to solve the puzzle we need a metasemantic or metaconceptual account of arithmetic that vindicates ARITHMETIC while also respecting both uncontroversial mathematical results (FAILURE) and the modern scientific approach to the mind (COGNITION). Only by solving this puzzle—or showing that it cannot be solved—will we have a satisfying philosophical account of basic arithmetic.

2 Semantic Externalism to the Rescue?

THEORY is natural as an account of our mathematical concepts, but theory-based accounts of non-mathematical concepts have fallen out of favor in contemporary philosophy. In the last fifty years, the traditional idea that meaning is fully determined by what is in our heads (i.e., our theories/beliefs) has been overturned. Influential work by Saul Kripke, Hilary Putnam, Tyler Burge, Gareth Evans, and others, has led philosophers of mind and language to allow that meaning is (partly) determined by causal interactions with the world and with other speakers. This general metasemantic approach is called semantic externalism.

Given the popularity of semantic externalism, a natural thought is that the puzzle of arithmetical determinacy might be solved by moving from THEORY to an externalist metasemantics for arithmetic. The problem with this idea is that it doesn’t seem like familiar externalist gambits can be directly applied to give a THEORY-rejecting metasemantics for arithmetic that solves our puzzle. For instance: coordination between speakers seems powerless here—if the arithmetical competence of each speaker of the language is simulated by a Turing machine, then given that there are a finite number of speakers, a single Turing machine can simulate the arithmetical competence of the entire community—and causal factors don’t seem relevant, since numbers and other mathematical objects are paradigmatic non-causal abstract objects.

So standard externalist moves don’t directly apply to arithmetic, but there is one approach to the metasemantics of arithmetic in the literature that, in effect, attempts to use externalist moves indirectly to explain our grasp of determinate mathematical notions. The approach I’m referring to has been developed by Hartry Field in a series of papers.13 To explain Field’s approach: let S be the theory consisting of our mathematical theory, pure and applied, together with our total physical theory, and define a predicate F(Z) of S as follows:

Z is a set of events which (i) has an earliest and a latest member, and (ii) is such that any two members of Z occur at least one second apart

Let’s also make the following cosmological assumptions:

1. Time is infinite in extent (i.e., there is no finite bound on the size of sets satisfying F)

2. Time is Archimedean (i.e., any set satisfying F will have only finitely many members)

In S we will be able to derive the following principle concerning our “finitely many” quantifier, F:

13. Field’s works on this topic are his (1994), (1998a), and (1998b).

7 2 SEMANTIC EXTERNALISM TO THE RESCUE?

(∗)  FxΨ(x) ↔ ∃Y ∃Z ∃f (F(Z) ∧ Y = {x | Ψ(x)} ∧ f maps Y one-one into Z)

(∗) links there being finitely many Ψ’s to there being a one-one mapping from the Ψ’s into some set of events Z satisfying F.14

We have not yet guaranteed the determinacy of “F”, since our background theory S might have non-standard models where this quantifier is given a non-standard interpretation. However, this is avoided if we assume that the physical vocabulary in S (“event”, “one second apart”, “earlier than”, etc.) is determinate to the extent that every admissible interpretation of S assigns physical predicates extensions that contain only items that actually have the corresponding property, e.g., any interpretation I of S where “event” is assigned an extension that includes non-events is inadmissible—let’s call this “Field’s constraint”. If the cosmological assumptions are true, then any non-standard model of S that treats finiteness non-standardly will, by (∗), violate Field’s constraint. So, assuming the cosmological assumptions and Field’s constraint, our notion of finiteness is determinate; and the determinacy of finiteness leads to the determinacy of arithmetic.15

Field is quick to stress that alternative cosmological assumptions would allow the same basic approach to go through, but I don’t think any version of this approach can provide a satisfying resolution of our puzzle. Before arguing for this, it’s worth noting that Field himself is somewhat ambivalent about his proposal, writing:

It might be thought objectionable to use physical hypotheses to secure the determinacy of mathematical concepts like finiteness. I sympathize—it’s just that I don’t know any other way to secure determinacy.16

Field notes that the success of his approach depends upon the actual, not merely possible, truth of the cosmological assumptions; so if Field is right, then ARITHMETIC is true because the cosmological assumptions are true and metasemantics requires that Field’s constraint be met. If the cosmological assumptions were false, we wouldn’t

14. Field is quick to note that (∗) is not a definition of finiteness, i.e., Field is not assuming that we couldn’t discover that time is actually finite in extent.

15. For example, by adding to our standard number-theoretic axioms the claim that each natural number has only finitely-many predecessors. Building the notion of finiteness into our logic or our theories can be done in multiple ways and each of these ways allows for a proof of the categoricity of arithmetic; see Shapiro (1991), chapter 9 for details.

16. Field (1998b), page 342; a similar quote from page 418 of his earlier (1994) is still ambivalent, but slightly less skeptical of his own approach and slightly more skeptical of alternatives: “I am sure that some will feel that making the determinateness of the notion of finite depend upon cosmology is unsatisfactory; perhaps, but I do not see how anything other than cosmology has a chance of making it determinate.”

have a determinate theory of arithmetic. While this might make us uneasy—what a narrow miss!—perhaps we shouldn’t have expected it to be necessary that, in any world where we exist and reason and do arithmetic, our arithmetical notions are determinate, so I don’t think this is a catastrophic problem for Field’s approach.

The key to evaluating Field’s approach is the assumption of Field’s constraint. In commenting on the assumption that only interpretations of our physical theory meeting Field’s constraint are admissible, Charles Parsons writes:

I find it hard to see how someone could accept that assumption who does not already accept some hypothesis that rules out nonstandard models as unintended on mathematical grounds. If our powers of mathematical concept formation are not sufficient...why should our powers of physical concept formation do any better? We are not talking about events in a common-sense context, but rather in the context of a physical theory developed in tandem with sophisticated mathematics.17

Parsons goes on to note that Field must be tacitly assuming that the determinacy of our physical notions is explained, independently of mathematical notions, by some kind of externalist metasemantics. I agree. This is why I’ve claimed that Field’s approach indirectly relies upon semantic externalism, though his discussion doesn’t insist upon this. I take Field to have established (and perhaps this is all he takes himself to have established) that if our physical theory is determinate enough that Field’s constraint is met, and the cosmological assumptions are true, then ARITHMETIC can be explained in a manner that solves our puzzle.

Granting the cosmological assumptions—as surely we must—the success of Field’s proposal depends upon the assumption that the metasemantics of our physical vocabulary requires that Field’s constraint is met. Does it? I don’t see how. It’s very difficult for me to imagine that our event-theory isn’t satisfied by some items that are not events but are highly event-like; perhaps we can call these items “schmevents” or “events*”. When we make alterations in our high-level, sophisticated event-theory, isn’t it natural to think that we have simply changed the subject? To think otherwise seems to be to think that a term like “event” (or some other high-level physical term) is some kind of worldly natural kind, but it’s difficult to see what this could even mean without making dubious metaphysical assumptions. For this reason, I don’t think there is any plausible metasemantic story that leads to Field’s constraint.

Field himself is sensitive to this type of worry, and notes that weaker determinacy assumptions concerning our physical vocabulary would still allow for a version of his

17. Parsons (2001), page 22.

account to go through.18 In particular, it seems that Field’s approach could work if we could, in some manner, introduce a physical predicate into our total theory that had only ω-sequences in its extension. But in order for Field’s approach to provide a satisfying resolution of the puzzle, a metasemantic account of how we manage to do this is required. For my part, like Parsons, I find it difficult to imagine any plausible explanation that doesn’t help itself to arithmetical determinacy. N.B., the issue isn’t that assuming arithmetical determinacy is required for working with arithmetical notions,19 but rather that if the explanation of our arithmetical determinacy comes in terms of our physical determinacy, then the explanation of our physical determinacy can’t come in terms of our arithmetical determinacy; such a circular explanation would be no explanation at all.

For Field’s approach to work, it isn’t enough that there simply be some worldly ω-sequence of events or spacetime regions or whatnot; we also need to be able to, in effect, uniquely single out some worldly ω-sequence. And I don’t see a plausible way to use familiar externalist appeals to causal access to do this. And if we can’t somehow offload the work of fixing the extension of “event” to the world, using causation, then our task reduces to a notational variant of the arithmetical problem with which we started. For if we could introduce a mathematical predicate that had only true ω-sequences in its extension, we would likewise be able to account for ARITHMETIC. Absent a satisfying metasemantic story of our “powers of physical concept formation”, as Parsons says, I don’t find Field’s ingenious approach satisfying.
The lessons of semantic externalism will once again be relevant in section 5 below, but for now let’s move on to discuss another popular approach to rejecting THEORY and solving our puzzle: the idea that our arithmetical commitments are, in a sense to be explained, open-ended.

3 Open-Endedness

What is it for a principle or inference rule to be open-ended? An open-ended rule continues to apply as the language expands via the introduction of new vocabulary. To illustrate, consider the inference rule of disjunctive syllogism:

(DS)
    ¬φ    φ ∨ ψ
    ∴ ψ

18. For example, on page 340 of his (1998b).

19. As Field himself points out, e.g., on page 342 of his (1998b).


The standard textbook explanation of a schematic rule of inference like this explains that the schematic letters “φ” and “ψ” have as substitution instances every sentence in the specified formal language in question. But according to the textbook understanding, if we are working in a language L and we move to L+ by adding some new vocabulary to L, there is nothing in our previous explanation that guarantees that (DS) continues to be valid in L+.

This makes the standard understanding of schemas unsatisfying as a model of natural language linguistic rule-following: following the natural language analog of an inference rule like (DS) is not just a matter of accepting every instance of the rule that can be formulated in your present language; (DS) also commits you to continuing to accept new instances of the rule as your language expands. Here is Timothy Williamson on open-endedness:

. . . my commitment to disjunctive syllogism is not exhausted by my commitment to its instances in my current language; when I learn a new word, I am not faced with an open question concerning whether to apply disjunctive syllogism to sentences in which it occurs. Indeed, open-ended commitment may well be the default sort of commitment: one’s commitment is open-ended unless one does something special to restrict it.20

Despite the standard formal account found in logic textbooks, the open-endedness of our inference rules has long been recognized and accepted (cf. Bertrand Russell’s distinction between claiming that everything satisfies a condition and claiming that anything whatsoever satisfies the condition).21 The philosophical importance of open-endedness has recently made something of a comeback in contemporary philosophy: J.H. Harris used the open-endedness of our logical rules to prove equivalence results concerning different logics and Shaughan Lavine, Vann McGee, and Timothy Williamson have used the open-endedness of our logical rules to undermine Skolem-style indeterminacy worries about absolutely unrestricted quantification.22 The idea that the schematic principles used in mathematics are likewise open-ended has also met with widespread acceptance—Solomon Feferman, Hartry Field, Shaughan Lavine, Vann McGee, Charles Parsons and others have explicitly endorsed this claim.23 Consider the principle of mathematical induction formulated, as is standard, as a first-order schema:24

20. Williamson (2006), page 377.

21. See Lavine (1994) for a discussion of these historical antecedents; and Russell (1908) for Russell’s fullest discussion of the matter.

22. See Harris (1982), Lavine (1994) and (unpublished), McGee (2000) and (2006), and Williamson (2006).

23. See Feferman (1991), Field (2001), Lavine (unpublished), McGee (1997), and Parsons (2001).

(Induction)  (φ(0) ∧ ∀x(φ(x) → φ(s(x)))) → φ(b)

Once again the standard, textbook understanding of the schematic letters differs from the open-ended understanding. Shoenfield’s influential graduate textbook Mathematical Logic explains the first-order induction schema as follows (where L(N) is our first-order arithmetical language):

One statement of the induction axiom is that if a set contains 0 and contains the successor of every natural number in the set, then it contains every natural number. We cannot express this form in L(N), since we have no variables which vary through sets of natural numbers. Another form of the induction axiom states that if 0 has some property, and if the successor of every natural number having this property also has this property, then every natural number has this property. We cannot fully express this either, since we have no variables which vary through properties of natural numbers. However, we can express it for each property of natural numbers which can be expressed in L(N).25

If the textbook understanding of induction applied to our actual arithmetical language, we would be committed to induction only for arithmetical predicates formulable in our current language. This means that if we added some new arithmetical predicate F to our language, it would be an open question whether or not induction held for F. This shows that the standard, non-open-ended understanding of schemas is too weak to represent the actual commitments involved in accepting a principle like (Induction)—someone who hesitated to apply induction to a newly introduced arithmetical predicate would be betraying conceptual incompetence. The proper way to understand (Induction), as a model of our actual arithmetical practice, is open-endedly. Again, this point is not new to the literature: here’s Vann McGee on induction:

Whenever we adopt and name a stray dog or we introduce a new scientific theory with concomitant vocabulary, we expand the language. Fundamental principles of inference, like the principle of mathematical induction, are upheld even with the enlarged vocabulary. We don’t have to reassess the validity of mathematical induction when we expand our inventory of

24If we’re working in a non-arithmetical background theory, we’ll want to formulate induction using a number predicate “N”, but I’ll ignore this here. 25Shoenfield (1967), page 204.


theoretical concepts, for our current understanding of the natural numbers ensures that, no matter how we expand our language in the future, induction axioms formulated with the enlarged vocabulary will still be true. That's one reason for the remarkable stability of mathematics; scientific paradigms may come and go, but the principles of arithmetic, pure and applied, remain the same.26

Arithmetical principles and logical principles—at least—involve open-ended commitments that aren't perfectly modeled by our fixed-language formal theories. If our arithmetical competence involves an open-ended commitment to (Induction), then this competence goes beyond any given formal theory of arithmetic we could learn. So another way to reject THEORY is to see our arithmetical competence as rooted not in acceptance of any particular theory in a fixed language, but rather in acceptance of open-ended principles like (Induction). To see if this idea could possibly help in solving the puzzle of arithmetical determinacy, we need to say more about how to understand open-ended commitments.

Standardly, "accepting" a schema is a matter of being disposed to accept each instance of the schema in your current language (where, as usual, the dispositions might be masked or blocked in particular cases). Open-ended acceptance of a schema goes beyond this to include a disposition to continue accepting instances of the schema as your language expands. There are a number of obstacles to understanding here. One is that some "expansions" of our language result in insuperable problems, e.g., inconsistency. We can also imagine weird "expansions" where we add predicates or terms to our language along with strange rules, such as that induction doesn't apply for this new predicate or that this predicate systematically alters the meanings of other expressions in sentences in which it occurs. For reasons like this, proponents of open-endedness have sometimes specified that expansions are only "possible" in the relevant sense if they involve only "well-defined" expressions.27 Whatever else the restriction to "well-defined" predicates is meant to do, it is meant to rule out vague predicates.
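The core of schema acceptance, on which a schema is a recipe for producing sentences rather than a fixed list of axioms, can be illustrated with a toy sketch. The function name and formula encoding here are my own inventions for illustration, not anything from the literature:

```python
def induction_instance(phi: str) -> str:
    """Return the first-order induction instance for a unary predicate
    expression phi, written with a free-variable slot '{x}'.

    Toy illustration only: the schema is a rule for producing sentences,
    and nothing in the rule consults a fixed inventory of predicates."""
    ante0 = phi.format(x="0")                                  # phi(0)
    step = f"∀x({phi.format(x='x')} → {phi.format(x='s(x)')})"  # closure step
    concl = f"∀x {phi.format(x='x')}"                           # conclusion
    return f"(({ante0} ∧ {step}) → {concl})"

# An instance for a predicate already in the language...
print(induction_instance("Prime({x})"))
# ...and for a newly introduced predicate F: the procedure applies to it
# automatically, which is the formal shadow of open-ended acceptance.
print(induction_instance("F({x})"))
```

Nothing in the procedure depends on the predicate having been in the original lexicon; the open-ended disposition extends to newly coined vocabulary in just this way.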
Paradigm cases of vague predicates like "bald" and "heap" have no application to arithmetic, but Wang's paradox gives an example of a vague arithmetical predicate ("small"—zero is small, smallness is inherited under successors, so every number is small).28 Generally, we can say that an arithmetical predicate "F" is vague if (i) there is some natural number n such that for any natural number m greater than n, it is determinately true that m is not F; (ii) there is some natural number k

26McGee (2006), page 187.
27This type of approach goes back to Zermelo (1930).
28See Dummett (1975) for Wang's paradox.

such that for any natural number l less than k, it is determinately true that l is F; and (iii) there is some natural number a such that it is indeterminate whether or not a is F. Arguably Wang's paradox shows that induction fails for vague predicates, so we can count a predicate as "well-defined" if it isn't vague. An open-ended commitment to (Induction) involves a commitment to accepting induction for every well-defined predicate in any possible expansion of your current language, but even with vague predicates excluded, because of the issues of inconsistency and weird rules mooted above, we don't have a perfectly clear idea of which expansions of our language should count as "possible" in the relevant sense.

I think that a straightforward metasemantic approach sheds light on this issue. Consider a natural language L and an expansion of L, L+, where L+ includes all of the old vocabulary and rules of use of L but adds to L at least one new piece of terminology, D, together with new inference rules for D. We don't need a precise understanding of natural language inference rules here, but I think it best to think of them as a generalization of standard inference rules. We only want to capture the completely uncontroversial idea that natural languages don't simply consist of a lexicon and a grammar, like formal languages, but also include some kind of further rules of operation, whether they be syntactic, semantic, or pragmatic. We can now say that L+ is a possible expansion of L just in case expressions of L occurring in L+ can successfully be translated homophonically into their syntactically identical counterparts in L, and vice-versa.
This procedure just encodes the obvious point that the expansions we are concerned with are just those that don't alter the meanings of the old terms in our language, say by engendering an inconsistency or by having, in their rules of use, odd clauses that explicitly alter the meanings of old expressions. Simply put: when discussing open-ended commitments, we aren't concerned about what would happen in cases where we alter our linguistic practice so drastically that we are no longer speaking a version of our current language. Those kinds of cases simply aren't relevant; our interest is in our current arithmetical notions.29

In formulating this definition, I have spoken about "homophonic translation" rather than "meaning" for a couple of reasons. The first is that, as is standardly assumed, translation preserves meaning, at least at some fineness of grain. If two terms differ too drastically in meaning, then they cannot appropriately be translated into each other. The second is that even those—like Quine—who are skeptical about meaning

29A truly complete account of open-endedness would need to integrate open-ended commitments into a full metasemantic theory. I do this in my book—Syntactic Shadows: A Linguistic Theory of Logic and Mathematics—where I give an inferentialist metasemantics for logic and mathematics. But inferentialism is not being assumed here; open-ended rules could presumably be integrated into many other approaches to metasemantics as well, though perhaps not as straightforwardly.

in general and synonymy in particular often have no problem appealing to norms of translation in their stead. Indeed, Quine himself gave important arguments concerning the inappropriateness of homophonic translation in certain circumstances in his late work on deviant logics.30 While it is true that Quine and others have argued that the idea of correct translation is indeterminate, most philosophers have rejected Quine's arguments, at least in the broad form in which he gave them. Still, it would be ridiculous to assume that there is always a perfectly determinate fact of the matter about correct translation, but I think we can admit this without undercutting the importance of open-endedness or the purchase we gain upon the notion by appealing to canons of correct translation.

My preferred, metasemantically sensitive, manner of understanding open-endedness contrasts sharply with the approach taken by many proponents of open-endedness in the literature. These philosophers prefer to use a formal, model-theoretic approach to understanding open-endedness.31 In addition, the natural tendency of many logicians and philosophers, when first encountering open-endedness, is to try to understand it in a purely model-theoretic fashion. While I don't object to this procedure as a mere heuristic, I strongly object to it if the formal account is meant to explain open-endedness in a philosophically illuminating fashion. Far from explaining the idea, this approach distorts it by forcing it into the standard, THEORY-based picture. To explain this complaint: imagine that we characterize open-endedness in some theory T. If T itself includes schemas that are understood in an open-ended fashion, then we will have failed in our attempt to explain open-endedness, for we would simply be explaining open-endedness in terms of itself.
For this reason, any schemas in T must be understood in a perfectly straightforward fashion, and the quantifiers in T must be taken to range over determinate totalities, either sets or classes, countenanced by T. This, in essence, amounts to reducing the open-ended understanding of arithmetic to the perfectly standard, THEORY-based understanding of T, but now T faces the puzzle we were trying to solve: either T is learnable (algorithmic) but indeterminate, or determinate but unlearnable. This approach to open-endedness, in any form it might take, simply reduces open-endedness to non-open-endedness and thereby guts the entire idea.

Since this point is absolutely crucial, it's worth making it concrete with a specific example: to think of open-endedness in this manner is simply to think of the well-defined predicates as forming a class that is fixed in advance and ranged over by the quantifiers of our background model theory. To think of an open-ended first-order rule

30See the infamous "change of logic, change of subject" from Quine (1970).
31See, e.g., McGee (2000).

like (Induction) in this fashion is to, in essence, assimilate it to the standard second-order induction axiom:

(SO Induction)  ∀X((X0 ∧ ∀x(Xx → Xs(x))) → ∀x Xx)

The quantifier in (SO Induction) is standardly taken to range over numerical properties in extension or (equivalently) sets of natural numbers. While we can prove categoricity for second-order formal theories of arithmetic (such as second-order Peano arithmetic), second-order number theory faces the puzzle of section 1 in its starkest form. Categoricity only holds if the second-order quantifiers are understood as ranging over all sets of natural numbers, but it is well-known that alternative semantics for second-order logic, for which the categoricity results fail, vindicate all of the axioms and rules of inference of our second-order theory and its logic. This seems to show, at best, that (SO Induction) is indeterminate between its standard and non-standard semantics if we stick with THEORY as our approach to grasp of arithmetical concepts.32 So to understand open-endedness by appeal to a background model theory, in essence, treats first-order Peano arithmetic, understood open-endedly, as equivalent to a sub-theory of full second-order arithmetic—second-order Peano arithmetic's Π¹₁-fragment.33

Critics of open-endedness have likewise objected to model-theoretic attempts to characterize or understand open-endedness, and for good reasons. But these critics miss the crucial point that there is an alternative way to understand open-endedness. As I understand open-endedness, it provides a well-motivated way of rejecting THEORY: the open-endedness of our logical and arithmetical principles is a well-recognized, diachronic aspect of our logical and arithmetical practices. To attempt to understand open-endedness by reducing it to a background formal notion is to try to fit this diachronic process into a synchronic mold—something crucial is lost in the fitting.

Open-ended commitments add another tool that we can use to rule out deviant interpretations of our practice.
Say that we are committed to each instance of a schema S in our current language and that we are precommitted to each instance of S in any possible expansion of our language. The approach of THEORY, the standard one in metasemantics, demands only that any interpretation of our arithmetical language A respect all of our commitments by making them all come out true. To this, the proponent of open-endedness can add the demand that any admissible interpretation of

32I think that this complaint about second-order categoricity was first clearly made in the literature in Weston (1976).
33The Π¹₁-fragment of second-order logic consists of the set of all sentences φ in the language of second-order logic that are equivalent to sentences of the form ⌜∀X₁ . . . ∀Xₙ ψ⌝ where ψ doesn't contain any second-order quantifiers. Field (2001) also makes this point concerning open-endedness as standardly understood, but Field, unlike me, seems to think that this understanding of open-endedness is the only one available.

our practice respects not only our commitments, but also our precommitments. Let's say that an interpretation I of our arithmetical language is "admissible" if and only if it respects both all of our commitments and all of our precommitments.

So far I haven't said anything worthy of controversy: everybody should admit that admissible interpretations of arithmetic must respect both our commitments and our precommitments. But admitting this doesn't amount to admitting that this new metasemantic demand is enough to solve our puzzle. To get to that point, the proponent of open-endedness must give an argument showing how our precommitments rule out non-standard interpretations of our practice. The literature has seen several important attempts to explain arithmetical determinacy using open-endedness. Unfortunately, though, these attempts are seriously flawed: section 4 criticizes them before section 5 develops my own argument from open-endedness to determinacy.

4 How Not to Use Open-Endedness

Several philosophers have seen open-endedness as the key to explaining the determinacy of arithmetic, including Shaughan Lavine, Vann McGee, and Charles Parsons.34 But the road from open-endedness to determinacy is not straightforward, and I think that the two strategies that have been pursued in the literature are both seriously flawed. This section discusses both of these strategies and explains why I find them unsatisfying.

(i) The True Number Predicate. Vann McGee's discussion of open-endedness and categoricity suggests the following simple argument for determinacy (Hartry Field has also noted that several distinguished logicians endorsed this type of argument to him in conversation):35 imagine that our arithmetical competence is rationally reconstructed by Peano arithmetic with the induction schema understood in an open-ended manner; call this "theory" PA+ while keeping in mind that it isn't literally a "theory" in the standard sense. Here's a simple proof that PA+ is determinate: assume that in some expansion of our language there is a predicate, call it "M", that has as its extension the true natural numbers. PA+ allows induction on this predicate and thus allows us to prove that our numbers are isomorphic to the true, M-numbers; ergo, in our current language, any nonstandard interpretation is ruled out by our precommitment to acceptance of induction on the One True Number Predicate "M".

This argument is too good to be true—the problem is that it simply helps itself to a future determinate grasp of natural numbers and this, in addition to being objectionable

34Lavine (unpublished); McGee (1997); Parsons (2001).
35McGee (1997) and Field (2001).

in and of itself, makes the open-endedness of induction inessential. Hartry Field makes these points in amusing and devastating fashion:

Of course, it's true that if I could add a predicate that by some magic has as its determinate extension the genuine natural numbers, then I will be in a position to have determinately singled out the genuine natural numbers. That's a tautology, and has nothing to do with whether I extend the induction schema to this magical predicate. But if you think that we might someday have such magic at our disposal, you might as well think that we have the magic at our disposal now; and again, it won't depend on schematic induction. So the only possible relevance of schematic induction is to allow you to carry postulated future magic over to the present; and future magic is no less mysterious than present magic.36

For these reasons, this argument, considered in its bare form, is a cheat. In general, any argument using open-endedness to explain determinacy can't simply postulate without explanation that in some expansion of our language, there is a predicate that determinately singles out the natural numbers. That is cheating because it simply assumes we could possibly do what we want to explain how we could possibly do. To solve the puzzle of arithmetical determinacy, we need to explain how our open-ended arithmetical practice manages to rule out all non-standard interpretations, and this strategy provides no such explanation.

(ii) Equivalence Proofs. A more popular approach—pursued by Shaughan Lavine and Charles Parsons—is to mimic a version of Dedekind's categoricity theorem for second-order arithmetic in open-ended arithmetic.37 Let me explain this approach in detail before explaining what I think is a fatal objection to it.

Imagine that we have two theorists, Alpha and Beta, each of whom speaks a different language (α and β respectively) and each of whom accepts the full open-ended first-order theory of arithmetic in their home language. Imagine now that Alpha and Beta want to know how to translate the arithmetical language of the other. Let Nα and Nβ be their respective number predicates (and similarly for their respective zero constants and successor functions) and let τ be a translation from Beta's language into Alpha's language and let η be a translation from Alpha's language into Beta's language. Without loss of generality, let's take Alpha's perspective to make things easier to follow.

Alpha wants to prove that his number theory is equivalent to Beta's number theory; his strategy for doing so is to replicate Dedekind's categoricity proof for second-order

36Field (2001), pages 355-356.
37Dedekind's categoricity theorem was first proved in his (1888).

arithmetic. To start, Alpha defines a function f:

Definition. f : Nα → Nβ: (i) f(0α) = 0β; (ii) f(sα(x)) = sβ(f(x))

To complete the proof, Alpha must show that f is both one-one and onto Beta's numbers. By using the open-endedness of arithmetic in both α and β, Alpha can proceed as follows:

Lemma. f is one-one

Proof. Assume otherwise, so that there is some smallest Nα, n, such that for some smaller Nα, m, f(n) = f(m). By definition, n = sα(p) for some p, so f(n) = sβ(f(p)). We have two cases to consider: (i) m = 0α and (ii) m ≠ 0α. If (i), then by the definition of f, f(m) = 0β, and so, since f(m) = f(n), sβ(f(p)) = 0β, which violates one of Beta's Peano axioms (the one saying that 0β isn't a successor). If (ii), then m = sα(q), so f(m) = sβ(f(q)) by the definition of f; so f(m) = f(n) gives sβ(f(p)) = sβ(f(q)), so by another of Beta's Peano axioms (the one saying that sβ is one-one) we have f(p) = f(q). But this means there is a pair p, q with p smaller than n such that f(p) = f(q), contradicting the minimality of n.

Lemma. f is onto

Proof. Assume otherwise and let n be the smallest Nβ that isn't in the range of f. By the definition of f, 0β is in the range of f, and so n ≠ 0β, which means that n = sβ(k). But—by the induction hypothesis—k is in the range of f, i.e., for some Nα, m, k = f(m), and so by the definition of f, n = f(sα(m)), which violates the definition of n and reduces the assumption that f is not onto to absurdity.

Theorem. f is an isomorphism between Nα and Nβ

Proof. Follows immediately from the two lemmas above together with the definition of f .
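Alpha's construction can also be spot-checked computationally. The sketch below is only a finite stand-in for the inductive reasoning above: two toy numeral systems and the map f defined by the two recursion clauses of the Definition. It is not a proof of the lemmas, and all names are mine:

```python
# Two "speakers" with different numeral systems:
# Alpha's numbers: Python ints built from 0 and succ_a(n) = n + 1.
# Beta's numbers: nested tuples built from () and succ_b(t) = (t,).
ZERO_A, ZERO_B = 0, ()
def succ_a(n): return n + 1
def succ_b(t): return (t,)

def f(n):
    """Alpha's map, defined exactly by the Definition's two clauses:
    f(0_a) = 0_b and f(s_a(x)) = s_b(f(x))."""
    return ZERO_B if n == ZERO_A else succ_b(f(n - 1))

# Check the two lemmas on an initial segment (a finite stand-in for the
# least-counterexample arguments in the text):
alphas = [0, 1, 2, 3, 4, 5]
images = [f(n) for n in alphas]
assert len(set(images)) == len(images)        # one-one on this segment
betas = [(), ((),), (((),),)]                 # Beta's first three numbers
assert all(b in images for b in betas)        # onto this segment
```

Of course, a finite check like this settles nothing philosophical; the whole question below is what the full inductive argument does and doesn't establish.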

I'm going to call a categoricity proof of this kind, one that doesn't explicitly talk in terms of models or interpretations but instead shows that analogous terms are equivalent, an equivalence proof. Can an equivalence proof like this be used to establish the determinacy of arithmetic?


Before we can answer, we must note that there are really two different versions of this equivalence result: an intra-language version and an inter-language version.38 In the intra-language version, we can assume that Alpha and Beta are working in the same background language. In fact, for this type of case, it's easiest to imagine that Alpha and Beta are the very same person. Let's call him "Alpha". So Alpha has two open-ended arithmetical theories, and thus a notational variant of the above proof could be used by Alpha to prove, to himself, that his two theories of arithmetic are notational variants of each other. Nobody disputes the validity of the above proof in the intra-language case. Clearly, though, nothing about this result is relevant to determinacy: the intra-language equivalence proof merely shows that Alpha must regard each of his number predicates as equivalent to the other.

The inter-language situation is more subtle. Again let's suppose that Alpha and Beta speak different arithmetical languages, α and β. In this case, the above proof is supposed to establish that the number predicates in α and β are equivalent and must be regarded as so by both Alpha and Beta. And since there was nothing special about α and β as languages, we can generalize the result to all languages including open-ended arithmetic and meeting certain very minimal background conditions. So if the inter-language equivalence proof goes through, it shows that any two versions of open-ended arithmetic, in whatever languages, are equivalent to each other. There are two questions about this: Does the inter-language proof go through? Does the equivalence result establish the determinacy of open-ended arithmetic? I think the answers to these questions are "yes" and "no", respectively.

Hartry Field has questioned the validity of the onto portion of the proof of equivalence in the inter-language case.
The issue is that the use of induction in the proof that f was onto, in Alpha's language, involved Beta's open-ended induction axiom, and Field thinks that this is question-begging, since allowing Alpha to use Beta's induction seems tantamount to assuming that Alpha and Beta can homophonically translate between their arithmetical fragments, and this, in effect, is the very thing we were trying to establish. Charles Parsons, one of the most careful advocates of the equivalence theorem approach to determinacy, has admitted the justice of Field's criticism and offered a reply. I don't think Parsons's reply succeeds in rescuing the proof, but I think a successful reply is available. Since this point is somewhat involved and distracts from the main thrust of the paper, I've relegated it to an appendix (see appendix I). Here it suffices to say that I think the inter-language equivalence proof goes through.

38This distinction applies generally to equivalence proofs of this kind and often has philosophical importance; e.g., see my (2014a) for use of this distinction to show that the use of Harris (1982)'s equivalence results to refute quantifier variance in metaontology fails.


However, even with this result in hand, we have not vindicated ARITHMETIC. The problem here has also been pointed out by Field: in general, establishing that F and G are equivalent and must be regarded as so by both F-speakers and G-speakers does absolutely nothing to establish determinacy, since F and G could both be indeterminate in precisely the same way. Field provides a nice example to illustrate this crucial point: in a classical language, the predicate "bald(x)" will be provably equivalent to the predicate "¬¬bald(x)", but "bald(x)" is a paradigm case of an indeterminate predicate (i.e., a predicate without a determinate extension) and the predicate "¬¬bald(x)" clearly just inherits the indeterminacy of "bald(x)".39 All that equivalence results of this kind can show is that the two notions proved equivalent are semantic duplicates, but this can happen even when both of the notions are indeterminate. So even with an uncontested inter-language equivalence proof in hand, we wouldn't have shown that arithmetic is determinate. Because of this, this prima facie promising strategy is a dead end. The proper strategy for establishing determinacy can't simply work with quasi-syntactic equivalence results, but instead must deal with the possibility of non-standard interpretations directly and show how and why they are ruled out by our practice. The tightrope we must walk in arguing for this involves not succumbing to the temptation to cheat, however subtly.

5 An Argument for Arithmetical Determinacy

The first problematic use of open-endedness to argue for determinacy failed by begging the question; the second failed by establishing a result irrelevant to determinacy. How can we succeed? We have to argue, either directly or by reductio, that our arithmetical language has no non-standard admissible interpretations when precommitments are recognized as a constraint on admissible interpretations. This section provides a simple argument for this conclusion. At first, it will look like a rabbit is being pulled out of a hat; I will try to dispel this initial impression in my subsequent discussion by justifying each premise and responding to various counterpoints. Here’s the argument:

1. Our open-ended arithmetical practice rules out any deviant interpretation of arithmetic that can, in principle, be communicated to us

2. Any deviant interpretation of arithmetic can, in principle, be communicated to us

39See the appendix to chapter 12 of Field (2001).


3. So: our open-ended arithmetical practice rules out any deviant interpretation of arithmetic (1,2)

4. So: arithmetic is determinate, i.e., our arithmetical practice has no admissible deviant interpretations (3)

(3) and (4) follow from (1) and (2), so the key to establishing this argument is justifying premises (1) and (2) and showing that, when properly understood, they don't beg any questions.

For simplicity I'm going to tacitly assume that our arithmetical practice has admissible interpretations that are standard—interpretations in which the numbers in the interpretation form an ω-sequence. In effect, I'll be assuming that our arithmetic is ω-consistent. The indeterminacy worry is that our arithmetical practice also admits of non-standard interpretations. When I talk about "our numbers" below I'm not making an assumption of determinacy, only talking about claims and beliefs we have about numbers (i.e., those using our number predicate "N"); and similarly when I talk about I-numbers for some interpretation I. As such, a "deviant" interpretation I of our arithmetical practice is one in which our numbers form a proper superset of the I-numbers. With these preliminaries out of the way, let's turn to the task of justifying and defending the argument, starting with premise (1):
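For orientation, it may help to recall the familiar model-theoretic picture of what a non-standard number sequence looks like; this is a standard fact about countable non-standard models of first-order arithmetic, added here for illustration rather than anything asserted in the argument:

```latex
% Order type of the numbers of any countable non-standard model of
% first-order arithmetic:
\omega \;+\; (\omega^{*} + \omega)\cdot\eta
% a standard initial segment of type \omega, followed by densely ordered
% copies of the integers (\omega^{*} + \omega); the "extra" elements after
% the initial \omega-segment are the non-standard numbers.
```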

(1) Our open-ended arithmetical practice rules out any deviant interpretation of arithmetic that can, in principle, be communicated to us

Suppose that there were some deviant (i.e., non-standard) interpretation of our arithmetic I that could be communicated to us. This interpretation will need to have a predicate for its numbers—the "I-numbers"—and will need to claim that our numbers, the N's, include things other than the I-numbers. By assumption I can be communicated to us, so our language can be expanded to include a predicate for the I-numbers. And once we have expanded our language to include this predicate, we will have a new instance of induction telling us that if zero is an I-number, and I-numbers are closed under successors, then every number is an I-number. But this allows us to rule out the idea that our numbers are a superset of the I-numbers—as is required for I to interpret our arithmetic as deviant. And since no special assumptions have been made about I beyond its being (i) an interpretation of our arithmetic that (ii) interprets our practice as non-standard and (iii) can be communicated to us, we have shown that any interpretation meeting these conditions is ruled out by our open-ended arithmetical practice, which is exactly what (1) says.
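The induction instance this reasoning appeals to can be displayed explicitly; the formulation is mine, and, as in footnote 24, I suppress the relativizing number predicate "N":

```latex
% The new instance of (Induction) made available once a predicate I for
% the I-numbers enters the expanded language:
(I(0) \wedge \forall x\,(I(x) \rightarrow I(s(x))))
  \rightarrow \forall x\, I(x)
% The antecedent holds on any interpretation of our arithmetic, so the
% instance yields that every one of our numbers is an I-number, which is
% incompatible with our numbers properly including the I-numbers.
```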


This is fine reasoning as far as it goes, but just how far does it go? The central pressure point concerns the quantification over all interpretations of our language in premise (1). A critic may worry that accepting this premise is tantamount to assuming that we have a determinate grasp of "all interpretations" of our arithmetical language and that, in this context, this assumption is question begging. The charge is serious, but I think it can be answered in a satisfying manner.

The key to seeing this is to realize that the quantifier in this premise is best understood as itself being open-ended. What this means is that in endorsing premise (1), we aren't assuming that we have some fixed totality of "arithmetical interpretations" that we are quantifying over, in the sense that grasp of this totality explains our grasp of the quantification in (1). Instead, we are simply assuming that the very idea of a deviant arithmetical interpretation that can be communicated to us but not ruled out is incoherent. This incoherence is what open-endedness is needed to explain. This might sound odd at first glance, but nothing spooky or mysterious is going on here. When we say that no squares are round, we don't need to assume that we somehow grasp some ghostly domain of all possible squares. Still less do we need to assume that grasping this ghostly domain explains our understanding of the quantificational sentence "no squares are round". Such an assumption would be metasemantically powerless as well as metaphysically mysterious. Instead, in claiming that no squares are round, we are merely claiming that the very idea of a round square is incoherent. Whatever may be presented to us, whatever objects we may encounter, we can be sure that it is not a round square. This assurance comes not from having performed some quasi-magical inspection of all possible objects in advance, but merely from an open-ended commitment to the quantified claim.
And while the claim involved in premise (1) is more complicated than the round-square claim, it is directly analogous, conceptually speaking. In short, the quantification involved in premise (1) isn't problematic or question-begging.40

When understood correctly, the content of premise (1) is virtually what was agreed upon in the discussion of open-endedness in section 3 above: if some deviant interpretation can be communicated to us, this means, by definition, that we can introduce a predicate for this interpretation, which, by open-endedness, allows us to extend induction to this predicate and in this manner rule it out. Thus, even before this interpretation is communicated to us, it is ruled out as an admissible interpretation of our practice, since it doesn't respect our precommitments. Similar worries could also be raised against premise (2), but such worries would be answerable in exactly the same manner, so I'll focus on other criticisms.

40An analogous response could be given to a similar worry concerning the notion of “deviance”.


(2) Any deviant interpretation of arithmetic can, in principle, be communicated to us

At first glance, this premise seems to attribute to us a kind of semantic omniscience. Perhaps for this reason, some defenders of open-endedness have stopped short of endorsing it. Witness Charles Parsons, after taking himself to have established inter-language equivalence:

...this does not protect the language of arithmetic from an interpretation completely from outside, that takes quantifiers over numbers as ranging over a non-standard model. One might imagine a God who constructs such an interpretation, and with whom dialogue is impossible. But so far the interpretation is in the Kantian phrase "nothing to us." If we came to understand it (which would be an essential extension of our own linguistic resources) we would recognize it as unintended, since we would have formulated a predicate for which, on the interpretation, induction fails.41

Parsons is content to rest with the conditional that if a deviant interpretation can be communicated to us, then we can rule it out (this is what my premise (1) says) while also allowing the possibility of deviant interpretations that can't be communicated to us but dismissing the importance of such interpretations. Yet I don't see how such an interpretation would be "nothing to us", since if it is not ruled out by our practice, then our arithmetical notions would be indeterminate and ARITHMETIC would be false.

Let's try to make sense of an in principle incommunicable deviant interpretation of our arithmetic by following Parsons's lead and considering God's arithmetical language. Assume that God's expressive resources essentially go beyond our own in any manner you desire. Perhaps God's language is uncountable? Perhaps it includes infinitary connectives? It doesn't matter. It doesn't even matter if there are expressive resources in God's language that are undreamt of in our philosophy, or so I will argue.
By assumption, God interprets our language non-standardly so that our numbers include things that aren't God-numbers. According to premise (1), if God could communicate this fact to us, we'd be able to rule out the non-standard interpretation, and so our pre-commitments would rule it out even before it was communicated to us. So let's assume the contrary. This means that God's non-standard interpretation of our practice must be in principle incommunicable to us. Crucially, this means that we can have no predicate for God's numbers in any possible expansion of our language. Right from the start, this is an extremely odd supposition given what we know about metasemantics; the idea that we could not have a single, lone predicate whose extension is the God-numbers is puzzling. What on Earth—or Heaven—could explain this?

41 Parsons (2001), pages 19-20.


A natural thought is to explain this in principle incommunicability using whatever complex resources God's language contains that are beyond our meager cognitive powers. For example, perhaps God's language is infinitary, and our finitude is what prevents us from understanding God's arithmetical theory. There is nothing ad hoc about this response, but it confuses the question we were trying to answer with a different question, only tangentially related. We were trying to explain why we could not have a predicate in our language whose extension consisted of all and only the God-numbers; in answer, we've been told that we cannot learn God's language because it includes resources that go beyond our cognitive powers. But this simply isn't responsive to the challenge. We aren't trying to learn God's language; we're trying to add a predicate—call it “NG”—to our language that has as its extension God's numbers. If adding such a predicate required us to learn infinitary or non-algorithmic notions, then we'd have a problem, but that isn't obviously required. In fact, one of the most widely agreed upon discoveries of modern metasemantics is the externalist point that sometimes a predicate or name only means what it does in our mouths because we systematically defer to others in their use of the predicate or name. This central idea was called—by Hilary Putnam, one of its discoverers—the division of linguistic labor:

. . . everyone to whom gold is important for any reason has to acquire the word 'gold'; but he does not have to acquire the method of recognizing if something is or is not gold. He can rely on a special subclass of speakers. . . . In case of doubt, other speakers would rely on the judgement of these 'expert' speakers. Thus the way of recognizing possessed by these 'expert' speakers is also, through them, possessed by the collective linguistic body, even though it is not possessed by each individual member of the body. . . 42

The cases that Putnam and others discussed concerned several speakers all speaking the same natural language, but this isn't essential. I could count some stuff as an example of “water” just in case the world's leading Spanish-speaking water authority would treat it as an example of “agua”; all that matters is that my deference allows a name or predicate in my mouth to mean what it does in someone else's mouth. In fact, even less than that is required, since we only need the relevant expressions to have the same extension, not necessarily the same meaning.43

42 Quoted from Putnam (1975).

43 The difference between these is sometimes collapsed because of the popularity of Millian or direct reference theories of names and predicates, where the meaning of a term simply is its extension. But those who reject direct reference theories can still use the division of linguistic labor to explain how extensions can be shared by users of an expression F even when some speakers have no way of determining F's extension.

Let's apply this idea to the God example: since God is interpreting our arithmetic deviantly, God can understand our arithmetic. We can then get the right extension of the predicate “NG” in our language by simply deferring to God's usage. God can, in a sense, simply add a predicate to our language that has as its extension his natural numbers. If God introduced the predicate to us and intended it to have this extension and we deferred to God in all matters concerning its extension, this would amount to our successfully adding a predicate with the same extension to our language.44 For these familiar externalist reasons, it simply doesn't matter what kind of complicated resources God himself uses to fix the reference of his number predicate; we can piggyback upon these resources without ourselves possessing them simply by deferring to God. Some readers might be worried that I've rigged the game by using an example involving God rather than beings without god-like powers. The worry proceeds: of course God understands our arithmetic, but maybe Martian mathematicians do not, so we cannot assume that they can simply “give” us a predicate, for example. This seems worrisome at first, but this response misses the key point that whoever we're considering—God, The Devil, the Gauss-led heavenly council of mathematical greats, Martians,...—must understand our arithmetic, since they are giving a deviant interpretation of it. Is it problematic to assume that since they can understand our language, they can speak it (or, more appropriately, that they could “use it”, even if that use didn't literally involve speaking)? I don't see why it would be, and so I don't see why a bilingual Martian (or whatever) couldn't communicate to us that Martianese (or whatever) has “numbers” (M-numbers) of which our numbers are a superset.

And given this, we could introduce a new predicate into our language with the same extension as the Martian number predicate using the familiar division of linguistic labor. If an objector tries to argue that there could be a community that interpreted our arithmetic deviantly without understanding our arithmetic, then I simply don't know what that could mean. Most importantly: it doesn't really matter if the deviant interpreters can speak our language or not, for we could simply introduce a predicate for the Martian numbers

(the “M-numbers”) by description as follows: let “NM” be a predicate for the natural numbers of Martian arithmetic. And once we do that, we can apply induction to this new predicate and show that our numbers are isomorphic to the M-numbers. It's arguable that this process of descriptive reference-fixing would work all by itself without

44 The meaning determining role of bilingual speakers' dispositions concerning translations had been noted before, e.g., by Burgess (2005) in response to Kripke (1979)'s famous puzzle about belief.

the points made above about the division of linguistic labor, but I think assuming this would be a little bit hasty. If we introduce a new predicate using the above stipulation, but then use the new predicate in ways incompatible with said stipulation, we would simply have introduced a predicate hoping it had as its extension the M-numbers, while actually failing to do so. This is because meaning, as almost all of us agree, is determined by language use. So a stipulation has no metasemantic power apart from its relation to language use. But as long as we were disposed to defer to the Martians in our determinations concerning the extension of the descriptively introduced predicate “NM”, then “NM” would have the M-numbers as extension even if these dispositions were never realized. So, again, I think that uncontroversial ideas in contemporary metasemantics put unbearable pressure on the idea of beings interpreting our arithmetic deviantly but being, in principle, unable to communicate this to us.

Is this enough to justify premise (2)? One potential reason for skepticism concerns the fantastic nature of the situations we've been considering. There likely isn't any God of the kind imagined, nor are there Martians, etc. So what becomes of my arguments when this is admitted? As far as I can see, the arguments stand firm. The discussion above never assumed that any of the scenarios discussed were actual, only that they were possible in the broadest sense. Premise (2) is implicitly modal; to establish it we must consider possibilities that may or may not be actual. The discussion above shows that it is not possible for there to be deviant interpretations of our arithmetic that aren't communicable to us. If something is not even possible, then it's clearly not actual either.
Another concern about my defense of premise (2) is ostensibly more serious: all of my examples discussed beings who spoke a language that interpreted our arithmetic deviantly. This was done to embody the deviant interpretations and make them easier to reason about, but what if there are interpretations of arithmetic that can't be grasped by any possible language users? If this is possible, then while my arguments would show that any deviant interpretation of our arithmetic that can be grasped (linguistically or conceptually) can be communicated to us, at least in principle, premise (2) also requires that deviant interpretations that cannot be grasped can be communicated to us. But this is absurd on its face—if interpretation I can't be grasped linguistically, then we can't grasp it linguistically either and so, a fortiori, it cannot be communicated to us. So my arguments fail to establish premise (2), according to this objection. I confess that I'm unable to make good sense of this worry about ineffable indeterminacy. What could it mean to suggest that there is a deviant interpretation I of our arithmetic that no possible being could linguistically or conceptually grasp? How could it be incoherent to simply stipulate that there is a God who speaks a language whose

number predicate has as its extension all and only the I-numbers? The determinacy skeptics need to say something more to explain this mysterious ineffable indeterminacy if it is to seriously trouble us. The rest of the main argument proceeds smoothly now that the two crucial premises have been defended.

(3) So: we can rule out any deviant interpretation

This trivially follows from premises (1) and (2) of the main argument.

(4) So: arithmetic is determinate

This step follows from the just derived premise (3) by definition. We have established that arithmetic is determinate, crucially using the open-endedness of mathematical induction to justify premise (1) and using metasemantic and anti-metaphysical argumentation to justify premise (2). The next and final section of the paper steps back to show how this approach results in a solution to the puzzle of arithmetical determinacy with which we started.

6 Solving the Puzzle

The puzzle of arithmetical determinacy was to explain how ARITHMETIC, COGNITION, and FAILURE could all be true together. A necessary step was rejecting THEORY, but this alone didn't provide any explanation. Together the discussions of sections 2 through 5 provide a solution. By rejecting THEORY in favor of a picture of the metasemantics of arithmetic that allows a role for externalist constraints and the open-endedness of arithmetical laws, the argument of section 5 shows that our arithmetical language has no admissible non-standard interpretations. And if there are no non-standard admissible interpretations of our arithmetical language, then ARITHMETIC is true (though see appendix II for discussion of a potential technical challenge to this reasoning). Obviously, this argument and the basic metasemantic picture it employs in no way contradict FAILURE. In addition, the picture does not contradict COGNITION, since no theory of arithmetic that we can learn is required to be non-algorithmic. So an open-ended, externalist account of our powers of mathematical concept formation solves the puzzle of arithmetical determinacy by showing how all of ARITHMETIC, COGNITION, and FAILURE can be true together. The only remaining plausible challenge that I can imagine involves the worry that, contrary to my claims, accepting open-ended arithmetical principles must violate COGNITION if the argument of section 5 is to go through. We can imagine an objector arguing as follows:

The argument of section 5 shows that by incorporating all possible expansions of our arithmetical language into our current arithmetical competence via open-endedness, we thereby attain a determinate grasp of arithmetic. Let's collect up all possible expansions of our arithmetical language into a super-theory, call it ASuper. Now ASuper cannot be algorithmic, because if it were, then it would admit of non-standard interpretations and section 5's argument would fail. So the friend of open-endedness must claim that ASuper is non-algorithmic. But then our current arithmetical practice amounts to a tacit grasp of ASuper and so, since this theory is non-algorithmic, COGNITION is false.

This objection is convincing at first glance, but the move of collecting up all possible expansions is yet another attempt to fit open-endedness into a standard, THEORY-based mold. The open-endedness theorist is not claiming that there is some Peirce-style arithmetical “end of inquiry” where the ultimate supertheory is to be found.45 Friends of open-endedness will claim that we could have a predicate in our language for any set of natural numbers, but to read this as committing them to the supertheory view is akin to a scope confusion. Let “SetNat(x)” be a predicate meaning x is a set of natural numbers and “PredExt(x)” a predicate meaning there is a predicate in our language whose extension is x, and consider the following interpretation of this claim:

◇∀S(SetNat(S) → PredExt(S))

If the claim was true on this reading, then friends of open-endedness would be claiming that we could learn a language containing predicates for every set of natural numbers. But since there are uncountably many sets of naturals, such a language would be uncountable and thus unlearnable by humans. So this reading is problematic, but happily, the reading endorsed by friends of open-endedness is rather:

∀S(SetNat(S) → ◇PredExt(S))

And this reading only requires that for any particular one of the uncountably many sets of natural numbers, there is some possible expansion of our language in which we

45For end of inquiry approaches to truth and other semantic notions, see Peirce (1878) as well as Putnam (1981).

have a predicate whose extension is that particular set. The truth of this claim doesn't in any way require that we end up with a supertheory that is, of course and of necessity, unlearnable. It only requires that the vast realm of metasemantic methods out there means that no particular set of naturals, however infinite, gerrymandered, and ad hoc, can escape the mere possibility of linguistic encapturement. And this can be true while COGNITION is true, because the vast metasemantic jungle includes not just our theories, but external factors as well. We can consider God or angel-like beings without limitations who can introduce a predicate for the set of naturals S directly, and we can piggyback upon their reference as before. Or we can imagine a random, everlasting natural phenomenon which, let us say, emits pulses at irregular intervals: define a correspondence between the naturals and the progression of seconds, and introduce a predicate “F” so that n is in the extension of “F” just in case the phenomenon emits a pulse at the interval corresponding to n under the defined correspondence. In these and cases like them, we seem to be able to introduce a predicate into our language which has as its extension some infinite and arbitrary (from our point of view) set of natural numbers. So while the supertheory you end up with in the “collecting up” objection is indeed unlearnable, nobody, least of all friends of open-endedness, claimed that it was learnable, much less that it was already learned. What open-endedness shows is not that there is some super-theory that we aspire to, but that the very idea of a non-standard interpretation of our arithmetical practice is incoherent. In essence, the argument of section 5 brought this feature out.
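The scope distinction between the ◇∀ and ∀◇ readings can be made vivid with a toy model. The following sketch is my own illustration, not anything in the paper: “worlds” stand in for possible expansions of our language, each supplying a predicate for just one set drawn from a finite stand-in for the naturals, and the two quantifier orders are checked directly.

```python
# Toy contrast between the two readings (my own illustration, with a
# finite stand-in for the naturals): "worlds" model possible expansions
# of our language, and each expansion captures exactly one set.
from itertools import chain, combinations

naturals = [0, 1, 2]  # finite stand-in for the natural numbers

# All subsets of the stand-in domain (the "sets of naturals").
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(naturals, r) for r in range(len(naturals) + 1))]

# Each possible expansion (world) supplies a predicate for one subset.
worlds = [{s} for s in subsets]

# Forall-diamond reading: for EVERY set, SOME expansion captures it.
forall_diamond = all(any(s in w for w in worlds) for s in subsets)

# Diamond-forall reading: SOME single expansion captures EVERY set.
diamond_forall = any(all(s in w for s in subsets) for w in worlds)

print(forall_diamond)  # True
print(diamond_forall)  # False
```

Here the ∀◇ reading holds while the ◇∀ reading fails, mirroring the point in the text: open-endedness requires only that each set of naturals is possibly captured by some expansion, never that one learnable supertheory captures them all at once.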
It wasn't something that followed automatically from open-endedness, since as we saw, a number of metasemantic fine points were required to move from open-endedness to this result. But with the right metasemantic supplementation in hand, we were able to show how the idea of a non-standard yet admissible interpretation of our open-ended arithmetical practice can be reduced to absurdity. This feature, in miniature, is what makes categoricity proofs for formal arithmetical theories work: when proving categoricity for second-order arithmetic or some other theory, we show that a formal theory of arithmetic T1 is categorical relative to a background theory T2 by using the relationship between sets of naturals in T1 and T2: if T2 can form sets of natural numbers that T1 can't, there is the possibility of non-standardness, not otherwise. An open-ended grasp of arithmetic explains why there is no possible metatheory from which our arithmetic can be shown indeterminate. No formal categoricity theorem involving a theory or super-theory could solve our puzzle, for our puzzle is philosophical, not mathematical. And as a famous methodological sermon


concluded, there is no mathematical substitute for philosophy.46 Once we reject THEORY, adding to the grounds of our conceptual and semantic competence open-ended rules and principles while also taking on board the familiar lessons of semantic externalism, we can see not only how ARITHMETIC can be true against the background of COGNITION and FAILURE, but also that it must be.47

Appendix I: Patching the Equivalence Proof

In section 4 I claimed that Field's criticisms of the inter-language equivalence proof could be answered, but that Parsons's response was problematic. Field objects to Alpha's use of Beta's induction in the onto direction of the proof. Parsons responded by claiming that the standpoint of radical interpretation that we've been assuming isn't the proper way to see linguistic interaction and communication:

...a plausible alternative would be that speakers take each other’s words at face value and respond to them without theorizing about what they mean unless difficulties arise that such reflection could help to resolve. This would be in line with the view, which I have defended in earlier writing, that language as used is prior to semantic reflection on it.48

Parsons applies this general metasemantic perspective to a case like my Alpha/Beta case:

The attribution...of different number predicates was obviously somewhat artificial. Two English speakers would both use the expression 'natural number' or, if no confusion with other number systems threatened, simply 'number'. It's reasonable to assume that it has the form of a one-place predicate. ...Then they will treat each other's predicate 'number' as “meaning the same thing,” at least so long as they do not come to disagree about the principles of arithmetic. Since it has the form of a one-place predicate, without an additional argument place that might possibly be filled by different “number sequences”, it seems that they are just talking of the objects that are natural numbers.49

46 Kripke (1976), page 416.

47 Thanks to Dave Chalmers and Hartry Field.

48 Parsons (2001), page 15; the earlier writing to which Parsons refers is essay 9 of his (1983).

49 Parsons (2001), pages 15-16; I have omitted a brief footnote in the original contrasting numbers to groups and fields (cases where nobody thinks there is a unique-up-to-isomorphism instance).


Thus, Parsons thinks that this perspective on languages will allow for the equivalence proof to go through, since there is—in effect—an assumption of homophonic translation as the default case, and this assumption is allowed to stand if undefeated, which it surely is in this case. Parsons's patch is problematic: even if it worked, it would work only in cases where a homophonic translation was available. That is: it would fail utterly if we were trying to prove that an English speaker's number theory is equivalent to a Russian speaker's number theory. It would likewise go wrong if Beta's language used Polish notation rather than standard logical nomenclature. I think we can all agree that the equivalence proof shouldn't be hostage to idiosyncratic features of particular languages. Parsons could maintain, in response, that there is equivalence in other cases as well but we simply aren't in an epistemic position to discover it, but this is implausible. Either way, Parsons's attempt to fix the argument would only work in a problematically limited number of cases. A more satisfying patch can be found by paying close attention to the crucial issues of translation and metasemantics that are driving both Field's criticisms and Parsons's response. The central idea—one considered by both Field and Parsons—is to allow each of Alpha and Beta to use the facts established by the other, at least under translation. Making the information about translations fully explicit, each theorist can prove the following claims, respectively:

Alpha: Nα embeds one-one into t(Nβ) (Alpha's version of the one-one lemma)

Beta: Nβ embeds one-one into h(Nα) (Beta's version of the one-one lemma)

So, taking Alpha's perspective once again, Alpha can also use his own version of “Beta”, which is:

Alpha-Beta: t(Nβ) embeds one-one into t(h(Nα))

If Alpha could conclude that Nα = t(h(Nα)), then he would be able to complete the proof. In essence, Field's criticism is that this identification is question begging. Making the back-and-forth translation principles explicit, Field has, in effect, rejected the following principle (when introducing a principle without endorsing it, I enclose the name in asterisks):

*Recoverability*: For languages L and K and acceptable translations t from L to K and h from K to L, and for any syntactic item φ in L: h(t(φ)) = φ

Say that an acceptable translation is one that meets all of the metasemantic and interpretive constraints on translations, whatever they may be. This principle, though somewhat appealing at first glance, is too strong. To illustrate, consider two languages

for sentential logic: (i) one with only disjunction and negation (the “or-language”) and (ii) the other with only conjunction and negation (the “and-language”). As is familiar, we can translate between these languages in a seemingly acceptable fashion using the DeMorgan laws in both directions, with negation and atomic sentences translated homophonically. In this case, the translation of “(p ∧ q)” into the or-language is “¬(¬p ∨ ¬q)”, but its translation back into the and-language will be: h(¬(¬p ∨ ¬q)) = ¬¬(¬¬h(p) ∧ ¬¬h(q)) = ¬¬(¬¬p ∧ ¬¬q). This isn't what we started with, so this intuitively acceptable translation violates recoverability, showing that, as Field tacitly suggests, this principle is too strong. But while recoverability is too strong, a weaker constraint on acceptable translations will serve our purposes just as well:

Weak Recoverability: For languages L and K and acceptable translations t from L to K and h from K to L, and for any sentence φ in L: if h(t(φ)) = ψ, then φ and ψ are extensionally equivalent

The extension of a sentence is its truth-value; the extension of a name is the object it refers to; and the extension of a predicate is the collection of objects that the predicate is true of. Weak recoverability merely claims that acceptable translations preserve extensions. This seems totally unobjectionable: one shouldn't translate a true sentence into a false sentence or vice versa; one shouldn't translate two names into each other if they don't refer to the same object; and one shouldn't translate a predicate into another predicate with a distinct extension. All that weak recoverability claims is that the preservation of extensions is a necessary condition on acceptable translations, and it's hard to deny this principle without giving up on the idea of acceptable translations altogether. Field himself may be willing to go this far (indeed he has informed me that he is skeptical of the very idea of sameness of extension across languages), but this is an extreme position. Most of us accept norms of correct translation and the possibility of sameness of extension in different languages, and so most of us will also accept weak recoverability. And weak recoverability is all that Alpha needs to complete the proof, since now Alpha has at his disposal: (i) Nα embeds one-one into t(Nβ); (ii) t(Nβ) embeds one-one into t(h(Nα)); and—by weak recoverability—(iii) t(h(Nα)) embeds one-one into Nα. From these three, it obviously follows, from Alpha's perspective, that there is a one-one, onto mapping between the Nα's and the t(Nβ)'s (Alpha's version of Beta's number predicate). So from the intuitive principle of weak recoverability, we can patch the inter-language equivalence proof, but, as I argued in the main text, this result does not suffice to establish arithmetical determinacy.
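The or-/and-language example can be checked mechanically. The following is a minimal sketch (my own code, not the paper's; the translation functions are named t and h as in this appendix): the round trip h(t(φ)) fails Recoverability syntactically, yet satisfies Weak Recoverability, since φ and its round trip get the same truth-value under every valuation.

```python
# Formulas as nested tuples: ("atom", name) | ("not", f) |
# ("and", f, g) | ("or", f, g). t and h translate via the De Morgan
# laws, with negation and atoms translated homophonically.
from itertools import product

def t(f):  # and-language -> or-language
    if f[0] == "atom": return f
    if f[0] == "not":  return ("not", t(f[1]))
    assert f[0] == "and"
    return ("not", ("or", ("not", t(f[1])), ("not", t(f[2]))))

def h(f):  # or-language -> and-language
    if f[0] == "atom": return f
    if f[0] == "not":  return ("not", h(f[1]))
    assert f[0] == "or"
    return ("not", ("and", ("not", h(f[1])), ("not", h(f[2]))))

def ev(f, v):  # evaluate a formula under a valuation of the atoms
    if f[0] == "atom": return v[f[1]]
    if f[0] == "not":  return not ev(f[1], v)
    if f[0] == "and":  return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)

p_and_q = ("and", ("atom", "p"), ("atom", "q"))
round_trip = h(t(p_and_q))  # yields the formula for ¬¬(¬¬p ∧ ¬¬q)

# Recoverability fails: the round trip is not the original formula.
print(round_trip == p_and_q)  # False

# Weak Recoverability holds: same truth-value under every valuation.
print(all(ev(p_and_q, {"p": a, "q": b}) == ev(round_trip, {"p": a, "q": b})
          for a, b in product([True, False], repeat=2)))  # True
```

Running this confirms the appendix's point: the De Morgan translations are extension-preserving even though they are not syntactically invertible.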


Appendix II: From Categoricity to ARITHMETIC?

When I explained ARITHMETIC in section 1, I did so in terms of the idea that arithmetical truth is determinate, but I quickly noted that this much follows from the isomorphism of all interpretations of arithmetic, which is what the argument of section 5 established. The move from categoricity to determinacy of truth is standard in the literature, but my criticisms of the move from a categoricity theorem—in the form of inter-language equivalence—to ARITHMETIC should make us a bit wary. Indeed, in an as yet unpublished paper, Joel Hamkins and Ruizhi Yang present results they take to show that the move from the determinacy of the natural numbers to the determinacy of arithmetical truth is too quick.50 The central relevant theorem is that two models of set theory M1 and M2 can have the same natural numbers N and the same standard model of arithmetic while at the same time disagreeing about the truth in arithmetic of some shared arithmetical sentence φ. Hamkins and Yang characterize their central philosophical claim as follows:

. . . in the case of arithmetic truth and the standard model of arithmetic N, we claim, it is a philosophical error to deduce that arithmetic truth is definite just on the basis that the natural numbers themselves and the natural number structure. . . is definite. At bottom, our claim is that one does not get definiteness-of-truth for free from definiteness-of-objects and definiteness-of-structure, and that, rather, one must make a separate anal- ysis and justification. . . 51

They believe that this follows from the theorem just noted. I want to make two brief points about this: (i) In using the theorem stated above to argue for their key philosophical claim, Hamkins and Yang are assuming that the relevant arithmetical sentence φ means the same thing against the background of both M1 and M2, but it is extremely difficult to relate formal claims like this to delicate metasemantic points concerning meaning and translation. In fact, Hamkins and Yang report that Roman Kossak responded to their argument by making this meaning variance claim and pointing to a more natural surrogate meaning for “φ” in M1, but they think Kossak's point can be answered by pointing to similar results in which there is no obvious surrogate meaning. But the general meaning variance point doesn't depend upon any particular non-homophonic translation of “φ”; “φ” may simply lack an obvious translation into M2's point of

50 See Hamkins and Yang (manuscript).

51 Hamkins and Yang (manuscript), page 26.


view.52 This point isn't ad hoc, for consider, by analogy, a sentence like “CON_PA” in an ω-inconsistent model of arithmetic, say a model of the theory PA + ¬CON_PA—it is natural to think that this sentence means something different from the point of view of non-standard models of arithmetic versus standard models.53 (ii) Hamkins and Yang themselves admit that certain further arguments and considerations can allow us to conclude that arithmetical truth is definite from the definiteness of the natural numbers (together with the aforementioned supplementary hypotheses):

The claim of definiteness for arithmetic truth amounts in a certain sense to the claim that one’s meta-theoretic concept of natural number aligns with the natural number concept in the object theory. . . . surely if one can be confident that one’s meta-theoretic natural number concept coincides with the object-theoretic account, then one should expect definiteness of arithmetic truth. But what we would desire is an account of how this defi- niteness is supposed to work.54

In essence, as section 6 already pointed out, this is exactly what the open-ended perspective and the argument of section 5 are aimed at establishing, viz., that when one has an open-ended understanding of arithmetic, the possibility of an arithmetical perspective which differs from the given one is foreclosed. For this reason, nothing in Hamkins and Yang's interesting paper undercuts the open-ended argument for ARITHMETIC given here.

References

[1] Azzouni, Jody. (2004). Deflating Existential Consequence: A Case for Nominal- ism. Oxford: Oxford University Press.

[2] Benacerraf, Paul. (1965). “What Numbers Could Not Be” Philosophical Review 74: 47-73.

[3] Benacerraf, Paul. (1967). “God, the Devil, and Gödel”. The Monist 51: 9-32.

[4] Boghossian, Paul. (forthcoming). “What is Inference?” Philosophical Studies.

[5] Burgess, John P. (2005). “Translating Names”. Analysis 65: 196-204.

52 See the discussion of this in Hamkins and Yang (manuscript) starting on page 29.

53 See my (2014b) for extensive discussion of this case.

54 Hamkins and Yang (manuscript), pages 27-28.


[6] Chihara, Charles. (1990). Constructibility and Mathematical Existence. Oxford: Oxford University Press.

[7] Copeland, B.J. & Proudfoot, Diane. (1999). “Alan Turing's Forgotten Ideas in Computer Science”. Scientific American. April 1999 edition.

[8] Cutland, N.J. (1980). Computability: An Introduction to Recursive Function Theory. Cambridge: Cambridge University Press.

[9] Dedekind, Richard. (1888). Was sind und was sollen die Zahlen? Braunschweig: Vieweg.

[10] Dreyfus, Hubert. (1972). What Computers Can’t Do. New York: Harper & Row.

[11] Dummett, Michael. (1975). “Wang’s Paradox”. Synthese 30: 301-324.

[12] Feferman, Solomon. (1991). “Reflecting on Incompleteness.” Journal of Symbolic Logic 56: 1-49.

[13] Field, Hartry. (1980). Science Without Numbers. Princeton: Princeton University Press.

[14] Field, Hartry. (1994). “Are Our Mathematical and Logical Concepts Highly Indeterminate?” in P. French, T. Uehling, and H. Wettstein (eds). Midwest Studies in Philosophy 19.

[15] Field, Hartry. (1998a). “Do We Have a Determinate Conception of Finiteness and Natural Number?” in Schirn (ed.), The Philosophy of Mathematics Today. Oxford: Oxford University Press.

[16] Field, Hartry. (1998b). “Which Undecidable Mathematical Sentences Have Determinate Truth Values?” in Dales & Olivari (eds.), Truth in Mathematics. Oxford: Oxford University Press.

[17] Field, Hartry. (2001). Truth and the Absence of Fact. Oxford: Oxford University Press.

[18] Gödel, Kurt. (1931). “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I.” Monatshefte für Mathematik und Physik 38: 173-198.

[19] Gödel, Kurt. (1964). “What is Cantor’s Continuum Problem?” revised edition. in Benacerraf & Putnam (eds.), Philosophy of Mathematics: Selected Readings, 2nd edition. New York: Cambridge University Press.

[20] Hamkins, Joel David. & Yang, Ruizhi. (under review). “Satisfaction is Not Absolute”.

[21] Harris, J. H. (1982). “What’s so Logical about the ‘Logical’ Axioms?” Studia Logica 41: 159-171.

[22] Kripke, Saul A. (1976). “Is There a Problem about Substitutional Quantification?” in Evans and McDowell (eds.), Truth and Meaning: Essays in Semantics. Oxford: Clarendon Press.

[23] Kripke, Saul A. (1979). “A Puzzle About Belief”. in Margalit (ed.), Meaning and Use. Dordrecht: Reidel.

[24] Lavine, Shaughan. (1994). Understanding the Infinite. Cambridge: Harvard University Press.

[25] Lavine, Shaughan. (unpublished). Skolem was Wrong.

[26] Lucas, J.R. (1961). “Minds, Machines, and Gödel”. Philosophy 36: 112-137.

[27] Maddy, Penelope. (1990). Realism in Mathematics. Oxford: Clarendon Press.

[28] McGee, Vann. (1997). “How We Learn Mathematical Language.” The Philosoph- ical Review 106: 35-68.

[29] McGee, Vann. (2000). “Everything.” in Sher & Tieszen (eds.) Between Logic and Intuition. Cambridge: Cambridge University Press.

[30] McGee, Vann. (2006). “There’s a Rule for Everything.” in Rayo & Uzquiano (eds.). Absolute Generality. Oxford: Oxford University Press.

[31] Odifreddi, Piergiorgio. (1989). Classical Recursion Theory: The Theory of Functions and Sets of Natural Numbers. Amsterdam: Elsevier.

[32] Parsons, Charles. (1983). Mathematics in Philosophy: Selected Essays. Ithaca: Cornell University Press.

[33] Parsons, Charles. (2001). “Communication and the Uniqueness of the Natural Numbers.” The Proceedings of the First Seminar of the Philosophy of Mathematics in Iran, Shahid University.

[34] Peirce, Charles Sanders. (1878). “How to Make Our Ideas Clear.” Popular Science Monthly. January edition: 286-302.

[35] Penrose, Roger. (1989). The Emperor’s New Mind. New York: Oxford University Press.

[36] Penrose, Roger. (1994). Shadows of the Mind. Oxford: Oxford University Press.

[37] Putnam, Hilary. (1975). “The Meaning of ‘Meaning’” In Gunderson (1975), 131–93.

[38] Putnam, Hilary. (1981). Reason, Truth and History. Cambridge: Cambridge Uni- versity Press.

[39] Quine, W.V. (1970). Philosophy of Logic. Englewood Cliffs: Prentice Hall.

[40] Resnik, Michael. (1997). Mathematics as a Science of Patterns. Oxford: Oxford University Press.

[41] Russell, Bertrand. (1908). “Mathematical Logic as Based on the Theory of Types.” American Journal of Mathematics 30: 222-262.

[42] Searle, John. (1980). “Minds, Brains, and Programs.” The Behavioral and Brain Sciences 3: 417-457.

[43] Shapiro, Stewart. (1991). Foundations Without Foundationalism: A Case for Second-Order Logic. Oxford: Oxford University Press.

[44] Shapiro, Stewart. (1997). Philosophy of Mathematics: Structure and Ontology. New York: Oxford University Press.

[45] Shapiro, Stewart. (1998). “Incompleteness, Mechanism, and Optimism.” Bulletin of Symbolic Logic 4: 273-302.

[46] Smith, Peter. (2013). An Introduction to Gödel’s Theorems: Second Edition. Cambridge: Cambridge University Press.

[47] Shoenfield, Joseph. (1967). Mathematical Logic. Reading: Addison-Wesley.

[48] Tarski, Alfred. & Mostowski, Andrzej. & Robinson, Raphael. (1953). Undecidable Theories. Amsterdam: North Holland.

[49] Warren, Jared. (2014a). “Quantifier Variance and the Collapse Argument”. The Philosophical Quarterly.

[50] Warren, Jared. (2014b). “Conventionalism, Consistency, and Consistency Sentences”. Synthese.

[51] Weston, Thomas. (1976). “Kreisel, the Continuum Hypothesis and Second Order Set Theory.” Journal of Philosophical Logic 5(2): 281-298.

[52] Williamson, Timothy. (2006). “Absolute Identity and Absolute Generality.” in Rayo & Uzquiano (eds.). Absolute Generality. Oxford: Oxford University Press.

[53] Wittgenstein, Ludwig. (1953). Philosophical Investigations. Englewood Cliffs: Prentice Hall.

[54] Woodin, W. Hugh. (2004). “Set Theory after Russell: The Journey Back to Eden.” in Link (ed.), One Hundred Years of Russell’s Paradox: Mathematics, Logic, Philosophy. de Gruyter.

[55] Zermelo, Ernst. (1930). “Über Stufen der Quantifikation und die Logik des Unendlichen.” Jahresbericht der Deutschen Mathematiker-Vereinigung (Angelegenheiten) 31: 85-88.