Logical Inferentialism and Identity

Chris Mitsch

April 28, 2017

Abstract. As Wehmeier [1, 2] has shown, the identity-free system of Wittgensteinian predicate logic (W-logic), enriched with a predicate expressing that individual constants corefer, is expressively equivalent to first-order logic with identity (FOL=). I show that the introduction and elimination rules in W-logic for the usual logical constants (except classical negation) are harmonious in both the intuitive and the Prawitzian sense, while those for the coreference predicate share the troubles of identity in FOL=. Thus, if we follow Wehmeier [3] in viewing classical identity as an amalgamation of several distinct concepts, among which is coreference, we can precisely separate the harmonious logical component of identity (responsible for variable co-ordination) from its non-harmonious and non-logical component (i.e., coreference). We are thus able to account, from an inferentialist perspective, both for the intuition that classical identity is a logical notion, and for its failure to be fully harmonious in the intuitive sense, as shown by Griffiths [4].

1 Inferentialism

The inferentialist is engaged in a debate over the core notion in semantics–that is, what is it that grounds our account of linguistic meaning? One account, typically called truth-theoretic semantics, characterizes the meaning of a given sentence in terms of its truth conditions. The guiding idea for the account is that the meaning of a statement is given in the way it latches onto the world. In general this approach relies strongly on associating sets of possible worlds with sentences–depending on the flavor of truth-conditional semantics we are working with, we will speak of singular terms referring to objects, predicates referring to sets of tuples of objects, sentences referring to truth values, etc. Given the primacy of the notions of truth and reference, model

theory tends to be the most natural framework in which to flesh out one's truth-conditional semantics.

In contrast, the inferentialist seeks to ground the meaning of a statement or term in the way it is used in language. The strategy for the inferentialist is to characterize the meaning of a statement by focusing on what Tennant calls the "to-and-fro" of conversation or inference [5]. The guiding idea for this account is that when we add to a language a new sentence or sub-sentential phrase (like a noun phrase, name, verb, etc.) we are (implicitly or explicitly) associating it with (i) the grounds for asserting statements containing it, or (ii) those statements that are implied by it [6]. Thus, Dummett [7]:

For utterances considered quite generally, the bifurcation between the two aspects of their use lies in the distinction between the conventions governing the occasions on which the utterance is appropriately made and those governing both the responses of the hearer and what the speaker commits himself to by making the utterance: schematically, between the conditions for and the consequences of it.

Some inferentialists, like Dummett and Prawitz, are concerned primarily with the meaning of the logical constants. The goal of this restricted form of inferentialism–called logical inferentialism–is to cash out the meanings of the logical constants in terms of their introduction and elimination rules, typically in a natural deduction setting. Fundamental to this project is the notion of proof. In light of this, contrary to the model-theoretic semantics championed by truth-conditional semantics, advocates of logical inferentialism advance a proof-theoretic semantics. This approach seeks to ground the meanings of the logical constants in terms of their inference rules in the proof theory. For the logical inferentialist the meanings of the logical constants are manifest in and completely analysed by these rules. The job of the logical inferentialist thus becomes describing exactly which transitions, using a given set of logical constants, are admissible in reasoning.

Consider the case of conjunction. What can we conclude from A ∧ B–in other words, what are the elimination rules for ∧? Well, we can conclude A, and we can conclude B. Conversely, what is required of us to conclude A ∧ B–i.e. what is the introduction rule for ∧? Here we must already have established that A and established that B. With this analysis we have characterized the admissible transitions to and from A ∧ B, and thus characterized the meaning of ∧.
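Displayed as natural deduction figures, the foregoing amounts to the following (a minimal LaTeX rendering of the standard rules using the bussproofs package; the typesetting, though not the rules, is mine):

  \documentclass{article}
  \usepackage{bussproofs}
  \begin{document}
  % Introduction: a proof of A and a proof of B license A /\ B.
  \begin{prooftree}
    \AxiomC{$A$}
    \AxiomC{$B$}
    \RightLabel{$\wedge$I}
    \BinaryInfC{$A \wedge B$}
  \end{prooftree}
  % Elimination: from A /\ B we may conclude either conjunct.
  \begin{prooftree}
    \AxiomC{$A \wedge B$}
    \RightLabel{$\wedge$E}
    \UnaryInfC{$A$}
  \end{prooftree}
  \begin{prooftree}
    \AxiomC{$A \wedge B$}
    \RightLabel{$\wedge$E}
    \UnaryInfC{$B$}
  \end{prooftree}
  \end{document}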

Given the simplicity of giving the meaning of ∧ in terms of its use, it may seem that all that we require for characterizing the meaning of a given logical constant are rules for its introduction and elimination in proof. Further, one may be inclined to believe that this kind of full explication of the meaning of a given constant–in terms of introduction and elimination rules–suffices for its being logical. The thought is that if determination of the meaning of a given constant is sufficient for determination of the validity of any inference in which it plays the dominant role, the constant must be logical. Or, as Prior put it, "it is sometimes alleged that there are inferences whose validity arises solely from the meanings of certain expressions occurring in them...let us say that such inferences, if any such there be, are analytically valid." [8] The inference to logicality is thus one that assumes that I and E rules for a constant–in virtue of their fully specifying the meaning–confer validity on all inferences in which it occurs dominantly.

But consider Prior's infamous tonk [8]. Let its introduction and elimination rules be A ⊢ A tonk B and A tonk B ⊢ B, respectively. By the transitivity of ⊢, we can conclude that A ⊢ B. The purported logical constant tonk thus straightforwardly leads to the inconsistency of any logical system to which it is added. Now here's the question: do we count tonk as logical or not? Pushing it farther back still, we might wonder whether we've even fixed a meaning for tonk at all. Assuming that we'd like to avoid inconsistency, the answer seems to be an unambiguous "no". So what went wrong?
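For concreteness, here is the offending derivation displayed as a tree (a sketch of Prior's sequent-style rules recast in natural deduction format; the rendering is mine):

  \documentclass{article}
  \usepackage{bussproofs}
  \begin{document}
  % From any premise A, tonk-I and tonk-E together yield an arbitrary B.
  \begin{prooftree}
    \AxiomC{$A$}
    \RightLabel{tonk-I}
    \UnaryInfC{$A \;\mathrm{tonk}\; B$}
    \RightLabel{tonk-E}
    \UnaryInfC{$B$}
  \end{prooftree}
  \end{document}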

2 Inferentialism and FOL=

2.1 Inferentialism and Harmony

One response is to say that we made a mistake in thinking that any pair of introduction and elimination rules for a constant suffices for the specification of that constant's meaning. Looking back at our case for ∧, we might notice a certain correspondence or harmony between the inference rules that is lacking in the case for tonk: from the elimination rule we can derive no more than what it took to infer A ∧ B, namely A and B. In the tonk case we concluded B from A tonk B, which cannot in general be inferred from A, so that there is a clear lack of harmony.

It is at this point that Gentzen [9] is typically invoked, having said that "the introductions represent, as it were, the 'definitions' of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions." The rough idea we can take from this is

that for the logical constants, we expect there to be a precise matching of the information content imputed by the introduction rule and drawn out by the elimination rule. In addition to providing introduction and elimination rules for a constant, then, we might additionally require that they be harmonious in this sense. I will call this the intuitive notion of harmony. In an effort to provide a formal definition of this, Prawitz, following Lorenzen, introduced the inversion principle¹ [12]:

Let α be an application of an elimination rule that has B as consequence. Then, deductions that satisfy the sufficient condition for deriving the major premiss of α, when combined with deductions of the minor premisses of α (if any), already "contain" a deduction of B; the deduction of B is thus obtainable directly from the given deductions without the addition of α.

In coarser terms this principle states that any detour in a proof through an introduction rule for a constant followed directly by its corresponding elimination rule is, in principle, eliminable. The principle is supposed to be seen as a formal model of the intuitive notion of harmony in virtue of what it purports to rule out. Since any introduction of a constant followed by its immediate elimination in a deduction is avoidable, it can't be the case that the elimination rule is too strong relative to the introduction rule, in the sense that we can draw out more information content than was put in (or, equivalently, that the introduction rule is too weak relative to the elimination rule). As Prawitz puts it, nothing is "gained" by the detour [12].

Note that our concern here is with proofs in which the major premise is introduced canonically–that is, by the corresponding introduction rule for the dominant operator. The reason for this restriction relates directly to the inferentialist's purpose: what is intended to be recovered in these detour conversions is the subproof of the desired conclusion which was already in the proof. To demonstrate that in situations where the major premise is introduced non-canonically we can't expect this to obtain, consider the following examples:

¹ Inversion, as we use the word here, is not to be confused with inversion as discussed in e.g. [10]. It is instead what they call detour conversion. Also note that this does not privilege either I or E rules. Though Gentzen's quote above, privileging I rules, represents the majority position, others such as Rumfitt [11] don't take the I rules to be (universally) privileged.

  Γ               ∆
  Σ               Π
  A → (B ∧ C)     A
  ───────────────── →E
        B ∧ C
  ──────── ∧E
     B

           [A] Γ     [B] ∆
             Σ          Π
  A ∨ B    C ∧ D     C ∧ D
  ───────────────────────── ∨E
          C ∧ D
          ────── ∧E
            C

In each of these proofs we have introduced the major premise–respectively, B ∧ C and C ∧ D–non-canonically. Surveying the proof whose conclusion is each major premise, it should be clear that we cannot expect that a proof of B (resp. C) occurs therein. In contrast, it is reasonably intuitive to expect there to occur such a proof in cases of canonical introduction of the major premise. For example, in this proof:

  Γ0    Γ1
  Σ0    Σ1
  A0    A1
  ────────── ∧I
  A0 ∧ A1
  ────────── ∧E
     Ai

where A0 ∧ A1 is introduced canonically, we can see clearly that a proof of Ai will be contained within the proof leading to the canonical introduction of A0 ∧ A1. This holds also for each of the other logical operators in FOL. In light of this, Prawitz's reduction procedure would appear to be an adequate explication of the intuitive notion of harmony.
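The reduction procedure for ∧ then eliminates exactly this kind of detour (a standard conversion going back to Prawitz [12]; the LaTeX rendering is mine):

  \documentclass{article}
  \usepackage{bussproofs}
  \begin{document}
  % Before: /\I immediately followed by /\E -- a detour.
  \begin{prooftree}
    \AxiomC{$\Gamma_0$} \noLine \UnaryInfC{$\Sigma_0$} \noLine \UnaryInfC{$A_0$}
    \AxiomC{$\Gamma_1$} \noLine \UnaryInfC{$\Sigma_1$} \noLine \UnaryInfC{$A_1$}
    \RightLabel{$\wedge$I}
    \BinaryInfC{$A_0 \wedge A_1$}
    \RightLabel{$\wedge$E}
    \UnaryInfC{$A_i$}
  \end{prooftree}
  % After: just the subproof of the surviving conjunct, with no detour.
  \begin{prooftree}
    \AxiomC{$\Gamma_i$} \noLine \UnaryInfC{$\Sigma_i$} \noLine \UnaryInfC{$A_i$}
  \end{prooftree}
  \end{document}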

2.2 Harmony and Logicality

The reduction procedure as a criterion for harmony thus has at least three desirable properties. First, the procedure guarantees that the E rules aren't "too strong" relative to the corresponding I rule–again, in the sense that we can't "pull out" more information content than is put in. It is for this reason that reduction procedures rule out tonk-like operators, something we desire. Looking back at tonk, we can see that this condition fails: without the detour through A tonk B we couldn't derive B from A. This then gives us a way of denying that tonk has its meaning fully specified: to fully specify the meaning of a logical constant, the I and E rules must exhibit harmony. If we take invertibility to be an appropriate formal model of our intuitive notion of harmony, then the rules for tonk fail to fully determine its meaning (what Read calls being autonomous [13]). This is a move endorsed by Dummett, as evident here [14]:

The demand that the introduction rules and the elimination rules be in harmony is not reasonable in a general context... but it is compelling when it is being maintained that the meaning of the logical constant in question can be completely determined by laying down the fundamental logical laws governing it.

As understood by Read [15] and Griffiths [4], Dummett here stipulates that harmony is a requirement for logicality. Tennant [5] also requires harmony between I and E rules for logicality. For each of these authors, Prawitz's invertibility is a necessary condition for logicality.

Second, the procedure doesn't privilege I or E rules in the sense that one or the other must be singled out as the primary "seat" of meaning. For our purposes this is desirable, for it does not commit us to any position on the primacy of I or E rules. In avoiding this debate, the result obtained here should be more palatable than it might otherwise be.

Third, as Prawitz showed in his dissertation [12], the rules for each of ¬, ∧, ∨, →, ∃, and ∀ all admit of reduction procedures in the above sense (in intuitionistic FOL).² Thus for the logical inferentialist the meanings of the logical constants of FOL count as fully specified by their rules for introduction and elimination.

However, there is one downside to this procedure. We would also like to know, for any set of I and E rules governing an operator, that the E rules aren't "too weak" relative to the I rule–that is, that they don't return less information content than was initially put in by the I rule. This is a point on which the reduction procedure remains silent; the existence of a reduction procedure provides no guarantee against this case. Despite this, the reduction procedure remains a useful notion for its ability to at least discriminate against E rules that are "too strong."

² In addition, if we follow Read [13], ¬ is harmonious for classical logic. The diagnosis, according to Read, is that if one allows bivalence, a multiple-conclusion natural deduction proof-system for classical logic is harmonious. Similarly, Rumfitt [16] argues against intuitionistic negation's being the only harmonious I/E rule pair for ¬ by denying that one rule dominates (here the I rule), a strategy which he argues is underwritten by a unilateral understanding of linguistic meaning (i.e. meaning is determined by one aspect of use). Instead he recommends a bilateral approach: "A necessary condition for a complete sentence to have a coherent bilateral sense is that the acts of asserting it and rejecting it should be co-ordinated: that is to say, the conditions for the sentence to be correctly asserted should not intersect with the conditions for it to be correctly denied."

2.3 Logicality and Identity

But what about =? If we're a logical inferentialist and want to count it as logical it must admit of harmonious introduction and elimination rules. As we reviewed, one way of cashing this out is through Prawitz's principle of invertibility: if any detour through an introduction rule followed immediately by the corresponding elimination rule can be eliminated, then the rules are harmonious. The standard rules for identity are:

  Reflexivity          Congruence

                       Γ        Σ
  ───────              a = b    F
   a = a               ─────────────
                          Fa[b]

where F is a formula and Fa[b] is the formula obtained by replacing in F all occurrences of a with b. As is easily seen, these rules indeed admit of a reduction procedure:

          Γ
          Σ
  a = a   F
  ───────────
       F

       ↦

  Γ
  Σ
  F

As can also be seen, identity admits of a reduction procedure in a way we don't see with the logical constants discussed earlier. For identity, no appeal to the major premise a = a is even needed to effect the deduction of F–the proof of F is precisely the minor premise! It seems that identity has no problem counting as logical since it's so obviously invertible.

But are these rules for identity intuitively harmonious? Notice that in Reflexivity the same term flanks identity, whereas in Congruence the terms flanking identity are allowed to be different. Thus the elimination rule is spuriously general with respect to the introduction rule. If what one is aiming for is the intuitive notion of harmony, the existence of a reduction procedure is insufficient for establishing it, and in particular identity is not intuitively harmonious. We obviously don't want to modify Congruence, for the substitutivity property of identity is an essential feature of identity. Should one desire to modify the rules to be harmonious, the change would then seem to come with the I rule.

In [15], Read proposed the following alternative rules in an attempt to save identity³:

  [Pa]
   Σ
   Pb                  a = b    F
  ────── =I            ──────────── =E
  a = b                   Fa[b]

where P is a variable ranging over (monadic [6]) predicates and F is an arbitrary formula. As can be seen, the standard rules are easily recovered: simply let a be b in =I. More importantly, though, we now appear to have a way of introducing mixed-identity statements through the introduction rule, so that there appears to be intuitive harmony.

In [4] Griffiths argues that Read's rules =I and =E are inferentially equivalent to the original rules Reflexivity and Congruence, hence not harmonious. Read [6] responds to this argument, claiming that inferentially equivalent rules need not have the same status with respect to harmony. I take no side in this disagreement, and instead show that the problem as understood in [15] can be avoided.

I propose that to do so we must look more closely at identity and what our motivation for counting it as logical consists in. As the disagreement between Griffiths and Read shows, the harmony trouble with identity does seem to run deep, and one might be inclined to abandon the search for harmonious rules altogether. Independent of any particular motivations that the logical inferentialist might have, identity has been counted as logical by many. Frege [17], for instance, included identity in his system. Quine [18] also considered identity to be logical. Without diving into the history of identity, something about identity does seem to suggest its counting as logical.

Despite this, there also appear to be reasons why identity shouldn't count as logical. For one, there appear to be a posteriori identity statements that record empirical facts. Consider, for example, the statement 'Hesperus = Phosphorus'. The introduction of this statement into arguments does not seem to come from Reflexivity. Instead, the statement is introduced into arguments in the same way as any other empirical and contingent fact, as a supposition whose acceptability is determined based on information about the world.
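Returning to the recovery of Reflexivity from =I noted above: with b taken to be a, the required derivation of Pb from [Pa] is the trivial one-step derivation, so =I immediately yields a = a (my rendering of this observation, in bussproofs):

  \documentclass{article}
  \usepackage{bussproofs}
  \begin{document}
  % With b taken to be a, the subderivation from [Pa] to Pb is just
  % the assumption itself; =I then discharges it and yields a = a.
  \begin{prooftree}
    \AxiomC{$[Pa]$}
    \RightLabel{$=$I ($P$ arbitrary, $Pa$ discharged)}
    \UnaryInfC{$a = a$}
  \end{prooftree}
  \end{document}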

³ In [15] Read relied heavily on second-order resources to show that =E holds for arbitrary formulas F. In [6] Read clarifies that one can avoid adverting to second-order comprehension to justify =E by inducting on the complexity of the formulas containing monadic predicates. It is still unclear, at least to this author, how one can get from arbitrary formulas containing only monadic predicates to arbitrarily complex formulas containing predicates of any arity, which seems to be what is desired, without appeal to something like second-order comprehension.

To put it more plainly, mixed-identity statements are often justified on empirical grounds that go beyond the meaning of identity as given by a pair of introduction and elimination rules such as Reflexivity and Congruence.

Is the search for harmony in identity then a fruitless effort? I think not. Instead, I think the problem is more subtle than simply finding I and E rules that are harmonious. Following Wehmeier [3], what we really seem to be working with in identity is several different concepts all lumped under one heading. Two such concepts are relevant here. First is that of variable coordination. In this capacity, identity is used to determine the mapping relationships among multiple variables. For example, in the statement '∃x∃y(x ≠ y)', identity is being used to ensure that 'x' and 'y' map to distinct objects. The second concept relevant for our purposes deals exclusively with constants: co-reference. This concept appears to be the one at play in many arguments containing mixed-identity statements. Consider again our classic example, the statement 'Hesperus is Phosphorus'. The proposal which I follow, argued for in [3], is that this statement is best read as Hesperus ≡ Phosphorus, where ≡ expresses the relation of co-reference. On this reading, the statement 'Hesperus is Phosphorus' expresses that there is one object that both Hesperus and Phosphorus denote. The relation is thus best read as a relation between terms of the language and not their referents–more precisely it is a relation between individual constants (i.e. names) of the language.

Having separated these distinct concepts, I will argue that identity qua variable coordination is harmonious. This will be done by absorbing the variable-coordinating work done by identity into the rules governing the logical constants. In doing so, we can show that the rules are harmonious, and thus the variable-coordination fragment of identity does not contribute to the apparently inharmonious nature of identity at-large, as it is used in FOL=. I then propose the addition of a predicate expressing co-reference, which grants the logic an expressive power equivalent to that of FOL= [2]. Here we expect no harmony. The logic I will use to accomplish this task is a natural deduction presentation of Wehmeier's W-logic [1].

3 W-logic

3.1 Syntax and Semantics of W-logic

The guiding semantic idea behind W-logic is that distinct free variables must be assigned distinct values [1]. With this in mind, we introduce the concepts and terminology of the system. The language of W-logic has as its

primitive symbols:

• the 0-ary propositional connective: ⊥

• the propositional connectives: →, ∨, ∧ (we define ¬A as an abbreviation for A → ⊥)

• the quantifier symbols: ∀, ∃

• countably many bound variables: X = {xi | i ∈ N}

• countably many free variables: A = {ai | i ∈ N}

• for every arity n ≥ 1, countably many relation symbols: P^n = {P^n_i | i ∈ N}

Definition 1 Inductively define the formulas of W-logic as follows:

• P(a0, . . . , an−1) are formulas, for P ∈ P and a0, . . . , an−1 ∈ A;

• whenever F and G are formulas, then (F → G), (F ∨ G), and (F ∧ G) are formulas;

• whenever F is a formula, a ∈ A, and x ∈ X doesn't occur in F, then ∀xFa[x] and ∃xFa[x] are formulas (where for any formula F, Fs[t] is the result of replacing all occurrences of s in F with t).
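As an illustration of this substitutional style of quantification (the example is mine): the same atomic formula yields different quantified formulas depending on which free variable is targeted.

  \documentclass{article}
  \begin{document}
  % Starting from F = P(a_0, a_1), quantifying on a_0 or on a_1:
  \[
    F = P(a_0, a_1), \qquad
    \forall x\, F_{a_0}[x] = \forall x\, P(x, a_1), \qquad
    \exists x\, F_{a_1}[x] = \exists x\, P(a_0, x).
  \]
  \end{document}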

More definitions

• the set of free variables occurring in a formula F is denoted by FV(F). If FV(F) = ∅, we say that F is a sentence.

• Γ0, Γ1,... denote sets of formulas

• Σ0, Σ1,... denote proofs

• a structure is a tuple U = ⟨U, (P^U)_{P ∈ P}⟩, where U is the non-empty domain of U, and for each n-ary predicate symbol P in the language of W-logic, P^U is an n-ary relation over U.

• a U-assignment σ is a function from the free variables into U.

Definition 2 Let A, F, and G be formulas in the language of W-logic. We recursively define W-satisfaction of A by σ on U (write U ⊨ A[σ]) as follows:

• U ⊨ P(a0, . . . , an−1)[σ] iff ⟨σ(a0), . . . , σ(an−1)⟩ ∈ P^U;

• U ⊭ ⊥[σ];

• U ⊨ F ∨ G[σ] iff U ⊨ F[σ] or U ⊨ G[σ];

• U ⊨ F → G[σ] iff U ⊭ F[σ] or U ⊨ G[σ];

• U ⊨ F ∧ G[σ] iff U ⊨ F[σ] and U ⊨ G[σ];

• U ⊨ ∀xFa[x][σ] iff U ⊨ F[σ{a := u}] for all u ∉ σ[FV(∀xFa[x])];

• U ⊨ ∃xFa[x][σ] iff U ⊨ F[σ{a := u}] for some u ∉ σ[FV(∃xFa[x])].

Further definitions

• F is W-valid in U, U ⊨ F, if, for every U-assignment σ 1-1 on FV(F), U ⊨ F[σ].

• F is W-valid, ⊨ F, if U ⊨ F for all U.

• When U ⊨ F and F is a sentence, we say that F is W-true in U.

• F is a W-logical consequence of Γ, Γ ⊨ F, if for every structure U and every U-assignment σ that is 1-1 on FV(Γ, F), if U ⊨ Γ[σ] then U ⊨ F[σ].
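To illustrate these clauses (the example is mine, in the spirit of Wehmeier's exclusive interpretation): since a 1-1 assignment sends distinct free variables to distinct objects, numerical quantification requires no inequality clause.

  \documentclass{article}
  \usepackage{amsmath}
  \begin{document}
  % "At least two things are P": the FOL= disjointness conjunct x != y
  % is absorbed into the W-logic semantics of the variables themselves.
  \[
    \exists x \exists y\,(x \neq y \wedge P(x) \wedge P(y))
    \quad\text{(FOL$^{=}$)}
    \qquad\text{is expressed as}\qquad
    \exists x \exists y\,(P(x) \wedge P(y))
    \quad\text{(W-logic)}.
  \]
  \end{document}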

Theorem 1 Let F be a formula and a, a′ be free variables. Then for all structures U:

U ⊨ F[σ{a := u}] iff U ⊨ Fa[a′][σ], where σ(a′) = u.

Proof See appendix.
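A concrete instance may help fix the content of Theorem 1 (the instance is mine): take F to be the atomic formula P(a, c), with σ(a′) = u.

  \documentclass{article}
  \usepackage{amsmath}
  \begin{document}
  % Shifting a to u in the assignment matches renaming a to a' in the
  % formula, provided sigma already sends a' to u.
  \[
    \mathcal{U} \models P(a, c)[\sigma\{a := u\}]
    \iff \langle u, \sigma(c) \rangle \in P^{\mathcal{U}}
    \iff \mathcal{U} \models P(a', c)[\sigma].
  \]
  \end{document}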

3.2 Natural Deduction Rules of W-logic

For each natural deduction rule of W-logic I will first introduce it in writing, then present its graphical representation and prove its soundness on the semantics. Following this, when appropriate, will be some brief comments on the role that the variable restrictions for the rule play in the soundness proof.

∀ Introduction

As for the standard rule, to conclude ∀xAa[x], we must have a proof of A from a set of assumptions Γ0 in which a does not occur. In addition we require that, for each free variable ai occurring in Γ0, there is a proof of Aa[ai] from some set of assumptions (call them Γi). Graphically this is represented as:

  Γ0    Γ1        ...   Γn
  Σ0    Σ1        ...   Σn
  A     Aa[a1]    ...   Aa[an]
  ──────────────────────────── ∀I
          ∀xAa[x]

where a ∉ FV(Γ0) and the ai are the free variables in Γ0 minus those in ∀xAa[x].

Proof of soundness Let Γ = Γ0 ∪ Γ1 ∪ · · · ∪ Γn. We require to show that if Γ0 ⊨ A and Γi ⊨ Aa[ai] for each 1 ≤ i ≤ n, then Γ ⊨ ∀xAa[x]. Assume Γ0 ⊨ A and Γi ⊨ Aa[ai] for each 1 ≤ i ≤ n. Let U and σ be such that U ⊨ Γ[σ], where σ is 1-1 on FV(Γ, ∀xAa[x]). By the clause for ∀ in the definition of satisfaction, we know that U ⊨ ∀xAa[x][σ] iff for every u ∉ σ[FV(∀xAa[x])], U ⊨ Aa[b][σ{b := u}]. The following two cases exhaust the possibilities:

• Assume u ∈ σ[{a1, ..., an}]. Say u = σ(ai) for some 1 ≤ i ≤ n. Since FV(Γi, Aa[ai]) ⊆ FV(Γ, ∀xAa[x]) and by assumption σ is 1-1 on FV(Γ, ∀xAa[x]), σ is 1-1 on FV(Γi, Aa[ai]). But by assumption Γi ⊨ Aa[ai] and U ⊨ Γi[σ], hence it follows that U ⊨ Aa[ai][σ]. By the substitution lemma, this implies U ⊨ Aa[b][σ{b := u}] for u ∈ σ[FV(Γ0) \ FV(∀xAa[x])].

• Assume u ∉ σ[{a1, ..., an}]. We know that Γ0 ⊨ A, by assumption, which is just to say that for every variable assignment τ on U that is 1-1 on FV(Γ0, ∀xAa[x]), if U ⊨ Γ0[τ] then U ⊨ A[τ]. I claim that σ{a := u} is 1-1 on FV(Γ0, A). We know that σ is 1-1 on FV(Γ, ∀xAa[x]) by assumption, and FV(Γ0, ∀xAa[x]) ⊆ FV(Γ, ∀xAa[x]), so σ is 1-1 on FV(Γ0, ∀xAa[x]). It is enough to show that u ∉ σ[FV(Γ0, ∀xAa[x])]. But this is just to say that u ∉ σ[{a1, ..., an} ∪ FV(∀xAa[x])], which is guaranteed by our case assumption and the clause for ∀ in Definition 2. Therefore σ{b := u} is 1-1 on FV(Γ0, Aa[b]). So by U ⊨ Γ0[σ{b := u}] and Γ0 ⊨ Aa[b], we have that U ⊨ Aa[b][σ{b := u}].

In each case U ⊨ Aa[b][σ{b := u}], so U ⊨ ∀xAa[x][σ]. Hence Γ ⊨ ∀xAa[x], as required to prove.

∀ Elimination

Given a proof of ∀xAa[x] from assumptions Γ, we can conclude Aa[b] for a free variable b not occurring in ∀xAa[x]; in the graphical statement and the soundness proof we take b to be a itself, so that the conclusion is A. Graphically:

  Γ
  Σ
  ∀xAa[x]
  ───────── ∀E
     A

Proof of soundness It suffices to show that if Γ ⊨ ∀xAa[x], then Γ ⊨ A. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, A). Assume that Γ ⊨ ∀xAa[x]. Since a ∉ FV(∀xAa[x]), FV(∀xAa[x]) ⊆ FV(A). But since σ is 1-1 on FV(A) by assumption, σ is 1-1 on FV(∀xAa[x]). Hence by assumption of U ⊨ Γ[σ] and Γ ⊨ ∀xAa[x], it follows that U ⊨ ∀xAa[x][σ]. But by the clause for ∀ in Definition 2, U ⊨ ∀xAa[x][σ] iff U ⊨ A[σ{a := u}] for every u ∉ σ[FV(∀xAa[x])]. So U ⊨ A[σ{a := u}]. But σ is already 1-1 on FV(A), hence σ(a) = u for some such u, so U ⊨ A[σ]. Hence Γ ⊨ A.

∃ Introduction

Given a proof of A from assumptions Γ, we can conclude ∃xAa[x], where the free variable a on which we are quantifying doesn't occur in ∃xAa[x] and either it occurs already in our assumptions Γ or there are no free variables in Γ or ∃xAa[x]. Graphically:

  Γ
  Σ
  A
  ───────── ∃I
  ∃xAa[x]

where either a ∈ FV(Γ) or FV(Γ, ∃xAa[x]) = ∅.

Proof of soundness It suffices to show that if Γ ⊨ A, then Γ ⊨ ∃xAa[x]. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, ∃xAa[x]). Assume Γ ⊨ A. By cases:

• Assume a ∈ FV(Γ). Then σ is clearly 1-1 on FV(Γ, A), so by Γ ⊨ A, U ⊨ A[σ]. But σ is 1-1 on FV(Γ, ∃xAa[x]) by assumption, so that a is mapped to some u ∉ σ[FV(∃xAa[x])]. Hence U ⊨ A[σ{a := u}] for some u ∉ σ[FV(∃xAa[x])]. But by the ∃ clause in Definition 2 this is the case iff U ⊨ ∃xAa[x][σ]. Hence U ⊨ ∃xAa[x][σ].

• Assume FV(Γ, ∃xAa[x]) = ∅. Then σ is clearly 1-1 on FV(Γ, A), since any U-assignment is 1-1 on {a}. So from our assumptions of U ⊨ Γ[σ] and Γ ⊨ A, it follows that U ⊨ A[σ]. But as in the previous case, we have U ⊨ A[σ{a := u}] for some u ∉ σ[FV(∃xAa[x])], since here FV(∃xAa[x]) = ∅. Then by appeal to the ∃ clause in Definition 2 we again have U ⊨ ∃xAa[x][σ].

Hence Γ ⊨ ∃xAa[x].

∃ Elimination

In the pictorial representation of the proof, the structure is identical to that of the standard FOL rule. There are, however, additional restrictions on the variables. First, the part which is similar. Given a proof of ∃xAa[x] from assumptions Γ0 and a proof of C from assumptions Γ1 and Aa[b], one can conclude C, discharging the assumption of Aa[b]. As in FOL, we require that b not occur in ∃xAa[x], C, or Γ1. Unlike FOL, we additionally require either that the free variables in ∃xAa[x] occur already in Γ0, Γ1, or C, and that b occur in Γ0, or that Γ0, Γ1, and C are sentences and ∃xAa[x] has at most one free variable. Graphically:

  Γ0         Γ1  [Aa[b]]
  Σ0             Σ1
  ∃xAa[x]        C
  ──────────────────────── ∃E
            C

where b ∉ FV(∃xAa[x], C, Γ1), and either FV(∃xAa[x]) ⊆ FV(Γ0, Γ1, C) and b ∈ FV(Γ0), or FV(Γ0, Γ1, C) = ∅ and |FV(∃xAa[x])| ≤ 1.

Proof of soundness Let Γ = Γ0 ∪ Γ1. It suffices to show that if Γ0 ⊨ ∃xAa[x] and Γ1, Aa[b] ⊨ C, then Γ ⊨ C. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, C). Assume that Γ0 ⊨ ∃xAa[x] and Γ1, Aa[b] ⊨ C. By cases:

• Assume that FV(∃xAa[x]) ⊆ FV(Γ, C) and that b ∈ FV(Γ0). By assumption σ is 1-1 on FV(Γ, C) and FV(Γ0, ∃xAa[x]) ⊆ FV(Γ, C), so σ is clearly 1-1 on FV(Γ0, ∃xAa[x]). Then by our assumptions that U ⊨ Γ0[σ] and Γ0 ⊨ ∃xAa[x], it follows that U ⊨ ∃xAa[x][σ]. But by the clause for ∃ in Definition 2, this happens iff U ⊨ Aa[b][σ{b := u}] for some u ∉ σ[FV(∃xAa[x])], hence U ⊨ Aa[b][σ{b := u}] for some such u. But FV(Aa[b]) = FV(∃xAa[x]) ∪ {b}. Since σ is 1-1 on FV(Γ0, ∃xAa[x]), and since b ∈ FV(Γ0) by assumption, σ must be 1-1 on FV(Aa[b]); that is, σ(b) = u for some u ∉ σ[FV(∃xAa[x])]. So U ⊨ Aa[b][σ]. So U ⊨ Γ1[σ], U ⊨ Aa[b][σ], and σ is 1-1 on FV(Γ1, Aa[b], C). Hence by our assumption that Γ1, Aa[b] ⊨ C, U ⊨ C[σ].

• Assume that FV(Γ, C) = ∅ and |FV(∃xAa[x])| ≤ 1. Then since any U-assignment is 1-1 on FV(Γ, ∃xAa[x]), σ is. But since FV(Γ0, ∃xAa[x]) ⊆ FV(Γ, ∃xAa[x]), σ must be 1-1 on FV(Γ0, ∃xAa[x]). So by our assumptions that U ⊨ Γ0[σ] and Γ0 ⊨ ∃xAa[x], U ⊨ ∃xAa[x][σ]. But by the clause for ∃ in Definition 2, this happens iff U ⊨ Aa[b][σ{b := u}] for some u ∉ σ[FV(∃xAa[x])]. So U ⊨ Aa[b][σ{b := u}]. But since b ∉ FV(Γ1), we have also that U ⊨ Γ1[σ{b := u}], for some such u. Since σ{b := u} is clearly 1-1 on FV(Γ1, Aa[b], C) = FV(Aa[b]), it follows from our assumption that Γ1, Aa[b] ⊨ C that U ⊨ C[σ{b := u}], for some u. But b ∉ FV(C), so U ⊨ C[σ].

In each case U ⊨ C[σ], so Γ ⊨ C, as required to prove.

→ Introduction

The rule for → introduction is the same as the standard one. If, from some assumptions Γ, along with the assumption of A, one can prove B, then one can conclude A → B. Graphically:

  Γ  [A]
    Σ
    B
  ─────── →I
  A → B

Soundness It suffices to show that if Γ, A ⊨ B, then Γ ⊨ A → B. Let U and σ be such that U ⊨ Γ[σ], for σ 1-1 on FV(Γ, A → B). Assume that Γ, A ⊨ B. If U ⊭ A[σ], then by the → clause of Definition 2, U ⊨ A → B[σ]. If instead U ⊨ A[σ], then, since σ is 1-1 on FV(Γ, A, B) = FV(Γ, A → B), it follows from our assumptions that U ⊨ Γ[σ], U ⊨ A[σ], and Γ, A ⊨ B that U ⊨ B[σ]. But since U ⊨ A → B[σ] iff U ⊭ A[σ] or U ⊨ B[σ], it follows in either case that U ⊨ A → B[σ]. So Γ ⊨ A → B.

→ Elimination

Again, the rule is diagrammatically similar to the standard rule, with changes only in the variable restrictions. If from assumptions Γ0 one can prove A → B, and from assumptions Γ1 one can prove A, then one can conclude B. We further require that either FV(A) ⊆ FV(B), or |FV(A)| ≤ 1 and FV(Γ0, Γ1, B) = ∅. Graphically:

  Γ0       Γ1
  Σ0       Σ1
  A → B    A
  ───────────── →E
       B

where either FV(A) ⊆ FV(B), or |FV(A)| ≤ 1 and FV(Γ0, Γ1, B) = ∅.

Proof of soundness Let Γ = Γ0 ∪ Γ1. It suffices to show that if Γ0 ⊨ A → B and Γ1 ⊨ A, then Γ ⊨ B. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, B). Assume Γ0 ⊨ A → B and Γ1 ⊨ A. By cases:

• Assume that FV(A) ⊆ FV(B). Then, since σ is 1-1 on FV(B) and FV(A → B) = FV(B), σ is 1-1 on FV(A → B). Hence by our assumptions that U ⊨ Γ[σ] and Γ0 ⊨ A → B, U ⊨ A → B[σ]. Similarly, by our assumptions that U ⊨ Γ[σ] and Γ1 ⊨ A, it follows that U ⊨ A[σ]. But from U ⊨ A → B[σ], U ⊨ A[σ], and the → clause of Definition 2 it follows that U ⊨ B[σ].

• Assume that |FV(A)| = 1 and FV(Γ, B) = ∅. Then FV(Γ, A → B) = FV(A) and |FV(A)| = 1, so σ is 1-1 on FV(Γ, A → B). So by our assumptions that U ⊨ Γ[σ] and Γ0 ⊨ A → B it follows that U ⊨ A → B[σ]. Similarly, from our assumptions that U ⊨ Γ[σ] and Γ1 ⊨ A it follows that U ⊨ A[σ]. But from U ⊨ A → B[σ], U ⊨ A[σ], and the → clause of Definition 2 it follows that U ⊨ B[σ].

In each case U ⊨ B[σ], hence Γ ⊨ B as required to prove.

∧ Introduction

This rule is the same as the standard rule for ∧ introduction. Given a proof of A0 from assumptions Γ0 and a proof of A1 from assumptions Γ1, one can conclude A0 ∧ A1. Graphically:

  Γ0    Γ1
  Σ0    Σ1
  A0    A1
  ────────── ∧I
  A0 ∧ A1

Soundness Let Γ = Γ0 ∪ Γ1. It suffices to show that if Γi ⊨ Ai for i = 0, 1, then Γ ⊨ A0 ∧ A1. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, A0 ∧ A1). Assume Γi ⊨ Ai for i = 0, 1. But FV(Ai) ⊆ FV(A0, A1) for i = 0, 1, and σ is 1-1 on FV(A0, A1) by assumption, so σ is 1-1 on FV(Γi, Ai) for i = 0, 1. Then by our assumptions that U ⊨ Γ[σ] and Γi ⊨ Ai for i = 0, 1, it follows that U ⊨ Ai[σ] for i = 0, 1. By the clause for ∧ in Definition 2 it follows that U ⊨ A0 ∧ A1[σ]. So Γ ⊨ A0 ∧ A1, as required to prove.

∧ Elimination

As with previous rules, the diagrammatic form of this rule is the same as the standard elimination rule, but we place additional restrictions on the variables. Given a proof of A0 ∧ A1 from a set of assumptions Γ we can conclude A0 (analogously, A1). We require either that FV(A1) ⊆ FV(A0, Γ), or that exactly one free variable occur in A1 and FV(Γ, A0) = ∅. Graphically:

  Γ
  Σ
  A0 ∧ A1
  ───────── ∧E
     A0

where either FV(A1) ⊆ FV(A0, Γ), or |FV(A1)| = 1 and FV(Γ, A0) = ∅.

Soundness It suffices to show that if Γ ⊨ A0 ∧ A1, then Γ ⊨ A0. Let U be a structure and σ a U-assignment 1-1 on FV(Γ, A0). Suppose U ⊨ Γ[σ]. Assume Γ ⊨ A0 ∧ A1. By cases:

• Assume that FV(A1) ⊆ FV(A0, Γ). Then clearly σ is 1-1 on FV(Γ, A0, A1), since σ is 1-1 on FV(A0, Γ) by assumption and FV(A1) ⊆ FV(A0, Γ). So by our assumptions that U ⊨ Γ[σ] and Γ ⊨ A0 ∧ A1, it follows that U ⊨ A0 ∧ A1[σ]. Thus by the W-satisfaction clause for ∧, U ⊨ A0[σ].

• Assume that |FV(A1)| = 1 and FV(A0, Γ) = ∅. Then FV(Γ, A0 ∧ A1) = FV(A0 ∧ A1). Since |FV(A0 ∧ A1)| = 1, σ is clearly 1-1 on FV(A0 ∧ A1) = FV(Γ, A0 ∧ A1). Hence by our assumptions that U ⊨ Γ[σ] and Γ ⊨ A0 ∧ A1, it follows that U ⊨ A0 ∧ A1[σ]. Thus by the W-satisfaction clause for ∧, U ⊨ A0[σ].

In each case U ⊨ A0[σ], hence Γ ⊨ A0 as required to prove.

∨ Introduction

This rule is the same as the standard rule. Given a proof of Ai from Γ for i = 0 or i = 1, we can conclude A0 ∨ A1. Graphically:

  Γ
  Σ
  Ai
  ────────── ∨I
  A0 ∨ A1

Soundness It suffices to show that if Γ ⊨ Ai for i = 0 or i = 1, then Γ ⊨ A0 ∨ A1. Let U, σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, A0 ∨ A1). Assume Γ ⊨ Ai for i = 0 or i = 1. Since FV(Γ, Ai) ⊆ FV(Γ, A0 ∨ A1) and σ is 1-1 on FV(Γ, A0 ∨ A1), σ is 1-1 on FV(Γ, Ai). So by our assumptions that U ⊨ Γ[σ] and Γ ⊨ Ai for i = 0 or i = 1, it follows that U ⊨ Ai[σ] for i = 0 or i = 1. Hence by the clause for ∨ in Definition 2, U ⊨ A0 ∨ A1[σ], as required to prove.

∨ Elimination

This rule is similar to the standard rule, with the addition of several variable restrictions. Given proofs of A1 ∨ A2 from Γ0, C from Γ1, A1, and C from Γ2, A2, we can conclude C. Additionally, either the free variables of A1 ∨ A2 already occur in Γ0 or C, or there is only one free variable in A1 ∨ A2 and none in Γ or C.

  Γ0         Γ1  [A1]    Γ2  [A2]
  Σ0            Σ1           Σ2
  A1 ∨ A2       C            C
  ──────────────────────────────── ∨E
               C

where either FV(A1 ∨ A2) ⊆ FV(Γ0, C), or |FV(A1 ∨ A2)| = 1 and FV(Γ, C) = ∅.

Soundness Let Γ = Γ0 ∪ Γ1 ∪ Γ2. It suffices to show that if Γ0 ⊨ A1 ∨ A2 and Γi, Ai ⊨ C for i = 1, 2, then Γ ⊨ C. Let U and σ be such that U ⊨ Γ[σ] for σ 1-1 on FV(Γ, C). Assume Γ0 ⊨ A1 ∨ A2 and Γi, Ai ⊨ C for i = 1, 2. By cases:

• Assume that FV(A1 ∨ A2) ⊆ FV(Γ0, C). Then since σ is 1-1 on FV(Γ0, C), σ is 1-1 on FV(A1 ∨ A2, Γ0). So by our assumptions that U ⊨ Γ[σ] and Γ0 ⊨ A1 ∨ A2 it follows that U ⊨ A1 ∨ A2[σ]. But by the clause for ∨ in Definition 2, then U ⊨ Ai[σ] for some i = 1, 2. Without loss of generality, let U ⊨ A1[σ]. So we have U ⊨ A1[σ] and U ⊨ Γ1[σ] (by assumption). Hence by our assumptions that σ is 1-1 on FV(Γ1, A1, C) and Γ1, A1 ⊨ C, it follows that U ⊨ C[σ].

• Assume that |FV(A1 ∨ A2)| = 1 and FV(Γ, C) = ∅. Then FV(Γ, A1 ∨ A2) = FV(A1 ∨ A2). Since |FV(A1 ∨ A2)| = 1, σ is 1-1 on FV(Γ, A1 ∨ A2). Hence by our assumptions that U ⊨ Γ[σ] and Γ0 ⊨ A1 ∨ A2 it follows that U ⊨ A1 ∨ A2[σ]. Again by the clause for ∨ in Definition 2, U ⊨ Ai[σ] for some i = 1, 2. Without loss of generality, let U ⊨ A1[σ]. Then since σ is 1-1 on FV(Γ1, A1, C), by Γ1, A1 ⊨ C, U ⊨ C[σ].

In each case U ⊨ C[σ], thus Γ ⊨ C.

4 Harmony of W-logic

In this section we will prove that there exist reduction procedures (also called detour conversions [10]) which, given a proof with a local maximum of complexity arising from the elimination of an operator immediately following its introduction, return a proof without the detour. These reduction procedures suffice to show that the I and E rules for the operators of W-logic are invertible (in the Prawitz sense). We begin with some definitions concerning deductions.

Definition (free/bound in Σ). Following Troelstra and Schwichtenberg [10] we consider the variables free in a deduction to be as follows:

• The deduction consisting of assumption A only has FV(A) as free variables,

• at each rule application, the free variables are inherited from the immediate subdeductions, except that

• in an application of ∃E the occurrences of the free variable b in Σ1 and Aa[b] become closed,

• and in an application of ∀I the occurrences of the free variable a in Σ0 become closed,

• and in →I the variables in FV (A) have to be added in case no discharge of A occurs (i.e. the set of assumptions closed in →I is empty),

• in ∨I those in FV(Aj), j ≠ i, have to be added.

We say that a variable is closed in a deduction when it is not free.

Definition (Σ^a). Let Σ be a proof. Then Σ^a is the result of substituting b for a at all closed occurrences of a in Σ, where b is some free variable not occurring in Σ.

We now show that proofs are schematic in their free variables in a way sufficient to establish that the result of an application of a reduction procedure is a proof.

Theorem 2 For any Σ that is a proof of F from Γ, (Σ^b)a[b] is a proof of Fa[b] from Γa[b], where a, b ∈ A.⁴

⁴ The strategy of this proof is based on that provided by Tennant [19].

Proof See appendix.

4.1 Reduction Procedures

→

  Γ0  [A]
    Σ0
    B            Γ1
  ──────── →I    Σ1
   A → B         A
  ───────────────── →E
          B

       ↦

    Γ1
    Σ1
  Γ0  A
    Σ0
    B

This is to say that if from assumptions Γ0,A we can prove A → B by → introduction, and from assumptions Γ1 we can prove A, thus proving B by application of → elimination, then the reduction procedure transforms this into a proof from Γ1 of A which, along with assumptions Γ0, proves B. This resulting proof (called a reduct) is guaranteed to be a proof in virtue of the proofhood of Σ0 of B from Γ0,A–we are merely “tacking on” the proof of A from Γ1 instead of taking A as an assumption for discharge by application of → elimination.

∧

  Γ0    Γ1
  Σ0    Σ1
  A0    A1
  ────────── ∧I
  A0 ∧ A1
  ────────── ∧E
     Ai

       ↦

  Γi
  Σi
  Ai

In words, given a proof of the form which has proofs Σi of Ai from assumptions Γi, followed by an application of ∧ introduction to get A0 ∧ A1, and then followed immediately by ∧ elimination to get Ai, the reduction procedure returns the reduct of proof Σi from Γi of Ai. This reduct is guaranteed to be a proof in virtue of the proofhood of Σi from Γi of Ai, which occurred as a sub-proof in the proof which contained the detour.

∨

  Γ0
  Σ0
  Ai               Γ1 [A1]    Γ2 [A2]
  ───────── ∨I       Σ1          Σ2
  A1 ∨ A2            C           C
  ──────────────────────────────── ∨E
               C

       ↦

    Γ0
    Σ0
  Γi  Ai
    Σi
    C

In words, we are given a proof which contains a proof Σ0 from Γ0 of Ai, followed by ∨ introduction to get A1 ∨ A2, followed immediately by the elimination rule for ∨ with conclusion C, consisting of a proof Σ1 from assumptions Γ1, A1 of C, and a proof Σ2 from Γ2, A2 of C, discharging the assumptions of A1 and A2. The reduction procedure produces the reduct which consists of the proof Σi from Γi, Ai of C, where Ai is obtained by the proof Σ0 from Γ0. As in the case for →, we are essentially "tacking on" the proof Σ0 from Γ0 of Ai instead of taking Ai as an assumption for discharge by application of ∨ elimination. In virtue of the proofhood of Σ0 from Γ0 of Ai, and of Σi from Γi, Ai of C, we are guaranteed that this reduct is a proof.

∀

In this case, we will effect one of two reductions based on whether b ∈ FV(Γ0) \ FV(∀xFa[x]) or not. First, if b ∉ FV(Γ0) \ FV(∀xFa[x]) the reduction is:

  Γ0    Γ1       ...   Γn
  Σ0    Σ1       ...   Σn
  F     Fa[a1]   ...   Fa[an]
  ─────────────────────────── ∀I
          ∀xFa[x]
  ─────────────────────────── ∀E
           Fa[b]

       ↦

  Γ0
  Σ0a[b]
  Fa[b]

In words this is to say that if b ∉ FV(Γ0) \ FV(∀xFa[x]), then replace all occurrences of a in the proof Σ0 with b to prove Fa[b]. We are then assured that Σ0a[b] is still a proof of Fa[b] from Γ0 by recourse to the proofhood of Σ0 of F from Γ0, as well as the theorem on substitution in proofs. If b ∈ FV(Γ0) \ FV(∀xFa[x]) the reduction is even simpler:

  Γ0    Γ1       ...   Γn
  Σ0    Σ1       ...   Σn
  F     Fa[a1]   ...   Fa[an]
  ─────────────────────────── ∀I
          ∀xFa[x]
  ─────────────────────────── ∀E
           Fa[b]

       ↦

  Γi
  Σi
  Fa[ai]

where b = ai for some 1 ≤ i ≤ n. In words this is simply to say that if we already had a proof Σi of Fa[ai] from Γi, the detour reduces to this proof. Since this is a subproof of the proof to be reduced, with no substitution, we are guaranteed that the reduct is a proof.

∃

  Γ0
  Σ0
  F              Γ1 [Fa[b]]
  ───────── ∃I       Σ1
  ∃xFa[x]            C
  ───────────────────── ∃E
            C

       ↦

    Γ0
    Σ0
  Γ1  F
    Σ1b[a]
    C

This is to say that, on the left, we are given a proof Σ0 of F from Γ0, from which we conclude ∃xFa[x], followed by a proof Σ1 of C from Γ1, Fa[b]. Then by discharging the assumption of Fa[b] according to the elimination rule for ∃, the conclusion of C is drawn. The reduction procedure in effect replaces the assumption of Fa[b] with a proof Σ0 of F from Γ0, replacing all occurrences of b in Σ1 with a. By appeal to the proofhood of Σ1 and Σ0, and the theorem on substitution in proofs, we can see that the reduct in this case is a proof.

4.2 Harmony

On top of their admitting of reduction procedures, at least the rules for the quantifiers appear to be intuitively harmonious. In the case of ∀, for example, it is quite easy to see that the rules match in the intuitive sense:

  Γ0    Γ1       ...   Γn              Γ
  Σ0    Σ1       ...   Σn              Σ
  A     Aa[a1]   ...   Aa[an]          ∀xAa[x]
  ─────────────────────────── ∀I       ───────── ∀E
          ∀xAa[x]                       Aa[b]

where a ∉ FV(Γ0) and a1, . . . , an are the free variables in Γ0 minus those in ∀xAa[x]; and where b ∉ FV(∀xAa[x]).

As can be seen, the sufficient conditions for introduction of the universal statement are precisely the necessary consequences of the statement. For this reason there is no apparent loss or–more importantly, in light of Steinberger's Principle of Innocence [20]–gain of information via any detour. The latter of these properties was even conclusively established by the existence of a reduction procedure for the rules.

But what about, say, conjunction? At first glance, the restriction asymmetry makes the rule pair look awkward:

  Γ0    Γ1          Γ
  Σ0    Σ1          Σ
  A0    A1          A0 ∧ A1
  ────────── ∧I     ───────── ∧E
  A0 ∧ A1              A0

where either FV(A1) ⊆ FV(A0, Γ), or |FV(A1)| = 1 and FV(Γ, A0) = ∅.

How does the proof system get away with not having symmetric restrictions on the I rule? By appeal to the reduction procedure we have already shown that the rules respect the Principle of Innocence by preventing us from drawing too much information out via the E rule with respect to that contained in the premises leading to its introduction.

More precisely, then, if one is always able to infer A0 ∧ A1 from proofs of A0 and A1, how is the elimination rule not seen to be overly restrictive, preventing us from drawing out all of the information contained in the proof of the conjunction? Indeed, when we aren't dealing with sentences–which would guarantee that the restriction on the E rule is always met–then it would appear that there will be cases where we can introduce, but not then immediately eliminate, logical constants, which seems to underwrite the worry that we're not getting enough information out via the E rule.

Let's look at an example where the restrictions on ∧E are not met. Consider the following proof:

  ∀xP(x)          ∀xP(x)
  ──────── ∀E     ──────── ∀E
    P(a)            P(b)
  ───────────────────────── ∧I
       P(a) ∧ P(b)

Recall that the restrictions on ∧E were that either FV(A1) ⊆ FV(A0, Γ), or |FV(A1)| = 1 and FV(Γ, A0) = ∅. In this case FV(P(b)) ⊈ FV(P(a), ∀xP(x)), and FV(∀xP(x), P(a)) ≠ ∅ (and likewise with a and b interchanged), so neither disjunct of the restriction is met. We can thus introduce P(a) ∧ P(b), but cannot follow the introduction with a corresponding elimination. By not being able to do this, are we compromising harmony? In particular, are we losing information? I argue no to each of these.

To see why, consider the semantic structure of the argument just given. The structure is:

∀xP(x) ⊨ P(a), ∀xP(x) ⊨ P(b)  ⇒  ∀xP(x) ⊨ P(a) ∧ P(b)

In the antecedent, this is to say that for any U, for any U-assignment σ 1-1 on FV(∀xP(x), P(a)), if U ⊨ ∀xP(x)[σ], then U ⊨ P(a)[σ]. In the consequent, this is to say that for any U, for any U-assignment σ 1-1 on FV(∀xP(x), P(a), P(b)), if U ⊨ ∀xP(x)[σ], then U ⊨ P(a) ∧ P(b)[σ].

Notice that the information contained in the consequent is weaker–there are strictly fewer variable assignments involved than in the antecedent. If we're motivated to preserve information, we actually don't want to license elimination in this case, because we actually lost information in inferring P(a) ∧ P(b)! Thus, to eliminate to either of P(a) or P(b) would be to pull a stronger claim out of a weaker one.

The reason for this oddity is, I think, instructive. The argument against harmony in fact seems plausible because of a failure to distinguish the information content of the premises of an introduction from the information content of its conclusion. By distinguishing between the information content of the premises and the conclusion, we can see that the restrictiveness of the elimination rule actually prevents us from making a misstep; information content was lost in the transition to P(a) ∧ P(b), and we can't just enrich to get it back. Thus, not only do the restrictions on the elimination rule not weaken the information, they prevent specious reasoning based on weakened information. To put this another way, not only do the restrictions on the elimination rule not violate harmony, they preserve harmony. What the restrictions on the E rules are actually tracking is information preservation in the transition from the premises of an I rule to its conclusion; if there is no such weakening, the E rules can always be applied.

This is an interesting contrast with FOL. With the standard rules of FOL we can, in a sense, "keep proving" after making an unnecessary introduction. In W-logic, this is not always the case–in some cases the introduction must be removed before proceeding in the proof. This is not, however, due to a lack of intuitive harmony. Instead, it is because our proof system blocks information inflation via E rules. This property isn't noticeable in FOL because FOL has a special property, over and above intuitive harmony, of global information preservation. What this global information preservation amounts to is an indifference to accidental introduction.

It is open for debate whether this property is desirable, however. For a proof in a system without the property to preserve information throughout–say, in W-logic–it would be necessary to first effect the necessary elimination rules and then proceed to the introduction rules. As is well known, applying the elimination rules first gives our proofs the desirable "hourglass" shape. In this way the loss of global information preservation actually results in our obtaining a clearly desirable property.

5 Conclusion

5.1 Harmony and Co-Reference

In the previous section it was shown that the introduction and elimination rules for W-logic are harmonious. I also spoke earlier of identity as playing two distinct roles in FOL=, namely those of variable coordination and co-reference. But one should notice that throughout the exposition of W-logic, we dealt with a language containing no individual constants. Thus, the only notion we were working with was variable coordination. What I have then shown is that identity qua variable coordination, when absorbed into the rules for the propositional connectives and quantifiers, is harmonious, and hence is not ruled out as non-logical. Since we know that FOL=, in a language without constants, is expressively equivalent to W-logic, it then follows that we can express claims (e.g. numerical quantification) using only logical operators.

One might worry that adding to FOL= the part of identity responsible for expressing co-reference of individual constants (i) breaks the expressive equivalence with W-logic, or (ii) doesn't preserve harmony. While (ii) is true in a sense–but not worrisome, I will argue–the first worry is easily dispelled. In [2], Wehmeier showed that the expressive equivalence of W-logic with FOL= is restored, when the language of both contains constants, by adding to W-logic a predicate which expresses co-reference of individual constants.

Harmony, however, is different, and this is where the two notions of identity diverge. When co-reference is treated in FOL=, the debate kicks off with the issues we noted with Reflexivity and Congruence, as discussed by Griffiths and Read in their correspondence. W-logic is, in fact, the same: the co-reference predicate we needed to add to W-logic to restore expressive equivalence with FOL= with constants comes in exactly the same form. For the introduction rule we have Reflexivity–every constant co-refers with itself. For the elimination rule we have Congruence–if a co-refers with b, then substitution of a for b (resp. b for a) in any formula F is admissible. These rules are represented in natural deduction as follows:

  Reflexivity          Congruence

                       Γ        Σ
  ───────              c ≡ d    F
   c ≡ c               ─────────────
                          Fc[d]

As we noted, these rules don't appear to be intuitively harmonious. To reiterate what was said earlier, the problem seemed to be with Reflexivity's

inability to capture the way statements with different constants flanking ≡ are introduced.

But why should this be surprising, or worse, worrisome? These statements are, after all, justified on empirical grounds, contingent on the way our language hooks up to the world⁵. That Hesperus and Phosphorus co-denote was a hard-won empirical fact, not one that came via some pre-established linguistic harmony concerning the use of the locutions 'co-refers with' or 'is identical to'. This realization leads us down the same path that Read [15] follows when he introduces his new introduction rule for identity that goes over and above Reflexivity, namely one that cashes out the identity of constants in terms of their enjoying exactly the same properties. Unlike Read, however, we don't expect such a coincidence of properties to be justified by the meaning of '='. Instead the justification for expressions of co-reference comes in the same way as for any other empirical predicate. Thus, the logical inferentialist should not expect the rules for co-reference to be harmonious, because it is not logical. Were the introduction rule for ≡ (resp. identity qua co-reference) to exhibit harmony, not only would it come as a surprise, it could be seen as a serious indictment of the concept of harmony as a defining feature of logicality.

5.2 Identity and Logical Inferentialism

In this paper we have made four primary points. First, we established that W-logic respects the Principle of Innocence. Recall that this principle states that, as Griffiths puts it, "logical inference should allow us to manipulate truths that we have already discovered, but should not allow us to discover any previously unknown atomic truths" [4]. To demonstrate this, we showed that the introduction and elimination rules for each of the quantifiers and propositional connectives admit of a reduction procedure which allows us to eliminate from deductions local maxima of logical complexity. We also showed that there is no worry of the elimination rules failing to pull out all of the information content of the conclusion of introduction rules. From this we concluded that the rules of W-logic not only respect the Principle of Innocence but, further, are intuitively harmonious.

Second, we showed that W-logic preserves as harmonious identity qua variable coordination. This followed from the intuitive harmony of the rules of W-logic without constants combined with its co-expressiveness with FOL= without constants. Third, this separation of the variable coordination and

⁵ See [3] and its references for more argumentation along these lines, especially Chapter 3 of Fiengo and May.

coreference functions of identity also allowed us to respect the intuition that expressions of the form c ≡ d are introduced on empirical grounds. For this reason the logical inferentialist should not expect harmony for coreference. Finally, we have provided an explanation of ='s disharmony by showing that it collapses two semantic functions. Our expectation that = should be logical rides on its role in variable coordination. However, the logicality of = is infected by its also playing the role of coreference, wherein we noted the logical inferentialist should expect no harmony. The answer to the question, "Is identity harmonious?" would thus seem to depend on which aspect of identity one is speaking of.

A Proofs

Theorem 1 Let F be a formula and a, a′ be free variables. Then for all structures U:

U ⊨ F[σ{a := u}] iff U ⊨ Fa[a′][σ], where σ(a′) = u.

Proof Let F be a formula and a, a′ be free variables, U a W-logical structure, and σ a U-assignment. By induction on the complexity of F.

Base Case Let F be an atomic formula of the form P(a, a1, . . . , an) for a1, . . . , an ∈ A and P a predicate of arity n+1. (We thus assume, without loss of generality, that a occurs in the first position.) By Definition 2, U ⊨ P(a, a1, . . . , an)[σ{a := u}] iff ⟨σ{a := u}(a), σ(a1), . . . , σ(an)⟩ ∈ P^U. Since σ{a := u}(a) = u, this is just ⟨u, σ(a1), . . . , σ(an)⟩ ∈ P^U. But by assumption σ(a′) = u, so this is equivalent to ⟨σ(a′), σ(a1), . . . , σ(an)⟩ ∈ P^U. Again by Definition 2, this is the case iff U ⊨ P(a′, a1, . . . , an)[σ]. Hence U ⊨ P(a, a1, . . . , an)[σ{a := u}] iff U ⊨ P(a′, a1, . . . , an)[σ], as required to prove.

Ind. Hyp. Assume this holds for formulas A, B.

Ind. Step

→ By the → clause of Definition 2, U ⊨ A → B[σ{a := u}] iff U ⊭ A[σ{a := u}] or U ⊨ B[σ{a := u}]. By application of the inductive hypothesis, this is equivalent to U ⊭ Aa[a′][σ] or U ⊨ Ba[a′][σ]. But by the → clause of Definition 2, this is the case iff U ⊨ (A → B)a[a′][σ]. Hence U ⊨ A → B[σ{a := u}] iff U ⊨ (A → B)a[a′][σ], as required to prove.

∨ By the ∨ clause of Definition 2, U ⊨ A ∨ B[σ{a := u}] iff U ⊨ A[σ{a := u}] or U ⊨ B[σ{a := u}]. By application of the inductive hypothesis, this is equivalent to U ⊨ Aa[a′][σ] or U ⊨ Ba[a′][σ]. But by the ∨ clause of Definition 2, this is the case iff U ⊨ (A ∨ B)a[a′][σ]. Hence U ⊨ A ∨ B[σ{a := u}] iff U ⊨ (A ∨ B)a[a′][σ], as required to prove.

∧ By the ∧ clause of Definition 2, U ⊨ A ∧ B[σ{a := u}] iff U ⊨ A[σ{a := u}] and U ⊨ B[σ{a := u}]. By application of the inductive hypothesis, this is equivalent to U ⊨ Aa[a′][σ] and U ⊨ Ba[a′][σ]. But by the ∧ clause of Definition 2, this is the case iff U ⊨ (A ∧ B)a[a′][σ]. Hence U ⊨ A ∧ B[σ{a := u}] iff U ⊨ (A ∧ B)a[a′][σ], as required to prove.

∀ We split into two cases, depending on whether a = b.

∗ Assume a = b. Then U ⊨ ∀xAb[x][σ{a := u}] iff U ⊨ ∀xAa[x][σ{a := u}]. But a ∉ FV(∀xAa[x]), so this happens iff U ⊨ ∀xAa[x][σ]. Observe that (∀xAa[x])a[a′] is just ∀xAa[x], hence U ⊨ ∀xAb[x][σ{a := u}] iff U ⊨ (∀xAa[x])a[a′][σ].

∗ Assume a ≠ b. Then by the ∀ clause of Definition 2, U ⊨ ∀xAb[x][σ{a := u}] iff U ⊨ A[σ{a := u}{b := u′}] for all u′ ∉ σ[FV(∀xAb[x])]. Since a ≠ b, this is the same as U ⊨ A[σ{b := u′}{a := u}]. By application of the inductive hypothesis, this is the case iff U ⊨ Aa[a′][σ{b := u′}]. Hence by the ∀ clause of Definition 2, this is the case iff U ⊨ (∀xAb[x])a[a′][σ].

Hence U ⊨ ∀xAb[x][σ{a := u}] iff U ⊨ (∀xAb[x])a[a′][σ], as required to prove.

∗ Assume a = b. Then U ∃xAb[x][σ{a := u}] iff U ∃xAa[x][σ{a := u}]. But a∈ / FV (∀xAa[x]), so U ∃xAa[x][σ]. 0 0 Observe that (∃xAa[x])a[a ] is just ∃xAa[x], hence U (∃xAa[x])a[a ][σ]. ∗ Assume a 6= b. Then by the ∃ clause of Definition 2, U 0 ∃xAb[x][σ{a := u}] iff U A[σ{a := u}{b := u }], for some 0 u ∈/ σ[FV (∃xAb[x])]. Since a 6= b, this is the same as U A[σ{a := u}{b := u0}]. By application of the inductive hy- 0 0 pothesis, this is the case iff U Aa[a ][σ{b := u }]. Hence by the ∃ clause of Definition 2, this happens iff U ∃xAa[x][σ], 0 which as observed above is the same U (∃xAa[x])a[a ][σ]. 0 So U (∃xAa[x])a[a ][σ]. 0 Hence U ∃xAb[x][σ{a := u}] iff U (∃xAa[x])a[a ][σ], as re- quired to prove.

Theorem 2 For any Σ that is a proof of F from Γ, (Σ^b)a[b] is a proof of Fa[b] from Γa[b], where a, b ∈ A.⁶

Proof By induction on the length n of a proof Σ of F from assumptions Γ.

Base Let n = 1. Then trivially (Σ^b)a[b] constitutes a proof.

⁶ The strategy of this proof is based on that provided by Tennant [19].

Ind. Hyp. Assume this holds of proofs of length ≤ m − 1.

Ind. Step Let Σ be a proof of length m of F from assumptions Γ. I will prove this by cases, according to the rule applied to generate the last line of the proof. Note that the propositional connectives are trivial. I will provide proofs of the cases for → to demonstrate this.

→ I Suppose Σ is:

  Γ0  [A]
    Σ0
    B
  ─────── →I
  A → B

Then (Σ^b)a[b] is:

  Γ0a[b]  [Aa[b]]
    (Σ0^b)a[b]
    Ba[b]
  ─────────────── →I
  (A → B)a[b]

But clearly (A → B)a[b] is Aa[b] → Ba[b]. So (Σ^b)a[b] is:

  Γ0a[b]  [Aa[b]]
    (Σ0^b)a[b]
    Ba[b]
  ───────────────── →I
  Aa[b] → Ba[b]

By application of the inductive hypothesis to (Σ0^b)a[b], it is a proof. But the final application of →I is correct, so that (Σ^b)a[b] is a proof.

→ E Suppose Σ is:

  Γ0       Γ1
  Σ0       Σ1
  A → B    A
  ───────────── →E
       B

Then (Σ^b)a[b] is:

  Γ0a[b]         Γ1a[b]
  (Σ0^b)a[b]     (Σ1^b)a[b]
  (A → B)a[b]    Aa[b]
  ──────────────────────── →E
         Ba[b]

But clearly (A → B)a[b] is Aa[b] → Ba[b], so (Σ^b)a[b] is:

  Γ0a[b]           Γ1a[b]
  (Σ0^b)a[b]       (Σ1^b)a[b]
  Aa[b] → Ba[b]    Aa[b]
  ─────────────────────────── →E
          Ba[b]

By the inductive hypothesis applied to (Σ0^b)a[b] and (Σ1^b)a[b], each is a proof. But clearly FV(Aa[b]) ⊆ FV(Ba[b]), or |FV(Aa[b])| ≤ 1 and FV(Γ0a[b], Γ1a[b], Ba[b]) = ∅ (according as FV(A) ⊆ FV(B), or |FV(A)| ≤ 1 and FV(Γ0, Γ1, B) = ∅), so the final application of →E is correct. So (Σ^b)a[b] is a proof.

∀ I Suppose Σ is:

  Γ0    Γ1       ...   Γn
  Σ0    Σ1       ...   Σn
  A     Ad[d1]   ...   Ad[dn]
  ─────────────────────────── ∀I
          ∀xAd[x]

Note that the final application of ∀I closes d in Σ0. Thus Σ^b is:

  Γ0      Γ1       ...   Γn
  Σ0^b    Σ1       ...   Σn
  Ad[e]   Ad[d1]   ...   Ad[dn]
  ───────────────────────────── ∀I
        ∀x(Ad[e])e[x]

where e ≠ b. Hence (Σ^b)a[b] is:

  Γ0a[b]        Γ1       ...   Γn
  (Σ0^b)a[b]    Σ1       ...   Σn
  (Ad[e])a[b]   Ad[d1]   ...   Ad[dn]
  ─────────────────────────────────── ∀I
        (∀x(Ad[e])e[x])a[b]

But then by the inductive hypothesis applied to (Σ0^b)a[b], and since e ∉ FV(Γ0a[b]) and e ∉ FV((∀x(Ad[e])e[x])a[b]) by e ≠ b, the final application of ∀I is correct and (Σ^b)a[b] is a proof.

∀ E Suppose Σ is:

  Γ0
  Σ0
  ∀xA
  ─────── ∀E
  Ax[d]

Then (Σ^b)a[b] is:

  Γ0a[b]
  (Σ0^b)a[b]
  (∀xA)a[b]
  ──────────── ∀E
  (Ax[d])a[b]

Now (∀xA)a[b] is ∀x(Aa[b]), and (Ax[d])a[b] is ((Aa[b])x[d])a[b]. So (Σ^b)a[b] is just:

  Γ0a[b]
  (Σ0^b)a[b]
  ∀x(Aa[b])
  ────────────────── ∀E
  ((Aa[b])x[d])a[b]

Hence by the inductive hypothesis applied to (Σ0^b)a[b], it is a proof. But clearly d ∉ FV(∀x(Aa[b])), since otherwise b would have to be d, which isn't the case because b occurs free in (Σ^b)a[b]. Hence the final application of ∀E is correct, so that (Σ^b)a[b] is a proof.

∃ I Suppose Σ is:

  Γ0
  Σ0
  Ax[d]
  ─────── ∃I
  ∃xA

Then (Σ^b)a[b] is:

  Γ0a[b]
  (Σ0^b)a[b]
  (Ax[d])a[b]
  ──────────── ∃I
  (∃xA)a[b]

Now (∃xA)a[b] is ∃x(Aa[b]), and (Ax[d])a[b] is ((Aa[b])x[d])a[b].⁷ Thus (Σ^b)a[b] is:

  Γ0a[b]
  (Σ0^b)a[b]
  ((Aa[b])x[d])a[b]
  ────────────────── ∃I
  ∃x(Aa[b])

By the inductive hypothesis applied to (Σ0^b)a[b], (Σ0^b)a[b] is a proof. But since d ∉ FV(∃x(Aa[b])) and either d ∈ FV(Γ0a[b]) or FV(Γ0a[b], ∃x(Aa[b])) = ∅ (according as d ∈ FV(Γ0) or FV(Γ0, ∃xA) = ∅), the final application of ∃I is correct, so that (Σ^b)a[b] is a proof.

⁷ We need this second substitution of b for a in case a = d.

∃ E Suppose Σ is:

  Γ0        Γ1  [Ax[d]]
  Σ0            Σ1
  ∃xA           C
  ────────────────────── ∃E
          C

Note that the final application of ∃E closes d in Σ1. Thus Σ^b is:

  Γ0        Γ1  [Ax[e]]
  Σ0^b          Σ1^b
  ∃xA           C
  ────────────────────── ∃E
          C

where e 6= b and e doesn’t occur in Σ. Hence Σba[b] is:

  Γ0a[b]        Γ1a[b]  [(Ax[e])a[b]]
  (Σ0^b)a[b]        (Σ1^b)a[b]
  (∃xA)a[b]         Ca[b]
  ──────────────────────────────────── ∃E
              Ca[b]

But clearly (∃xA)a[b] is ∃x(Aa[b]), and since e ≠ b and e doesn't occur in Σ, (Ax[e])a[b] is (Aa[b])x[e]. Thus (Σ^b)a[b] is:

  Γ0a[b]        Γ1a[b]  [(Aa[b])x[e]]
  (Σ0^b)a[b]        (Σ1^b)a[b]
  ∃x(Aa[b])         Ca[b]
  ──────────────────────────────────── ∃E
              Ca[b]

By the inductive hypothesis applied to (Σ0^b)a[b] and (Σ1^b)a[b], each is a proof. But e ∉ FV(∃x(Aa[b]), Ca[b], Γ1a[b]), and either FV(∃x(Aa[b])) ⊆ FV(Γ0a[b], Γ1a[b], Ca[b]) and e ∈ FV(Γ0a[b]), or FV(Γ0a[b], Γ1a[b], Ca[b]) = ∅ and |FV(∃x(Aa[b]))| ≤ 1 (according as either FV(∃xA) ⊆ FV(Γ0, Γ1, C) and d ∈ FV(Γ0), or FV(Γ0, Γ1, C) = ∅ and |FV(∃xA)| ≤ 1). So (Σ^b)a[b] is a proof.

References

[1] Kai Wehmeier. Wittgensteinian predicate logic. Notre Dame Journal of Formal Logic, 45:1–11, 2004.

[2] Kai Wehmeier. Wittgensteinian tableaux, identity, and co-denotation. Erkenntnis, 69(3):363–376, 2008.

[3] Kai Wehmeier. How to live without identity–and why. Australasian Journal of Philosophy, 90(4):761–777, 2012.

[4] Owen Griffiths. Harmonious rules for identity. The Review of Symbolic Logic, 7(3):499–510, 2014.

[5] Neil Tennant. Inferentialism, logicism, harmony, and a counterpoint. In Alex Miller, editor, Essays for Crispin Wright: Logic, Language and Mathematics, page forthcoming. Oxford University Press.

[6] Stephen Read. Harmonic inferentialism and the logic of identity. The Review of Symbolic Logic, forthcoming, 2015.

[7] Michael Dummett. The philosophical basis of intuitionistic logic. pages 215–247. Cambridge, 1983.

[8] A.N. Prior. The runabout inference-ticket. Analysis, 21(2):38–39, 1960.

[9] Gerhard Gentzen. Untersuchungen über das logische Schließen. In M.E. Szabo, editor, The Collected Papers of Gerhard Gentzen, pages 68–131. North-Holland, 1969.

[10] A.S. Troelstra and H. Schwichtenberg. Basic Proof Theory. Cambridge University Press, 2000.

[11] Ian Rumfitt. "Yes" and "No". Mind, 109(436):781–823, 2000.

[12] Dag Prawitz. Natural Deduction. Almqvist & Wiksell, Stockholm, 1965.

[13] Stephen Read. Harmony and autonomy in classical logic. Journal of Philosophical Logic, 29:123–154, 2000.

[14] Michael Dummett. The Logical Basis of Metaphysics. Harvard University Press, 1991.

[15] Stephen Read. Identity and harmony. Analysis, 64(2):113–119, 2004.

[16] Ian Rumfitt. Unilateralism disarmed: A reply to Dummett and Gibbard. Mind, 111(442):305–321, 2002.

[17] . Basic Laws of Arithmetic. Oxford University Press, 2013.

[18] W.V.O. Quine. Word and Object. MIT Press, Cambridge, Mass., 1960.

[19] Neil Tennant. Natural Logic. Edinburgh University Press, 1978.

[20] Florian Steinberger. Harmony and logical inferentialism. PhD thesis, University of Cambridge, 2009.
