Logical Inferentialism and Identity
Chris Mitsch

April 28, 2017

Abstract

As Wehmeier [1, 2] has shown, the identity-free system of Wittgensteinian predicate logic (W-logic), enriched with a predicate expressing that individual constants corefer, is expressively equivalent to first-order logic with identity (FOL=). I show that the introduction and elimination rules in W-logic for the usual logical constants (except classical negation) are harmonious in both the intuitive and the Prawitzian sense, while those for the coreference predicate share the troubles of identity in FOL=. Thus, if we follow Wehmeier [3] in viewing classical identity as an amalgamation of several distinct concepts, among which is coreference, we can precisely separate the harmonious logical component of identity (responsible for variable co-ordination) from its non-harmonious and non-logical component (i.e., coreference). We are thus able to account, from an inferentialist perspective, both for the intuition that classical identity is a logical notion and for its failure to be fully harmonious in the intuitive sense, as shown by Griffiths [4].

1 Inferentialism

The inferentialist is engaged in a debate over the core notion in semantics; that is, what is it that grounds our account of linguistic meaning? One account, typically called truth-theoretic semantics, characterizes the meaning of a given sentence in terms of its truth conditions. The guiding idea for this account is that the meaning of a statement is given by the way it latches onto the world. In general this approach relies strongly on associating sets of possible worlds with sentences; depending on the flavor of truth-conditional semantics we are working with, we will speak of sentences referring to tuples of objects, singular terms referring to objects, predicates referring to a set of objects, etc. Given the primacy of the notions of truth and reference, model theory tends to be the most natural framework in which to flesh out one's truth-conditional semantics.

In contrast, the inferentialist seeks to ground the meaning of a statement or term in the way it is used in language. The inferentialist's strategy is to characterize the meaning of a statement by focusing on what Tennant calls the "to-and-fro" of conversation or inference [5]. The guiding idea for this account is that when we add to a language a new sentence or sub-sentential phrase (like a noun phrase, name, or verb), we are (implicitly or explicitly) associating it with (i) the grounds for asserting statements containing it, or (ii) those statements that are implied by it [6]. Thus, Dummett [7]:

    For utterances considered quite generally, the bifurcation between the two aspects of their use lies in the distinction between the conventions governing the occasions on which the utterance is appropriately made and those governing both the responses of the hearer and what the speaker commits himself to by making the utterance: schematically, between the conditions for and the consequences of it.

Some inferentialists, like Dummett and Prawitz, are concerned primarily with the meaning of the logical constants. The goal of this restricted form of inferentialism, called logical inferentialism, is to cash out the meanings of the logical constants in terms of their introduction and elimination rules, typically in a natural deduction setting. Fundamental to this project is the notion of proof.
In light of this, and in contrast to the model-theoretic semantics championed by truth-conditional theorists, advocates of logical inferentialism advance a proof-theoretic semantics. This approach seeks to ground the meanings of the logical constants in their inference rules in the proof theory. For the logical inferentialist, the meanings of the logical constants are manifest in, and completely analysed by, these rules. The job of the logical inferentialist thus becomes describing exactly which transitions, using a given set of logical constants, are admissible in reasoning.

Consider the case of conjunction. What can we conclude from A ∧ B; in other words, what are the elimination rules for ∧? Well, we can conclude A, and we can conclude B. Conversely, what is required of us to conclude A ∧ B, i.e., what is the introduction rule for ∧? Here we must already have established A and established B. With this analysis we have characterized the admissible transitions to and from A ∧ B, and thus characterized the meaning of ∧.

Given the simplicity of giving the meaning of ∧ in terms of its use, it may seem that all we require for characterizing the meaning of a given logical constant are rules for its introduction and elimination in proof. Further, one may be inclined to believe that this kind of full explication of the meaning of a given constant, in terms of introduction and elimination rules, suffices for its being logical. The thought is that if determination of the meaning of a given constant is sufficient for determination of the validity of any inference in which it plays the dominant role, the constant must be logical. Or, as Prior put it, "it is sometimes alleged that there are inferences whose validity arises solely from the meanings of certain expressions occurring in them... let us say that such inferences, if any such there be, are analytically valid." [8] The inference to logicality is thus one that assumes that I and E rules for a constant, in virtue of their fully specifying its meaning, confer validity on all inferences in which it occurs dominantly.

But consider Prior's infamous tonk [8]. Let its introduction and elimination rules be A ⊢ A tonk B and A tonk B ⊢ B, respectively. By the transitivity of ⊢, we can conclude that A ⊢ B. The purported logical constant tonk thus straightforwardly leads to the inconsistency of any logical system to which it is added. Now here's the question: do we count tonk as logical or not? Pushing farther back still, we might wonder whether we've even fixed a meaning for tonk at all. Assuming that we'd like to avoid inconsistency, the answer seems to be an unambiguous "no". So what went wrong?

2 Inferentialism and FOL=

2.1 Inferentialism and Harmony

One response is to say that we made a mistake in thinking that any pair of introduction and elimination rules for a constant suffices for the specification of that constant's meaning. Looking back at our case for ∧, we might notice a certain correspondence, or harmony, between the inference rules that is lacking in the case of tonk: from the elimination rules we can derive no more than what it took to infer A ∧ B, namely A and B. In the tonk case we concluded B from A tonk B, which cannot in general be inferred from A, so that there is a clear lack of harmony.
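The contrast can be made vivid with a small sketch in Lean (the names Tonk, tonk_intro, tonk_elim, and tonk_collapse are illustrative labels of mine; the rules are simply postulated as axioms, since no such connective exists in the language):

    -- Prior's tonk, stipulated by fiat: both rules are bare axioms.
    axiom Tonk : Prop → Prop → Prop
    axiom tonk_intro {A B : Prop} : A → Tonk A B   -- A ⊢ A tonk B
    axiom tonk_elim  {A B : Prop} : Tonk A B → B   -- A tonk B ⊢ B

    -- Chaining the two rules collapses ⊢: an arbitrary B follows from an arbitrary A.
    theorem tonk_collapse {A B : Prop} (hA : A) : B :=
      tonk_elim (tonk_intro hA)

    -- The rules for ∧, by contrast, are balanced: the eliminations return
    -- exactly the two components that the introduction demanded.
    example {A B : Prop} (hA : A) (hB : B) : A ∧ B := And.intro hA hB
    example {A B : Prop} (h : A ∧ B) : A := h.left
    example {A B : Prop} (h : A ∧ B) : B := h.right

Nothing in the stipulation itself blocks tonk_collapse; the defect lies entirely in the mismatch between what tonk_intro demands and what tonk_elim delivers.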
It is at this point that Gentzen [9] is typically invoked, having said that "the introductions represent, as it were, the 'definitions' of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions." The rough idea we can take from this is that for the logical constants, we expect a precise matching between the information content imputed by the introduction rule and that drawn out by the elimination rule. In addition to providing introduction and elimination rules for a constant, then, we might additionally require that they be harmonious in this sense. I will call this the intuitive notion of harmony.

In an effort to provide a formal definition of this, Prawitz, following Lorenzen, introduced the inversion principle¹ [12]:

    Let α be an application of an elimination rule that has B as consequence. Then, deductions that satisfy the sufficient condition for deriving the major premiss of α, when combined with deductions of the minor premisses of α (if any), already "contain" a deduction of B; the deduction of B is thus obtainable directly from the given deductions without the addition of α.

¹ Inversion, as we use the word here, is not to be confused with inversion as discussed in, e.g., [10]; it is instead what they call detour conversion. Also note that this does not privilege either I or E rules. Though Gentzen's quote above, privileging I rules, represents the majority position, others such as Rumfitt [11] don't take the I rules to be (universally) privileged.

In coarser terms, this principle states that any detour in a proof through an introduction rule for a constant followed directly by its corresponding elimination rule is, in principle, eliminable. The principle is supposed to be seen as a formal model of the intuitive notion of harmony in virtue of what it purports to rule out. Since any introduction of a constant followed by its immediate elimination in a deduction is avoidable, it can't be the case that the elimination rule is too strong relative to the introduction rule, in the sense that we can draw out more information content than was put in (or, equivalently, that the introduction rule is too weak relative to the elimination rule). As Prawitz puts it, nothing is "gained" by the detour [12].

Note that our concern here is with proofs in which the major premise is introduced canonically, that is, by the corresponding introduction rule for the dominant operator. The reason for this restriction relates directly to the inferentialist's purpose: what is intended to be recovered in these detour conversions is the subproof of the desired conclusion that was already in the proof. To demonstrate that we can't expect this to obtain in situations where the major premise is introduced non-canonically, consider the following examples:
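One concrete way to see the point is through the Curry-Howard correspondence, under which a detour conversion is just the reduction of a proof term. The following Lean sketch (an illustration of mine, in proof-term rather than natural-deduction notation) contrasts a canonical major premise, which yields a reducible redex, with a non-canonical one, which leaves nothing to convert:

    -- Canonical case: the major premise A ∧ B is built by ∧-introduction and
    -- immediately eliminated. The proof term (And.intro hA hB).left is a redex
    -- that reduces to hA, so a deduction of A was already "contained" in the proof.
    example {A B : Prop} (hA : A) (hB : B) : A := (And.intro hA hB).left

    -- Non-canonical case: the major premise is a bare hypothesis h : A ∧ B.
    -- The elimination h.left has no matching introduction to cancel against,
    -- so there is no detour to remove and no prior deduction of A to recover.
    example {A B : Prop} (h : A ∧ B) : A := h.left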