Completeness
GABRIELE LOLLI

COMPLETENESS

Being an Account of the Vain Search for Meaning in Logic

TABLE OF CONTENTS

Introduction, whose (intended) effect is likely to be that of bewildering and confusing the reader, according to a Socratic maxim: first realise that you don’t know, contrary to the lazily received wisdom, then proceed to untangle the wool

Enter the dramatis personae

Approaching the problem, which doesn’t seem to be a problem of every-day experience, but at variance with it; it appears that it is not a case of saving the phenomena, but of exploding them

First tentative plunge into the proof, which comes to nothing because the two horns to be connected turn out to be one and the same, so there is nothing to prove

Second start, with a new calculus, while the ambiguous one is reduced to a technique

Solution with a cut of the Gordian knot: what we were trying to explain becomes a definition

Second ending, where a discourse is a discourse

Third ending, where again the discourse disappears, leaving only the phantoms of the things we want to talk about

A seemingly idle digression on algorithms and their correctness and completeness, where one is in trouble to distinguish which is which, i.e., which is the syntax and which is the semantics

Conclusion, where again everything gets mixed up, because if you persevere with splitting hairs you end up with nothing in your hands

New conclusion, where one sees that the findings of sophisticated logical theorems correspond to the naïve experience: the discovery of hot water

Final conclusion, where it is explained that hot water is better than cold water

Reopening of the case, with a supplementary investigation on the incompleteness of certain logics, in particular of the notorious second order one

The strange case of Dr. Skolem and Mr.
Gödel

Introduction, whose (intended) effect is likely to be that of bewildering and confusing the reader, according to a Socratic maxim: first realise that you don’t know, contrary to the lazily received wisdom, then proceed to untangle the wool

In a logic course the completeness theorem is a point of no return; you cross it, as Caesar crossed the Rubicon, and you cannot go back to the lost innocence. But the pristine innocence was made up of a lot of sinful biases and misconceptions. After the theorem, you cannot cheat any more; logic changes the way it is used and conceived (hence its status).

The completeness theorem is the first, sometimes the only theorem one meets in an introductory course - so it must be important. Before it, there are a lot of proofs, but no theorem (some say there is a sequence of lemmata, paving the way for the theorem). The proofs are very tiresomely detailed inductive proofs of properties of the languages and of the interpretations, which parallel the inductive definitions of the syntactic and semantic notions (assignments, satisfaction and so on); for example the property that A[x/y][y/x] is A if y is free for x in A and it is not free in A; or the property that if σ satisfies A and σ’ is such that σ’(y) = σ(x), again with a lot of strange conditions on variables, then σ’ satisfies A[x/y], and so on and so forth.

The completeness theorem on the contrary is a real theorem, though ironically sometimes it is not proved at all, especially in computer science or philosophy classes (when in math classes the maximal ideal theorem is used, no mention is made of their equivalence). What is the use of a proof, after all? Part of the innocence is the belief that logic is a technique.
In a logic course you spend your time improving your natural abilities by mastering new techniques, through formalisation and derivation exercises, but you do not prove theorems - that is a mathematicians’ affair (but for those painstaking and uninformative proofs of the lemmata, of course). Perhaps proving the theorem might lead you astray with mathematical spells (or might really teach you something).

The completeness theorem opens the Pandora’s box of the distinction between syntax and semantics; but Pandora’s box is full of riches as well as of sorrows. Our previous experience with natural language or mathematics is of no use in understanding the theorem, because such a distinction is not to be found in the use or in the study of any language. When learning a language, beginners certainly do grammatical analysis, which is useful from many points of view, for example to disambiguate ambiguous sentences, or even to grasp the meaning conveyed by a sentence. If a sentence is not grammatically correct it has no sense, though it can sometimes be understood from the context, or thanks to extra-linguistic circumstances; the parsing of the grammatical form is essential to understanding, insofar as the sense is compositional, in its simplest and most intuitive sense - that is, corresponding to the syntactic construction. However, in the study of a language one does only grammatical analysis, without meeting an independent semantics, even if one learns to grasp, through correct use, the sense and meaning of discourse.

In the first part of the logic course, students learn the syntax of predicative (first-order) languages, which is very much akin to grammar, as well as the rules of correct speech: alphabets, formation rules for terms and formulae, deduction rules and their chaining in inferences (derivations). Formulae are viewed as a painstaking and absolutely rigorous regimentation of phrases.
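The flavour of those tiresomely detailed inductive definitions and lemmata can be made concrete with a small sketch. The following is purely illustrative, not anything from the text: a toy first-order syntax (the choice of connectives, and the predicate and variable names, are arbitrary) with substitution of one variable for another, checking the property quoted above, that A[x/y][y/x] is A when y is free for x in A and y is not free in A.

```python
from dataclasses import dataclass

# A toy first-order syntax: atomic formulae over variables, implication,
# and universal quantification.  Frozen dataclasses give structural
# equality, so "is A" can be tested with ==.

@dataclass(frozen=True)
class Atom:                 # P(v1, ..., vn); arguments are variable names
    pred: str
    args: tuple

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class Forall:
    var: str
    body: object

def free_vars(f):
    """The free variables of a formula, by induction on its construction."""
    if isinstance(f, Atom):
        return set(f.args)
    if isinstance(f, Implies):
        return free_vars(f.left) | free_vars(f.right)
    if isinstance(f, Forall):
        return free_vars(f.body) - {f.var}

def free_for(y, x, f):
    """True when y is free for x in f: no free occurrence of x lies
    within the scope of a quantifier binding y."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Implies):
        return free_for(y, x, f.left) and free_for(y, x, f.right)
    if isinstance(f, Forall):
        if f.var == x:                      # x is not free inside
            return True
        if f.var == y and x in free_vars(f.body):
            return False                    # y would be captured
        return free_for(y, x, f.body)

def subst(f, x, y):
    """A[x/y]: replace the free occurrences of variable x by y.
    (Naive: correctness relies on y being free for x in f.)"""
    if isinstance(f, Atom):
        return Atom(f.pred, tuple(y if v == x else v for v in f.args))
    if isinstance(f, Implies):
        return Implies(subst(f.left, x, y), subst(f.right, x, y))
    if isinstance(f, Forall):
        if f.var == x:                      # bound occurrences stay put
            return f
        return Forall(f.var, subst(f.body, x, y))

# The lemma from the text: A[x/y][y/x] is A, provided y is free for x
# in A and y is not free in A.
A = Forall("z", Implies(Atom("P", ("x", "z")), Atom("Q", ("x",))))
assert free_for("y", "x", A) and "y" not in free_vars(A)
assert subst(subst(A, "x", "y"), "y", "x") == A
```

The side conditions are precisely the "strange conditions on variables" the text complains about: drop either hypothesis and the round trip fails. Substituting x by z in the formula A above, for instance, lets the free x be captured by the quantifier on z, and the original formula can no longer be recovered.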
The aim is to learn the action of the formation rules which give rise to schemata. In this way one can recognise all phrases with the same form, that is, constructed with the same sequence of applications of the grammatical formation rules.

The use of (variable) letters to express form, which goes back at least to Aristotle, is on a par with the primitive use of variables to be found at the beginning of modern arithmetic, when people began to use n for a generic number, in order to write down formulae or relations valid for all numbers. Sometimes people used instead a particular (albeit small) number, stating at the same time (without proof) that the calculations and reasoning nevertheless had general validity. If true, the effect was the same as with the generic n, the more so since the use of n wasn’t accompanied by a real general, say inductive, proof. History repeats itself with logical formulae.

The use of letters in itself does not mark a novelty with respect to a rigorous, albeit informal, study of language. The problem is to explain the why and wherefore of the quest for generality. The explanations are often off the mark and perversely twisted. The answer usually found in the introductions to logic textbooks is that a logical reasoning valid in one domain should be valid also in other domains. Aristotle had already remarked that some inferences are valid by virtue of form alone, hence if valid in one domain they are valid also in others. But it is not often that one explicitly notes that a certain kind of inference can be found also in other circumstances; this happens only with very special arguments, e.g. proofs by contradiction (which deservedly have a widely known name). Here, however, the reference to the form of inferences is a loose one; only the very general strategy and structure of the proof is actually pointed out.
What happens in truth with logical inferences is that instead of exporting generality one imports it; the appeal to formal validity serves the purpose of justifying a particular inference in terms of general formal laws. Reasoning becomes the application of general laws to particular cases. The whole machinery of formal logic thus acquires an ambiguous status: when in action, it represents with its pedantic proceedings the famous chains which Poincaré contrasted with the wings of thought; as a theory, it carries along a debatable thesis on the nature of reasoning, namely that correct reasoning in each particular domain is just the application of universally valid (possibly innate, mental) rules.

Those who study reasoning - knowledge engineers, psychologists - maintain on the contrary that domain-specific inferences, those where the speaker knows what he is talking about, are not logical inferences; though the reasoner may say: A, if A then B, hence B, actually what is at work is some kind of intuition (Lat. intueri), or of pattern-matching, appropriate to the domain; one really means something like: “if you see A then you see (you can see, you should see) B”, or “pattern A goes into pattern B”, or the like.

Elementary mathematics as it is learned at school is no different from any other knowledge domain. It is far from formal: it has a content (Germ. Inhalt). The same is true of higher mathematics; professional mathematicians see structures and mathematical entities; they see them because they have learned at school to see the Lockean general triangle in pictures on the blackboard. To explain how this can happen is not an easy matter; Kant built his whole transcendental system to tune together sensible intuition and logical abstraction, but he can hardly be said to have succeeded. It is a fact that mathematics begins with sensibility, and this is the hard problem, both for foundations and for teaching.
In algebra, formal practice is accompanied by a contentual (but loosely related) description of what is being done; the justification is a realistic one, inherited from the material (in Carnap’s sense) way of talking of the first numerical experiences.