Formalizing Basic First Order Model Theory

Total Pages: 16

File Type: pdf, Size: 1020 KB

Formalizing Basic First Order Model Theory

John Harrison
Intel Corporation, EY2-03
5200 NE Elam Young Parkway, Hillsboro, OR 97124, USA
[email protected]

Abstract. We define the syntax of unsorted first order logic as a HOL datatype and define the semantics of terms and formulas, and hence notions such as validity and satisfiability. We prove formally in HOL some elementary metatheorems such as Compactness, Löwenheim-Skolem and Uniformity, via canonical term models. The proofs are based on those in Kreisel and Krivine's book on model theory, but the HOL formalization raises several interesting issues. Because of the limited nature of type quantification in HOL, many of the theorems are awkward to state or prove in their standard form. Moreover, simple and elegant though the proofs seem, there are surprising difficulties formalizing Skolemization, one of the more intuitively obvious parts. On the other hand, we significantly improve on the original textbook versions of the arguments, proving two of the main theorems together rather than by separate arguments.

1 Introduction

This paper deals with the formalization, in the HOL Light theorem prover, of the basic model theory of first order logic. Our original motivation was that we were writing a textbook on mathematical logic, and wanted to make sure that when presenting some of the major results, we didn't make any slips – hence the desire for machine checking. We took as our model for this part of logic the textbook of Kreisel and Krivine [6], hereinafter referred to as K&K. However, as we shall see, in the process of formalizing the proofs we made some significant improvements to their presentation.

In view of the considerable amount of mathematics that has been formalized, mainly in Mizar [14, 11], it is perhaps hardly noteworthy to formalize yet another fragment. However, we believe that the present work does at least raise a few interesting general points.

– Formalization of syntax constructions involving bound variables has inspired a slew of research; see e.g. Chap. 3 of Pollack [9], or Gordon and Melham [3]. We offer an unusual angle, in that for us, syntax is subordinate to semantics.
– The apparent intuitive difficulty of parts of the proof does not correlate well with the difficulty of the HOL formalization: an apparently straightforward part turns out to be the most difficult. This raises interesting questions over whether the textbook or HOL is at fault.
– As part of the task of formalization, we found improvements to the textbook proof. Though this was an indirect consequence of formalization, resulting from the necessary close reading of the text, it serves as a good example of possible auxiliary benefits.
– We provide a good illustration of how HOL's simplistic type system can be a hindrance in stating results in the most natural way. In particular, care is needed because quantification of type variables happens implicitly at the sequent level, rather than via explicit binding constructs (a schematic illustration follows at the end of this introduction).

HOL Light is our own version of HOL, and while its logical axiomatization, proof tools and even implementation language are somewhat different from other versions, the underlying logic is exactly equivalent to the one described by Gordon and Melham [4]. This is a version of simply typed lambda calculus used as a foundation for classical higher order logic.
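To make the last point of the list above concrete (this is our own schematic gloss, not an example taken from the paper): in HOL a theorem containing a type variable, such as

\[ \vdash \forall x{:}\alpha.\; x = x, \]

is implicitly general in α, i.e. it may be instantiated at any type, but the logic provides no binder that would let one write ∀α. φ or ∃α. φ inside a formula. A statement of the shape "there exists a (countable) domain such that ..." therefore cannot quantify over the type of the domain and has to be phrased relative to some fixed type, which is one source of the awkwardness mentioned above.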
2 Syntactic Definitions

We define first order terms in HOL as a free recursive type: a term is either a variable, identified by a numeric code, or else a function symbol, also identified by a number, followed by a possibly empty list of arguments. (We regard nullary functions as individual constants.)

  term = V num | Fn num (term list)

Here we have already made two significant choices. First of all, we have restricted ourselves to countable languages, since the function symbols are indexed by N. This restriction is inessential, however; everything that follows extends to any infinite HOL type, given a theorem that for an infinite type α there is an injection α × α → α, a fairly easy consequence of Zorn's Lemma. The restriction to countable languages was made entirely for convenience, to avoid having to specify types everywhere.

Secondly, we have not made any restrictions on the arities of functions, e.g. that function 6 can only be applied to two arguments. Rather, we really consider a function symbol as identified by a pair consisting of its numerical code and its arity, so in f3(x) and f3(x, y), the two functions are different, identified by the pairs (3, 1) and (3, 2) respectively. In fact, we define a function that returns the set of functions, in this sense, that occur in a term:

  |- (∀v. functions_term (V v) = {}) ∧
     (∀f l. functions_term (Fn f l) =
            (f,LENGTH l) INSERT (LIST_UNION (MAP functions_term l)))

where:

  |- (LIST_UNION [] = {}) ∧
     (LIST_UNION (CONS h t) = h UNION (LIST_UNION t))

Next we define formulas; there is a wide choice over which connectives to take as primitive and which as defined. We differ a bit from our textbook model by taking falsity, atoms, implications and universal quantifications as primitive.

  form = False | Atom num (term list) | --> form form | !! num form

Once again, identically-numbered predicates used with different arities are considered different. We now define functions to return the set of functions and predicates in this sense occurring in a formula:

  |- (functions_form False = {}) ∧
     (functions_form (Atom a l) = LIST_UNION (MAP functions_term l)) ∧
     (functions_form (p --> q) = (functions_form p) UNION (functions_form q)) ∧
     (functions_form (!! x p) = functions_form p)

  |- (predicates_form False = {}) ∧
     (predicates_form (Atom a l) = {(a,LENGTH l)}) ∧
     (predicates_form (p --> q) = (predicates_form p) UNION (predicates_form q)) ∧
     (predicates_form (!! x p) = predicates_form p)

It's more convenient to lift these up to the level of sets of formulas, and we pair up the sets of functions and predicates as the so-called language of a set of formulas.

  |- functions fms = UNIONS {functions_form f | f IN fms}
  |- predicates fms = UNIONS {predicates_form f | f IN fms}
  |- language fms = functions fms, predicates fms

Trivially we have then, for a singleton set of formulas:

  |- language {p} = functions_form p,predicates_form p

Other logical constants are defined in a fairly standard way; it doesn't really matter for what follows exactly how this is done. These are respectively negation, truth, disjunction (or), conjunction (and), bi-implication (iff) and existential quantification.

  |- Not p = p --> False
  |- True = Not False
  |- p || q = (p --> q) --> q
  |- p && q = Not (Not p || Not q)
  |- p <-> q = (p --> q) && (q --> p)
  |- ?? x p = Not(!!x (Not p))

The sets of free variables of a term and of a formula are now defined:

  |- (∀x. FVT (V x) = {x}) ∧
     (∀f l. FVT (Fn f l) = LIST_UNION (MAP FVT l))

  |- (FV False = {}) ∧
     (∀a l. FV (Atom a l) = LIST_UNION (MAP FVT l)) ∧
     (∀p q. FV (p --> q) = FV p UNION FV q) ∧
     (∀x p. FV (!! x p) = FV p DELETE x)
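To make the shape of these definitions concrete for readers not familiar with HOL, here is a minimal OCaml sketch of the same syntax; the datatypes mirror the HOL ones, but the module, constructor and function names are illustrative assumptions of ours, not part of the paper or of the HOL Light sources.

  (* First order terms and formulas, mirroring the HOL datatypes: variables
     and symbols are numeric codes, and a symbol is identified by its code
     together with its arity. *)
  type term = V of int | Fn of int * term list

  type form =
    | False
    | Atom of int * term list
    | Imp of form * form              (* p --> q *)
    | Forall of int * form            (* !!x p   *)

  module PairSet = Set.Make (struct type t = int * int let compare = compare end)
  module IntSet = Set.Make (struct type t = int let compare = compare end)

  (* functions_term: the set of (code, arity) pairs occurring in a term,
     so Fn (3, [V 0]) and Fn (3, [V 0; V 1]) contribute (3,1) and (3,2). *)
  let rec functions_term = function
    | V _ -> PairSet.empty
    | Fn (f, l) ->
        List.fold_left (fun s t -> PairSet.union s (functions_term t))
          (PairSet.singleton (f, List.length l)) l

  (* FVT and FV: free variables of terms and of formulas. *)
  let rec fvt = function
    | V x -> IntSet.singleton x
    | Fn (_, l) -> List.fold_left (fun s t -> IntSet.union s (fvt t)) IntSet.empty l

  let rec fv = function
    | False -> IntSet.empty
    | Atom (_, l) -> List.fold_left (fun s t -> IntSet.union s (fvt t)) IntSet.empty l
    | Imp (p, q) -> IntSet.union (fv p) (fv q)
    | Forall (x, p) -> IntSet.remove x (fv p)

As in the HOL development, nothing in the sketch fixes the arity of a symbol once and for all; the arity is simply read off from the argument list at each occurrence.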
We prove various simple consequences of these definitions, most importantly, by induction over the syntax of terms and formulas, that these always give finite sets, e.g.

  |- ∀p. FINITE(FV p)

The most complex syntactic definition is of substitution. We have chosen a 'name-carrying' formalization of syntax, rather than indexing bound variables using some scheme following de Bruijn [1]. The latter is usually preferred when formalizing logical syntax precisely because substitution is simpler to define correctly. (Indeed, it is foreshadowed in the tradition – see Prawitz [10] for example – of distinguishing between [bound] variables and [free] parameters.) Our decision to stick with name-carrying terms was partly in order to stay closer to intuition, and as we shall see, defining substitution so that it renames variables appropriately is not too difficult. In some sense, things are easier for us since we always have semantics (to be defined later) as a check. By contrast, in formalizing, say, beta reduction in lambda-calculus, the syntax is the primary object of investigation and substitution must be defined in a way that is correct and intuitively clear or the whole exercise loses its point.

In order to define renaming substitution as a clean structural recursion over terms, it's necessary to define multiple substitutions rather than single ones [13]. This is because for something like (∀x. P(x) ⇒ P(y))[x/y] we need recursively to perform two substitutions on the body of the quantifier, one to substitute for y and one to rename the variable x. Therefore we take multiple parallel substitution as primitive. (Multiple serial substitution would also be acceptable, since by construction the substitutions in renaming clauses never interfere, but this seems less intuitive.) We represent substitutions simply as functions :num->term, mapping each variable index to the term to be substituted for that variable. (The identity substitution is therefore the type constructor for variable terms, V.) Although in practice we only need substitutions that are nontrivial (i.e. differ from V) only for finitely many arguments, nothing seems to be gained by imposing this restriction generally. Substitution at the term level is simple:

  |- (∀x. termsubst v (V x) = v(x)) ∧
     (∀f l. termsubst v (Fn f l) = Fn f (MAP (termsubst v) l))

It's when we come to formulas that the complexities of renaming arise. We need to see if a variable in the substituted body would become captured by the bound variable in a universal quantifier, and if so, rename the bound variable appropriately.

  |- (formsubst v False = False) ∧
     (formsubst v (Atom p l) = Atom p (MAP (termsubst v) l)) ∧
     (formsubst v (q --> r) = (formsubst v q --> formsubst v r)) ∧
     (formsubst v (!!x q) =
        let v' = valmod (x,V x) v in
        let z = if ∃y. y IN FV(!!x q) ∧ x IN FVT(v'(y))
                then VARIANT(FV(formsubst v' q)) else x in
        !!z (formsubst (valmod (x,V(z)) v) q))

Here the function valmod modifies a substitution, or more generally any function, as follows:

  |- valmod (x,a) v = λy. if y = x then a else v(y)
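Continuing the OCaml sketch under the same assumptions (the types and the fvt/fv functions above are our illustration, not HOL Light code), the renaming clause of substitution might be rendered as follows; variant is just one simple way of choosing a fresh variable, standing in for HOL's VARIANT.

  (* valmod (x,a) v: the function agreeing with v except that x maps to a. *)
  let valmod (x, a) v = fun y -> if y = x then a else v y

  (* Substitution on terms is a straightforward structural recursion;
     the identity substitution is the constructor (fun x -> V x). *)
  let rec termsubst v = function
    | V x -> v x
    | Fn (f, l) -> Fn (f, List.map (termsubst v) l)

  (* One choice of a variable not in the given finite set. *)
  let variant s = 1 + IntSet.fold max s 0

  let rec formsubst v = function
    | False -> False
    | Atom (p, l) -> Atom (p, List.map (termsubst v) l)
    | Imp (q, r) -> Imp (formsubst v q, formsubst v r)
    | Forall (x, q) ->
        (* Protect the bound variable: inside the quantifier, x maps to itself. *)
        let v' = valmod (x, V x) v in
        (* Rename only if some free variable y of !!x q is sent to a term
           containing x, i.e. if capture would otherwise occur. *)
        let capture =
          IntSet.exists (fun y -> IntSet.mem x (fvt (v' y))) (IntSet.remove x (fv q))
        in
        let z = if capture then variant (fv (formsubst v' q)) else x in
        Forall (z, formsubst (valmod (x, V z) v) q)

The structure matches the HOL definition above: substitution passes under the binder with the bound variable protected, and the binder is renamed to a variant only when capture would otherwise occur.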
Recommended publications
  • Skolemization Modulo Theories
    Skolemization Modulo Theories Konstantin Korovin1 and Margus Veanes2 1 The University of Manchester [email protected] 2 Microsoft Research [email protected] Abstract. Combining classical automated theorem proving techniques with theory based reasoning, such as satisfiability modulo theories, is a new approach to first-order reasoning modulo theories. Skolemization is a classical technique used to transform first-order formulas into equisatisfiable form. We show how Skolemization can benefit from a new satisfiability modulo theories based simplification technique of formulas called monadic decomposition. The technique can be used to transform a theory dependent formula over multiple variables into an equivalent form as a Boolean combination of unary formulas, where a unary formula depends on a single variable. In this way, theory specific variable dependencies can be eliminated and consequently, Skolemization can be refined by minimizing variable scopes in the decomposed formula in order to yield simpler Skolem terms. 1 The role of Skolemization In classical automated theorem proving, Skolemization [8, 3] is a technique used to transform formulas into equisatisfiable form by replacing existentially quantified variables by Skolem terms. In resolution based methods using clausal normal form (CNF) this is a necessary preprocessing step of the input formula. A CNF represents a universally quantified conjunction of clauses, where each clause is a disjunction of literals, a literal being an atom or a negated atom. The arguments of the atoms are terms, some of which may contain Skolem terms as subterms, where a Skolem term has the form f(x̄) for some Skolem function symbol f and a sequence x̄ of variables; f may also be a constant (x̄ is empty).
  • Geometrisation of First-Order Logic
    GEOMETRISATION OF FIRST-ORDER LOGIC ROY DYCKHOFF AND SARA NEGRI Abstract. That every first-order theory has a coherent conservative extension is regarded by some as obvious, even trivial, and by others as not at all obvious, but instead remarkable and valuable; the result is in any case neither sufficiently well-known nor easily found in the literature. Various approaches to the result are presented and discussed in detail, including one inspired by a problem in the proof theory of intermediate logics that led us to the proof of the present paper. It can be seen as a modification of Skolem's argument from 1920 for his "Normal Form" theorem. "Geometric" being the infinitary version of "coherent", it is further shown that every infinitary first-order theory, suitably restricted, has a geometric conservative extension, hence the title. The results are applied to simplify methods used in reasoning in and about modal and intermediate logics. We include also a new algorithm to generate special coherent implications from an axiom, designed to preserve the structure of formulae with relatively little use of normal forms. §1. Introduction. A theory is "coherent" (aka "geometric") iff it is axiomatised by "coherent implications", i.e. formulae of a certain simple syntactic form (given in Definition 2.4). That every first-order theory has a coherent conservative extension is regarded by some as obvious (and trivial) and by others (including ourselves) as non-obvious, remarkable and valuable; it is neither well-enough known nor easily found in the literature. We came upon the result (and our first proof) while clarifying an argument from our paper [17].
  • Abduction Algorithm for First-Order Logic Reasoning Frameworks
    Abduction Algorithm for First-Order Logic Reasoning Frameworks André de Almeida Tavares Cruz de Carvalho Thesis to obtain the Master of Science Degree in Mathematics and Applications Advisor: Prof. Dr. Jaime Ramos Examination Committee Chairperson: Prof. Dr. Cristina Sernadas Advisor: Prof. Dr. Jaime Ramos Member of the Committee: Prof. Dr. Francisco Miguel Dionísio June 2020 Acknowledgments I want to thank my family and friends, who helped me to overcome many issues that emerged throughout my personal and academic life. Without them, this project would not be possible. I also want to thank professor Cristina Sernadas and professor Jaime Ramos for all the help they provided during this entire process, whether regarding theoretical and practical advice or other tips concerning my academic life. Abstract Abduction is a process through which we can formulate hypotheses that justify a set of observed facts, using a background theory as a basis. Even though the process used to formulate these hypotheses may vary, the inherent advantages are universal. It approaches real life problems within fields like medicine and criminality, while maintaining its usefulness within more theoretical subjects, like fibring of logic proof systems. There are, however, some setbacks, such as the time complexity associated with these algorithms, and the expressive power of the formulas used. In this thesis, we tackle this last issue. We study the notions of consequence system and of proof system, as well as provide relevant examples of these systems. We formalize some theoretical concepts regarding the Resolution principle, including refutation completeness and soundness. We then extend these concepts to First-Order Logic.
  • Easy Proofs of Löwenheim-Skolem Theorems by Means of Evaluation Games
    Easy Proofs of Löwenheim-Skolem Theorems by Means of Evaluation Games Jacques Duparc1,2 1 Department of Information Systems, Faculty of Business and Economics, University of Lausanne, CH-1015 Lausanne, Switzerland [email protected] 2 Mathematics Section, School of Basic Sciences, Ecole polytechnique fédérale de Lausanne, CH-1015 Lausanne, Switzerland [email protected] Abstract We propose a proof of the downward Löwenheim-Skolem theorem that relies on strategies deriving from evaluation games instead of the Skolem normal forms. This proof is simpler, and easily understood by the students, although it requires, when defining the semantics of first-order logic, to introduce first a few notions inherited from game theory such as the one of an evaluation game. 1998 ACM Subject Classification F.4.1 Mathematical Logic Keywords and phrases Model theory, Löwenheim-Skolem, Game Theory, Evaluation Game, First-Order Logic 1 Introduction Each mathematical logic course focuses on first-order logic. Once the basic definitions about syntax and semantics have been introduced and the notion of the cardinality of a model has been exposed, sooner or later at least a couple of hours are dedicated to the Löwenheim-Skolem theorem. This statement holds actually two different results: the downward Löwenheim-Skolem theorem (LS↓) and the upward Löwenheim-Skolem theorem (LS↑). Theorem 1 (Downward Löwenheim-Skolem). Let L be a first-order language, T some L-theory, and κ = max{card(L), ℵ0}. If T has a model of cardinality λ > κ, then T has a model of cardinality κ. Theorem 2 (Upward Löwenheim-Skolem). Let L be some first-order language with equality, T some L-theory, and κ = max{card(L), ℵ0}.
  • Models and Games
    Incomplete version for students of easllc2012 only. Models and Games JOUKO VÄÄNÄNEN University of Helsinki University of Amsterdam Cambridge University Press Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK Published in the United States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9780521518123 © J. Väänänen 2011 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2011 Printed in the United Kingdom at the University Press, Cambridge A catalog record for this publication is available from the British Library ISBN 978-0-521-51812-3 Hardback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Contents: Preface xi; 1 Introduction 1; 2 Preliminaries and Notation 3; 2.1 Finite Sequences 3; 2.2 Equipollence 5; 2.3 Countable sets 6; 2.4 Ordinals 7; 2.5 Cardinals 9; 2.6 Axiom of Choice 10; 2.7 Historical Remarks and References 11; Exercises 11; 3 Games 14; 3.1 Introduction 14; 3.2 Two-Person Games of Perfect Information 14; 3.3 The Mathematical Concept of Game 20; 3.4 Game Positions 21; 3.5 Infinite Games 24; 3.6 Historical Remarks and References 28; Exercises 28; 4 Graphs 35; 4.1 Introduction 35; 4.2 First-Order Language of Graphs 35; 4.3 The Ehrenfeucht–Fraïssé Game on Graphs 38; 4.4 Ehrenfeucht–Fraïssé Games and Elementary Equivalence 43; 4.5 Historical Remarks and References 48; Exercises 49.
  • Handout 4 VII. First-Order Logic
    06-26264 Reasoning The University of Birmingham Spring Semester 2019 School of Computer Science Volker Sorge 5 February, 2019 Handout 4 Summary of this handout: First-Order Logic — Syntax — Terms — Predicates — Quantifiers — Semantics — Substitution — Renaming — Ground Terms — Prenex Normal Form — Skolemisation — Clause Normal Form VII. First-Order Logic While propositional logic can express some basic facts we want to reason about, it is not rich enough to express many aspects of intelligent discourse or reasoning. We can talk about facts only very broadly, as we have no means of restricting ourselves to talking about what or who these facts apply to. We can express "it is raining" with a proposition, but we cannot restrict it to a particular place like "it is raining in Birmingham" versus "it is not raining in London". Moreover, we cannot talk about general or abstract facts. E.g., a classic example which we cannot yet express is: Socrates is a man, all men are mortal, therefore Socrates is mortal. In the following we will build the theoretical foundations for first-order predicate logic (FOL) that gives us a much more powerful instrument to reason. VII.1 Syntax Since the main purpose of first-order logic is to allow us to make statements about distinct entities in a particular domain or universe of discourse, we first need some notation to represent this. 29. Constants represent concrete individuals in our universe of discourse. Example: 0, 1, 2, 3 or "Birmingham, London" or "John, Paul, Mary" are constants. 30. Functions map individuals in the universe to other individuals, thereby relating them to each other.
  • Inclusion and Exclusion Dependencies in Team Semantics on Some Logics of Imperfect Information
    Inclusion and Exclusion Dependencies in Team Semantics On some logics of imperfect information Pietro Galliani Faculteit der Natuurwetenschappen, Wiskunde en Informatica Institute for Logic, Language and Computation Universiteit van Amsterdam P.O. Box 94242, 1090 GE AMSTERDAM, The Netherlands Phone: +31 020 525 8260 Fax (ILLC): +31 20 525 5206 Email address: [email protected] (Pietro Galliani) Abstract We introduce some new logics of imperfect information by adding atomic formulas corresponding to inclusion and exclusion dependencies to the language of first order logic. The properties of these logics and their relationships with other logics of imperfect information are then studied. Furthermore, a game theoretic semantics for these logics is developed. As a corollary of these results, we characterize the expressive power of independence logic, thus answering an open problem posed in (Grädel and Väänänen, 2010). Keywords: dependence, independence, imperfect information, team semantics, game semantics, model theory 2010 MSC: 03B60, 03C80, 03C85 1. Introduction The notions of dependence and independence are among the most fundamental ones considered in logic, in mathematics, and in many of their applications. For example, one of the main aspects in which modern predicate logic can be thought of as superior to medieval term logic is that the former allows for quantifier alternation, and hence can express certain complex patterns of dependence and independence between variables that the latter cannot easily represent. A fairly standard example of this can be seen in the formal representations of the notions of continuity and uniform continuity: in the language of first order logic, the former property can be expressed as ∀x(∀ε > 0)(∃δ > 0)∀x′(|x − x′| < δ → |f(x) − f(x′)| < ε), while the latter can be expressed as (∀ε > 0)(∃δ > 0)∀x∀x′(|x − x′| < δ → |f(x) − f(x′)| < ε). Preprint submitted to arXiv, June 9, 2011.
  • A Framework for Exploring Finite Models
    A Framework for Exploring Finite Models by Salman Saghafi A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE In partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science May 2015 APPROVED: Professor Daniel J. Dougherty, Thesis Advisor, WPI Computer Science Professor Craig Wills, Head of Department, WPI Computer Science Professor Joshua Guttman, WPI Computer Science Professor George Heineman, WPI Computer Science Doctor John D. Ramsdell, The MITRE Corporation Abstract This thesis presents a framework for understanding first-order theories by investigating their models. A common application is to help users, who are not necessarily experts in formal methods, analyze software artifacts, such as access-control policies, system configurations, protocol specifications, and software designs. The framework suggests a strategy for exploring the space of finite models of a theory via augmentation. Also, it introduces a notion of provenance information for understanding the elements and facts in models with respect to the statements of the theory. The primary mathematical tool is an information-preserving preorder, induced by the homomorphism on models, defining paths along which models are explored. The central algorithmic ideas consist of a controlled construction of the Herbrand base of the input theory followed by utilizing SMT-solving for generating models that are minimal under the homomorphism preorder. Our framework for model exploration is realized in Razor, a model-finding assistant that provides the user with a read-eval-print loop for investigating models. Acknowledgements I would like to extend several acknowledgements to my learning community, family, and friends. This dissertation would not have been possible without the support and assistance of people with whom I am connected professionally and those with whom I have shared my personal life.
  • Skolem's Paradox and Constructivism
    CHARLES MCCARTY AND NEIL TENNANT SKOLEM'S PARADOX AND CONSTRUCTIVISM INTRODUCTION Skolem's paradox has not been shown to arise for constructivism. Indeed, considerations that we shall advance below indicate that the main result cited to produce the paradox could be obtained only by methods not in mainstream intuitionistic practice. It strikes us as an important fact for the philosophy of mathematics. The intuitionistic conception of the mathematical universe appears, as far as we know, to be free from Skolemite stress. If one could discover reasons for believing this to be no accident, it would be an important new consideration, in addition to the meaning-theoretic ones advanced by Dummett (1978, 'Concluding philosophical remarks'), that ought to be assessed when trying to reach a view on the question whether intuitionism is the correct philosophy of mathematics. We give below the detailed reasons why we believe that intuitionistic mathematics is indeed free of the Skolem paradox. They culminate in a strong independence result: even a very powerful version of intuitionistic set theory does not yield any of the usual forms of a countable downward Löwenheim-Skolem theorem. The proof draws on the general equivalence described in McCarty (1984) between intuitionistic mathematics and classical recursive mathematics. But first we set the stage by explaining the history of the (classical) paradox, and the philosophical reflections on the foundations of set theory that it has provoked. The recent symposium between Paul Benacerraf and Crispin Wright provides a focus for these considerations. Then we inspect the known proofs of the Löwenheim-Skolem theorem, and reveal them all to be constructively unacceptable.
  • Löwenheim's Theorem in the Frame of the Theory of Relatives
    The Birth of Model Theory: Löwenheim's Theorem in the Frame of the Theory of Relatives by Calixto Badesa Princeton, Princeton University Press, 2004 xiii + 240 pages. US $52.50. ISBN 0-691-05853-9 Reviewed by Jeremy Avigad From ancient times to the beginning of the nineteenth century, mathematics was commonly viewed as the general science of quantity, with two main branches: geometry, which deals with continuous quantities, and arithmetic, which deals with quantities that are discrete. Mathematical logic does not fit neatly into this taxonomy. In 1847, George Boole [1] offered an alternative characterization of the subject in order to make room for this new discipline: mathematics should be understood to include the use of any symbolic calculus "whose laws of combination are known and general, and whose results admit of a consistent interpretation." Depending on the laws chosen, symbols can just as well be used to represent propositions instead of quantities; in that way, one can consider calculations with propositions on par with more familiar arithmetic calculations. Despite Boole's efforts, logic has always held an uncertain place in the mathematical community. This can partially be attributed to its youth; while number theory and geometry have their origins in antiquity, mathematical logic did not emerge as a recognizable discipline until the latter half of the nineteenth century, and so lacks the provenance and prestige of its elder companions. Furthermore, the nature of the subject matter serves to set it apart; as objects of mathematical study, formulas and proofs do not have the same character as groups, topological spaces, and measures.
  • Foundations of AI 13
    Foundations of AI 13. Predicate Logic Syntax and Semantics, Normal Forms, Herbrand Expansion, Resolution Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller 13/1 Contents . Motivation . Syntax and Semantics . Normal Forms . Reduction to Propositional Logic: Herbrand Expansion . Resolution & Unification . Closing Remarks 13/2 Motivation We can already do a lot with propositional logic. It is, however, annoying that there is no structure in the atomic propositions. Example: "All blocks are red" "There is a block A" It should follow that "A is red". But propositional logic cannot handle this. Idea: We introduce individual variables, predicates, functions, … . First-Order Predicate Logic (PL1) 13/3 The Alphabet of First-Order Predicate Logic Symbols: operators, variables, brackets, function symbols, and predicate symbols. Predicate and function symbols have an arity (number of arguments). 0-ary predicate: propositional logic atoms. 0-ary function: constant. We suppose a countable set of predicates and functions of any arity. "=" is usually not considered a predicate, but a logical symbol 13/4 The Grammar of First-Order Predicate Logic (1) Terms (represent objects): 1. Every variable is a term. 2. If t1, …, tn are terms and f is an n-ary function, then f(t1, …, tn) is also a term. Terms without variables: ground terms. Atomic Formulae (represent statements about objects) 1. If t1, …, tn are terms and P is an n-ary predicate, then P(t1, …, tn) is an atomic formula. 2. If t1 and t2 are terms, then t1 = t2 is an atomic formula. Atomic formulae without variables: ground atoms. 13/5 The Grammar of First-Order Predicate Logic (2) Formulae: 1. Every atomic formula is a formula. 2.
  • Proof Mining for Nonlinear Operator Theory
    Proof Mining for Nonlinear Operator Theory Four Case Studies on Accretive Operators, the Cauchy Problem and Nonexpansive Semigroups vom Fachbereich Mathematik der Technischen Universität Darmstadt zur Erlangung des Grades eines Doktors der Naturwissenschaften (Dr. rer. nat.) genehmigte Dissertation Tag der Einreichung: 30 September 2016 Tag der mündlichen Prüfung: 21 Dezember 2016 Referent: Prof. Dr. Ulrich Kohlenbach 1. Korreferent: Prof. Jesús García-Falset, PhD 2. Korreferent: Keita Yokoyama, PhD von Angeliki Koutsoukou-Argyraki, Candidata Scientiarum (Master), Master de Sciences et Technologies aus Marousi, Athen, Griechenland Darmstadt, D 17 2017 URN: urn:nbn:de:tuda-tuprints-61015 URI: http://tuprints.ulb.tu-darmstadt.de/id/eprint/6101 "Die Mathematik ist eine Methode der Logik. [...] Die Erforschung der Logik bedeutet die Erforschung aller Gesetzmäßigkeit. Und außerhalb der Logik ist alles Zufall." ("Mathematics is a method of logic. [...] The exploration of logic means the exploration of everything that is subject to law. And outside logic everything is accidental.") Ludwig Wittgenstein, Tractatus Logico-Philosophicus (Logisch-Philosophische Abhandlung), 1921, §6.234-6.30 ([86]). Preface The logical form of a mathematical statement can, within a specific formal framework, a priori guarantee by certain logical metatheorems the extraction of additional information via a study of the underlying logical structure of its proof. This new information is usually of quantitative nature, in the form of effective (computable) bounds - even if the original proof is prima facie ineffective - and highly uniform. Logically analyzing proofs of mathematical statements (of a certain logical form) and making their quantitative content explicit to derive new, constructive information is described as proof mining, and constitutes an ongoing research program in applied proof theory introduced by Ulrich Kohlenbach about 15 years ago, though it finds its origins in the ideas of Georg Kreisel from the 1950s who initiated it under the name unwinding of proofs.