Complexities of Proof-Theoretical Reductions


Michael Toppel

Submitted in accordance with the requirements for the degree of Doctor of Philosophy. The University of Leeds, Department of Pure Mathematics, September 2016.

The candidate confirms that the work submitted is his own and that appropriate credit has been given where reference has been made to the work of others. This copy has been supplied on the understanding that it is copyright material and that no quotation from the thesis may be published without proper acknowledgement.

© 2016 The University of Leeds and Michael Toppel

Abstract

The present thesis is a contribution to a project carried out by Michael Rathjen and Andreas Weiermann to give a general method for studying the proof-complexity of Σ⁰₁-sentences. This general method uses the generalised ordinal-analysis that was given by Buchholz, Rüede and Strahm in [5] and [44], as well as the generalised characterisation of the provably recursive functions of PA + TI(≺ α) that was given by Weiermann in [60]. The present thesis links these two methods by giving an explicit elementary bound for the proof-complexity increment that occurs in the transition from the theory ÎD^i_<ω + TI(≺ α), which was used by Rüede and Strahm, to the theory PA + TI(≺ α), which was analysed by Weiermann.

Contents

Abstract
Introduction
1 Justifying the use of Multi-Conclusion Sequents
  1.1 Introduction
  1.2 A Gentzen-semantics
  1.3 Multi-Conclusion Sequents
  1.4 Comparison with other Positions and Responses to Objections
  1.5 Conservativity
  1.6 Conclusion
2 Theory Reduction
  2.1 Preliminaries
  2.2 Some Syntactic Notions of Theory Reduction
    2.2.1 Interpretability
    2.2.2 Proof-Theoretical Reduction
    2.2.3 Translation
    2.2.4 Realisability
  2.3 An Axiomatization of Theory Reduction
  2.4 Discussion
3 A better Base Theory for Ordinal-Analysis
  3.1 Ordinal-Analysis and Provable Conservativity
  3.2 Theories of Inductive Definitions and Fixed Points
  3.3 Reducing ÎD^i_<ω(strict) to HA
    3.3.1 A Proof by Buchholz
    3.3.2 A Proof by Rüede and Strahm
  3.4 Conclusions
4 A Formulation of the Ω-Rule in ÎD^i_2(strict)
  4.1 A Particular System Including the Ω-Rule
  4.2 Working out Concepts in ÎD^i_1
  4.3 Formulating the Ω-Rule
  4.4 Conclusion
5 Proof-Complexity and the Deducibility of Σ⁰₁-Sentences
  5.1 Proof-Complexity
  5.2 The Deducibility of Σ⁰₁-Sentences
    5.2.1 A General Characterisation of the Provably Recursive Functions of PA + TI(≺ α)
    5.2.2 The Classical Approach
    5.2.3 A Bound for the Witnesses of Σ⁰₁-Sentences
6 ÎD^i_<ω(strict) has at most an Elementary Speed-up over PA
  6.1 Bounding Buchholz
    6.1.1 Formalised Recursion Theory
    6.1.2 Deduction-Complexity Increment for Basic Moves in Logic
    6.1.3 Realisability in HA
  6.2 Bounding Rüede–Strahm
  6.3 Sticking Together and Conclusion
Appendix
  A Gentzen's Method can be done Elementarily
  B The Proof-Theoretic Ordinal of EA
  C Deduction Systems
Bibliography

Introduction

The main purpose of the present thesis is to provide a technical lemma for a research project of Michael Rathjen and Andreas Weiermann that was outlined in Michael Rathjen's talk at the Bertinoro International Center for Informatics in 2011. Rathjen's and Weiermann's work on Kruskal's theorem in [41] foreshadows a general method to bound the complexity of deductions of Σ⁰₁-sentences, as well as a general method to prove Π⁰₂-conservativity for arithmetical theories. Regarding the complexity of deductions of Σ⁰₁-sentences, the idea is rather simple:

If the witnesses of a Σ⁰₁-sentence are very big, then the complexity of its deduction must lie above a certain limit.
The basic methods for making this precise are well known. In the case of a Σ⁰₁-sentence σ that is deducible in a theory T with a deduction of complexity n, in symbols

  T ⊢_n σ,

one tries to give an ordinal-analysis of T. Under the assumption that this ordinal-analysis yields the ordinal α, one can usually prove that T is Π⁰₂-conservative over PA + TI(≺ α):

  T ≡_{Π⁰₂} PA + TI(≺ α).

Since the theories PA + TI(≺ α) are well studied, one can use classical subrecursion theory to bound the witnesses of σ by a Hardy function H_{β_n}(0), where β_n is smaller than α and depends on the complexity of the deduction of σ in T. Hence, if the witnesses of σ are bigger than H_{β_n}(0), then the complexity of any deduction of σ must be bigger than n.

Generalising this argument faces two issues, even when a natural ordinal notation system is used in the ordinal-analysis. First, one has to find a recipe for proving T ≡_{Π⁰₂} PA + TI(≺ α) from an ordinal-analysis that is general enough to draw general conclusions from it. Here we face the problems that the methods of formalising an ordinal-analysis have to be unified, and that there are ordinal-analyses in the literature that include methods which hinder proving conservativity in the standard ways, e.g. the Ω-rule. Second, one has to find a general approach in subrecursion theory to bound the witnesses of a deducible Σ⁰₁-sentence. Here we face the problem that the assignment of fundamental sequences to an ordinal notation system must be generalised in order to define a version of the Hardy hierarchy that is independent enough of the particular ordinal notation system used in the ordinal-analysis to give meaningful bounds.

Both of these issues have already been resolved. Buchholz discovered in [5] that an intuitionistic version of the theory of inductive definitions is Π⁰₂-conservative over PA and can therefore be used to replace PA in PA + TI(≺ α).
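For orientation, the Hardy functions mentioned above are standardly defined by transfinite recursion along an ordinal notation system equipped with fundamental sequences λ[n] for limit ordinals λ; this is the usual textbook definition (as in the Buchholz–Cichon–Weiermann tradition), not a formulation specific to the present thesis:

```latex
\begin{align*}
H_0(n)          &= n,\\
H_{\alpha+1}(n) &= H_\alpha(n+1),\\
H_\lambda(n)    &= H_{\lambda[n]}(n) \quad \text{for $\lambda$ a limit ordinal with fundamental sequence } (\lambda[n])_{n\in\mathbb{N}}.
\end{align*}
```

The growth rate of H_β thus depends on the assignment of fundamental sequences, which is why that assignment must be generalised before the hierarchy can give notation-independent bounds.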
Since all infinite deduction systems that are used in ordinal-analysis are inductively defined, this solves the first problem for a large number of the ordinal-analyses present in the literature. In [44] Rüede and Strahm strengthened Buchholz's result by showing that Π⁰₂-conservativity over PA is preserved when certain inductive definitions are iterated; hence the missing ordinal-analyses, those which include the Ω-rule, are covered as well. The second issue was solved by Weiermann in [60] by drawing on Buchholz's, Cichon's and Weiermann's work, which is presented in [7]. Using Cichon's notion of a norm, which can be defined on all ordinal notation systems present in the literature, Weiermann was able in [60] to generalise the Hardy functions in a way that supports the previously given aims.

However, it remained open how these two solutions work together. In order to use Weiermann's approach to subrecursion theory with the Π⁰₂-conservativity that was proved for Rüede's and Strahm's theories of inductive definitions, and thereby study the complexity of deductions of Σ⁰₁-sentences, one has to ensure that the function H_{β_n} found by Weiermann's approach when working directly with PA + TI(≺ α) is not too far away from the function H_{β_m} found by Weiermann's approach when working indirectly via theories of inductive definitions. In other words: one has to ensure that the proof-theoretical reduction that Rüede and Strahm used to prove their Π⁰₂-conservativity result gives only a rather weak speed-up for the theories of inductive definitions over PA + TI(≺ α). This is done in the present thesis.

The plan of the thesis is as follows. Chapter 1 has no connection to the rest of the thesis. It is a philosophical paper that I tried to publish during my PhD. I developed a method for philosophy inspired by proof theory by following a Carnapian line, in contrast to American pragmatism.
I hoped that it might be a philosophically fruitful alternative to the contemporary idealists and pragmatists in modern inferentialism. I tried this by solving a central problem in proof-theoretical semantics: working out the differences between the requirement of harmony and that of compositionality, and focusing on the latter without ignoring the former.¹ I hoped that a purely syntax-based semantics, which is also an alternative to the commonly used holistic syntax-based semantics, would show the community of philosophy of logic that there is a fruitful way of doing philosophy syntactically without relying on common human behaviour. Common human behaviour seems to be a bad starting point when a highly academic field like logic is the subject of philosophical theorising, for a certain amount of "educational brain-washing" has to take place before people consider studying logic in a certain manner, which usually differs from ordinary human behaviour.²

Chapter 2 starts with most of the bookkeeping that has to be done; hence most of the standard notions used in metamathematics are defined there. The most common notions of theory reduction are also presented and compared. Primarily, the notion of proof-theoretical reduction is introduced, and it is emphasised that most of the techniques used in proof theory, including some other notions of theory reduction, fall under this notion.
Recommended publications
  • Proof Theory Can Be Viewed As the General Study of Formal Deductive Systems
    Proof Theory
    Jeremy Avigad
    December 19, 2017

    Abstract. Proof theory began in the 1920s as a part of Hilbert's program, which aimed to secure the foundations of mathematics by modeling infinitary mathematics with formal axiomatic systems and proving those systems consistent using restricted, finitary means. The program thus viewed mathematics as a system of reasoning with precise linguistic norms, governed by rules that can be described and studied in concrete terms. Today such a viewpoint has applications in mathematics, computer science, and the philosophy of mathematics.

    Keywords: proof theory, Hilbert's program, foundations of mathematics

    1 Introduction. At the turn of the nineteenth century, mathematics exhibited a style of argumentation that was more explicitly computational than is common today. Over the course of the century, the introduction of abstract algebraic methods helped unify developments in analysis, number theory, geometry, and the theory of equations, and work by mathematicians like Richard Dedekind, Georg Cantor, and David Hilbert towards the end of the century introduced set-theoretic language and infinitary methods that served to downplay or suppress computational content. This shift in emphasis away from calculation gave rise to concerns as to whether such methods were meaningful and appropriate in mathematics. The discovery of paradoxes stemming from overly naive use of set-theoretic language and methods led to even more pressing concerns as to whether the modern methods were even consistent. This led to heated debates in the early twentieth century and what is sometimes called the "crisis of foundations." In lectures presented in 1922, Hilbert launched his Beweistheorie, or Proof Theory, which aimed to justify the use of modern methods and settle the problem of foundations once and for all.
  • Systematic Construction of Natural Deduction Systems for Many-Valued Logics
    23rd International Symposium on Multiple-Valued Logic, Sacramento, CA, May 1993. Proceedings (IEEE Press, Los Alamitos, 1993), pp. 208–213.

    Systematic Construction of Natural Deduction Systems for Many-Valued Logics
    Matthias Baaz, Christian G. Fermüller, Richard Zach
    Technische Universität Wien, Austria

    Abstract. A construction principle for natural deduction systems for arbitrary finitely-many-valued first order logics is exhibited. These systems are systematically obtained from sequent calculi, which in turn can be automatically extracted from the truth tables of the logics under consideration. Soundness and cut-free completeness of these sequent calculi translate into soundness, completeness and normal form theorems for the natural deduction systems.

    1 Introduction. The study of natural deduction systems for many-… Each position i corresponds to one of the truth values; vₘ is the distinguished truth value. The intended meaning is as follows: derive that at least one formula of Γₘ takes the value vₘ under the assumption that no formula in Γᵢ takes the value vᵢ (1 ≤ i ≤ m−1). Our starting point for the construction of natural deduction systems are sequent calculi. (A sequent is a tuple Γ₁ | … | Γₘ, defined to be satisfied by an interpretation iff for some i ∈ {1, …, m} at least one formula in Γᵢ takes the truth value vᵢ.) For each pair of an operator ◻ or quantifier Q and a truth value vᵢ we construct a rule introducing a formula of the form ◻(A₁, …, Aₙ) or (Qx)A(x), respectively, at position i of a sequent.
  • An Integration of Resolution and Natural Deduction Theorem Proving
    From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

    An Integration of Resolution and Natural Deduction Theorem Proving
    Dale Miller and Amy Felty
    Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104

    Abstract. We present a high-level approach to the integration of such different theorem proving technologies as resolution and natural deduction. This system represents natural deduction proofs as λ-terms and resolution refutations as the types of such λ-terms. These type structures, called expansion trees, are essentially formulas in which substitution terms are attached to quantifiers. As such, this approach to proofs and their types extends the formulas-as-type notion found in proof theory. The LCF notion of tactics and tacticals can also be extended to incorporate proofs as typed λ-terms. Such extended tacticals can be used to program different interactive and automatic natural deduction theorem provers.

    …ble of translating between them. In order to achieve this goal, we have designed a programming language which permits proof structures as values and types. This approach builds on and extends the LCF approach to natural deduction theorem provers by replacing the LCF notion of a validation with explicit term representation of proofs. The terms which represent proofs are given types which generalize the formulas-as-type notion found in proof theory [Howard, 1969]. Resolution refutations are seen as specifying the type of a natural deduction proof. This high-level view of proofs as typed terms can be easily combined with more standard aspects of LCF to yield the integration for which …
  • Natural Deduction and the Curry-Howard-Isomorphism
    Natural Deduction and the Curry-Howard-Isomorphism
    Andreas Abel
    August 2016

    Abstract. We review constructive propositional logic and natural deduction and connect it to the simply-typed lambda-calculus with the Curry-Howard Isomorphism.

    Constructive Logic. A fundamental property of constructive logic is the disjunction property: If the disjunction A ∨ B is provable, then either A is provable or B is provable. This property is not compatible with the principle of the excluded middle (tertium non datur), which states that A ∨ ¬A holds for any proposition A. While each fool can state the classical tautology "aliens exist or don't", this certainly does not give us a means to decide whether aliens exist or not. A constructive proof of the fool's statement would require either showcasing an alien or a stringent argument for the impossibility of their existence.¹

    1 Natural deduction for propositional logic. The proof calculus of natural deduction goes back to Gentzen [1935].

    ¹ For a more mathematical example of undecidability, refer to the continuum hypothesis CH, which states that no cardinal exists between the set of the natural numbers and the set of the reals. It is independent of ZFC, Zermelo-Fränkel set theory with the axiom of choice, meaning that in ZFC neither CH nor ¬CH is provable.

    1.1 Propositions. Formulæ of propositional logic are given by the following grammar:

    P, Q                    atomic propositions
    A, B, C ::= P | A ⇒ B   implication
              | A ∧ B | ⊤   conjunction, truth
              | A ∨ B | ⊥   disjunction, absurdity

    Even though we write formulas in linearized (string) form, we think of them as (abstract syntax) trees.
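As a small illustration of the Curry-Howard reading sketched in the excerpt above (our example, not from Abel's notes), constructive proofs are literally programs; in Lean 4, for instance:

```lean
-- Propositions-as-types: a proof of A ∧ B → B ∧ A is a program
-- that swaps the two components of a pair.
example (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- A constructive proof of a disjunction must commit to one disjunct,
-- which is exactly the disjunction property mentioned in the excerpt.
example (A B : Prop) (a : A) : A ∨ B := Or.inl a
```

The second example shows why excluded middle fails constructively: to prove A ∨ ¬A as a program one would have to produce either a proof of A or a refutation of A, which is not possible uniformly.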
  • On Synthetic Undecidability in Coq, with an Application to the Entscheidungsproblem
    On Synthetic Undecidability in Coq, with an Application to the Entscheidungsproblem
    Yannick Forster, Dominik Kirst, Gert Smolka
    Saarland University, Saarbrücken, Germany
    [email protected], [email protected], [email protected]

    Abstract. We formalise the computational undecidability of validity, satisfiability, and provability of first-order formulas following a synthetic approach based on the computation native to Coq's constructive type theory. Concretely, we consider Tarski and Kripke semantics as well as classical and intuitionistic natural deduction systems and provide compact many-one reductions from the Post correspondence problem (PCP). Moreover, developing a basic framework for synthetic computability theory in Coq, we formalise standard results concerning decidability, enumerability, and reducibility without reference to a concrete model of computation.

    …like decidability, enumerability, and reductions are available without reference to a concrete model of computation such as Turing machines, general recursive functions, or the λ-calculus. For instance, representing a given decision problem by a predicate p on a type X, a function f : X → 𝔹 with ∀x. p x ↔ f x = tt is a decision procedure, a function g : ℕ → X with ∀x. p x ↔ (∃n. g n = x) is an enumeration, and a function h : X → Y with ∀x. p x ↔ q (h x) for a predicate q on a type Y is a many-one reduction from p to q. Working formally with concrete models instead is cumbersome, given that every defined procedure needs to be shown representable by a concrete entity of the model.
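The type-theoretic definitions in the excerpt above have a direct analogue in ordinary programming. The following Python sketch (the predicates, the reduction `h`, and all names are our illustrative choices, not from the paper) checks the decider condition p(x) ↔ f(x) and the many-one reduction condition p(x) ↔ q(h(x)) on sample inputs:

```python
# Toy instances of the synthetic-computability notions from the excerpt:
# a decider for p is f with p(x) iff f(x); a many-one reduction from p
# to q is h with p(x) iff q(h(x)).  Hypothetical example predicates.

def p(x: int) -> bool:          # source problem: "x is even"
    return x % 2 == 0

def q(s: str) -> bool:          # target problem: "binary string ends in 0"
    return s.endswith("0")

def h(x: int) -> str:           # candidate reduction: binary representation
    return format(x, "b")

def is_decider(f, pred, samples) -> bool:
    # f decides pred on the samples iff they agree everywhere
    return all(f(x) == pred(x) for x in samples)

def is_many_one_reduction(h, p, q, samples) -> bool:
    # p(x) must hold exactly when q(h(x)) holds
    return all(p(x) == q(h(x)) for x in samples)

samples = range(100)
assert is_decider(lambda x: x % 2 == 0, p, samples)
assert is_many_one_reduction(h, p, q, samples)
print("reduction condition holds on all samples")
```

Of course, a finite check is only a sanity test; the synthetic approach of the paper proves the universally quantified statements inside Coq's type theory.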
  • Generating Hints and Feedback for Hilbert-Style Axiomatic Proofs
    Generating hints and feedback for Hilbert-style axiomatic proofs
    Josje Lodder, Bastiaan Heeren, Johan Jeuring
    Technical Report UU-CS-2016-009, December 2016
    Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands, www.cs.uu.nl
    ISSN: 0924-3275

    Josje Lodder (Open University of the Netherlands), Bastiaan Heeren (Open University of the Netherlands), Johan Jeuring (Open University of the Netherlands and Utrecht University)
    [email protected], [email protected], [email protected]

    Abstract. This paper describes an algorithm to generate Hilbert-style axiomatic proofs. Based on this algorithm we develop logax, a new interactive tutoring tool that provides hints and feedback to a student who stepwise constructs an axiomatic proof. We compare the generated proofs with expert and student solutions, and conclude that the quality of the generated proofs is comparable to that of expert proofs. logax recognizes most steps that students take when constructing …

    … From these students, 22 had to redo their homework exercise on axiomatic proofs. This is significantly higher than, for example, the number of students in the same group who had to redo the exercise on semantic tableaux: 5 out of 65. A student practices axiomatic proofs by solving exercises. Since it is not always possible to have a human tutor available, an intelligent tutoring system (ITS) might be of help. There are several ITSs supporting exercises on natural deduction systems [23, 22, 6].
  • Types of Proof System
    Types of proof system
    Peter Smith
    October 13, 2010

    1 Axioms, rules, and what logic is all about

    1.1 Two kinds of proof system. There are at least two styles of proof system for propositional logic other than trees that beginners ought to know about. The real interest here, of course, is not in learning yet more about classical propositional logic per se. For how much fun is that? Rather, what we are doing, as always, is illustrating some Big Ideas using propositional logic as a baby example. The two new styles are:

    1. Axiomatic systems. The first system of formal logic in anything like the contemporary sense, Frege's system in his Begriffsschrift of 1879, is an axiomatic one. What is meant by 'axiomatic' in this context? Think of Euclid's geometry, for example. In such an axiomatic theory, we are given a "starter pack" of basic assumptions or axioms (we are often given a package of supplementary definitions as well, enabling us to introduce new ideas as abbreviations for constructs out of old ideas). And then the theorems of the theory are those claims that can be deduced by allowed moves from the axioms, perhaps invoking definitions where appropriate. In Euclid's case, the axioms and definitions are of course geometrical, and he just helps himself to whatever logical inferences he needs to execute the deductions. But, if we are going to go on to axiomatize logic itself, we are going to need to be explicit not just about the basic logical axioms we take as given starting points for deductions, but also about the rules of inference that we are permitted to use.
  • The Natural Deduction Pack
    The Natural Deduction Pack
    Alastair Carr
    March 2018

    Contents: 1 Using this pack; 2 Summary of rules; 3 Worked examples (3.1 Implication, 3.2 Universal quantifier, 3.3 Existential quantifier); 4 Practice problems (4.1 Core, 4.2 Conjunction, 4.3 Implication, 4.4 Disjunction, 4.5 Biconditional, 4.6 Negation, 4.7 Universal quantifier, 4.8 Existential quantifier, 4.9 Identity, 4.10 Additional challenges); 5 Solutions (5.1 Core, 5.2 Conjunction, 5.3 Implication, 5.4 Disjunction, 5.5 Biconditional, 5.6 Negation, 5.7 Universal quantifier, 5.8 Existential quantifier, 5.9 Identity, 5.10 Additional challenges)

    1 Using this pack. This pack consists of Natural Deduction problems, intended to be used alongside The Logic Manual by Volker Halbach. The pack covers Natural Deduction proofs in propositional logic (L1), predicate logic (L2) and predicate logic with
  • Natural Deduction
    Natural deduction
    Constructive Logic (15-317), Lecture 02
    Instructor: Giselle Reis

    1 Some history. David Hilbert was a German mathematician who published, in 1899, a very influential book called Foundations of Geometry. The thing that was so revolutionary about this book was the way theorems were proven. It followed very closely an axiomatic approach, meaning that a set of geometry axioms was assumed and the whole theory was developed by deriving new facts from those axioms, basically via modus ponens.¹ Hilbert believed this was the right way to do mathematics, so in 1920 he proposed that all mathematics be formalized using a set of axioms which needed to be proved consistent (i.e. ⊥ cannot be derived from them). All further mathematical knowledge would follow from this set.

    Shortly after that, it was clear that the task would not be so easy. Several attempts were proposed, but mathematicians and philosophers alike kept running into paradoxes. The foundations of mathematics were in crisis: to have something so fundamental and powerful be inconsistent seemed like a disaster. Many scientists were compelled to prove consistency of at least some part of mathematics, and this was the case for Gentzen (another German mathematician and logician). In 1932 he set as a personal goal to show consistency of logical deduction in arithmetic. Given a formal system of logical deduction, the consistency proof becomes a purely mathematical problem, so that is what he did first. He thought that later on arithmetic could be added (it turns out that was not so straightforward, and he ended up developing yet another system, but ultimately Gentzen did prove the consistency of arithmetic).
  • Some Basics of Classical Logic
    Some Basics of Classical Logic
    Theodora Achourioti
    AUC/ILLC, University of Amsterdam
    [email protected]
    PWN Vakantiecursus 2014, Eindhoven, August 22; Amsterdam, August 29

    Intro. Classical Logic: propositional and first order predicate logic. A way to think about a logic (and no more than that) is as consisting of an expressive part (syntax and semantics) and a normative part, logical consequence (validity): derivability (syntactic validity) and entailment (semantic validity).

    Outline: 1 Intro; 2 Syntax; 3 Derivability (Syntactic Validity); 4 Semantics; 5 Entailment (Semantic Validity); 6 Meta-theory; 7 Further Reading.

    Syntax: Propositional Logic. The language of classical propositional logic consists of propositional variables p, q, r, …; the logical operators ¬, ∧, ∨, →; and parentheses ( , ).

    Recursive definition of a well-formed formula (wff) in classical propositional logic:

    Definition.
    1. A propositional variable is a wff.
    2. If φ and ψ are wffs, then ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) are wffs.
    3. Nothing else is a wff.

    The recursive nature of the above definition is important in that it allows inductive proofs on the entire logical language. Exercise: Prove that all wffs have an even number of brackets.
    Syntax: Predicate Logic. The language of classical predicate logic consists of terms, namely a) constants (a, b, c, …) and b) variables (x, y, z, …)¹; n-place predicates P, Q, R, …; logical operators, namely a) the propositional connectives and b) the quantifiers ∀, ∃; and parentheses ( , ).

    ¹ Functions are also typically included in first order theories.
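The bracket-parity exercise in the excerpt above follows by induction on the recursive definition: a variable contributes 0 brackets, negation adds 0, and each binary connective adds exactly 2, so evenness is preserved by every clause. A quick Python sketch of that argument (the generator and all names are our illustration, not from the slides):

```python
# Sanity-check the induction: every wff built by the three clauses of the
# recursive definition has an even number of brackets.
import random

def atom(p: str) -> str:
    return p                       # clause 1: a variable is a wff (0 brackets)

def neg(phi: str) -> str:
    return "¬" + phi               # clause 2: ¬φ adds no brackets

def binop(phi: str, psi: str, op: str) -> str:
    return "(" + phi + " " + op + " " + psi + ")"   # adds exactly 2 brackets

def paren_count(wff: str) -> int:
    return sum(1 for c in wff if c in "()")

random.seed(0)

def random_wff(depth: int) -> str:
    # Build a random wff strictly following the recursive definition.
    if depth == 0:
        return atom(random.choice("pqr"))
    choice = random.randrange(4)
    if choice == 0:
        return neg(random_wff(depth - 1))
    op = "∧∨→"[choice - 1]
    return binop(random_wff(depth - 1), random_wff(depth - 1), op)

# Each constructor changes the bracket count by 0 or 2, so by induction
# the count is always even:
for _ in range(100):
    assert paren_count(random_wff(5)) % 2 == 0
print("all sampled wffs have an even number of brackets")
```

The assertions only sample the property; the inductive proof covers all wffs at once.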
  • Natural Deduction, Inference, and Consistency
    NATURAL DEDUCTION, INFERENCE, AND CONSISTENCY
    ROBERT L. STANLEY

    1. Introduction. This paper presents results in Quine's elementhood-restricted foundational system ML, which parallel the important conjecture and theorem which Takeuti developed in his GLC, for type-theoretic systems, and which recur in Schütte's STT. Takeuti conjectured in GLC that his natural deduction subsystem, which does not postulate Modus Ponens or any clearly equivalent inference-rule, is nevertheless ponentially closed. Further, he showed that if it is closed, then the whole system, including classical arithmetic, must be absolutely consistent. It is especially easy to establish the "coherence" of suitable, very strong subsystems of ML, that is, that certain (false) formulae cannot be proven under the systems' simple, natural deduction rules. Presumably it is not known whether or not these subsystems are ponentially closed. If, in fact, they are closed, then they are all equivalent to ML, their coherence implies its coherence, and that in turn implies the consistency of ML and of arithmetic. Proof of such consistency, though much sought, is still lacking. Further, and most important, this inference-consistency relationship suggests the possibility that suitably weakened varieties of these subsystems may be provably consistent, yet usefully strong foundations for mathematics.

    2. General argument. ML and other foundational systems for mathematics can be developed by natural deduction, in the same manner as SF. ML conveniently displays the inference-consistency property under examination, which emerges typically in such developments.

    Presented to the Society, August 29, 1963; received by the editors February 4, 1964.
  • Teaching Natural Deduction to Improve Text Argumentation Analysis in Engineering Students
    Teaching Natural Deduction to improve Text Argumentation Analysis in Engineering Students
    Rogelio Dávila¹, Sara C. Hernández², Juan F. Corona³
    ¹ Department of Computer Science and Engineering, CUValles, Universidad de Guadalajara, México, [email protected]
    ² Department of Information Systems, CUCEA, Universidad de Guadalajara, México, [email protected]
    ³ Department of Mechanical and Industrial Engineering, ITESM, Campus Guadalajara, México, [email protected]

    Abstract. Teaching engineering students courses such as computer science theory, automata theory and discrete mathematics led us to realize that introducing basic notions of logic, especially following Gentzen's natural deduction approach [6], to students in their first year brought a benefit in the ability of those students to recover the structure of the arguments presented in text documents, cartoons and newspaper articles. In this document we propose that teaching logic helps students to better justify their arguments, enhances their reasoning, lets them better express their ideas, and in general helps them to be more consistent in their presentations and proposals.

    Keywords: education, learning, logic, deductive reasoning, natural deduction.

    1 Introduction. One of the interesting capabilities when reading a paper or analysing a document is the ability to recover the underlying structure of the arguments presented. Studying logic, and especially Gentzen's natural deduction approach, gives the student who reads a paper, the news, or an article in a magazine the ability to recover the facts that support the conclusions, the arguments, the basic statements, and the distinction among them. That is, to distinguish between the ones that the author assumes to be true and the ones proposed and justified by himself/herself.