Natural Deduction and the Curry-Howard-Isomorphism

Andreas Abel

August 2016

Abstract. We review constructive propositional logic and natural deduction and connect them to the simply-typed lambda-calculus via the Curry-Howard Isomorphism.

Constructive Logic

A fundamental property of constructive logic is the disjunction property: if the disjunction A ∨ B is provable, then either A is provable or B is provable. This property is not compatible with the principle of the excluded middle (tertium non datur), which states that A ∨ ¬A holds for any proposition A. While any fool can state the classical tautology "aliens exist or don't", this certainly does not give us a means to decide whether aliens exist or not. A constructive proof of the fool's statement would require either showcasing an alien or a stringent argument for the impossibility of their existence.¹

1 Natural deduction for propositional logic

The proof calculus of natural deduction goes back to Gentzen [1935].

¹ For a more mathematical example of undecidability, refer to the continuum hypothesis CH, which states that no cardinal exists between the set of the natural numbers and the set of the reals. It is independent of ZFC, Zermelo-Fraenkel set theory with the axiom of choice, meaning that in ZFC neither CH nor ¬CH is provable.

1.1 Propositions

Formulæ of propositional logic are given by the following grammar:

  P, Q                      atomic propositions
  A, B, C ::= P
            | A ⇒ B         implication
            | A ∧ B | ⊤     conjunction, truth
            | A ∨ B | ⊥     disjunction, absurdity

Even though we write formulas in linearized (string) form, we think of them as (abstract syntax) trees. Formulas of propositional logic are binary trees whose inner nodes are labelled with one of the connectives ⇒, ∧, or ∨, and whose leaves are labelled with a generic atomic proposition P or with ⊤ or ⊥. The label of the root of such a tree is also called the principal connective of the formula.
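The grammar above can be rendered directly as an abstract-syntax-tree datatype. The following is a minimal sketch in TypeScript; the type and function names are our own illustration, not notation from the text:

```typescript
// Formulas as abstract syntax trees: leaves are atoms, ⊤, or ⊥;
// inner nodes carry one of the connectives ⇒, ∧, ∨.
type Formula =
  | { tag: "atom"; name: string }                    // P, Q, ...
  | { tag: "top" }                                   // ⊤
  | { tag: "bot" }                                   // ⊥
  | { tag: "imp"; left: Formula; right: Formula }    // A ⇒ B
  | { tag: "and"; left: Formula; right: Formula }    // A ∧ B
  | { tag: "or";  left: Formula; right: Formula };   // A ∨ B

// The principal connective is the label of the root node.
function principalConnective(a: Formula): string {
  return a.tag;
}

// The immediate subformulas are the children of the root.
function immediateSubformulas(a: Formula): Formula[] {
  switch (a.tag) {
    case "imp":
    case "and":
    case "or":
      return [a.left, a.right];
    default:
      return []; // atoms, ⊤, and ⊥ are leaves
  }
}

// Example: (P ∧ Q) ⇒ P has principal connective ⇒ ("imp").
const p: Formula = { tag: "atom", name: "P" };
const q: Formula = { tag: "atom", name: "Q" };
const ex: Formula = {
  tag: "imp",
  left: { tag: "and", left: p, right: q },
  right: p,
};
```

The discriminated union mirrors the grammar one production per variant, so well-formedness is enforced by the type checker: only trees generated by the grammar can be constructed.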
Any subtree of a formula A is called a subformula of A. The children of the root node are called the immediate subformulæ.

Example 1 (Propositions). Given

  SH := "Sokrates is a human"
  FL := "Sokrates has four legs"

we can form the implication SH ⇒ FL, verbalized as "if Sokrates is a human, he has four legs". This example presents a well-formed proposition, but its truth is not without conditions. It is important to clearly separate well-formedness from truth. However, speaking about truth usually presupposes well-formedness; discussions about the truth of ill-formed propositions may be entertaining after sufficient consumption of good wine, though. While well-formedness of propositional formulas is simply adherence to a simple grammar, truth requires more sophisticated tools, such as proof.

1.2 Judgements and derivations

Definition 1. A proof of a proposition A is a derivation of the judgement "A true" by the proof rules of propositional logic (to follow).

In general, we refer to judgements such as "A true" with letter J. A rule r has the following form:

  J1 ... Jn
  ───────── r
      J

where J1..n (with n ≥ 0) are the premises and J is the conclusion. Informally, it means that if we can make all the judgements Ji, then we can also make judgement J, by virtue of applying rule r. Rules without premises (n = 0) are called axioms.

Subsequent rule applications are organized into a tree, for instance:

            ──── r3
            J3       J4   J5
  ──── r1   ──────────────── r2
   J1              J2
  ────────────────────────── r0
              J0

In this case, the final judgement J0 is justified by a binary rule r0 whose first premise J1 is justified by axiom r1. Its second premise J2 is justified by a ternary rule r2. The first of these three new premises, J3, is justified by axiom r3; the other two premises J4 and J5 are missing justification. The tree leading to judgement J0 is called a derivation of J0. We use letters D and E to refer to derivations and write D :: J for "tree D is a derivation of judgement J".
If all leaves of a derivation tree are axioms, we speak of a closed derivation, otherwise of an open derivation, like the example above. Suppose we find justifications D4 :: J4 and D5 :: J5 of the remaining open premises; then we can close the derivation of J0 like this:

            ──── r3   D4   D5
            J3        J4   J5
  ──── r1   ───────────────── r2
   J1              J2
  ─────────────────────────── r0
              J0

The whole derivation D0 :: J0 can now be written in prefix notation as

  D0 = r0^{J0}(r1^{J1}, r2^{J2}(r3^{J3}, D4, D5))

where the superscripts annotate each rule with the judgement it derives, or, dropping annotations, briefly as D0 = r0(r1, r2(r3, D4, D5)).

1.3 Introduction and elimination rules

The judgement "A true" for a propositional formula A is established by rules that fall into two classes:

1. Introduction rules. These establish the truth of a compound proposition, for instance conjunction A ∧ B, from the truth of its subformulas A and B.

  A true   B true
  ─────────────── ∧I
     A ∧ B true

2. Elimination rules. These allow us to use (parts of) a true compound proposition to derive the truth of other propositions. For instance, the truth of A ∧ B entails the truth of its subformulas A and B.

  A ∧ B true           A ∧ B true
  ────────── ∧E1       ────────── ∧E2
    A true               B true

Connectives that can be defined with introduction and elimination rules referring only to subformulas (or generic new formulas) are called orthogonal.

Introduction and elimination rules need to fit together. Introduction rules allow us to assemble evidence, and elimination rules allow us to use it, or to disassemble it. The elimination rules for a connective are adequate if we can retrieve by the elimination rules every piece of information we put in by the introduction rules, and only the information we put in. Adequacy can be formulated as local soundness and local completeness, i.e., the presence of certain proof transformations, detailed as follows.

Local soundness states that we can simplify any proof detour that consists of the introduction of a connective immediately followed by the elimination of that connective.
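The derivation trees themselves, with their open/closed distinction and prefix notation, can also be sketched as data. The following TypeScript encoding is our own hypothetical illustration (all names are ours): a node carries a rule name, the judgement it derives, and its subderivations, while an open premise is a leaf without a justifying rule.

```typescript
// A derivation is a tree of rule applications; an "open" leaf is a
// judgement still missing its justification.
type Derivation =
  | { tag: "rule"; name: string; concl: string; premises: Derivation[] }
  | { tag: "open"; concl: string };

// A derivation is closed iff every leaf is an axiom, i.e. a rule
// application with zero premises; an "open" leaf makes it open.
function isClosed(d: Derivation): boolean {
  return d.tag === "rule" && d.premises.every(isClosed);
}

// Prefix notation r0(r1, r2(r3, ...)) as in the text; open premises
// are printed with a question mark.
function prefix(d: Derivation): string {
  if (d.tag === "open") return "?" + d.concl;
  if (d.premises.length === 0) return d.name;
  return `${d.name}(${d.premises.map(prefix).join(", ")})`;
}

// The example derivation of J0 with open premises J4 and J5:
const d0: Derivation = {
  tag: "rule", name: "r0", concl: "J0", premises: [
    { tag: "rule", name: "r1", concl: "J1", premises: [] },
    { tag: "rule", name: "r2", concl: "J2", premises: [
      { tag: "rule", name: "r3", concl: "J3", premises: [] },
      { tag: "open", concl: "J4" },
      { tag: "open", concl: "J5" },
    ]},
  ],
};
```

Closing the derivation amounts to replacing each `open` leaf with a closed subtree, after which `isClosed` returns true.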
There are n · m different detour shapes for a connective with n introduction rules and m elimination rules. The removal of such a detour is also called β-reduction, sometimes qualified by the connective, e.g., β∧-reduction. In the case of conjunction, there are two possible detours that qualify for β-reduction:

    D1       D2               D1       D2
  A true   B true           A true   B true
  ─────────────── ∧I        ─────────────── ∧I
     A ∧ B true                A ∧ B true
     ────────── ∧E1            ────────── ∧E2
       A true                    B true

Both pieces of evidence (A true and B true) fed into the introduction rule ∧I can be recovered by the elimination rules ∧E1 and ∧E2. The β-reductions simplify these proof trees as follows:

    D1       D2
  A true   B true
  ─────────────── ∧I
     A ∧ B true          ─→β     D1
     ────────── ∧E1             A true
       A true

    D1       D2
  A true   B true
  ─────────────── ∧I
     A ∧ B true          ─→β     D2
     ────────── ∧E2             B true
       B true

These reductions show that the eliminations ∧E1/2 are not too strong: they do not produce more than we put in with the introduction rule ∧I. This is why we can cut the detours out.

Exercise 1. Invent a (locally) unsound elimination rule for conjunction, which does not allow the reduction of detours.

Local completeness, in contrast, expresses that the elimination rules are not too weak. The elimination rules for a connective should give us all the evidence needed to reconstruct a proof of the formula we eliminated. This means that any proof of a proposition should be presentable by introducing the principal connective of the proposition, where the premises of the introduction rule(s) are constructed via the elimination rules. Local completeness is witnessed by η-expansion, a proof transformation that introduces a detour of the form "elimination rules followed by introduction rule". For conjunction, η-expansion creates the following detour:

     D                       D               D
  A ∧ B true            A ∧ B true      A ∧ B true
              ─→η       ────────── ∧E1  ────────── ∧E2
                          A true          B true
                        ────────────────────────── ∧I
                                A ∧ B true

Exercise 2. Reflect on the fate of local completeness for conjunction when we drop one of the elimination rules.

1.4 Trivial proposition

With binary conjunction, we can express any finite conjunction except for the empty conjunction ⊤.
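Under the Curry-Howard reading that these notes develop, ∧I corresponds to pairing and ∧E1/∧E2 to the projections; β∧-reduction is then the familiar computation rule that projecting from a pair recovers exactly the component that was put in, and η-expansion rebuilds a pair from its projections. A sketch in TypeScript (our own illustration; the names are not from the text):

```typescript
// A proof of A ∧ B is a pair of a proof of A and a proof of B.
type And<A, B> = { fst: A; snd: B };

// ∧I: assemble evidence into a pair.
const pair = <A, B>(a: A, b: B): And<A, B> => ({ fst: a, snd: b });

// ∧E1 and ∧E2: disassemble the evidence via the projections.
const fst = <A, B>(p: And<A, B>): A => p.fst;
const snd = <A, B>(p: And<A, B>): B => p.snd;

// β-reduction: fst(pair(a, b)) computes to a, and snd(pair(a, b)) to b;
// the eliminations return no more than the introduction put in.
// η-expansion: any p : And<A, B> is interchangeable with pair(fst(p), snd(p));
// the eliminations return enough to rebuild the pair.
const etaExpand = <A, B>(p: And<A, B>): And<A, B> => pair(fst(p), snd(p));
```

A locally unsound elimination rule, as asked for in Exercise 1, would correspond to a projection whose result type is not recoverable from the pair's components alone.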
A proof of the empty conjunction asks us to provide proofs for all of its 0 conjuncts. Thus, we get an introduction rule without premises:

  ────── ⊤I
  ⊤ true

There is no elimination rule for ⊤: a proof of ⊤ is constructed from nothing, so there is no information contained in it that we could retrieve by elimination. As a consequence, there is no β-reduction. Local completeness is witnessed by the η-expansion:

     D
  ⊤ true    ─→η    ────── ⊤I
                   ⊤ true

This proof transformation replaces any derivation of the trivial proposition by the trivial derivation.

1.5 Hypothetical judgements

The elimination rule for implication is the well-known modus ponens: if A implies B and A holds, then B must hold as well.

  A ⇒ B true   A true
  ─────────────────── ⇒E
        B true

While, classically, implication A ⇒ B can be reduced to disjunction and negation ¬A ∨ B, the disjunction property would be immediately violated by such an interpretation. A universal truth such as "if aliens exist, there must be a habitable planet" does not allow us to decide whether there is a habitable planet or no aliens exist.

Instead, the implication A ⇒ B allows us to conclude B from the hypothesis A. This means that we can assume the truth of A to derive the truth of B. Let us consider the tautology (A ∧ B) ⇒ (B ∧ A). Our proof would first derive the conclusion B ∧ A from an assumed truth of A ∧ B, by the following open derivation:

  A ∧ B true        A ∧ B true
  ────────── ∧E2    ────────── ∧E1
    B true            A true
  ──────────────────────────── ∧I
          B ∧ A true

Then, the introduction of the implication (A ∧ B) ⇒ (B ∧ A) enables us to discharge the hypothesis A ∧ B and close the derivation.
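On the Curry-Howard side, a hypothetical proof of B from hypothesis A is a function from proofs of A to proofs of B: ⇒E (modus ponens) is function application, and discharging the hypothesis is λ-abstraction. The open derivation of (A ∧ B) ⇒ (B ∧ A) above then becomes the swap function on pairs, sketched here in TypeScript (our own illustration, reusing the pair encoding of conjunction):

```typescript
// Conjunction as pairing, as before (these names are ours).
type And<A, B> = { fst: A; snd: B };

// ⇒E (modus ponens): from a proof of A ⇒ B and a proof of A, obtain
// a proof of B by function application.
const modusPonens = <A, B>(f: (a: A) => B, a: A): B => f(a);

// Discharging the hypothesis A ∧ B yields a closed proof of
// (A ∧ B) ⇒ (B ∧ A): the function that swaps the components,
// built from the projections (∧E2, ∧E1) and pairing (∧I).
const swap = <A, B>(p: And<A, B>): And<B, A> => ({ fst: p.snd, snd: p.fst });
```

The body of `swap` mirrors the open derivation exactly: the parameter `p` plays the role of the hypothesis A ∧ B, and the returned pair is the ∧I step.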