Proving Theorems

In unit one, we called a statement that is true on every interpretation a tautology. In this unit, we will call such a statement a theorem, where a theorem is a wff that relies on NO assumptions. We will now learn how to construct proofs for theorems.

1. Proving Theorems

The statement "¬¬P → P" is a tautology. So, we should be able to construct a derivation of this statement which relies on ZERO assumptions. Such a derivation is called a proof. Here it is:

1    (1) ¬¬P         A
1    (2) P           1, ¬E
-    (3) ¬¬P → P     1, 2, →I

Using arrow-introduction after deriving "P" from "¬¬P", we have derived a statement in line (3) which relies on no assumptions. We have proved our first theorem:

T1: ⊢ ¬¬P → P

Let's do another one:

T2: ⊢ (¬Q → ¬P) → (P → Q)

To prove this theorem, we need to show that the antecedent entails the consequent. So, we'll need to assume the antecedent, and hope to use "→I" in the final line. Like this:

1    (1)   ¬Q → ¬P                  A
           ???
?    (n-1) P → Q                    ?
-    (n)   (¬Q → ¬P) → (P → Q)      ?, →I

Because we want to derive "P → Q" in line (n-1), we'll want to assume its antecedent "P", as well as the opposite of its consequent, "¬Q", and hope for a contradiction. Like this:

1    (1)   ¬Q → ¬P                  A
2    (2)   P                        Ass. (→I)
3    (3)   ¬Q                       Ass. (Red.)
           ???
?    (n-1) P → Q                    ?
-    (n)   (¬Q → ¬P) → (P → Q)      ?, →I

Let's work through the steps to see where the derivation above leads:

1        (1) ¬Q → ¬P                A
2        (2) P                      Ass. (→I)
3        (3) ¬Q                     Ass. (Red.)
1, 3     (4) ¬P                     1, 3, →E
1, 2, 3  (5) P ∧ ¬P                 2, 4, ∧I
1, 2     (6) ¬¬Q                    3, 5, ¬I
1, 2     (7) Q                      6, ¬E
1        (8) P → Q                  2, 7, →I
-        (9) (¬Q → ¬P) → (P → Q)    1, 8, →I

Here, by using a reductio, we were able to show that P entails Q, so that we could derive "P → Q" by "→I". Then, since we have shown that (1) entails (8), we were able to obtain the theorem in (9) by another "→I".

Let's do one more:

T3: ⊢ P ∨ ¬P

This is known as the Law of Excluded Middle (LEM). Obviously, since this theorem is not a conditional, we're not going to be able to use "→I" to obtain it. So, we're going to need to try for a reductio. Let's begin by assuming the negation of the theorem, then:

1    (1)   ¬(P ∨ ¬P)       Ass. (Red.)
           ???
?    (n-1) ¬¬(P ∨ ¬P)      1, ?, ¬I
-    (n)   P ∨ ¬P          ?, ¬E

It is not obvious how we are supposed to derive a contradiction from (1). It turns out that we need to ALSO assume "P" for a reductio, deriving a contradiction from "P" in order to obtain "¬P". And THEN we can derive ANOTHER contradiction from "¬P" in order to obtain line (n-1). Like this:

1     (1) ¬(P ∨ ¬P)                  Ass. (Red.)
2     (2) P                          Ass. (Red.)
2     (3) P ∨ ¬P                     2, ∨I
1, 2  (4) (P ∨ ¬P) ∧ ¬(P ∨ ¬P)       1, 3, ∧I
1     (5) ¬P                         2, 4, ¬I (discharging line 2)
1     (6) P ∨ ¬P                     5, ∨I
1     (7) (P ∨ ¬P) ∧ ¬(P ∨ ¬P)       1, 6, ∧I
-     (8) ¬¬(P ∨ ¬P)                 1, 7, ¬I (discharging line 1)
-     (9) P ∨ ¬P                     8, ¬E

2. Strategies for Proving Theorems

The route that we took to prove T3 was probably not very obvious. Here are some tips for proving theorems:

(a) If the main operator of the theorem to be proved is a "¬" or a "∨", the proof will use a reductio. Assume the theorem's opposite and try to derive a contradiction.

(b) If the main operator of the theorem to be proved is a "→", the proof will use arrow-introduction. Assume the antecedent of the theorem, and try to derive the theorem's consequent.

(c) If the main operator of the theorem to be proved is a "↔", the proof will use TWO arrow-introductions. One at a time, assume EACH of the two antecedents which compose the two conditionals of the bi-conditional, and try to derive the consequent of each.

(d) If the main operator of the theorem to be proved is a "∧", then each conjunct is itself a theorem. One at a time, prove each of the conjunction's two conjuncts, and then use "∧I" at the end to obtain the target theorem.
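If you want to check T1–T3 mechanically, here is a minimal sketch in Lean 4 (not part of the original lesson). Lean's built-in classical axioms, "Classical.byContradiction" and "Classical.em", play roughly the role that "¬E" and the reductio play in our system; the names "t1"–"t3" are just illustrative labels.

```lean
-- Minimal Lean 4 sketch of T1–T3 (names are illustrative, not from the lesson).
-- Classical.byContradiction and Classical.em stand in for ¬E and the reductio.

theorem t1 (P : Prop) : ¬¬P → P :=
  fun hnnP => Classical.byContradiction hnnP

theorem t2 (P Q : Prop) : (¬Q → ¬P) → (P → Q) :=
  fun h hP => Classical.byContradiction (fun hnQ => h hnQ hP)

-- T3, the Law of Excluded Middle, is available directly.
theorem t3 (P : Prop) : P ∨ ¬P :=
  Classical.em P
```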
3. Theorem Introduction

Knowing which statements are theorems can be helpful. We can make use of them in our other derivations as a sort of "shortcut". For, we are now permitted to introduce any theorem that we have proved as a line in our derivations, one which rests on NO other premises. For instance, let's prove this sequent:

S86: P → Q , ¬P → R ⊢ Q ∨ R

Now, it may not be immediately obvious how to obtain "Q ∨ R" from the premises, EXCEPT that we have just proved that "P ∨ ¬P" is a theorem. These two disjuncts just happen to be the antecedents of our two conditionals above. So, we can actually introduce that theorem as an assumption to help us. Like this:

1     (1) P → Q       A
2     (2) ¬P → R      A
-     (3) P ∨ ¬P      TI (LEM)   (T3 is the "Law of Excluded Middle")
          ???
1, 2  (n) Q ∨ R       ?

On line (3), we've introduced the theorem T3. We make a note of this by writing "TI" for "Theorem Introduction" and citing the number of the theorem. To the left, line (3) rests on NO assumptions, because T3 is a theorem. It turns out that we can assume each of the disjuncts of line (3), "P" and "¬P", and see that they BOTH entail "Q ∨ R". Like this:

1     (1)  P → Q           A
2     (2)  ¬P → R          A
-     (3)  P ∨ ¬P          TI (LEM)
4     (4)  P               Ass. (→I)
1, 4  (5)  Q               1, 4, →E
1, 4  (6)  Q ∨ R           5, ∨I
1     (7)  P → (Q ∨ R)     4, 6, →I
8     (8)  ¬P              Ass. (→I)
2, 8  (9)  R               2, 8, →E
2, 8  (10) Q ∨ R           9, ∨I
2     (11) ¬P → (Q ∨ R)    8, 10, →I
1, 2  (12) Q ∨ R           3, 7, 11, ∨E

Here, we've introduced a theorem, and shown that—based on premises (1) and (2)—both disjuncts of the theorem (LEM) entail "Q ∨ R". We then used disjunction-elimination to obtain the conclusion in line (12).

4. Sequent Introduction

We will also now be permitted to use a similar shortcut to introduce previously proved SEQUENTS rather than THEOREMS. To illustrate, let's construct a derivation of the following sequent:

S87: ¬¬{[(P ∧ Q) → R] ∧ S} ⊢ [P → (Q → R)] ∨ T

It turns out that recalling a previous sequent that we already proved last week (in the lesson, "Derivations, part 2") will be helpful. Here is that previous sequent:

S20: (P ∧ Q) → R ⊢ P → (Q → R)   (This sequent is generally called "Exportation")

The derivation we're aiming for begins like this:

1    (1) ¬¬{[(P ∧ Q) → R] ∧ S}    A
1    (2) [(P ∧ Q) → R] ∧ S        1, ¬E
1    (3) (P ∧ Q) → R              2, ∧E
         ???
1    (n) [P → (Q → R)] ∨ T        ?

So far, we've just used "¬E" and "∧E" to obtain (3) from (1). Now look at sequent S20. Now look at the conclusion to be derived in line (n). Notice any similarities? Now, we COULD re-do ALL of the work that we did in our derivation of S20. But, as a shortcut, we can just skip straight to the conclusion of that derivation, and cite the sequent. Like this:

1    (1) ¬¬{[(P ∧ Q) → R] ∧ S}    A
1    (2) [(P ∧ Q) → R] ∧ S        1, ¬E
1    (3) (P ∧ Q) → R              2, ∧E
1    (4) P → (Q → R)              3, SI (Exp)   (S20 is called "Exportation")
1    (5) [P → (Q → R)] ∨ T        4, ∨I

On line (4), we just write "SI" for "Sequent Introduction" and cite the name or number of the sequent we're making use of. This way, rather than going back through all of the 6 lines that we already went through to derive (4) from (3) when we were proving S20/Exp, we can just cite S20/Exp and be done with it, going straight from (3) to (4) in one line. Nice!
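The same two shortcuts can be checked in Lean 4. Below is a sketch (not from the lesson, and the names "s86", "s20", "s87" are illustrative): "s86" mirrors the LEM-based derivation above, proving that each disjunct of "P ∨ ¬P" entails the conclusion and then eliminating the disjunction, while "s87" reuses "s20" exactly the way sequent introduction reuses Exportation at line (4).

```lean
-- Lean 4 sketch of S86, S20, and S87 (names are illustrative, not from the lesson).

-- S86: P → Q, ¬P → R ⊢ Q ∨ R, mirroring the LEM-based derivation above:
-- each disjunct of P ∨ ¬P is shown to entail Q ∨ R, then the disjunction is eliminated.
theorem s86 (P Q R : Prop) (h1 : P → Q) (h2 : ¬P → R) : Q ∨ R :=
  (Classical.em P).elim
    (fun hP  => Or.inl (h1 hP))
    (fun hnP => Or.inr (h2 hnP))

-- S20 ("Exportation"): (P ∧ Q) → R ⊢ P → (Q → R).
theorem s20 (P Q R : Prop) (h : P ∧ Q → R) : P → (Q → R) :=
  fun hP hQ => h ⟨hP, hQ⟩

-- S87 reuses s20, just as the derivation above cites SI (Exp) at line (4).
theorem s87 (P Q R S T : Prop)
    (h : ¬¬((P ∧ Q → R) ∧ S)) : (P → (Q → R)) ∨ T :=
  Or.inl (s20 P Q R (Classical.byContradiction h).left)
```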
5. Substitution Instances

1. Theorem Introduction – Substitution Instances:

When making use of theorem or sequent introduction, often the statements that you are using will not look EXACTLY like those of the theorem or sequent you want to use—though they will have the same FORM. For instance, it should be obvious that all of the following statements have the same form as "P ∨ ¬P":

Original statement: P ∨ ¬P

New statement                    Substitution
Q ∨ ¬Q                           P = Q
(P ∧ Q) ∨ ¬(P ∧ Q)               P = P ∧ Q
(S → T) ∨ ¬(S → T)               P = S → T

It turns out that we are permitted to introduce ANY of the statements above as theorems which rely on no other premises. But, since they do not look exactly like "P ∨ ¬P", we will introduce them as "TISI" for "Theorem Introduction – Substitution Instance" rather than simply "TI". This is because each of the statements above takes "P ∨ ¬P" and substitutes in the wffs listed under "Substitution" for "P". For instance, take the following theorem, which we have just proved, above:

T2: ⊢ (¬Q → ¬P) → (P → Q)

Suppose that you want to introduce a theorem which has the same FORM as the above, but actually looks like this:

[¬(C ∨ D) → ¬(A ∧ B)] → [(A ∧ B) → (C ∨ D)]

The statement above IS actually a substitution-instance of theorem T2.
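In Lean terms, substitution instances come for free: a theorem proved for arbitrary propositions can simply be applied to compound formulas. The sketch below (not from the lesson) instantiates LEM at "P ∧ Q" and the T2 form at "A ∧ B" and "C ∨ D"; the particular inner connectives in the second example are an illustrative choice, not necessarily the lesson's own.

```lean
-- Lean 4 sketch of substitution instances (examples are illustrative).

-- LEM instantiated at the compound formula P ∧ Q:
example (P Q : Prop) : (P ∧ Q) ∨ ¬(P ∧ Q) :=
  Classical.em (P ∧ Q)

-- An instance of the T2 form with P replaced by A ∧ B and Q replaced by C ∨ D
-- (these inner connectives are an assumed choice):
example (A B C D : Prop) :
    (¬(C ∨ D) → ¬(A ∧ B)) → ((A ∧ B) → (C ∨ D)) :=
  fun h hab => Classical.byContradiction (fun hncd => h hncd hab)
```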