From Wikipedia, the free encyclopedia

Contents

1 Absorption (logic)
   1.1 Formal notation
   1.2 Examples
   1.3 Proof by truth table
   1.4 Formal proof
   1.5 References

2 Associative property
   2.1 Definition
   2.2 Generalized associative law
   2.3 Examples
   2.4 Propositional logic
      2.4.1 Rule of replacement
      2.4.2 Truth functional connectives
   2.5 Non-associativity
      2.5.1 Nonassociativity of floating point calculation
      2.5.2 Notation for non-associative operations
   2.6 See also
   2.7 References

3 Axiom
   3.1 Etymology
   3.2 Historical development
      3.2.1 Early Greeks
      3.2.2 Modern development
      3.2.3 Other
   3.3 Mathematical logic
      3.3.1 Logical axioms
      3.3.2 Non-logical axioms
      3.3.3 Role in mathematical logic
      3.3.4 Further discussion
   3.4 See also
   3.5 References
   3.6 Further reading
   3.7 External links

4 Axiom schema
   4.1 Formal definition
   4.2 Finite axiomatization
   4.3 Examples
   4.4 Finitely axiomatized theories
   4.5 In higher-order logic
   4.6 See also
   4.7 References

5 Axiomatic system
   5.1 Properties
   5.2 Relative consistency
   5.3 Models
   5.4 Axiomatic method
      5.4.1 History
      5.4.2 Issues
      5.4.3 Example: The Peano axiomatization of natural numbers
      5.4.4 Axiomatization
   5.5 See also
   5.6 References

6 Biconditional elimination
   6.1 Formal notation
   6.2 See also
   6.3 References

7 Biconditional introduction
   7.1 Formal notation
   7.2 References

8 Commutative property
   8.1 Common uses
   8.2 Mathematical definitions
   8.3 Examples
      8.3.1 Commutative operations in everyday life
      8.3.2 Commutative operations in mathematics
      8.3.3 Noncommutative operations in everyday life
      8.3.4 Noncommutative operations in mathematics
   8.4 History and etymology
   8.5 Propositional logic
      8.5.1 Rule of replacement
      8.5.2 Truth functional connectives
   8.6 Set theory
   8.7 Mathematical structures and commutativity
   8.8 Related properties
      8.8.1 Associativity
      8.8.2 Symmetry
   8.9 Non-commuting operators in quantum mechanics
   8.10 See also
   8.11 Notes
   8.12 References
      8.12.1 Books
      8.12.2 Articles
      8.12.3 Online resources

9
   9.1 Formal notation
   9.2 References

10
   10.1 Formal notation
   10.2 References

11
   11.1 Formal notation
   11.2 Variable English
   11.3 Natural language example
   11.4 References

12 De Morgan’s laws
   12.1 Formal notation
      12.1.1 form
      12.1.2 Set theory and Boolean algebra
      12.1.3 Engineering
      12.1.4 Text searching
   12.2 History
   12.3 Informal proof
      12.3.1 Negation of a disjunction
      12.3.2 Negation of a conjunction
   12.4 Formal proof
   12.5 Extensions
   12.6 See also
   12.7 References
   12.8 External links

13 Deduction theorem
   13.1 Examples of deduction
   13.2 Virtual rules of inference
   13.3 Conversion from proof using the deduction meta-theorem to axiomatic proof
   13.4 The deduction theorem in predicate logic
   13.5 Example of conversion
   13.6 Paraconsistent deduction theorem
   13.7 The resolution theorem
   13.8 See also
   13.9 Notes
   13.10 References
   13.11 External links

14
   14.1 Formal notation
   14.2 Natural language example
   14.3 Proof
   14.4 Example proof
   14.5 References
   14.6 Bibliography
   14.7 External links

15
   15.1 Formal notation
   15.2 See also
   15.3 References

16
   16.1 Formal notation
   16.2 References

17
   17.1 Formal notation
   17.2 Natural language examples
   17.3 Inclusive and exclusive disjunction
   17.4 Related argument forms
   17.5 References

18 Distributive property
   18.1 Definition
   18.2 Meaning
   18.3 Examples
      18.3.1 Real numbers
      18.3.2 Matrices
      18.3.3 Other examples
   18.4 Propositional logic
      18.4.1 Rule of replacement
      18.4.2 Truth functional connectives
   18.5 Distributivity and rounding
   18.6 Distributivity in rings
   18.7 Generalizations of distributivity
      18.7.1 Notions of antidistributivity
   18.8 Notes
   18.9 References
   18.10 External links

19 Double negation
   19.1 Double negative elimination
      19.1.1 Formal notation
   19.2 See also
   19.3 Footnotes
   19.4 References

20 Existential generalization
   20.1 Quine
   20.2 See also
   20.3 References

21 Existential instantiation
   21.1 See also
   21.2 References

22 (logic)
   22.1 Formal notation
   22.2 Natural language
      22.2.1 Truth values
      22.2.2 Example
   22.3 Relation to functions
   22.4 References

23 First principle
   23.1 First principles in formal logic
   23.2 Philosophy in general
   23.3 Aristotle’s contribution
   23.4 Descartes
   23.5 In physics
   23.6 Notes
   23.7 See also
   23.8 External links

24 Formal ethics
   24.1 Symbolic representation
   24.2 Axioms
   24.3 Notes
   24.4 Further reading

25 Formal proof
   25.1 Background
      25.1.1 Formal language
      25.1.2 Formal grammar
      25.1.3 Formal systems
      25.1.4 Interpretations
   25.2 See also
   25.3 References
   25.4 External links

26 Formal system
   26.1 Related subjects
      26.1.1 Logical system
      26.1.2 Deductive system
      26.1.3 Formal proofs
      26.1.4 Formal language
      26.1.5 Formal grammar
   26.2 See also
   26.3 References
   26.4 Further reading
   26.5 External links

27
   27.1 Formal notation
   27.2 See also
   27.3 References
   27.4 External links

28 List of formal systems
   28.1 Mathematical
   28.2 Other formal systems
   28.3 See also

29 Material implication (rule of inference)
   29.1 Formal notation
   29.2 Example
   29.3 References

30
   30.1 References

31 Modus ponens
   31.1 Formal notation
   31.2 Explanation
   31.3 Justification via truth table
   31.4 See also
   31.5 References
   31.6 Sources
   31.7 External links

32 Modus tollens
   32.1 Formal notation
   32.2 Explanation
   32.3 Relation to modus ponens
   32.4 Justification via truth table
   32.5 Formal proof
      32.5.1 Via disjunctive syllogism
      32.5.2 Via reductio ad absurdum
   32.6 See also
   32.7 Notes
   32.8 External link

33
   33.1 Formal notation
   33.2 External links
   33.3 References

34 Physical symbol system
   34.1 Examples of physical symbol systems
   34.2 Arguments in favor of the physical symbol system hypothesis
      34.2.1 Newell and Simon
      34.2.2 Turing completeness
   34.3 Criticism
      34.3.1 Dreyfus and the primacy of unconscious skills
      34.3.2 Searle and his Chinese room
      34.3.3 Brooks and the roboticists
      34.3.4 Connectionism
      34.3.5 Embodied philosophy
   34.4 See also
   34.5 Notes
   34.6 References

35 Predicate logic
   35.1 See also
   35.2 Footnotes
   35.3 References

36 Proof (truth)
   36.1 On proof
   36.2 See also
   36.3 References

37 Rule of inference
   37.1 The standard form of rules of inference
   37.2 Axiom schemas and axioms
   37.3 Example: Hilbert systems for two propositional logics
   37.4 Admissibility and derivability
   37.5 See also
   37.6 References

38 Rule of replacement
   38.1 References

39 Tautology (rule of inference)
   39.1 Relation to tautology
   39.2 Formal notation
   39.3 References

40 Transposition (logic)
   40.1 Formal notation
   40.2 Traditional logic
      40.2.1 Form of transposition
      40.2.2 Sufficient condition
      40.2.3 Necessary condition
      40.2.4 Grammatically speaking
      40.2.5 Relationship of
      40.2.6 Transposition and the method of
      40.2.7 Differences between transposition and contraposition
   40.3 Transposition in mathematical logic
   40.4 Proof
   40.5 See also
   40.6 References
   40.7 Further reading
   40.8 External links

41 Turnstile (symbol)
   41.1 Interpretations
   41.2 See also
   41.3 Notes
   41.4 References

42 Universal generalization
   42.1 Generalization with hypotheses
   42.2 Example of a proof
   42.3 See also
   42.4 References

43 Universal instantiation
   43.1 Quine
   43.2 See also
   43.3 References
   43.4 Text and image sources, contributors, and licenses
      43.4.1 Text
      43.4.2 Images
      43.4.3 Content license

Chapter 1

Absorption (logic)

Absorption is a valid argument form and rule of inference of propositional logic.[1][2] The rule states that if P implies Q , then P implies P and Q . The rule makes it possible to introduce conjunctions to proofs. It is called the law of absorption because the term Q is “absorbed” by the term P in the consequent.[3] The rule can be stated:

P → Q ∴ P → (P ∧ Q)

where the rule is that wherever an instance of " P → Q " appears on a line of a proof, " P → (P ∧ Q) " can be placed on a subsequent line.

1.1 Formal notation

The absorption rule may be expressed as a sequent:

P → Q ⊢ P → (P ∧ Q)

where ⊢ is a metalogical symbol meaning that P → (P ∧ Q) is a syntactic consequence of P → Q in some logical system. The rule may also be expressed as a truth-functional tautology or theorem of propositional logic; it was stated in that form by Russell and Whitehead in Principia Mathematica as:

(P → Q) ↔ (P → (P ∧ Q))

where P , and Q are propositions expressed in some formal system.

1.2 Examples

If it will rain, then I will wear my coat. Therefore, if it will rain then it will rain and I will wear my coat.


1.3 Proof by truth table
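Since absorption is a truth-functional claim, it can be checked mechanically by enumerating all truth assignments. A minimal sketch in Python (the helper name `implies` is ours, not the article's):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Absorption: (P -> Q) is equivalent to (P -> (P and Q)),
# so the rule preserves truth on every row of the truth table.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(p, p and q)

print("absorption verified on all 4 truth assignments")
```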

1.4 Formal proof

1.5 References

[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 362.

[2] http://www.philosophypages.com/lg/e11a.htm

[3] Russell and Whitehead, Principia Mathematica.

Chapter 2

Associative property

This article is about associativity in mathematics. For associativity in the central processing unit memory cache, see CPU cache. For associativity in programming languages, see operator associativity. “Associative” and “non-associative” redirect here. For associative and non-associative learning, see Learning#Types.

In mathematics, the associative property[1] is a property of some binary operations. In propositional logic, associa- tivity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value. Consider the following equations:

(2 + 3) + 4 = 2 + (3 + 4) = 9

2 × (3 × 4) = (2 × 3) × 4 = 24.

Even though the parentheses were rearranged, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that “addition and multiplication of real numbers are associative operations”.

Associativity is not to be confused with commutativity, which addresses whether a × b = b × a.

Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. In contrast to its theoretical counterpart, the addition of floating point numbers in computer science is not associative, and is an important source of rounding error.

2.1 Definition

Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:

(x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S.

Here, ∗ stands for the symbol of the operation, which may be any symbol, or even no symbol at all (juxtaposition), as for multiplication:

(xy)z = x(yz) = xyz for all x, y, z in S.

The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)).

3 4 CHAPTER 2. ASSOCIATIVE PROPERTY

Associative binary operation ∗ on the set S.

2.2 Generalized associative law

If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression.[2] This is called the generalized associative law. For instance, a product of four elements may be written in five possible ways:

1. ((ab)c)d

2. (ab)(cd)

3. (a(bc))d

4. a((bc)d)

5. a(b(cd))

If the product operation is associative, the generalized associative law says that all these formulas will yield the same result, making the parentheses unnecessary. Thus “the” product can be written unambiguously as

abcd.

As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
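The five groupings of four elements can be compared directly in code. A short Python sketch, using addition (associative) and subtraction (non-associative) as the operation:

```python
import operator

# The five ways to parenthesize a product of four elements.
groupings = [
    lambda op, a, b, c, d: op(op(op(a, b), c), d),  # ((ab)c)d
    lambda op, a, b, c, d: op(op(a, b), op(c, d)),  # (ab)(cd)
    lambda op, a, b, c, d: op(op(a, op(b, c)), d),  # (a(bc))d
    lambda op, a, b, c, d: op(a, op(op(b, c), d)),  # a((bc)d)
    lambda op, a, b, c, d: op(a, op(b, op(c, d))),  # a(b(cd))
]

add_values = {g(operator.add, 2, 3, 4, 5) for g in groupings}
sub_values = {g(operator.sub, 2, 3, 4, 5) for g in groupings}
print(add_values)  # {14}: every grouping agrees for an associative operation
print(sub_values)  # several distinct values: subtraction is not associative
```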

2.3 Examples

Some examples of associative operations include the following.

• The concatenation of the three strings “hello”, " ", “world” can be computed by concatenating the first two strings (giving “hello ") and appending the third string (“world”), or by joining the second and third string (giving " world”) and concatenating the first string (“hello”) with the result. The two methods produce the same result; string concatenation is associative (but not commutative).

• In arithmetic, addition and multiplication of real numbers are associative; i.e.,

[Figure: In the absence of the associative property, the fourteen possible parenthesizations of five factors a, b, c, d, e, ranging from a(b(c(de))) to (((ab)c)d)e, form a Tamari lattice of order four and can denote different products.]

(x + y) + z = x + (y + z) = x + y + z
(xy)z = x(yz) = xyz
for all x, y, z ∈ R.

Because of associativity, the grouping parentheses can be omitted without ambiguity.

(x + y) + z = x + (y + z)

The addition of real numbers is associative.

• Addition and multiplication of complex numbers and quaternions are associative. Addition of octonions is also associative, but multiplication of octonions is non-associative.

• The greatest common divisor and least common multiple functions act associatively.

gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z)
lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z)
for all x, y, z ∈ Z.

• Taking the intersection or the union of sets:

(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C
for all sets A, B, C.

• If M is some set and S denotes the set of all functions from M to M, then the operation of functional composition on S is associative:

(f ◦ g) ◦ h = f ◦ (g ◦ h) = f ◦ g ◦ h for all f, g, h ∈ S.

• Slightly more generally, given four sets M, N, P and Q, with h: M → N, g: N → P, and f: P → Q, then

(f ◦ g) ◦ h = f ◦ (g ◦ h) = f ◦ g ◦ h

as before. In short, composition of maps is always associative.

• Consider a set with three elements, A, B, and C, equipped with a binary operation defined by a Cayley table. The operation so defined is associative; thus, for example, A(BC) = (AB)C = A. It is, however, not commutative.

• Because matrices represent linear transformation functions, with matrix multiplication representing functional composition, one can immediately conclude that matrix multiplication is associative.
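The three-element example above can be checked exhaustively once the table is written out. One operation with exactly those properties is left projection, x ∗ y = x; the table below is a hypothetical stand-in for illustration, not necessarily the article's own table:

```python
# Hypothetical Cayley table for left projection on {A, B, C}: x * y = x.
# A stand-in illustrating an associative but non-commutative operation;
# the article's original table may differ.
table = {
    ('A', 'A'): 'A', ('A', 'B'): 'A', ('A', 'C'): 'A',
    ('B', 'A'): 'B', ('B', 'B'): 'B', ('B', 'C'): 'B',
    ('C', 'A'): 'C', ('C', 'B'): 'C', ('C', 'C'): 'C',
}

def op(x, y):
    return table[(x, y)]

elems = ['A', 'B', 'C']
is_associative = all(op(op(x, y), z) == op(x, op(y, z))
                     for x in elems for y in elems for z in elems)
is_commutative = all(op(x, y) == op(y, x) for x in elems for y in elems)
print(is_associative, is_commutative)          # True False
print(op('A', op('B', 'C')), op(op('A', 'B'), 'C'))  # A A
```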

2.4 Propositional logic

2.4.1 Rule of replacement

In standard truth-functional propositional logic, association[3][4] (also called associativity[5]) refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules are:

(P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ R) and

(P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ R), where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

2.4.2 Truth functional connectives

Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following are truth-functional tautologies. Associativity of disjunction:

(P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ R)

((P ∨ Q) ∨ R) ↔ (P ∨ (Q ∨ R)) Associativity of conjunction:

((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R))

(P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ R) Associativity of equivalence:

((P ↔ Q) ↔ R) ↔ (P ↔ (Q ↔ R))

(P ↔ (Q ↔ R)) ↔ ((P ↔ Q) ↔ R)
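Each of these tautologies can be confirmed by brute force over the eight truth assignments. A quick Python check (`iff` is our name for the biconditional):

```python
from itertools import product

def iff(a, b):
    """Biconditional: true exactly when both sides agree."""
    return a == b

for p, q, r in product([False, True], repeat=3):
    assert ((p or q) or r) == (p or (q or r))      # disjunction
    assert ((p and q) and r) == (p and (q and r))  # conjunction
    assert iff(iff(p, q), r) == iff(p, iff(q, r))  # equivalence

print("disjunction, conjunction and equivalence are associative")
```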

2.5 Non-associativity

A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically,

(x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z ∈ S.

For such an operation the order of evaluation does matter. For example:

• Subtraction

(5 − 3) − 2 ≠ 5 − (3 − 2)

• Division

(4/2)/2 ≠ 4/(2/2)

• Exponentiation

2^(1^2) ≠ (2^1)^2

Also note that infinite sums are not generally associative; for example:

(1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + ... = 0

whereas

1 + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + ... = 1

The study of non-associative structures arises from reasons somewhat different from the mainstream of classical algebra. One area within non-associative algebra that has grown very large is that of Lie algebras. There the associative law is replaced by the Jacobi identity. Lie algebras abstract the essential nature of infinitesimal transformations, and have become ubiquitous in mathematics. There are other specific types of non-associative structures that have been studied in depth; these tend to come from some specific applications or areas such as combinatorial mathematics. Other examples are Quasigroup, Quasifield, Non-associative ring, Non-associative algebra and Commutative non-associative magmas.

2.5.1 Nonassociativity of floating point calculation

In mathematics, addition and multiplication of real numbers is associative. By contrast, in computer science, the addition and multiplication of floating point numbers is not associative, as rounding errors are introduced when dissimilar-sized values are joined together.[6] To illustrate this, consider a floating point representation with a 4-bit mantissa:

(1.000₂×2^0 + 1.000₂×2^0) + 1.000₂×2^4 = 1.000₂×2^1 + 1.000₂×2^4 = 1.001₂×2^4
1.000₂×2^0 + (1.000₂×2^0 + 1.000₂×2^4) = 1.000₂×2^0 + 1.000₂×2^4 = 1.000₂×2^4

Even though most computers compute with 24 or 53 bits of mantissa,[7] this is an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.[8][9]
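The same effect is easy to reproduce with IEEE-754 doubles. A short Python sketch, including a compensated (Kahan) summation for comparison:

```python
# Grouping changes the result: 1e-16 is lost when added to 1.0 directly,
# but two of them combined first are large enough to survive rounding.
a, b, c = 1e-16, 1e-16, 1.0
print((a + b) + c == a + (b + c))  # False

def kahan_sum(xs):
    """Compensated summation: carries the low-order bits lost to rounding."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

xs = [1.0] + [1e-16] * 10
print(sum(xs))        # plain left-to-right sum rounds the small terms away
print(kahan_sum(xs))  # the compensated sum retains their contribution
```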

2.5.2 Notation for non-associative operations

Main article: Operator associativity

In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression. However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses. A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,

x ∗ y ∗ z = (x ∗ y) ∗ z
w ∗ x ∗ y ∗ z = ((w ∗ x) ∗ y) ∗ z
etc.
for all w, x, y, z ∈ S

while a right-associative operation is conventionally evaluated from right to left:

x ∗ y ∗ z = x ∗ (y ∗ z)
w ∗ x ∗ y ∗ z = w ∗ (x ∗ (y ∗ z))
etc.
for all w, x, y, z ∈ S

Both left-associative and right-associative operations occur. Left-associative operations include the following:

• Subtraction and division of real numbers:

x − y − z = (x − y) − z for all x, y, z ∈ R; x/y/z = (x/y)/z for all x, y, z ∈ R with y ≠ 0, z ≠ 0.

• Function application:

(f x y) = ((f x) y)

This notation can be motivated by the currying isomorphism.

Right-associative operations include the following:

• Exponentiation of real numbers:

x^y^z = x^(y^z).

The reason exponentiation is right-associative is that a repeated left-associative exponentiation operation would be less useful. Multiple appearances could (and would) be rewritten with multiplication:

(x^y)^z = x^(yz).

• Function definition

Z → Z → Z = Z → (Z → Z)
x ↦ y ↦ x − y = x ↦ (y ↦ x − y)

Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.

Non-associative operations for which no conventional evaluation order is defined include the following.

• Taking the Cross product of three vectors:

⃗a × (⃗b × ⃗c) ≠ (⃗a ×⃗b) × ⃗c for some ⃗a,⃗b,⃗c ∈ R3

• Taking the pairwise average of real numbers:

((x + y)/2 + z)/2 ≠ (x + (y + z)/2)/2 for all x, y, z ∈ R with x ≠ z.

• Taking the relative complement of sets: (A\B)\C is not the same as A\(B\C). (Compare material nonimplication in logic.)
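The relative-complement case can be seen directly with set difference in Python:

```python
# Set difference is not associative: the grouping decides
# whether the elements of C are removed before or after.
A, B, C = {1, 2, 3}, {2, 3}, {3}
print((A - B) - C)  # {1}
print(A - (B - C))  # {1, 3}
```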

2.6 See also

• Light’s associativity test

• A semigroup is a set with a closed associative binary operation.

• Commutativity and distributivity are two other frequently discussed properties of binary operations.

• Power associativity, alternativity and N-ary associativity are weak forms of associativity.

2.7 References

[1] Thomas W. Hungerford (1974). Algebra (1st ed.). Springer. p. 24. ISBN 0387905189. Definition 1.1 (i) a(bc) = (ab)c for all a, b, c in G.

[2] Durbin, John R. (1992). Modern Algebra: an Introduction (3rd ed.). New York: Wiley. p. 78. ISBN 0-471-51001-7. “If a1, a2, . . . , an (n ≥ 2) are elements of a set with an associative operation, then the product a1a2 . . . an is unambiguous; that is, the same element will be obtained regardless of how parentheses are inserted in the product.”

[3] Moore and Parker

[4] Copi and Cohen

[5] Hurley

[6] Knuth, Donald, The Art of Computer Programming, Volume 3, section 4.2.2

[7] IEEE Computer Society (August 29, 2008). “IEEE Standard for Floating-Point Arithmetic”. IEEE. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5753-5. IEEE Std 754-2008.

[8] Villa, Oreste; Chavarría-mir, Daniel; Gurumoorthi, Vidhya; Márquez, Andrés; Krishnamoorthy, Sriram, Effects of Floating- Point non-Associativity on Numerical Computations on Massively Multithreaded Systems (PDF), retrieved 2014-04-08

[9] Goldberg, David, “What Every Computer Scientist Should Know About Floating-Point Arithmetic” (PDF), ACM Computing Surveys 23 (1): 5–48, doi:10.1145/103162.103163, retrieved 2014-04-08

Chapter 3

Axiom

This article is about logical propositions. For other uses, see Axiom (disambiguation). “Axiomatic” redirects here. For other uses, see Axiomatic (disambiguation). “Postulation” redirects here. For the term in algebraic geometry, see Postulation (algebraic geometry).

An axiom or postulate is a premise or starting point of reasoning. As classically conceived, an axiom is a premise so evident as to be accepted as true without controversy.[1] The word comes from the Greek axíōma (ἀξίωμα) 'that which is thought worthy or fit' or 'that which commends itself as evident.'[2][3] As used in modern logic, an axiom is simply a premise or starting point for reasoning.[4] What it means for an axiom, or any mathematical statement, to be “true” is a central question in the philosophy of mathematics, with modern mathematicians holding a multitude of different opinions.

In mathematics, the term axiom is used in two related but distinguishable senses: “logical axioms” and “non-logical axioms”. Logical axioms are usually statements that are taken to be true within the system of logic they define (e.g., (A and B) implies A), while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, “axiom”, “postulate”, and “assumption” may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory. As modern mathematics admits multiple, equally “true” systems of logic, precisely the same thing must be said for logical axioms: they both define and are specific to the particular system of logic that is being invoked. To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there are typically multiple ways to axiomatize a given mathematical domain.

In both senses, an axiom is any mathematical statement that serves as a starting point from which other statements are logically derived.
Within the system they define, axioms (unless redundant) cannot be derived by principles of deduction, nor are they demonstrable by mathematical proofs, simply because they are starting points; there is nothing else from which they logically follow, otherwise they would be classified as theorems. However, an axiom in one system may be a theorem in another, and vice versa.

3.1 Etymology

The word “axiom” comes from the Greek word ἀξίωμα (axioma), a verbal noun from the verb ἀξιόειν (axioein), meaning “to deem worthy”, but also “to require”, which in turn comes from ἄξιος (axios), meaning “being in balance”, and hence “having (the same) value (as)", “worthy”, “proper”. Among the ancient Greek philosophers an axiom was a claim which could be seen to be true without any need for proof.

The root meaning of the word 'postulate' is to 'demand'; for instance, Euclid demands of us that we agree that some things can be done, e.g. any two points can be joined by a straight line, etc.[5]

Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid’s books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property”.[6] Boethius translated 'postulate' as petitio and called the axioms notiones communes, but in later manuscripts this usage was not always strictly kept.


3.2 Historical development

3.2.1 Early Greeks

The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, if we are talking about mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present-day mathematician than they did for Aristotle and Euclid.

The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle’s Posterior Analytics is a definitive exposition of the classical view.

An “axiom”, in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that

When an equal amount is taken from equals, an equal amount results.

At the foundation of the various sciences lay certain additional hypotheses which were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their truth had to be established by means of real-world experience. Indeed, Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates.[7]

The classical approach is well illustrated by Euclid’s Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of “common notions” (very basic, self-evident assertions).

Postulates

1. It is possible to draw a straight line from any point to any other point.

2. It is possible to extend a line segment continuously in both directions.

3. It is possible to describe a circle with any center and any radius.

4. It is true that all right angles are equal to one another.

5. (“Parallel postulate”) It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles.

Common notions

1. Things which are equal to the same thing are also equal to one another.

2. If equals are added to equals, the wholes are equal.

3. If equals are subtracted from equals, the remainders are equal.

4. Things which coincide with one another are equal to one another.

5. The whole is greater than the part.

3.2.2 Modern development

A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement.

Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an “axiom” and a “postulate” disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid’s fifth postulate we get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like “line” and “parallel” with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience.

When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all. It is not correct to say that the axioms of field theory are “propositions that are regarded as true without proof.” Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system.

Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic.
Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.

In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.

It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert’s formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor’s set theory. Here the emergence of Russell’s paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent.

The formalist project suffered a decisive setback when, in 1931, Gödel showed that it is possible, for any sufficiently large set of axioms (Peano’s axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory.

It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms.
Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.

3.2.3 Other sciences

Axioms play a key role not only in mathematics, but also in other sciences, notably in theoretical physics. In particular, the monumental work of Isaac Newton is essentially based on Euclid's axioms, augmented by a postulate on the non-relation of spacetime and the physics taking place in it at any moment. In 1905, Newton’s axioms were replaced by those of Albert Einstein's special relativity, and later on by those of general relativity.

Another paper of Albert Einstein and coworkers (see EPR paradox), almost immediately contradicted by Niels Bohr, concerned the interpretation of quantum mechanics. This was in 1935. According to Bohr, this new theory should be probabilistic, whereas according to Einstein it should be deterministic. Notably, the underlying quantum mechanical theory, i.e. the set of “theorems” derived by it, seemed to be identical. Einstein even assumed that it would be sufficient to add to quantum mechanics “hidden variables” to enforce determinism. However, thirty years later, in

1964, John Bell proved a theorem, involving complicated optical correlations (see Bell inequalities), which yielded measurably different results using Einstein’s axioms compared to using Bohr’s axioms. And it took roughly another twenty years until an experiment of Alain Aspect got results in favour of Bohr’s axioms, not Einstein’s. (Bohr’s axioms are simply: The theory should be probabilistic in the sense of the Copenhagen interpretation.) As a consequence, it is not necessary to explicitly cite Einstein’s axioms, the more so since they concern subtle points on the “reality” and “locality” of experiments.

Regardless, the role of axioms in mathematics and in the above-mentioned sciences is different. In mathematics one neither “proves” nor “disproves” an axiom for a set of theorems; the point is simply that in the conceptual realm identified by the axioms, the theorems logically follow. In contrast, in physics a comparison with experiments always makes sense, since a falsified physical theory needs modification.

3.3 Mathematical logic

In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between “axioms” and “postulates” respectively).

3.3.1 Logical axioms

These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.

Examples

Propositional logic In propositional logic it is common to take as logical axioms all formulae of the following forms, where ϕ , χ , and ψ can be any formulae of the language and where the included primitive connectives are only " ¬ " for negation of the immediately following proposition and " → " for implication from antecedent to consequent propositions:

1. ϕ → (ψ → ϕ)

2. (ϕ → (ψ → χ)) → ((ϕ → ψ) → (ϕ → χ))

3. (¬ϕ → ¬ψ) → (ψ → ϕ).

Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if A , B , and C are propositional variables, then A → (B → A) and (A → ¬B) → (C → (A → ¬B)) are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens. Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed.[8] These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus.[9]
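Since every instance of these schemata is claimed to be a tautology, the claim can be checked mechanically with a truth table. Below is a minimal sketch; the helper `implies` and the list `schemata` are our own names, not from the text. Each schema is evaluated under all eight assignments to its three schematic letters:

```python
from itertools import product

# Model "→" as material implication on truth values.
def implies(a, b):
    return (not a) or b

# The three axiom schemata, with schematic letters p, q, r.
schemata = [
    lambda p, q, r: implies(p, implies(q, p)),                       # schema 1
    lambda p, q, r: implies(implies(p, implies(q, r)),               # schema 2
                            implies(implies(p, q), implies(p, r))),
    lambda p, q, r: implies(implies(not p, not q), implies(q, p)),   # schema 3
]

# A formula is a tautology iff it is true under every assignment.
for s in schemata:
    assert all(s(p, q, r) for p, q, r in product([True, False], repeat=3))
```

A brute-force check like this works only for propositional schemata; the additional predicate-calculus axioms mentioned above cannot be verified by finite truth tables.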

First-order logic Axiom of Equality. Let L be a first-order language. For each variable x , the formula

x = x

is universally valid. This means that, for any variable symbol x , the formula x = x can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of “primitive notions”, either a precise notion of what we mean by x = x (or, for that matter, “to be equal”) has to be well established first, or a purely formal and syntactical

usage of the symbol = has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that. Another, more interesting, example axiom scheme is that which provides us with what is known as Universal Instantiation: Axiom scheme for Universal Instantiation. Given a formula ϕ in a first-order language L , a variable x and a term t that is substitutable for x in ϕ , the formula

∀x ϕ → ϕ_t^x

is universally valid, where the symbol ϕ_t^x stands for the formula ϕ with the term t substituted for x . (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property P holds for every x and that t stands for a particular object in our structure, then we should be able to claim P (t) . Again, we are claiming that the formula ∀x ϕ → ϕ_t^x is valid, that is, we must be able to give a “proof” of this fact, or more properly speaking, a metaproof. Actually, these examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization: Axiom scheme for Existential Generalization. Given a formula ϕ in a first-order language L , a variable x and a term t that is substitutable for x in ϕ , the formula

ϕ_t^x → ∃x ϕ is universally valid.
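The semantic content of Universal Instantiation can be illustrated on a finite structure: if the universal statement holds, then the instance obtained by substituting any particular term must hold too. In the sketch below, the domain, the property `phi`, and the term's value are all illustrative assumptions, not from the text:

```python
# A toy first-order structure with a finite domain.
domain = range(10)
phi = lambda x: x >= 0        # a property that happens to hold of every element
t_value = 7                   # the element denoted by some term t

# If ∀x ϕ holds in the structure, each instance ϕ_t^x holds as well.
if all(phi(x) for x in domain):
    assert phi(t_value)
```

The converse direction, from a single witness to an existential statement, mirrors Existential Generalization in the same way.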

3.3.2 Non-logical axioms

Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate.[10]

Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that in principle every theory could be axiomatized in this way and formalized down to the bare language of logical formulas. Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups.

Thus, an axiom is an elementary basis for a formal logic system that together with the rules of inference define a deductive system.

Examples

This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms.

Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe are used, but in fact most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic.

The study of topology in mathematics extends all over through point set topology, algebraic topology, differential topology, and all the related paraphernalia, such as homology theory and homotopy theory. The development of abstract algebra brought with itself group theory, rings, fields, and Galois theory.

This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry. Combinatorics is an example of a field of mathematics which does not, in general, follow the axiomatic method.

Arithmetic The Peano axioms are the most widely used axiomatization of first-order arithmetic. They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem.[11]

We have a language LNT = {0, S} where 0 is a constant symbol and S is a unary function symbol, and the following axioms:

1. ∀x.¬(Sx = 0) 2. ∀x.∀y.(Sx = Sy → x = y)

3. (ϕ(0) ∧ ∀x.(ϕ(x) → ϕ(Sx))) → ∀x.ϕ(x) for any LNT formula ϕ with one free variable.

The standard structure is N = ⟨N, 0,S⟩ where N is the set of natural numbers, S is the successor function and 0 is naturally interpreted as the number 0.
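The first two axioms can be spot-checked in the standard structure, interpreting S as n ↦ n + 1. This is only an illustrative sketch, not a proof: the axioms quantify over all of N, and checking an initial segment establishes nothing by itself.

```python
# Interpret the successor symbol S in the standard structure N.
N = range(200)
S = lambda n: n + 1

# Axiom 1: 0 is not the successor of any natural number.
assert all(S(n) != 0 for n in N)

# Axiom 2: the successor function is injective.
assert all(m == n or S(m) != S(n) for m in N for n in N)
```

The induction schema (axiom 3) cannot be checked this way at all, since it ranges over all formulas ϕ of the language.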

Euclidean geometry Probably the oldest and most famous list of axioms are the 4 + 1 postulates of Euclid’s plane geometry. The axioms are referred to as “4 + 1” because for nearly two millennia the fifth (parallel) postulate (“through a point outside a line there is exactly one parallel”) was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. Indeed, one can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and which are known as Euclidean and hyperbolic geometry. If one also removes the second postulate (“a line can be extended indefinitely”) then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees.

Real analysis The object of study is the real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires use of second-order logic. The Löwenheim-Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis.

3.3.3 Role in mathematical logic

Deductive systems and completeness

A deductive system consists of a set Λ of logical axioms, a set Σ of non-logical axioms, and a set {(Γ, ϕ)} of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas ϕ ,

if Σ ⊨ ϕ then Σ ⊢ ϕ

that is, for any statement that is a logical consequence of Σ there actually exists a deduction of the statement from Σ . This is sometimes expressed as “everything that is true is provable”, but it must be understood that “true” here means “made true by the set of axioms”, and not, for example, “true in the intended interpretation”. Gödel’s completeness theorem establishes the completeness of a certain commonly used type of deductive system.

Note that “completeness” has a different meaning here than it does in the context of Gödel’s first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms Σ of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement ϕ such that neither ϕ nor ¬ϕ can be proved from the given set of axioms.

There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another.

3.3.4 Further discussion

Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously there could only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details and modern algebra was born. In the modern view axioms may be any set of formulas, as long as they are not known to be inconsistent.

3.4 See also

• Axiomatic system

• Dogma

• List of axioms

• Model theory

• Regulæ Juris

• Theorem

3.5 References

[1] “A proposition that commends itself to general acceptance; a well-established or universally conceded principle; a maxim, rule, law” axiom, n., definition 1a. Oxford English Dictionary Online, accessed 2012-04-28. Cf. Aristotle, Posterior Analytics I.2.72a18-b4.

[2] Cf. axiom, n., etymology. Oxford English Dictionary, accessed 2012-04-28.

[3] Oxford American College Dictionary: “n. a statement or proposition that is regarded as being established, accepted, or self-evidently true. ORIGIN: late 15th cent.: ultimately from Greek axiōma 'what is thought fitting,' from axios 'worthy.' http://www.highbeam.com/doc/1O997-axiom.html (subscription required)

[4] “A proposition (whether true or false)” axiom, n., definition 2. Oxford English Dictionary Online, accessed 2012-04-28.

[5] Wolff, P. Breakthroughs in Mathematics, 1963, New York: New American Library, pp 47–8

[6] Heath, T. 1956. The Thirteen Books of Euclid’s Elements. New York: Dover. p200

[7] Aristotle, Metaphysics Bk IV, Chapter 3, 1005b “Physics also is a kind of Wisdom, but it is not the first kind. – And the attempts of some of those who discuss the terms on which truth should be accepted, are due to want of training in logic; for they should know these things already when they come to a special study, and not be inquiring into them while they are listening to lectures on it.” W.D. Ross translation, in The Basic Works of Aristotle, ed. Richard McKeon, (Random House, New York, 1941)

[8] Mendelson, “6. Other Axiomatizations” of Ch. 1

[9] Mendelson, “3. First-Order Theories” of Ch. 2

[10] Mendelson, “3. First-Order Theories: Proper Axioms” of Ch. 2

[11] Mendelson, “5. The Fixed Point Theorem. Gödel’s Incompleteness Theorem” of Ch. 2

3.6 Further reading

• Mendelson, Elliot (1987). Introduction to mathematical logic. Belmont, California: Wadsworth & Brooks. ISBN 0-534-06624-0

3.7 External links

• Axiom at PhilPapers

• Axiom at PlanetMath.org.

• Metamath axioms page

Chapter 4

Axiom schema

In mathematical logic, an axiom schema (plural: axiom schemata) generalizes the notion of axiom.

4.1 Formal definition

An axiom schema is a formula in the language of an axiomatic system, in which one or more schematic variables appear. These variables, which are metalinguistic constructs, stand for any term or subformula of the system, which may or may not be required to satisfy certain conditions. Often, such conditions require that certain variables be free, or that certain variables not appear in the subformula or term.

4.2 Finite axiomatization

Given that the number of possible subformulas or terms that can be inserted in place of a schematic variable is countably infinite, an axiom schema stands for a countably infinite set of axioms. This set can usually be defined recursively. A theory that can be axiomatized without schemata is said to be finitely axiomatized. Theories that can be finitely axiomatized are seen as a bit more metamathematically elegant, even if they are less practical for deductive work.

4.3 Examples

Two very well known instances of axiom schemata are the:

• induction schema that is part of Peano’s axioms for the arithmetic of the natural numbers;

• axiom schema of replacement that is part of the standard ZFC axiomatization of set theory.

It has been proved (first by Richard Montague) that these schemata cannot be eliminated. Hence Peano arithmetic and ZFC cannot be finitely axiomatized. This is also the case for quite a few other axiomatic theories in mathematics, philosophy, linguistics, etc.

4.4 Finitely axiomatized theories

All theorems of ZFC are also theorems of von Neumann–Bernays–Gödel set theory, but the latter is, quite surprisingly, finitely axiomatized. The set theory New Foundations can be finitely axiomatized, but only with some loss of elegance.


4.5 In higher-order logic

Schematic variables in first-order logic are usually trivially eliminable in second-order logic, because a schematic variable is often a placeholder for any property or relation over the individuals of the theory. This is the case with the schemata of Induction and Replacement mentioned above. Higher-order logic allows quantified variables to range over all possible properties or relations.

4.6 See also

• Axiom schema of predicative separation

• Axiom schema of replacement

• Axiom schema of specification

4.7 References

• Schema entry by John Corcoran in the Stanford Encyclopedia of Philosophy, 2008-09-21

• Corcoran, J. 2006. Schemata: the Concept of Schema in the History of Logic. Bulletin of Symbolic Logic 12: 219-40.

• Mendelson, Elliot, 1997. Introduction to Mathematical Logic, 4th ed. Chapman & Hall.

• Potter, Michael, 2004. Set Theory and its Philosophy. Oxford Univ. Press.

Chapter 5

Axiomatic system

In mathematics, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A mathematical theory consists of an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system; usually though, the effort towards complete formalisation brings diminishing returns in certainty, and a lack of readability for humans. A formal theory typically means an axiomatic system, for example formulated within model theory. A formal proof is a complete rendition of a mathematical proof within a formal system.

5.1 Properties

An axiomatic system is said to be consistent if it lacks contradiction, i.e. the ability to derive both a statement and its denial from the system’s axioms.

In an axiomatic system, an axiom is called independent if it is not a theorem that can be derived from other axioms in the system. A system will be called independent if each of its underlying axioms is independent. Although independence is not a necessary requirement for a system, consistency is.

An axiomatic system will be called complete if for every statement, either itself or its negation is derivable.

5.2 Relative consistency

Beyond consistency, relative consistency is also the mark of a worthwhile axiom system. This is when the undefined terms of a first axiom system are provided definitions from a second, such that the axioms of the first are theorems of the second. A good example is the relative consistency of neutral geometry, or absolute geometry, with respect to the theory of the real number system. Lines and points are undefined terms in absolute geometry, but assigned meanings in the theory of real numbers in a way that is consistent with both axiom systems.

5.3 Models

A model for an axiomatic system is a well-defined set, which assigns meaning for the undefined terms presented in the system, in a manner that is correct with the relations defined in the system. The existence of a concrete model proves the consistency of a system. A model is called concrete if the meanings assigned are objects and relations from the real world, as opposed to an abstract model which is based on other axiomatic systems. Models can also be used to show the independence of an axiom in the system. By constructing a valid model for a subsystem without a specific axiom, we show that the omitted axiom is independent if its correctness does not necessarily follow from the subsystem. Two models are said to be isomorphic if a one-to-one correspondence can be found between their elements, in a manner that preserves their relationship. An axiomatic system for which every model is isomorphic to another is

21 22 CHAPTER 5. AXIOMATIC SYSTEM

called categorial (sometimes categorical), and the property of categoriality (categoricity) ensures the completeness of a system.

5.4 Axiomatic method

Stating definitions and propositions in a way such that each new term can be formally eliminated by the priorly introduced terms requires primitive notions (axioms) to avoid infinite regress. This way of doing mathematics is called the axiomatic method.[1]

A common attitude towards the axiomatic method is logicism. In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell attempted to show that all mathematical theory could be reduced to some collection of axioms. More generally, the reduction of a body of propositions to a particular collection of axioms underlies the mathematician’s research program. This was very prominent in the mathematics of the twentieth century, in particular in subjects based around homological algebra.

The explication of the particular axioms used in a theory can help to clarify a suitable level of abstraction that the mathematician would like to work with. For example, mathematicians opted that rings need not be commutative, which differed from Emmy Noether's original formulation. Mathematicians decided to consider topological spaces more generally without the separation axiom which Felix Hausdorff originally formulated.

The Zermelo–Fraenkel axioms, the result of the axiomatic method applied to set theory, allowed the “proper” formulation of set-theory problems and helped to avoid the paradoxes of naïve set theory. One such problem was the continuum hypothesis.

5.4.1 History

Mathematical methods developed to some degree of sophistication in ancient Egypt, Babylon, India, and China, apparently without employing the axiomatic method. Euclid of Alexandria authored the earliest extant axiomatic presentation of Euclidean geometry and number theory.

Many axiomatic systems were developed in the nineteenth century, including non-Euclidean geometry, the foundations of real analysis, Cantor's set theory, Frege's work on foundations, and Hilbert's 'new' use of axiomatic method as a research tool. For example, group theory was first put on an axiomatic basis towards the end of that century. Once the axioms were clarified (that inverse elements should be required, for example), the subject could proceed autonomously, without reference to the transformation group origins of those studies.

5.4.2 Issues

Not every consistent body of propositions can be captured by a describable collection of axioms. Call a collection of axioms recursive if a computer program can recognize whether a given proposition in the language is an axiom. Gödel’s First Incompleteness Theorem then tells us that there are certain consistent bodies of propositions with no recursive axiomatization. Typically, the computer can recognize the axioms and logical rules for deriving theorems, and the computer can recognize whether a proof is valid, but to determine whether a proof exists for a statement is only soluble by “waiting” for the proof or disproof to be generated. The result is that one will not know which propositions are theorems and the axiomatic method breaks down. An example of such a body of propositions is the theory of the natural numbers. The Peano Axioms (described below) thus only partially axiomatize this theory.

In practice, not every proof is traced back to the axioms. At times, it is not clear which collection of axioms a proof appeals to. For example, a number-theoretic statement might be expressible in the language of arithmetic (i.e. the language of the Peano Axioms) and a proof might be given that appeals to topology or complex analysis. It might not be immediately clear whether another proof can be found that derives itself solely from the Peano Axioms.

Any more-or-less arbitrarily chosen system of axioms is the basis of some mathematical theory, but such an arbitrary axiomatic system will not necessarily be free of contradictions, and even if it is, it is not likely to shed light on anything. Philosophers of mathematics sometimes assert that mathematicians choose axioms “arbitrarily”, but the truth is that although they may appear arbitrary when viewed only from the point of view of the canons of deductive logic, that is merely a limitation on the purposes that deductive logic serves.

5.4.3 Example: The Peano axiomatization of natural numbers

The mathematical system of natural numbers 0, 1, 2, 3, 4, ... is based on an axiomatic system first written down by the mathematician Peano in 1889. He chose the axioms (see Peano axioms), in the language of a single unary function symbol S (short for “successor”), for the set of natural numbers to be:

• There is a natural number 0.

• Every natural number a has a successor, denoted by Sa.

• There is no natural number whose successor is 0.

• Distinct natural numbers have distinct successors: if a ≠ b, then Sa ≠ Sb.

• If a property is possessed by 0 and also by the successor of every natural number it is possessed by, then it is possessed by all natural numbers ("Induction axiom").
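Everything else in arithmetic can be built from 0 and the successor alone. As a sketch, addition can be defined by the recursion add(a, 0) = a and add(a, Sb) = S(add(a, b)); the function names below are illustrative, not part of Peano's formulation:

```python
# Successor, interpreted in the standard model.
def S(n):
    return n + 1

# Addition defined by recursion on the second argument:
# add(a, 0) = a, and add(a, S(b)) = S(add(a, b)).
def add(a, b):
    return a if b == 0 else S(add(a, b - 1))

assert add(2, 3) == 5
assert add(0, 4) == add(4, 0) == 4
```

Multiplication can be layered on top of addition by the same recursive pattern, which is exactly how these operations are introduced in Peano-style developments.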

5.4.4 Axiomatization

In mathematics, axiomatization is the formulation of a system of statements (i.e. axioms) that relate a number of primitive terms in order that a consistent body of propositions may be derived deductively from these statements. Thereafter, the proof of any proposition should be, in principle, traceable back to these axioms.

5.5 See also

• Axiom schema

• Gödel’s incompleteness theorem

• Hilbert-style deduction system

• Logicism

• Zermelo–Fraenkel set theory, an axiomatic system for set theory and today’s most common foundation for mathematics.

5.6 References

[1] "Set Theory and its Philosophy, a Critical Introduction” S.6; Michael Potter, Oxford, 2004

• Hazewinkel, Michiel, ed. (2001), “Axiomatic method”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Eric W. Weisstein, Axiomatic System, From MathWorld—A Wolfram Web Resource. Mathworld.wolfram.com & Answers.com

Chapter 6

Biconditional elimination

Biconditional elimination is the name of two valid rules of inference of propositional logic. It allows for one to infer a conditional from a biconditional. If (P ↔ Q) is true, then one may infer that (P → Q) is true, and also that (Q → P ) is true.[1] For example, if it’s true that I'm breathing if and only if I'm alive, then it’s true that if I'm breathing, I'm alive; likewise, it’s true that if I'm alive, I'm breathing. The rules can be stated formally as:

(P ↔ Q) ∴ (P → Q)

and

(P ↔ Q) ∴ (Q → P )

where the rule is that wherever an instance of " (P ↔ Q) " appears on a line of a proof, either " (P → Q) " or " (Q → P ) " can be placed on a subsequent line.

6.1 Formal notation

The biconditional elimination rule may be written in sequent notation:

(P ↔ Q) ⊢ (P → Q)

and

(P ↔ Q) ⊢ (Q → P )

where ⊢ is a metalogical symbol meaning that (P → Q) , in the first case, and (Q → P ) , in the other, are syntactic consequences of (P ↔ Q) in some logical system; or as the statement of a truth-functional tautology or theorem of propositional logic:

(P ↔ Q) → (P → Q)

(P ↔ Q) → (Q → P )

where P and Q are propositions expressed in some formal system.
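Both tautology forms can be confirmed by a four-row truth table. A minimal sketch, with `iff` and `implies` as our own models of "↔" and "→" on truth values:

```python
from itertools import product

iff = lambda a, b: a == b            # "↔": true iff both sides agree
implies = lambda a, b: (not a) or b  # "→": material implication

# Check both tautology forms under all four assignments to P and Q.
for P, Q in product([True, False], repeat=2):
    assert implies(iff(P, Q), implies(P, Q))
    assert implies(iff(P, Q), implies(Q, P))
```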


6.2 See also

• Logical biconditional

6.3 References

[1] Cohen, S. Marc. “Chapter 8: The Logic of Conditionals” (PDF). University of Washington. Retrieved 8 October 2013.

Chapter 7

Biconditional introduction

In propositional logic, biconditional introduction[1][2][3] is a valid rule of inference. It allows for one to infer a biconditional from two conditional statements. The rule makes it possible to introduce a biconditional statement into a logical proof. If P → Q is true, and if Q → P is true, then one may infer that P ↔ Q is true. For example, from the statements “if I'm breathing, then I'm alive” and “if I'm alive, then I'm breathing”, it can be inferred that “I'm breathing if and only if I'm alive”. Biconditional introduction is the converse of biconditional elimination. The rule can be stated formally as:

P → Q, Q → P ∴ P ↔ Q

where the rule is that wherever instances of " P → Q " and " Q → P " appear on lines of a proof, " P ↔ Q " can validly be placed on a subsequent line.

7.1 Formal notation

The biconditional introduction rule may be written in sequent notation:

(P → Q), (Q → P ) ⊢ (P ↔ Q)

where ⊢ is a metalogical symbol meaning that P ↔ Q is a syntactic consequence when P → Q and Q → P are both in a proof; or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ (Q → P )) → (P ↔ Q)

where P and Q are propositions expressed in some formal system.
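The tautology ((P → Q) ∧ (Q → P )) → (P ↔ Q) can likewise be verified by exhausting the truth table. A minimal sketch, with helper names of our own choosing:

```python
from itertools import product

iff = lambda a, b: a == b            # "↔" on truth values
implies = lambda a, b: (not a) or b  # "→" as material implication

# The conjunction of both conditionals entails the biconditional
# under every assignment.
for P, Q in product([True, False], repeat=2):
    assert implies(implies(P, Q) and implies(Q, P), iff(P, Q))
```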

7.2 References

[1] Hurley

[2] Moore and Parker

[3] Copi and Cohen

Chapter 8

Commutative property

For other uses, see Commute (disambiguation).

In mathematics, a binary operation is commutative if changing the order of the operands does not change the result.

This image illustrates that addition is commutative.

It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says “3 + 4 = 4 + 3” or “2 × 5 = 5 × 2”, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, “3 − 5 ≠ 5 − 3”); such operations are called noncommutative operations. The idea that simple operations, such as multiplication and addition of numbers, are commutative was for many years implicitly assumed and the property was not named until the 19th century, when mathematics started to become formalized.

8.1 Common uses

The commutative property (or commutative law) is a property generally associated with binary operations and functions. If the commutative property holds for a pair of elements under a certain binary operation then the two elements are said to commute under that operation.


8.2 Mathematical definitions

Further information: Symmetric function

The term “commutative” is used in several related senses.[1][2]

1. A binary operation ∗ on a set S is called commutative if:

x ∗ y = y ∗ x for all x, y ∈ S

An operation that does not satisfy the above property is called noncommutative.

2. One says that x commutes with y under ∗ if:

x ∗ y = y ∗ x

3. A binary function f : A × A → B is called commutative if:

f(x, y) = f(y, x) for all x, y ∈ A
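Definition 1 can be probed mechanically over a finite sample of elements. Note that a finite check can only refute commutativity, never prove it for an infinite set S; the sketch below uses standard-library operators:

```python
from itertools import product
from operator import add, mul, sub

def is_commutative(op, sample):
    """True iff op(x, y) == op(y, x) for every pair drawn from sample."""
    return all(op(x, y) == op(y, x) for x, y in product(sample, repeat=2))

nums = [-2, 0, 1, 3.5]
print(is_commutative(add, nums))  # True
print(is_commutative(mul, nums))  # True
print(is_commutative(sub, nums))  # False: 0 - 1 != 1 - 0
```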

8.3 Examples

8.3.1 Commutative operations in everyday life

• Putting on socks resembles a commutative operation, since which sock is put on first is unimportant. Either way, the result (having both socks on), is the same.

• The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills are handed over in, they always give the same total.

8.3.2 Commutative operations in mathematics

Two well-known examples of commutative binary operations:[1]

• The addition of real numbers is commutative, since

y + z = z + y for all y, z ∈ R

For example 4 + 5 = 5 + 4, since both expressions equal 9.

• The multiplication of real numbers is commutative, since

yz = zy for all y, z ∈ R

For example, 3 × 5 = 5 × 3, since both expressions equal 15.

• Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands.

The addition of vectors is commutative, because ⃗a +⃗b = ⃗b + ⃗a .

For example, the logical biconditional function p ↔ q is equivalent to q ↔ p. This function is also written as p IFF q, or as p ≡ q, or as Epq. The last form is an example of the most concise notation in the article on truth functions, which lists the sixteen possible binary truth functions of which eight are commutative: Vpq = Vqp; Apq (OR) = Aqp; Dpq (NAND) = Dqp; Epq (IFF) = Eqp; Jpq = Jqp; Kpq (AND) = Kqp; Xpq (NOR) = Xqp; Opq = Oqp.

• Further examples of commutative binary operations include addition and multiplication of complex numbers, addition and scalar multiplication of vectors, and intersection and union of sets.
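The commutative connectives listed above can be confirmed by comparing truth tables programmatically. The dictionary below maps ad-hoc names (with the article's Polish-notation labels in comments) to Python lambdas; IMPLIES is included as a non-commutative contrast:

```python
from itertools import product

# Binary truth functions; Polish-notation names from the article's list.
connectives = {
    "AND (Kpq)":  lambda p, q: p and q,
    "OR (Apq)":   lambda p, q: p or q,
    "NAND (Dpq)": lambda p, q: not (p and q),
    "NOR (Xpq)":  lambda p, q: not (p or q),
    "IFF (Epq)":  lambda p, q: p == q,
    "IMPLIES":    lambda p, q: (not p) or q,   # contrast: not commutative
}

for name, f in connectives.items():
    commutes = all(f(p, q) == f(q, p)
                   for p, q in product([True, False], repeat=2))
    print(f"{name}: {'commutative' if commutes else 'not commutative'}")
```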

8.3.3 Noncommutative operations in everyday life

• Concatenation, the act of joining character strings together, is a noncommutative operation. For example

EA + T = EAT ≠ TEA = T + EA

• Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a markedly different result from drying and then washing.

• Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order.

• The twists of the Rubik’s Cube are noncommutative. This can be studied using group theory.

• Thought processes are also noncommutative: A person asked a question (A) and then a question (B) may give different answers to each question than a person asked first (B) and then (A), because asking a question may change the person’s state of mind.

8.3.4 Noncommutative operations in mathematics

Some noncommutative binary operations:[3]

• Subtraction is noncommutative, since 0 − 1 ≠ 1 − 0

• Division is noncommutative, since 1/2 ≠ 2/1

• Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the order of the operands.

For example, the truth tables for f(A, B) = A ∧ ¬B (A AND NOT B) and f(B, A) = B ∧ ¬A differ: the first is true only when A is true and B is false, while the second is true only when B is true and A is false.

• Matrix multiplication is noncommutative, since

$\begin{bmatrix} 0 & 2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \neq \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$

• The vector product (or cross product) of two vectors in three dimensions is anti-commutative, i.e., b × a = −(a × b).
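Several of these noncommutative operations can be demonstrated directly in a few lines. The 2×2 matrix product is hand-rolled here to stay dependency-free, and the sample matrices are just one convenient choice:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert 0 - 1 != 1 - 0            # subtraction is noncommutative
assert 1 / 2 != 2 / 1            # division is noncommutative
assert "EA" + "T" != "T" + "EA"  # string concatenation: EAT != TEA

A = [[1, 1], [0, 1]]
B = [[0, 1], [0, 1]]
print(matmul(A, B))  # [[0, 2], [0, 1]]
print(matmul(B, A))  # [[0, 1], [0, 1]] -- a different matrix
```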

8.4 History and etymology

The first known use of the term was in a French journal published in 1814.

Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products.[4][5] Euclid is known to have assumed the commutative property of multiplication in his book Elements.[6] Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics. The first recorded use of the term commutative was in a memoir by François Servois in 1814,[7][8] which used the word commutatives when describing functions that have what is now called the commutative property. The word is a combination of the French word commuter meaning “to substitute or switch” and the suffix -ative meaning “tending to”, so the word literally means “tending to substitute or switch.” The term then appeared in English in Philosophical Transactions of the Royal Society in 1844.[7]

8.5 Propositional logic

8.5.1 Rule of replacement

In truth-functional propositional logic, commutation,[9][10] or commutativity[11] refer to two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are:

(P ∨ Q) ⇔ (Q ∨ P)

and

(P ∧ Q) ⇔ (Q ∧ P)

where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

8.5.2 Truth functional connectives

Commutativity is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies.

Commutativity of conjunction

(P ∧ Q) ↔ (Q ∧ P)

Commutativity of disjunction

(P ∨ Q) ↔ (Q ∨ P)

Commutativity of implication (also called the Law of permutation)

(P → (Q → R)) ↔ (Q → (P → R))

Commutativity of equivalence (also called the Complete commutative law of equivalence)

(P ↔ Q) ↔ (Q ↔ P)
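Each of the four equivalences can be verified as a tautology by enumerating truth assignments; a short sketch, where `implies` and `tautology` are ad-hoc helpers:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def tautology(f, n):
    """Check that f holds over all 2**n truth-value assignments."""
    return all(f(*v) for v in product([True, False], repeat=n))

assert tautology(lambda p, q: (p and q) == (q and p), 2)   # conjunction
assert tautology(lambda p, q: (p or q) == (q or p), 2)     # disjunction
assert tautology(lambda p, q, r:                            # permutation
                 implies(p, implies(q, r)) == implies(q, implies(p, r)), 3)
assert tautology(lambda p, q: (p == q) == (q == p), 2)     # equivalence
print("all four commutativity laws are tautologies")
```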

8.6 Set theory

In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra, the commutativity of well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs.[12][13][14]

8.7 Mathematical structures and commutativity

• A commutative semigroup is a set endowed with a total, associative and commutative operation.

• If the operation additionally has an identity element, we have a commutative monoid.

• An abelian group, or commutative group, is a group whose group operation is commutative.[13]

• A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.)[15]

• In a field both addition and multiplication are commutative.[16]

8.8 Related properties

8.8.1 Associativity

Main article: Associative property

The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order operations are performed in does not affect the final result, as long as the order of terms doesn't change. In contrast, the commutative property states that the order of the terms does not affect the final result. Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the function

f(x, y) = (x + y)/2,

which is clearly commutative (interchanging x and y does not affect the result), but it is not associative (since, for example, f(−4, f(0, +4)) = −1 but f(f(−4, 0), +4) = +1). More such examples may be found in Commutative non-associative magmas.
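The counterexample can be replayed numerically in a few lines:

```python
def f(x, y):
    """The averaging function: commutative but not associative."""
    return (x + y) / 2

# Commutative: swapping the arguments never changes the result.
assert f(3, 7) == f(7, 3)

# Not associative: the two groupings disagree.
print(f(-4, f(0, 4)))   # f(-4, 2.0) = -1.0
print(f(f(-4, 0), 4))   # f(-2.0, 4) = 1.0
```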

8.8.2 Symmetry

Main article: Symmetry in mathematics

Some forms of symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function then the resulting function is symmetric across the line y = x. As an example, if we let a function f represent addition (a commutative operation) so that f(x,y) = x + y then f is a symmetric function, which can be seen in the image on the right. For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then aRb ⇔ bRa .

8.9 Non-commuting operators in quantum mechanics

Main article: Canonical commutation relation

In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as x (meaning multiply by x) and d/dx. These two operators do not commute, as may be seen by considering the effect of their compositions x (d/dx) and (d/dx) x (also called products of operators) on a one-dimensional wave function ψ(x):

x (d/dx) ψ = xψ′ ≠ (d/dx)(xψ) = ψ + xψ′

Graph showing the symmetry of the addition function

According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum in the x-direction of a particle are represented respectively by the operators x and −iℏ ∂/∂x (where ℏ is the reduced Planck constant). This is the same example except for the constant −iℏ, so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary.
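The non-commutation of x and d/dx can be illustrated numerically with a finite-difference derivative. This is a rough sketch, not a symbolic computation; the sample wave function, evaluation point, and step size are arbitrary choices:

```python
def d_dx(psi, h=1e-6):
    """Central-difference approximation to the derivative operator d/dx."""
    return lambda x: (psi(x + h) - psi(x - h)) / (2 * h)

def mult_x(psi):
    """The 'multiply by x' operator."""
    return lambda x: x * psi(x)

psi = lambda x: x ** 2   # an arbitrary sample wave function
x0 = 1.5

a = mult_x(d_dx(psi))(x0)   # (x d/dx) psi = x * psi'(x)
b = d_dx(mult_x(psi))(x0)   # (d/dx x) psi = psi + x * psi'(x)
print(round(a, 4), round(b, 4))   # 4.5 6.75 -- the compositions differ
print(round(b - a, 4))            # 2.25 == psi(x0), matching the identity
```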

8.10 See also

• Anticommutativity

• Associative Property

• Binary operation

• Centralizer or Commutant

• Commutative diagram

• Commutative (neurophysiology)

• Commutator

• Distributivity


• Particle statistics (for commutativity in physics)

• Quasi-commutative property

• Trace monoid

• Truth table

8.11 Notes

[1] Krowne, p.1

[2] Weisstein, Commute, p.1

[3] Yark, p.1

[4] Lumpkin, p.11

[5] Gay and Shute, p.?

[6] O'Conner and Robertson, Real Numbers

[7] Cabillón and Miller, Commutative and Distributive

[8] O'Conner and Robertson, Servois

[9] Moore and Parker

[10] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[11] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[12] Axler, p.2

[13] Gallian, p.34

[14] p. 26,87

[15] Gallian p.236

[16] Gallian p.250

8.12 References

8.12.1 Books

• Axler, Sheldon (1997). Linear Algebra Done Right, 2e. Springer. ISBN 0-387-98258-2.

Linear algebra theory. Explains commutativity in chapter 1, uses it throughout.

• Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

• Gallian, Joseph (2006). Contemporary Abstract Algebra, 6e. Boston, Mass.: Houghton Mifflin. ISBN 0-618-51471-6.

Abstract algebra theory. Covers commutativity in that context. Uses property throughout book.

• Goodman, Frederick (2003). Algebra: Abstract and Concrete, Stressing Symmetry, 2e. Prentice Hall. ISBN 0-13-067342-0.

Abstract algebra theory. Uses commutativity property throughout book.

• Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

8.12.2 Articles

• Lumpkin, B. (1997). The Mathematical Legacy Of Ancient Egypt - A Response To Robert Palter. Unpublished manuscript. http://www.ethnomath.org/resources/lumpkin1997.pdf

Article describing the mathematical ability of ancient civilizations.

• Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4

Translation and interpretation of the Rhind Mathematical Papyrus.

8.12.3 Online resources

• Hazewinkel, Michiel, ed. (2001), “Commutativity”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Krowne, Aaron, Commutative at PlanetMath.org., Accessed 8 August 2007.

Definition of commutativity and examples of commutative operations

• Weisstein, Eric W., “Commute”, MathWorld., Accessed 8 August 2007.

Explanation of the term commute

• Yark. Examples of non-commutative operations at PlanetMath.org., Accessed 8 August 2007

Examples proving some noncommutative operations

• O'Conner, J J and Robertson, E F. MacTutor history of real numbers, Accessed 8 August 2007

Article giving the history of the real numbers

• Cabillón, Julio and Miller, Jeff. Earliest Known Uses Of Mathematical Terms, Accessed 22 November 2008

Page covering the earliest uses of mathematical terms

• O'Conner, J J and Robertson, E F. MacTutor biography of François Servois, Accessed 8 August 2007

Biography of François Servois, who first used the term

Chapter 9

Conjunction elimination

In propositional logic, conjunction elimination (also called and elimination, ∧ elimination,[1] or simplification)[2][3][4] is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English:

It’s raining and it’s pouring. Therefore it’s raining.

The rule consists of two separate sub-rules, which can be expressed in formal language as:

P ∧ Q ∴ P

and

P ∧ Q ∴ Q

The two sub-rules together mean that, whenever an instance of " P ∧ Q " appears on a line of a proof, either " P " or " Q " can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule.

9.1 Formal notation

The conjunction elimination sub-rules may be written in sequent notation:

(P ∧ Q) ⊢ P

and

(P ∧ Q) ⊢ Q

where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∧ Q and Q is also a syntactic consequence of P ∧ Q in some logical system; and expressed as truth-functional tautologies or theorems of propositional logic:


(P ∧ Q) → P

and

(P ∧ Q) → Q

where P and Q are propositions expressed in some formal system.

9.2 References

[1] David A. Duffy (1991). Principles of Automated Theorem Proving. New York: Wiley. Sect.3.1.2.1, p.46

[2] Copi and Cohen

[3] Moore and Parker

[4] Hurley

Chapter 10

Conjunction introduction

Conjunction introduction (often abbreviated simply as conjunction and also called and introduction[1][2][3]) is a valid rule of inference of propositional logic. The rule makes it possible to introduce a conjunction into a logical proof. It is the inference that if the proposition p is true, and proposition q is true, then the conjunction of the two propositions p and q is true. For example, if it’s true that it’s raining, and it’s true that I'm inside, then it’s true that “it’s raining and I'm inside”. The rule can be stated:

P, Q ∴ P ∧ Q

where the rule is that wherever instances of " P " and " Q " appear on lines of a proof, " P ∧ Q " can be placed on a subsequent line.

10.1 Formal notation

The conjunction introduction rule may be written in sequent notation:

P,Q ⊢ P ∧ Q

where ⊢ is a metalogical symbol meaning that P ∧ Q is a syntactic consequence if P and Q are each on lines of a proof in some logical system; where P and Q are propositions expressed in some formal system.

10.2 References

[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 346–51.

[2] Copi and Cohen

[3] Moore and Parker

Chapter 11

Constructive dilemma

Constructive dilemma[1][2][3] is the name of a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either P or R is true, then Q or S has to be true. In sum, if two conditionals are true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma is the disjunctive version of modus ponens, whereas destructive dilemma is the disjunctive version of modus tollens. The rule can be stated:

P → Q, R → S, P ∨ R ∴ Q ∨ S

where the rule is that whenever instances of " P → Q ", " R → S ", and " P ∨ R " appear on lines of a proof, " Q ∨ S " can be placed on a subsequent line.

11.1 Formal notation

The constructive dilemma rule may be written in sequent notation:

(P → Q), (R → S), (P ∨ R) ⊢ (Q ∨ S)

where ⊢ is a metalogical symbol meaning that Q ∨ S is a syntactic consequence of P → Q , R → S , and P ∨ R in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → S)) ∧ (P ∨ R)) → (Q ∨ S)

where P , Q , R and S are propositions expressed in some formal system.
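The tautology can be confirmed by checking all sixteen assignments to P, Q, R, and S; a short sketch with an ad-hoc `implies` helper:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# (((P -> Q) and (R -> S)) and (P or R)) -> (Q or S), for all assignments.
for p, q, r, s in product([True, False], repeat=4):
    premise = implies(p, q) and implies(r, s) and (p or r)
    assert implies(premise, q or s)

print("constructive dilemma is a tautology")
```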

11.2 Variable English

If P then Q. If R then S. P or R. Therefore, Q or S.

11.3 Natural language example

If I win a million dollars, I will donate it to an orphanage. If my friend wins a million dollars, he will donate it to a wildlife fund. I win a million dollars or my friend wins a million dollars. Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars.


The dilemma derives its name from the transfer of the disjunctive operator.

11.4 References

[1] Hurley, Patrick. A Concise Introduction to Logic With Ilrn Printed Access Card. Wadsworth Pub Co, 2008. Page 361

[2] Moore and Parker

[3] Copi and Cohen

Chapter 12

De Morgan’s laws

In propositional logic and boolean algebra, De Morgan’s laws[1][2][3] are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. The rules can be expressed in English as:

The negation of a conjunction is the disjunction of the negations. The negation of a disjunction is the conjunction of the negations. or informally as:

"not (A and B)" is the same as "(not A) or (not B)" also, "not (A or B)" is the same as "(not A) and (not B)".

The rules can be expressed in formal language with two propositions P and Q as:

¬(P ∧ Q) ⇐⇒ (¬P) ∨ (¬Q)

and

¬(P ∨ Q) ⇐⇒ (¬P) ∧ (¬Q)

where:

• ¬ is the negation operator (NOT)

• ∧ is the conjunction operator (AND)

• ∨ is the disjunction operator (OR)

• ⇔ is a metalogical symbol meaning “can be replaced in a logical proof with”

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan’s laws are an example of a more general concept of mathematical duality.

12.1 Formal notation

The negation of conjunction rule may be written in sequent notation:


De Morgan’s Laws represented with Venn diagrams

¬(P ∧ Q) ⊢ (¬P ∨ ¬Q)

The negation of disjunction rule may be written as:

¬(P ∨ Q) ⊢ (¬P ∧ ¬Q)

In rule form:

negation of conjunction

¬(P ∧ Q) ∴ ¬P ∨ ¬Q

and negation of disjunction

¬(P ∨ Q) ∴ ¬P ∧ ¬Q

and expressed as a truth-functional tautology or theorem of propositional logic:

¬(P ∧ Q) → (¬P ∨ ¬Q)

¬(P ∨ Q) → (¬P ∧ ¬Q)

where P and Q are propositions expressed in some formal system.

12.1.1 Substitution form

De Morgan’s laws are normally shown in the compact form above, with negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as:

(P ∧ Q) ≡ ¬(¬P ∨ ¬Q)

(P ∨ Q) ≡ ¬(¬P ∧ ¬Q)

This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a substitution.
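Both the compact and the substitution forms can be verified exhaustively over the four possible truth assignments; a short sketch:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    # Compact form: negation of the output vs. negation of the inputs.
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
    # Substitution form: invert inputs and output, and swap the operator.
    assert (p and q) == (not ((not p) or (not q)))
    assert (p or q) == (not ((not p) and (not q)))

print("De Morgan's laws verified on all four truth-value assignments")
```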

12.1.2 Set theory and Boolean algebra

In set theory and Boolean algebra, it is often stated as “union and intersection interchange under complementation”,[4] which can be formally expressed as:

$\overline{A \cup B} \equiv \overline{A} \cap \overline{B}$

$\overline{A \cap B} \equiv \overline{A} \cup \overline{B}$

where:

• $\overline{A}$ is the negation of A, the overline being written above the terms to be negated

• ∩ is the intersection operator (AND)

• ∪ is the union operator (OR)

The generalized form is:

$\overline{\bigcap_{i \in I} A_i} \equiv \bigcup_{i \in I} \overline{A_i}$

$\overline{\bigcup_{i \in I} A_i} \equiv \bigcap_{i \in I} \overline{A_i}$

where I is some, possibly uncountable, indexing set. In set notation, De Morgan’s laws can be remembered using the mnemonic “break the line, change the sign”.[5]

12.1.3 Engineering

In electrical and computer engineering, De Morgan’s laws are commonly written as:

$\overline{A \cdot B} \equiv \overline{A} + \overline{B}$

and

$\overline{A + B} \equiv \overline{A} \cdot \overline{B}$,

where:

• · is a logical AND

• + is a logical OR

• the overbar is the logical NOT of what is underneath the overbar.

12.1.4 Text searching

De Morgan’s laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words “cars” and “trucks”. De Morgan’s laws hold that these two searches will return the same set of documents:

Search A: NOT (cars OR trucks)
Search B: (NOT cars) AND (NOT trucks)

The corpus of documents containing “cars” or “trucks” can be represented by four documents:

Document 1: Contains only the word “cars”.
Document 2: Contains only “trucks”.
Document 3: Contains both “cars” and “trucks”.
Document 4: Contains neither “cars” nor “trucks”.

To evaluate Search A, clearly the search “(cars OR trucks)” will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4. Evaluating Search B, the search “(NOT cars)” will hit on documents that do not contain “cars”, which is Documents 2 and 4. Similarly the search “(NOT trucks)” will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4. A similar evaluation can be applied to show that the following two searches will return the same set of documents (Documents 1, 2, 4):

Search C: NOT (cars AND trucks)
Search D: (NOT cars) OR (NOT trucks)
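The four-document evaluation above can be replayed with Python sets; the document numbering follows the text:

```python
# The four-document corpus from the text, modelled as sets of words.
docs = {1: {"cars"}, 2: {"trucks"}, 3: {"cars", "trucks"}, 4: set()}

search_a = {d for d, w in docs.items() if not ("cars" in w or "trucks" in w)}
search_b = {d for d, w in docs.items() if "cars" not in w and "trucks" not in w}
print(search_a == search_b, sorted(search_a))   # True [4]

search_c = {d for d, w in docs.items() if not ("cars" in w and "trucks" in w)}
search_d = {d for d, w in docs.items() if "cars" not in w or "trucks" not in w}
print(search_c == search_d, sorted(search_c))   # True [1, 2, 4]
```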

12.2 History

The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to classical propositional logic. De Morgan’s formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan’s claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan’s laws.[9] Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan’s laws can be proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments.

12.3 Informal proof

De Morgan’s theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula.

12.3.1 Negation of a disjunction

In the case of its application to a disjunction, consider the following claim: “it is false that either of A or B is true”, which is written as:

¬(A ∨ B)

In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as:

(¬A) ∧ (¬B)

If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that “since two things are both false, it is also false that either of them is true”. Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that “not A” and “not B” are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim.

12.3.2 Negation of a conjunction

The application of De Morgan’s theorem to a conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: “it is false that A and B are both true”, which is written as:

¬(A ∧ B)

In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of “not A” and “not B” must be true). This may be written directly as:

(¬A) ∨ (¬B)

Presented in English, this follows the logic that “since it is false that two things are both true, at least one of them must be false”. Working in the opposite direction again, the second expression asserts that at least one of “not A” and “not B” must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim.

12.4 Formal proof

The proof that (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ is completed in two steps, by proving both (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ and Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ.

Let x ∈ (A ∩ B)ᶜ. Then x ∉ A ∩ B. Because A ∩ B = {y | y ∈ A and y ∈ B}, it must be the case that x ∉ A or x ∉ B. If x ∉ A, then x ∈ Aᶜ, so x ∈ Aᶜ ∪ Bᶜ. Similarly, if x ∉ B, then x ∈ Bᶜ, so x ∈ Aᶜ ∪ Bᶜ. Thus ∀x (if x ∈ (A ∩ B)ᶜ, then x ∈ Aᶜ ∪ Bᶜ); that is, (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ.

To prove the reverse direction, let x ∈ Aᶜ ∪ Bᶜ, and assume x ∉ (A ∩ B)ᶜ. Under that assumption, it must be the case that x ∈ A ∩ B; it follows that x ∈ A and x ∈ B, and thus x ∉ Aᶜ and x ∉ Bᶜ. However, that means x ∉ Aᶜ ∪ Bᶜ, in contradiction to the hypothesis that x ∈ Aᶜ ∪ Bᶜ; the assumption x ∉ (A ∩ B)ᶜ must not be the case, meaning that x ∈ (A ∩ B)ᶜ must be the case. Therefore ∀x (if x ∈ Aᶜ ∪ Bᶜ, then x ∈ (A ∩ B)ᶜ); that is, Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ.

If Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ and (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ, then (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ; this concludes the proof of De Morgan’s law. The other De Morgan’s law, (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ, is proven similarly.
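The set identity and its dual can be spot-checked on concrete finite sets. The universe U below is an arbitrary choice, so this illustrates rather than proves the theorem:

```python
U = set(range(10))    # a small illustrative universe
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

def complement(s):
    """Complement relative to the universe U."""
    return U - s

assert complement(A & B) == complement(A) | complement(B)
assert complement(A | B) == complement(A) & complement(B)
print("both set-theoretic De Morgan laws hold for this example")
```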

12.5 Extensions

De Morgan’s Laws represented as a circuit with logic gates

In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is a prerequisite for finding the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory.

Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be the operator Pd defined by

Pd(p, q, ...) = ¬P (¬p, ¬q, . . . ).

This idea can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:

∀x P (x) ≡ ¬∃x ¬P (x),

∃x P(x) ≡ ¬∀x ¬P(x).

To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its domain D, such as

D = {a, b, c}.

Then

∀x P (x) ≡ P (a) ∧ P (b) ∧ P (c)

and

∃x P (x) ≡ P (a) ∨ P (b) ∨ P (c).

But, using De Morgan’s laws,

P (a) ∧ P (b) ∧ P (c) ≡ ¬(¬P (a) ∨ ¬P (b) ∨ ¬P (c))

and

P (a) ∨ P (b) ∨ P (c) ≡ ¬(¬P (a) ∧ ¬P (b) ∧ ¬P (c)),

verifying the quantifier dualities in the model. Then, the quantifier dualities can be extended further to modal logic, relating the box (“necessarily”) and diamond (“possibly”) operators:

□p ≡ ¬♢¬p,

♢p ≡ ¬□¬p.

In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics.
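On a finite domain the quantifier dualities reduce to De Morgan's laws over conjunctions and disjunctions, which Python's `all` and `any` make directly checkable; the domain and predicate below are arbitrary examples:

```python
D = {-3, 0, 2, 7}            # a small finite domain, like D = {a, b, c}
P = lambda x: x >= 0         # an example predicate

# For-all is a finite conjunction over D; exists is a finite disjunction.
assert all(P(x) for x in D) == (not any(not P(x) for x in D))
assert any(P(x) for x in D) == (not all(not P(x) for x in D))
print(all(P(x) for x in D), any(P(x) for x in D))   # False True
```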

12.6 See also

• Isomorphism (NOT operator as isomorphism between positive logic and negative logic)

• List of Boolean algebra topics 48 CHAPTER 12. DE MORGAN’S LAWS

12.7 References

[1] Copi and Cohen

[2] Hurley

[3] Moore and Parker

[4] Boolean Algebra by R. L. Goodstein. ISBN 0-486-45894-6

[5] 2000 Solved Problems in Digital Electronics by S. P. Bali

[6] DeMorgan’s Theorems at mtsu.edu

[7] Bocheński’s History of Formal Logic

[8] William of Ockham, Summa Logicae, part II, sections 32 and 33.

[9] Jean Buridan, Summula de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0

[10] Augustus De Morgan (1806–1871) by Robert H. Orr

12.8 External links

• Hazewinkel, Michiel, ed. (2001), “Duality principle”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Weisstein, Eric W., “de Morgan’s Laws”, MathWorld.

• de Morgan’s laws at PlanetMath.org.

Chapter 13

Deduction theorem

In mathematical logic, the deduction theorem is a metatheorem of first-order logic.[1] It is a formalization of the common proof technique in which an implication A → B is proved by assuming A and then deriving B from this assumption conjoined with known results. The deduction theorem explains why proofs of conditional sentences in mathematics are logically correct. Though it has seemed “obvious” to mathematicians literally for centuries that proving B from A conjoined with a set of theorems is sufficient to prove the implication A → B based on those theorems alone, it was left to Herbrand and Tarski to show (independently) that this was logically correct in the general case—another instance, perhaps, of modern logic “cleaning up” mathematical practice.

The deduction theorem states that if a formula B is deducible from a set of assumptions ∆ ∪ {A}, where A is a closed formula, then the implication A → B is deducible from ∆. In symbols, ∆ ∪ {A} ⊢ B implies ∆ ⊢ A → B. In the special case where ∆ is the empty set, the deduction theorem shows that {A} ⊢ B implies ⊢ A → B.

The deduction theorem holds for all first-order theories with the usual deductive systems for first-order logic. However, there are first-order systems in which new inference rules are added for which the deduction theorem fails.[2]

The deduction rule is an important property of Hilbert-style systems because the use of this metatheorem leads to much shorter proofs than would be possible without it. Although the deduction theorem could be taken as a primitive rule of inference in such systems, this approach is not generally followed; instead, the deduction theorem is obtained as an admissible rule using the other logical axioms and modus ponens. In other formal proof systems, the deduction theorem is sometimes taken as a primitive rule of inference. For example, in natural deduction, the deduction theorem is recast as an introduction rule for "→".

13.1 Examples of deduction

“Prove” axiom 1:

  • P 1. hypothesis
    • Q 2. hypothesis
    • P 3. reiteration of 1
  • Q→P 4. deduction from 2 to 3
• P→(Q→P) 5. deduction from 1 to 4 QED

“Prove” axiom 2:

  • P→(Q→R) 1. hypothesis
    • P→Q 2. hypothesis
      • P 3. hypothesis
      • Q 4. modus ponens 3,2
      • Q→R 5. modus ponens 3,1
      • R 6. modus ponens 4,5
    • P→R 7. deduction from 3 to 6
  • (P→Q)→(P→R) 8. deduction from 2 to 7
• (P→(Q→R))→((P→Q)→(P→R)) 9. deduction from 1 to 8 QED

Using axiom 1 to show ((P→(Q→P))→R)→R:

1. (P→(Q→P))→R (hypothesis)
2. P→(Q→P) (axiom 1)
3. R (modus ponens 2,1)
4. ((P→(Q→P))→R)→R (deduction from 1 to 3)

QED
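Each formula “proved” in these examples is in fact a truth-functional tautology, which can be checked by brute force over all truth assignments. The following Python sketch (the helper names `implies` and `tautology` are chosen here for illustration, not taken from the text) verifies the three results:

```python
from itertools import product

def implies(a, b):
    # material implication: A -> B is false only when A is true and B is false
    return (not a) or b

def tautology(f, arity):
    # check f under every assignment of True/False to its arguments
    return all(f(*vals) for vals in product([False, True], repeat=arity))

# axiom 1: P -> (Q -> P)
assert tautology(lambda p, q: implies(p, implies(q, p)), 2)

# axiom 2: (P -> (Q -> R)) -> ((P -> Q) -> (P -> R))
assert tautology(lambda p, q, r: implies(implies(p, implies(q, r)),
                                         implies(implies(p, q), implies(p, r))), 3)

# ((P -> (Q -> P)) -> R) -> R
assert tautology(lambda p, q, r: implies(implies(implies(p, implies(q, p)), r), r), 3)
```

Of course, a truth-table check only confirms semantic validity; the point of the examples above is that the formulas are also derivable syntactically.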

13.2 Virtual rules of inference

From the examples, you can see that we have added three virtual (or extra and temporary) rules of inference to our normal axiomatic logic. These are “hypothesis”, “reiteration”, and “deduction”. The normal rules of inference (i.e. “modus ponens” and the various axioms) remain available. 1. Hypothesis is a step where one adds an additional premise to those already available. So, if your previous step S was deduced as:

E₁, E₂, ..., Eₙ₋₁, Eₙ ⊢ S,

then one adds another premise H and gets:

E₁, E₂, ..., Eₙ₋₁, Eₙ, H ⊢ H.

This is symbolized by moving from the n-th level of indentation to the n+1-th level and saying

• S (previous step)
  • H (hypothesis)

2. Reiteration is a step where one re-uses a previous step. In practice, this is only necessary when one wants to take a hypothesis which is not the most recent hypothesis and use it as the final step before a deduction step. 3. Deduction is a step where one removes the most recent hypothesis (still available) and prefixes it to the previous step. This is shown by unindenting one level as follows:

  • H (hypothesis)
  • ... (other steps)
  • C (conclusion drawn from H)
• H→C (deduction)

13.3 Conversion from proof using the deduction meta-theorem to axiomatic proof

In axiomatic versions of propositional logic, one usually has among the axiom schemas (where P, Q, and R are replaced by any propositions):

• Axiom 1 is: P→(Q→P)

• Axiom 2 is: (P→(Q→R))→((P→Q)→(P→R))

• Modus ponens is: from P and P→Q infer Q

These axiom schemas are chosen to enable one to derive the deduction theorem from them easily. So it might seem that we are begging the question. However, they can be justified by checking that they are tautologies using truth tables and that modus ponens preserves truth. From these axiom schemas one can quickly deduce the theorem schema P→P (reflexivity of implication) which is used below:

1. (P→((Q→P)→P))→((P→(Q→P))→(P→P)) from axiom schema 2 with P,(Q→P), P

2. P→((Q→P)→P) from axiom schema 1 with P,(Q→P)

3. (P→(Q→P))→(P→P) from modus ponens applied to step 2 and step 1

4. P→(Q→P) from axiom schema 1 with P, Q

5. P→P from modus ponens applied to step 4 and step 3
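This five-step derivation can be replayed mechanically. In the Python sketch below (representing formulas as nested tuples, a representation chosen here purely for illustration), modus ponens only fires when its first argument matches the antecedent of the second, so the final assertion confirms that every step is a legal instance:

```python
def imp(a, b):
    return ('->', a, b)

def ax1(p, q):
    # axiom schema 1: P -> (Q -> P)
    return imp(p, imp(q, p))

def ax2(p, q, r):
    # axiom schema 2: (P -> (Q -> R)) -> ((P -> Q) -> (P -> R))
    return imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))

def mp(a, a_implies_b):
    # modus ponens: from A and A -> B, conclude B
    op, antecedent, consequent = a_implies_b
    assert op == '->' and antecedent == a, "modus ponens does not apply"
    return consequent

P, Q = 'P', 'Q'
s1 = ax2(P, imp(Q, P), P)   # step 1: instance of axiom schema 2
s2 = ax1(P, imp(Q, P))      # step 2: instance of axiom schema 1
s3 = mp(s2, s1)             # step 3: (P -> (Q -> P)) -> (P -> P)
s4 = ax1(P, Q)              # step 4: instance of axiom schema 1
s5 = mp(s4, s3)             # step 5
assert s5 == imp(P, P)      # the theorem schema P -> P
```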

Suppose that Γ together with H proves C, and we wish to show that Γ proves H→C. For each step S in the deduction that is a premise in Γ (a reiteration step) or an axiom, we can apply modus ponens to the axiom 1 instance S→(H→S) to get H→S. If the step is H itself (a hypothesis step), we apply the theorem schema to get H→H. If the step is the result of applying modus ponens to A and A→S, we first make sure that these have been converted to H→A and H→(A→S), then take the axiom 2 instance (H→(A→S))→((H→A)→(H→S)) and apply modus ponens to get (H→A)→(H→S), and then again to get H→S. At the end of the proof we will have H→C as required, except that now it depends only on Γ, not on H. So the deduction step disappears, consolidated into the previous step, which was the conclusion derived from H.

To minimize the complexity of the resulting proof, some preprocessing should be done before the conversion. Any steps (other than the conclusion) that do not actually depend on H should be moved up before the hypothesis step and unindented one level. Any other unnecessary steps (ones not used to get the conclusion, or that can be bypassed), such as reiterations that are not the conclusion, should be eliminated.

During the conversion, it may be useful to put all the applications of modus ponens to axiom 1 at the beginning of the deduction (right after the H→H step). When converting a modus ponens, if A is outside the scope of H, it will be necessary to apply axiom 1, A→(H→A), and modus ponens to get H→A. Similarly, if A→S is outside the scope of H, apply axiom 1, (A→S)→(H→(A→S)), and modus ponens to get H→(A→S). It should not be necessary to do both, unless the modus ponens step is the conclusion: if both are outside the scope, then the modus ponens step should have been moved up before H and would thus be outside the scope as well.
Under the Curry–Howard correspondence, the above conversion process for the deduction meta-theorem is analogous to the conversion process from lambda calculus terms to terms of combinatory logic, where axiom 1 corresponds to the K combinator, and axiom 2 corresponds to the S combinator. Note that the I combinator corresponds to the theorem schema P→P.

13.4 The deduction theorem in predicate logic

The deduction theorem is also valid in first-order logic in the following form:

• If T is a theory and F, G are formulas with F closed, and T ∪ {F} ⊢ G, then T ⊢ F→G.

Here, the symbol ⊢ means “is a syntactical consequence of.” We indicate below how the proof of this deduction theorem differs from that of the deduction theorem in propositional calculus. In the most common versions of the notion of formal proof, there are, in addition to the axiom schemes of propositional calculus (or the understanding that all tautologies of propositional calculus are to be taken as axiom schemes

in their own right), quantifier axioms, and in addition to modus ponens, one additional rule of inference, known as the rule of generalization: “From K, infer ∀vK.” In order to convert a proof of G from T∪{F} to one of F→G from T, one deals with steps of the proof of G which are axioms or result from application of modus ponens in the same way as for proofs in propositional logic. Steps which result from application of the rule of generalization are dealt with via the following quantifier axiom (valid whenever the variable v is not free in formula H):

• (H→K)→(H→∀vK).

Since in our case F is assumed to be closed, we can take H to be F. This axiom allows one to deduce F→∀vK from F→K, which is just what is needed whenever the rule of generalization is applied to some K in the proof of G.

13.5 Example of conversion

To illustrate how one can convert a natural deduction to the axiomatic form of proof, we apply it to the tautology Q→((Q→R)→R). In practice, it is usually enough to know that we could do this. We normally use the natural- deductive form in place of the much longer axiomatic proof. First, we write a proof using a natural-deduction like method:

1. Q (hypothesis)
2. Q→R (hypothesis)
3. R (modus ponens 1,2)
4. (Q→R)→R (deduction from 2 to 3)
5. Q→((Q→R)→R) (deduction from 1 to 4)

QED

Second, we convert the inner deduction to an axiomatic proof:

1. (Q→R)→(Q→R) (theorem schema A→A)
2. ((Q→R)→(Q→R))→(((Q→R)→Q)→((Q→R)→R)) (axiom 2)
3. ((Q→R)→Q)→((Q→R)→R) (modus ponens 1,2)
4. Q→((Q→R)→Q) (axiom 1)
5. Q (hypothesis)
6. (Q→R)→Q (modus ponens 5,4)
7. (Q→R)→R (modus ponens 6,3)
8. Q→((Q→R)→R) (deduction from 5 to 7)

QED

Third, we convert the outer deduction to an axiomatic proof:

1. (Q→R)→(Q→R) (theorem schema A→A)
2. ((Q→R)→(Q→R))→(((Q→R)→Q)→((Q→R)→R)) (axiom 2)
3. ((Q→R)→Q)→((Q→R)→R) (modus ponens 1,2)
4. Q→((Q→R)→Q) (axiom 1)
5. [((Q→R)→Q)→((Q→R)→R)]→[Q→(((Q→R)→Q)→((Q→R)→R))] (axiom 1)
6. Q→(((Q→R)→Q)→((Q→R)→R)) (modus ponens 3,5)
7. [Q→(((Q→R)→Q)→((Q→R)→R))]→([Q→((Q→R)→Q)]→[Q→((Q→R)→R)]) (axiom 2)
8. [Q→((Q→R)→Q)]→[Q→((Q→R)→R)] (modus ponens 6,7)
9. Q→((Q→R)→R) (modus ponens 4,8)

QED

These three steps can be stated succinctly using the Curry–Howard correspondence:

• First, in lambda calculus, the function f = λa. λb. b a has type q → ((q → r) → r).
• Second, by lambda elimination on b, f = λa. S I (K a).
• Third, by lambda elimination on a, f = S (K (S I)) K.
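These combinator equations can be tested directly. In the Python sketch below (an illustration only, encoding the standard combinators as curried closures), the term S (K (S I)) K indeed behaves as λa. λb. b a:

```python
# the standard S, K, I combinators as curried Python functions
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

# f = S (K (S I)) K, the combinatory form of (lambda a: lambda b: b(a))
f = S(K(S(I)))(K)

# f takes a value of type q and a function of type q -> r, and applies the function
assert f(3)(lambda q: q + 1) == 4
assert f("x")(len) == 1
```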

13.6 Paraconsistent deduction theorem

In general, the classical deduction theorem doesn't hold in paraconsistent logic. However, the following “two-way deduction theorem” does hold in one form of paraconsistent logic:[3]

⊢ E → F if and only if ( E ⊢ F and ¬F ⊢ ¬E )

This requires the contrapositive inference to hold in addition to the requirement of the classical deduction theorem.

13.7 The resolution theorem

The resolution theorem is the converse of the deduction theorem. It follows immediately from modus ponens, which is the elimination rule for implication.

13.8 See also

• Propositional calculus
• Peirce’s law

13.9 Notes

[1] Kleene 1967, p. 39, 112; Shoenfield 1967, p. 33

[2] Kohlenbach 2008, p. 148

[3] Hewitt 2008

13.10 References

• Carl Hewitt (2008), “ORGs for Scalable, Robust, Privacy-Friendly Client Cloud Computing”, IEEE Internet Computing 12 (5): 96, doi:10.1109/MIC.2008.107. September/October 2008
• Kohlenbach, Ulrich (2008), Applied Proof Theory: Proof Interpretations and Their Use in Mathematics, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-77532-4, MR 2445721
• Kleene, Stephen Cole (2002) [1967], Mathematical Logic, New York: Dover Publications, ISBN 978-0-486-42533-7, MR 1950307
• Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6
• Shoenfield, Joseph R. (2001) [1967], Mathematical Logic (2nd ed.), A K Peters, ISBN 978-1-56881-135-2

13.11 External links

• Introduction to Mathematical Logic by Vilnis Detlovs and Karlis Podnieks is a comprehensive tutorial. See Section 1.5.

Chapter 14

Destructive dilemma

Destructive dilemma[1][2] is the name of a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either Q is false or S is false, then either P or R must be false. In sum, if two conditionals are true, but one of their consequents is false, then one of their antecedents has to be false. Destructive dilemma is the disjunctive version of modus tollens. The disjunctive version of modus ponens is the constructive dilemma. The rule can be stated:

P → Q, R → S, ¬Q ∨ ¬S ∴ ¬P ∨ ¬R

where the rule is that wherever instances of " P → Q ", " R → S ", and " ¬Q ∨ ¬S " appear on lines of a proof, " ¬P ∨ ¬R " can be placed on a subsequent line.

14.1 Formal notation

The destructive dilemma rule may be written in sequent notation:

(P → Q), (R → S), (¬Q ∨ ¬S) ⊢ (¬P ∨ ¬R)

where ⊢ is a metalogical symbol meaning that ¬P ∨ ¬R is a syntactic consequence of P → Q , R → S , and ¬Q ∨ ¬S in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → S)) ∧ (¬Q ∨ ¬S)) → (¬P ∨ ¬R)

where P , Q , R and S are propositions expressed in some formal system.
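The tautology can be verified mechanically by enumerating all sixteen truth assignments; a short Python sketch (illustrative, with the helper name `implies` chosen here for exposition):

```python
from itertools import product

def implies(a, b):
    # material implication: A -> B is false only when A is true and B is false
    return (not a) or b

# (((P -> Q) and (R -> S)) and (not Q or not S)) -> (not P or not R)
assert all(
    implies(implies(p, q) and implies(r, s) and ((not q) or (not s)),
            (not p) or (not r))
    for p, q, r, s in product([False, True], repeat=4)
)
```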

14.2 Natural language example

If it rains, we will stay inside. If it is sunny, we will go for a walk. Either we will not stay inside, or we will not go for a walk, or both. Therefore, either it will not rain, or it will not be sunny, or both.


14.3 Proof

14.4 Example proof

The validity of this argument structure can be shown by using both conditional proof (CP) and reductio ad absurdum (RAA) in the following way:

14.5 References

[1] Hurley, Patrick. A Concise Introduction to Logic With Ilrn Printed Access Card. Wadsworth Pub Co, 2008. Page 361

[2] Moore and Parker

14.6 Bibliography

• Howard-Snyder, Frances; Howard-Snyder, Daniel; Wasserman, Ryan. The Power of Logic (4th ed.). McGraw-Hill, 2009, ISBN 978-0-07-340737-1, p. 414.

14.7 External links

• http://mathworld.wolfram.com/DestructiveDilemma.html

Chapter 15

Disjunction elimination

For the theorem of propositional logic which expresses Disjunction elimination, see Case analysis.

In propositional logic, disjunction elimination[1][2] (sometimes named proof by cases, case analysis, or or elimination) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement P implies a statement Q and a statement R also implies Q , then if either P or R is true, then Q has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q , Q is certainly true.

If I'm inside, I have my wallet on me. If I'm outside, I have my wallet on me. It is true that either I'm inside or I'm outside. Therefore, I have my wallet on me.

The rule can be stated as:

P → Q, R → Q, P ∨ R ∴ Q

where the rule is that whenever instances of " P → Q ", " R → Q ", and " P ∨ R " appear on lines of a proof, " Q " can be placed on a subsequent line.

15.1 Formal notation

The disjunction elimination rule may be written in sequent notation:

(P → Q), (R → Q), (P ∨ R) ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q , R → Q , and P ∨ R in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → Q)) ∧ (P ∨ R)) → Q

where P , Q , and R are propositions expressed in some formal system.
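As with the other rules in these chapters, the corresponding tautology can be confirmed by an exhaustive check of the eight valuations; a brief illustrative Python sketch:

```python
from itertools import product

def implies(a, b):
    # material implication
    return (not a) or b

# (((P -> Q) and (R -> Q)) and (P or R)) -> Q
assert all(
    implies(implies(p, q) and implies(r, q) and (p or r), q)
    for p, q, r in product([False, True], repeat=3)
)
```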

57 58 CHAPTER 15. DISJUNCTION ELIMINATION

15.2 See also

• Disjunction

• Argument in the alternative
• Disjunctive normal form

15.3 References

[1] https://proofwiki.org/wiki/Rule_of_Or-Elimination

[2] http://www.cs.gsu.edu/~cscskp/Automata/proofs/node6.html

Chapter 16

Disjunction introduction

Disjunction introduction or addition (also called or introduction)[1][2][3] is a simple valid argument form, an immediate inference and a rule of inference of propositional logic. The rule makes it possible to introduce disjunctions to logical proofs. It is the inference that if P is true, then P or Q must be true.

Socrates is a man. Therefore, either Socrates is a man or pigs are flying in formation over the English Channel.

The rule can be expressed as:

P ∴ P ∨ Q

where the rule is that whenever instances of " P " appear on lines of a proof, " P ∨ Q " can be placed on a subsequent line.

Disjunction introduction is controversial in paraconsistent logic because, in combination with other rules of logic, it leads to explosion (i.e. everything becomes provable). See Tradeoffs in Paraconsistent logic.

16.1 Formal notation

The disjunction introduction rule may be written in sequent notation:

P ⊢ (P ∨ Q)

where ⊢ is a metalogical symbol meaning that P ∨ Q is a syntactic consequence of P in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

P → (P ∨ Q)

where P and Q are propositions expressed in some formal system.

16.2 References

[1] Hurley

[2] Moore and Parker

[3] Copi and Cohen

Chapter 17

Disjunctive syllogism

In classical logic, disjunctive syllogism[1][2] (historically known as modus tollendo ponens) is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.[3][4]

Either the breach is a safety violation, or it is not subject to fines. The breach is not a safety violation. Therefore, it is not subject to fines.

In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E)[5][6][7][8] is a valid rule of inference. If we are told that at least one of two statements is true, and also told that it is not the former that is true, we can infer that it has to be the latter that is true. If either P or Q is true and P is false, then Q is true. The reason this is called “disjunctive syllogism” is that, first, it is a syllogism, a three-step argument, and second, it contains a disjunction, which simply means an “or” statement. “Either P or Q” is a disjunction; P and Q are called the statement’s disjuncts. The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that:

P ∨ Q, ¬P ∴ Q

where the rule is that whenever instances of " P ∨ Q " and " ¬P " appear on lines of a proof, " Q " can be placed on a subsequent line.

Disjunctive syllogism is closely related and similar to hypothetical syllogism, in that it is also a type of syllogism and also the name of a rule of inference. It is also related to the law of excluded middle and the law of noncontradiction, two of the three traditional laws of thought.

17.1 Formal notation

The disjunctive syllogism rule may be written in sequent notation:

P ∨ Q, ¬P ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P ∨ Q and ¬P in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

((P ∨ Q) ∧ ¬P) → Q

where P and Q are propositions expressed in some formal system.


17.2 Natural language examples

Here is an example:

Either I will choose soup or I will choose salad. I will not choose soup. Therefore, I will choose salad.

Here is another example:

It is either red or blue. It is not blue. Therefore, it is red.

17.3 Inclusive and exclusive disjunction

Note that the disjunctive syllogism works whether “or” is considered exclusive or inclusive disjunction; the two senses are defined below. There are two kinds of logical disjunction:

• inclusive means “and/or”: at least one of the disjuncts is true, or possibly both.
• exclusive (“xor”) means that exactly one of the disjuncts must be true, but not both.

The widely used English language concept of or is often ambiguous between these two meanings, but the difference is pivotal in evaluating disjunctive arguments. This argument:

Either P or Q. Not P. Therefore, Q.

is valid and indifferent between both meanings. However, only in the exclusive meaning is the following form valid:

Either (only) P or (only) Q. P. Therefore, not Q.

With the inclusive meaning, no conclusion can be drawn from the first two premises of that argument. See affirming a disjunct.
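The asymmetry between the two readings of “or” can be checked exhaustively. In the Python sketch below (illustrative only; inclusive disjunction is rendered as `or` and exclusive disjunction as inequality of booleans), disjunctive syllogism holds under both readings, while affirming a disjunct has a counterexample only under the inclusive reading:

```python
from itertools import product

def implies(a, b):
    # material implication
    return (not a) or b

assignments = list(product([False, True], repeat=2))

# disjunctive syllogism is valid for inclusive and exclusive 'or'
assert all(implies((p or q) and not p, q) for p, q in assignments)
assert all(implies((p != q) and not p, q) for p, q in assignments)

# affirming a disjunct: from "P or Q" and P, infer not-Q
# invalid for inclusive 'or' -- both disjuncts may be true
counterexamples = [(p, q) for p, q in assignments if (p or q) and p and q]
assert counterexamples == [(True, True)]

# valid for exclusive 'or'
assert all(implies((p != q) and p, not q) for p, q in assignments)
```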

17.4 Related argument forms

Unlike modus ponendo ponens and modus ponendo tollens, with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a (slightly devious) combination of reductio ad absurdum and disjunction elimination. Other forms of syllogism:

• hypothetical syllogism

• categorical syllogism

Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics.[9]

17.5 References

[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 362.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 320–1.

[3] Hurley

[4] Copi and Cohen

[5] Sanford, David Hawley. 2003. If P, Then Q: Conditionals and the Foundations of Reasoning. London, UK: Routledge: 39

[6] Hurley

[7] Copi and Cohen

[8] Moore and Parker

[9] Chris Mortensen, Inconsistent Mathematics, Stanford Encyclopedia of Philosophy, First published Tue Jul 2, 1996; substantive revision Thu Jul 31, 2008

Chapter 18

Distributive property

“Distributivity” redirects here. It is not to be confused with Distributivism.

In abstract algebra and formal logic, the distributive property of binary operations generalizes the distributive law from elementary algebra. In propositional logic, distribution refers to two valid rules of replacement. The rules allow one to reformulate conjunctions and disjunctions within logical proofs. For example, in arithmetic:

2 ⋅ (1 + 3) = (2 ⋅ 1) + (2 ⋅ 3), but 2 / (1 + 3) ≠ (2 / 1) + (2 / 3).

In the left-hand side of the first equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the 1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), it is said that multiplication by 2 distributes over addition of 1 and 3. Since one could have put any real numbers in place of 2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over addition of real numbers.

18.1 Definition

Given a set S and two binary operators ∗ and + on S, we say that the operation ∗

• is left-distributive over + if, given any elements x, y, and z of S,

x ∗ (y + z) = (x ∗ y) + (x ∗ z)

• is right-distributive over + if, given any elements x, y, and z of S:

(y + z) ∗ x = (y ∗ x) + (z ∗ x)

• is distributive over + if it is left- and right-distributive.[1]

Notice that when ∗ is commutative, the three conditions above are logically equivalent.

18.2 Meaning

The operators used for examples in this section are the binary operations of addition ( + ) and multiplication ( · ) of numbers. There is a distinction between left-distributivity and right-distributivity:


a · (b ± c) = a · b ± a · c (left-distributive)
(a ± b) · c = a · c ± b · c (right-distributive)

In either case, the distributive property can be described in words as: To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted). If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa. One example of an operation that is “only” right-distributive is division, which is not commutative:

(a ± b) ÷ c = a ÷ c ± b ÷ c

In this case, left-distributivity does not apply:

a ÷ (b ± c) ≠ a ÷ b ± a ÷ c
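This one-sided behaviour of division is easy to confirm with exact rational arithmetic; the following Python sketch (illustrative) uses the standard library's `fractions` module:

```python
from fractions import Fraction as F

a, b, c = F(1), F(2), F(3)

# right-distributivity of division over addition holds:
assert (a + b) / c == a / c + b / c

# left-distributivity fails: 1/(2+3) = 1/5, but 1/2 + 1/3 = 5/6
assert a / (b + c) == F(1, 5)
assert a / b + a / c == F(5, 6)
assert a / (b + c) != a / b + a / c
```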

The distributive laws are among the axioms for rings and fields. Examples of structures in which two operations are mutually related to each other by the distributive law are Boolean algebras such as the algebra of sets or the switching algebra. There are also combinations of operations that are not mutually distributive over each other; for example, addition is not distributive over multiplication. Multiplying sums can be put into words as follows: when a sum is multiplied by a sum, multiply each summand of one sum by each summand of the other (keeping track of signs), and then add up all of the resulting products.

18.3 Examples

18.3.1 Real numbers

In the following examples, the use of the distributive law on the set of real numbers R is illustrated. When multi- plication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

First example (mental and written multiplication)

During mental arithmetic, distributivity is often used unconsciously:

6 · 16 = 6 · (10 + 6) = 6 · 10 + 6 · 6 = 60 + 36 = 96

Thus, to calculate 6 ⋅ 16 in your head, you first multiply 6 ⋅ 10 and 6 ⋅ 6 and add the intermediate results. Written multiplication is also based on the distributive law.

Second example (with variables)

3a²b · (4a − 5b) = 3a²b · 4a − 3a²b · 5b = 12a³b − 15a²b²

Third example (with two sums)

(a + b) · (a − b) = a · (a − b) + b · (a − b) = a² − ab + ba − b² = a² − b²
(a + b) · (a − b) = (a + b) · a − (a + b) · b = a² + ba − ab − b² = a² − b²

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

Fourth example (factoring out)

Here the distributive law is applied the other way around compared to the previous examples. Consider 12a³b² − 30a⁴bc + 18a²b³c².

Since the factor 6a²b occurs in every summand, it can be factored out. That is, by the distributive law one obtains 12a³b² − 30a⁴bc + 18a²b³c² = 6a²b(2ab − 5a²c + 3b²c²).
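The factorization can be spot-checked numerically; the Python sketch below (illustrative) compares both sides for a range of integer values:

```python
from itertools import product

for a, b, c in product(range(-4, 5), repeat=3):
    expanded = 12*a**3*b**2 - 30*a**4*b*c + 18*a**2*b**3*c**2
    factored = 6*a**2*b * (2*a*b - 5*a**2*c + 3*b**2*c**2)
    assert expanded == factored
```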

18.3.2 Matrices

The distributive law is valid for matrix multiplication. More precisely,

(A + B) · C = A · C + B · C

for all l × m matrices A, B and m × n matrices C, as well as

A · (B + C) = A · B + A · C

for all l × m matrices A and m × n matrices B, C. Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
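Both matrix distributive laws can be illustrated concretely. The Python sketch below (naive list-of-lists matrices, with helper names `madd` and `mmul` invented here for illustration) checks them on one small example:

```python
def madd(A, B):
    # entrywise sum of two same-shaped matrices
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mmul(A, B):
    # standard matrix product
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [2, 1]]

# right-distributivity: (A + B) * C = A * C + B * C
assert mmul(madd(A, B), C) == madd(mmul(A, C), mmul(B, C))

# left-distributivity: C * (A + B) = C * A + C * B
assert mmul(C, madd(A, B)) == madd(mmul(C, A), mmul(C, B))
```

A single example cannot prove the laws, of course, but it makes the failure of commutativity irrelevant to each law visible in code.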

18.3.3 Other examples

1. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.
2. The cross product is left- and right-distributive over vector addition, though not commutative.
3. The union of sets is distributive over intersection, and intersection is distributive over union.
4. Logical disjunction (“or”) is distributive over logical conjunction (“and”), and conjunction is distributive over disjunction.
5. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice versa: max(a, min(b, c)) = min(max(a, b), max(a, c)) and min(a, max(b, c)) = max(min(a, b), min(a, c)).
6. For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).
7. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max(b, c) = max(a + b, a + c) and a + min(b, c) = min(a + b, a + c).
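Examples 5 to 7 can be verified exhaustively over a small range of positive integers; a Python sketch (illustrative):

```python
from itertools import product
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

for a, b, c in product(range(1, 13), repeat=3):
    # gcd distributes over lcm, and vice versa
    assert gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
    assert lcm(a, gcd(b, c)) == gcd(lcm(a, b), lcm(a, c))
    # max distributes over min, and vice versa
    assert max(a, min(b, c)) == min(max(a, b), max(a, c))
    assert min(a, max(b, c)) == max(min(a, b), min(a, c))
    # addition distributes over max and over min
    assert a + max(b, c) == max(a + b, a + c)
    assert a + min(b, c) == min(a + b, a + c)
```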

18.4 Propositional logic

18.4.1 Rule of replacement

In standard truth-functional propositional logic, distribution[2][3][4] in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are:

(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R)) and

(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R))

where " ⇔ ", also written ≡, is a metalogical symbol representing “can be replaced in a proof with” or “is logically equivalent to”.
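Both replacement rules correspond to truth-functional equivalences, which a short Python sketch (illustrative) confirms by checking all eight valuations:

```python
from itertools import product

for p, q, r in product([False, True], repeat=3):
    # conjunction distributes over disjunction
    assert (p and (q or r)) == ((p and q) or (p and r))
    # disjunction distributes over conjunction
    assert (p or (q and r)) == ((p or q) and (p or r))
```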

18.4.2 Truth functional connectives

Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.

Distribution of conjunction over conjunction (P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ (P ∧ R))

Distribution of conjunction over disjunction [5] (P ∧ (Q ∨ R)) ↔ ((P ∧ Q) ∨ (P ∧ R))

Distribution of disjunction over conjunction [6] (P ∨ (Q ∧ R)) ↔ ((P ∨ Q) ∧ (P ∨ R))

Distribution of disjunction over disjunction (P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ (P ∨ R))

Distribution of implication (P → (Q → R)) ↔ ((P → Q) → (P → R))

Distribution of implication over equivalence (P → (Q ↔ R)) ↔ ((P → Q) ↔ (P → R))

Distribution of disjunction over equivalence (P ∨ (Q ↔ R)) ↔ ((P ∨ Q) ↔ (P ∨ R))

Double distribution
((P ∧ Q) ∨ (R ∧ S)) ↔ (((P ∨ R) ∧ (P ∨ S)) ∧ ((Q ∨ R) ∧ (Q ∨ S)))
((P ∨ Q) ∧ (R ∨ S)) ↔ (((P ∧ R) ∨ (P ∧ S)) ∨ ((Q ∧ R) ∨ (Q ∧ S)))

18.5 Distributivity and rounding

In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity ⅓ + ⅓ + ⅓ = (1 + 1 + 1) / 3 appears to fail if the addition is conducted in decimal arithmetic; however, if many significant digits are used, the calculation will result in a closer approximation to the correct result. For example, the calculation 0.33333 + 0.33333 + 0.33333 = 0.99999 ≠ 1 is a closer approximation than one using fewer significant digits.

Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01 over buying them together: £14.99 × 1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98 × 1.175 = £35.23. Methods such as banker’s rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
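The book-buying example can be reproduced with exact decimal arithmetic from the standard library. In the Python sketch below (illustrative; it rounds halves up, as a till might, and the helper name `till_round` is invented here), the two purchase orders really do differ by £0.01:

```python
from decimal import Decimal, ROUND_HALF_UP

PENNY = Decimal("0.01")

def till_round(amount):
    # round a monetary amount to the nearest penny
    return amount.quantize(PENNY, rounding=ROUND_HALF_UP)

price = Decimal("14.99")
tax = Decimal("1.175")   # 17.5% tax

# two separate transactions: round each, then add
separate = till_round(price * tax) + till_round(price * tax)
# one combined transaction: add first, then round once
combined = till_round(price * 2 * tax)

assert separate == Decimal("35.22")
assert combined == Decimal("35.23")
assert combined - separate == PENNY
```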

18.6 Distributivity in rings

Distributivity is most commonly found in rings and distributive lattices. A ring has two binary operations (commonly called "+" and "∗"), and one of the requirements of a ring is that ∗ must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory).

Examples 4 and 5 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 6 and 7 are distributive lattices which are not Boolean algebras.

Failure of one of the two distributive laws brings about near-rings and near-fields instead of rings and division rings respectively. The operations are usually configured to have the near-ring or near-field distributive on the right but not on the left.

Rings and distributive lattices are both special kinds of rigs, certain generalizations of rings. Those numbers in example 1 that don't form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig.

18.7 Generalizations of distributivity

In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law; others are defined in the presence of only one binary operation. The corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.

In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.

In category theory, if (S, μ, η) and (S′, μ′, η′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is S′μ.μ′S².S′λS and the unit map is η′S.η. See: distributive law between monads.

A generalized distributive law has also been proposed in the area of information theory.

18.7.1 Notions of antidistributivity

The ubiquitous identity that relates inverses to the binary operation in any group, namely (xy)⁻¹ = y⁻¹x⁻¹, which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).[7]

In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left-nearring (i.e. one in which all elements distribute when multiplied on the left), an antidistributive element a reverses the order of addition when multiplied to the right: (x + y)a = ya + xa.[8]

In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:[9]

• (a ∨ b) ⇒ c ≡ (a ⇒ c) ∧ (b ⇒ c)

• (a ∧ b) ⇒ c ≡ (a ⇒ c) ∨ (b ⇒ c)

These two tautologies are a direct consequence of the duality in De Morgan’s laws.
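As a quick sanity check (not part of the original article), both equivalences can be verified by brute force over all eight truth assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# (a ∨ b) ⇒ c  ≡  (a ⇒ c) ∧ (b ⇒ c)
law1 = all(
    implies(a or b, c) == (implies(a, c) and implies(b, c))
    for a, b, c in product([True, False], repeat=3)
)

# (a ∧ b) ⇒ c  ≡  (a ⇒ c) ∨ (b ⇒ c)
law2 = all(
    implies(a and b, c) == (implies(a, c) or implies(b, c))
    for a, b, c in product([True, False], repeat=3)
)

print(law1, law2)  # -> True True
```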

18.8 Notes

[1] Ayres, p. 20

[2] Moore and Parker

[3] Copi and Cohen

[4] Hurley

[5] Russell and Whitehead, Principia Mathematica

[6] Russell and Whitehead, Principia Mathematica

[7] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. p. 4. ISBN 978-3-211-82971-4.

[8] Celestina Cotti Ferrero; Giovanni Ferrero (2002). Nearrings: Some Developments Linked to Semigroups and Groups. Kluwer Academic Publishers. pp. 62 and 67. ISBN 978-1-4613-0267-4.

[9] Eric C.R. Hehner (1993). A Practical Theory of Programming. Springer Science & Business Media. p. 230. ISBN 978-1-4419-8596-5.

18.9 References

• Ayres, Frank, Schaum’s Outline of Modern Abstract Algebra, McGraw-Hill; 1st edition (June 1, 1965). ISBN 0-07-002655-6.

18.10 External links

• A demonstration of the Distributive Law for integer arithmetic (from cut-the-knot)

Chapter 19

Double negation

This article is about the logical concept. For the linguistic concept, see double negative.

In propositional logic, double negation is the theorem that states that “If a statement is true, then it is not the case that the statement is not true.” This is expressed by saying that a proposition A is logically equivalent to not (not-A), or by the formula A ≡ ~(~A), where the sign ≡ expresses logical equivalence and the sign ~ expresses negation.[1]

Like the law of the excluded middle, this principle is considered to be a law of thought in classical logic,[2] but it is disallowed by intuitionistic logic.[3] The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

∗4 · 13. ⊢ . p ≡ ∼ (∼ p) [4] “This is the principle of double negation, i.e. a proposition is equivalent of the falsehood of its negation.”

The principium contradictiones of modern logicians (particularly Leibnitz and Kant) in the formula A is not not-A, differs entirely in meaning and application from the Aristotelian proposition [ i.e. Law of Contradiction: not (A and not-A) i.e. ~(A & ~A), or not (( B is A) and (B is not-A))]. This latter refers to the relation between an affirmative and a negative judgment. According to Aristotle, one judgment [B is judged to be an A] contradicts another [B is judged to be a not-A]. The later proposition [ A is not not-A ] refers to the relation between subject and predicate in a single judgment; the predicate contradicts the subject. Aristotle states that one judgment is false when another is true; the later writers [Leibniz and Kant] state that a judgment is in itself and absolutely false, because the predicate contradicts the subject. What the later writers desire is a principle from which it can be known whether certain propositions are in themselves true. From the Aristotelian proposition we cannot immediately infer the truth or falsehood of any particular proposition, but only the impossibility of believing both affirmation and negation at the same time.[5]

19.1 Double negative elimination

Double negative elimination and double negative introduction (also called double negation elimination, double negation introduction, double negation, or negation elimination[6][7][8]) are two valid rules of replacement. They are the inferences that if A is true, then not not-A is true, and conversely, that if not not-A is true, then A is true. The rules allow one to introduce or eliminate a negation in a logical proof. They are based on the equivalence of, for example, It is false that it is not raining. and It is raining. The double negation introduction rule is:

P ⇔ ¬¬P

and the double negation elimination rule is:

¬¬P ⇔ P

Where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”


19.1.1 Formal notation

The double negation introduction rule may be written in sequent notation:

P ⊢ ¬¬P

The double negation elimination rule may be written as:

¬¬P ⊢ P

In rule form:

P
∴ ¬¬P

and

¬¬P
∴ P

or as a tautology (plain propositional calculus sentence):

P → ¬¬P and

¬¬P → P

These can be combined together into a single biconditional formula:

¬¬P ↔ P

Since biconditionality is an equivalence relation, any instance of ¬¬A in a well-formed formula can be replaced by A, leaving the truth-value of the well-formed formula unchanged.

Double negative elimination is a theorem of classical logic, but not of weaker logics such as intuitionistic logic and minimal logic. Because of their constructive character, a statement such as It’s not the case that it’s not raining is weaker than It’s raining. The latter requires a proof of rain, whereas the former merely requires a proof that rain would not be contradictory. (This distinction also arises in natural language in the form of litotes.) Double negation introduction is a theorem of both intuitionistic logic and minimal logic, as is ¬¬¬A ⊢ ¬A.

In set theory we also have the complement operation, which obeys this property: a set A and the set (AC)C (where AC represents the complement of A) are the same.
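A brute-force check of the classical equivalence and of its set-theoretic analogue (this illustrates only the classical reading; it says nothing about the intuitionistic status of the rule, and the universe chosen below is an arbitrary assumption):

```python
# Classical double negation: P and not-not-P have the same truth value.
for P in (True, False):
    assert (not (not P)) == P

# Set-theoretic analogue: taking the complement twice returns the set.
U = set(range(10))   # an assumed finite universe
A = {1, 3, 5}

def complement(S):
    return U - S

assert complement(complement(A)) == A
```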

19.2 See also

• Gödel–Gentzen negative translation

19.3 Footnotes

[1] Or alternate symbolism such as A ↔ ¬(¬A) or Kleene’s *49o: A ∾ ¬¬A (Kleene 1952:119; in the original Kleene uses an elongated tilde ∾ for logical equivalence, approximated here with a “lazy S”.)

[2] Hamilton is discussing Hegel in the following: “In the more recent systems of philosophy, the universality and necessity of the axiom of Reason has, with other logical laws, been controverted and rejected by speculators on the absolute. [On the principle of Double Negation as another law of Thought, see Fries, Logik, §41, p. 190; Calker, Denklehre oder Logik und Dialektik, §165, p. 453; Beneke, Lehrbuch der Logik, §64, p. 41.]” (Hamilton 1860:68)

[3] The o of Kleene’s formula *49o indicates “the demonstration is not valid for both systems [classical system and intuitionistic system]", Kleene 1952:101.

[4] PM 1952 reprint of 2nd edition 1927 pages 101-102, page 117.

[5] Sigwart 1895:142-143

[6] Copi and Cohen

[7] Moore and Parker

[8] Hurley

19.4 References

• William Hamilton, 1860, Lectures on Metaphysics and Logic, Vol. II. Logic; Edited by Henry Mansel and John Veitch, Boston, Gould and Lincoln. Available online from googlebooks.

• Christoph Sigwart, 1895, Logic: The Judgment, Concept, and Inference; Second Edition, Translated by Helen Dendy, Macmillan & Co. New York. Available online from googlebooks.

• Stephen C. Kleene, 1952, Introduction to Metamathematics, 6th reprinting with corrections 1971, North-Holland Publishing Company, Amsterdam NY, ISBN 0 7204 2103 9.

• Stephen C. Kleene, 1967, Mathematical Logic, Dover edition 2002, Dover Publications, Inc, Mineola N.Y. ISBN 0-486-42533-9 (pbk.)

• Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, 2nd edition 1927, reprint 1962, Cambridge at the University Press, London UK, no ISBN or LCCCN.

Chapter 20

Existential generalization

In predicate logic, existential generalization[1][2] (also known as existential introduction, ∃I) is a valid rule of inference that allows one to move from a specific statement, or one instance, to a quantified generalized statement, or existential proposition. In first-order logic, it is often used as a rule for the existential quantifier (∃) in formal proofs. Example: “Rover loves to wag his tail. Therefore, something loves to wag its tail.” In the Fitch-style calculus:

Q(a) → ∃x Q(x)

Where a replaces all free instances of x within Q(x).[3]
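The Rover example can be illustrated over a finite domain (a sketch with hypothetical names; the formal rule itself applies to arbitrary domains):

```python
# Finite-domain illustration of existential introduction: from Q(a) for
# one named individual, conclude that something satisfies Q.
domain = ["rover", "felix", "tweety"]   # hypothetical individuals
wags_tail = {"rover"}                   # hypothetical extension of Q

def Q(x):
    return x in wags_tail

a = "rover"
assert Q(a)                          # premise: Q(a)
assert any(Q(x) for x in domain)     # conclusion: ∃x Q(x)
```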

20.1 Quine

Universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀x x=x" implies “Socrates=Socrates”, we could as well say that the denial “Socrates≠Socrates” implies "∃x x≠x". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially.[4]

20.2 See also

• Inference rules

20.3 References

[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[3] pg. 347. Jon Barwise and John Etchemendy, Language proof and logic Second Ed., CSLI Publications, 2008.

[4] Willard van Orman Quine; Roger F. Gibson (2008). “V.24. Reference and Modality”. Quintessence. Cambridge, Mass: Belknap Press of Harvard University Press. Here: p.366.

Chapter 21

Existential instantiation

In predicate logic, existential instantiation (also called existential elimination)[1][2][3] is a valid rule of inference which says that, given a formula of the form (∃x)ϕ(x) , one may infer ϕ(c) for a new constant or variable symbol c. The rule has the restriction that the constant or variable c introduced by the rule must be a new term that has not occurred earlier in the proof. In one formal notation, the rule may be denoted

(∃x)Fx :: Fa, where a is an arbitrary term that has not been a part of our proof thus far.
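A finite-domain sketch of the rule (domain and formula are hypothetical; in a real proof the witness name must be fresh, i.e. unused so far):

```python
# Existential elimination over a finite domain: given that some element
# satisfies phi, bind a new name c to a witness.
domain = [0, 1, 2, 3]

def phi(x):
    return x % 2 == 1   # hypothetical formula: "x is odd"

assert any(phi(x) for x in domain)        # premise: (∃x)phi(x)
c = next(x for x in domain if phi(x))     # c names a witness
assert phi(c)                             # conclusion: phi(c)
```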

21.1 See also

• existential fallacy

21.2 References

[1] Hurley, Patrick. A Concise Introduction to Logic. Wadsworth Pub Co, 2008.

[2] Copi and Cohen

[3] Moore and Parker

Chapter 22

Exportation (logic)

Exportation[1][2][3][4] is a valid rule of replacement in propositional logic. The rule allows conditional statements having conjunctive antecedents to be replaced by statements having conditional consequents, and vice versa, in logical proofs. It is the rule that:

((P ∧ Q) → R) ⇔ (P → (Q → R))

Where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

22.1 Formal notation

The exportation rule may be written in sequent notation:

((P ∧ Q) → R) ⊢ (P → (Q → R))

where ⊢ is a metalogical symbol meaning that (P → (Q → R)) is a syntactic consequence of ((P ∧ Q) → R) in some logical system; or in rule form:

(P ∧ Q) → R
∴ P → (Q → R)

where the rule is that wherever an instance of " (P ∧ Q) → R " appears on a line of a proof, it can be replaced with " P → (Q → R) "; or as the statement of a truth-functional tautology or theorem of propositional logic:

((P ∧ Q) → R) → (P → (Q → R))

where P , Q , and R are propositions expressed in some logical system.
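As a sanity check (not part of the original article), the exportation equivalence can be verified by brute force over all eight truth assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Exportation: ((P ∧ Q) → R) is equivalent to (P → (Q → R)).
exportation_holds = all(
    implies(p and q, r) == implies(p, implies(q, r))
    for p, q, r in product([True, False], repeat=3)
)
print(exportation_holds)  # -> True
```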

22.2 Natural language

22.2.1 Truth values

At any time, if P→Q is true, it can be replaced by P→(P∧Q). One possible case for P→Q is for P to be true and Q to be true; thus P∧Q is also true, and P→(P∧Q) is true. Another possible case sets P as false and Q as true. Thus, P∧Q is false, but P→(P∧Q) is true, since false→false is true. The last case occurs when both P and Q are false. Thus, P∧Q is false and P→(P∧Q) is true.


22.2.2 Example

It rains and the sun shines implies that there is a rainbow. Thus, if it rains, then the sun shines implies that there is a rainbow.

22.3 Relation to functions

Exportation is associated with Currying via the Curry–Howard correspondence.
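Under the Curry–Howard reading, exportation mirrors the equivalence between a function on a pair and its curried form. A hypothetical sketch:

```python
# Uncurried: a single function on a pair, analogous to (P ∧ Q) → R.
def add(pair):
    x, y = pair
    return x + y

# Curried: a function returning a function, analogous to P → (Q → R).
def curried_add(x):
    def add_y(y):
        return x + y
    return add_y

# Both forms compute the same result.
assert add((2, 3)) == curried_add(2)(3) == 5
```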

22.4 References

[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 364–5.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.

[3] Moore and Parker

[4] http://www.philosophypages.com/lg/e11b.htm

Chapter 23

First principle

A first principle is a basic, foundational proposition or assumption that cannot be deduced from any other proposition or assumption. In mathematics, first principles are referred to as axioms or postulates. In physics and other sciences, theoretical work is said to be from first principles, or ab initio, if it starts directly at the level of established science and does not make assumptions such as empirical models or fitting parameters.

23.1 First principles in formal logic

In a formal logical system, that is, a set of propositions that are consistent with one another, it is probable that some of the statements can be deduced from one another. For example, in the syllogism “All men are mortal; Socrates is a man; Socrates is mortal,” the last claim can be deduced from the first two. A first principle is one that cannot be deduced from any other. The classic example is Euclid’s geometry (see Euclid’s Elements); its hundreds of propositions can be deduced from a set of definitions, postulates, and common notions: all three types constitute first principles.

23.2 Philosophy in general

In philosophy, “first principles” are also commonly referred to as a priori terms and arguments, which are contrasted to a posteriori terms, reasoning, or arguments, in that the former are simply assumed and exist prior to the reasoning process, while the latter are “posterior,” meaning deduced or inferred in the reasoning process. First principles are generally treated in the realm of philosophy known as epistemology, but they are an important factor in any metaphysical speculation. In philosophy, “first principles” is often somewhat interchangeable and synonymous with a priori, datum, and axiom or axiomatic reasoning/method.

23.3 Aristotle’s contribution

Terence Irwin writes:

When Aristotle explains in general terms what he tries to do in his philosophical works, he says he is looking for “first principles” (or “origins"; archai): In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements. It is clear, then, that in the science of nature as elsewhere, we should try first to determine questions about the first principles. The naturally


proper direction of our road is from things better known and clearer to us, to things that are clearer and better known by nature; for the things known to us are not the same as the things known unconditionally (haplôs). Hence it is necessary for us to progress, following this procedure, from the things that are less clear by nature, but clearer to us, towards things that are clearer and better known by nature. (Phys. 184a10–21)

The connection between knowledge and first principles is not axiomatic as expressed in Aristotle’s account of a first principle (in one sense) as “the first basis from which a thing is known” (Met. 1013a14–15). The search for first principles is not peculiar to philosophy; philosophy shares this aim with biological, meteorological, and historical inquiries, among others. But Aristotle’s references to first principles in this opening passage of the Physics and at the start of other philosophical inquiries imply that it is a primary task of philosophy.[1]

23.4 Descartes

Profoundly influenced by Euclid, Descartes was a rationalist who invented the foundationalist system of philosophy. He used the method of doubt, now called Cartesian doubt, to systematically doubt everything he could possibly doubt, until he was left with what he saw as purely indubitable truths. Using these self-evident propositions as his axioms, or foundations, he went on to deduce his entire body of knowledge from them. The foundations are also called a priori truths. His most famous proposition is I think, therefore I am, or Cogito ergo sum. Descartes describes the concept of a first principle in the following excerpt from the preface to the Principles of Philosophy (1644):

I should have desired, in the first place, to explain in it what philosophy is, by commencing with the most common matters, as, for example, that the word philosophy signifies the study of wisdom, and that by wisdom is to be understood not merely prudence in the management of affairs, but a perfect knowledge of all that man can know, as well for the conduct of his life as for the preservation of his health and the discovery of all the arts, and that knowledge to subserve these ends must necessarily be deduced from first causes; so that in order to study the acquisition of it (which is properly called [284] philosophizing), we must commence with the investigation of those first causes which are called Principles. Now these principles must possess two conditions: in the first place, they must be so clear and evident that the human mind, when it attentively considers them, cannot doubt of their truth; in the second place, the knowledge of other things must be so dependent on them as that though the principles themselves may indeed be known apart from what depends on them, the latter cannot nevertheless be known apart from the former. It will accordingly be necessary thereafter to endeavor so to deduce from those principles the knowledge of the things that depend on them, as that there may be nothing in the whole series of deductions which is not perfectly manifest.[2]

23.5 In physics

In physics, a calculation is said to be from first principles, or ab initio, if it starts directly at the level of established laws of physics and does not make assumptions such as empirical model and fitting parameters. For example, calculation of electronic structure using Schrödinger’s equation within a set of approximations that do not include fitting the model to experimental data is an ab initio approach.

23.6 Notes

[1] Irwin, Terence. Aristotle’s First Principles. Oxford: Oxford University Press. ISBN 0-19-824290-5.

[2] VOL I, Principles, Preface to the French edition. Author’s letter to the translator of the book which may here serve as a preface, p. 181

23.7 See also

• Brute fact

• First cause

• Intelligibility (philosophy)

• Law of non-contradiction

• Laws of thought

• Notion (philosophy)

• Primitive notion

• Principles

23.8 External links

• Euclid’s Elements

Chapter 24

Formal ethics

Formal ethics is a formal logical system for describing and evaluating the form, as opposed to the content, of ethical principles. Formal ethics was introduced by Harry J. Gensler, in part in his 1990 logic textbook Symbolic Logic: Classical and Advanced Systems, but was more fully developed and justified in his 1996 book Formal Ethics. Formal ethics is related to ethical formalism in that its focus is the forms of moral judgments, but the exposition in Formal Ethics makes it clear that Gensler, unlike previous ethical formalists, does not consider formal ethics to be a complete ethical theory (such that the correct form would be necessary and sufficient for an ethical principle to be “correct”). In fact, the theorems of formal ethics could be seen as a largest common subset of most widely recognized ethical theories, in that none of its axioms (with the possible exception of rationality) is controversial among philosophers of ethics.

24.1 Symbolic representation

The axioms and theorems of formal ethics can be represented with the standard notation of predicate logic (but with a grammar closer to higher-order logics), augmented with imperative, deontic, belief, and modal logic symbols.

Formal ethics uses an underlined symbol (e.g. A ) to represent an imperative. If the same symbol is used without an underline, then the plain symbol is an indicative and the underlined symbol is an imperative version of the same proposition. For example, if we take the symbol A to mean the indicative “You eat an apple”, then A means the imperative “Eat an apple”. When a proposition is given as a predicate with one or more of the arguments representing agents, the agent to which the imperative applies is underlined. For example, if Dux means “You give a dollar to x”, then Dux is the correct way to express “Give a dollar to x”.

Within the system of formal ethics, an imperative is taken to represent a preference rather than a demand (called the “anti-modal” view, because an underline doesn't behave like a modal operator). With this interpretation, the negation of an imperative (e.g. ¬A ) is taken to mean “Don't do A”, not “You may omit A”. To express demands, an imperative modal operator M (for may) is defined, so that MA = “You may do A” and ¬M¬A = “You may not omit doing A” = “You must do A”. Note that M is different from the deontic R “all right” operator defined below, as “You must do A” is still an imperative, without any ought judgment (i.e. not the same as “You ought to do A”).

Following Castañeda's approach, the deontic operators O (for ought) and R (for all right, represented P for permissible in some deontic logic notations) are applied to imperatives. This is opposed to many deontic logics, which apply the deontic operators to indicatives. Doing so avoids a difficulty of many deontic logics in expressing conditional imperatives. An often given example is If you smoke, then you ought to use an ashtray.
If the deontic operators O and R only attach to indicatives, then it is not clear that either of the following representations is adequate:

O(smoke → ashtray)

smoke → O(ashtray)

However, by attaching the deontic operators to imperatives, we have unambiguously

O(smoke → ashtray)


Belief logic symbols, when combined with imperative logic, allow beliefs and desires to be expressed. The notation u : A is used for beliefs (“You believe A”) and u : A for desires (“You desire A”). In formal ethics, desire is taken in a strong sense when the agent of the belief is the same as the agent of the imperative. The following table shows the different interpretations for i : A depending on the agent and the tense of the imperative: This strong interpretation of desires precludes statements such as “I want to get out of bed (right now), but I don't act to get out of bed”. It does not, however, preclude “I want to get out of bed (right now), but I don't get out of bed”. Perhaps I act to get out of bed (make my best effort), but can't for some reason (e.g. I am tied down, my legs are broken, etc.).

Beliefs may be indicative, as above, or imperative (e.g. u : A “Believe A”, u : A “Desire A”). They may also be combined with the deontic operators. For example, if G means “God exists”, then O(u : G) is “You ought to believe that God exists”, and (x)O(x : G) is “Everyone ought to believe that God exists”.

The modal operators □ and ⋄ are used with their normal meanings in modal logic. In addition, to address the fact that logicians may disagree on what is logically necessary or possible, causal modal operators are separately defined to express that something is causally necessary or possible. The causal modal operators are represented □c and ⋄c. In addition, an operator ■ is used to mean “in every actual or hypothetical case”. This is used, for example, when expressing deontic and prescriptive counterfactuals, and is weaker than □ . For example,

■(OA → A)

means “in every actual or hypothetical case, if you ought to do A, then do A,” whereas

□(OA → A)

means “‘You ought to do A’ logically entails ‘do A’.”

Finally, formal ethics is a higher-order logic in that it allows properties: predicates that apply to other predicates. Properties can only be applied to actions, and the imperative notation is used (e.g. FA = “action A has property F”). The only types of property that formal ethics admits are universal properties: properties that are not evaluative and do not make reference to proper names or pointer words. The following are examples of properties that are not universal properties:

• W , where WA means “Act A is wrong” (evaluative)

• G , where GA means “Act A angers God” (proper name) [1]

• I , where IA means “Act A is something I do” (pointer word)

Requiring a property to be universal, however, is different from requiring it to be morally relevant. B , where BA means “Act A is done by a black person” is a universal property, but would not be considered morally relevant to most acts in most ethical theories. Formal ethics has a definition of relevantly similar actions that imposes certain consistency constraints, but does not have a definition of morally relevant properties. The notation G ∗ A is used to mean “G is a complete description of A in universal terms”. Put another way, G is a logical conjunction of all universal properties that A has. The G ∗ A notation is the basis for the definition of exactly similar actions and is used in the definition of relevantly similar actions.

24.2 Axioms

Formal ethics has four axioms in addition to the axioms of predicate and modal logic. These axioms (with the possible exception of Rationality, see below) are largely uncontroversial within ethical theory. In natural language, the axioms might be given as follows:

• P (Prescriptivity) — “Practice what you preach”

• U (Universalizability) — “Make similar evaluations about similar cases”

• R (Rationality) — “Be consistent”

• E (Ends-Means) — “To achieve an end, do the necessary means”

Care must be taken in translating each of these natural language axioms to a symbolic representation, in order to avoid axioms that produce absurd results or contradictions. In particular, the axioms advocated by Gensler avoid “if-then” forms in favor of “don't combine” forms.

24.3 Notes

[1] “God” is a proper name if, for example, it is defined as “the god of Christianity”. If “God” is defined in another way, G might not reference a proper name. However, G might still not be a universal property if the definition of “God” is evaluative, for example, “the morally perfect being”. If the definition of “God” is nonevaluative (e.g. “the creator of the universe”), then G is a universal property. Perhaps a less contentious example would be T , where TA means “Act A angers Terry”.

24.4 Further reading

• Gensler, Harry J. Formal Ethics. ISBN 0-415-13066-2

Chapter 25

Formal proof

A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. The notion of theorem is not in general effective; therefore there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concept of natural deduction is a generalization of the concept of proof.[1]

The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulas in the proof sequence.

Formal proofs are often constructed with the help of computers in interactive theorem proving. Significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, while the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only semi-decidable, depending upon the formal system in use.
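The definition can be illustrated with a minimal proof checker. This is only a sketch under assumed conventions (tuple-encoded formulas, modus ponens as the sole inference rule), not the design of any real interactive theorem prover:

```python
# Formulas: strings are atoms; ("->", A, B) encodes the implication A -> B.

def modus_ponens(major, minor):
    # From A -> B and A, conclude B; otherwise no conclusion.
    if isinstance(major, tuple) and major[0] == "->" and major[1] == minor:
        return major[2]
    return None

def check_proof(lines, axioms, hypotheses=()):
    """Return True iff every line is an axiom, a hypothesis, or follows
    from two earlier lines by modus ponens."""
    derived = []
    for formula in lines:
        ok = formula in axioms or formula in hypotheses or any(
            modus_ponens(a, b) == formula
            for a in derived for b in derived
        )
        if not ok:
            return False
        derived.append(formula)
    return True

# Example: from hypotheses p and p -> q, the sequence ending in q is a proof.
proof = ["p", ("->", "p", "q"), "q"]
assert check_proof(proof, axioms=set(), hypotheses={"p", ("->", "p", "q")})
```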

25.1 Background

25.1.1 Formal language

Main article: Formal language

A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it – that is, before it has any meaning. Formal proofs are expressed in some formal language.

25.1.2 Formal grammar

Main articles: Formal grammar and Formation rule

A formal grammar (also called formation rules) is a precise description of the well-formed formulas of a formal language. It is synonymous with the set of strings over the alphabet of the formal language which constitute well formed formulas. However, it does not describe their semantics (i.e. what they mean).

25.1.3 Formal systems

Main article: Formal system

A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation

rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.

25.1.4 Interpretations

Main articles: Formal semantics (logic) and Interpretation (logic)

An interpretation of a formal system is the assignment of meanings to the symbols, and truth-values to the sentences of a formal system. The study of interpretations is called formal semantics. Giving an interpretation is synonymous with constructing a model.

25.2 See also

• Proof (truth)

• Mathematical proof

• Proof theory

• Axiomatic system

25.3 References

[1] The Cambridge Dictionary of Philosophy, deduction

25.4 External links

• “A Special Issue on Formal Proof”. Notices of the American Mathematical Society. December 2008.

• 2πix.com: Logic. Part of a series of articles covering mathematics and logic.

Chapter 26

Formal system

A formal system is broadly defined as any well-defined system of abstract thought based on the model of mathematics. Euclid’s Elements is often held to be the first formal system and displays the characteristics of one. The entailment of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for, or even identified with, a larger theory or field (e.g. Euclidean geometry), consistent with usage in modern mathematics such as model theory. A formal system need not be mathematical as such; for example, Spinoza’s Ethics imitates the form of Euclid’s Elements.

Each formal system has a formal language, which is composed of primitive symbols. These symbols are combined according to certain rules of formation and developed by inference from a set of axioms. The system thus consists of any number of formulas built up through finite combinations of the primitive symbols—combinations that are formed from the axioms in accordance with the stated rules.[1]

Formal systems in mathematics consist of the following elements:

1. A finite set of symbols (i.e. the alphabet), that can be used for constructing formulas (i.e. finite strings of symbols).

2. A grammar, which tells how well-formed formulas (abbreviated wff) are constructed out of the symbols in the alphabet. It is usually required that there be a decision procedure for deciding whether a formula is well formed or not.

3. A set of axioms or axiom schemata: each axiom must be a wff.

4. A set of inference rules.

A formal system is said to be recursive (i.e. effective) if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, according to context. Some theorists use the term formalism as a rough synonym for formal system, but the term is also used to refer to a particular style of notation, for example, Paul Dirac's bra–ket notation.
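The elements enumerated above (alphabet, grammar, axioms, inference rules) can be made concrete with a toy system. Hofstadter's MIU system is borrowed here as a familiar illustration; it is not discussed in this article:

```python
# A toy formal system: alphabet {M, I, U}, one axiom, and four
# string-rewriting inference rules.

AXIOM = "MI"

def successors(s):
    """All strings derivable from s in one inference step."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    out.update(s[:i] + "U" + s[i + 3:]        # rule 3: III -> U
               for i in range(len(s) - 2) if s[i:i + 3] == "III")
    out.update(s[:i] + s[i + 2:]              # rule 4: UU  -> (deleted)
               for i in range(len(s) - 1) if s[i:i + 2] == "UU")
    return out

# Theorems are exactly the strings reachable from the axiom; here we
# enumerate those reachable within three derivation steps.
theorems = {AXIOM}
for _ in range(3):
    theorems |= {t for s in theorems for t in successors(s)}

print(sorted(theorems))  # includes "MIU" and "MIIU"
```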

26.1 Related subjects

26.1.1 Logical system

A logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation, which assigns truth values to sentences of the formal language, that is, formulae that contain no free variables. A logic is sound if all sentences that can be derived are true in the interpretation, and complete if, conversely, all true sentences can be derived.


26.1.2 Deductive system

A deductive system (also called a deductive apparatus of a formal system) consists of the axioms (or axiom schemata) and rules of inference that can be used to derive the theorems of the system.[2] Such a deductive system is intended to preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth as opposed to falsehood. However, other modalities, such as justification or belief may be preserved instead. In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a syntactic consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system.

26.1.3 Formal proofs

Main article: Formal proof

Formal proofs are sequences of well-formed formulas. For a wff to qualify as part of a proof, it must either be an axiom or be the product of applying an inference rule to previous wffs in the proof sequence. The last wff in the sequence is recognized as a theorem. The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems.

Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be nothing more than ordinary natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question.

Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all wffs for which there is a proof. Thus all axioms are considered theorems. Unlike the grammar for wffs, there is no guarantee that there will be a decision procedure for deciding whether a given wff is a theorem or not. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems.
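The definition above can be made concrete with a toy proof checker. The sketch below is illustrative only: it assumes a made-up two-axiom system over strings and a single inference rule (modus ponens), none of which belong to any standard formal system.

```python
# Toy proof checker: a proof is a list of formulas (plain strings).
# Each line must be an axiom or follow from earlier lines by modus ponens.
AXIOMS = {"p", "p -> q"}  # illustrative axioms, not a standard system

def follows_by_mp(line, earlier):
    # line follows by modus ponens if, for some earlier formula a,
    # the formula "a -> line" also appears earlier in the proof
    return any(f"{a} -> {line}" in earlier for a in earlier)

def is_proof(lines):
    # every line must be an axiom or derived from the lines before it
    for i, line in enumerate(lines):
        if line not in AXIOMS and not follows_by_mp(line, lines[:i]):
            return False
    return True

print(is_proof(["p -> q", "p", "q"]))  # True: q follows by modus ponens
print(is_proof(["q"]))                 # False: q is neither axiom nor derived
```

As the article notes, this check of well-formed proofs is decidable even when theoremhood (the existence of some proof) is not.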

26.1.4 Formal language

Main article: Formal language

In mathematics, logic, and computer science, a formal language is a language that is defined by precise mathematical or machine-processable formulas. Like languages in linguistics, formal languages generally have two aspects:

• the syntax of a language is what the language looks like (more formally: the set of possible expressions that are valid utterances in the language)

• the semantics of a language are what the utterances of the language mean (which is formalized in various ways, depending on the type of language in question)

A special branch of mathematics and computer science exists that is devoted exclusively to the theory of language syntax: formal language theory. In formal language theory, a language is nothing more than its syntax; questions of semantics are not included in this specialty.

26.1.5 Formal grammar

Main article: Formal grammar

In computer science and linguistics, a formal grammar is a precise description of a formal language: a set of strings. The two main categories of formal grammar are generative grammars, which are sets of rules for how strings in a language can be generated, and analytic grammars (or reductive grammars[3][4]), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language. In short, an analytic grammar describes how to recognize when strings are members of the set, whereas a generative grammar describes how to write only those strings in the set.
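The two views of a grammar can be illustrated with the language of strings aⁿbⁿ (n ≥ 1). The Python sketch below is an illustrative example, not drawn from the cited sources: one function generates members of the language, the other analyzes a string and decides membership.

```python
# The language { a^n b^n : n >= 1 } described both ways.

def generate(max_n):
    # Generative view: rules like S -> ab | aSb, used to produce strings.
    return ["a" * n + "b" * n for n in range(1, max_n + 1)]

def recognize(s):
    # Analytic view: analyze a given string and decide membership.
    n = len(s) // 2
    return len(s) >= 2 and len(s) % 2 == 0 and s == "a" * n + "b" * n

print(generate(3))  # ['ab', 'aabb', 'aaabbb']
assert all(recognize(w) for w in generate(5))
assert not recognize("aab")
```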

26.2 See also

26.3 References

[1] Encyclopædia Britannica, Formal system definition, 2007.

[2] Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1971

[3] Reductive grammar: (computer science) A set of syntactic rules for the analysis of strings to determine whether the strings exist in a language. McGraw-Hill Dictionary of Scientific and Technical Terms (6th ed.). McGraw-Hill.

[4] “There are two classes of formal-language definition compiler-writing schemes. The productive grammar approach is the most common. A productive grammar consists primarily of a set of rules that describe a method of generating all possible strings of the language. The reductive or analytical grammar technique states a set of rules that describe a method of analyzing any string of characters and deciding whether that string is in the language.” C. Stephen Carr, David A. Luther, Sherian Erdmann, The TREE-META Compiler-Compiler System: A Meta Compiler System for the Univac 1108 and General Electric 645, University of Utah Technical Report RADC-TR-69-83 (PDF). Retrieved 5 January 2015.

26.4 Further reading

• Raymond M. Smullyan, 1961. Theory of Formal Systems: Annals of Mathematics Studies. Princeton University Press. 156 pages. ISBN 0-691-08047-X

• S. C. Kleene, 1967. Mathematical Logic. Reprinted by Dover, 2002. ISBN 0-486-42533-9

• Douglas Hofstadter, 1979. Gödel, Escher, Bach: An Eternal Golden Braid. ISBN 978-0-465-02656-2. 777 pages.

26.5 External links

• Encyclopædia Britannica, Formal system definition, 2007.

• What is a Formal System?: Some quotes from John Haugeland’s Artificial Intelligence: The Very Idea (1985), pp. 48–64.

• Peter Suber, Formal Systems and Machines: An Isomorphism, 1997.

Chapter 27

Hypothetical syllogism

In classical logic, hypothetical syllogism is a valid argument form which is a syllogism having a conditional statement for one or both of its premises.[1][2]

If I do not wake up, then I cannot go to work.
If I cannot go to work, then I will not get paid.
Therefore, if I do not wake up, then I will not get paid.

In propositional logic, hypothetical syllogism is the name of a valid rule of inference[3][4] (often abbreviated HS and sometimes also called the chain argument, chain rule, or the principle of transitivity of implication). Hypothetical syllogism is one of the rules in classical logic that is not always accepted in certain systems of non-classical logic. The rule may be stated:

P → Q, Q → R
∴ P → R

where the rule is that whenever instances of " P → Q " and " Q → R " appear on lines of a proof, " P → R " can be placed on a subsequent line. Hypothetical syllogism is closely related to disjunctive syllogism, in that it is also a type of syllogism and also the name of a rule of inference.

27.1 Formal notation

The hypothetical syllogism rule may be written in sequent notation:

(P → Q), (Q → R) ⊢ (P → R)

where ⊢ is a metalogical symbol meaning that P → R is a syntactic consequence of P → Q and Q → R in some logical system. It may also be expressed as a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ (Q → R)) → (P → R)

where P , Q , and R are propositions expressed in some formal system.
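Since the tautology above is truth-functional, it can be checked mechanically by enumerating all eight truth assignments. A brief Python sketch:

```python
from itertools import product

# Classical material implication: a -> b is false only when a is true and b false.
implies = lambda a, b: (not a) or b

# ((P -> Q) and (Q -> R)) -> (P -> R) must hold in every row of the truth table.
assert all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([True, False], repeat=3)
)
print("hypothetical syllogism is a tautology")
```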

27.2 See also

• Modus Ponens


• Modus Tollens

• Affirming the consequent

• Denying the antecedent

• Transitive relation

27.3 References

[1] Hurley

[2] Copi and Cohen

[3] Hurley

[4] Copi and Cohen

27.4 External links

• Philosophy Index: Hypothetical Syllogism

Chapter 28

List of formal systems

This is a list of formal systems, also known as logical calculi.

28.1 Mathematical

• Domain relational calculus, a calculus for the relational data model

• Functional calculus, a way to apply various types of functions to operators

• Join calculus, a theoretical model for distributed programming

• Lambda calculus, a formulation of the theory of reflexive functions that has deep connections to computational theory

• Matrix calculus, a specialized notation for multivariable calculus over spaces of matrices

• Modal μ-calculus, a common temporal logic used by formal verification methods such as model checking

• Pi-calculus, a formulation of the theory of concurrent, communicating processes that was invented by Robin Milner

• Predicate calculus, which specifies the rules of inference governing the logic of predicates

• Propositional calculus, which specifies the rules of inference governing the logic of propositions

• Refinement calculus, a way of refining models of programs into efficient programs

• Rho calculus, introduced as a general means to uniformly integrate rewriting and lambda calculus

• Tuple calculus, a calculus for the relational data model, which inspired the SQL language

• Umbral calculus, the combinatorics of certain operations on polynomials

• Vector calculus (also called vector analysis), comprising specialized notations for multivariable analysis of vectors in an inner-product space

28.2 Other formal systems

• Formal ethics

28.3 See also

• Formal system

Chapter 29

Material implication (rule of inference)

For other uses, see Material implication. Not to be confused with material inference.

In propositional logic, material implication[1][2] is a valid rule of replacement that allows a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent to not-P or Q, and that either form can replace the other in logical proofs.

P → Q ⇔ ¬P ∨ Q

where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

29.1 Formal notation

The material implication rule may be written in sequent notation:

(P → Q) ⊢ (¬P ∨ Q)

where ⊢ is a metalogical symbol meaning that ¬P ∨ Q is a syntactic consequence of P → Q in some logical system; or in rule form:

P → Q
∴ ¬P ∨ Q

where the rule is that wherever an instance of " P → Q " appears on a line of a proof, it can be replaced with " ¬P ∨ Q "; or as the statement of a truth-functional tautology or theorem of propositional logic:

(P → Q) → (¬P ∨ Q)

where P and Q are propositions expressed in some formal system.
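Because material implication is a replacement rule, the two forms should agree in every row of the truth table, not merely in one direction. The Python sketch below checks the full equivalence:

```python
from itertools import product

# Classical material implication as a truth function.
implies = lambda a, b: (not a) or b

# (P -> Q) and (not P or Q) must have the same truth value in all four rows,
# which licenses replacing either form with the other in a proof.
assert all(
    implies(p, q) == ((not p) or q)
    for p, q in product([True, False], repeat=2)
)
print("P -> Q is equivalent to not-P or Q")
```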

29.2 Example

If it is a bear, then it can swim.
Thus, it is not a bear or it can swim.

where P is the statement “it is a bear” and Q is the statement “it can swim”. If it were found that the bear could not swim, written symbolically as P ∧ ¬Q , then both sentences are false; otherwise they are both true.

29.3 References

[1] Hurley, Patrick (1991). A Concise Introduction to Logic (4th ed.). Wadsworth Publishing. pp. 364–5.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.

Chapter 30

Modus ponendo tollens

Modus ponendo tollens (Latin: “mode that by affirming, denies”)[1] is a valid rule of inference for propositional logic, sometimes abbreviated MPT.[2] It is closely related to modus ponens and modus tollens. It is usually described as having the form:

1. Not both A and B.
2. A.
3. Therefore, not B.

For example:

1. Ann and Bill cannot both win the race.
2. Ann won the race.
3. Therefore, Bill cannot have won the race.

As E. J. Lemmon describes it: “Modus ponendo tollens is the principle that, if the negation of a conjunction holds and also one of its conjuncts, then the negation of its other conjunct holds.”[3] In logic notation this can be represented as:

1. ¬(A ∧ B)
2. A
3. ∴ ¬B

Based on the Sheffer Stroke (alternative denial), "|", the inference can also be formalized in this way:

1. A | B
2. A
3. ∴ ¬B
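The validity of the rule can be checked by enumerating the four truth assignments: in every row where both premises hold, the conclusion holds. A brief Python sketch:

```python
from itertools import product

# Modus ponendo tollens: from not-(A and B) together with A, infer not-B.
# Restrict attention to rows where both premises are true and check
# that the conclusion (not B) is true in each of them.
assert all(
    not b
    for a, b in product([True, False], repeat=2)
    if not (a and b) and a
)
print("modus ponendo tollens is valid")
```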

30.1 References

[1] Stone, Jon R. 1996. Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge:60.

[2] Politzer, Guy & Carles, Laure. 2001. 'Belief Revision and Uncertain Reasoning'. Thinking and Reasoning. 7:217-234.

[3] Lemmon, Edward John. 2001. Beginning Logic. Taylor and Francis/CRC Press: 61.

Chapter 31

Modus ponens

In propositional logic, modus ponendo ponens (Latin for “the way that affirms by affirming”; often abbreviated to MP or modus ponens[1][2][3][4]) or implication elimination is a valid, simple argument form and rule of inference.[5] It can be summarized as "P implies Q; P is asserted to be true, so therefore Q must be true.” The history of modus ponens goes back to antiquity.[6]

While modus ponens is one of the most commonly used concepts in logic, it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs, alongside the “rule of definition” and the “rule of substitution”.[7] Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment.[8] Enderton, for example, observes that “modus ponens can produce shorter formulas from longer ones”,[9] and Russell observes that “the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] . . . an inference is the dropping of a true premise; it is the dissolution of an implication”.[10] A justification for the “trust in inference is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error”.[11]

In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true.[12] An example is:

If it is raining, I will meet you at the theater.
It is raining.
Therefore, I will meet you at the theater.

Modus ponens can be stated formally as:

P → Q, P
∴ Q

where the rule is that whenever an instance of "P → Q" and "P" appear by themselves on lines of a logical proof, Q can validly be placed on a subsequent line; furthermore, the premise P and the implication “dissolve”, their only trace being the symbol Q, which is retained for use later, e.g. in a more complex deduction. Modus ponens is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms, such as affirming the consequent, denying the antecedent, and evidence of absence. Constructive dilemma is the disjunctive version of modus ponens. Hypothetical syllogism is closely related to modus ponens and sometimes thought of as “double modus ponens.”

31.1 Formal notation

The modus ponens rule may be written in sequent notation:


P → Q, P ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q and P in some logical system; or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ P) → Q

where P and Q are propositions expressed in some formal system.

31.2 Explanation

The argument form has two premises (hypotheses). The first premise is the “if–then” or conditional claim, namely that P implies Q. The second premise is that P, the antecedent of the conditional claim, is true. From these two premises it can be logically concluded that Q, the consequent of the conditional claim, must be true as well. In artificial intelligence, modus ponens is often called forward chaining. An example of an argument that fits the form modus ponens:

If today is Tuesday, then John will go to work.
Today is Tuesday.
Therefore, John will go to work.

This argument is valid, but this has no bearing on whether any of the statements in the argument are actually true; for modus ponens to be a sound argument, the premises must in fact be true. An argument can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the reasoning for John’s going to work (because it is Wednesday) is unsound. The argument is only sound on Tuesdays (when John goes to work), but valid on every day of the week. A propositional argument using modus ponens is said to be deductive.

In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut, and hence that Cut is admissible. The Curry–Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q.
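The Curry–Howard reading can be sketched in any typed language. The Python sketch below, using `typing.Callable` and `TypeVar` (an illustrative choice, not tied to any source), shows function application playing the role of modus ponens:

```python
from typing import Callable, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")

def apply(f: Callable[[P], Q], x: P) -> Q:
    # Under Curry-Howard, a term f of type P -> Q is a proof of P -> Q,
    # a term x of type P is a proof of P, and f(x) is a proof of Q:
    # function application is modus ponens.
    return f(x)

length: Callable[[str], int] = len
print(apply(length, "modus"))  # prints 5
```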

31.3 Justification via truth table

The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table. In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table—the first—satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true.
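The table described above can be reproduced mechanically. A short Python sketch (row order matching the text: TT, TF, FT, FF):

```python
# Full truth table for p, q, and p -> q. The only row in which both
# premises (p and p -> q) are true is the first, and q is true there.
rows = [(p, q, (not p) or q) for p in (True, False) for q in (True, False)]

print(" p      q      p->q")
for p, q, pq in rows:
    print(f" {p!s:5}  {q!s:5}  {pq!s:5}")

premise_rows = [(p, q) for p, q, pq in rows if p and pq]
assert premise_rows == [(True, True)]  # exactly one row, with q true
```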

31.4 See also

• Condensed detachment

• What the Tortoise Said to Achilles

31.5 References

[1] Stone, Jon R. (1996). Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge: 60.

[2] Copi and Cohen

[3] Hurley

[4] Moore and Parker

[5] Enderton 2001:110

[6] Susanne Bobzien (2002). The Development of Modus Ponens in Antiquity, Phronesis 47, No. 4, 2002.

[7] Alfred Tarski 1946:47. Also Enderton 2001:110ff.

[8] Tarski 1946:47

[9] Enderton 2001:111

[10] Whitehead and Russell 1927:9

[11] Whitehead and Russell 1927:9

[12] Jago, Mark (2007). Formal Logic. Humanities-Ebooks LLP. ISBN 978-1-84760-041-7.

31.6 Sources

• Alfred Tarski, 1946. Introduction to Logic and to the Methodology of the Deductive Sciences, 2nd Edition, reprinted by Dover Publications, Mineola NY. ISBN 0-486-28462-X (pbk).

• Alfred North Whitehead and Bertrand Russell, 1927. Principia Mathematica to *56 (Second Edition), paperback edition 1962, Cambridge at the University Press, London UK. No ISBN, no LCCCN.

• Herbert B. Enderton, 2001. A Mathematical Introduction to Logic, Second Edition, Harcourt Academic Press, Burlington MA. ISBN 978-0-12-238452-3.

31.7 External links

• Hazewinkel, Michiel, ed. (2001), “Modus ponens”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Modus ponens at PhilPapers

• Modus ponens at Wolfram MathWorld

Chapter 32

Modus tollens

In propositional logic, modus tollens[1][2][3][4] (or modus tollendo tollens, also called denying the consequent)[5] (Latin for “the way that denies by denying”)[6] is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive. The first to explicitly state the argument form modus tollens were the Stoics.[7] The inference rule modus tollens, also known as the law of contrapositive, validates the inference from P implies Q and the contradictory of Q , to the contradictory of P . The modus tollens rule can be stated formally as:

P → Q, ¬Q
∴ ¬P

where P → Q stands for the statement “P implies Q” (and ¬Q → ¬P is called its “contrapositive”). ¬Q stands for “it is not the case that Q” (or in brief “not Q”). Then, whenever " P → Q " and " ¬Q " each appear by themselves as a line of a proof, " ¬P " can validly be placed on a subsequent line.

The history of the inference rule modus tollens goes back to antiquity.[8] Modus tollens is closely related to modus ponens. There are two similar, but invalid, forms of argument: affirming the consequent and denying the antecedent. See also contraposition and proof by contraposition.

32.1 Formal notation

The modus tollens rule may be written in sequent notation:

P → Q, ¬Q ⊢ ¬P

where ⊢ is a metalogical symbol meaning that ¬P is a syntactic consequence of P → Q and ¬Q in some logical system; or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ ¬Q) → ¬P

where P and Q are propositions expressed in some formal system; or including assumptions:

Γ ⊢ P → Q    Γ ⊢ ¬Q
∴ Γ ⊢ ¬P


though since the rule does not change the set of assumptions, this is not strictly necessary. More complex rewritings involving modus tollens are often seen, for instance in set theory:

P ⊆ Q, x ∉ Q
∴ x ∉ P

(“P is a subset of Q. x is not in Q. Therefore, x is not in P.”) Also in first-order predicate logic:

∀x : P(x) → Q(x), ∃x : ¬Q(x)
∴ ∃x : ¬P(x)

(“For all x, if x is P then x is Q. There exists some x that is not Q. Therefore, there exists some x that is not P.”) Strictly speaking, these are not instances of modus tollens, but they may be derived from modus tollens using a few extra steps.

32.2 Explanation

The argument has two premises. The first premise is a conditional or “if-then” statement, for example that if P then Q. The second premise is that it is not the case that Q . From these two premises, it can be logically concluded that it is not the case that P. Consider an example:

If the watch-dog detects an intruder, the watch-dog will bark.
The watch-dog did not bark.
Therefore, no intruder was detected by the watch-dog.

Supposing that the premises are both true (the dog will bark if it detects an intruder, and does indeed not bark), it follows that no intruder has been detected. This is a valid argument since it is not possible for the conclusion to be false if the premises are true. (It is conceivable that there may have been an intruder that the dog did not detect, but that does not invalidate the argument; the first premise is “if the watch-dog detects an intruder.” What matters is whether the dog detects an intruder, not whether there is one.) Another example:

If I am the axe murderer, then I can use an axe.
I cannot use an axe.
Therefore, I am not the axe murderer.

32.3 Relation to modus ponens

Every use of modus tollens can be converted to a use of modus ponens and one use of transposition to the premise which is a material implication. For example:

If P, then Q. (premise: a material implication)
If not Q, then not P. (derived by transposition)
Not Q. (premise)
Therefore, not P. (derived by modus ponens)

Likewise, every use of modus ponens can be converted to a use of modus tollens and transposition.

32.4 Justification via truth table

The validity of modus tollens can be clearly demonstrated through a truth table. In instances of modus tollens we assume as premises that p → q is true and q is false. There is only one line of the truth table—the fourth line—which satisfies these two conditions. In this line, p is false. Therefore, in every instance in which p → q is true and q is false, p must also be false.
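As with modus ponens, the table can be checked mechanically. A short Python sketch (row order matching the text: TT, TF, FT, FF, so the fourth row is p false, q false):

```python
# Truth table for p, q, and p -> q. The only row in which p -> q is true
# and q is false is the fourth, (p, q) = (False, False), and p is false there.
rows = [(p, q, (not p) or q) for p in (True, False) for q in (True, False)]

premise_rows = [(p, q) for p, q, pq in rows if pq and not q]
assert premise_rows == [(False, False)]
print("whenever p -> q holds and q fails, p fails")
```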

32.5 Formal proof

32.5.1 Via disjunctive syllogism

32.5.2 Via reductio ad absurdum

32.6 See also

• Evidence of absence

• Non sequitur

• Proof by contradiction

• Proof by contrapositive

32.7 Notes

[1] University of North Carolina, Philosophy Department, Logic Glossary. Accessed 31 October 2007.

[2] Copi and Cohen

[3] Hurley

[4] Moore and Parker

[5] Sanford, David Hawley. 2003. If P, Then Q: Conditionals and the Foundations of Reasoning. London, UK: Routledge: 39 "[Modus] tollens is always an abbreviation for modus tollendo tollens, the mood that by denying denies.”

[6] Stone, Jon R. 1996. Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge: 60.

[7] “Stanford Encyclopedia of Philosophy: Ancient Logic: The Stoics"

[8] Susanne Bobzien (2002). “The Development of Modus Ponens in Antiquity”, Phronesis 47.

32.8 External link

• Modus Tollens at Wolfram MathWorld

Chapter 33

Negation introduction

Negation introduction is a rule of inference, or transformation rule, in the field of propositional calculus. Negation introduction states that if a given antecedent implies both the consequent and its complement, then the antecedent is a contradiction.[1][2]

33.1 Formal notation

This can be written as:

(P → Q) ∧ (P → ¬Q) ↔ ¬P

An example of its use would be an attempt to prove two contradictory statements from a single fact. For example, if a person were to state “When the phone rings I get happy” and then later state “When the phone rings I get annoyed”, the logical inference which is made from this contradictory information is that the person is making a false statement about the phone ringing.
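The biconditional can be verified by enumerating all four truth assignments. A short Python sketch:

```python
# Classical material implication as a truth function.
implies = lambda a, b: (not a) or b

# Negation introduction: ((P -> Q) and (P -> not Q)) <-> not P
# must hold under every assignment to P and Q.
assert all(
    (implies(p, q) and implies(p, not q)) == (not p)
    for p in (True, False)
    for q in (True, False)
)
print("negation introduction holds as a biconditional")
```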

33.2 External links

• Category:Propositional Calculus on ProofWiki (GFDLed)

33.3 References

[1] Wansing (Ed.), Heinrich (1996). Negation: A notion in focus. Berlin: Walter de Gruyter. ISBN 3110147696.

[2] Haegeman, Liliane (30 Mar 1995). The Syntax of Negation. Cambridge: Cambridge University Press. p. 70. ISBN 0521464927.

99 Chapter 34

Physical symbol system

See also: Philosophy of artificial intelligence and Data system

A physical symbol system (also called a formal system) takes physical patterns (symbols), combines them into structures (expressions), and manipulates them (using processes) to produce new expressions. The physical symbol system hypothesis (PSSH) is a position in the philosophy of artificial intelligence formulated by Allen Newell and Herbert A. Simon. They wrote:

“A physical symbol system has the necessary and sufficient means for general intelligent action.”[1] — Allen Newell and Herbert A. Simon

This claim implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[2] The idea has philosophical roots in Hobbes (who claimed reasoning was “nothing more than reckoning”), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to “atomic impressions”) and even Kant (who analyzed all experience as controlled by formal rules).[3] The latest version is called the computational theory of mind, associated with philosophers Hilary Putnam and Jerry Fodor.[4]

The hypothesis has been criticized strongly by various parties, but is a core part of AI research. A common critical view is that the hypothesis seems appropriate for higher-level intelligence such as playing chess, but less appropriate for commonplace intelligence such as vision. A distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world and the more complex “symbols” that are present in a machine like a neural network.

34.1 Examples of physical symbol systems

Examples of physical symbol systems include:

• Formal logic: the symbols are words like “and”, “or”, “not”, “for all x” and so on. The expressions are statements in formal logic which can be true or false. The processes are the rules of logical deduction.

• Algebra: the symbols are "+", "×", "x", "y", “1”, “2”, “3”, etc. The expressions are equations. The processes are the rules of algebra, that allow one to manipulate a mathematical expression and retain its truth.

• A digital computer: the symbols are zeros and ones of computer memory, the processes are the operations of the CPU that change memory.

• Chess: the symbols are the pieces, the processes are the legal chess moves, the expressions are the positions of all the pieces on the board.

The physical symbol system hypothesis claims that both of the following are also examples of physical symbol systems:


• Intelligent human thought: the symbols are encoded in our brains. The expressions are thoughts. The processes are the mental operations of thinking.

• A running artificial intelligence program: The symbols are data. The expressions are more data. The processes are programs that manipulate the data.

34.2 Arguments in favor of the physical symbol system hypothesis

34.2.1 Newell and Simon

Two lines of evidence suggested to Allen Newell and Herbert A. Simon that “symbol manipulation” was the essence of both human and machine intelligence: the development of artificial intelligence programs and psychological experiments on human beings.

First, in the early decades of AI research there were a number of very successful programs that used high level symbol processing, such as Newell and Herbert A. Simon's General Problem Solver or Terry Winograd's SHRDLU.[5] John Haugeland named this kind of AI research “Good Old Fashioned AI” or GOFAI.[6] Expert systems and logic programming are descendants of this tradition. The success of these programs suggested that symbol processing systems could simulate any intelligent action.

And second, psychological experiments carried out at the same time found that, for difficult problems in logic, planning or any kind of “puzzle solving”, people used this kind of symbol processing as well. AI researchers were able to simulate the step by step problem solving skills of people with computer programs. This collaboration and the issues it raised eventually would lead to the creation of the field of cognitive science.[7] (This type of research was called "cognitive simulation.”) This line of research suggested that human problem solving consisted primarily of the manipulation of high level symbols.

34.2.2 Turing completeness

In Newell and Simon’s arguments, the “symbols” that the hypothesis refers to are physical objects that represent things in the world: symbols that have a recognizable meaning or denotation and can be composed with other symbols to create more complex symbols.

However, it is also possible to interpret the hypothesis as referring to the simple abstract 0s and 1s in the memory of a digital computer or the stream of 0s and 1s passing through the perceptual apparatus of a robot. These are, in some sense, symbols as well, although it is not always possible to determine exactly what the symbols are standing for. In this version of the hypothesis, no distinction is being made between “symbols” and “signals”, as David Touretzky and Dean Pomerleau explain.[8]

Under this interpretation, the physical symbol system hypothesis asserts merely that intelligence can be digitized. This is a weaker claim. Indeed, Touretzky and Pomerleau write that if symbols and signals are the same thing, then "[s]ufficiency is a given, unless one is a dualist or some other sort of mystic, because physical symbol systems are Turing-universal.”[8] The widely accepted Church–Turing thesis holds that any Turing-universal system can simulate any conceivable process that can be digitized, given enough time and memory. Since any digital computer is Turing-universal, any digital computer can, in theory, simulate anything that can be digitized to a sufficient level of precision, including the behavior of intelligent organisms. The necessary condition of the physical symbol system hypothesis can likewise be finessed, since we are willing to accept almost any signal as a form of “symbol”, and all intelligent biological systems have signal pathways.

34.3 Criticism

Nils Nilsson has identified four main “themes” or grounds in which the physical symbol system hypothesis has been attacked.[2]

1. The “erroneous claim that the [physical symbol system hypothesis] lacks symbol grounding”, which is presumed to be a requirement for general intelligent action.

2. The common belief that AI requires non-symbolic processing (that which can be supplied by a connectionist architecture for instance).

3. The common statement that the brain is simply not a computer and that “computation, as it is currently understood, does not provide an appropriate model for intelligence”.

4. And, last of all, the belief held by some that the brain is essentially mindless, that most of what takes place in it is chemical reactions, and that human intelligent behaviour is analogous to the intelligent behaviour displayed, for example, by ant colonies.

34.3.1 Dreyfus and the primacy of unconscious skills

Main article: Dreyfus’ critique of artificial intelligence

Hubert Dreyfus attacked the necessary condition of the physical symbol system hypothesis, calling it “the psychological assumption” and defining it thus:

• The mind can be viewed as a device operating on bits of information according to formal rules.[9]

Dreyfus refuted this by showing that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation. Experts solve problems quickly by using their intuitions, rather than step-by-step trial and error searches. Dreyfus argued that these unconscious skills would never be captured in formal rules.[10]

34.3.2 Searle and his Chinese room

Main article: Chinese room

John Searle's Chinese room argument, presented in 1980, attempted to show that a program (or any physical symbol system) could not be said to “understand” the symbols that it uses; that the symbols have no meaning for the machine, and so the machine can never be truly intelligent.[11]

34.3.3 Brooks and the roboticists

Main articles: Artificial intelligence, situated approach and Moravec’s paradox

In the sixties and seventies, several laboratories attempted to build robots that used symbols to represent the world and plan actions (such as the Stanford Cart). These projects had limited success. In the middle eighties, Rodney Brooks of MIT was able to build robots that had superior ability to move and survive without the use of symbolic reasoning at all. Brooks (and others, such as Hans Moravec) discovered that our most basic skills of motion, survival, perception, balance and so on did not seem to require high level symbols at all, that in fact, the use of high level symbols was more complicated and less successful. In a 1990 paper Elephants Don't Play Chess, robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[12]

34.3.4 Connectionism

Main article: Connectionism

34.3.5 Embodied philosophy

Main article: embodied philosophy

George Lakoff, Mark Turner and others have argued that our abstract skills in areas such as mathematics, ethics and philosophy depend on unconscious skills that derive from the body, and that conscious symbol manipulation is only a small part of our intelligence.

34.4 See also

• Artificial intelligence, situated approach

34.5 Notes

[1] Newell & Simon 1976, p. 116 and Russell & Norvig 2003, p. 18

[2] Nilsson 2007, p. 1

[3] Dreyfus 1979, p. 156, Haugeland, pp. 15–44

[4] Horst 2005

[5] Dreyfus 1979, pp. 130–148

[6] Haugeland 1985, p. 112

[7] Dreyfus 1979, pp. 91–129, 170–174

[8] Touretzky, David S.; Pomerleau, Dean A. (1994). “Reconstructing Physical Symbol Systems”. Cognitive Science 18 (2): 345–353. Computer Science Department, Carnegie Mellon University. http://www.cs.cmu.edu/~dst/pubs/simon-reply-www.ps.gz

[9] Dreyfus 1979, p. 156

[10] Dreyfus 1972, Dreyfus 1979, Dreyfus & Dreyfus 1986. See also Russell & Norvig 2003, pp. 950–952, Crevier 1993, pp. 120–132 and Hearn 2007, pp. 50–51

[11] Searle 1980, Crevier 1993, pp. 269–271

[12] Brooks 1990, p. 3

34.6 References

• Brooks, Rodney (1990), “Elephants Don't Play Chess” (PDF), Robotics and Autonomous Systems 6 (1–2): 3–15, doi:10.1016/S0921-8890(05)80025-9, retrieved 2007-08-30.

• Cole, David (Fall 2004), “The Chinese Room Argument”, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy.

• Crevier, Daniel (1993), AI: The Tumultuous History of the Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3

• Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 0-06-011082-1

• Dreyfus, Hubert (1979), What Computers Still Can't Do, New York: MIT Press.

• Dreyfus, Hubert; Dreyfus, Stuart (1986), Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, Oxford, U.K.: Blackwell

• Gladwell, Malcolm (2005), Blink: The Power of Thinking Without Thinking, Boston: Little, Brown, ISBN 0-316-17232-4.

• Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass.: MIT Press.

• Hobbes (1651), Leviathan.

• Horst, Steven (Fall 2005), “The Computational Theory of Mind”, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy.

• Kurzweil, Ray (2005), The Singularity is Near, New York: Viking Press, ISBN 0-670-03384-7.

• McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

• Newell, Allen; Simon, H. A. (1963), “GPS: A Program that Simulates Human Thought”, in Feigenbaum, E.A.; Feldman, J., Computers and Thought, New York: McGraw-Hill

• Newell, Allen; Simon, H. A. (1976), “Computer Science as Empirical Inquiry: Symbols and Search”, Communications of the ACM 19 (3): 113–126, doi:10.1145/360018.360022

• Nilsson, Nils (2007), “The Physical Symbol System Hypothesis: Status and Prospects” (PDF), in Lungarella, M., ed., 50 Years of AI, Festschrift, LNAI 4850, Springer: 9–17

• Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2

• Searle, John (1980), “Minds, Brains and Programs”, Behavioral and Brain Sciences 3 (3): 417–457, doi:10.1017/S0140525X00005756

• Turing, Alan (October 1950), “Computing machinery and intelligence”, Mind LIX (236): 433–460, doi:10.1093/mind/LIX.236.433

Chapter 35

Predicate logic

For the specific term, see First-order logic.

In mathematical logic, predicate logic is the generic term for symbolic formal systems like first-order logic, second-order logic, many-sorted logic, or infinitary logic. This formal system is distinguished from other systems in that its formulae contain variables which can be quantified. Two common quantifiers are the existential ∃ (“there exists”) and universal ∀ (“for all”) quantifiers. The variables could be elements in the universe under discussion, or perhaps relations or functions over that universe. For instance, an existential quantifier over a function symbol would be interpreted as the modifier “there is a function”. The foundations of predicate logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[1] In informal usage, the term “predicate logic” occasionally refers to first-order logic. Some authors consider the predicate calculus to be an axiomatized form of predicate logic, and the predicate logic to be derived from an informal, more intuitive development.[2] Predicate logics also include logics mixing modal operators and quantifiers. See Modal logic, Saul Kripke, Barcan Marcus formulae, A. N. Prior, and Nicholas Rescher.
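As an illustration, quantified statements over a finite universe can be evaluated by brute force. The sketch below is not part of any standard library; the helper names `forall` and `exists` are ad hoc choices for checking universal and existential claims over the integers 1 to 10:

```python
# Evaluate quantified statements by brute force over a finite universe.
universe = range(1, 11)  # domain of discourse: the integers 1..10

def forall(pred, domain):
    """Universal quantifier (for all): pred holds for every element."""
    return all(pred(x) for x in domain)

def exists(pred, domain):
    """Existential quantifier (there exists): pred holds for some element."""
    return any(pred(x) for x in domain)

print(forall(lambda x: x > 0, universe))       # True:  every x exceeds 0
print(exists(lambda x: x > 9, universe))       # True:  witnessed by x = 10
print(forall(lambda x: x % 2 == 0, universe))  # False: not every x is even
```

Over an infinite universe no such exhaustive check is possible, which is one reason predicate logic needs proof rules rather than enumeration.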

35.1 See also

• First-order logic

• Propositional logic

• Existential graph

35.2 Footnotes

[1] Eric M. Hammer: Semantics for Existential Graphs, Journal of Philosophical Logic, Volume 27, Issue 5 (October 1998), page 489: “Development of first-order logic independently of Frege, anticipating prenex and Skolem normal forms”

[2] Among these authors is Stolyar, p. 166. Hamilton considers both to be calculi but divides them into an informal calculus and a formal calculus.

35.3 References

• A. G. Hamilton 1978, Logic for Mathematicians, Cambridge University Press, Cambridge UK ISBN 0-521-21838-1

• Abram Aronovic Stolyar 1970, Introduction to Elementary Mathematical Logic, Dover Publications, Inc. NY. ISBN 0-486-64561-4


• George F Luger, Artificial Intelligence, Pearson Education, ISBN 978-81-317-2327-2

• Hazewinkel, Michiel, ed. (2001), “Predicate calculus”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 36

Proof (truth)

For other uses, see Proof.

A proof is sufficient evidence or an argument for the truth of a proposition.[1][2][3][4] The concept is applied in a variety of disciplines, with both the nature of the evidence or justification and the criteria for sufficiency being area-dependent. In the area of oral and written communication such as conversation, dialog, rhetoric, etc., a proof is a persuasive perlocutionary speech act, which demonstrates the truth of a proposition.[5] In any area of mathematics defined by its assumptions or axioms, a proof is an argument establishing a theorem of that area via accepted rules of inference starting from those axioms and other previously established theorems.[6] The subject of logic, in particular proof theory, formalizes and studies the notion of formal proof.[7] In the areas of epistemology and theology, the notion of justification plays approximately the role of proof,[8] while in jurisprudence the corresponding term is evidence,[9] with burden of proof as a concept common to both philosophy and law.

36.1 On proof

In most disciplines, evidence is required to prove something. Evidence is drawn from experience of the world around us, with science obtaining its evidence from nature,[10] law obtaining its evidence from witnesses and forensic investigation,[11] and so on. A notable exception is mathematics, whose proofs are drawn from a mathematical world begun with axioms and further developed and enriched by theorems proved earlier.

Exactly what evidence is sufficient to prove something is also strongly area-dependent, usually with no absolute threshold of sufficiency at which evidence becomes proof.[12][13][14] In law, the same evidence that may convince one jury may not persuade another. Formal proof provides the main exception, where the criteria for proofhood are ironclad and it is impermissible to defend any step in the reasoning as “obvious";[15] for a well-formed formula to qualify as part of a formal proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence.[16]

Proofs have been presented since antiquity. Aristotle used the observation that patterns of nature never display the machine-like uniformity of determinism as proof that chance is an inherent part of nature.[17] On the other hand, Thomas Aquinas used the observation of the existence of rich patterns in nature as proof that nature is not ruled by chance.[18] Proofs need not be verbal. Before Galileo, people took the apparent motion of the Sun across the sky as proof that the Sun went round the Earth.[19] Suitably incriminating evidence left at the scene of a crime may serve as proof of the identity of the perpetrator. Conversely, a verbal entity need not assert a proposition to constitute a proof of that proposition.
For example, a signature constitutes direct proof of authorship; less directly, handwriting analysis may be submitted as proof of authorship of a document.[20] Privileged information in a document can serve as proof that the document’s author had access to that information; such access might in turn establish the location of the author at certain time, which might then provide the author with an alibi.


36.2 See also

• Mathematical proof

• Proof theory

• Proof of concept

• Provability logic

• Evidence, information which tends to determine or demonstrate the truth of a proposition

• Proof procedure

• Proof complexity

• Standard of proof

36.3 References

[1] Proof and other dilemmas: mathematics and philosophy by Bonnie Gold, Roger A. Simons 2008 ISBN 0883855674 pages 12–20

[2] Philosophical Papers, Volume 2 by Imre Lakatos, John Worrall, Gregory Currie 1980 ISBN 0521280303 pages 60–63

[3] Evidence, proof, and facts: a book of sources by Peter Murphy 2003 ISBN 0199261954 pages 1–2

[4] Logic in Theology – And Other Essays by Isaac Taylor 2010 ISBN 1445530139 pages 5–15

[5] John Langshaw Austin: How to Do Things With Words. Cambridge (Mass.) 1962 – Paperback: Harvard University Press, 2nd edition, 2005, ISBN 0-674-41152-8.

[6] Cupillari, Antonella. The Nuts and Bolts of Proofs. Academic Press, 2001. Page 3.

[7] Alfred Tarski, Introduction to Logic and to the Methodology of the Deductive Sciences (ed. Jan Tarski). 4th Edition. Oxford Logic Guides, No. 24. New York and Oxford: Oxford University Press, 1994, xxiv + 229 pp. ISBN 0-19-504472-X

[8] http://plato.stanford.edu/entries/justep-foundational/

[9] http://dictionary.reference.com/browse/proof

[10] Reference Manual on Scientific Evidence, 2nd Ed. (2000), p. 71. Accessed May 13, 2007.

[11] John Henry Wigmore, A Treatise on the System of Evidence in Trials at Common Law, 2nd ed., Little, Brown, and Co., Boston, 1915

[12] Simon, Rita James, and Mahan, Linda. (1971). “Quantifying Burdens of Proof—A View from the Bench, the Jury, and the Classroom”. Law and Society Review 5 (3): 319–330. doi:10.2307/3052837. JSTOR 3052837.

[13] Katie Evans, David Osthus, Ryan G. Spurrier. “Distributions of Interest for Quantifying Reasonable Doubt and Their Applications” (PDF). Retrieved 2007-01-14.

[14] The Principle of Sufficient Reason: A Reassessment by Alexander R. Pruss

[15] A. S. Troelstra, H. Schwichtenberg (1996). Basic Proof Theory. In series Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, ISBN 0-521-77911-1.

[16] Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1971

[17] Aristotle’s Physics: a Guided Study, Joe Sachs, 1995 ISBN 0813521920 p. 70

[18] The treatise on the divine nature: Summa theologiae I, 1–13, by Saint Thomas Aquinas, Brian J. Shanley, 2006 ISBN 0872208052 p. 198

[19] Thomas S. Kuhn, The Copernican Revolution, pp. 5–20

[20] Trial tactics by Stephen A. Saltzburg, 2007 ISBN 159031767X page 47

Chapter 37

Rule of inference

In logic, a rule of inference, inference rule, or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference called modus ponens takes two premises, one in the form “If p then q” and another in the form “p”, and returns the conclusion “q”. The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion. Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general designation. But a rule of inference’s action is purely syntactic, and does not need to preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive are important; i.e. rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.[1] Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers.

37.1 The standard form of rules of inference

In formal logic (and many related areas), rules of inference are usually given in the following standard form:

  Premise#1
  Premise#2
  ...
  Premise#n
  Conclusion

This expression states that whenever in the course of some logical derivation the given premises have been obtained, the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical formulae, such as in:

  A → B
  A
  B

This is the modus ponens rule of propositional logic. Rules of inference are often formulated as schemata employing metavariables.[2] In the rule (schema) above, the metavariables A and B can be instantiated to any element of the universe (or sometimes, by convention, a restricted subset such as propositions) to form an infinite set of inference rules.

A proof system is formed from a set of rules chained together to form proofs, also called derivations. Any derivation has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation, then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds.”
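Because a rule of inference acts purely on syntax, modus ponens can be rendered as a function on formula trees. A minimal Python sketch, with the tuple encoding of formulas being an arbitrary representation chosen for illustration:

```python
# Formulas as nested tuples: ("->", A, B) encodes the implication A -> B.

def modus_ponens(implication, antecedent):
    """Apply modus ponens: from A -> B and A, return B.

    The rule is purely syntactic: it only checks that the premises
    have the required form; it never consults what the formulas mean."""
    if (isinstance(implication, tuple) and len(implication) == 3
            and implication[0] == "->" and implication[1] == antecedent):
        return implication[2]
    raise ValueError("premises do not fit the modus ponens schema")

# From "if it rains, the street is wet" and "it rains",
# infer "the street is wet".
print(modus_ponens(("->", "rain", "wet"), "rain"))  # wet
```

Instantiating the metavariables A and B with concrete formulas, as done in the example call, is exactly the schema-to-instance step described above.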


37.2 Axiom schemas and axioms

Inference rules may also be stated in this form: (1) zero or more premises, (2) a turnstile symbol ⊢ , which means “infers”, “proves”, or “concludes”, and (3) a conclusion. This form usually embodies the relational (as opposed to functional) view of a rule of inference, where the turnstile stands for a deducibility relation holding between premises and conclusion. An inference rule containing no premises is called an axiom schema or, if it contains no metavariables, simply an axiom.[2] Rules of inference must be distinguished from axioms of a theory. In terms of semantics, axioms are valid assertions. Axioms are usually regarded as starting points for applying rules of inference and generating a set of conclusions. Or, in less technical terms: Rules are statements about the system, axioms are statements in the system. For example:

• The rule that from ⊢ p you can infer ⊢ Provable(p) is a statement that says if you've proven p , then it is provable that p is provable. This rule holds in Peano arithmetic, for example.

• The axiom p → Provable(p) would mean that every true statement is provable. This axiom does not hold in Peano arithmetic.

Rules of inference play a vital role in the specification of logical calculi as they are considered in proof theory, such as the sequent calculus and natural deduction.

37.3 Example: Hilbert systems for two propositional logics

In a Hilbert system, the premises and conclusion of the inference rules are simply formulae of some language, usually employing metavariables. For graphical compactness of the presentation and to emphasize the distinction between axioms and rules of inference, this section uses the sequent notation (⊢) instead of a vertical presentation of rules.

The formal language for classical propositional logic can be expressed using just negation (¬), implication (→) and propositional symbols. A well-known axiomatization, comprising three axiom schemas and one inference rule (modus ponens), is:

(CA1) ⊢ A → (B → A)
(CA2) ⊢ (A → (B → C)) → ((A → B) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(MP) A, A → B ⊢ B

It may seem redundant to have two notions of inference in this case, ⊢ and →. In classical propositional logic, they indeed coincide; the deduction theorem states that A ⊢ B if and only if ⊢ A → B. There is however a distinction worth emphasizing even in this case: the first notation describes a deduction, that is an activity of passing from sentences to sentences, whereas A → B is simply a formula made with a logical connective, implication in this case. Without an inference rule (like modus ponens in this case), there is no deduction or inference. This point is illustrated in Lewis Carroll's dialogue called "What the Tortoise Said to Achilles".[3]

For some non-classical logics, the deduction theorem does not hold. For example, the three-valued logic Ł3 of Łukasiewicz can be axiomatized as:[4]

(CA1) ⊢ A → (B → A)
(LA2) ⊢ (A → B) → ((B → C) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(LA4) ⊢ ((A → ¬A) → A) → A
(MP) A, A → B ⊢ B

This sequence differs from classical logic by the change in axiom 2 and the addition of axiom 4. The classical deduction theorem does not hold for this logic, however a modified form does hold, namely A ⊢ B if and only if ⊢ A → (A → B).[5]
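That the three classical axiom schemas are tautologies can be confirmed by exhausting the two-valued truth table; a quick sketch, with the helper `implies` standing for material implication (this check applies to the classical system only, not to the three-valued Ł3):

```python
from itertools import product

def implies(a, b):
    """Material implication: A -> B is false only when A is true and B false."""
    return (not a) or b

# Exhaust the truth table for each classical axiom schema.
ca1 = all(implies(a, implies(b, a))
          for a, b in product([False, True], repeat=2))
ca2 = all(implies(implies(a, implies(b, c)),
                  implies(implies(a, b), implies(a, c)))
          for a, b, c in product([False, True], repeat=3))
ca3 = all(implies(implies(not a, not b), implies(b, a))
          for a, b in product([False, True], repeat=2))
print(ca1, ca2, ca3)  # True True True
```

Checking validity this way is a semantic fact about the schemas; it is distinct from deriving a formula inside the Hilbert system, which uses only the axioms and modus ponens.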

37.4 Admissibility and derivability

Main article: Admissible rule

In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference, consider the following set of rules for defining the natural numbers (the judgment n nat asserts the fact that n is a natural number):

⊢ 0 nat

n nat ⊢ s(n) nat

The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof system, the following rule, demonstrating that the second successor of a natural number is also a natural number, is derivable:

n nat ⊢ s(s(n)) nat

Its derivation is the composition of two uses of the successor rule above. The following rule for asserting the existence of a predecessor for any nonzero number is merely admissible:

s(n) nat ⊢ n nat

This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, assume a derivation of the premise and induct on it to produce a derivation of n nat.) However, it is not derivable, because it depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to the proof system:

⊢ s(−3) nat

In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no longer admissible, because there is no way to derive −3 nat . The brittleness of admissibility comes from the way it is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add new cases to this proof, which may no longer hold. Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut elimination holds, the cut rule is admissible.
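The distinction can be sketched in code. In the toy checker below (the tuple encoding of terms is a hypothetical choice for illustration), the double-successor rule is implemented by composing the successor rule with itself, while the predecessor rule works by inspecting the structure of its premise, which is exactly why it stops being admissible once a rule like ⊢ s(−3) nat is added:

```python
# Terms: 0 is zero, ("s", t) is the successor of t.

def is_nat(t):
    """Direct reading of the two rules: 0 nat, and from n nat infer s(n) nat."""
    if t == 0:
        return True
    if isinstance(t, tuple) and len(t) == 2 and t[0] == "s":
        return is_nat(t[1])
    return False

def double_successor(n):
    """Derivable rule: simply compose the successor rule with itself."""
    return ("s", ("s", n))

def predecessor(t):
    """Admissible rule: from evidence that s(n) nat, recover n.

    It inspects the *structure* of the premise, which is why it would
    break if a new axiom such as s(-3) nat were added to the system."""
    if isinstance(t, tuple) and len(t) == 2 and t[0] == "s" and is_nat(t[1]):
        return t[1]
    raise ValueError("premise is not of the form s(n) nat")

two = double_successor(0)            # s(s(0))
print(is_nat(two))                   # True
print(predecessor(two) == ("s", 0))  # True
```

`double_successor` stays correct under any extension of the system, mirroring the stability of derivable rules; `predecessor` is only as strong as the case analysis inside it.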

37.5 See also

• Inference objection

• Immediate inference

• Law of thought

• List of rules of inference

• structural rule

37.6 References

[1] Boolos, George; Burgess, John; Jeffrey, Richard C. (2007). Computability and logic. Cambridge: Cambridge University Press. p. 364. ISBN 0-521-87752-0.

[2] John C. Reynolds (2009) [1998]. Theories of Programming Languages. Cambridge University Press. p. 12. ISBN 978-0- 521-10697-9.

[3] Kosta Dosen (1996). “Logical consequence: a turn in style”. In Maria Luisa Dalla Chiara, Kees Doets, Daniele Mundici, Johan van Benthem. Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995. Springer. p. 290. ISBN 978-0-7923-4383-7. preprint (with different pagination)

[4] Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems. Cambridge University Press. p. 100. ISBN 978-0-521-88128-9.

[5] Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems. Cambridge University Press. p. 114. ISBN 978-0-521-88128-9.

Chapter 38

Rule of replacement

In logic, a rule of replacement[1][2][3] is a transformation rule that may be applied to only a particular segment of an expression. A logical system may be constructed so that it uses either axioms, rules of inference, or both as transformation rules for logical expressions in the system. Whereas a rule of inference is always applied to a whole logical expression, a rule of replacement may be applied to only a particular segment. Within the context of a logical proof, logically equivalent expressions may replace each other. Rules of replacement are used in propositional logic to manipulate propositions. Common rules of replacement include de Morgan’s laws, commutation, association, distribution, double negation,[4] transposition, material implication, material equivalence, exportation, and tautology.
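The contrast with a rule of inference can be illustrated in code: a replacement rule such as double negation may fire on any subformula, not only on the whole expression. A minimal sketch, with formulas encoded as nested tuples (an arbitrary representation chosen for illustration):

```python
# Formulas as nested tuples: ("not", p), ("and", p, q), ("->", p, q).

def remove_double_negation(f):
    """Rule of replacement: rewrite not-not-X to X anywhere inside a
    formula, not just when the whole formula has that shape."""
    if isinstance(f, tuple):
        if f[0] == "not" and isinstance(f[1], tuple) and f[1][0] == "not":
            return remove_double_negation(f[1][1])
        # Recurse into every subformula of a compound formula.
        return (f[0],) + tuple(remove_double_negation(x) for x in f[1:])
    return f

# not-not-P -> (Q and not-not-R)  becomes  P -> (Q and R):
# the replacement fires on two separate segments of one expression.
f = ("->", ("not", ("not", "P")), ("and", "Q", ("not", ("not", "R"))))
print(remove_double_negation(f))  # ('->', 'P', ('and', 'Q', 'R'))
```

A rule of inference like modus ponens, by contrast, would only be applied to the expression as a whole, never to an embedded segment of it.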

38.1 References

[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[3] Moore and Parker

[4] not admitted in intuitionistic logic

Chapter 39

Tautology (rule of inference)

In propositional logic, tautology is one of two commonly used rules of replacement.[1][2][3] The rules are used to eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are: The principle of idempotency of disjunction:

P ∨ P ⇔ P

and the principle of idempotency of conjunction:

P ∧ P ⇔ P

Where " ⇔ " is a metalogical symbol representing “can be replaced in a logical proof with.”

39.1 Relation to tautology

The rule gets its name from the fact that the concept of the rule is the same as the tautologous statements “if p and p, then p” and “if p or p, then p”. This type of tautology is called idempotency. Although the rule is the expression of a particular tautology, this is a bit misleading, as every rule of inference can be expressed as a tautology and vice versa.

39.2 Formal notation

Theorems are those logical formulas ϕ where ⊢ ϕ is the conclusion of a valid proof,[4] while the equivalent semantic consequence |= ϕ indicates a tautology. The tautology rule may be expressed as a sequent:

P ∨ P ⊢ P

and

P ∧ P ⊢ P

where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∨ P, in the one case, P ∧ P in the other, in some logical system; or as a rule of inference:


P ∨ P ∴ P

and

P ∧ P ∴ P

where the rule is that wherever an instance of " P ∨ P " or " P ∧ P " appears on a line of a proof, it can be replaced with " P "; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P ∨ P) → P

and

(P ∧ P) → P

where P is a proposition expressed in some formal system.
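Both replacement directions rest on idempotency holding for every valuation of P, which a two-row truth-table check confirms directly:

```python
# Check the two idempotency equivalences on every valuation of P.

def idempotent_or(p):
    return p or p    # P or P

def idempotent_and(p):
    return p and p   # P and P

for p in (False, True):
    assert idempotent_or(p) == p   # P or P  is equivalent to P
    assert idempotent_and(p) == p  # P and P is equivalent to P
print("idempotency holds for both truth values of P")
```

Because the equivalence holds row by row, replacing one side with the other inside a larger formula can never change that formula's truth value.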

39.3 References

[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 364–5.

[2] Copi and Cohen

[3] Moore and Parker

[4] Logic in Computer Science, p. 13

Chapter 40

Transposition (logic)

In propositional logic, transposition[1][2][3] is a valid rule of replacement that permits one to switch the antecedent with the consequent of a conditional statement in a logical proof if they are also both negated. It is the inference from the truth of "A implies B" to the truth of “Not-B implies not-A", and conversely.[4][5] It is very closely related to the rule of inference modus tollens. It is the rule that:

(P → Q) ⇔ (¬ Q → ¬ P)

Where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

40.1 Formal notation

The transposition rule may be expressed as a sequent:

(P → Q) ⊢ (¬Q → ¬P )

where ⊢ is a metalogical symbol meaning that (¬Q → ¬P ) is a syntactic consequence of (P → Q) in some logical system; or as a rule of inference:

P → Q ∴ ¬Q → ¬P

where the rule is that wherever an instance of " P → Q " appears on a line of a proof, it can be replaced with " ¬Q → ¬P "; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P → Q) → (¬Q → ¬P )

where P and Q are propositions expressed in some formal system.
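A truth-table check confirms that a conditional agrees with its contrapositive on every valuation, while the converse Q → P does not, which is why only the former is a valid replacement:

```python
from itertools import product

def implies(a, b):
    """Material implication: A -> B is false only when A is true, B false."""
    return (not a) or b

# Contrapositive: agrees with P -> Q on every row of the truth table.
contrapositive_ok = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([False, True], repeat=2))

# Converse: does NOT agree, so it is not a valid replacement.
converse_ok = all(
    implies(p, q) == implies(q, p)
    for p, q in product([False, True], repeat=2))

print(contrapositive_ok)  # True
print(converse_ok)        # False (fails when P is false and Q is true)
```

The failing row for the converse (P false, Q true) is precisely the situation exploited by the fallacies of affirming the consequent and denying the antecedent.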

40.2 Traditional logic

116 40.2. TRADITIONAL LOGIC 117

40.2.1 Form of transposition

In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol for material implication signifies the proposition as a hypothetical, or the “if-then” form, e.g. “if P then Q”. The biconditional statement of the rule of transposition (↔) refers to the relation between hypothetical (→) propositions, with each proposition including an antecedent and consequent term. As a matter of logical inference, to transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both sides of the biconditional relationship. Meaning, to transpose or convert (P → Q) to (Q → P) requires that the other proposition, (~Q → ~P), be transposed or converted to (~P → ~Q). Otherwise, to convert the terms of one proposition and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent or affirming the consequent by means of illicit conversion. The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition in logic.

40.2.2 Sufficient condition

In the proposition “If P then Q”, the occurrence of 'P' is sufficient reason for the occurrence of 'Q'. 'P', as an individual or a class, materially implicates 'Q', but the relation of 'Q' to 'P' is such that the converse proposition “If Q then P” does not necessarily have sufficient condition. The rule of inference for sufficient condition is modus ponens, which is an argument for conditional implication:

Premise (1): If P, then Q
Premise (2): P
Conclusion: Therefore, Q

40.2.3 Necessary condition

Since the converse of premise (1) is not valid, all that can be stated of the relationship of 'P' and 'Q' is that in the absence of 'Q', 'P' does not occur, meaning that 'Q' is the necessary condition for 'P'. The rule of inference for necessary condition is modus tollens:

Premise (1): If P, then Q
Premise (2): not Q
Conclusion: Therefore, not P

40.2.4 Grammatically speaking

A grammatical example traditionally used by logicians contrasting sufficient and necessary conditions is the statement “If there is fire, then oxygen is present”. An oxygenated environment is necessary for fire or combustion, but simply because there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse “If there is oxygen present, then fire is present” cannot be inferred. All that can be inferred from the original proposition is that “If oxygen is not present, then there cannot be fire”.

40.2.5 Relationship of propositions

The symbol for the biconditional ("↔") signifies the relationship between the propositions is both necessary and sufficient, and is verbalized as "if and only if", or, according to the example “If P then Q 'if and only if' if not Q then not P”. Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate inference of traditional logic. In the categorical proposition “All S is P”, the subject term 'S' is said to be distributed, that is, all members of its class are exhausted in its expression. Conversely, the predicate term 'P' cannot be said to be distributed, or exhausted in its expression, because it is indeterminate whether every instance of a member of 'P' as a class is also a member of 'S' as a class. All that can be validly inferred is that “Some P are S”. Thus, the type 'A' proposition “All P is S” cannot be inferred by conversion from the original 'A' type proposition “All S is P”. All that can be inferred is the type “A” proposition “All non-P is non-S” (Note that (P → Q) and (~Q → ~P) are both 'A' type propositions). Grammatically, one cannot infer “all mortals are men” from “All men are mortal”. An 'A' type proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as in the inference “All bachelors are unmarried men” from “All unmarried men are bachelors”.

40.2.6 Transposition and the method of contraposition

In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions through contraposition and obversion,[6] a series of immediate inferences where the rule of obversion is first applied to the original categorical proposition “All S is P”, yielding the obverse “No S is non-P”. In the obversion of the original proposition to an 'E' type proposition, both terms become distributed. The obverse is then converted, resulting in “No non-P is S”, maintaining distribution of both terms. The “No non-P is S” is again obverted, resulting in the contrapositive “All non-P is non-S”. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting 'A' type proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed.[7]

40.2.7 Differences between transposition and contraposition

Note that the methods of transposition and contraposition should not be confused. Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obverse of the other,[8] i.e. “No non-P is S” and “All non-P is non-S”. The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the “mediate inferences”[9] of contraposition and is also referred to as the “law of contraposition”.[10]

40.3 Transposition in mathematical logic

See Transposition (mathematics), Set theory

40.4 Proof

40.5 See also

40.6 References

[1] Hurley, Patrick (2011). A Concise Introduction to Logic (11th ed.). Cengage Learning. p. 414.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.

[3] Moore and Parker

[4] Brody, Baruch A. “Glossary of Logical Terms”. Encyclopedia of Philosophy. Vol. 5–6, p. 76. Macmillan, 1973.

[5] Copi, Irving M. Symbolic Logic. 5th ed. Macmillan, 1979. See the Rules of Replacement, pp. 39–40.

[6] Stebbing, 1961, pp. 65–66. For reference to the initial step of contraposition as obversion and conversion, see Copi, 1953, p. 141.

[7] See Stebbing, 1961, pp. 65-66. Also, for reference to the immediate inferences of obversion, conversion, and obversion again, see Copi, 1953, p. 141.

[8] See Stebbing, 1961, p. 66.

[9] For an explanation of the absorption of obversion and conversion as “mediate inferences”, see: Copi, Irving. Symbolic Logic. pp. 171–174, MacMillan, 1979, fifth edition.

[10] Prior, A.N. “Logic, Traditional”. Encyclopedia of Philosophy, Vol.5, Macmillan, 1973.

40.7 Further reading

• Brody, Baruch A. “Glossary of Logical Terms”. Encyclopedia of Philosophy. Vol. 5–6, p. 61. Macmillan, 1973.

• Copi, Irving. Introduction to Logic. MacMillan, 1953.

• Copi, Irving. Symbolic Logic. MacMillan, 1979, fifth edition.

• Prior, A.N. “Logic, Traditional”. Encyclopedia of Philosophy, Vol. 5, Macmillan, 1973.

• Stebbing, Susan. A Modern Introduction to Logic. Harper, 1961, seventh edition.

40.8 External links

• Improper Transposition (Fallacy Files)

Chapter 41

Turnstile (symbol)


In mathematical logic and computer science the symbol ⊢ has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as “yields”, “proves”, “satisfies” or “entails”. The symbol was first used by Gottlob Frege in his 1879 book on logic, Begriffsschrift.[1] Martin-Löf analyzes the ⊢ symbol thus: "...[T]he combination of Frege’s Urteilsstrich, judgement stroke [ | ], and Inhaltsstrich, content stroke [—], came to be called the assertion sign.”[2] Frege’s notation for a judgement of some content A

⊢ A

can then be read

“I know A is true”.[3]

In the same vein, a conditional assertion

P ⊢ Q

can be read as:

From P, I know that Q.

In TeX, the turnstile symbol ⊢ is obtained from the command \vdash. In Unicode, the turnstile symbol (⊢) is called right tack and is at code point U+22A2.[4] On a typewriter, a turnstile can be composed from a vertical bar (|) and a dash (–). In LaTeX there is a turnstile package which issues this sign in many ways, and is capable of putting labels below or above it, in the correct places.[5]
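As a quick illustration of the commands just mentioned, the following minimal sketch uses only standard LaTeX math-mode commands (the turnstile package's label-placing macros are not shown here):

```latex
% Turnstile symbols with standard LaTeX math-mode commands.
$\vdash A$     % a theorem: A is derivable from no premises
$P \vdash Q$   % Q is derivable from P
$F \dashv G$   % reversed turnstile, e.g. F is left adjoint to G
$P \models Q$  % the related double turnstile (semantic consequence)
```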

41.1 Interpretations

The turnstile represents a binary relation. It has several different interpretations in different contexts:

• In metalogic, the study of formal languages, the turnstile represents syntactic consequence (or “derivability”). That is, it shows that one string can be derived from another in a single step, according to the transformation rules (i.e. the syntax) of some given formal system.[6] As such, the expression


P ⊢ Q means that Q is derivable from P in the system. Consistent with its use for derivability, a " ⊢ " followed by an expression without anything preceding it denotes a theorem, which is to say that the expression can be derived from the rules using an empty set of axioms. As such, the expression ⊢ Q means that Q is a theorem in the system.

• In proof theory, the turnstile is used to denote “provability”. For example, if T is a formal theory and S is a particular sentence in the language of the theory then

T ⊢ S means that S is provable from T .[7] This usage is demonstrated in the article on propositional calculus.

• In the typed lambda calculus, the turnstile is used to separate typing assumptions from the typing judgment.[8][9]

• In category theory, a reversed turnstile (⊣), as in F ⊣ G, is used to indicate that the functor F is left adjoint to the functor G.

• In APL the symbol is called “right tack” and represents the ambivalent right identity function where both X⊢Y and ⊢Y are Y. The reversed symbol "⊣" is called “left tack” and represents the analogous left identity where X⊣Y is X and ⊣Y is Y.[10][11]

• In combinatorics, λ ⊢ n means that λ is a partition of the integer n.[12]
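The combinatorial reading λ ⊢ n can be sketched in Python; `is_partition` is an illustrative helper name, not a standard library function:

```python
# λ ⊢ n in combinatorics: λ is a partition of the integer n,
# i.e. a weakly decreasing sequence of positive integers summing to n.
def is_partition(lam, n):
    """True if lam (a tuple of ints) is a partition of n."""
    return (all(p >= 1 for p in lam)
            and all(a >= b for a, b in zip(lam, lam[1:]))
            and sum(lam) == n)

print(is_partition((4, 2, 1, 1), 8))  # True:  (4, 2, 1, 1) ⊢ 8
print(is_partition((1, 2, 4), 7))     # False: parts not weakly decreasing
```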

41.2 See also

• Double turnstile |=

• Sequent

• Sequent calculus

• List of logic symbols

• List of mathematical symbols

41.3 Notes

[1] Frege 1879

[2] Martin-Löf 1996, pp. 6,15

[3] Martin-Löf 1996, p. 15

[4] Unicode standard

[5] http://www.ctan.org/tex-archive/macros/latex/contrib/turnstile

[6] http://dingo.sbs.arizona.edu/~hammond/ling178-sp06/mathCh6.pdf

[7] Troelstra & Schwichtenberg 2000

[8] http://www.mscs.dal.ca/~selinger/papers/lambdanotes.pdf

[9] Schmidt 1994

[10] http://www.jsoftware.com/papers/APLDictionary.htm

[11] Iverson 1987

[12] Stanley, Richard P. Enumerative Combinatorics. 1st ed. Vol. 2. Cambridge: Cambridge University Press, 1999. p. 287.

41.4 References

• Frege, Gottlob (1879). “Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens”. Halle.

• Iverson, Kenneth (1987). “A Dictionary of APL”.

• Martin-Löf, Per (1996). “On the meanings of the logical constants and the justifications of the logical laws” (PDF). Nordic Journal of Philosophical Logic 1 (1): 11–60. Lecture notes to a short course at Università degli Studi di Siena, April 1983.

• Schmidt, David (1994). “The Structure of Typed Programming Languages”. MIT Press. ISBN 0-262-19349-3.

• Troelstra, A. S.; Schwichtenberg, H. (2000). “Basic Proof Theory” (second ed.). Cambridge University Press. ISBN 978-0-521-77911-1.

Chapter 42

Universal generalization

In predicate logic, generalization (also universal generalization or universal introduction,[1][2][3] GEN) is a valid inference rule. It states that if ⊢ P(x) has been derived, then ⊢ ∀x P(x) can be derived.

42.1 Generalization with hypotheses

The full generalization rule allows for hypotheses to the left of the turnstile, but with restrictions. Assume Γ is a set of formulas, φ a formula, and Γ ⊢ φ(y) has been derived. The generalization rule states that Γ ⊢ ∀xφ(x) can be derived if y is not mentioned in Γ and x does not occur in φ. These restrictions are necessary for soundness. Without the first restriction, one could conclude ∀xP (x) from the hypothesis P (y) . Without the second restriction, one could make the following deduction:

1. ∃z∃w(z ≠ w) (Hypothesis)

2. ∃w(y ≠ w) (Existential instantiation)

3. y ≠ x (Existential instantiation)

4. ∀x(x ≠ x) (Faulty universal generalization)

This purports to show that ∃z∃w(z ≠ w) ⊢ ∀x(x ≠ x), which is an unsound deduction.

42.2 Example of a proof

Prove: ∀x (P(x) → Q(x)) → (∀x P(x) → ∀x Q(x)).

Proof: In this proof, universal generalization was used in step 8. The deduction theorem was applicable in steps 10 and 11 because the formulas being moved have no free variables.
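The proof table itself does not survive in this text. A standard reconstruction (a sketch following the usual presentation of this result, with step numbers chosen so that universal generalization occurs at step 8 and the deduction theorem at steps 10 and 11) runs as follows:

1. ∀x (P(x) → Q(x)) (Hypothesis)

2. ∀x P(x) (Hypothesis)

3. (∀x (P(x) → Q(x))) → (P(y) → Q(y)) (Universal instantiation)

4. P(y) → Q(y) (From 1 and 3 by modus ponens)

5. (∀x P(x)) → P(y) (Universal instantiation)

6. P(y) (From 2 and 5 by modus ponens)

7. Q(y) (From 4 and 6 by modus ponens)

8. ∀x Q(x) (From 7 by universal generalization)

9. ∀x (P(x) → Q(x)), ∀x P(x) ⊢ ∀x Q(x) (Summary of steps 1–8)

10. ∀x (P(x) → Q(x)) ⊢ ∀x P(x) → ∀x Q(x) (From 9 by the deduction theorem)

11. ⊢ ∀x (P(x) → Q(x)) → (∀x P(x) → ∀x Q(x)) (From 10 by the deduction theorem)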

42.3 See also

• First-order logic

• Hasty generalization

• Universal instantiation


42.4 References

[1] Copi and Cohen

[2] Hurley

[3] Moore and Parker

Chapter 43

Universal instantiation

In predicate logic universal instantiation[1][2][3] (UI, also called universal specification or universal elimination, and sometimes confused with Dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier but it can also be encoded in an axiom. It is one of the basic principles used in quantification theory. Example: “All dogs are mammals. Fido is a dog. Therefore Fido is a mammal.” In symbols the rule as an axiom schema is

∀x A(x) ⇒ A(a/x),

for some term a and where A(a/x) is the result of substituting a for all free occurrences of x in A. As a rule of inference it is: from ⊢ ∀x A, infer ⊢ A(a/x), with A(a/x) the same as above. Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934.”[4]
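The substitution operation A(a/x) at the heart of the rule can be sketched in Python; the nested-tuple representation of formulas and both function names are illustrative, not from the source:

```python
# A minimal sketch of universal instantiation as syntactic substitution:
# from ∀x A(x), infer A(a/x).

def substitute(formula, var, term):
    """Replace free occurrences of `var` with `term` in a formula
    represented as nested tuples; a quantifier binding `var` shields
    its body from substitution."""
    if isinstance(formula, str):
        return term if formula == var else formula
    op, *args = formula
    if op == "forall" and args[0] == var:
        return formula  # bound occurrences are not replaced
    return (op,) + tuple(substitute(a, var, term) for a in args)

def universal_instantiation(formula, term):
    """From ('forall', x, body) derive body with `term` put in for x."""
    op, var, body = formula
    assert op == "forall"
    return substitute(body, var, term)

# “All dogs are mammals”, instantiated at Fido:
axiom = ("forall", "x", ("implies", ("Dog", "x"), ("Mammal", "x")))
print(universal_instantiation(axiom, "fido"))
# ('implies', ('Dog', 'fido'), ('Mammal', 'fido'))
```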

43.1 Quine

Universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that “∀x x=x” implies “Socrates=Socrates”, we could as well say that the denial “Socrates≠Socrates” implies “∃x x≠x”. The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially.[5]

43.2 See also

• Existential generalization

• Existential quantification

• Inference rules


43.3 References

[1] Irving M. Copi, Carl Cohen, Kenneth McMahon (Nov 2010). Introduction to Logic. Pearson Education. ISBN 978-0205820375.

[2] Hurley

[3] Moore and Parker

[4] Copi, Irving. Symbolic Logic. 5th ed. p. 71.

[5] Willard van Orman Quine; Roger F. Gibson (2008). “V.24. Reference and Modality”. Quintessence. Cambridge, Mass: Belknap Press of Harvard University Press. Here: p. 366.

43.4 Text and image sources, contributors, and licenses

43.4.1 Text

• Absorption (logic) Source: https://en.wikipedia.org/wiki/Absorption_(logic)?oldid=647103328 Contributors: Michael Hardy, Gregbard, PamD, Alejandrocaro35, SwisterTwister, Dcirovic, Dooooot, BukLauBrah, Zeiimer and Anonymous: 3 • Associative property Source: https://en.wikipedia.org/wiki/Associative_property?oldid=668051303 Contributors: AxelBoldt, Zundark, Jeronimo, Andre Engels, XJaM, Christian List, Toby~enwiki, Toby Bartels, Patrick, Michael Hardy, Ellywa, Andres, Pizza Puzzle, Ideyal, Charles Matthews, Dysprosia, Kwantus, Robbot, Mattblack82, Wikibot, Wereon, Robinh, Tobias Bergemann, Giftlite, Smjg, Lethe, Herbee, Brona, Jason Quinn, Deleting Unnecessary Words, Creidieki, Rich Farmbrough, Guanabot, Paul August, Rgdboer, Jum- buck, Alansohn, Gary, Burn, H2g2bob, Bsadowski1, Oleg Alexandrov, Linas, Palica, Graham87, Deadcorpse, SixWingedSeraph, Fre- plySpang, Yurik, Josh Parris, Rjwilmsi, Salix alba, Mathbot, Chobot, YurikBot, Thane, AdiJapan, BOT-Superzerocool, Bota47, Banus, KnightRider~enwiki, Mmernex, Melchoir, PJTraill, Thumperward, Fuzzform, SchfiftyThree, Octahedron80, Zven, Furby100, Wine Guy, Smooth O, Cybercobra, Cothrun, Will Beback, SashatoBot, IronGargoyle, Physis, 16@r, JRSpriggs, CBM, Gregbard, Xtv, Mr Gronk, Rbanzai, Thijs!bot, Egriffin, Marek69, Escarbot, Salgueiro~enwiki, DAGwyn, David Eppstein, JCraw, Anaxial, Trusilver, Acalamari, Py- rospirit, Krishnachandranvn, Daniel5Ko, GaborLajos, Darklich14, AlnoktaBOT, Rei-bot, Wikiisawesome, SieBot, Gerakibot, Flyer22, Hello71, Trefgy, Classicalecon, ClueBot, The Thing That Should Not Be, Cliff, Razimantv, Mudshark36, Auntof6, DragonBot, Watch- duck, Bender2k14, SoxBot III, Addbot, Some jerk on the Internet, Download, BepBot, Tide rolls, Jarble, Luckas-bot, Yobot, TaBOT- zerem, Kuraga, MauritsBot, GrouchoBot, PI314r, Charvest, Ex13, Erik9bot, MacMed, Pinethicket, MastiBot, Fox Wilson, Ujoimro, GoingBatty, ZéroBot, NuclearDuckie, Quondum, D.Lazard, EdoBot, Crown Prince, ClueBot NG, Wcherowi, Kevin 
Gorman, Widr, Furkaocean, IkamusumeFan, SuperNerd137, Rang213, Stephan Kulla, Lugia2453, Jshflynn, Epicgenius, Kenrambergm2374, Matthew Kastor, Subcientifico, Monarchrob1 and Anonymous: 119 • Axiom Source: https://en.wikipedia.org/wiki/Axiom?oldid=670045815 Contributors: AxelBoldt, LC~enwiki, Brion VIBBER, Mav, Zun- dark, The Anome, Youssefsan, XJaM, Stevertigo, Michael Hardy, JakeVortex, Nixdorf, Graue, Glenn, Tim Retout, Nikai, Rotem Dan, Andres, Hectorthebat, EdH, Rob Hooft, Mxn, Charles Matthews, Dysprosia, Greenrd, Markhurd, Hyacinth, CecilTyme, Rob- bot, Josh Cherry, RedWolf, Altenmann, Jeronim, Flauto Dolce, Saforrest, Wikibot, Benc, Tobias Bergemann, Ancheta Wis, Tosha, Giftlite, Mshonle~enwiki, Gene Ward Smith, Dissident, Alterego, Neilc, Andycjp, Knutux, Alberto da Calvairate~enwiki, Antandrus, Dunks58, Sam Hocevar, Guppyfinsoup, Mormegil, EugeneZelenko, Rich Farmbrough, Iainscott, Mani1, Paul August, El C, Rgdboer, Irrªtiºnal, Bobo192, Babomb, Mintywalker, ParticleMan, Nk, Rje, Obradovic Goran, Haham hanuka, Jumbuck, Alansohn, 119, Jeltz, Hu, Simplebrain, Stephan Leeds, HenryLi, Killing Vector, Oleg Alexandrov, Natalya, Roland2~enwiki, Kzollman, Ruud Koot, Gim- boid13, Xeonx, Mandarax, Qwertyus, FreplySpang, Salix alba, Nmegill, FlaBot, Matharvest, Mathbot, Alvin-cs, Chobot, YurikBot, Wavelength, Splash, Chaos, Trovatore, Dbmag9, VinnyCee, Bucketsofg, Arthur Rubin, Vicarious, David Biddulph, Nahaj, SmackBot, McGeddon, Od Mishehu, SaxTeacher, Bomac, KocjoBot~enwiki, Alksub, RobotJcb, Bluebot, Adam M. 
Gadomski, DMS, MalafayaBot, SchfiftyThree, Spellchecker, Zhuravskij, Xiner, Xyzzyplugh, EPM, Pissant, Eddiesegoura, Jon Awbrey, Sammy1339, Vina-iwbot~enwiki, Byelf2007, SashatoBot, Lambiam, DA3N, Dialectic~enwiki, Physis, Atoll, 16@r, Loadmaster, MTSbot~enwiki, Hu12, Dreftymac, Lenoxus, Bruno321, Happy-melon, Bharatveer, JRSpriggs, CRGreathouse, CaveBat, CBM, RoddyYoung, Gregbard, Cydebot, Peterd- jones, Julian Mendez, Gproud, PamD, Fourchette, Ken Burns, CharlotteWebb, X96lee15, AlefZet, Escarbot, Quintote, Readro, Steelpil- low, JAnDbot, Oxinabox, MER-C, Four Dog Night, Magioladitis, VoABot II, JamesBWatson, Swpb, Amorette, Daarznieks, David Eppstein, R'n'B, Trusilver, MistyMorn, Maurice Carbonaro, Amritasen7, Cpiral, SlowJog, McSly, Daniel5Ko, Juliancolton, Dorftrottel, VolkovBot, Hotfeba, Jeff G., LokiClock, Philip Trueman, TXiKiBoT, AlexDenney, Dmcq, Submachinegum, Sfmammamia, LegoAx- iom1007, SieBot, Dawn Bard, JerrySteal, MiNombreDeGuerra, IdNotFound, BrianGo28, Denisarona, DEMcAdams, ClueBot, Uncle Milty, Watchduck, Jotterbot, Eloifigueiredo, Aitias, Blackhawkjxx7, GAT27, YouRang?, JKeck, Marc van Leeuwen, Gnowor, Se- shadri105, Addbot, Aelisabeths, NjardarBot, Tassedethe, Numbo3-bot, Dayewalker, Tide rolls, Legobot, Luckas-bot, Yobot, Frag- gle81, TaBOT-zerem, KamikazeBot, Goldhawk08, Jalal0, AnomieBOT, Arjun G. 
Menon, Jim1138, Piano non troppo, LlywelynII, Bob Burkhardt, ArthurBot, Xqbot, LordDelta, J JMesserly, GrouchoBot, The Wiki ghost, Die4cheese5, Chjoaygame, FrescoBot, Paine ,Lotje, Dinamik-bot, 777sms, Duoduoduo ,کاشف عقیل ,Ellsworth, I dream of horses, RedBot, Xiglofre, Meier99, Gamewizard71 Reaper Eternal, Minimac, EmausBot, Najeeb1010, GoingBatty, Wikipelli, AvicBot, Jay-Sebastos, MonoAV, NTox, ClueBot NG, Masssly, Helpful Pixie Bot, Calabe1992, Frog23, Davidiad, Glacialfox, HumanNaturOriginal, Jeremy112233, ChrisGualtieri, Br'er Rabbit, Spec- tral sequence, Josell2, Lakshya08, Yolow1, Gcc333, Crystallizedcarbon, Cthulhu is love cthulhu is life, Tushar15mehta, Person420 and Anonymous: 270 • Axiom schema Source: https://en.wikipedia.org/wiki/Axiom_schema?oldid=635768280 Contributors: Michael Hardy, Mpagano, AugPi, Charles Matthews, Greenrd, Tobias Bergemann, CyborgTosser, Sam nead, Luqui, TeaDrinker, YurikBot, Hairy Dude, NawlinWiki, Sardanaphalus, SmackBot, Mhss, Dreadstar, CBM, Chrisahn, Gregbard, Kallerdis, Arno Matthias, R'n'B, Addbot, BOOLE1847, Luckas- bot, GrouchoBot, False vacuum, The Wiki ghost, ZéroBot, Brirush, TehLilBugga and Anonymous: 4 • Axiomatic system Source: https://en.wikipedia.org/wiki/Axiomatic_system?oldid=641122229 Contributors: Derek Ross, Jeronimo, Rotem Dan, Charles Matthews, Hyacinth, Benwing, Naddy, Tobias Bergemann, Giftlite, Klemen Kocjancic, Rfl, Spayrard, Sole Soul, Nk, Mdd, Cipherswarm, Oleg Alexandrov, Ruud Koot, BD2412, Salix alba, Intgr, Gareth E Kegg, FrankTobia, YurikBot, Hairy Dude, NTBot~enwiki, Ste1n, Jpbowen, SmackBot, Unyoyega, Mhss, Chris the speller, Tree Biting Conspiracy, Nbarth, Spellchecker, Colonies Chris, Jahiegel, Tommyjb, Byelf2007, CRGreathouse, CBM, Myasuda, Gregbard, BetacommandBot, Nick Number, Andonic, Ben- thehutt, Cic, Seberle, Maurice Carbonaro, Policron, VolkovBot, Tomaxer, Skv avenger, SieBot, DesolateReality, Relly Komaruzaman, Dthomsen8, Addbot, Lightbot, Ptbotgourou, EnochBethany, AnomieBOT, 
Xqbot, GrouchoBot, Zero Thrust, Jandalhandler, DixonDBot, Lotje, Ripchip Bot, EmausBot, Shivankmehra, Damaged watson, ChuispastonBot, ClueBot NG, Wcherowi, CocuBot, Jochen Burghardt, Shurakai, Strabcat and Anonymous: 46 • Biconditional elimination Source: https://en.wikipedia.org/wiki/Biconditional_elimination?oldid=637933553 Contributors: Patrick, Justin Johnson, Angela, Rich Farmbrough, GregorB, Graham87, SmackBot, Bluebot, Lambiam, Jim.belk, CBM, Gregbard, Alejan- drocaro35, LilHelpa, Erik9bot, Gamewizard71, Mark viking and Anonymous: 4 • Biconditional introduction Source: https://en.wikipedia.org/wiki/Biconditional_introduction?oldid=619411971 Contributors: Patrick, Justin Johnson, Jitse Niesen, Sketchee, Graham87, Arthur Rubin, SmackBot, Bluebot, Jim.belk, CBM, Gregbard, VolkovBot, Legobot, KamikazeBot, Erik9bot, Gamewizard71, AvicBot, Dooooot and Anonymous: 9 128 CHAPTER 43. UNIVERSAL INSTANTIATION

• Commutative property Source: https://en.wikipedia.org/wiki/Commutative_property?oldid=669891653 Contributors: AxelBoldt, Zun- dark, Tarquin, Andre Engels, Christian List, Toby~enwiki, Toby Bartels, Patrick, Michael Hardy, Wshun, Ixfd64, GTBacchus, Ahoerste- meier, Snoyes, Jll, Pizza Puzzle, Ideyal, Charles Matthews, Wikiborg, Dysprosia, Jeffq, Robbot, RedWolf, Romanm, Robinh, Isopropyl, Tobias Bergemann, Enochlau, Giftlite, BenFrantzDale, Lupin, Herbee, Peruvianllama, Waltpohl, Frencheigh, Gdr, Knutux, OverlordQ, B.d.mills, Chris Howard, Mormegil, Rich Farmbrough, Ebelular, Mikael Brockman, Dbachmann, Paul August, MisterSheik, El C, Szquir- rel, Touriste, Samadam, Malcolm rowe, Jumbuck, Arthena, Mattpickman, Mlessard, Burn, Mlm42, Stillnotelf, Tony Sidaway, Oleg Alexandrov, The JPS, Linas, Justinlebar, Jeff3000, Palica, Ashmoo, Graham87, Josh Parris, Rjwilmsi, Salix alba, Vegaswikian, FlaBot, VKokielov, Ground Zero, Srleffler, Masnevets, Reetep, YurikBot, Hairy Dude, Wolfmankurd, Michael Slone, Rick Norwood, Samuel Huang, Derek.cashman, FF2010, Petri Krohn, Vicarious, SmackBot, YellowMonkey, Slashme, Melchoir, KocjoBot~enwiki, Jab843, PJTraill, Chris the speller, Bluebot, Master of Puppets, Thumperward, SchfiftyThree, Complexica, Octahedron80, DHN-bot~enwiki, Jus- tUser, Cybercobra, Wybot, Thehakimboy, Acdx, Bando26, 16@r, Childzy, Dan Gluck, Iridescent, Dreftymac, DBooth, Robert.McGibbon, Floridi~enwiki, Unmitigated Success, Gregbard, MichaelRWolf, Cydebot, Larsnostdal, Kozuch, JamesAM, Headbomb, Second Quanti- zation, Wmasterj, Thomprod, Dzer0, Grayshi, Escarbot, Fr33ke, AntiVandalBot, Nacho Librarian, Gcm, 100110100, Mikemill, Wikidude- man, DAGwyn, Dirac66, JoergenB, MartinBot, Nev1, Daniele.tampieri, Haseldon, Policron, KylieTastic, Sarregouset, Useight, VolkovBot, TreasuryTag, Am Fiosaigear~enwiki, Philip Trueman, TXiKiBoT, Anonymous Dissident, Aaron Rotenberg, Geometry guy, Spinningspark, Life, Liberty, Property, SieBot, Ivan Štambuk, Legion fi, Toddst1, 
Xvani, Weston.pace, OKBot, Mike2vil, Francvs, Classicalecon, ClueBot, Cliff, Bloodholds, R000t, CounterVandalismBot, Deathnomad, Excirial, Ftbhrygvn, Joe8824, Nafis ru, Stephen Poppitt, Ad- dbot, LaaknorBot, SpBot, Gail, Luckas-bot, Yobot, Weisicong, DemocraticLuntz, Citation bot, MauritsBot, Xqbot, Dithridge, 12cookk, Ubcule, GrouchoBot, Dger, HamburgerRadio, MacMed, Pinethicket, Adlerbot, Psimmler, ThinkEnemies, JV Smithy, Onel5969, Ujoimro, Aceshooter, Slightsmile, Quondum, Joshlepaknpsa, Wayne Slam, Arnaugir, Scientific29, ClueBot NG, Wcherowi, Gilderien, Sayginer, Marechal Ney, Widr, AvocatoBot, Mark Arsten, Ameulen11, CeraBot, ChrisGualtieri, None but shining hours, Khazar2, Dexbot, Stephan Kulla, Fox2k11, Dskjhgds, DavidLeighEllis, Davidliu0421, Wikibritannica, Niallhoranluv123, JMP EAX, Troolium, Hinmatóowyalahtqit, Holt Mcdougal, ABCDEFAD, Fazbear7891 and Anonymous: 178 • Conjunction elimination Source: https://en.wikipedia.org/wiki/Conjunction_elimination?oldid=662661702 Contributors: Owen, Rob- bot, Smjg, Gdr, Jiy, Anthony Appleyard, Oleg Alexandrov, Algebraist, Arthur Rubin, SmackBot, Jim.belk, IronGargoyle, JHunterJ, Grumpyyoungman01, Nicko6, CBM, Gregbard, Thijs!bot, Dfrg.msc, Father Goose, Markus Prokott, Fratrep, Silvergoat, Dylan620, Ale- jandrocaro35, Addbot, AnnaFrance, Jarble, Luckas-bot, Yobot, AnomieBOT, Erik9bot, Andrewjameskirk, Dooooot, Jochen Burghardt and Anonymous: 9 • Conjunction introduction Source: https://en.wikipedia.org/wiki/Conjunction_introduction?oldid=637933399 Contributors: Justin John- son, Silverfish, Big Bob the Finder, Bloodshedder, Jiy, Rich Farmbrough, AllyUnion, Oleg Alexandrov, Graham87, YurikBot, Arthur Rubin, SmackBot, Kurykh, Jim.belk, CBM, Gregbard, Alejandrocaro35, Legobot, Yobot, Constructive editor, LucienBOT, Set theorist, Dooooot and Anonymous: 6 • Constructive dilemma Source: https://en.wikipedia.org/wiki/Constructive_dilemma?oldid=638343034 Contributors: Michael Hardy, Charles Matthews, Jiy, Rich Farmbrough, 
Nortexoid, Zenosparadox, Oleg Alexandrov, Awis, Arthur Rubin, SmackBot, Jim.belk, Greg- bard, Nozzer42, ClueBot, Arunsingh16, Alejandrocaro35, Yobot, Govindjsk, Erik9bot, Gamewizard71, PrinceXantar, Dooooot and Anonymous: 8 • De Morgan’s laws Source: https://en.wikipedia.org/wiki/De_Morgan’{}s_laws?oldid=668423258 Contributors: The Anome, Tarquin, Jeronimo, Mudlock, Michael Hardy, TakuyaMurata, Ihcoyc, Ijon, AugPi, DesertSteve, Charles Matthews, Dcoetzee, Choster, Dysprosia, Xiaodai~enwiki, Hyacinth, David Shay, SirPeebles, Fredrik, Dorfl, Hadal, Giftlite, Starblue, DanielZM, Guppyfinsoup, Smimram, ES- kog, Chalst, Art LaPella, EmilJ, Scrutchfield, Linj, Alphax, Boredzo, Larry V, Jumbuck, Smylers, Oleg Alexandrov, Linas, Mindmatrix, Bkkbrad, Btyner, Graham87, Miserlou, The wub, Marozols, Mathbot, Subtractive, DVdm, YurikBot, Wavelength, RobotE, Hairy Dude, Michael Slone, Cori.schlegel, Saric, Cdiggins, Lt-wiki-bot, Rodrigoq~enwiki, SmackBot, RDBury, Gilliam, MooMan1, Mhss, JRSP, DHN-bot~enwiki, Ebertek, Coolv, Cybercobra, Jon Awbrey, Vina-iwbot~enwiki, Petrejo, Gobonobo, Darktemplar, 16@r, Loadmaster, Drae, MTSbot~enwiki, Adambiswanger1, Nutster, JForget, Gregbard, Kanags, Thijs!bot, Epbr123, Jojan, Helgus, Futurebird, AntiVan- dalBot, Hannes Eder, MikeLynch, JAnDbot, Jqavins, Nitku, Stdazi, Gwern, General Jazza, R'n'B, Bongomatic, Ttwo, Javawizard, Kratos 84, Policron, TWiStErRob, VolkovBot, TXiKiBoT, Drake Redcrest, Ttennebkram, Smoseson, SieBot, Squelle, Fratrep, Melcombe, Into The Fray, ClueBot, B1atv, Mild Bill Hiccup, Cholmeister, PixelBot, Alejandrocaro35, Hans Adler, Cldoyle, Rror, Alexius08, Addbot, Mitch feaster, Tide rolls, Luckas-bot, Yobot, Linket, KamikazeBot, Eric-Wester, AnomieBOT, Materialscientist, DannyAsher, Obersach- sebot, Xqbot, Capricorn42, Boongie, Action ben, JascalX, Omnipaedista, Jsorr, Mfwitten, Rapsar, Stpasha, RBarryYoung, DixonDBot, Teknad, EmausBot, WikitanvirBot, Mbonet, Chewings72, Davikrehalt, Llightex, ClueBot NG, Wcherowi, 
Benjgil, Widr, Helpful Pixie Bot, David815, Sylvier11, Waleed.598, ChromaNebula, Jochen Burghardt, Epicgenius, Bluemathman, G S Palmer, Idonei, Scotus12, Danlarteygh and Anonymous: 143 • Deduction theorem Source: https://en.wikipedia.org/wiki/Deduction_theorem?oldid=645742813 Contributors: Michael Hardy, AugPi, Sethmahoney, Silverfish, Charles Matthews, Giftlite, Rossrs, Discospinster, Luqui, Paul August, Nortexoid, Obradovic Goran, Eric Kvaalen, MikeMorgan, Oleg Alexandrov, Woohookitty, Ryan Reich, Koavf, SmackBot, BeteNoir, Mhss, Ezrarez, Mets501, JRSpriggs, CBM, Gregbard, Julian Mendez, Headbomb, Four Dog Night, Ampy1, BBCOFFEECAT, Aaron Rotenberg, Geometry guy, Andrewaskew, VanishedUserABC, ClueBot, DumZiBoT, WikHead, Addbot, Rdanneskjold, Luckas-bot, Citation bot, Andrewjameskirk, Citation bot 1, Gamewizard71, Ebrambot and Anonymous: 22 • Destructive dilemma Source: https://en.wikipedia.org/wiki/Destructive_dilemma?oldid=650823891 Contributors: Michael Hardy, Zen- sufi, Jiy, Rich Farmbrough, Zenosparadox, Oleg Alexandrov, Lsuff, MagneticFlux, SmackBot, Cybercobra, Tesseran, Jim.belk, Floridi~enwiki, Gregbard, Adavidb, Niceguyedc, Alejandrocaro35, Erik9bot, 478jjjz, Helpful Pixie Bot, Greenm22, Dooooot and Anonymous: 11 • Disjunction elimination Source: https://en.wikipedia.org/wiki/Disjunction_elimination?oldid=655615074 Contributors: Michael Hardy, Justin Johnson, Cimon Avaro, Evercat, Arcadian, Linas, GregorB, Graham87, Jameshfisher, Arthur Rubin, SmackBot, Kurykh, Cyber- cobra, Jim.belk, Mets501, CBM, Gregbard, Julian Mendez, Vzaliva, Hotfeba, Alejandrocaro35, Erik9bot, Dooooot and Anonymous: 4 • Disjunction introduction Source: https://en.wikipedia.org/wiki/Disjunction_introduction?oldid=641932059 Contributors: Amillar, Justin Johnson, Evercat, Sam Hocevar, Esperant, Jiy, Rctay, Graham87, SmackBot, Jim.belk, Gregbard, Alejandrocaro35, Download, Legobot, Luckas-bot, Calle, Erik9bot, Voomoo and Anonymous: 8 • Disjunctive syllogism Source: 
https://en.wikipedia.org/wiki/Disjunctive_syllogism?oldid=668712430 Contributors: AugPi, Cimon Avaro, Evercat, Charles Matthews, Dysprosia, Taak, Jiy, Rich Farmbrough, Guanabot, ESkog, Jumbuck, Bookandcoffee, Oleg Alexandrov, 43.4. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 129

FlaBot, Kwhittingham, Mathbot, Jameshfisher, YurikBot, KSchutte, Lomn, Shadro, Arthur Rubin, Pentasyllabic, SmackBot, Wlmg, Mhss, Bluebot, Kingdon, Jim.belk, Sophomoric, Tawkerbot2, Gregbard, Nmajdan, Thijs!bot, Mcguire, Anarchia, It Is Me Here, Jamelan, Alejandrocaro35, UnCatBot, Flash94, Addbot, SpillingBot, Yobot, AnomieBOT, John of Reading, Donner60, Dooooot and Anonymous: 27 • Distributive property Source: https://en.wikipedia.org/wiki/Distributive_property?oldid=665647722 Contributors: AxelBoldt, Tarquin, Youssefsan, Toby Bartels, Patrick, Xavic69, Michael Hardy, Andres, Ideyal, Dysprosia, Malcohol, Andrewman327, Shizhao, PuzzletChung, Romanm, Chris Roy, Wikibot, Tobias Bergemann, Giftlite, Markus Krötzsch, Dissident, Nodmonkey, Mike Rosoft, Smimram, Discospin- ster, Paul August, ESkog, Rgdboer, EmilJ, Bobo192, Robotje, Smalljim, Jumbuck, Arthena, Keenan Pepper, Mykej, Bsadowski1, Blax- thos, Linas, Evershade, Isnow, Marudubshinki, Salix alba, Vegaswikian, Nneonneo, Bfigura, FlaBot, [email protected], Mathbot, Andy85719, Ichudov, DVdm, YurikBot, Michael Slone, Grafen, Trovatore, Bota47, Banus, Melchoir, Bluebot, Ladislav the Posthu- mous, Octahedron80, UNV, Jiddisch~enwiki, Khazar, FrozenMan, Bando26, 16@r, Dicklyon, EdC~enwiki, Engelec, Exzakin, Jokes Free4Me, Simeon, Gregbard, Thijs!bot, Barticus88, Marek69, Nezzadar, Escarbot, Mhaitham.shammaa, Salgueiro~enwiki, JAnDbot, Onkel Tuca~enwiki, Acroterion, Numbo3, Katalaveno, AntiSpamBot, GaborLajos, Lyctc, Idioma-bot, Janice Margaret Vian, Montchav, TXiKiBoT, Anonymous Dissident, Dictouray, Oxfordwang, Martin451, Skylarkmichelle, Jackfork, Enviroboy, Dmcq, AlleborgoBot, Gerakibot, Bentogoa, Flyer22, Radon210, Hello71, ClueBot, The Thing That Should Not Be, Cliff, Mild Bill Hiccup, Niceguyedc, Gold- kingtut5, Excirial, Jusdafax, NuclearWarfare, NERIC-Security, Pichpich, Mm40, Addbot, Jojhutton, Ronhjones, Zarcadia, Favonian, Squandermania, Jarble, Ben Ben, Legobot, Luckas-bot, AnomieBOT, Materialscientist, 
NFD9001, Greatfermat, False vacuum, Ribot- BOT, Pinethicket, I dream of horses, MastiBot, Andrea105, Slon02, Saul34, J36miles, John of Reading, Davejohnsan, Orphan Wiki, Super48paul, Sp33dyphil, Slawekb, Quondum, BrokenAnchorBot, TyA, Donner60, Chewings72, DASHBotAV, AlecJansen, ClueBot NG, Wcherowi, IfYouDoIfYouDon't, Dreth, O.Koslowski, Asukite, Widr, Vibhijain, Helpful Pixie Bot, Pmi1924, BG19bot, TCN7JM, Dan653, Forkloop, CallofDutyboy9, EuroCarGT, Sandeep.ps4, Christian314, Ivashikhmin, Mogism, Makecat-bot, Lugia2453, Gphilip, Brirush, Wywin, BB-GUN101, ElHef, DavidLeighEllis, Shaun9876, Pkramer2021, Kitkat1234567880, Kcolemantwin3, Gracecandy1143, David88063, Abruce123412, Amortias, Solid Frog, Jj 1213 wiki, Dalangster, Iwamwickham, 123456me123456, Some1Redirects4You and Anonymous: 239 • Double negation Source: https://en.wikipedia.org/wiki/Double_negation?oldid=669911366 Contributors: Angela, Dysprosia, Wik, Ja- son Quinn, Cybercobra, David ekstrand, Wvbailey, CBM, Gregbard, Fratrep, TFCforever, Alejandrocaro35, Haruth, Helpful Pixie Bot, ChrisGualtieri, YiFeiBot and Anonymous: 3 • Existential generalization Source: https://en.wikipedia.org/wiki/Existential_generalization?oldid=637363180 Contributors: Hyacinth, Loadmaster, Yarou, Gregbard, Alejandrocaro35, Wcherowi, BG19bot, Fan Singh Long and Jochen Burghardt • Existential instantiation Source: https://en.wikipedia.org/wiki/Existential_instantiation?oldid=637627433 Contributors: InverseHyper- cube, CBM, Gregbard, Alejandrocaro35, Quondum, Theinactivist and Anonymous: 3 • Exportation (logic) Source: https://en.wikipedia.org/wiki/Exportation_(logic)?oldid=640841156 Contributors: Michael Hardy, Jitse Niesen, Arthur Rubin, HenningThielemann, Gregbard, Alejandrocaro35, Yobot, SwisterTwister, RjwilmsiBot, John of Reading, Dooooot and Charles.w.chambers • First principle Source: https://en.wikipedia.org/wiki/First_principle?oldid=647008203 Contributors: Michael Hardy, N-true, Dpbsmith, Chealer, Sanders muc, 
Everyking, Ajgorhoe, Rgfuller, Mike Rosoft, Blanchette, Abelson, FirstPrinciples, Lycurgus, Rgdboer, Pearle, Gary, Anthony Appleyard, Keenan Pepper, Mrholybrain, Vuo, Woohookitty, Pol098, Ariamaki, Btyner, Teemu Leisti, BD2412, Michael estelle, Helvetius, BradBeattie, RussBot, Ziddy, Botteville, Closedmouth, Wikiwawawa, Snowboardpunk, SmackBot, Josephprymak, Egsan Bacon, LuchoX, Fullstop, Dqb124, Lambiam, Kuru, Gregbard, Was a bee, Heckrodt, Omicronpersei8, OniTony, Tarotcards, STBotD, Haade, Powerglide, TXiKiBoT, Cindamuse, Kbrose, Newbyguesses, Etbnc, The way, the truth, and the light, Oxymoron83, Mr. Granger, Jusdafax, Alejandrocaro35, Tnxman307, Laforgue, Addbot, V-squared, AnnaFrance, Nallimbot, AnomieBOT, Wortafad, Jkbw, Aaron Kauppi, FrescoBot, Kuliwil, Mfwitten, Machine Elf 1735, Theo10011, RjwilmsiBot, Tesseract2, Wgunther, Hpvpp, AvicAWB, ClueBot NG, Math4kids, Helpful Pixie Bot, Lemnaminor, I am One of Many and Anonymous: 53 • Formal ethics Source: https://en.wikipedia.org/wiki/Formal_ethics?oldid=605589706 Contributors: CyborgTosser, Kwamikagami, Bobo192, BD2412, Gaius Cornelius, Lambiam, RomanSpa, Sdorrance, Gregbard, SlackerMom, Lightbot, Eumolpo, Collini, Helpful Pixie Bot and Anonymous: 3 • Formal proof Source: https://en.wikipedia.org/wiki/Formal_proof?oldid=661169170 Contributors: Charles Matthews, Hyacinth, J D, Timrollpickering, Pmanderson, EmilJ, Dfranke, BD2412, Salix alba, Gaius Cornelius, Seegoon, Chrylis, Gregbard, Nick Number, Phil- ogo, VanishedUserABC, Radagast3, Ndenison, Mild Bill Hiccup, Hans Adler, HexaChord, Addbot, Numbo3-bot, AnomieBOT, MattTait, The Wiki ghost, Disinvented and Anonymous: 9 • Formal system Source: https://en.wikipedia.org/wiki/Formal_system?oldid=666935939 Contributors: Michael Hardy, Modster, Mpagano, AugPi, Charles Matthews, Dysprosia, Hyacinth, Timrollpickering, Benc, Ancheta Wis, Giftlite, Acattell, Pmanderson, Felix Wiemann, Ascánder, R. S. 
Shaw, PWilkinson, Obradovic Goran, Mdd, Ruud Koot, Waldir, Raguks, Qwertyus, FlaBot, Margosbot~enwiki, Tillmo, Rekleov, YurikBot, Jpbowen, Arthur Rubin, Bsod2, SmackBot, Rex the first, Pro8, Hmains, Colonies Chris, Salt Yeung, Jon Awbrey, Byelf2007, Lambiam, Bjankuloski06en~enwiki, RandomCritic, 16@r, Mets501, JMK, BrainMagMo, CBM, Neelix, Gregbard, Cydebot, Al Lemos, Olaf, MartinBot, R'n'B, Maurice Carbonaro, Jonathanzung, Am Fiosaigear~enwiki, Maximillion Pegasus, Philogo, Popopp, Dmcq, Udirock, Vanished user kijsdion3i4jf, DesolateReality, Kas-nik, WestwoodMatt, Hans Adler, Aleksd, Sleepinj, Libcub, Addbot, TomorrowsDream, Adama44, Ptbotgourou, TaBOT-zerem, Pcap, KamikazeBot, AnomieBOT, Buenasdiaz, Omnipaedista, The Wiki ghost, Nikiriy, Undsoweiter, FrescoBot, Anthony.h.burton, Jonkerz, EmausBot, John of Reading, Montgolfière, Architectchao, Tijfo098, ChuispastonBot, Rmashhadi, Wcherowi, Paylett, JRBugembe, Helpful Pixie Bot, BG19bot, Justincheng12345-bot, Steamerandy, Saehry, Trackteur, Mario Castelán Castro and Anonymous: 35
• Hypothetical syllogism Source: https://en.wikipedia.org/wiki/Hypothetical_syllogism?oldid=638420630 Contributors: Rossami, Charles Matthews, I R Solecism, Dysprosia, Taak, Jiy, Chalst, Arthena, RJFJR, Oleg Alexandrov, Mel Etitis, Lsloan, Algebraist, Noam~enwiki, Arthur Rubin, SmackBot, Haza-w, Mhss, Bluebot, Cybercobra, Jim.belk, Simeon, Gregbard, Cydebot, JamesBWatson, .José~enwiki, Squids and Chips, Jamelan, Chenzw, ClueBot, Alexbot, Alejandrocaro35, SchreiberBike, Addbot, BobHelix, Chzz, Luckas-bot, The Parting Glass, Erik9bot, ClueBot NG, Dooooot, Flosfa, Brad7777, LatinAddict and Anonymous: 33
• List of formal systems Source: https://en.wikipedia.org/wiki/List_of_formal_systems?oldid=616003650 Contributors: Michael Hardy, Katharineamy, Paylett, Ashliveslove and Anonymous: 1

• Material implication (rule of inference) Source: https://en.wikipedia.org/wiki/Material_implication_(rule_of_inference)?oldid=639162496 Contributors: Toby Bartels, Jason Quinn, RussBot, KSchutte, Arthur Rubin, Incnis Mrsi, Gregbard, Alejandrocaro35, Quondum, Helpful Pixie Bot, Dooooot, CarrieVS, Jochen Burghardt and Anonymous: 10
• Modus ponendo tollens Source: https://en.wikipedia.org/wiki/Modus_ponendo_tollens?oldid=642934298 Contributors: Evercat, Mike Rosoft, Macai, Amsoman, FlaBot, Carolynparrishfan, Mikeblas, Arthur Rubin, Srnec, Gobonobo, Jim.belk, Gregbard, Anarchia, Addbot, Aaagmnr, Rmtzr, Dooooot, Wvenialbo and Anonymous: 15
• Modus ponens Source: https://en.wikipedia.org/wiki/Modus_ponens?oldid=670062651 Contributors: AxelBoldt, Zundark, Tarquin, Larry Sanger, Andre Engels, Rootbeer, Ryguasu, Frecklefoot, Michael Hardy, Voidvector, Liftarn, J'raxis, AugPi, BAxelrod, Charles Matthews, Dysprosia, Jitse Niesen, Andyfugard, Ruakh, Giftlite, Jrquinlisk, Leonard G., 20040302, Siroxo, Matt Crypto, Neilc, Toytoy, Antandrus, Yayay, Sword~enwiki, Jiy, Rich Farmbrough, Elwikipedista~enwiki, Jonon, Nortexoid, Obradovic Goran, Jumbuck, Marabean, M7, Trylks, Dandv, Ruud Koot, Waldir, Marudubshinki, Graham87, Rjwilmsi, Notapipe, [email protected], RexNL, Spencerk, WhyBeNormal, YurikBot, Vecter, KSmrq, Shawn81, Schoen, Voidxor, Noam~enwiki, Hakeem.gadi, Pacogo7, Otto ter Haar, Incnis Mrsi, Eskimbot, Mhss, Nicolas.Wu, Cybercobra, Spiritia, Wvbailey, Gobonobo, Robofish, Jim.belk, Don Warren, CBM, Gregbard, Cydebot, Steel, Thijs!bot, Leuko, Magioladitis, Yocko, Mbarbier, Ercarter, Heliac, Anarchia, Nathanjones15, Policron, ABF, TXiKiBoT, Broadbot, Cgwaldman, Jamelan, Chenzw, Yintan, Svick, Classicalecon, Velvetron, Logperson, Alejandrocaro35, Darkicebot, Addbot, Luckas-bot, Yobot, Jim1138, Jo3sampl, Xqbot, Tyrol5, Romnempire, Machine Elf 1735, WillNess, Wingman4l7, ClueBot NG, Trustedgunny, DBigXray, Svartskägg, MRG90, Dooooot, Planeswalkerdude, Firefirefire2,
Nimrodomer and Anonymous: 83
• Modus tollens Source: https://en.wikipedia.org/wiki/Modus_tollens?oldid=668711080 Contributors: AxelBoldt, The Cunctator, Voidvector, J'raxis, Kingturtle, Andrewa, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Andyfugard, Banno, Chuunen Baka, Highlandwolf, Jrquinlisk, Wwoods, Jorend, 20040302, Sundar, Nayuki, Neilc, Pgan002, Toytoy, Iantresman, Neutrality, Brianjd, EricBright, Jiy, Elwikipedista~enwiki, Charm, El C, Obradovic Goran, Keenan Pepper, Kenyon, Oleg Alexandrov, Philbarker, Waldir, Graham87, KYPark, FlaBot, Notapipe, Mathbot, Nivaca, Spencerk, YurikBot, Schoen, W33v1l, Icelight, Voidxor, Doncram, FF2010, Shawnc, Geoffrey.landis, Otto ter Haar, KnightRider~enwiki, Anastrophe, Chris the speller, Leland McInnes, Robma, Richard001, Gobonobo, Tim bates, Robofish, Jim.belk, Gregbard, Cydebot, Steel, Rifleman 82, Mr Gronk, Wejstheman, Thijs!bot, Pampas Cat, WinBot, JAnDbot, Husond, Anarchia, Nullie, Hiddenhearts, Izno, VolkovBot, Human step, Chaos5023, The Wonky Gnome, Ferengi, Cgwaldman, Jamelan, Dictioneer, Hotbelgo, Michaelwy, Wanderer57, Liempt, Auntof6, Alejandrocaro35, Alastair Carnegie, Mccaskey, Addbot, Luckas-bot, Yobot, Tojasonharris, KDS4444, 1exec1, Xqbot, Andersæøå, Anime Addict AA, FrescoBot, MondalorBot, Full-date unlinking bot, Lightlowemon, Ripchip Bot, Tesseract2, Farrest, Arno Peters, JSquish, Exfenestracide, ClueBot NG, TheJrLinguist, Dooooot, Flosfa, Kephir, François Robere, Bobblond, Lukesnydermusic and Anonymous: 107
• Negation introduction Source: https://en.wikipedia.org/wiki/Negation_introduction?oldid=661305432 Contributors: Michael Hardy, Jitse Niesen, JRSpriggs, Matthew Kastor, Mcpheeandrew, C1776M and Joejoebob
• Physical symbol system Source: https://en.wikipedia.org/wiki/Physical_symbol_system?oldid=602605228 Contributors: Stevertigo, Nealmcb, Michael Hardy, BAxelrod, Mdd, Obersachse, BD2412, Rjwilmsi, Yihfeng, Andriyko, Gareth E Kegg, RussBot, SmackBot, David Poole, Jon Awbrey, Gregbard,
Joanna Bryson, Alaibot, PKT, Wikid77, Nick Number, Coffee2theorems, Hulten, AlleborgoBot, CharlesGillingham, ClueBot, Addbot, Tassedethe, Informatwr, Yobot, Citation bot, Citation bot 1, Johnx33, Jonesey95, Dcmichelson, ClueBot NG, Helpful Pixie Bot, PhnomPencil and Anonymous: 14
• Predicate logic Source: https://en.wikipedia.org/wiki/Predicate_logic?oldid=668159355 Contributors: Toby Bartels, Michael Hardy, Andres, Hyacinth, Robbot, MathMartin, Giftlite, Leonard G., Mindmatrix, Thekohser, Eubot, Chobot, Sharkface217, Jpbowen, Tomisti, SmackBot, Mhss, Cybercobra, Nakon, Byelf2007, Wvbailey, Bjankuloski06en~enwiki, George100, CBM, Gregbard, Naudefj, Thijs!bot, EdJohnston, JAnDbot, Hypergeek14, Stassa, Vanished user g454XxNpUVWvxzlr, Policron, Dessources, JohnBlackburne, Anonymous Dissident, Gerakibot, Soler97, Kumioko, DesolateReality, Xiaq, ClueBot, Taxa, Djk3, TimClicks, Addbot, Jayde239, Yobot, AnomieBOT, Materialscientist, RandomDSdevel, ESSch, Keri, Logichulk, Xnn, EmausBot, WikitanvirBot, Mayur, ClueBot NG, Satellizer, Chester Markel, MerlIwBot, Helpful Pixie Bot, Virago250, Brad7777, Jochen Burghardt, BoltonSM3, Tomajohnson and Anonymous: 34
• Proof (truth) Source: https://en.wikipedia.org/wiki/Proof_(truth)?oldid=661799360 Contributors: Toby Bartels, Michael Hardy, Gandalf61, Kwamikagami, BlastOButter42, Woohookitty, BD2412, Rjwilmsi, Mayumashu, RussBot, SmackBot, Byelf2007, Vaughan Pratt, CRGreathouse, CBM, Gregbard, DumbBOT, Bongwarrior, Theodore.norvell, Chiswick Chap, Technopat, MustbeAmoocow, VanishedUserABC, Radagast3, Moonriddengirl, Dodger67, Excirial, Addbot, Favonian, Alfie66, Citation bot, False vacuum, Sławomir Biały, Citation bot 1, 9E2, Reaper Eternal, EmausBot, Zacchro, JSquish, Tijfo098, ClueBot NG, Masssly, Widr, Helpful Pixie Bot, Lowercase sigmabot, Ssonday002, Dolphin33438, Hihahe, Superdudereturns, HamboGlider, Jshaps1, Pickles123 and Anonymous: 53
• Rule of inference Source:
https://en.wikipedia.org/wiki/Rule_of_inference?oldid=654518943 Contributors: Michael Hardy, Darkwind, Poor Yorick, Rossami, BAxelrod, Hyacinth, Ldo, Timrollpickering, Markus Krötzsch, Jason Quinn, Khalid hassani, Neilc, Quadell, CSTAR, Lucidish, MeltBanana, Elwikipedista~enwiki, EmilJ, Nortexoid, Giraffedata, Joriki, Ruud Koot, Hurricane Angel, Waldir, BD2412, Kbdank71, Emallove, Brighterorange, Algebraist, YurikBot, Rsrikanth05, Cleared as filed, Arthur Rubin, Fram, Nahaj, Elwood j blues, Mhss, Chlewbot, Byelf2007, ArglebargleIV, Robofish, Tktktk, Jim.belk, Physis, JHunterJ, Grumpyyoungman01, Dan Gluck, CRGreathouse, CBM, Simeon, Gregbard, Cydebot, Thijs!bot, Epbr123, LokiClock, TXiKiBoT, Cliff, Eusebius, Addbot, Luckas-bot, AnomieBOT, Citation bot, GrouchoBot, RibotBOT, WillMall, Undsoweiter, Jonesey95, Gamewizard71, Onel5969, TomT0m, Tesseract2, Tijfo098, ClueBot NG, Delphinebbd, Ginsuloft and Anonymous: 27
• Rule of replacement Source: https://en.wikipedia.org/wiki/Rule_of_replacement?oldid=650659702 Contributors: Michael Hardy, ENeville, Arthur Rubin, Gregbard, Cliff, Legobot, SwisterTwister, FrescoBot, Olexa Riznyk, IfYouDoIfYouDon't, Helpful Pixie Bot, BG19bot, Dooooot, Jochen Burghardt and Anonymous: 4
• Tautology (rule of inference) Source: https://en.wikipedia.org/wiki/Tautology_(rule_of_inference)?oldid=654148847 Contributors: Michael Hardy, ENeville, BiT, Gregbard, Alejandrocaro35, Tinton5, Tbhotch, RjwilmsiBot, John of Reading and Anonymous: 4
• Transposition (logic) Source: https://en.wikipedia.org/wiki/Transposition_(logic)?oldid=660854930 Contributors: AxelBoldt, Mrwojo, Bdesham, Michael Hardy, Dominus, Charles Matthews, BenFrantzDale, Kwamikagami, Circeus, Cmdrjameson, Bgeer, Amerindianarts, BD2412, Gaius Cornelius, Doncram, Crystallina, SmackBot, Reedy, Mhss, Drae, Gregbard, David Eppstein, Greenwoodtree, Alan U.
Kennington, Watchduck, Gerhardvalentin, Fluffernutter, Noideta, RjwilmsiBot, Quondum, SporkBot, Dooooot and Anonymous: 12
• Turnstile (symbol) Source: https://en.wikipedia.org/wiki/Turnstile_(symbol)?oldid=652111700 Contributors: Hyacinth, Ancheta Wis, Urhixidur, Porges, Night Gyr, Oleg Alexandrov, Davidkazuhiro, Apokrif, Flamingspinach, Waldir, Koavf, William Lovas, Pburka, Hakeem.gadi, SimonMorgan, SmackBot, Unschool, CBM, Gregbard, Cydebot, Egriffin, Arthur Buchsbaum, Saibod, Methossant, Eeky, Plastikspork, Yobot, Phil Last, SporkBot, BG19bot, Leonren, JPaestpreornJeolhlna and Anonymous: 10

• Universal generalization Source: https://en.wikipedia.org/wiki/Universal_generalization?oldid=660978736 Contributors: Dieter Simon, Michael Hardy, Raymer, AugPi, Dcoetzee, Oleg Alexandrov, Vegaswikian, Ilmari Karonen, SmackBot, Mhss, Jim.belk, CBM, Gregbard, Peterdjones, Julian Mendez, Fyedernoggersnodden, Minimiscience, Rettetast, Nieske, Hans Adler, Addbot, Yobot, Erik9bot, Lotje, ChuispastonBot, Vsaraph, Op47 and Anonymous: 10
• Universal instantiation Source: https://en.wikipedia.org/wiki/Universal_instantiation?oldid=637075361 Contributors: Michael Hardy, Kku, Charles Matthews, Hyacinth, Nortexoid, Awared, Akrabbim, SmackBot, Mhss, Byelf2007, Jim.belk, CBM, Simeon, Gregbard, WinBot, JamesBWatson, SieBot, RatnimSnave, Eusebius, Alejandrocaro35, Burket, Addbot, Yobot, AnomieBOT, Erik9bot, Active Banana, Vsaraph, Fan Singh Long, Jochen Burghardt, Mark viking and Anonymous: 7

43.4.2 Images

• File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs)
• File:Associativity_of_real_number_addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f6/Associativity_of_real_number_addition.svg License: CC BY 3.0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
• File:CardContin.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/CardContin.svg License: Public domain Contributors: en:Image:CardContin.png Original artist: en:User:Trovatore, recreated by User:Stannered
• File:Commutative_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/36/Commutative_Addition.svg License: CC-BY-SA-3.0 Contributors: self-made using previous GFDL work by Melchoir Original artist: Weston.pace. Attribution: Apples in image were created by Melchoir
• File:Commutative_Word_Origin.PNG Source: https://upload.wikimedia.org/wikipedia/commons/d/da/Commutative_Word_Origin.PNG License: Public domain Contributors: Annales de Gergonne, Tome V, pg. 98 Original artist: Francois Servois
• File:Complex-adaptive-system.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/00/Complex-adaptive-system.jpg License: Public domain Contributors: Own work by Acadac : Taken from en.wikipedia.org, where Acadac was inspired to create this graphic after reading: Original artist: Acadac
• File:DeMorgan_Logic_Circuit_diagram_DIN.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/DeMorgan_Logic_Circuit_diagram_DIN.svg License: Public domain Contributors: Own work Original artist: MichaelFrey
• File:Demorganlaws.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Demorganlaws.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: Teknad
• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although minimally).”
• File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
• File:Logic.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Logic.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: It Is Me Here
• File:Logic_portal.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Logic_portal.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Merge-arrows.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/52/Merge-arrows.svg License: Public domain Contributors: ? Original artist: ?
• File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG conversion); bayo (color)
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Scale_of_justice_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0e/Scale_of_justice_2.svg License: Public domain Contributors: Own work Original artist: DTR
• File:Semigroup_associative.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Semigroup_associative.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: IkamusumeFan
• File:Symmetry_Of_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Symmetry_Of_Addition.svg License: Public domain Contributors: Own work Original artist: Weston.pace
• File:Tamari_lattice.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/46/Tamari_lattice.svg License: Public domain Contributors: Own work Original artist: David Eppstein
• File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)
• File:Vector_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/Vector_Addition.svg License: Public domain Contributors: ? Original artist: ?
• File:Venn1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Venn1001.svg License: Public domain Contributors: ? Original artist: ?

• File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Public domain Contributors: Own work Original artist: Cepheus
• File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain Contributors: ? Original artist: ?
• File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA 3.0 Contributors: Rei-artur Original artist: Nicholas Moreau
• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs), based on original logo tossed together by Brion Vibber

43.4.3 Content license

• Creative Commons Attribution-Share Alike 3.0