
Necessity and sufficiency

From Wikipedia, the free encyclopedia

Contents

1 Abductive reasoning
   1.1 History
   1.2 Deduction, induction, and abduction
   1.3 Formalizations of abduction
      1.3.1 Logic-based abduction
      1.3.2 Set-cover abduction
      1.3.3 Abductive validation
      1.3.4 Probabilistic abduction
      1.3.5 Subjective logic abduction
   1.4 History
      1.4.1 1867
      1.4.2 1878
      1.4.3 1883
      1.4.4 1902 and after
      1.4.5 Pragmatism
      1.4.6 Three levels of logic about abduction
      1.4.7 Other writers
   1.5 Applications
   1.6 See also
   1.7 References
   1.8 Notes
   1.9 External links

2 Condition (philosophy)
   2.1 References

3 Deductive reasoning
   3.1 Simple example
   3.2 Law of detachment
   3.3 Law of syllogism
   3.4 Law of contrapositive
   3.5 Validity and soundness
   3.6 History
   3.7 Education
   3.8 See also
   3.9 References
   3.10 Further reading
   3.11 External links

4 Inductive reasoning
   4.1 Description
   4.2 Inductive vs. deductive reasoning
   4.3 Criticism
      4.3.1 Biases
   4.4 Types
      4.4.1 Generalization
      4.4.2 Statistical syllogism
      4.4.3 Simple induction
      4.4.4 Causal inference
      4.4.5 Prediction
   4.5 Bayesian inference
   4.6 Inductive inference
   4.7 See also
   4.8 References
   4.9 Further reading
   4.10 External links

5 Inference
   5.1 Examples
      5.1.1 Example for definition #2
   5.2 Incorrect inference
   5.3 Automatic logical inference
      5.3.1 Example using Prolog
      5.3.2 Use with the semantic web
      5.3.3 Bayesian statistics and probability logic
      5.3.4 Nonmonotonic logic[2]
   5.4 See also
   5.5 References
   5.6 Further reading
   5.7 External links

6 Logic
   6.1 The study of logic
      6.1.1 Logical form
      6.1.2 Deductive and inductive reasoning, and abductive inference
      6.1.3 Consistency, validity, soundness, and completeness
      6.1.4 Rival conceptions of logic
   6.2 History
   6.3 Types of logic
      6.3.1 Syllogistic logic
      6.3.2 Propositional logic (sentential logic)
      6.3.3 Predicate logic
      6.3.4 Modal logic
      6.3.5 Informal reasoning
      6.3.6 Mathematical logic
      6.3.7 Philosophical logic
      6.3.8 Computational logic
      6.3.9 Bivalence and the law of the excluded middle; non-classical logics
      6.3.10 “Is logic empirical?"
      6.3.11 Implication: strict or material?
      6.3.12 Tolerating the impossible
      6.3.13 Rejection of logical truth
   6.4 See also
   6.5 Notes and references
   6.6 Bibliography
   6.7 External links

7 Necessity and sufficiency
   7.1 Definitions
   7.2 Necessity
   7.3 Sufficiency
   7.4 Relationship between necessity and sufficiency
   7.5 Simultaneous necessity and sufficiency
   7.6 See also
      7.6.1 Argument forms involving necessary and sufficient conditions
   7.7 References
   7.8 External links

8 Occam’s razor
   8.1 History
      8.1.1 Formulations before Ockham
      8.1.2 Ockham
      8.1.3 Later formulations
   8.2 Justifications
      8.2.1 Aesthetic
      8.2.2 Empirical
      8.2.3 Practical considerations and pragmatism
      8.2.4 Mathematical
      8.2.5 Other
   8.3 Applications
      8.3.1 Science and the scientific method
      8.3.2 Biology
      8.3.3 Medicine
      8.3.4 Religion
      8.3.5 Penal ethics
      8.3.6 Probability theory and statistics
   8.4 Controversial aspects of the razor
   8.5 Anti-razors
   8.6 See also
   8.7 Notes
   8.8 References
   8.9 Further reading
   8.10 External links
   8.11 Text and image sources, contributors, and licenses
      8.11.1 Text
      8.11.2 Images
      8.11.3 Content license

Chapter 1

Abductive reasoning

“Abductive” redirects here. For other uses, see Abduction (disambiguation).

Abductive reasoning (also called abduction,[1] abductive inference[2] or retroduction[3]) is a form of logical inference that goes from an observation to a hypothesis that accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as “inference to the best explanation”.[4]

The fields of law,[5] computer science, and artificial intelligence research[6] renewed interest in the subject of abduction. Diagnostic expert systems frequently employ abduction.

1.1 History

The American philosopher Charles Sanders Peirce (1839–1914) first introduced the term as “guessing”.[7] Peirce said that to abduce a hypothetical explanation a from an observed circumstance b is to surmise that a may be true because then b would be a matter of course.[8] Thus, to abduce a from b involves determining that a is sufficient, but not necessary, for b.

For example, suppose we observe that the lawn is wet. If it rained last night, then it would be unsurprising that the lawn is wet. Therefore, by abductive reasoning, the possibility that it rained last night is reasonable (but note that Peirce did not remain convinced that a single logical form covers all abduction);[9] however, some other process may have also resulted in a wet lawn, e.g. dew or lawn sprinklers. Moreover, abducing that it rained last night from the observation of a wet lawn can lead to false conclusions.

Peirce argues that good abductive reasoning from P to Q involves not simply a determination that Q is sufficient for P, but also that Q is among the most economical explanations for P. Simplification and economy both call for that “leap” of abduction.[10]

1.2 Deduction, induction, and abduction

Main article: Logical reasoning

Deductive reasoning (deduction) allows deriving b from a only where b is a formal logical consequence of a. In other words, deduction derives the consequences of the assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. For example, given that all bachelors are unmarried males, and given that this person is a bachelor, one can deduce that this person is an unmarried male.

Inductive reasoning (induction) allows inferring b from a, where b does not follow necessarily from a. a might give us very good reason to accept b, but it does not ensure b. For example, if all swans that we have observed so far are white, we may induce that the possibility that all swans are white is reasonable. We have good reason to believe the conclusion from the premise, but the truth of the conclusion is not guaranteed. (Indeed, it turns out that some swans are black.)

Abductive reasoning (abduction) allows inferring a as an explanation of b. Because of this inference, abduction allows the precondition a to be abduced from the consequence b. Deductive reasoning and abductive reasoning thus differ in the direction in which a rule like "a entails b" is used for inference. As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or Post hoc ergo propter hoc) because of multiple possible explanations for b.

For example, in a billiard game, after glancing and seeing the eight ball moving towards us, we may abduce that the cue ball struck the eight ball. The strike of the cue ball would account for the movement of the eight ball. It serves as a hypothesis that explains our observation. Given the many possible explanations for the movement of the eight ball, our abduction does not leave us certain that the cue ball in fact struck the eight ball, but our abduction, still useful, can serve to orient us in our surroundings. Despite many possible explanations for any physical process that we observe, we tend to abduce a single explanation (or a few explanations) for this process in the expectation that we can better orient ourselves in our surroundings and disregard some possibilities. Properly used, abductive reasoning can be a useful source of priors in Bayesian statistics.
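To make the difference in direction concrete, here is a minimal sketch in Python; the rule and fact names (rain, sprinkler, wet_lawn, and so on) are hypothetical illustrations, not taken from the article:

```python
# Minimal sketch contrasting deduction and abduction over rules of the
# form "cause -> effect". Rule and fact names are hypothetical.

RULES = {
    "rain": "wet_lawn",        # if it rained, the lawn is wet
    "sprinkler": "wet_lawn",   # if the sprinkler ran, the lawn is wet
    "cue_strike": "eight_ball_moves",
}

def deduce(facts):
    """Deduction: from an assumed cause, derive its effect."""
    return {effect for cause, effect in RULES.items() if cause in facts}

def abduce(observation):
    """Abduction: from an observed effect, collect candidate causes.
    Several causes may explain the same observation, so the result is
    a set of hypotheses, not a guaranteed conclusion."""
    return {cause for cause, effect in RULES.items() if effect == observation}

print(deduce({"rain"}))    # {'wet_lawn'}
print(abduce("wet_lawn"))  # {'rain', 'sprinkler'} -- both remain possible
```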

1.3 Formalizations of abduction

1.3.1 Logic-based abduction

In logic, explanation is done from a logical theory T representing a domain and a set of observations O . Abduction is the process of deriving a set of explanations of O according to T and picking out one of those explanations. For E to be an explanation of O according to T , it should satisfy two conditions:

• O follows from E and T ;

• E is consistent with T .

In formal logic, O and E are assumed to be sets of literals. The two conditions for E being an explanation of O according to theory T are formalized as:

T ∪ E |= O

T ∪ E is consistent.

Among the possible explanations E satisfying these two conditions, some other condition of minimality is usually imposed to avoid irrelevant facts (not contributing to the entailment of O) being included in the explanations. Abduction is then the process that picks out some member of E. Criteria for picking out a member representing “the best” explanation include the simplicity, the prior probability, or the explanatory power of the explanation.

A theoretical abduction method for first order classical logic based on the sequent calculus and a dual one, based on semantic tableaux (analytic tableaux), have been proposed (Cialdea Mayer & Pirri 1993). The methods are sound and complete and work for full first order logic, without requiring any preliminary reduction of formulae into normal forms. These methods have also been extended to modal logic.

Abductive logic programming is a computational framework that extends normal logic programming with abduction. It separates the theory T into two components, one of which is a normal logic program, used to generate E by means of backward reasoning, the other of which is a set of integrity constraints, used to filter the set of candidate explanations.
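As a rough illustration of the two conditions and the minimality criterion, the following sketch performs propositional abduction over definite Horn clauses; the theory, abducibles, and observation names are hypothetical, and with definite clauses and no negation the consistency condition holds trivially:

```python
# Minimal sketch of logic-based abduction over propositional Horn clauses.
# The theory, observation, and abducible names are hypothetical illustrations.
from itertools import combinations

THEORY = [                       # rules: (body, head), read as body -> head
    ({"rain"}, "wet_lawn"),
    ({"sprinkler_on"}, "wet_lawn"),
    ({"wet_lawn", "cold_night"}, "frost_on_grass"),
]
ABDUCIBLES = ["rain", "sprinkler_on", "cold_night"]

def entails(facts, rules, goal):
    """Forward chaining: do the facts plus the rules entail the goal?"""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return goal in derived

def abduce(goal):
    """Return minimum-size sets E of abducibles with THEORY ∪ E ⊨ goal.
    With definite clauses and no negation, T ∪ E is always consistent,
    so only the entailment condition needs checking in this sketch."""
    found = []
    for size in range(len(ABDUCIBLES) + 1):
        for E in combinations(ABDUCIBLES, size):
            if entails(set(E), THEORY, goal):
                found.append(E)
        if found:
            break  # stop at the smallest size that yields an explanation
    return found

print(abduce("wet_lawn"))        # [('rain',), ('sprinkler_on',)]
print(abduce("frost_on_grass"))  # [('rain', 'cold_night'), ('sprinkler_on', 'cold_night')]
```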

1.3.2 Set-cover abduction

A different formalization of abduction is based on inverting the function that calculates the visible effects of the hypotheses. Formally, we are given a set of hypotheses H and a set of manifestations M; they are related by the domain knowledge, represented by a function e that takes as an argument a set of hypotheses and gives as a result the corresponding set of manifestations. In other words, for every subset of the hypotheses H′ ⊆ H, their effects are known to be e(H′).

Abduction is performed by finding a set H′ ⊆ H such that M ⊆ e(H′). In other words, abduction is performed by finding a set of hypotheses H′ such that their effects e(H′) include all observations M.

A common assumption is that the effects of the hypotheses are independent, that is, for every H′ ⊆ H, it holds that e(H′) = ⋃h∈H′ e({h}). If this condition is met, abduction can be seen as a form of set covering.
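Under the stated independence assumption, a minimal sketch of the set-cover view might use the standard greedy approximation for set covering; the hypothesis and manifestation names below are made up for illustration:

```python
# Minimal sketch of set-cover abduction under the independence assumption
# e(H') = union of e({h}). Hypothesis and manifestation names are hypothetical.

EFFECTS = {                      # e({h}) for each single hypothesis h
    "flu":     {"fever", "cough"},
    "cold":    {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def greedy_cover(manifestations):
    """Greedily pick hypotheses until their combined effects cover all
    observed manifestations (a standard approximation for set covering)."""
    uncovered = set(manifestations)
    chosen = set()
    while uncovered:
        # pick the hypothesis explaining the most still-uncovered manifestations
        best = max(EFFECTS, key=lambda h: len(EFFECTS[h] & uncovered))
        if not EFFECTS[best] & uncovered:
            raise ValueError("observations cannot be covered by the hypotheses")
        chosen.add(best)
        uncovered -= EFFECTS[best]
    return chosen

print(greedy_cover({"fever", "cough", "itchy_eyes"}))  # {'flu', 'allergy'}
```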

1.3.3 Abductive validation

Abductive validation is the process of validating a given hypothesis through abductive reasoning. This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (see Occam’s razor). Abductive validation is common practice in hypothesis formation in science; moreover, Peirce claims that it is a ubiquitous aspect of thought:

Looking out my window this lovely spring morning, I see an azalea in full bloom. No, no! I don't see that; though that is the only way I can describe what I see. That is a proposition, a sentence, a fact; but what I perceive is not proposition, sentence, fact, but only an image, which I make intelligible in part by means of a statement of fact. This statement is abstract; but what I see is concrete. I perform an abduction when I so much as express in a sentence anything I see. The truth is that the whole fabric of our knowledge is one matted felt of pure hypothesis confirmed and refined by induction. Not the smallest advance can be made in knowledge beyond the stage of vacant staring, without making an abduction at every step.[11]

It was Peirce’s own maxim that “Facts cannot be explained by a hypothesis more extraordinary than these facts themselves; and of various hypotheses the least extraordinary must be adopted.”[12] After obtaining results from an inference procedure, we may be left with multiple assumptions, some of which may be contradictory. Abductive validation is a method for identifying the assumptions that lead to your goal.

1.3.4 Probabilistic abduction

Probabilistic abductive reasoning is a form of abductive validation, and is used extensively in areas where conclusions about possible hypotheses need to be derived, such as for making diagnoses from medical tests. For example, a pharmaceutical company that develops a test for a particular infectious disease will typically determine the reliability of the test by hiring a group of infected and a group of non-infected people to undergo the test. Assume the statements x: “Positive test”, x̄: “Negative test”, y: “Infected”, and ȳ: “Not infected”. The result of these trials will then determine the reliability of the test in terms of its sensitivity p(x|y) and false positive rate p(x|ȳ). The interpretations of the conditionals are: p(x|y): “The probability of positive test given infection”, and p(x|ȳ): “The probability of positive test in the absence of infection”. The problem with applying these conditionals in a practical setting is that they are expressed in the opposite direction to what the practitioner needs. The conditionals needed for making the diagnosis are: p(y|x): “The probability of infection given positive test”, and p(y|x̄): “The probability of infection given negative test”. The probability of infection could then have been conditionally deduced as

p(y∥x) = p(x) p(y|x) + p(x̄) p(y|x̄),

where “∥” denotes conditional deduction. Unfortunately the required conditionals are usually not directly available to the medical practitioner, but they can be obtained if the base rate of the infection in the population is known.

The required conditionals can be correctly derived by inverting the available conditionals using Bayes' rule. The inverted conditionals are obtained from

p(x|y) = p(x∧y)/p(y) and p(y|x) = p(x∧y)/p(x)  ⇒  p(y|x) = p(y) p(x|y) / p(x).

The term p(y) on the right hand side of the equation expresses the base rate of the infection in the population. Similarly, the term p(x) expresses the default likelihood of a positive test on a random person in the population. In the expressions below, a(y) and a(ȳ) = 1 − a(y) denote the base rates of y and its complement ȳ respectively, so that e.g. p(x) = a(y) p(x|y) + a(ȳ) p(x|ȳ). The full expressions for the required conditionals p(y|x) and p(y|x̄) are then

p(y|x) = a(y) p(x|y) / (a(y) p(x|y) + a(ȳ) p(x|ȳ)),
p(y|x̄) = a(y) p(x̄|y) / (a(y) p(x̄|y) + a(ȳ) p(x̄|ȳ)).

The full expression for the conditionally abduced probability of infection in a tested person, expressed as p(y∥x), given the outcome of the test, the base rate of the infection, as well as the test’s sensitivity and false positive rate, is then given by

p(y∥x) = p(x) · a(y) p(x|y) / (a(y) p(x|y) + a(ȳ) p(x|ȳ)) + p(x̄) · a(y) p(x̄|y) / (a(y) p(x̄|y) + a(ȳ) p(x̄|ȳ)),

which further simplifies to p(y∥x) = a(y) (p(x|y) + p(x̄|y)). Probabilistic abduction can thus be described as a method for inverting conditionals in order to apply probabilistic deduction.

A medical test result is typically considered positive or negative, so when applying the above equation it can be assumed that either p(x) = 1 (positive) or p(x̄) = 1 (negative). In case the patient tests positive, the above equation can be simplified to p(y∥x) = p(y|x), which will give the correct likelihood that the patient actually is infected.

The base rate fallacy in medicine,[13] or the prosecutor’s fallacy[14] in legal reasoning, consists of making the erroneous assumption that p(y|x) = p(x|y). While this reasoning error often can produce a relatively good approximation of the correct hypothesis probability value, it can lead to a completely wrong result and wrong conclusion in case the base rate is very low and the reliability of the test is not perfect. An extreme example of the base rate fallacy is to conclude that a male person is pregnant just because he tests positive in a pregnancy test. Obviously, the base rate of male pregnancy is zero, and assuming that the test is not perfect, it would be correct to conclude that the male person is not pregnant.

The expression for probabilistic abduction can be generalised to multinomial cases,[15] i.e., with a state space X of multiple states xi and a state space Y of multiple states yj.
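The inversion above is just Bayes' rule weighted by the base rate. A small numeric sketch (the sensitivity, false positive rate, and prevalence values are invented for illustration) shows both the correct computation of p(y|x) and the base rate fallacy:

```python
# Minimal sketch of probabilistic abduction: invert p(x|y) and p(x|~y)
# into p(y|x) using the base rate a(y). The numbers are made-up examples.

def p_infected_given_positive(sensitivity, false_positive_rate, base_rate):
    """Bayes' rule: p(y|x) = a(y) p(x|y) / (a(y) p(x|y) + a(~y) p(x|~y))."""
    numerator = base_rate * sensitivity
    denominator = numerator + (1.0 - base_rate) * false_positive_rate
    return numerator / denominator

sensitivity = 0.99          # p(x|y): positive test given infection
false_positive_rate = 0.05  # p(x|~y): positive test without infection
base_rate = 0.001           # a(y): prevalence of the infection

posterior = p_infected_given_positive(sensitivity, false_positive_rate, base_rate)
print(f"p(y|x) = {posterior:.4f}")   # about 0.0194 -- still under 2%

# Base rate fallacy: assuming p(y|x) = p(x|y) would suggest a 99% chance
# of infection, wildly overestimating it when the base rate is this low.
print(f"fallacious estimate = {sensitivity:.2f}")
```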

1.3.5 Subjective logic abduction

Subjective logic generalises probabilistic logic by including parameters for uncertainty in the input arguments. Abduction in subjective logic is thus similar to probabilistic abduction described above.[15] The input arguments in subjective logic are composite functions called subjective opinions, which can be binomial when the opinion applies to a single proposition or multinomial when it applies to a set of propositions. A multinomial opinion thus applies to a frame X (i.e. a state space of exhaustive and mutually disjoint propositions xi), and is denoted by the composite function ωX = (b⃗, u, a⃗), where b⃗ is a vector of masses over the propositions of X, u is the uncertainty mass, and a⃗ is a vector of base rate values over the propositions of X. These components satisfy u + Σi b⃗(xi) = 1 and Σi a⃗(xi) = 1, as well as b⃗(xi), u, a⃗(xi) ∈ [0, 1].

Assume the frames X and Y, the conditional opinions ωX|Y and ωX|Ȳ, the opinion ωX on X, and the base rate function aY on Y. Based on these parameters, subjective logic provides a method for deriving the inverted conditionals ωY|X and ωY|X̄. Using these inverted conditionals, subjective logic also provides a method for deduction. Abduction in subjective logic consists of inverting the conditionals and then applying deduction. The symbolic notation for conditional abduction is “∥”, and the operator itself is denoted as ⊚. The expression for subjective logic abduction is then:[15]

ωY∥X = ωX ⊚ (ωX|Y, ωX|Ȳ, aY).

The advantage of using subjective logic abduction compared to probabilistic abduction is that uncertainty about the probability values of the input arguments can be explicitly expressed and taken into account during the analysis. It is thus possible to perform abductive analysis in the presence of missing or incomplete input arguments, which normally results in degrees of uncertainty in the output conclusions.
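The following is only a rough illustration, not Jøsang's ⊚ operator: it represents binomial opinions as (belief, disbelief, uncertainty, base rate) tuples, reduces each to its projected probability P = b + a·u, and then applies the probabilistic abduction formula of the previous subsection to those projected probabilities; the full subjective-logic operator additionally propagates the uncertainty mass, which this sketch discards:

```python
# Rough sketch only: binomial opinions as (belief, disbelief, uncertainty,
# base_rate), reduced to projected probabilities P = b + a*u, then fed into
# ordinary probabilistic abduction. This discards uncertainty propagation,
# which the real subjective-logic abduction operator handles explicitly.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float

    def projected_probability(self) -> float:
        # Standard projection in subjective logic: P(x) = b + a*u.
        return self.belief + self.base_rate * self.uncertainty

# Hypothetical conditional opinions about a test (values are illustrative):
sensitivity = Opinion(belief=0.90, disbelief=0.05, uncertainty=0.05, base_rate=0.5)  # opinion on x given y
false_pos   = Opinion(belief=0.03, disbelief=0.92, uncertainty=0.05, base_rate=0.5)  # opinion on x given not-y
base_rate_y = 0.01                                                                    # a(y)

p_x_given_y  = sensitivity.projected_probability()
p_x_given_ny = false_pos.projected_probability()

# Probabilistic abduction on the projected probabilities (positive test observed):
p_y_given_x = (base_rate_y * p_x_given_y) / (
    base_rate_y * p_x_given_y + (1 - base_rate_y) * p_x_given_ny
)
print(f"approximate p(y|x) = {p_y_given_x:.3f}")
```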

1.4 History

The philosopher Charles Sanders Peirce (/ˈpɜrs/; 1839–1914) introduced abduction into modern logic. Over the years he called such inference hypothesis, abduction, presumption, and retroduction. He considered it a topic in logic as a normative field in philosophy, not in purely formal or mathematical logic, and eventually as a topic also in economics of research.

As two stages of the development, extension, etc., of a hypothesis in scientific inquiry, abduction and also induction are often collapsed into one overarching concept, the hypothesis. That is why, in the scientific method pioneered by Galileo and Bacon, the abductive stage of hypothesis formation is conceptualized simply as induction. Thus, in the twentieth century this collapse was reinforced by Karl Popper's explication of the hypothetico-deductive model, where the hypothesis is considered to be just “a guess”[16] (in the spirit of Peirce). However, when the formation of a hypothesis is considered the result of a process it becomes clear that this “guess” has already been tried and made more robust in thought as a necessary stage of its acquiring the status of hypothesis. Indeed, many abductions are rejected or heavily modified by subsequent abductions before they ever reach this stage.

Before 1900, Peirce treated abduction as the use of a known rule to explain an observation; e.g., it is a known rule that if it rains the grass is wet, so, to explain the fact that the grass is wet, one infers that it has rained. This remains the common use of the term “abduction” in the social sciences and in artificial intelligence.

Peirce consistently characterized it as the kind of inference that originates a hypothesis by concluding in an explanation, though an unassured one, for some very curious or surprising (anomalous) observation stated in a premise. As early as 1865 he wrote that all conceptions of cause and force are reached through hypothetical inference; in the 1900s he wrote that all explanatory content of theories is reached through abduction. In other respects Peirce revised his view of abduction over the years.[17] In later years his view came to be:

• Abduction is guessing.[7] It is “very little hampered” by rules of logic.[8] Even a well-prepared mind’s individual guesses are more frequently wrong than right.[18] But the success of our guesses far exceeds that of random luck and seems born of attunement to nature by instinct[19] (some speak of intuition in such contexts[20]).

• Abduction guesses a new or outside idea so as to account in a plausible, instinctive, economical way for a surprising or very complicated phenomenon. That is its proximate aim.[19]

• Its longer aim is to economize inquiry itself. Its rationale is inductive: it works often enough, is the only source of new ideas, and has no substitute in expediting the discovery of new truths.[21] Its rationale especially involves its role in coordination with other modes of inference in inquiry. It is inference to explanatory hypotheses for selection of those best worth trying.

• Pragmatism is the logic of abduction. Upon the generation of an explanation (which he came to regard as instinctively guided), the pragmatic maxim gives the necessary and sufficient logical rule to abduction in general. The hypothesis, being insecure, needs to have conceivable[22] implications for informed practice, so as to be testable[23][24] and, through its trials, to expedite and economize inquiry. The economy of research is what calls for abduction and governs its art.[10]

Writing in 1910, Peirce admits that “in almost everything I printed before the beginning of this century I more or less mixed up hypothesis and induction” and he traces the confusion of these two types of reasoning to logicians’ too “narrow and formalistic a conception of inference, as necessarily having formulated judgments from its premises.”[25] He started out in the 1860s treating hypothetical inference in a number of ways which he eventually peeled away as inessential or, in some cases, mistaken:

• as inferring the occurrence of a character (a characteristic) from the observed combined occurrence of multiple characters which its occurrence would necessarily involve;[26] for example, if any occurrence of A is known to necessitate occurrence of B, C, D, E, then the observation of B, C, D, E suggests by way of explanation the occurrence of A. (But by 1878 he no longer regarded such multiplicity as common to all hypothetical inference.[27])

• as aiming for a more or less probable hypothesis (in 1867 and 1883 but not in 1878; anyway by 1900 the justification is not probability but the lack of alternatives to guessing and the fact that guessing is fruitful;[28] by 1903 he speaks of the “likely” in the sense of nearing the truth in an “indefinite sense";[29] by 1908 he discusses plausibility as instinctive appeal.[19]) In a paper dated by editors as circa 1901, he discusses “instinct” and “naturalness”, along with the kind of considerations (low cost of testing, logical caution, breadth, and incomplexity) that he later calls methodeutical.[30]

• as induction from characters (but as early as 1900 he characterized abduction as guessing[28])

• as citing a known rule in a premise rather than hypothesizing a rule in the conclusion (but by 1903 he allowed either approach[8][31])

• as basically a transformation of a deductive categorical syllogism[27] (but in 1903 he offered a variation on modus ponens instead,[8] and by 1911 he was unconvinced that any one form covers all hypothetical inference[9]).

1.4.1 1867

In 1867, in “On the Natural Classification of Arguments”,[26] hypothetical inference always deals with a cluster of characters (call them P′, P′′, P′′′, etc.) known to occur at least whenever a certain character (M) occurs. Note that categorical syllogisms have elements traditionally called middles, predicates, and subjects. For example: “All men [middle] are mortal [predicate]; Socrates [subject] is a man [middle]; ergo Socrates [subject] is mortal [predicate]”. Below, 'M' stands for a middle; 'P' for a predicate; 'S' for a subject. Note also that Peirce held that all deduction can be put into the form of the categorical syllogism Barbara (AAA-1).

1.4.2 1878

In 1878, in “Deduction, Induction, and Hypothesis”,[27] there is no longer a need for multiple characters or predicates in order for an inference to be hypothetical, although it is still helpful. Moreover Peirce no longer poses hypothetical inference as concluding in a probable hypothesis. In the forms themselves, it is understood but not explicit that induc- tion involves random selection and that hypothetical inference involves response to a “very curious circumstance”. The forms instead emphasize the modes of inference as rearrangements of one another’s propositions (without the bracketed hints shown below).

1.4.3 1883

Peirce long treated abduction in terms of induction from characters or traits (weighed, not counted like objects), explicitly so in his influential 1883 “A Theory of Probable Inference”, in which he returns to involving probability in the hypothetical conclusion.[32] Like “Deduction, Induction, and Hypothesis” in 1878, it was widely read (see the historical books on statistics by Stephen Stigler), unlike his later amendments of his conception of abduction. Today abduction remains most commonly understood as induction from characters and extension of a known rule to cover unexplained circumstances. Sherlock Holmes uses this method of reasoning in the stories of Arthur Conan Doyle, although Holmes refers to it as deductive reasoning.

1.4.4 1902 and after

In 1902 Peirce wrote that he now regarded the syllogistical forms and the doctrine of extension and comprehension (i.e., objects and characters as referenced by terms), as being less fundamental than he had earlier thought.[33] In 1903 he offered the following form for abduction:[8]

The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.

The hypothesis is framed, but not asserted, in a premise, then asserted as rationally suspectable in the conclusion. Thus, as in the earlier categorical syllogistic form, the conclusion is formulated from some premise(s). But all the same the hypothesis consists more clearly than ever in a new or outside idea beyond what is known or observed. Induction in a sense goes beyond observations already reported in the premises, but it merely amplifies ideas already known to represent occurrences, or tests an idea supplied by hypothesis; either way it requires previous abductions in order to get such ideas in the first place. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. Note that the hypothesis (“A”) could be of a rule. It need not even be a rule strictly necessitating the surprising observation (“C”), which needs to follow only as a “matter of course”; or the “course” itself could amount to some known rule, merely alluded to, and also not necessarily a rule of strict necessity.

In the same year, Peirce wrote that reaching a hypothesis may involve placing a surprising observation under either a newly hypothesized rule or a hypothesized combination of a known rule with a peculiar state of facts, so that the phenomenon would be not surprising but instead either necessarily implied or at least likely.[31]

Peirce did not remain quite convinced about any such form as the categorical syllogistic form or the 1903 form. In 1911, he wrote, “I do not, at present, feel quite convinced that any logical form can be assigned that will cover all 'Retroductions’. For what I mean by a Retroduction is simply a conjecture which arises in the mind.”[9]

1.4.5 Pragmatism

In 1901 Peirce wrote, “There would be no logic in imposing rules, and saying that they ought to be followed, until it is made out that the purpose of hypothesis requires them.”[34] In 1903 Peirce called pragmatism “the logic of abduction” and said that the pragmatic maxim gives the necessary and sufficient logical rule to abduction in general.[24] The pragmatic maxim is: “Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.” It is a method for fruitful clarification of conceptions by equating the meaning of a conception with the conceivable practical implications of its object’s conceived effects. Peirce held that that is precisely tailored to abduction’s purpose in inquiry, the forming of an idea that could conceivably shape informed conduct. In various writings in the 1900s[10][35] he said that the conduct of abduction (or retroduction) is governed by considerations of economy, belonging in particular to the economics of research. He regarded economics as a normative science whose analytic portion might be part of logical methodeutic (that is, theory of inquiry).[36]

1.4.6 Three levels of logic about abduction

Peirce came over the years to divide (philosophical) logic into three departments:

1. Stechiology, or speculative grammar, on the conditions for meaningfulness. Classification of signs (semblances, symptoms, symbols, etc.) and their combinations (as well as their objects and interpretants).

2. Logical critic, or logic proper, on validity or justifiability of inference, the conditions for true representation. Critique of arguments in their various modes (deduction, induction, abduction).

3. Methodeutic, or speculative rhetoric, on the conditions for determination of interpretations. Methodology of inquiry in its interplay of modes.

Peirce had, from the start, seen the modes of inference as being coordinated together in scientific inquiry and, by the 1900s, held that hypothetical inference in particular is inadequately treated at the level of critique of arguments.[23][24] To increase the assurance of a hypothetical conclusion, one needs to deduce implications about evidence to be found, predictions which induction can test through observation so as to evaluate the hypothesis. That is Peirce’s outline of the scientific method of inquiry, as covered in his inquiry methodology, which includes pragmatism or, as he later called it, pragmaticism, the clarification of ideas in terms of their conceivable implications regarding informed practice.

Classification of signs

As early as 1866,[37] Peirce held that:

1. Hypothesis (abductive inference) is inference through an icon (also called a likeness).
2. Induction is inference through an index (a sign by factual connection); a sample is an index of the totality from which it is drawn.
3. Deduction is inference through a symbol (a sign by interpretive habit irrespective of resemblance or connection to its object).

In 1902, Peirce wrote that, in abduction: “It is recognized that the phenomena are like, i.e. constitute an Icon of, a replica of a general conception, or Symbol.”[38]

Critique of arguments

At the critical level Peirce examined the forms of abductive arguments (as discussed above), and came to hold that the hypothesis should economize explanation for plausibility in terms of the feasible and natural. In 1908 Peirce described this plausibility in some detail.[19] It involves not likeliness based on observations (which is instead the inductive evaluation of a hypothesis), but instead optimal simplicity in the sense of the “facile and natural”, as by Galileo’s natural light of reason and as distinct from “logical simplicity” (Peirce does not dismiss logical simplicity entirely but sees it in a subordinate role; taken to its logical extreme it would favor adding no explanation to the observation at all).

Even a well-prepared mind guesses oftener wrong than right, but our guesses succeed better than random luck at reaching the truth or at least advancing the inquiry, and that indicates to Peirce that they are based in instinctive attunement to nature, an affinity between the mind’s processes and the processes of the real, which would account for why appealingly “natural” guesses are the ones that oftenest (or least seldom) succeed; to which Peirce added the argument that such guesses are to be preferred since, without “a natural bent like nature’s”, people would have no hope of understanding nature.

In 1910 Peirce made a three-way distinction between probability, verisimilitude, and plausibility, and defined plausibility with a normative “ought”: “By plausibility, I mean the degree to which a theory ought to recommend itself to our belief independently of any kind of evidence other than our instinct urging us to regard it favorably.”[39] For Peirce, plausibility does not depend on observed frequencies or probabilities, or on verisimilitude, or even on testability, which is not a question of the critique of the hypothetical inference as an inference, but rather a question of the hypothesis’s relation to the inquiry process.

The phrase “inference to the best explanation” (not used by Peirce but often applied to hypothetical inference) is not always understood as referring to the most simple and natural. However, in other senses of “best”, such as “standing up best to tests”, it is hard to know which is the best explanation to form, since one has not tested it yet. Still, for Peirce, any justification of an abductive inference as good is not completed upon its formation as an argument (unlike with induction and deduction) and instead depends also on its methodological role and promise (such as its testability) in advancing inquiry.[23][24][40]

Methodology of inquiry

At the methodeutical level Peirce held that a hypothesis is judged and selected[23] for testing because it offers, via its trial, to expedite and economize the inquiry process itself toward new truths, first of all by being testable and also by further economies,[10] in terms of cost, value, and relationships among guesses (hypotheses). Here, considerations such as probability, absent from the treatment of abduction at the critical level, come into play. For examples:

• Cost: A simple but low-odds guess, if low in cost to test for falsity, may belong first in line for testing, to get it out of the way. If surprisingly it stands up to tests, that is worth knowing early in the inquiry, which otherwise might have stayed long on a wrong though seemingly likelier track.

• Value: A guess is intrinsically worth testing if it has instinctual plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be treacherous.

• Interrelationships: Guesses can be chosen for trial strategically for their
   • caution, for which Peirce gave as example the game of Twenty Questions,
   • breadth of applicability to explain various phenomena, and
   • incomplexity, that of a hypothesis that seems too simple but whose trial “may give a good 'leave,' as the billiard-players say”, and be instructive for the pursuit of various and conflicting hypotheses that are less simple.[41]

1.4.7 Other writers

Norwood Russell Hanson, a philosopher of science, wanted to grasp a logic explaining how scientific discoveries take place. He used Peirce’s notion of abduction for this.[42] Further development of the concept can be found in Peter Lipton's Inference to the Best Explanation (Lipton, 1991).

1.5 Applications

Applications in artificial intelligence include fault diagnosis, belief revision, and automated planning. The most direct application of abduction is that of automatically detecting faults in systems: given a theory relating faults with their effects and a set of observed effects, abduction can be used to derive sets of faults that are likely to be the cause of the problem.
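A minimal sketch of this fault-diagnosis use (the fault and symptom names are hypothetical) enumerates the minimal fault sets whose known effects cover all observed symptoms:

```python
# Minimal fault-diagnosis sketch: find minimal sets of faults whose known
# effects jointly cover all observed symptoms. Names are hypothetical.
from itertools import combinations

FAULT_EFFECTS = {
    "dead_battery": {"no_lights", "engine_wont_crank"},
    "blown_fuse":   {"no_lights"},
    "empty_tank":   {"engine_cranks_but_wont_start"},
}

def diagnoses(observed):
    """Return all minimal fault sets F with observed ⊆ effects(F)."""
    observed = set(observed)
    results = []
    for size in range(1, len(FAULT_EFFECTS) + 1):
        for faults in combinations(FAULT_EFFECTS, size):
            covered = set().union(*(FAULT_EFFECTS[f] for f in faults))
            is_superset_of_known = any(set(r) <= set(faults) for r in results)
            if observed <= covered and not is_superset_of_known:
                results.append(faults)
    return results

print(diagnoses({"no_lights", "engine_wont_crank"}))
# [('dead_battery',)] -- the single fault already explains both observations
```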

In medicine, abduction can be seen as a component of clinical evaluation and judgment.[43][44]

Abduction can also be used to model automated planning.[45] Given a logical theory relating action occurrences with their effects (for example, a formula of the event calculus), the problem of finding a plan for reaching a state can be modeled as the problem of abducting a set of literals implying that the final state is the goal state.

In intelligence analysis, Analysis of Competing Hypotheses and Bayesian networks, probabilistic abductive reasoning is used extensively. Similarly in medical diagnosis and legal reasoning, the same methods are being used, although there have been many examples of errors, especially caused by the base rate fallacy and the prosecutor’s fallacy.

Belief revision, the process of adapting beliefs in view of new information, is another field in which abduction has been applied. The main problem of belief revision is that the new information may be inconsistent with the corpus of beliefs, while the result of the incorporation cannot be inconsistent. This process can be done by the use of abduction: once an explanation for the observation has been found, integrating it does not generate inconsistency. This use of abduction is not straightforward, as adding propositional formulae to other propositional formulae can only make inconsistencies worse. Instead, abduction is done at the level of the ordering of preference of the possible worlds. Preference models use, for example, utility models.

In the philosophy of science, abduction has been the key inference method to support scientific realism, and much of the debate about scientific realism is focused on whether abduction is an acceptable method of inference.

In historical linguistics, abduction during language acquisition is often taken to be an essential part of processes of language change such as reanalysis and analogy.[46]

In anthropology, Alfred Gell in his influential book Art and Agency defined abduction (after Eco[47]) as “a case of synthetic inference 'where we find some very curious circumstances, which would be explained by the supposition that it was a case of some general rule, and thereupon adopt that supposition'”.[48] Gell criticizes existing 'anthropological' studies of art for being too preoccupied with aesthetic value and not preoccupied enough with the central anthropological concern of uncovering 'social relationships', specifically the social contexts in which artworks are produced, circulated, and received.[49] Abduction is used as the mechanism for getting from art to agency. That is, abduction can explain how works of art inspire a sensus communis: the commonly-held views shared by members that characterize a given society.[50] The question Gell asks in the book is, 'how does it initially "speak" to people?' He answers by saying that “No reasonable person could suppose that art-like relations between people and things do not involve at least some form of semiosis.”[48] However, he rejects any intimation that semiosis can be thought of as a language, because then he would have to admit to some pre-established existence of the sensus communis that he wants to claim only emerges afterwards out of art. Abduction is the answer to this conundrum because the tentative nature of the abduction concept (Peirce likened it to guessing) means that not only can it operate outside of any pre-existing framework, but moreover, it can actually intimate the existence of a framework.

As Gell reasons in his analysis, the physical existence of the artwork prompts the viewer to perform an abduction that imbues the artwork with intentionality. A statue of a goddess, for example, in some senses actually becomes the goddess in the mind of the beholder; and represents not only the form of the deity but also her intentions (which are adduced from the feeling of her very presence). Therefore, through abduction, Gell claims that art can have the kind of agency that plants the seeds that grow into cultural myths. The power of agency is the power to motivate actions and inspire ultimately the shared understanding that characterizes any given society.[50]

1.6 See also

• Abductive logic programming

• Analogy

• Analysis of Competing Hypotheses

• Charles Sanders Peirce

• Charles Sanders Peirce bibliography

• Deductive reasoning

• Defeasible reasoning

• Doug Walton

• Gregory Bateson

• Inductive inference

• Inductive probability

• Inductive reasoning

• Inquiry

• List of thinking-related topics

• Practopoiesis

• Logic

• Subjective logic

• Logical reasoning

• Maximum likelihood

• Scientific method

• Sherlock Holmes

• Sign relation

1.7 References

• This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the “relicensing” terms of the GFDL, version 1.3 or later.

• Awbrey, Jon, and Awbrey, Susan (1995), “Interpretation as Action: The Risk of Inquiry”, Inquiry: Critical Thinking Across the Disciplines, 15, 40-52. Eprint

• Cialdea Mayer, Marta and Pirri, Fiora (1993) “First order abduction via tableau and sequent calculi” Logic Jnl IGPL 1993 1: 99-117; doi:10.1093/jigpal/1.1.99. Oxford Journals

• Cialdea Mayer, Marta and Pirri, Fiora (1995) “Propositional Abduction in Modal Logic”, Logic Jnl IGPL 1995 3: 907-919; doi:10.1093/jigpal/3.6.907 Oxford Journals

• Edwards, Paul (1967, eds.), “The Encyclopedia of Philosophy,” Macmillan Publishing Co, Inc. & The Free Press, New York. Collier Macmillan Publishers, London.

• Eiter, T., and Gottlob, G. (1995), “The Complexity of Logic-Based Abduction”, Journal of the ACM, 42.1, 3-42.

• Hanson, N. R. (1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science, Cambridge: Cambridge University Press. ISBN 978-0-521-09261-6.

• Harman, Gilbert (1965). “The Inference to the Best Explanation”. The Philosophical Review 74 (1): 88–95. doi:10.2307/2183532.

• Josephson, John R., and Josephson, Susan G. (1995, eds.), Abductive Inference: Computation, Philosophy, Technology, Cambridge University Press, Cambridge, UK.

• Lipton, Peter. (2001). Inference to the Best Explanation, London: Routledge. ISBN 0-415-24202-9.

• McKaughan, Daniel J. (2008), “From Ugly Duckling to Swan: C. S. Peirce, Abduction, and the Pursuit of Scientific Theories”, Transactions of the Charles S. Peirce Society, v. 44, no. 3 (summer), 446–468. Abstract.

• Menzies, T (1996). “Applications of Abduction: Knowledge-Level Modeling” (PDF). International Journal of Human-Computer Studies 45 (3): 305–335. doi:10.1006/ijhc.1996.0054. 1.8. NOTES 11

• Queiroz, Joao & Merrell, Floyd (guest eds.). (2005). “Abduction - between subjectivity and objectivity”. (special issue on abductive inference) Semiotica 153 (1/4).

• Santaella, Lucia (1997) “The Development of Peirce’s Three Types of Reasoning: Abduction, Deduction, and Induction”, 6th Congress of the IASS. Eprint.

• Sebeok, T. (1981) “You Know My Method”. In Sebeok, T. “The Play of Musement”. Indiana. Bloomington, IA.

• Yu, Chong Ho (1994), “Is There a Logic of Exploratory Data Analysis?", Annual Meeting of American Edu- cational Research Association, New Orleans, LA, April, 1994. Website of Dr. Chong Ho (Alex) Yu

1.8 Notes

[1] • Magnani, L. “Abduction, Reason, and Science: Processes of Discovery and Explanation”. Kluwer Academic/Plenum Publishers, New York, 2001. xvii. 205 pages. Hard cover, ISBN 0-306-46514-0.
   • R. Josephson, J. & G. Josephson, S. “Abductive Inference: Computation, Philosophy, Technology”, Cambridge University Press, New York & Cambridge (U.K.). viii. 306 pages. Hard cover (1994), ISBN 0-521-43461-0, Paperback (1996), ISBN 0-521-57545-1.
   • Bunt, H. & Black, W. “Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics” (Natural Language Processing, 1.) John Benjamins, Amsterdam & Philadelphia, 2000. vi. 471 pages. Hard cover, ISBN 90-272-4983-0 (Europe), 1-58619-794-2 (U.S.)

[2] R. Josephson, J. & G. Josephson, S. “Abductive Inference: Computation, Philosophy, Technology” Cambridge University Press, New York & Cambridge (U.K.). viii. 306 pages. Hard cover (1994), ISBN 0-521-43461-0, Paperback (1996), ISBN 0-521-57545-1.

[3] “Retroduction | Dictionary | Commens”. Commens – Digital Companion to C. S. Peirce. Mats Bergman, Sami Paavola & João Queiroz. Retrieved 2014-08-24.

[4] Sober, Elliot. Core Questions in Philosophy,5th edition.

[5] See, e.g. Analysis of Evidence, 2d ed. by Terence Anderson (Cambridge University Press, 2005)

[6] For examples, see "Abductive Inference in Reasoning and Perception", John R. Josephson, Laboratory for Artificial Intelligence Research, Ohio State University, and Abduction, Reason, and Science. Processes of Discovery and Explanation by Lorenzo Magnani (Kluwer Academic/Plenum Publishers, New York, 2001).

[7] Peirce, C. S.

• “On the Logic of drawing History from Ancient Documents especially from Testimonies” (1901), Collected Papers v. 7, paragraph 219.
• “PAP” ["Prolegomena to an Apology for Pragmatism"], MS 293 c. 1906, New Elements of Mathematics v. 4, pp. 319-320.
• A Letter to F. A. Woods (1913), Collected Papers v. 8, paragraphs 385-388.

(See under "Abduction" and "Retroduction" at Commens Dictionary of Peirce’s Terms.)

[8] Peirce, C. S. (1903), Harvard lectures on pragmatism, Collected Papers v. 5, paragraphs 188–189.

[9] A Letter to J. H. Kehler (1911), New Elements of Mathematics v. 3, pp. 203–4, see under "Retroduction" at Commens Dictionary of Peirce’s Terms.

[10] Peirce, C.S. (1902), application to the Carnegie Institution, see MS L75.329-330, from Draft D of Memoir 27:

Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuristic and is the first question of heuristic, is to be governed by economical considerations.

[11] Peirce MS. 692, quoted in Sebeok, T. (1981) "You Know My Method" in Sebeok, T., The Play of Musement, Bloomington, IA: Indiana, page 24.

[12] Peirce MS. 696, quoted in Sebeok, T. (1981) "You Know My Method" in Sebeok, T., The Play of Musement, Bloomington, IA: Indiana, page 31.

[13] Jonathan Koehler. The Base Rate Fallacy Reconsidered: Descriptive, Normative and Methodological Challenges. Behav- ioral and Brain Sciences. 19, 1996.

[14] Robertson, B., & Vignaux, G. A. (1995). Interpreting evidence: Evaluating forensic evidence in the courtroom. Chichester: John Wiley and Sons.

[15] A. Jøsang. Conditional Reasoning with Subjective Logic. Journal of multiple valued logic and soft computing. 15(1), pp. 5-38, 2008. PDF

[16] Popper, Karl (2002), Conjectures and Refutations: The Growth of Scientific Knowledge, London, UK: Routledge. p 536

[17] See Santaella, Lucia (1997) “The Development of Peirce’s Three Types of Reasoning: Abduction, Deduction, and Induc- tion”, 6th Congress of the IASS. Eprint.

[18] Peirce, C. S. (1908), "A Neglected Argument for the Reality of God", Hibbert Journal v. 7, pp. 90–112, see §4. In Collected Papers v. 6, see paragraph 476. In The Essential Peirce v. 2, see p. 444.

[19] Peirce, C. S. (1908), "A Neglected Argument for the Reality of God", Hibbert Journal v. 7, pp. 90–112. See both part III and part IV. Reprinted, including originally unpublished portion, in Collected Papers v. 6, paragraphs 452–85, Essential Peirce v. 2, pp. 434–50, and elsewhere.

[20] Peirce used the term “intuition” not in the sense of an instinctive or anyway half-conscious inference as people often do currently. Instead he used “intuition” usually in the sense of a cognition devoid of logical determination by previous cognitions. He said, “We have no power of Intuition” in that sense. See his “Some Consequences of Four Incapacities” (1868), Eprint.

[21] For a relevant discussion of Peirce and the aims of abductive inference, see McKaughan, Daniel J. (2008), “From Ugly Duckling to Swan: C. S. Peirce, Abduction, and the Pursuit of Scientific Theories”, Transactions of the Charles S. Peirce Society, v. 44, no. 3 (summer), 446–468.

[22] Peirce means “conceivable” very broadly. See Collected Papers v. 5, paragraph 196, or Essential Peirce v. 2, p. 235, “Pragmatism as the Logic of Abduction” (Lecture VII of the 1903 Harvard lectures on pragmatism):

It allows any flight of imagination, provided this imagination ultimately alights upon a possible practical effect; and thus many hypotheses may seem at first glance to be excluded by the pragmatical maxim that are not really so excluded.

[23] Peirce, C. S., Carnegie Application (L75, 1902), New Elements of Mathematics v. 4, pp. 37–38. See under "Abduction" at the Commens Dictionary of Peirce’s Terms:

Methodeutic has a special interest in Abduction, or the inference which starts a scientific hypothesis. For it is not sufficient that a hypothesis should be a justifiable one. Any hypothesis which explains the facts is justified critically. But among justifiable hypotheses we have to select that one which is suitable for being tested by experiment.

[24] Peirce, “Pragmatism as the Logic of Abduction” (Lecture VII of the 1903 Harvard lectures on pragmatism), see parts III and IV. Published in part in Collected Papers v. 5, paragraphs 180–212 (see 196–200, Eprint), and in full in Essential Peirce v. 2, pp. 226–241 (see sections III and IV).

.... What is good abduction? What should an explanatory hypothesis be to be worthy to rank as a hypoth- esis? Of course, it must explain the facts. But what other conditions ought it to fulfill to be good? .... Any hypothesis, therefore, may be admissible, in the absence of any special reasons to the contrary, provided it be capable of experimental verification, and only insofar as it is capable of such verification. This is approximately the doctrine of pragmatism.

[25] Peirce, A Letter to Paul Carus circa 1910, Collected Papers v. 8, paragraphs 227–228. See under "Hypothesis" at the Commens Dictionary of Peirce’s Terms.

[26] Peirce, C. S. (1867), “On the Natural Classification of Arguments”, Proceedings of the American Academy of Arts and Sciences v. 7, pp. 261–287. Presented April 9, 1867. See especially starting at p. 284 in Part III §1. Reprinted in Collected Papers v. 2, paragraphs 461–516 and Writings v. 2, pp. 23–49.

[27] Peirce, C. S. (1878), “Deduction, Induction, and Hypothesis”, Popular Science Monthly, v. 13, pp. 470–82, see 472. Collected Papers 2.619–44, see 623. 1.8. NOTES 13

[28] A letter to Langley, 1900, published in Historical Perspectives on Peirce’s Logic of Science. See excerpts under "Abduction" at the Commens Dictionary of Peirce’s Terms.

[29] “A Syllabus of Certain Topics of Logic” (1903 manuscript), Essential Peirce v. 2, see p. 287. See under "Abduction" at the Commens Dictionary of Peirce’s Terms.

[30] Peirce, C. S., “On the Logic of Drawing History from Ancient Documents”, dated as circa 1901 both by the editors of Collected Papers (see CP v. 7, bk 2, ch. 3, footnote 1) and by those of the Essential Peirce (EP) (Eprint). The article’s discussion of abduction is in CP v. 7, paragraphs 218–31 and in EP v. 2, pp. 107–14.

[31] Peirce, C. S., “A Syllabus of Certain Topics of Logic” (1903), Essential Peirce v. 2, p. 287:

The mind seeks to bring the facts, as modified by the new discovery, into order; that is, to form a general conception embracing them. In some cases, it does this by an act of generalization. In other cases, no new law is suggested, but only a peculiar state of facts that will “explain” the surprising phenomenon; and a law already known is recognized as applicable to the suggested hypothesis, so that the phenomenon, under that assumption, would not be surprising, but quite likely, or even would be a necessary result. This synthesis suggesting a new conception or hypothesis, is the Abduction.

[32] Peirce, C. S. (1883), “A Theory of Probable Inference” in Studies in Logic.

[33] In Peirce, C. S., 'Minute Logic' circa 1902, Collected Papers v. 2, paragraph 102. See under "Abduction" at Commens Dictionary of Peirce’s Terms.

[34] Peirce, “On the Logic of drawing History from Ancient Documents”, 1901 manuscript, Collected Papers v. 7, paragraphs 164–231, see 202, reprinted in Essential Peirce v. 2, pp. 75–114, see 95. See under "Abduction" at Commens Dictionary of Peirce’s Terms.

[35] Peirce, “On the Logic of Drawing Ancient History from Documents”, Essential Peirce v. 2, see pp. 107–9.

[36] Peirce, Carnegie application, L75 (1902), Memoir 28: “On the Economics of Research”, scroll down to Draft E. Eprint.

[37] Peirce, C. S., the 1866 Lowell Lectures on the Logic of Science, Writings of Charles S. Peirce v. 1, p. 485. See under "Hypothesis" at Commens Dictionary of Peirce’s Terms.

[38] Peirce, C. S., “A Syllabus of Certain Topics of Logic”, written 1903. See The Essential Peirce v. 2, p. 287. Quote viewable under "Abduction" at Commens Dictionary of Peirce’s Terms.

[39] Peirce, A Letter to Paul Carus 1910, Collected Papers v. 8, see paragraph 223.

[40] Peirce, C. S. (1902), Application to the Carnegie Institution, Memoir 27, Eprint: “Of the different classes of arguments, abductions are the only ones in which after they have been admitted to be just, it still remains to inquire whether they are advantageous.”

[41] Peirce, “On the Logic of Drawing Ancient History from Documents”, Essential Peirce v. 2, see pp. 107–9 and 113. On Twenty Questions, p. 109, Peirce has pointed out that if each question eliminates half the possibilities, twenty questions can choose from among 2^20 or 1,048,576 objects, and goes on to say:

Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do. The secret of the business lies in the caution which breaks a hypothesis up into its smallest logical components, and only risks one of them at a time.

[42] Schwendtner, Tibor and Ropolyi, László and Kiss, Olga (eds): Hermeneutika és a természettudományok. Áron Kiadó, Budapest, 2001. It is written in Hungarian. Meaning of the title: Hermeneutics and the natural sciences. See, e.g., Hanson’s Patterns of Discovery (Hanson, 1958), especially pp. 85-92

[43] Rapezzi, C; Ferrari, R; Branzi, A (24 December 2005). “White coats and fingerprints: diagnostic reasoning in medicine and investigative methods of fictional detectives”. BMJ (Clinical research ed.) 331 (7531): 1491–4. doi:10.1136/bmj.331.7531.1491. PMC 1322237. PMID 16373725. Retrieved 17 January 2014.

[44] Rejón Altable, C (October 2012). “Logic structure of clinical judgment and its relation to medical and psychiatric semi- ology”. Psychopathology 45 (6): 344–51. doi:10.1159/000337968. PMID 22854297. Retrieved 17 January 2014.

[45] Kave Eshghi. Abductive planning with the event calculus. In Robert A. Kowalski, Kenneth A. Bowen editors: Logic Programming, Proceedings of the Fifth International Conference and Symposium, Seattle, Washington, August 15–19, 1988. MIT Press 1988, ISBN 0-262-61056-6

[46] April M. S. McMahon (1994): Understanding language change. Cambridge: Cambridge University Press. ISBN 0-521-44665-1

[47] Eco, U. (1976). “A theory of Semiotics”. Bloomington, IA: Indiana. p 131

[48] Gell, A. 1998, Art and Agency. Oxford: Oxford. p 14

[49] Bowden, R. (2004) A critique of Alfred Gell on Art and Agency. Retrieved Sept 2007 from: Find Articles at BNET

[50] Whitney D. (2006) 'Abduction the agency of art.' Retrieved May 2009 from: University of California, Berkeley

1.9 External links

• Abduction entry by Igor Douven in the Stanford Encyclopedia of Philosophy
• Abductive reasoning at the Indiana Philosophy Ontology Project

• Abductive reasoning at PhilPapers

• "Abductive Inference" (once there, scroll down), John R. Josephson, Laboratory for Artificial Intelligence Research, Ohio State University. (Former webpage via the Wayback Machine.)

• "Deduction, Induction, and Abduction", Chapter 3 in article "Charles Sanders Peirce" by Robert Burch, 2001 and 2006, in the Stanford Encyclopedia of Philosophy.

• "Abduction", links to articles and websites on abductive inference, Martin Ryder.

• International Research Group on Abductive Inference, Uwe Wirth and Alexander Roesler, eds. Uses frames. Click on link at bottom of its home page for English. Wirth moved to U. of Gießen, Germany, and set up Abduktionsforschung, home page not in English but see Artikel section there. Abduktionsforschung home page via Google translation.

• "'You Know My Method': A Juxtaposition of Charles S. Peirce and Sherlock Holmes" (1981), by Thomas Sebeok with Jean Umiker-Sebeok, from The Play of Musement, Thomas Sebeok, Bloomington, Indiana: Indiana University Press, pp. 17–52.

• Commens Dictionary of Peirce’s Terms, Mats Bergman and Sami Paavola, editors, Helsinki U. Peirce’s own definitions, often many per term across the decades. There, see “Hypothesis [as a form of reasoning]", “Abduction”, “Retroduction”, and “Presumption [as a form of reasoning]".

Chapter 2: Condition (philosophy)

Comprehensive treatment of the word "condition" requires emphasizing that it is ambiguous in the sense of having multiple normal meanings and that its meanings are often vague in the sense of admitting borderline cases. According to the 2007 American Philosophy: an Encyclopedia, in one widely used sense, conditions are or resemble qualities, properties, features, characteristics, or attributes.[1] In these senses, a condition is often denoted by a nominalization of a grammatical predicate: 'being equilateral' is a nominalization of the predicate 'is equilateral'. Being equilateral is a necessary condition for being square. Being equilateral and being equiangular are two necessary conditions for being a square. In order for a polygon to be a square, it is necessary for it to be equilateral and it is necessary for it to be equiangular. Being a quadrangle that is both equilateral and equiangular is a sufficient condition for being a square. In order for a quadrangle to be a square, it is sufficient for it to be both equilateral and equiangular. Being equilateral and being equiangular are separately necessary and jointly sufficient conditions for a quadrangle to be a square. Every condition is both necessary and sufficient for itself. The relational phrases 'is necessary for' and 'is sufficient for' are often elliptical for 'is a necessary condition for' and 'is a sufficient condition for'. These senses may be called attributive; other senses that may be called instrumental, causal, and situational are discussed below.

Every condition applies to everything that satisfies it. Every individual satisfies every condition that applies to it. The condition of being equilateral applies to every square, and every square satisfies the condition of being equilateral. The satisfaction relation relates individuals to conditions, and the application relation relates conditions to individuals. The satisfaction and application relations are converses of each other. Necessity and sufficiency, the relations expressed by 'is a necessary condition for' and 'is a sufficient condition for', relate conditions to conditions, and they are converses of each other. Every condition necessary for a given condition is one that the given condition is sufficient for, and conversely.

As a result of a chain of developments tracing back to George Boole and Augustus De Morgan, it has become somewhat standard to limit the individuals pertinent to a given discussion. The collection of pertinent individuals is usually called the universe of discourse, an expression coined by Boole in 1854. In discussions of ordinary Euclidean plane geometry, for example, the universe of discourse can be taken to be the class of plane figures. Thus, squares are pertinent [individuals], but conditions, propositions, proofs, and geometers are not. Moreover, the collection of pertinent conditions is automatically limited to those coherently applicable to individuals in the universe of discourse. Thus, triangularity and circularity are pertinent [conditions], but truth, validity, bravery, and the like are not.

Some philosophers posit universal and null conditions. A universal condition applies to or is satisfied by every pertinent individual. A null condition applies to or is satisfied by no pertinent individual. In ordinary Euclidean plane geometry, the condition of being planar is universal and the condition of being both round and square is null. Every figure satisfies the condition of being planar. No figure satisfies the condition of being round and square.
Some philosophers posit for each given condition a complementary condition that applies to a pertinent individual if and only if the individual does not satisfy the given condition. In some of several senses, consequence is a relation between conditions. Being equilateral and being equiangular are two consequences of being square. In the sense used here, given any two conditions, the first is a consequence of the second if and only if the second is a sufficient condition for the first. Equivalently, being a consequence of a given condition is coextensive with being a necessary condition for it. The relational verb 'implies’ is frequently used for the converse of the relational verb phrase 'is a consequence of'. Given any two conditions, the first implies the second if and only if the second is a consequence of the first. In the attributive senses under discussion, a consequence of a

condition cannot be said to be a result of the condition nor can the condition be said to be a cause of its consequences. It would be incoherent to say that being equilateral is caused by being square.[1]

There are reflexive and non-reflexive senses of 'consequence' applicable to conditions. Both are useful. In the reflexive senses, which are used in this article, every condition is a consequence of itself. In the non-reflexive senses, which are not used in this article, no condition is a consequence of itself.

There are material, intensional, and logical senses of 'consequence' applicable to conditions. All are useful. Because of space limitations, in this article, only material consequence is used although the other two are also described. Given any two conditions, the first is a material consequence of (is materially implied by) the second if and only if every individual that satisfies the second satisfies the first. Being equilateral is a material consequence of being an equiangular triangle, but not of being an equiangular quadrangle. As is evident, material consequence is entirely extensional in the sense that whether one given condition is a material consequence of another is determined by their two extensions, the collections of individuals that satisfy them. Given any two conditions, the first is an intensional consequence of (is intensionally implied by) the second if and only if the proposition that every individual that satisfies the second satisfies the first is analytic or intensionally true. Being equal-sided is an intensional consequence of being an equilateral triangle. Given any two conditions, the first is a logical consequence of (is logically implied by) the second if and only if the proposition that every individual that satisfies the second satisfies the first is tautological or logically true. Being equilateral is a logical consequence of being an equilateral triangle.[1]

Besides the one-place conditions – such as being three-sided or being equilateral – that are satisfied or not by a given individual, there are two-place conditions – such as being equal-to or being part-of – that relate or do not relate one given individual to another. There are three-place conditions such as numerical betweenness as in “two is between one and three”. Given any three numbers, in order for the first to satisfy the betweenness condition with respect to the second and third, it is necessary and sufficient for either the second to precede the first and the third the second or the second to precede the third and the first the second. There are four-place conditions such as numerical proportionality as in “one is to two as three is to six”. Given any four numbers, in order for the first to satisfy the proportionality condition with respect to the second, third, and fourth, it is necessary and sufficient that the first be to the second as the third is to the fourth. Charles Sanders Peirce discussed polyadic or multi-place conditions as early as 1885.[2]

There are many debated philosophical issues concerning conditions and consequences. Traditional philosophers ask ontological and epistemological questions about conditions. What are conditions? Do they change? Do they exist apart from the entities satisfying them? How do we know of them? How are propositions about them known to be true or to be false? In view of modern focus on identity criteria, philosophers now want to ask the questions involving them.
One such ontological question asks for an identity criterion for conditions: what is a necessary and sufficient condition for “two” conditions to be identical? The widely accepted identity criterion for extensions of conditions is that given any two conditions, in order for the extension of the first to be [identical to] that of the second, it is necessary and sufficient for the two conditions to be satisfied by the same entities. There are questions concerning the ontological status of conditions. Are conditions mental, material, ideal, linguistic, or social, or do they have some other character? What is the relation of conditions to properties? A given individual satisfies (or fulfills) a given condition if and only if the condition applies to the individual. A given individual has (or possesses) a given property if and only if the property belongs to the individual. Are the last two sentences simply translations of each other?[1]

Philosophical terminology is not uniform. Before any of the above questions can be fully meaningful, it is necessary to interpret them or to locate them in the context of the work of an individual philosopher. We should never ask an abstract question such as what it means to say that something satisfies a condition. Rather we should ask a more specific question such as what Peirce meant by saying that accuracy of speech is an important condition of accurate thinking.

John Dewey's voluminous writings provide a rich source of different senses for the words 'condition' and 'consequence'. Except where explicitly noted, all references to Dewey are by volume number and page in the Southern Illinois UP critical edition. It would be useful to catalogue the various senses Dewey attaches to 'condition' and 'consequence' the way that A.O. Lovejoy famously catalogued senses of 'pragmatism'. In several passages, Dewey links a sense of 'condition' with a corresponding sense of 'consequence' just as senses of these words were linked above. Two corresponding usages occur repeatedly in his writings and, it should be said, in most writings concerned with human activity including government and technology. In one, condition/consequence is somewhat analogous to means/end. In fact, Dewey sometimes uses the words 'condition' and 'means' almost interchangeably as in his famous pronouncement: “Every intelligent act involves selection of certain things as means to other things as their consequences”.[3] A little later, he adds: “… in all inquiries in which there is an end in view (consequences to be brought into existence) there is a selective ordering of existing conditions as means …”.[4] In the other sense, condition/consequence is similar to cause/effect – although identification is probably not warranted in either case. Dewey studiously avoids sharp distinctions, dualisms, dichotomies, and other artificialities. There are passages where both contrasts are relevant, but as far as I know, Dewey never explicitly notes that 'condition/consequence' was used for both. The means/end sense occurs, for example, in his 1945 Journal of Philosophy article, “Ethical Subject-Matter and Language” (15, 139),[5] where he suggested that the inquiry into “conditions and consequences” should draw upon the whole knowledge of relevant fact.
The cause/effect sense occurs on page 543 in his response to critics in the 1939 Library of Living Philosophers volume.[6] Here he wrote: “Correlation between changes that form conditions of desires, etc., and changes that form their consequences when acted upon have the same standing and function … that physical objects have …” There are scattered passages suggesting that Dewey regarded the means/end relation as one kind of cause/effect relation. In fact he regards a causal proposition as one “whose content is a relation of conditions that are means to other conditions that are consequences”.[4]

In some of the senses Dewey uses, conditions are or resemble qualities, properties, features, characteristics, or attributes. These senses were referred to above as attributive. However, in the two senses in question, the instrumental sense and the causal sense, let us say, conditions are or resemble states or events more than qualities, properties, features, characteristics, or attributes. After all, the attributive condition of being equiangular, which is a consequence of the condition of being an equilateral triangle, could hardly be said to be brought about through use of the latter as means or said to be caused by the latter. Accordingly, an attributive condition is neither earlier nor later than its consequences, whereas an instrumental or causal condition necessarily precedes its consequences. As Dewey himself puts it, “The import of the causal relation as one of means-consequences is thus prospective”.[3]

From a practical point of view, Dewey’s causal and instrumental senses of 'condition' and 'consequence' are at least as important as the attributive senses. In the causal sense, fuel, oxygen, and ignition are conditions for combustion as a consequence. In the instrumental sense, understanding, evidence, and judgment are conditions for knowledge as a consequence.

For another important example, the Cambridge Dictionary of Philosophy[7] defines 'condition' in an important sense not explained above: a condition is a state of affairs, “way things are” or situation—most commonly referred to by a nominalization of a sentence. The expression 'Snow’s being white', which refers to the condition snow’s being white, is a nominalization of the sentence 'Snow is white'.[7] 'The truth of the proposition that snow is white' is a nominalization of the sentence 'the proposition that snow is white is true'. Snow’s being white is a necessary and sufficient condition for the truth of the proposition that snow is white. Conditions in this sense may be called situational. Usually, necessity and sufficiency relate conditions of the same kind. Being an animal is a necessary attributive condition for being a dog. Fido’s being an animal is a necessary situational condition for Fido’s being a dog.
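The attributive relations described above lend themselves to a small machine-checkable sketch. The Prolog fragment below (Prolog is the notation used for the inference examples in Chapter 5) is only an illustration under assumed predicate names such as quadrangle/1 and equilateral/1; it encodes the jointly sufficient condition for being a square and then checks, extensionally and in the spirit of material consequence, that the separately necessary conditions hold of every square in the knowledge base.

% Hypothetical facts about a single figure s1 (names are illustrative only).
quadrangle(s1).
equilateral(s1).
equiangular(s1).

% Jointly sufficient condition: an equilateral, equiangular quadrangle is a square.
square(X) :- quadrangle(X), equilateral(X), equiangular(X).

% Queries:
% ?- square(s1).                          % succeeds: the sufficient condition applies
% ?- forall(square(X), equilateral(X)).   % succeeds: being equilateral is (materially) necessary
% ?- forall(square(X), equiangular(X)).   % succeeds: being equiangular is (materially) necessary

The forall/2 checks inspect only the extensions recorded in the knowledge base, which is exactly the extensional character of material consequence noted earlier.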

2.1 References

[1] John Corcoran. Conditions and Consequences. American Philosophy: an Encyclopedia. 2007. Eds. John Lachs and Robert Talisse. New York: Routledge. Pages 124–7.

[2] Peirce, C. S. 1992. The Essential Peirce: Selected Philosophical Writings (1867–1893). Vol. I. Eds. N. Houser and C. Kloesel. Bloomington: Indiana UP. Pages 225ff

[3] Dewey, John. 1986. John Dewey: The Later Works, 1925–1953. Volume 12: 1938. Ed. Jo Ann Boydston. Carbondale and Edwardsville: Southern Illinois UP. Page 454.

[4] Dewey, John. 1986. John Dewey: The Later Works, 1925–1953. Volume 12: 1938. Ed. Jo Ann Boydston. Carbondale and Edwardsville: Southern Illinois UP. Page 455.

[5] Dewey, John. 1986. John Dewey: The Later Works, 1925–1953. Volume 15: 1942–1953. Ed. Jo Ann Boydston. Carbondale and Edwardsville: Southern Illinois UP. Page 139.

[6] Schilpp, Paul A., Ed. 1939. The Philosophy of John Dewey. Library of Living Philosophers. LaSalle, IL: Open Court.

[7] Ernest Sosa, 1999. “Condition”. Cambridge Dictionary of Philosophy. R. Audi, Ed. Cambridge: Cambridge UP. p. 171.

Chapter 3: Deductive reasoning

Deductive reasoning, also deductive logic or logical deduction or, informally, "top-down" logic,[1] is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion.[2] It differs from inductive reasoning and abductive reasoning. Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true. Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the following way: in deductive reasoning, a conclusion is reached reductively by applying general rules that hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion is left. In inductive reasoning, the conclusion is reached by generalizing or extrapolating from initial information, so there is epistemic uncertainty. Note, however, that the inductive reasoning mentioned here is not the same as induction used in mathematical proofs – mathematical induction is actually a form of deductive reasoning.

3.1 Simple example

An example of a deductive argument:

1. All men are mortal. 2. Kass is a man. 3. Therefore, Kass is mortal.

The first premise states that all objects classified as “men” have the attribute “mortal”. The second premise states that “Kass” is classified as a “man” – a member of the set “men”. The conclusion then states that “Kass” must be “mortal” because he inherits this attribute from his classification as a “man”.

3.2 Law of detachment

Main article: Modus ponens

The law of detachment (also known as affirming the antecedent and Modus ponens) is the first form of deductive reasoning. A single conditional statement is made, and a hypothesis (P) is stated. The conclusion (Q) is then deduced from the statement and the hypothesis. The most basic form is listed below:

1. P → Q (conditional statement) 2. P (hypothesis stated) 3. Q (conclusion deduced)


In deductive reasoning, we can conclude Q from P by using the law of detachment.[3] However, if the conclusion (Q) is given instead of the hypothesis (P) then there is no definitive conclusion. The following is an example of an argument using the law of detachment in the form of an if-then statement:

1. If an angle satisfies 90° < A < 180°, then A is an obtuse angle. 2. A = 120°. 3. A is an obtuse angle.

Since the measurement of angle A is greater than 90° and less than 180°, we can deduce that A is an obtuse angle. If however, we are given the conclusion that A is an obtuse angle we cannot deduce the premise that A = 120°.
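The law of detachment is also the basic inference step of a logic-programming system: a rule together with a matching fact yields the rule's conclusion. A minimal sketch in Prolog (the notation used for the inference examples in Chapter 5), with an illustrative predicate name obtuse/1:

% Conditional statement: if 90 < A < 180, then A is an obtuse angle.
obtuse(A) :- number(A), A > 90, A < 180.

% Hypothesis: A = 120. Detachment deduces the conclusion:
% ?- obtuse(120).    % succeeds: Q follows from P -> Q together with P
% Knowing only the conclusion that some angle is obtuse does not,
% by itself, let us recover the premise that the angle equals 120.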

3.3 Law of syllogism

The law of syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Here is the general form:

1. P → Q 2. Q → R 3. Therefore, P → R.

The following is an example:

1. If Larry is sick, then he will be absent. 2. If Larry is absent, then he will miss his classwork. 3. Therefore, if Larry is sick, then he will miss his classwork.

We deduced the final statement by combining the hypothesis of the first statement with the conclusion of the second statement. We also allow that the resulting statement may be false if either of the premised conditionals is false. This is an example of the Transitive Property in mathematics. The Transitive Property is sometimes phrased in this form:

1. A = B. 2. B = C. 3. Therefore A = C.
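Chaining conditionals in this way is how rules compose in a logic program. A small Prolog sketch of the Larry example, with illustrative predicate and constant names:

sick(larry).                            % given fact
absent(X) :- sick(X).                   % if X is sick, then X is absent
misses_classwork(X) :- absent(X).       % if X is absent, then X misses classwork

% ?- misses_classwork(larry).   % succeeds: the two conditionals chain,
%                               % giving in effect sick(X) -> misses_classwork(X)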

3.4 Law of contrapositive

Main article: Modus tollens

The law of contrapositive states that, in a conditional, if the conclusion is false, then the hypothesis must be false also. The general form is the following:

1. P → Q. 2. ~Q. 3. Therefore we can conclude ~P.

The following are examples:

1. If it is raining, then there are clouds in the sky. 2. There are no clouds in the sky. 3. Thus, it is not raining.
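Because Prolog's execution strategy does not apply the contrapositive directly, a simple way to check the form mechanically is to enumerate truth values. The sketch below is a hand-rolled truth table (not a library facility); it verifies that in every row where P → Q holds and Q is false, P is false as well.

% Truth table for material implication.
implies(true,  true,  true).
implies(true,  false, false).
implies(false, true,  true).
implies(false, false, true).

% Modus tollens is valid: no assignment makes the premises true and the conclusion false.
modus_tollens_valid :-
    forall(( member(P, [true, false]),
             member(Q, [true, false]),
             implies(P, Q, true),       % premise 1: P -> Q
             Q == false ),              % premise 2: not Q
           P == false).                 % conclusion: not P

% ?- modus_tollens_valid.   % succeeds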

3.5 Validity and soundness

Deductive arguments are evaluated in terms of their validity and soundness. An argument is valid if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be valid even though the premises are false. An argument is sound if it is valid and the premises are true. It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form. The following is an example of an argument that is valid, but not sound:

1. Everyone who eats carrots is a quarterback. 2. John eats carrots. 3. Therefore, John is a quarterback.

The example’s first premise is false – there are people who eat carrots and are not quarterbacks – but the conclusion must be true, so long as the premises are true (i.e. it is impossible for the premises to be true and the conclusion false). Therefore the argument is valid, but not sound. False generalizations – such as “everyone who eats carrots is a quarterback” – are often used as premises in arguments of this kind; not everyone who eats carrots is a quarterback, so the first premise is false and the argument, while valid, is unsound. In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic. Deductive reasoning can be contrasted with inductive reasoning with regard to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”, it is possible for the conclusion to be false (determined to be false with a counterexample or other means).
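The distinction can be seen operationally: an inference engine checks only whether the conclusion follows from the premises it is given, not whether those premises are true of the world. A hedged Prolog sketch of the carrot-eater argument (predicate names are illustrative):

% Premise 1 (factually false, but accepted by the program as given):
quarterback(X) :- eats_carrots(X).
% Premise 2:
eats_carrots(john).

% ?- quarterback(john).   % succeeds: the inference is valid,
%                         % yet the argument is unsound because premise 1 is false.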

3.6 History

Aristotle started documenting deductive reasoning in the 4th century BC.[4]

3.7 Education

Deductive reasoning is generally thought of as a skill that develops without any formal teaching or training. As a result of this belief, deductive reasoning skills are not taught in secondary schools, where students are expected to use reasoning more often and at a higher level.[5] It is in high school, for example, that students have an abrupt introduction to mathematical proofs – which rely heavily on deductive reasoning.[5]

3.8 See also

• Argument (logic)

• Logic

• Mathematical logic

• Abductive reasoning

• Analogical reasoning

• Correspondence theory of truth

• Defeasible reasoning

• Decision making

• Decision theory

• Fallacy

• Fault Tree Analysis

• Geometry

• Hypothetico-deductive method

• Inquiry

• Mathematical induction

• Inductive reasoning

• Inference

• Logical consequence

• Natural deduction

• Retroductive reasoning

• Scientific method

• Theory of justification

• Soundness

• Syllogism

3.9 References

[1] Deduction & Induction, Research Methods Knowledge Base

[2] Sternberg, R. J. (2009). Cognitive Psychology. Belmont, CA: Wadsworth. p. 578. ISBN 978-0-495-50629-4.

[3] Guide to Logic

[4] Evans, Jonathan St. B. T.; Newstead, Stephen E.; Byrne, Ruth M. J., eds. (1993). Human Reasoning: The Psychology of Deduction (Reprint ed.). Psychology Press. p. 4. ISBN 9780863773136. Retrieved 2015-01-26. In one sense [...] one can see the psychology of deductive reasoning as being as old as the study of logic, which originated in the writings of Aristotle.

[5] Stylianides, G. J.; Stylianides, A. J. (2008). Mathematical Thinking and Learning 10 (2): 103–133. doi:10.1080/10986060701854425.

3.10 Further reading

• Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8

• Philip Johnson-Laird, Ruth M. J. Byrne, Deduction, Psychology Press 1991, ISBN 978-0-86377-149-1

• Zarefsky, David, Argumentation: The Study of Effective Reasoning Parts I and II, The Teaching Company 2002

• Bullemore, Thomas, The Pragmatic Problem of Induction.

3.11 External links

• Deductive reasoning at PhilPapers

• Deductive reasoning at the Indiana Philosophy Ontology Project

• Deductive reasoning entry in the Internet Encyclopedia of Philosophy

Chapter 4: Inductive reasoning

“Inductive inference” redirects here. For the technique in mathematical proof, see Mathematical induction. For the theory introduced by Ray Solomonoff, see Solomonoff’s theory of inductive inference.

Inductive reasoning (as opposed to deductive reasoning or abductive reasoning) is reasoning in which the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.[1] The philosophical definition of inductive reasoning is more nuanced than simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms, discussed below). Many dictionaries define inductive reasoning as reasoning that derives general principles from specific observations, though some sources disagree with this usage.[2]

4.1 Description

Inductive reasoning is inherently uncertain. It only deals in degrees to which, given the premises, the conclusion is credible according to some theory of evidence. Examples include a many-valued logic, Dempster–Shafer theory, or probability theory with rules for inference such as Bayes’ rule. Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues with this may arise however; for example, the second axiom of probability is a closed-world assumption).[3] An example of an inductive argument:

100% of biological life forms that we know of depend on liquid water to exist.

Therefore, if we discover a new biological life form it will probably depend on liquid water to exist.

This argument could have been made every time a new biological life form was found, and would have been correct every time; however, it is still possible that in the future a biological life form not requiring water could be discovered. As a result, the argument may be stated less formally as:

All biological life forms that we know of depend on liquid water to exist.

All biological life probably depends on liquid water to exist.


4.2 Inductive vs. deductive reasoning

Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true.[4] Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true.[5] Given that “if A is true then B, C, and D are true”, an example of deduction would be "A is true therefore we can deduce that B, C, and D are true”. An example of induction would be "B, C, and D are observed to be true therefore A may be true”. A is a reasonable explanation for B, C, and D being true. For example:

A large enough asteroid impact would create a very large crater and cause a severe impact winter that could drive the non-avian dinosaurs to extinction. We observe that there is a very large crater in the Gulf of Mexico dating to very near the time of the extinction of the non-avian dinosaurs. Therefore it is possible that this impact could explain why the non-avian dinosaurs went extinct.

Note however that this is not necessarily the case. Other events also coincide with the extinction of the non-avian dinosaurs, for example the Deccan Traps in India. A classical example of an incorrect inductive argument was presented by John Vickers:

All of the swans we have seen are white. Therefore, all swans are white.

Note that this definition of inductive reasoning excludes mathematical induction, which is a form of deductive reasoning.

4.3 Criticism

Main article: Problem of induction

Inductive reasoning has been criticized by thinkers as diverse as Sextus Empiricus[6] and Karl Popper.[7] The classic philosophical treatment of the problem of induction was given by the Scottish philosopher David Hume.[8] Although the use of inductive reasoning demonstrates considerable success, its application has been questionable. Recognizing this, Hume highlighted the fact that our mind draws uncertain conclusions from relatively limited experiences. In deduction, the truth of the conclusion is based on the truth of the premise. In induction, however, the dependence on the premise is always uncertain. As an example, let’s assume “all ravens are black.” The fact that there are numerous black ravens supports the assumption. However, the assumption becomes inconsistent with the fact that there are white ravens. Therefore, the general rule of “all ravens are black” is inconsistent with the existence of the white raven. Hume further argued that it is impossible to justify inductive reasoning: specifically, that it cannot be justified deductively, so our only option is to justify it inductively. Since this is circular, he concluded that our use of induction is unjustifiable with the help of “Hume’s Fork”.[9] However, Hume then stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.[10]

4.3.1 Biases

Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias.

The availability heuristic causes the reasoner to depend primarily upon information that is readily available to him/her. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents would choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents rather than causes such as disease and traffic accidents, which have been technically “less accessible” to the individual since they are not emphasized as heavily in the world around him/her.

The confirmation bias is based on the natural tendency to confirm rather than to deny a current hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is in fact a sociable individual.

The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and, therefore, believe that they are able to predict outcomes based upon what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. However, in general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.[11]

4.4 Types

4.4.1 Generalization

A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population.

The proportion Q of the sample has attribute A. Therefore: The proportion Q of the population has attribute A.

Example

There are 20 balls—either black or white—in an urn. To estimate their respective numbers, you draw a sample of four balls and find that three are black and one is white. A good inductive generalization would be that there are 15 black, and five white, balls in the urn. How much the premises support the conclusion depends upon (a) the number in the sample group, (b) the number in the population, and (c) the degree to which the sample represents the population (which may be achieved by taking a random sample). The hasty generalization and the biased sample are generalization fallacies.
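The urn estimate is plain proportional scaling, which can be written as a one-line rule. A minimal Prolog sketch (the predicate name and the rounding choice are mine):

% Estimate how many members of a population of size Pop have an attribute,
% given that K of a sample of N drawn members have it.
estimated_count(K, N, Pop, Estimate) :-
    Estimate is round(Pop * K / N).

% ?- estimated_count(3, 4, 20, E).   % E = 15, matching the example:
%                                    % an estimated 15 black and 5 white balls.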

4.4.2 Statistical syllogism

Main article: Statistical syllogism

A statistical syllogism proceeds from a generalization to a conclusion about an individual.

A proportion Q of population P has attribute A. An individual X is a member of P. Therefore: There is a probability which corresponds to Q that X has A.

The proportion in the first premise would be something like “3/5ths of”, “all”, “few”, etc. Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".
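Read computationally, a statistical syllogism simply transfers a population proportion to an individual as a degree of support. A minimal Prolog sketch with made-up facts (the population, attribute, and the 0.97 figure are purely illustrative):

proportion(machined_parts, within_tolerance, 0.97).   % Q of population P has attribute A
member_of(part_42, machined_parts).                   % individual X is a member of P

% Therefore: the probability that X has A corresponds to Q.
probably_has(X, A, Q) :-
    member_of(X, P),
    proportion(P, A, Q).

% ?- probably_has(part_42, within_tolerance, Q).   % Q = 0.97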

4.4.3 Simple induction

Simple induction proceeds from a premise about a sample group to a conclusion about another individual.

Proportion Q of the known instances of population P has attribute A. Individual I is another member of P. Therefore: There is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.

Argument from analogy

Main article: Argument from analogy

The process of analogical inference involves noting the shared properties of two or more things, and from this basis inferring that they also share some further property:[12]

P and Q are similar in respect to properties a, b, and c. Object P has been observed to have further property x. Therefore, Q probably has property x also.

Analogical reasoning is very frequent in common sense, science, philosophy and the , but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning.[13]

4.4.4 Causal inference

A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.

4.4.5 Prediction

A prediction draws a conclusion about a future individual from a past sample.

Proportion Q of observed members of group G have had attribute A. Therefore: There is a probability corresponding to Q that other members of group G will have attribute A when next observed.

4.5 Bayesian inference

As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience, and when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic.
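The "precise manner" referred to here is Bayes' theorem: P(H|E) = P(E|H)·P(H) / P(E), where the evidence term can be expanded as P(E) = P(E|H)·P(H) + P(E|¬H)·(1 − P(H)). A small arithmetic sketch in Prolog (the predicate name and the example numbers are mine):

% bayes_update(+PriorH, +LikEgivenH, +LikEgivenNotH, -PosteriorH)
% computes P(H|E) from P(H), P(E|H) and P(E|not H).
bayes_update(PriorH, LikH, LikNotH, Posterior) :-
    Evidence is LikH * PriorH + LikNotH * (1 - PriorH),
    Evidence > 0,
    Posterior is LikH * PriorH / Evidence.

% Example with made-up numbers: prior 0.3, P(E|H) = 0.8, P(E|not H) = 0.2.
% ?- bayes_update(0.3, 0.8, 0.2, P).   % P is about 0.632: the evidence strengthens the hypothesis.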

4.6 Inductive inference

Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference is based on solid philosophical foundations,[14] and can be considered as a mathematically formalized Occam’s razor. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.

4.7 See also

• Abductive reasoning

• Algorithmic information theory

• Algorithmic probability

• Analogy

• Bayesian probability

• Deductive reasoning

• Explanation

• Failure mode and effects analysis

• Falsifiability

• Grammar induction

• Inductive inference

• Inductive logic programming

• Inductive probability

• Inductive programming

• Inductive reasoning aptitude

• Inquiry

• Kolmogorov complexity

• Lateral thinking

• Laurence Jonathan Cohen

• Logic

• Logical

• Machine learning

• Mathematical induction

• Mill’s Methods

• Minimum description length

• Minimum message length

• Open world assumption

• Raven paradox

• Recursive Bayesian estimation

• Retroduction

• Solomonoff’s theory of inductive inference

• Statistical inference

• Stephen Toulmin

• Universal artificial intelligence

4.8 References

[1] Copi, I. M.; Cohen, C.; Flage, D. E. (2007). Essentials of Logic (Second ed.). Upper Saddle River, NJ: Pearson Education. ISBN 978-0-13-238034-8.

[2] “Deductive and Inductive Arguments”, Internet Encyclopedia of Philosophy, Some dictionaries define “deduction” as rea- soning from the general to specific and “induction” as reasoning from the specific to the general. While this usage is still sometimes found even in philosophical and mathematical contexts, for the most part, it is outdated.

[3] Kosko, Bart (1990). “Fuzziness vs. Probability”. International Journal of General Systems 17 (1): 211–240. doi:10.1080/03081079008935108.

[4] John Vickers. The Problem of Induction. The Stanford Encyclopedia of Philosophy.

[5] Herms, D. “Logical Basis of Hypothesis Testing in Scientific Research” (PDF).

[6] Sextus Empiricus, Outlines of Pyrrhonism. Trans. R.G. Bury, Harvard University Press, Cambridge, Massachusetts, 1933, p. 283.

[7] Popper, Karl R.; Miller, David W. (1983). “A proof of the impossibility of inductive probability”. Nature 302 (5910): 687–688. doi:10.1038/302687a0.

[8] David Hume (1910) [1748]. An Enquiry concerning Human Understanding. P.F. Collier & Son. ISBN 0-19-825060-6.

[9] Vickers, John. “The Problem of Induction” (Section 2). Stanford Encyclopedia of Philosophy. 21 June 2010

[10] Vickers, John. “The Problem of Induction” (Section 2.1). Stanford Encyclopedia of Philosophy. 21 June 2010.

[11] Gray, Peter (2011). Psychology (Sixth ed.). New York: Worth. ISBN 978-1-4292-1947-1.

[12] Baronett, Stan (2008). Logic. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 321–325.

[13] For more information on inferences by analogy, see Juthe, 2005.

[14] Rathmanner, Samuel; Hutter, Marcus (2011). “A Philosophical Treatise of Universal Induction”. Entropy 13 (6): 1076– 1136. doi:10.3390/e13061076.

4.9 Further reading

• Cushan, Anna-Marie (1983/2014). Investigation into Facts and Values: Groundwork for a theory of moral conflict resolution. [Thesis, Melbourne University], Ondwelle Publications (online): Melbourne.

• Herms, D. “Logical Basis of Hypothesis Testing in Scientific Research” (PDF).

• Kemerling, G. (27 October 2001). “Causal Reasoning”.

• Holland, J. H.; Holyoak, K. J.; Nisbett, R. E.; Thagard, P. R. (1989). Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA, USA: MIT Press. ISBN 0-262-58096-9.

• Holyoak, K.; Morrison, R. (2005). The Cambridge Handbook of Thinking and Reasoning. New York: Cambridge University Press. ISBN 978-0-521-82417-0.

4.10 External links

• Confirmation and Induction entry in the Internet Encyclopedia of Philosophy

• Inductive Logic entry in the Stanford Encyclopedia of Philosophy

• Inductive reasoning at PhilPapers

• Inductive reasoning at the Indiana Philosophy Ontology Project

• Four Varieties of Inductive Argument from the Department of Philosophy, University of North Carolina at Greensboro.

• Properties of Inductive Reasoning PDF (166 KiB), a psychological review by Evan Heit of the University of California, Merced.

• The Mind, Limber, an article which employs the film The Big Lebowski to explain the value of inductive reasoning.

• The Pragmatic Problem of Induction, by Thomas Bullemore

Chapter 5: Inference

Inference is the act or process of deriving logical conclusions from premises known or assumed to be true.[1] The conclusion drawn is also called an inference. The laws of valid inference are studied in the field of logic.

Alternatively, inference may be defined as the non-logical, but rational means, through observation of patterns of facts, to indirectly see new meanings and contexts for understanding. Of particular use to this application of inference are anomalies and symbols. Inference, in this sense, does not draw conclusions but opens new paths for inquiry. (See the second set of examples below.) In this definition of inference, there are two types of inference: inductive inference and deductive inference. Unlike the definition of inference in the first paragraph above, the meanings of words are not tested here; rather, meaningful relationships are articulated.

Human inference (i.e. how humans draw conclusions) is traditionally studied within the field of cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference.

Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. This generalizes deterministic reasoning, with the absence of uncertainty as a special case. Statistical inference uses quantitative or qualitative (categorical) data which may be subject to random variation.

5.1 Examples

Greek philosophers defined a number of syllogisms, correct three part inferences, that can be used as building blocks for more complex reasoning. We begin with a famous example:

1. All men are mortal

2. Socrates is a man

3. Therefore, Socrates is mortal.

The reader can check that the premises and conclusion are true, but logic is concerned with inference: does the truth of the conclusion follow from that of the premises? The validity of an inference depends on the form of the inference. That is, the word “valid” does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if some parts are true. But a valid form with true premises will always have a true conclusion. For example, consider the form of the following symbolic argument:

1. All meat comes from animals.

2. Beef is a type of meat.

3. Therefore, beef comes from an animal.


If the premises are true, then the conclusion is necessarily true, too. Now we turn to an invalid form.

1. All A are B. 2. C is a B. 3. Therefore, C is an A.

To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.

1. All apples are fruit. (Correct) 2. Bananas are fruit. (Correct) 3. Therefore, bananas are apples. (Wrong)

A valid argument with false premises may lead to a false conclusion:

1. All tall people are Greek. 2. John Lennon was tall. 3. Therefore, John Lennon was Greek. (wrong)

When a valid argument is used to derive a false conclusion from false premises, the inference is valid because it follows the form of a correct inference. A valid argument can also be used to derive a true conclusion from false premises:

1. All tall people are musicians (although wrong) 2. John Lennon was tall (right, Valid) 3. Therefore, John Lennon was a musician (Right)

In this case we have two false premises that imply a true conclusion.

5.1.1 Example for definition #2

Evidence: It is the early 1950s and you are an American stationed in the Soviet Union. You read in the Moscow newspaper that a soccer team from a small city in Siberia starts winning game after game. The team even defeats the Moscow team. Inference: The small city in Siberia is not a small city anymore. The Soviets are working on their own nuclear or high-value secret weapons program. Knowns: The Soviet Union is a command economy: people and material are told where to go and what to do. The small city was remote and historically had never distinguished itself; its soccer season was typically short because of the weather. Explanation: In a command economy, people and material are moved where they are needed. Large cities might field good teams due to the greater availability of high quality players; and teams that can practice longer (weather, facilities) can reasonably be expected to be better. In addition, you put your best and brightest in places where they can do the most good—such as on high-value weapons programs. It is an anomaly for a small city to field such a good team. The anomaly (i.e. the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern—that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere? To hide them, of course.

5.2 Incorrect inference

An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning.

5.3 Automatic logical inference

AI systems first provided automated logical inference and these were once extremely popular research topics, leading to industrial applications in the form of expert systems and later business rule engines. More recent work on automated theorem proving has had a stronger basis in formal logic. An inference system’s job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by that system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.

5.3.1 Example using Prolog

Prolog (for “Programming in Logic”) is a programming language based on a subset of predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining. Let us return to our Socrates syllogism. We enter into our knowledge base the following piece of code:

mortal(X) :- man(X).
man(socrates).

(Here :- can be read as “if”. Generally, if P → Q (if P then Q) then in Prolog we would code Q :- P (Q if P).) This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:

?- mortal(socrates).

(where ?- signifies a query: can mortal(socrates). be deduced from the KB using the rules?) gives the answer “Yes”. On the other hand, asking the Prolog system the following:

?- mortal(plato).

gives the answer “No”. This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed world assumption). Finally,

?- mortal(X).

(is anything mortal?) would result in “Yes” (and in some implementations: “Yes”: X = socrates). Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.

5.3.2 Use with the semantic web

Recently, automated reasoners have found a new field of application in the semantic web. Being based upon description logic, knowledge expressed using one variant of OWL can be logically processed, i.e., inferences can be made upon it.

5.3.3 Bayesian statistics and probability logic

Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find this best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability “probability logic”, following E. T. Jaynes).

Bayesians identify probabilities with degrees of beliefs, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that “it’s going to rain tomorrow” has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely. Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes’ theorem. See Bayesian inference for examples.

5.3.4 Nonmonotonic logic[2]

A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is nonmonotonic. Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added. By contrast, everyday reasoning is mostly nonmonotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worth or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce’s theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
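Logic programming gives a concrete, if simplified, picture of nonmonotonicity through negation as failure: a conclusion drawn in the absence of information is withdrawn once new facts arrive. The standard bird/penguin sketch below, in Prolog, is an illustration of the idea rather than an example taken from the text:

% Default rule: a bird flies unless it is known to be a penguin.
flies(X) :- bird(X), \+ penguin(X).
bird(tweety).

% ?- flies(tweety).   % succeeds while nothing says tweety is a penguin.
% Adding the fact  penguin(tweety).  makes the same query fail:
% the earlier conclusion is undermined by a new premise, so this
% inference relation is nonmonotonic (and defeasible).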

5.4 See also

• Reasoning

• Abductive reasoning

• Deductive reasoning

• Inductive reasoning

• Retroductive reasoning

• Reasoning System

• Entailment

• Epilogism

• Analogy

• Axiom

• Bayesian inference

• Frequentist inference

• Business rule

• Business rules engine

• Expert system

• Fuzzy logic

• Immediate inference

• Inference engine

• Inferential programming

• Inquiry

• Logic

• Logic of information

• Logical assertion

• Logical graph

• Nonmonotonic logic

• Rule of inference

• List of rules of inference

• Theorem

• Transduction (machine learning)

• Sherlock Holmes

5.5 References

[1] http://www.thefreedictionary.com/inference

[2] Fuhrmann, André. Nonmonotonic Logic (PDF). Archived from the original (PDF) on 9 December 2003.

5.6 Further reading

• Hacking, Ian (2011). An Introduction to Probability and Inductive Logic. Cambridge University Press. ISBN 0-521-77501-9.

• Jaynes, Edwin Thompson (2003). Probability Theory: The Logic of Science. Cambridge University Press. ISBN 0-521-59271-2.

• MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 0-521-64298-1.

• Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.

• Tijms, Henk (2004). Understanding Probability. Cambridge University Press. ISBN 0-521-70172-4.

Inductive inference:

• Carnap, Rudolf; Jeffrey, Richard C., eds. (1971). Studies in Inductive Logic and Probability 1. The University of California Press.

• Jeffrey, Richard C., ed. (1979). Studies in Inductive Logic and Probability 2. The University of California Press.

• Angluin, Dana (1976). An Application of the Theory of Computational Complexity to the Study of Inductive Inference (Ph.D.). University of California at Berkeley.

• Angluin, Dana (1980). “Inductive Inference of Formal Languages from Positive Data” (PDF). Information and Control 45: 117–135. doi:10.1016/s0019-9958(80)90285-5.

• Angluin, Dana; Smith, Carl H. (Sep 1983). “Inductive Inference: Theory and Methods” (PDF). Computing Surveys 15 (3): 237–269. doi:10.1145/356914.356918.

• Gabbay, Dov M.; Hartmann, Stephan; Woods, John, eds. (2009). Inductive Logic. Handbook of the History of Logic 10. Elsevier.

• Goodman, Nelson (1973). Fact, Fiction, and Forecast. Bobbs-Merrill Co. Inc.

Abductive inference:

• O'Rourke, P.; Josephson, J., eds. (1997). Automated abduction: Inference to the best explanation. AAAI Press.

• Psillos, Stathis (2009). Gabbay, Dov M.; Hartmann, Stephan; Woods, John, eds. An Explorer upon Untrodden Ground: Peirce on Abduction (PDF). Handbook of the History of Logic 10. Elsevier. pp. 117–152.

• Ray, Oliver (Dec 2005). Hybrid Abductive Inductive Learning (Ph.D.). University of London, Imperial College. CiteSeerX: 10.1.1.66.1877.

Psychological investigations about human reasoning:

• deductive:

• Johnson-Laird, Philip Nicholas; Byrne, Ruth M. J. (1992). Deduction. Erlbaum.

• Byrne, Ruth M. J.; Johnson-Laird, P. N. (2009). "“If” and the Problems of Conditional Reasoning" (PDF). Trends in Cognitive Science 13 (7): 282–287. doi:10.1016/j.tics.2009.04.003.

• Knauff, Markus; Fangmeier, Thomas; Ruff, Christian C.; Johnson-Laird, P. N. (2003). “Reasoning, Models, and Images: Behavioral Measures and Cortical Activity” (PDF). Journal of Cognitive Neuroscience 15 (4): 559–573. doi:10.1162/089892903321662949.

• Johnson-Laird, Philip N. (1995). Gazzaniga, M. S., ed. Mental Models, Deductive Reasoning, and the Brain (PDF). MIT Press. pp. 999–1008.

• Khemlani, Sangeet; Johnson-Laird, P. N. (2008). “Illusory Inferences about Embedded Disjunctions” (PDF). Proceedings of the 30th Annual Conference of the Cognitive Science Society. Washington/DC. pp. 2128–2133.

• statistical:

• McCloy, Rachel; Byrne, Ruth M. J.; Johnson-Laird, Philip N. (2009). “Understanding Cumulative Risk” (PDF). The Quarterly Journal of Experimental Psychology: 18.

• Johnson-Laird, Philip N. (1994). “Mental Models and Probabilistic Thinking” (PDF). Cognition 50: 189–209. doi:10.1016/0010-0277(94)90028-0.

• analogical:

• Burns, B. D. (1996). “Meta-Analogical Transfer: Transfer Between Episodes of Analogical Reasoning”. Journal of Experimental Psychology: Learning, Memory, and Cognition 22 (4): 1032–1048. doi:10.1037/0278-7393.22.4.1032.

• spatial:

• Jahn, Georg; Knauff, Markus; Johnson-Laird, P. N. (2007). “Preferred mental models in reasoning about spatial relations” (PDF). Memory & Cognition 35 (8): 2075–2087. doi:10.3758/bf03192939.

• Knauff, Markus; Johnson-Laird, P. N. (2002). “Visual imagery can impede reasoning” (PDF). Memory & Cognition 30 (3): 363–371. doi:10.3758/bf03194937.

• Waltz, James A.; Knowlton, Barbara J.; Holyoak, Keith J.; Boone, Kyle B.; Mishkin, Fred S.; de Menezes Santos, Marcia; Thomas, Carmen R.; Miller, Bruce L. (Mar 1999). “A System for Relational Reasoning in Human Prefrontal Cortex” (PDF). Psychological Science 10 (2): 119–125. doi:10.1111/1467-9280.00118.

• moral:

• Bucciarelli, Monica; Khemlani, Sangeet; Johnson-Laird, P. N. (Feb 2008). “The Psychology of Moral Reasoning” (PDF). Judgment and Decision Making 3 (2): 121–139.

5.7 External links

• Inference at PhilPapers

• Inference at the Indiana Philosophy Ontology Project

Chapter 6: Logic

This article is about reasoning and its study. For other uses, see Logic (disambiguation).

Logic (from the Ancient Greek: λογική, logike)[1] is the use and study of valid reasoning.[2][3] The study of logic features most prominently in the subjects of philosophy, mathematics, and computer science. Logic was studied in several ancient civilizations, including India,[4] China,[5] Persia and Greece. In the West, logic was established as a formal discipline by Aristotle, who gave it a fundamental place in philosophy. The study of logic was part of the classical trivium, which also included grammar and rhetoric. Logic was further extended by Al-Farabi who categorized it into two separate groups (idea and proof). Later, Avicenna revived the study of logic and developed a relationship between temporalis and the implication. In the East, logic was developed by Hindus, Buddhists and Jains. Logic is often divided into three parts: inductive reasoning, abductive reasoning, and deductive reasoning.

6.1 The study of logic

The concept of logical form is central to logic, it being held that the validity of an argument is determined by its logical form, not by its content. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logics.

• Informal logic is the study of natural language arguments. The study of fallacies is an especially important branch of informal logic. The dialogues of Plato[6] are good examples of informal logic.

• Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as a particular application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic. Modern formal logic follows and expands on Aristotle.[7] In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language.

• Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference.[8][9] Symbolic logic is often divided into two branches: propositional logic and predicate logic.

• Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, set theory, and recursion theory.

6.1.1 Logical form

Main article: Logical form

Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference. If one considers the notion of form too philosophically loaded, one could say that formalizing simply means translating English sentences into the language of logic.

This is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form and complexity that makes their use in inference impractical. It requires, first, ignoring those grammatical features irrelevant to logic (such as gender and declension, if the argument is in Latin), replacing conjunctions irrelevant to logic (such as "but") with logical conjunctions like "and" and replacing ambiguous, or alternative logical expressions ("any", "every", etc.) with expressions of a standard type (such as "all", or the universal quantifier ∀). Second, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all As are Bs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on.

That the concept of form is fundamental to logic was already recognized in ancient times. Aristotle uses variable letters to represent valid inferences in Prior Analytics, leading Jan Łukasiewicz to say that the introduction of variables was "one of Aristotle's greatest inventions".[10] According to the followers of Aristotle (such as Ammonius), only the logical principles stated in schematic terms belong to logic, not those given in concrete terms. The concrete terms "man", "mortal", etc., are analogous to the substitution values of the schematic placeholders A, B, C, which were called the "matter" (Greek hyle) of the inference.

The fundamental difference between modern formal logic and traditional, or Aristotelian logic, lies in their differing analysis of the logical form of the sentences they treat.

• In the traditional view, the form of the sentence consists of (1) a subject (e.g., “man”) plus a sign of quantity (“all” or “some” or “no”); (2) the copula, which is of the form “is” or “is not"; (3) a predicate (e.g., “mortal”). Thus: all men are mortal. The logical constants such as “all”, “no” and so on, plus sentential connectives such as “and” and “or” were called “syncategorematic” terms (from the Greek kategorei – to predicate, and syn – together with). This is a fixed scheme, where each judgment has an identified quantity and copula, determining the logical form of the sentence.

• According to the modern view, the fundamental form of a simple sentence is given by a recursive schema, involving logical connectives, such as a quantifier with its bound variable, which are joined by juxtaposition to other sentences, which in turn may have logical structure.

• The modern view is more complex, since a single judgement of Aristotle’s system involves two or more logical connectives. For example, the sentence “All men are mortal” involves, in term logic, two non-logical terms “is a man” (here M) and “is mortal” (here D): the sentence is given by the judgement A(M,D). In predicate logic, the sentence involves the same two non-logical concepts, here analyzed as m(x) and d(x) , and the sentence is given by ∀x.(m(x) → d(x)) , involving the logical connectives for universal quantification and implication.

• But equally, the modern view is more powerful. Medieval logicians recognized the problem of multiple generality, where Aristotelian logic is unable to satisfactorily render such sentences as "Some guys have all the luck", because both quantities "all" and "some" may be relevant in an inference, but the fixed scheme that Aristotle used allows only one to govern the inference. Just as linguists recognize recursive structure in natural languages, it appears that logic needs recursive structure. (A concrete evaluation of such a recursive formula is sketched below.)
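To make the recursive analysis above concrete, the following Python sketch evaluates the first-order rendering of "All men are mortal", ∀x.(m(x) → d(x)), over a small finite domain. It is only an illustration: the domain and the extensions chosen for the predicates m and d are invented for the example.

    # Illustrative sketch: the judgement "All men are mortal" as the first-order
    # formula ∀x.(m(x) → d(x)), evaluated over a small invented finite domain.

    domain = ["socrates", "plato", "fido"]          # hypothetical individuals
    is_man = {"socrates", "plato"}                  # extension of m(x)
    is_mortal = {"socrates", "plato", "fido"}       # extension of d(x)

    def m(x):
        return x in is_man

    def d(x):
        return x in is_mortal

    def implies(p, q):
        # material implication: false only when p is true and q is false
        return (not p) or q

    # ∀x.(m(x) → d(x)) over the finite domain
    all_men_are_mortal = all(implies(m(x), d(x)) for x in domain)
    print(all_men_are_mortal)   # True for the extensions chosen above

Quantifiers become loops over the domain and the conditional becomes material implication, which is exactly the recursive, connective-by-connective structure described in the bullets above.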

6.1.2 Deductive and inductive reasoning, and abductive inference

Deductive reasoning concerns what follows necessarily from given premises (if a, then b). However, inductive reasoning—the process of deriving a reliable inference from observations—is often included in the study of logic. Similarly, it is important to distinguish deductive validity and inductive validity (called "strength"). An inference is deductively valid if and only if there is no possible situation in which all the premises are true but the conclusion false. An inference is inductively strong if and only if its premises give some degree of probability to its conclusion. The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. Inductive validity on the other hand requires us to define a reliable generalization of some set of observations. The task of providing this definition may be approached in various ways, some less formal than others; some of these definitions may use mathematical models of probability. For the most part this discussion of logic deals only with deductive logic.

Abduction[11] is a form of logical inference that goes from observation to a hypothesis that accounts for the reliable data (observation) and seeks to explain relevant evidence. The American philosopher Charles Sanders Peirce (1839– 1914) first introduced the term as “guessing”.[12] Peirce said that to abduce a hypothetical explanation a from an observed surprising circumstance b is to surmise that a may be true because then b would be a matter of course.[13] Thus, to abduce a from b involves determining that a is sufficient (or nearly sufficient), but not necessary, for b .
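Peirce's characterization can be illustrated with a toy sketch: given a list of rules saying which hypotheses would make an observation "a matter of course", abduction proposes those hypotheses as candidate explanations. The Python below is only an illustration with an invented rule set; it is not a formalization from the literature.

    # Illustrative sketch of abduction as described above: propose hypotheses
    # that would be sufficient (but not necessarily necessary) for an observation.
    # The rule set is invented for the example.

    rules = {
        "wet_grass": ["rain", "sprinkler_on"],   # either hypothesis would explain wet grass
        "thunder": ["lightning"],
    }

    def abduce(observation, rules):
        """Return candidate explanations: hypotheses h such that h would make the observation a matter of course."""
        return rules.get(observation, [])

    print(abduce("wet_grass", rules))   # ['rain', 'sprinkler_on']: each sufficient, neither necessary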

6.1.3 Consistency, validity, soundness, and completeness

Among the important properties that logical systems can have:

• Consistency, which means that no theorem of the system contradicts another.[14]

• Validity, which means that the system’s rules of proof never allow a false inference from true premises. A logical system has the property of soundness when the logical system has the property of validity and uses only premises that prove true (or, in the case of axioms, are true by definition).[14]

• Completeness, of a logical system, which means that if a formula is true, it can be proven (if it is true, it is a theorem of the system).

• Soundness, the term soundness has multiple separate meanings, which creates a bit of confusion throughout the literature. Most commonly, soundness refers to logical systems, which means that if some formula can be proven in a system, then it is true in the relevant model/structure (if A is a theorem, it is true). This is the converse of completeness. A distinct, peripheral use of soundness refers to arguments, which means that the premises of a valid argument are true in the actual world.

Some logical systems do not have all four properties. As an example, Kurt Gödel's incompleteness theorems show that sufficiently complex formal systems of arithmetic cannot be both consistent and complete;[9] however, first-order predicate logics not extended by specific axioms to be arithmetic formal systems with equality can be complete and consistent.[15]

6.1.4 Rival conceptions of logic

Main article: Definitions of logic

Logic arose (see below) from a concern with correctness of argumentation. Modern logicians usually wish to ensure that logic studies just those arguments that arise from appropriately general forms of inference. For example, Thomas Hofweber writes in the Stanford Encyclopedia of Philosophy that logic "does not, however, cover good reasoning as a whole. That is the job of the theory of rationality. Rather it deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations".[16] By contrast, Immanuel Kant argued that logic should be conceived as the science of judgement, an idea taken up in Gottlob Frege's logical and philosophical work. But Frege's work is ambiguous in the sense that it is both concerned with the "laws of thought" as well as with the "laws of truth", i.e. it both treats logic in the context of a theory of the mind, and treats logic as the study of abstract formal structures.

6.2 History

Main article: History of logic

In Europe, logic was first developed by Aristotle.[17] Aristotelian logic became widely accepted in science and mathematics and remained in wide use in the West until the early 19th century.[18] Aristotle's system of logic was responsible for the introduction of hypothetical syllogism,[19] temporal modal logic,[20][21] and inductive logic,[22] as well as influential terms such as terms, predicables, syllogisms and propositions. In Europe during the later medieval period, major efforts were made to show that Aristotle's ideas were compatible with Christian faith. During the High Middle Ages, logic became a main focus of philosophers, who would engage in critical logical analyses of philosophical arguments, often using variations of the methodology of scholasticism.

Aristotle, 384–322 BCE.

In 1323, William of Ockham's influential Summa Logicae was released. By the 18th century, the structured approach to arguments had degenerated and fallen out of favour, as depicted in Holberg's satirical play Erasmus Montanus.

The Chinese logical philosopher Gongsun Long (c. 325–250 BCE) proposed the paradox "One and one cannot become two, since neither becomes two."[23] In China, the tradition of scholarly investigation into logic, however, was repressed by the Qin dynasty following the legalist philosophy of Han Feizi.

In India, innovations in the scholastic school, called Nyaya, continued from ancient times into the early 18th century with the Navya-Nyaya school. By the 16th century, it developed theories resembling modern logic, such as Gottlob Frege's "distinction between sense and reference of proper names" and his "definition of number", as well as the theory of "restrictive conditions for universals" anticipating some of the developments in modern set theory.[24] Since 1824, Indian logic attracted the attention of many Western scholars, and has had an influence on important 19th-century logicians such as Charles Babbage, Augustus De Morgan, and George Boole.[25] In the 20th century, Western philosophers like Stanislaw Schayer and Klaus Glashoff have explored Indian logic more extensively.

The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of symbolic logic (now called mathematical logic). In 1854, George Boole published An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, introducing symbolic logic and the principles of what is now known as Boolean logic. In 1879, Gottlob Frege published Begriffsschrift, which inaugurated modern logic with the invention of quantifier notation. From 1910 to 1913, Alfred North Whitehead and Bertrand Russell published Principia Mathematica[8] on the foundations of mathematics, attempting to derive mathematical truths from axioms and inference rules in symbolic logic. In 1931, Gödel raised serious problems with the foundationalist program and logic ceased to focus on such issues.

The development of logic since Frege, Russell, and Wittgenstein had a profound influence on the practice of philosophy and the perceived nature of philosophical problems (see analytic philosophy) and the philosophy of mathematics. Logic, especially sentential logic, is implemented in computer logic circuits and is fundamental to computer science. Logic is commonly taught by university philosophy departments, often as a compulsory discipline.

6.3 Types of logic

6.3.1 Syllogistic logic

Main article: Aristotelian logic

The Organon was Aristotle's body of work on logic, with the Prior Analytics constituting the first explicit work in formal logic, introducing the syllogistic.[26] The parts of syllogistic logic, also known by the name term logic, are the analysis of the judgements into propositions consisting of two terms that are related by one of a fixed number of relations, and the expression of inferences by means of syllogisms that consist of two propositions sharing a common term as premise, and a conclusion that is a proposition involving the two unrelated terms from the premises.

Aristotle's work was regarded in classical times and from medieval times in Europe and the Middle East as the very picture of a fully worked out system. However, it was not alone: the Stoics proposed a system of propositional logic that was studied by medieval logicians. Also, the problem of multiple generality was recognized in medieval times. Nonetheless, problems with syllogistic logic were not seen as being in need of revolutionary solutions.

Today, some academics claim that Aristotle's system is generally seen as having little more than historical value (though there is some current interest in extending term logics), regarded as made obsolete by the advent of propositional logic and the predicate calculus. Others use Aristotle in argumentation theory to help develop and critically question argumentation schemes that are used in artificial intelligence and legal arguments.

6.3.2 Propositional logic (sentential logic)

Main article: Propositional calculus

A propositional calculus or logic (also a sentential calculus) is a formal system in which formulae representing propositions can be formed by combining atomic propositions using logical connectives, and in which a system of formal proof rules establishes certain formulae as "theorems".
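As a minimal illustration of the semantic side of such a calculus (a sketch with invented helper names, not a description of any particular proof system), a formula of classical propositional logic is valid exactly when it comes out true under every assignment of truth values to its atoms, which can be checked by brute force:

    # Sketch: a formula is a classical tautology when it is true under every
    # assignment of truth values to its atoms. This checks semantic validity,
    # not derivability in a particular proof system.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def is_tautology(formula, atoms):
        """formula: a function from a dict of atom values to a bool."""
        return all(formula(dict(zip(atoms, values)))
                   for values in product([False, True], repeat=len(atoms)))

    # (p → q) → ((q → r) → (p → r)): hypothetical syllogism, a classical tautology
    f = lambda v: implies(implies(v["p"], v["q"]),
                          implies(implies(v["q"], v["r"]), implies(v["p"], v["r"])))
    print(is_tautology(f, ["p", "q", "r"]))   # True

    # p → q alone is not a tautology
    print(is_tautology(lambda v: implies(v["p"], v["q"]), ["p", "q"]))   # False

A proof system for the calculus is judged adequate when the formulae it establishes as theorems coincide with the tautologies identified this way (compare the remarks on soundness and completeness above).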

6.3.3 Predicate logic

Main article: Predicate logic

Predicate logic is the generic term for symbolic formal systems such as first-order logic, second-order logic, many-sorted logic, and infinitary logic. Predicate logic provides an account of quantifiers general enough to express a wide set of arguments occurring in natural language. Aristotelian syllogistic logic specifies a small number of forms that the relevant part of the involved judgements may take. Predicate logic allows sentences to be analysed into subject and argument in several additional ways—allowing predicate logic to solve the problem of multiple generality that had perplexed medieval logicians.

The development of predicate logic is usually attributed to Gottlob Frege, who is also credited as one of the founders of analytical philosophy, but the formulation of predicate logic most often used today is the first-order logic presented in Principles of Mathematical Logic by David Hilbert and Wilhelm Ackermann in 1928. The analytical generality of predicate logic allowed the formalization of mathematics, drove the investigation of set theory, and allowed the development of Alfred Tarski's approach to model theory. It provides the foundation of modern mathematical logic.

Frege's original system of predicate logic was second-order, rather than first-order. Second-order logic is most prominently defended (against the criticism of Willard Van Orman Quine and others) by George Boolos and Stewart Shapiro.
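The problem of multiple generality mentioned above can be made vivid with a small sketch: the sentence "Some guys have all the luck" involves an existential quantifier governing a universal one, ∃x (Guy(x) ∧ ∀y (Luck(y) → Has(x, y))), which syllogistic cannot express but which is routine to state and evaluate in predicate logic. The extensions below are invented for the example.

    # Sketch of multiple generality: "Some guys have all the luck" checked
    # over invented finite extensions of the predicates involved.

    guys = {"fred", "bill"}
    lucks = {"job_luck", "love_luck"}
    has = {("fred", "job_luck"), ("fred", "love_luck"), ("bill", "job_luck")}

    some_guy_has_all_the_luck = any(
        all((x, y) in has for y in lucks)   # ∀y (Luck(y) → Has(x, y))
        for x in guys                       # ∃x Guy(x) ∧ ...
    )
    print(some_guy_has_all_the_luck)   # True: fred has every kind of luck listed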

6.3.4 Modal logic

Main article: Modal logic

In languages, modality deals with the phenomenon that sub-parts of a sentence may have their semantics modified by special verbs or modal particles. For example, "We go to the games" can be modified to give "We should go to the games", and "We can go to the games" and perhaps "We will go to the games". More abstractly, we might say that modality affects the circumstances in which we take an assertion to be satisfied.

Aristotle's logic is in large parts concerned with the theory of non-modalized logic. Although there are passages in his work, such as the famous sea-battle argument in De Interpretatione § 9, that are now seen as anticipations of modal logic and its connection with potentiality and time, the earliest formal system of modal logic was developed by Avicenna, who ultimately developed a theory of "temporally modalized" syllogistic.[27]

While the study of necessity and possibility remained important to philosophers, little logical innovation happened until the landmark investigations of Clarence Irving Lewis in 1918, who formulated a family of rival axiomatizations of the alethic modalities. His work unleashed a torrent of new work on the topic, expanding the kinds of modality treated to include deontic logic and epistemic logic. The seminal work of Arthur Prior applied the same formal language to treat temporal logic and paved the way for the marriage of the two subjects. Saul Kripke discovered (contemporaneously with rivals) his theory of frame semantics, which revolutionized the formal technology available to modal logicians and gave a new graph-theoretic way of looking at modality that has driven many applications in computational linguistics and computer science, such as dynamic logic.
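The relational (frame) semantics mentioned above can be sketched in a few lines: a Kripke model consists of a set of worlds, an accessibility relation, and a valuation, and □p holds at a world exactly when p holds at every world accessible from it. The model below is invented for the example and uses only atomic sentences.

    # Sketch of Kripke-style frame semantics with an invented model.

    worlds = {"w1", "w2", "w3"}
    access = {("w1", "w2"), ("w1", "w3"), ("w2", "w3")}     # accessibility relation R
    valuation = {"p": {"w2", "w3"}}                         # worlds where p is true

    def holds(atom, world):
        return world in valuation[atom]

    def box(atom, world):
        """□atom at `world`: atom holds in every accessible world."""
        return all(holds(atom, v) for (u, v) in access if u == world)

    def diamond(atom, world):
        """◇atom at `world`: atom holds in some accessible world."""
        return any(holds(atom, v) for (u, v) in access if u == world)

    print(box("p", "w1"))      # True: p holds at w2 and w3, the worlds w1 can see
    print(diamond("p", "w3"))  # False: w3 sees no world at all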

6.3.5 Informal reasoning

Main article: Informal logic

The motivation for the study of logic in ancient times was clear: it is so that one may learn to distinguish good from bad arguments, and so become more effective in argument and oratory, and perhaps also to become a better person. Half of the works of Aristotle's Organon treat inference as it occurs in an informal setting, side by side with the development of the syllogistic, and in the Aristotelian school, these informal works on logic were seen as complementary to Aristotle's treatment of rhetoric.

This ancient motivation is still alive, although it no longer takes centre stage in the picture of logic; typically dialectical logic forms the heart of a course in critical thinking, a compulsory course at many universities.

Argumentation theory is the study and research of informal logic, fallacies, and critical questions as they relate to everyday and practical situations. Specific types of dialogue can be analyzed and questioned to reveal premises, conclusions, and fallacies. Argumentation theory is now applied in artificial intelligence and law.

6.3.6 Mathematical logic

Main article: Mathematical logic

Mathematical logic really refers to two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning, and the second, in the other direction, the application of mathematical techniques to the representation and analysis of formal logic.[28] The earliest use of mathematics and geometry in relation to logic and philosophy goes back to the ancient Greeks such as Euclid, Plato, and Aristotle.[29] Many other ancient and medieval philosophers applied mathematical ideas and methods to their philosophical claims.[30]

One of the boldest attempts to apply logic to mathematics was undoubtedly the logicism pioneered by philosopher-logicians such as Gottlob Frege and Bertrand Russell: the idea was that mathematical theories were logical tautologies, and the programme was to show this by means of a reduction of mathematics to logic.[8] The various attempts to carry this out met with a series of failures, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems.

Both the statement of Hilbert's program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory.[31] Despite the negative nature of the incompleteness theorems, Gödel's completeness theorem, a result in model theory and another application of mathematics to logic, can be understood as showing how close logicism came to being true: every rigorously defined mathematical theory can be exactly captured by a first-order logical theory; Frege's proof calculus is enough to describe the whole of mathematics, though not equivalent to it.

If proof theory and model theory have been the foundation of mathematical logic, they have been but two of the four pillars of the subject. Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic, from Cantor's theorem, through the status of the Axiom of Choice and the question of the independence of the continuum hypothesis, to the modern debate on large cardinal axioms.

Recursion theory captures the idea of computation in logical and arithmetic terms; its most classical achievements are the undecidability of the Entscheidungsproblem by Alan Turing, and his presentation of the Church–Turing thesis.[32] Today recursion theory is mostly concerned with the more refined problem of complexity classes—when is a problem efficiently solvable?—and the classification of degrees of unsolvability.[33]

6.3.7 Philosophical logic

Main article: Philosophical logic

Philosophical logic deals with formal descriptions of ordinary, non-specialist ("natural") language. Most philosophers assume that the bulk of everyday reasoning can be captured in logic if a method or methods to translate ordinary language into that logic can be found. Philosophical logic is essentially a continuation of the traditional discipline called "logic" before the invention of mathematical logic. Philosophical logic has a much greater concern with the connection between natural language and logic. As a result, philosophical logicians have contributed a great deal to the development of non-standard logics (e.g. free logics, tense logics) as well as various extensions of classical logic (e.g. modal logics) and non-standard semantics for such logics (e.g. Kripke's supervaluationism in the semantics of logic).

Logic and the philosophy of language are closely related. Philosophy of language has to do with the study of how our language engages and interacts with our thinking. Logic has an immediate impact on other areas of study. Studying logic and the relationship between logic and ordinary speech can help a person better structure his own arguments and critique the arguments of others. Many popular arguments are filled with errors because so many people are untrained in logic and unaware of how to formulate an argument correctly.[34][35]

6.3.8 Computational logic

Main articles: Computational logic and Logic in computer science

Logic cut to the heart of computer science as it emerged as a discipline: Alan Turing's work on the Entscheidungsproblem followed from Kurt Gödel's work on the incompleteness theorems. The notion of the general purpose computer that came from this work was of fundamental importance to the designers of the computer machinery in the 1940s. In the 1950s and 1960s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or artificial intelligence. This was more difficult than expected because of the complexity of human reasoning. In logic programming, a program consists of a set of axioms and rules. Logic programming systems such as Prolog compute the consequences of the axioms and rules in order to answer a query. Today, logic is extensively applied in the fields of Artificial Intelligence, and Computer Science, and these fields provide a rich source of problems in formal and informal logic. Argumentation theory is one good example of how logic is being applied to artificial intelligence. The ACM Computing Classification System in particular regards:

• Section F.3 on Logics and meanings of programs and F.4 on Mathematical logic and formal languages as part of the theory of computer science: this work covers formal semantics of programming languages, as well as work of formal methods such as Hoare logic;

• Boolean logic as fundamental to computer hardware: particularly, the system’s section B.2 on Arithmetic and logic structures, relating to operatives AND, NOT, and OR;

• Many fundamental logical formalisms are essential to section I.2 on artificial intelligence, for example modal logic and default logic in Knowledge representation formalisms and methods, Horn clauses in logic programming, and description logic.

Furthermore, computers can be used as tools for logicians. For example, in symbolic logic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving the machines can find and check proofs, as well as work with proofs too lengthy to write out by hand.
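The logic-programming idea described above (a program as facts plus rules, with the system computing their consequences to answer queries) can be sketched in a few lines. The sketch below uses Python rather than Prolog, works only with ground Horn rules, and its facts and rules are invented for the example.

    # Minimal forward-chaining sketch: derive all consequences of facts and
    # ground Horn rules by repeated rule application until a fixpoint.

    facts = {"parent(tom, bob)", "parent(bob, ann)"}

    # Each rule: (list of premises, conclusion), already ground for simplicity.
    rules = [
        (["parent(tom, bob)", "parent(bob, ann)"], "grandparent(tom, ann)"),
        (["grandparent(tom, ann)"], "ancestor(tom, ann)"),
    ]

    def consequences(facts, rules):
        """Repeatedly apply rules until no new fact can be derived (a fixpoint)."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if all(p in derived for p in premises) and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print("ancestor(tom, ann)" in consequences(facts, rules))   # True

Real logic programming systems add unification of variables and goal-directed (backward-chaining) search, but the fixpoint computation shown here is the core idea of deriving consequences from axioms and rules.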

6.3.9 Bivalence and the law of the excluded middle; non-classical logics

Main article: Principle of bivalence

The logics discussed above are all "bivalent" or "two-valued"; that is, they are most naturally understood as dividing propositions into true and false propositions. Non-classical logics are those systems that reject bivalence.

Hegel developed his own logic that extended Kant's transcendental logic but also brought it back to ground by assuring us that "neither in heaven nor in earth, neither in the world of mind nor of nature, is there anywhere such an abstract 'either–or' as the understanding maintains. Whatever exists is concrete, with difference and opposition in itself".[36]

In 1910, Nicolai A. Vasiliev extended the law of excluded middle and the law of contradiction and proposed the law of excluded fourth and logic tolerant to contradiction.[37] In the early 20th century Jan Łukasiewicz investigated the extension of the traditional true/false values to include a third value, "possible", so inventing ternary logic, the first multi-valued logic.[38] Logics such as fuzzy logic have since been devised with an infinite number of "degrees of truth", represented by a real number between 0 and 1.[39]

Intuitionistic logic was proposed by L.E.J. Brouwer as the correct logic for reasoning about mathematics, based upon his rejection of the law of the excluded middle as part of his intuitionism. Brouwer rejected formalization in mathematics, but his student Arend Heyting studied intuitionistic logic formally, as did Gerhard Gentzen. Intuitionistic logic is of great interest to computer scientists, as it is a constructive logic and can be applied for extracting verified programs from proofs.

Modal logic is not truth conditional, and so it has often been proposed as a non-classical logic. However, modal logic is normally formalized with the principle of the excluded middle, and its relational semantics is bivalent, so this inclusion is disputable.
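The "degrees of truth" idea can be illustrated with a small sketch. In one common (min/max) family of fuzzy connectives, truth values are real numbers in [0, 1]; the particular operators and the example values below are one standard choice, used here only for illustration.

    # Sketch of many-valued "degrees of truth" with min/max fuzzy connectives.

    def f_not(a):        # negation
        return 1.0 - a

    def f_and(a, b):     # conjunction as minimum
        return min(a, b)

    def f_or(a, b):      # disjunction as maximum
        return max(a, b)

    tall = 0.7           # "this person is tall" to degree 0.7 (invented values)
    young = 0.4

    print(f_and(tall, young))          # 0.4
    print(f_or(tall, f_not(young)))    # 0.7
    # The law of excluded middle need not hold: a or not-a can be less than 1.
    print(f_or(young, f_not(young)))   # 0.6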

6.3.10 “Is logic empirical?"

Main article: Is logic empirical?

What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled "Is logic empirical?",[40] Hilary Putnam, building on a suggestion of W. V. Quine, argued that in general the facts of propositional logic have a similar epistemological status as facts about the physical universe, for example as the laws of mechanics or of general relativity, and in particular that what physicists have learned about quantum mechanics provides a compelling case for abandoning certain familiar principles of classical logic: if we want to be realists about the physical phenomena described by quantum theory, then we should abandon the principle of distributivity, substituting for classical logic the quantum logic proposed by Garrett Birkhoff and John von Neumann.[41]

Another paper of the same name by Sir Michael Dummett argues that Putnam's desire for realism mandates the law of distributivity.[42] Distributivity of logic is essential for the realist's understanding of how propositions are true of the world in just the same way as he has argued the principle of bivalence is. In this way, the question, "Is logic empirical?" can be seen to lead naturally into the fundamental controversy in metaphysics on realism versus anti-realism.

6.3.11 Implication: strict or material?

Main article: Paradox of entailment

The notion of implication formalized in classical logic does not comfortably translate into natural language by means of “if ... then ...”, due to a number of problems called the paradoxes of material implication. The first class of paradoxes involves counterfactuals, such as If the moon is made of green cheese, then 2+2=5, which are puzzling because natural language does not support the principle of explosion. Eliminating this class of paradoxes was the reason for C. I. Lewis's formulation of strict implication, which eventually led to more radically revisionist logics such as relevance logic. The second class of paradoxes involves redundant premises, falsely suggesting that we know the succedent because of the antecedent: thus “if that man gets elected, granny will die” is materially true since granny is mortal, regardless of the man’s election prospects. Such sentences violate the Gricean maxim of relevance, and can be modelled by logics that reject the principle of monotonicity of entailment, such as relevance logic.

6.3.12 Tolerating the impossible

Main article: Paraconsistent logic

Hegel was deeply critical of any simplified notion of the Law of Non-Contradiction. It was based on Leibniz's idea that this law of logic also requires a sufficient ground to specify from what point of view (or time) one says that something cannot contradict itself. A building, for example, both moves and does not move; the ground for the first is our solar system and for the second the earth. In Hegelian dialectic, the law of non-contradiction, of identity, itself relies upon difference and so is not independently assertable.

Closely related to questions arising from the paradoxes of implication comes the suggestion that logic ought to tolerate inconsistency. Relevance logic and paraconsistent logic are the most important approaches here, though the concerns are different: a key consequence of classical logic and some of its rivals, such as intuitionistic logic, is that they respect the principle of explosion, which means that the logic collapses if it is capable of deriving a contradiction. Graham Priest, the main proponent of dialetheism, has argued for paraconsistency on the grounds that there are, in fact, true contradictions.[43]

6.3.13 Rejection of logical truth

The philosophical vein of various kinds of skepticism contains many kinds of doubt and rejection of the various bases on which logic rests, such as the idea of logical form, correct inference, or meaning, typically leading to the conclusion that there are no logical truths. Observe that this is opposite to the usual views in philosophical skepticism, where logic directs skeptical enquiry to doubt received wisdoms, as in the work of Sextus Empiricus.

Friedrich Nietzsche provides a strong example of the rejection of the usual basis of logic: his radical rejection of idealization led him to reject truth as a "... mobile army of metaphors, metonyms, and anthropomorphisms—in short ... metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins."[44] His rejection of truth did not lead him to reject the idea of either inference or logic completely, but rather suggested that "logic [came] into existence in man's head [out] of illogic, whose realm originally must have been immense. Innumerable beings who made inferences in a way different from ours perished".[45] Thus there is the idea that logical inference has a use as a tool for human survival, but that its existence does not support the existence of truth, nor does it have a reality beyond the instrumental: "Logic, too, also rests on assumptions that do not correspond to anything in the real world".[46]

This position held by Nietzsche, however, has come under extreme scrutiny for several reasons. He fails to demonstrate the validity of his claims and merely asserts them rhetorically. However, since he is criticising the established criteria of validity, this does not undermine his position, for one could argue that the demonstration of validity provided in the name of logic was just as rhetorically based. Some philosophers, such as Jürgen Habermas, claim his position is self-refuting—and accuse Nietzsche of not even having a coherent perspective, let alone a theory of knowledge.[47] Again, it is unclear if this is a decisive critique, for the criteria of coherency and consistent theory are exactly what is under question. Georg Lukács, in his book The Destruction of Reason, asserts that, "Were we to study Nietzsche's statements in this area from a logico-philosophical angle, we would be confronted by a dizzy chaos of the most lurid assertions, arbitrary and violently incompatible."[48] Still, in this respect his "theory" would be a much better depiction of a confused and chaotic reality than any consistent and compatible theory. Bertrand Russell described Nietzsche's irrational claims with "He is fond of expressing himself paradoxically and with a view to shocking conventional readers" in his book A History of Western Philosophy.[49]

6.4 See also

• Digital electronics (also known as digital logic or logic gates)
• Fallacies
• List of logicians
• List of logic journals
• Logic puzzle
• Logic symbols
• Mathematics
• List of mathematics articles
• Outline of mathematics
• Metalogic
• Outline of logic
• Philosophy
• List of philosophy topics
• Reason
• Straight and Crooked Thinking (book)
• Table of logic symbols
• Truth
• Vector logic

6.5 Notes and references

[1] “possessed of reason, intellectual, dialectical, argumentative”, also related to λόγος (logos), “word, thought, idea, argument, account, reason, or principle” (Liddell & Scott 1999; Online Etymology Dictionary 2001).

[2] Richard Henry Popkin; Avrum Stroll (1 July 1993). Philosophy Made Simple. Random House Digital, Inc. p. 238. ISBN 978-0-385-42533-9. Retrieved 5 March 2012.

[3] Jacquette, D. (2002). A Companion to Philosophical Logic. Wiley Online Library. p. 2.

[4] For example, Nyaya (syllogistic recursion) dates back 1900 years.

[5] The Mohists and the School of Names date back about 2200 years.

[6] Plato (1976). Buchanan, Scott, ed. The Portable Plato. Penguin. ISBN 0-14-015040-4.

[7] Aristotle (2001). "Posterior Analytics". In Mckeon, Richard. The Basic Works. Modern Library. ISBN 0-375-75799-6.

[8] Whitehead, Alfred North; Russell, Bertrand (1967). Principia Mathematica to *56. Cambridge University Press. ISBN 0-521-62606-4.

[9] For a more modern treatment, see Hamilton, A. G. (1980). Logic for Mathematicians. Cambridge University Press. ISBN 0-521-29291-3.

[10] Łukasiewicz, Jan (1957). Aristotle’s syllogistic from the standpoint of modern formal logic (2nd ed.). Oxford University Press. p. 7. ISBN 978-0-19-824144-7.

[11] • Magnani, L. “Abduction, Reason, and Science: Processes of Discovery and Explanation”. Kluwer Academic Plenum Publishers, New York, 2001. xvii. 205 pages. Hard cover, ISBN 0-306-46514-0. • R. Josephson, J. & G. Josephson, S. “Abductive Inference: Computation, Philosophy, Technology” Cambridge Uni- versity Press, New York & Cambridge (U.K.). viii. 306 pages. Hard cover (1994), ISBN 0-521-43461-0, Paperback (1996), ISBN 0-521-57545-1. • Bunt, H. & Black, W. “Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics” (Natural Language Processing, 1.) John Benjamins, Amsterdam & Philadelphia, 2000. vi. 471 pages. Hard cover, ISBN 90-272-4983-0 (Europe), 1-58619-794-2 (U.S.)

[12] Peirce, C. S.

• “On the Logic of drawing History from Ancient Documents especially from Testimonies” (1901), Collected Papers v. 7, paragraph 219. • “PAP” ["Prolegomena to an Apology for Pragmatism"], MS 293 c. 1906, New Elements of Mathematics v. 4, pp. 319-320. • A Letter to F. A. Woods (1913), Collected Papers v. 8, paragraphs 385-388.

(See under "Abduction" and "Retroduction" at Commens Dictionary of Peirce’s Terms.)

[13] Peirce, C. S. (1903), Harvard lectures on pragmatism, Collected Papers v. 5, paragraphs 188–189.

[14] Bergmann, Merrie; Moor, James; Nelson, Jack (2009). The Logic Book (Fifth ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-353563-0.

[15] Mendelson, Elliott (1964). “Quantification Theory: Completeness Theorems”. Introduction to Mathematical Logic. Van Nostrand. ISBN 0-412-80830-7.

[16] Hofweber, T. (2004). “Logic and Ontology”. In Zalta, Edward N. Stanford Encyclopedia of Philosophy.

[17] E.g., Kline (1972, p.53) wrote "A major achievement of Aristotle was the founding of the science of logic".

[18] "Aristotle", MTU Department of Chemistry.

[19] Jonathan Lear (1986). "Aristotle and Logical Theory". Cambridge University Press. p.34. ISBN 0-521-31178-0

[20] Simo Knuuttila (1981). "Reforging the great chain of being: studies of the history of modal theories". Springer Science & Business. p.71. ISBN 90-277-1125-9

[21] Michael Fisher, Dov M. Gabbay, Lluís Vila (2005). "Handbook of temporal reasoning in artificial intelligence". Elsevier. p.119. ISBN 0-444-51493-7

[22] Harold Joseph Berman (1983). "Law and revolution: the formation of the Western legal tradition". Harvard University Press. p.133. ISBN 0-674-51776-8

[23] The four Catuṣkoṭi logical divisions are formally very close to the four opposed propositions of the Greek , which in turn are analogous to the four truth values of modern relevance logic. Cf. Belnap (1977); Jayatilleke, K. N., (1967, The logic of four alternatives, in Philosophy East and West, University of Hawaii Press).

[24] Kisor Kumar Chakrabarti (June 1976). "Some Comparisons Between Frege's Logic and Navya-Nyaya Logic". Philosophy and Phenomenological Research (International Phenomenological Society) 36 (4): 554–563. doi:10.2307/2106873. JSTOR 2106873. This paper consists of three parts. The first part deals with Frege's distinction between sense and reference of proper names and a similar distinction in Navya-Nyaya logic. In the second part we have compared Frege's definition of number to the Navya-Nyaya definition of number. In the third part we have shown how the study of the so-called 'restrictive conditions for universals' in Navya-Nyaya logic anticipated some of the developments of modern set theory.

[25] Jonardon Ganeri (2001). Indian logic: a reader. Routledge. pp. vii, 5, 7. ISBN 0-7007-1306-9.

[26] "Aristotle". Encyclopædia Britannica.

[27] "History of logic: Arabic logic". Encyclopædia Britannica.

[28] Stolyar, Abram A. (1983). Introduction to Elementary Mathematical Logic. Dover Publications. p. 3. ISBN 0-486-64561-4.

[29] Barnes, Jonathan (1995). The Cambridge Companion to Aristotle. Cambridge University Press. p. 27. ISBN 0-521-42294-9.

[30] Aristotle (1989). Prior Analytics. Hackett Publishing Co. p. 115. ISBN 978-0-87220-064-7.

[31] Mendelson, Elliott (1964). "Formal Number Theory: Gödel's Incompleteness Theorem". Introduction to Mathematical Logic. Monterey, Calif.: Wadsworth & Brooks/Cole Advanced Books & Software. OCLC 13580200.

[32] Brookshear, J. Glenn (1989). "Computability: Foundations of Recursive Function Theory". Theory of computation: formal languages, automata, and complexity. Redwood City, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-0143-7.

[33] Brookshear, J. Glenn (1989). "Complexity". Theory of computation: formal languages, automata, and complexity. Redwood City, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-0143-7.

[34] Goldman, Alvin I. (1986), Epistemology and Cognition, Harvard University Press, p. 293, ISBN 9780674258969, untrained subjects are prone to commit various sorts of fallacies and mistakes.

[35] Demetriou, A.; Efklides, A., eds. (1994), Intelligence, Mind, and Reasoning: Structure and Development, Advances in Psychology 106, Elsevier, p. 194, ISBN 9780080867601.

[36] Hegel, G. W. F (1971) [1817]. Encyclopedia of the Philosophical Sciences. trans. William Wallace. Oxford: Clarendon Press. p. 174. ISBN 0-19-875014-5.

[37] Joseph E. Brenner (3 August 2008). Logic in Reality. Springer. pp. 28–30. ISBN 978-1-4020-8374-7. Retrieved 9 April 2012.

[38] Zegarelli, Mark (2010), Logic For Dummies, John Wiley & Sons, p. 30, ISBN 9781118053072.

[39] Hájek, Petr (2006). "Fuzzy Logic". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.

[40] Putnam, H. (1969). "Is Logic Empirical?". Boston Studies in the Philosophy of Science 5.

[41] Birkhoff, G.; von Neumann, J. (1936). "The Logic of Quantum Mechanics". Annals of Mathematics (Annals of Mathematics) 37 (4): 823–843. doi:10.2307/1968621. JSTOR 1968621.

[42] Dummett, M. (1978). "Is Logic Empirical?". Truth and Other Enigmas. ISBN 0-674-91076-1.

[43] Priest, Graham (2008). "Dialetheism". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.

[44] Nietzsche, 1873, On Truth and Lies in a Nonmoral Sense.

[45] Nietzsche, 1882, The Gay Science.

[46] Nietzsche, 1878, Human, All Too Human.

[47] Babette Babich, Habermas, Nietzsche, and Critical Theory.

[48] Georg Lukács. "The Destruction of Reason by Georg Lukács 1952". Marxists.org. Retrieved 2013-06-16.

[49] Russell, Bertrand (1945), A History of Western Philosophy And Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day (PDF), Simon and Schuster, p. 762

6.6 Bibliography

• Nuel Belnap, (1977). “A useful four-valued logic”. In Dunn & Eppstein, Modern uses of multiple-valued logic. Reidel: Boston.

• Józef Maria Bocheński (1959). A précis of mathematical logic. Translated from the French and German editions by Otto Bird. D. Reidel, Dordrecht, South Holland.

• Józef Maria Bocheński, (1970). A history of formal logic. 2nd Edition. Translated and edited from the German edition by Ivo Thomas. Chelsea Publishing, New York.

• Brookshear, J. Glenn (1989). Theory of computation: formal languages, automata, and complexity. Redwood City, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-0143-7.

• Cohen, R.S., and Wartofsky, M.W. (1974). Logical and Epistemological Studies in Contemporary Physics. Boston Studies in the Philosophy of Science. D. Reidel Publishing Company: Dordrecht, Netherlands. ISBN 90-277-0377-9.

• Finkelstein, D. (1969). “Matter, Space, and Logic”. in R.S. Cohen and M.W. Wartofsky (eds. 1974).

• Gabbay, D.M., and Guenthner, F. (eds., 2001–2005). Handbook of Philosophical Logic. 13 vols., 2nd edition. Kluwer Publishers: Dordrecht.

• Hilbert, D., and Ackermann, W. (1928). Grundzüge der theoretischen Logik (Principles of Mathematical Logic). Springer-Verlag. OCLC 2085765

• Susan Haack, (1996). Deviant Logic, Fuzzy Logic: Beyond the Formalism, University of Chicago Press.

• Hodges, W., (2001). Logic. An introduction to Elementary Logic, Penguin Books.

• Hofweber, T., (2004), Logic and Ontology. Stanford Encyclopedia of Philosophy. Edward N. Zalta (ed.).

• Hughes, R.I.G., (1993, ed.). A Philosophical Companion to First-Order Logic. Hackett Publishing.

• Kline, Morris (1972). Mathematical Thought From Ancient to Modern Times. Oxford University Press. ISBN 0-19-506135-7.

• Kneale, William, and Kneale, Martha, (1962). The Development of Logic. Oxford University Press, London, UK.

• Liddell, Henry George; Scott, Robert. “Logikos”. A Greek-English Lexicon. Perseus Project. Retrieved 8 May 2009.

• Mendelson, Elliott, (1964). Introduction to Mathematical Logic. Wadsworth & Brooks/Cole Advanced Books & Software: Monterey, Calif. OCLC 13580200

• Harper, Robert (2001). “Logic”. Online Etymology Dictionary. Retrieved 8 May 2009.

• Smith, B., (1989). “Logic and the Sachverhalt”. The Monist 72(1):52–69.

• Whitehead, Alfred North and Bertrand Russell, (1910). Principia Mathematica. Cambridge University Press: Cambridge, England. OCLC 1041146

6.7 External links

• Logic at PhilPapers

• Logic at the Indiana Philosophy Ontology Project

• Logic entry in the Internet Encyclopedia of Philosophy

• Hazewinkel, Michiel, ed. (2001), "Logical calculus", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• An Outline for Verbal Logic

• Introductions and tutorials

• An Introduction to Philosophical Logic, by Paul Newall, aimed at beginners.
• forall x: an introduction to formal logic, by P.D. Magnus, covers sentential and quantified logic.
• Logic Self-Taught: A Workbook (originally prepared for on-line logic instruction).
• Nicholas Rescher. (1964). Introduction to Logic, St. Martin's Press.
• "Symbolic Logic" and "The Game of Logic", Lewis Carroll, 1896.
• Math & Logic: The history of formal mathematical, logical, linguistic and methodological ideas. In The Dictionary of the History of Ideas.

• Online Tools

• Interactive Syllogistic Machine: a web-based syllogistic machine for exploring fallacies, figures, terms, and modes of syllogisms.

• Reference material

• Translation Tips, by Peter Suber, for translating from English into logical notation.
• Ontology and History of Logic. An Introduction with an annotated bibliography.

• Reading lists

• The London Philosophy Study Guide offers many suggestions on what to read, depending on the student's familiarity with the subject:
• Logic & Metaphysics
• Set Theory and Further Logic
• Mathematical Logic

Chapter 7

Necessity and sufficiency

This article is about the formal terminology in logic. For causal meanings of the terms, see Causality. For the concepts in statistics, see Sufficient statistic.

In logic, necessity and sufficiency are implicational relationships between statements. The assertion that one state- ment is a necessary and sufficient condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true or simultaneously false.[1][2][3] In ordinary English, 'necessary' and 'sufficient' indicate relations between conditions or states of affairs, not statements. Being a male sibling is a necessary and sufficient condition for being a brother. Fred’s being a male sibling is necessary and sufficient for the truth of the statement that Fred is a brother.

7.1 Definitions

A true necessary condition in a conditional statement makes the statement true (see "Necessity" immediately below). In formal terms, a consequent N is a necessary condition for an antecedent S, in the conditional statement, "N if S", "N is implied by S", or N ⇐ S. In common words, we would also say "N is weaker than S" or "S cannot occur without N". For example, in order to be called "Socrates", it is necessary to be named.

A true sufficient condition in a conditional statement ties the statement's truth to its consequent. In formal terms, an antecedent S is a sufficient condition for a consequent N, in the conditional statement, "if S, then N", "S implies N", or S ⇒ N. In common words, we would also say "S is stronger than N" or "S guarantees N". For example, being called "Socrates" suffices for being named.

A necessary and sufficient condition requires both of these implications (S ⇒ N and N ⇒ S) to hold. Using the previous statement, this is expressed as "S is necessary and sufficient for N", "S if and only if N", or S ⇔ N.
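These definitions can be checked mechanically over a collection of cases: S is sufficient for N when S ⇒ N holds in every case, and N is necessary for S under exactly the same condition. The Python sketch below is illustrative only; the divisibility example it uses anticipates Example 2 in the Sufficiency section.

    # Sketch: sufficiency and necessity as the same universally quantified
    # implication, checked over a finite collection of cases.

    def implies(p, q):
        return (not p) or q

    def sufficient(cases, s, n):
        """s is sufficient for n over the given cases iff s implies n in every case."""
        return all(implies(case[s], case[n]) for case in cases)

    def necessary(cases, s, n):
        """n is necessary for s iff s never holds without n (the same condition)."""
        return all(implies(case[s], case[n]) for case in cases)

    # Invented example: S = "x is divisible by 4", N = "x is even", over 1..20.
    cases = [{"S": x % 4 == 0, "N": x % 2 == 0} for x in range(1, 21)]
    print(sufficient(cases, "S", "N"))   # True: divisibility by 4 suffices for evenness
    print(sufficient(cases, "N", "S"))   # False: evenness does not suffice for divisibility by 4
    print(necessary(cases, "S", "N"))    # True: evenness is necessary for divisibility by 4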

7.2 Necessity

The assertion that Q is necessary for P is colloquially equivalent to "P cannot be true unless Q is true,” or “if Q is false then P is false.” By contraposition, this is the same thing as “whenever P is true, so is Q ". The logical relation between them is expressed as “If P then Q " and denoted "P ⇒ Q "(P implies Q), and may also be expressed as any of "Q, if P "; "Q whenever P "; and "Q when P.” One often finds, in mathematical prose for instance, several necessary conditions that, taken together, constitute a sufficient condition, as shown in Example 5.

Example 1: In order for it to be true that “John is a bachelor,” it is necessary that it be also true that he is

1. unmarried
2. male
3. adult

since to state "John is a bachelor" implies John has each of those three additional predicates.

The sun being above the horizon is a necessary condition for direct sunlight; but it is not a sufficient condition, as something else may be casting a shadow, e.g. in the case of an eclipse.

Example 2: For the whole numbers greater than two, being odd is necessary to being prime, since two is the only whole number that is both even and prime.

Example 3: Consider thunder, the sound caused by lightning. We say that thunder is necessary for lightning, since lightning never occurs without thunder. Whenever there’s lightning, there’s thunder. The thunder does not cause the lightning (since lightning causes thunder), but because lightning always comes with thunder, we say that thunder is necessary for lightning. (That is, in its formal sense, necessity doesn't imply causality.)

Example 4: Being at least 30 years old is necessary for serving in the U.S. Senate. If you are under 30 years old then it is impossible for you to be a senator. That is, if you are a senator, it follows that you are at least 30 years old.

Example 5: In algebra, in order for some set S together with an operation ⋆ to form a group, it is necessary that ⋆ be associative. It is also necessary that S include a special element e such that for every x in S it is the case that e ⋆ x and x ⋆ e both equal x. It is also necessary that for every x in S there exist a corresponding element x′ such that both x ⋆ x′ and x′ ⋆ x equal the special element e. None of these three necessary conditions by itself is sufficient, but the conjunction of the three is.
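The group example can be made concrete with a short sketch that tests the three necessary conditions on a small operation table; if all three hold, their conjunction suffices and the table defines a group. The table below (addition modulo 3) is chosen only for illustration.

    # Sketch: checking associativity, identity, and inverses for a finite
    # operation; their conjunction is sufficient for a group.
    from itertools import product

    S = [0, 1, 2]
    def star(a, b):
        return (a + b) % 3          # addition modulo 3

    def is_associative(S, op):
        return all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))

    def identity_element(S, op):
        for e in S:
            if all(op(e, x) == x and op(x, e) == x for x in S):
                return e
        return None

    def has_inverses(S, op, e):
        return all(any(op(x, y) == e and op(y, x) == e for y in S) for x in S)

    e = identity_element(S, star)
    print(is_associative(S, star), e, has_inverses(S, star, e))   # True 0 True
    # All three necessary conditions hold, so (S, ⋆) is a group.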

7.3 Sufficiency

That a train runs on schedule can be a sufficient condition for arriving on time (if one boards the train and it departs on time, then one will arrive on time); but it is not always a necessary condition, since there are other ways to travel (if the train does not run to time, one could still arrive on time through other means of transport).

To say that P is sufficient for Q is to say that, in and of itself, knowing P to be true is adequate grounds to conclude that Q is true. (However, knowing P not to be true does not, in and of itself, provide adequate grounds to conclude that Q is not true.) The logical relation is expressed as “If P then Q " or "P ⇒ Q,” and may also be expressed as "P implies Q.” Several sufficient conditions may, taken together, constitute a single necessary condition, as illustrated in example 5.

Example 1: Stating that “John is a bachelor” implies that John is male. So knowing that it is true that John is a bachelor is sufficient to know that he is a male.

Example 2: A number’s being divisible by 4 is sufficient (but not necessary) for its being even, but being divisible by 2 is both sufficient and necessary.

Example 3: An occurrence of thunder is a sufficient condition for the occurrence of lightning in the sense that hearing thunder, and unambiguously recognizing it as such, justifies concluding that there has been a lightning bolt.

Example 4: A U.S. president’s signing a bill that Congress passed is sufficient to make the bill law. Note that the case whereby the president did not sign the bill, e.g. through exercising a presidential veto, does not mean that the bill has not become law (it could still have become law through a congressional override).

Example 5: That the center of a playing card should be marked with a single large spade (♠) is sufficient for the card to be an ace. Three other sufficient conditions are that the center of the card be marked with a diamond (♦), heart (♥), or club (♣), respectively. None of these conditions is necessary to the card's being an ace, but their disjunction is, since no card can be an ace without fulfilling at least (in fact, exactly) one of the conditions.

7.4 Relationship between necessity and sufficiency


Consider two logical statements S and N, such that S is sufficient for N (if S is true, then N must also be true). Figuratively, if we are within S then we are within N. It follows that S cannot be true unless N is true – i.e., that N is a necessary condition for S.

A condition can be either necessary or sufficient without being the other. For instance, being a mammal (N) is necessary but not sufficient to being human (S), and that a number x is rational (S) is sufficient but not necessary to x's being a real number (N) (since there are real numbers that are not rational).

A condition can be both necessary and sufficient. For example, at present, "today is the Fourth of July" is a necessary and sufficient condition for "today is Independence Day in the United States." Similarly, a necessary and sufficient condition for invertibility of a matrix M is that M has a nonzero determinant.

Mathematically speaking, necessity and sufficiency are dual to one another. For any statements S and N, the assertion that "N is necessary for S" is equivalent to the assertion that "S is sufficient for N." Another facet of this duality is that, as illustrated above, conjunctions (using "and") of necessary conditions may achieve sufficiency, while disjunctions (using "or") of sufficient conditions may achieve necessity. For a third facet, identify every mathematical predicate N with the set T(N) of objects, events, or statements for which N holds true; then asserting the necessity of N for S is equivalent to claiming that T(N) is a superset of T(S), while asserting the sufficiency of S for N is equivalent to claiming that T(S) is a subset of T(N).
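The set-theoretic facet can be shown directly: identifying each predicate with its extension, "S is sufficient for N" and "N is necessary for S" are one and the same inclusion, T(S) ⊆ T(N). The sketch below uses the divisibility example from the Sufficiency section, restricted to a finite range for illustration.

    # Sketch: necessity and sufficiency as set inclusion of extensions.

    T_S = {x for x in range(1, 50) if x % 4 == 0}   # T(S): divisible by 4
    T_N = {x for x in range(1, 50) if x % 2 == 0}   # T(N): even

    print(T_S <= T_N)   # True: S sufficient for N, equivalently N necessary for S
    print(T_N <= T_S)   # False: N is not sufficient for S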

7.5 Simultaneous necessity and sufficiency

See also: Material equivalence

To say that P is necessary and sufficient for Q is to say two things, that P is necessary for Q and that P is sufficient for Q. Of course, it may instead be understood to say a different two things, namely that each of P and Q is necessary for the other. And it may be understood in a third equivalent way: as saying that each is sufficient for the other. One may summarize any—and thus all—of these cases by the statement "P if and only if Q," which is denoted by P ⇔ Q.

For example, in graph theory a graph G is called bipartite if it is possible to assign to each of its vertices the color black or white in such a way that every edge of G has one endpoint of each color. And for any graph to be bipartite, it is a necessary and sufficient condition that it contain no odd-length cycles. Thus, discovering whether a graph has any odd cycles tells one whether it is bipartite and vice versa. A philosopher[4] might characterize this state of affairs thus: "Although the concepts of bipartiteness and absence of odd cycles differ in intension, they have identical extension."[5]

If P is necessary and sufficient for Q, then Q is necessary and sufficient for P, because necessity of one for the other is equivalent to sufficiency of the other for the first one. Both of these statements (that one of P or Q is necessary and sufficient for the other) are equivalent to both of the statements "P is true if and only if Q is true" and "Q is true if and only if P is true".
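The bipartite-graph example lends itself to a short sketch: a breadth-first 2-colouring succeeds exactly when the graph has no odd cycle, so the same procedure decides both of the equivalent conditions. The two graphs below are invented examples.

    # Sketch: a graph is bipartite iff it has no odd cycle; a BFS 2-colouring
    # finds a valid black/white assignment exactly when no odd cycle exists.
    from collections import deque

    def is_bipartite(adjacency):
        colour = {}
        for start in adjacency:
            if start in colour:
                continue
            colour[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adjacency[u]:
                    if v not in colour:
                        colour[v] = 1 - colour[u]     # give the neighbour the other colour
                        queue.append(v)
                    elif colour[v] == colour[u]:      # an edge inside one colour class,
                        return False                  # which implies an odd cycle
        return True

    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}     # 4-cycle: bipartite
    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}              # 3-cycle: odd, not bipartite
    print(is_bipartite(square), is_bipartite(triangle))        # True False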

7.6 See also

• Causality
• Material implication
• Wason selection task
• Closed concept

7.6.1 Argument forms involving necessary and sufficient conditions

Valid forms of argument

• Modus ponens
• Modus tollens

Invalid forms of argument (i.e. fallacies)

• Affirming the consequent
• Denying the antecedent

7.7 References

[1] Betz, Frederick (2011). Managing Science: Methodology and Organization of Research. New York: Springer. p. 247. ISBN 978-1-4419-7487-7.

[2] Manktelow, K. I. (1999). Reasoning and Thinking. East Sussex, UK: Psychology Press. ISBN 0-86377-708-2.

[3] Asnina, Erika; Osis, Janis; Jansone, Asnate (2013). “Formal Specification of Topological Relations”. Databases and Information Systems VII: 175. doi:10.3233/978-1-61499-161-8-175.

[4] Stanford University primer, 2006.

[5] “Meanings, in this sense, are often called intensions, and things designated, extensions. Contexts in which extension is all that matters are, naturally, called extensional, while contexts in which extension is not enough are intensional. Mathematics is typically extensional throughout.” Stanford University primer, 2006.

7.8 External links

• Critical thinking web tutorial: Necessary and Sufficient Conditions
• Simon Fraser University: Concepts with examples

Chapter 8

Occam’s razor

For the aerial theatre company, see Ockham’s Razor Theatre Company.

Andreas Cellarius’s illustration of the Copernican system, from the Harmonia Macrocosmica (1708). The motions of the sun, moon and other solar system planets can be calculated using a geocentric model (the earth is at the center) or using a heliocentric model (the sun is at the center). Both work, but the geocentric system requires many more assumptions than the heliocentric system, which has only seven. This was pointed out in a preface to Copernicus’ first edition of De revolutionibus orbium coelestium.

Occam’s razor (also written as Ockham’s razor, and in Latin lex parsimoniae, which means ‘law of parsimony’) is a problem-solving principle devised by William of Ockham (c. 1287–1347), an English Franciscan friar, scholastic philosopher, and theologian. The principle states that among competing hypotheses that predict equally well, the one with the fewest assumptions should be selected. Other, more complicated solutions may ultimately prove to provide better predictions, but—in the absence of differences in predictive ability—the fewer assumptions that are made, the better.

The application of the principle can be used to shift the burden of proof in a discussion. However, Alan Baker, who suggests this in the online Stanford Encyclopedia of Philosophy, is careful to point out that his suggestion should not be taken generally, but only as it applies in a particular context, namely philosophers who argue in opposition to metaphysical theories that involve an allegedly “superfluous ontological apparatus.”[lower-alpha 1] Baker then notes that principles, including Occam’s razor, are often expressed in a way that is unclear regarding which facet of “simplicity”—parsimony or elegance—the principle refers to, and that in a hypothetical formulation the facets of simplicity may work in different directions: a simpler description may refer to a more complex hypothesis, and a more complex description may refer to a simpler hypothesis.[lower-alpha 2]

Solomonoff’s theory of inductive inference is a mathematically formalized Occam’s razor:[2][3][4][5][6][7] shorter computable theories have more weight when calculating the probability of the next observation, using all computable theories that perfectly describe previous observations.

In science, Occam’s razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models.[8][9] In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there is always an infinite number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are better testable and falsifiable.[1][10][11]
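In one standard notation (a sketch of the usual formalization, not a formula quoted from this chapter), the weighting is an algorithmic prior over a fixed universal prefix machine U: every program p whose output begins with the observed data contributes weight 2^{-|p|}, so shorter programs, that is, simpler theories, dominate the prediction of the next symbol:

\[
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-|p|},
\qquad
M(x_{n+1} \mid x_1 \cdots x_n) \;=\; \frac{M(x_1 \cdots x_n x_{n+1})}{M(x_1 \cdots x_n)}.
\]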

8.1 History

The term Occam’s razor first appeared in 1852 in the works of Sir William Hamilton, 9th Baronet (1788–1856), centuries after William of Ockham’s death in 1347.[12] Ockham did not invent this “razor”—its association with him may be due to the frequency and effectiveness with which he used it (Ariew 1976). Ockham stated the principle in various ways, but the most popular version, “Entities must not be multiplied beyond necessity” (Non sunt multiplicanda entia sine necessitate), was formulated by the Irish Franciscan philosopher John Punch in his 1639 commentary on the works of Duns Scotus.[13]

8.1.1 Formulations before Ockham

Part of a page from Duns Scotus’ book Ordinatio: “Pluralitas non est ponenda sine necessitate”, i.e., “Plurality is not to be posited without necessity”

The origins of what has come to be known as Occam’s razor are traceable to the works of earlier philosophers such as John Duns Scotus (1265–1308), Robert Grosseteste (1175–1253), Maimonides (Moses ben-Maimon, 1138–1204), and even Aristotle (384–322 BC).[14][15] Aristotle writes in his Posterior Analytics, “We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses.”[16] Ptolemy (c. AD 90 – c. AD 168) stated, “We consider it a good principle to explain the phenomena by the simplest hypothesis possible.”[17] Phrases such as “It is vain to do with more what can be done with fewer” and “A plurality is not to be posited without necessity” were commonplace in 13th-century scholastic writing.[17]

Robert Grosseteste, in Commentary on [Aristotle’s] the Posterior Analytics Books (Commentarius in Posteriorum Analyticorum Libros) (c. 1217–1220), declares: “That is better and more valuable which requires fewer, other circumstances being equal... For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal.”[18]

The Summa Theologica of Thomas Aquinas (1225–1274) states that “it is superfluous to suppose that what can be accounted for by a few principles has been produced by many”. Aquinas uses this principle to construct an objection to God’s existence, an objection that he in turn answers and refutes generally (cf. quinque viae), and specifically, through an argument based on causality.[19] Hence, Aquinas acknowledges the principle that today is known as Occam’s razor, but prefers causal explanations to other simple explanations (cf. also Correlation does not imply causation).

The Indian Hindu philosopher Madhva in verse 400 of his Vishnu-Tattva-Nirnaya says: “dvidhAkalpane kalpanA-gauravamiti” (“To make two suppositions when one is enough is to err by way of excessive supposition”).

8.1.2 Ockham

William of Ockham (circa 1287–1347) was an English Franciscan friar and theologian, an influential medieval philosopher and a nominalist. His popular fame as a great logician rests chiefly on the maxim attributed to him and known as Ockham’s razor. The term razor refers to distinguishing between two hypotheses either by “shaving away” unnecessary assumptions or cutting apart two similar conclusions.

While it has been claimed that Ockham’s razor is not found in any of his writings,[20] one can cite statements such as Numquam ponenda est pluralitas sine necessitate [Plurality must never be posited without necessity], which occurs in his theological work on the Sentences of Peter Lombard (Quaestiones et decisiones in quattuor libros Sententiarum Petri Lombardi (ed. Lugd., 1495), i, dist. 27, qu. 2, K). Nevertheless, the precise words sometimes attributed to Ockham, entia non sunt multiplicanda praeter necessitatem (entities must not be multiplied beyond necessity),[21] are absent in his extant works;[22] this particular phrasing comes from John Punch,[23] who described the principle as a “common axiom” (axioma vulgare) of the Scholastics.[13]

Ockham’s contribution seems to be to restrict the operation of this principle in matters pertaining to miracles and God’s power: so, in the Eucharist, a plurality of miracles is possible, simply because it pleases God.[17] This principle is sometimes phrased as pluralitas non est ponenda sine necessitate (“plurality should not be posited without necessity”).[24]

In his Summa Totius Logicae, i. 12, Ockham cites the principle of economy, Frustra fit per plura quod potest fieri per pauciora (“It is futile to do with more things that which can be done with fewer”). (Thorburn, 1918, pp. 352–53; Kneale and Kneale, 1962, p. 243.)

8.1.3 Later formulations

To quote Isaac Newton, “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes.”[25][26]

Bertrand Russell offers a particular version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.”[27]

Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. This theory is a mathematical formalization of Occam’s razor.[2][3][4][5][28]

Another technical approach to Occam’s razor is ontological parsimony.[29] The widespread layperson’s formulation that “the simplest explanation is usually the correct one” appears to have been derived from Occam’s razor.

8.2 Justifications

Beginning in the 20th century, epistemological justifications based on induction, logic, pragmatism, and especially probability theory have become more popular among philosophers.

8.2.1 Aesthetic

Prior to the 20th century, it was a commonly held belief that nature itself was simple and that simpler hypotheses about nature were thus more likely to be true. This notion was deeply rooted in the aesthetic value that simplicity holds for human thought, and the justifications presented for it often drew from theology. Thomas Aquinas made this argument in the 13th century, writing, “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices.”[30]

8.2.2 Empirical

Occam’s razor has gained strong empirical support in helping to converge on better theories (see “Applications” section below for some examples). In the related concept of overfitting, excessively complex models are affected by statistical noise (a problem also known as the bias-variance trade-off), whereas simpler models may capture the underlying structure better and may thus have better predictive performance. It is, however, often difficult to deduce which part of the data is noise (cf. model selection, test set, minimum description length, Bayesian inference, etc.).
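A toy numerical illustration of the point (the data below are synthetic and invented purely for the example; this is a sketch, not a claim about any particular study): fitting a needlessly flexible model to noisy data lowers the training error but typically raises the error on fresh data, while the simpler model that matches the underlying structure generalizes better.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a linear trend plus noise (all values invented for the example).
x_train = np.linspace(0.0, 1.0, 15)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)                      # fit the model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)  # error on seen data
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)     # error on fresh data
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

With this setup the degree-9 fit usually shows the lower training error and the higher test error; the exact numbers depend on the random seed.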

Testing the razor

The razor’s statement that “other things being equal, simpler explanations are generally better than more complex ones” is amenable to empirical testing. Another interpretation of the razor’s statement would be that “simpler hypotheses (not conclusions, i.e., explanations) are generally better than the complex ones”. The procedure to test the former interpretation would compare the track records of simple and comparatively complex explanations. If one accepts the first interpretation, the validity of Occam’s razor as a tool would then have to be rejected if the more complex explanations were more often correct than the less complex ones (while the converse would lend support to its use). If the latter interpretation is accepted, the validity of Occam’s razor as a tool could possibly be accepted if the simpler hypotheses led to correct conclusions more often than not.

In the history of competing hypotheses, the simpler hypotheses have led to mathematically rigorous and empirically verifiable theories. In the history of competing explanations, this is not the case—at least not generally. Some increases in complexity are sometimes necessary. So there remains a justified general bias toward the simpler of two competing explanations.

To understand why, consider that for each accepted explanation of a phenomenon, there is always an infinite number of possible, more complex, and ultimately incorrect, alternatives. This is so because one can always burden failing explanations with ad hoc hypotheses. Ad hoc hypotheses are justifications that prevent theories from being falsified. Even other empirical criteria, such as consilience, can never truly eliminate such explanations as competition. Each true explanation, then, may have had many alternatives that were simpler and false, but also an infinite number of alternatives that were more complex and false. But if an alternate ad hoc hypothesis were indeed justifiable, its implicit conclusions would be empirically verifiable. On a commonly accepted repeatability principle, these alternate theories have never been observed and continue to escape observation. In addition, one does not say an explanation is true if it has not withstood this principle.

Put another way, any new, and even more complex, theory can still possibly be true. For example, if an individual makes supernatural claims that leprechauns were responsible for breaking a vase, the simpler explanation would be that he is mistaken, but ongoing ad hoc justifications (e.g., "... and that’s not me on the film; they tampered with that, too.”) successfully prevent outright falsification. This endless supply of elaborate competing explanations, called saving hypotheses, cannot be ruled out—except by using Occam’s razor.[31][32][33]

8.2.3 Practical considerations and pragmatism

See also: pragmatism and problem of induction

Possible explanations can become needlessly complex. It is coherent, for instance, to add the involvement of leprechauns to any explanation, but Occam’s razor would prevent such additions unless they were necessary.

The common form of the razor, used to distinguish between equally explanatory hypotheses, may be supported by the practical fact that simpler theories are easier to understand. Some argue that Occam’s razor is not an inference-driven model but a heuristic maxim for choosing among other models, and that it instead underlies induction.

Alternatively, if one wants to have a reasonable discussion, one may be practically forced to accept Occam’s razor in the same way one is simply forced to accept the laws of thought and inductive reasoning (given the problem of induction). Philosopher Elliott Sober states that not even reason itself can be justified on any reasonable grounds, and that we must start with first principles of some kind (otherwise an infinite regress occurs).

The pragmatist may go on to argue, as David Hume did on the topic of induction, that there is no satisfying alternative to granting this premise. Though one may claim that Occam’s razor is invalid as a premise that helps regulate theories, putting this doubt into practice would mean doubting whether every step forward will result in locomotion or a nuclear explosion. In other words: “What’s the alternative?"

8.2.4 Mathematical

One justification of Occam’s razor is a direct result of basic probability theory. By definition, all assumptions introduce possibilities for error; if an assumption does not improve the accuracy of a theory, its only effect is to increase the probability that the overall theory is wrong.

There have also been other attempts to derive Occam’s razor from probability theory, including notable attempts made by Harold Jeffreys and E. T. Jaynes. The probabilistic (Bayesian) basis for Occam’s razor is elaborated by David J. C. MacKay in chapter 28 of his book Information Theory, Inference, and Learning Algorithms,[34] where he emphasises that a prior bias in favour of simpler models is not required.

William H. Jefferys (no relation to Harold Jeffreys) and James O. Berger (1991) generalize and quantify the original formulation’s “assumptions” concept as the degree to which a proposition is unnecessarily accommodating to possible observable data.[35] They state, “A hypothesis with fewer adjustable parameters will automatically have an enhanced posterior probability, due to the fact that the predictions it makes are sharp.”[35] The model they propose balances the precision of a theory’s predictions against their sharpness—preferring theories that sharply make correct predictions over theories that accommodate a wide range of other possible results. This, again, reflects the mathematical relationship between key concepts in Bayesian inference (namely marginal probability, conditional probability, and posterior probability).
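A minimal worked example of this Bayesian Occam effect (the coin-flip setting and the numbers are invented for illustration and are not from the cited sources): a hypothesis with no adjustable parameter makes a sharp prediction and, for unremarkable data, attains a higher marginal likelihood than a hypothesis that spreads its predictions over a free parameter.

from math import factorial

def marginal_fair(n, k):
    # H0: fair coin, no adjustable parameters -> P(data | H0) = (1/2)^n
    return 0.5 ** n

def marginal_biased(n, k):
    # H1: unknown bias theta with a uniform prior on [0, 1]:
    # integral of theta^k (1 - theta)^(n - k) d theta = k! (n - k)! / (n + 1)!
    return factorial(k) * factorial(n - k) / factorial(n + 1)

n, k = 10, 6                     # 6 heads in 10 flips (illustrative data only)
m0, m1 = marginal_fair(n, k), marginal_biased(n, k)
print(m0, m1, m0 / m1)           # Bayes factor > 1 favours the simpler hypothesis

For data that strongly suggest a bias (say 10 heads in 10 flips), the same calculation turns against the simpler hypothesis, which is the sense in which the razor here is automatic rather than a built-in prior bias.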

8.2.5 Other philosophers

Karl Popper

Karl Popper argues that a preference for simple theories need not appeal to practical or aesthetic considerations. Our preference for simplicity may be justified by its falsifiability criterion: we prefer simpler theories to more complex ones “because their empirical content is greater; and because they are better testable” (Popper 1992). The idea here is that a simple theory applies to more cases than a more complex one, and is thus more easily falsifiable. This is again comparing a simple theory to a more complex theory where both explain the data equally well.

Elliott Sober

The philosopher of science Elliott Sober once argued along the same lines as Popper, tying simplicity with “informativeness”: the simplest theory is the more informative, in the sense that it requires less information to answer a question.[36] He has since rejected this account of simplicity, purportedly because it fails to provide an epistemic justification for simplicity. He now believes that simplicity considerations (and considerations of parsimony in particular) do not count unless they reflect something more fundamental. Philosophers, he suggests, may have made the error of hypostatizing simplicity (i.e., endowing it with a sui generis existence), when it has meaning only when embedded in a specific context (Sober 1992). If we fail to justify simplicity considerations on the basis of the context in which we use them, we may have no non-circular justification: “Just as the question 'why be rational?' may have no non-circular answer, the same may be true of the question 'why should simplicity be considered in evaluating the plausibility of hypotheses?'"[37]

Richard Swinburne

Richard Swinburne argues for simplicity on logical grounds:

... the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth. —Swinburne 1997

According to Swinburne, since our choice of theory cannot be determined by data (see Underdetermination and Quine-Duhem thesis), we must rely on some criterion to determine which theory to use. Since it is absurd to have no logical method for settling on one hypothesis amongst an infinite number of equally data-compliant hypotheses, we should choose the simplest theory: “Either science is irrational [in the way it judges theories and predictions probable] or the principle of simplicity is a fundamental synthetic a priori truth.” (Swinburne 1997).

Ludwig Wittgenstein

From the Tractatus Logico-Philosophicus:

• 3.328 If a sign is not necessary then it is meaningless. That is the meaning of Occam’s Razor.

(If everything in the symbolism works as though a sign had meaning, then it has meaning.)

• 4.04 In the proposition there must be exactly as many things distinguishable as there are in the state of affairs which it represents. They must both possess the same logical (mathematical) multiplicity (cf. Hertz’s Mechanics, on Dynamic Models).

• 5.47321 Occam’s Razor is, of course, not an arbitrary rule nor one justified by its practical success. It simply says that unnecessary elements in a symbolism mean nothing. Signs which serve one purpose are logically equivalent; signs which serve no purpose are logically meaningless.

and on the related concept of “simplicity":

• 6.363 The procedure of induction consists in accepting as true the simplest law that can be reconciled with our experiences.

8.3 Applications

8.3.1 Science and the scientific method

In science, Occam’s razor is used as a heuristic to guide scientists in developing theoretical models rather than as an arbiter between published models.[8][9] In physics, parsimony was an important heuristic in Albert Einstein's formulation of special relativity,[38][39] in the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler,[40] and in the development of quantum mechanics by Max Planck, Werner Heisenberg and Louis de Broglie.[9][41] In chemistry, Occam’s razor is often an important heuristic when developing a model of a reaction mechanism.[42][43] Although it is useful as a heuristic in developing models of reaction mechanisms, it has been shown to fail as a criterion for selecting among some published models.[9]

In this context, Einstein himself expressed caution when he formulated Einstein’s Constraint: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”. An often-quoted version of this constraint (which cannot be verified as posited by Einstein himself)[44] says “Everything should be kept as simple as possible, but no simpler.”

In the scientific method, parsimony is an epistemological, metaphysical or heuristic preference, not an irrefutable principle of logic or a scientific result.[1][10][45] As a logical principle, Occam’s razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often support more complex theories than do existing data. Science prefers the simplest explanation that is consistent with the data available at a given time, but the simplest explanation may be ruled out as new data become available.[8][10] That is, science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than favoring one theory over another based merely on philosophical principles.[1][10][11]

When scientists use the idea of parsimony, it has meaning only in a very specific context of inquiry. Several background assumptions are required for parsimony to connect with plausibility in a particular research problem. The reasonableness of parsimony in one research context may have nothing to do with its reasonableness in another. It is a mistake to think that there is a single global principle that spans diverse subject matter.[11]

It has been suggested that Occam’s razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. There is little empirical evidence that the world is actually simple or that simple accounts are more likely to be true than complex ones.[46]

Most of the time, Occam’s razor is a conservative tool, cutting out crazy, complicated constructions and assuring that hypotheses are grounded in the science of the day, thus yielding “normal” science: models of explanation and prediction.

There are, however, notable exceptions where Occam’s razor turns a conservative scientist into a reluctant revolutionary. For example, Max Planck interpolated between the Wien and Jeans radiation laws and used Occam’s razor logic to formulate the quantum hypothesis, even resisting that hypothesis as it became more obvious that it was correct.[9] Appeals to simplicity were used to argue against the phenomena of meteorites, ball lightning, continental drift, and reverse transcriptase.

One can argue for atomic building blocks for matter, because it provides a simpler explanation for the observed reversibility of both mixing and chemical reactions as simple separation and rearrangements of atomic building blocks. At the time, however, the atomic theory was considered more complex because it implied the existence of invisible particles that had not been directly detected. Ernst Mach and the logical positivists rejected John Dalton's atomic theory until the reality of atoms was more evident in Brownian motion, as shown by Albert Einstein.[47] In the same way, postulating the aether is more complex than transmission of light through a vacuum. At the time, however, all known waves propagated through a physical medium, and it seemed simpler to postulate the existence of a medium than to theorize about wave propagation without a medium. Likewise, Newton’s idea of light particles seemed simpler than Christiaan Huygens’s idea of waves, so many favored it. In this case, as it turned out, neither the wave—nor the particle—explanation alone suffices, as light behaves like waves and like particles.

Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of natural laws, and the constancy of natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified. Occam’s razor and parsimony support, but do not prove, these axioms of science. The general principle of science is that theories (or models) of natural law must be consistent with repeatable experimental observations. This ultimate arbiter (selection criterion) rests upon the axioms mentioned above.[10]

There are examples where Occam’s razor would have favored the wrong theory given the available data. Simplicity principles are useful philosophical preferences for choosing a more likely theory from among several possibilities that are all consistent with available data. A single instance of Occam’s razor favoring a wrong theory falsifies the razor as a general principle.[10] Michael Lee and others[48] provide cases in which a parsimonious approach does not guarantee a correct conclusion and, if based on incorrect working hypotheses or interpretations of incomplete data, may even strongly support a false conclusion. Lee states, “When parsimony ceases to be a guideline and is instead elevated to an ex cathedra pronouncement, parsimony analysis ceases to be science.”

If multiple models of natural law make exactly the same testable predictions, they are equivalent and there is no need for parsimony to choose a preferred one. For example, Newtonian, Hamiltonian and Lagrangian classical mechanics are equivalent. Physicists have no interest in using Occam’s razor to say the other two are wrong. Likewise, there is no demand for simplicity principles to arbitrate between wave and matrix formulations of quantum mechanics. Science often does not demand arbitration or selection criteria between models that make the same testable predictions.[10]

8.3.2 Biology

Biologists or philosophers of biology use Occam’s razor in either of two contexts, both in evolutionary biology: the units of selection controversy and systematics. George C. Williams in his book Adaptation and Natural Selection (1966) argues that the best way to explain altruism among animals is based on low-level (i.e., individual) selection as opposed to high-level group selection. Altruism is defined by some evolutionary biologists (e.g., R. Alexander, 1987; W. D. Hamilton, 1964) as behavior that is beneficial to others (or to the group) at a cost to the individual, and many posit individual selection as the mechanism that explains altruism solely in terms of the behaviors of individual organisms acting in their own self-interest (or in the interest of their genes, via kin selection). Williams was arguing against the perspective of others who propose selection at the level of the group as an evolutionary mechanism that selects for altruistic traits (e.g., D. S. Wilson & E. O. Wilson, 2007). The basis for Williams’ contention is that of the two, individual selection is the more parsimonious theory. In doing so he is invoking a variant of Occam’s razor known as Morgan’s Canon: “In no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development.” (Morgan 1903).

However, more recent biological analyses, such as Richard Dawkins' The Selfish Gene, have contended that Morgan’s Canon is not the simplest and most basic explanation. Dawkins argues the way evolution works is that the genes propagated in most copies end up determining the development of that particular species, i.e., natural selection turns out to select specific genes, and this is really the fundamental underlying principle that automatically gives individual and group selection as emergent features of evolution.

Zoology provides an example. Muskoxen, when threatened by wolves, form a circle with the males on the outside and the females and young on the inside. This is an example of a behavior by the males that seems to be altruistic. The behavior is disadvantageous to them individually but beneficial to the group as a whole and was thus seen by some to support the group selection theory. However, a much better explanation immediately offers itself once one considers that natural selection works on genes. If the male musk ox runs off, leaving his offspring to the wolves, his genes do not propagate. If, however, he fights, his genes may live on in his offspring. Thus, the “stay-and-fight” gene prevails. This is an example of kin selection. An underlying general principle thus offers a much simpler explanation, without retreating to special principles such as group selection.

Systematics is the branch of biology that attempts to establish genealogical relationships among organisms. It is also concerned with their classification. There are three primary camps in systematics: cladists, pheneticists, and evolutionary taxonomists. The cladists hold that genealogy alone should determine classification, pheneticists contend that similarity over propinquity of descent is the determining criterion, while evolutionary taxonomists say that both genealogy and similarity count in classification.[49]

It is among the cladists that Occam’s razor is to be found, although their term for it is cladistic parsimony. Cladistic parsimony (or maximum parsimony) is a method of phylogenetic inference in the construction of types of phylogenetic trees (more specifically, cladograms). Cladograms are branching, tree-like structures used to represent lines of descent based on one or more evolutionary changes. Cladistic parsimony is used to support the hypotheses that require the fewest evolutionary changes (a toy scoring computation is sketched at the end of this subsection). For some types of tree, it consistently produces the wrong results, regardless of how much data is collected (this is called long branch attraction). For a full treatment of cladistic parsimony, see Elliott Sober's Reconstructing the Past: Parsimony, Evolution, and Inference (1988). For a discussion of both uses of Occam’s razor in biology, see Sober’s article “Let’s Razor Ockham’s Razor” (1990).

Other methods for inferring evolutionary relationships use parsimony in a more traditional way. Likelihood methods for phylogeny use parsimony as they do for all likelihood tests, with hypotheses requiring few differing parameters (i.e., numbers of different rates of character change or different frequencies of character state transitions) being treated as null hypotheses relative to hypotheses requiring many differing parameters. Thus, complex hypotheses must predict data much better than do simple hypotheses before researchers reject the simple hypotheses. Recent advances employ information theory, a close cousin of likelihood, which uses Occam’s razor in the same way.

Francis Crick has commented on potential limitations of Occam’s razor in biology. He advances the argument that because biological systems are the products of (an ongoing) natural selection, the mechanisms are not necessarily optimal in an obvious sense. He cautions: “While Ockham’s razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research.”[50]

In biogeography, parsimony is used to infer ancient migrations of species or populations by observing the geographic distribution and relationships of existing organisms. Given the phylogenetic tree, ancestral migrations are inferred to be those that require the minimum amount of total movement.
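To make the scoring step concrete, here is a minimal sketch of the Fitch-style counting that cladistic parsimony relies on (the tree encoding, taxon names, and character states below are invented for the example; real analyses score many characters across many candidate trees):

def fitch_score(tree, states):
    # Minimum number of character-state changes a tree requires (Fitch parsimony).
    # tree: dict mapping each internal node to its (left, right) children.
    # states: dict mapping each leaf to its observed character state.
    changes = 0

    def assign(node):
        nonlocal changes
        if node in states:                    # leaf: its state set is fixed
            return {states[node]}
        left, right = tree[node]
        a, b = assign(left), assign(right)
        if a & b:                             # children agree on at least one state
            return a & b
        changes += 1                          # otherwise one change is forced here
        return a | b

    assign('root')
    return changes

# Toy question: which of two candidate trees needs fewer changes for one character?
states = {'A': 0, 'B': 0, 'C': 1, 'D': 1}
tree1 = {'root': ('n1', 'n2'), 'n1': ('A', 'B'), 'n2': ('C', 'D')}   # ((A,B),(C,D))
tree2 = {'root': ('n1', 'n2'), 'n1': ('A', 'C'), 'n2': ('B', 'D')}   # ((A,C),(B,D))
print(fitch_score(tree1, states), fitch_score(tree2, states))        # 1 2

Maximum parsimony then prefers tree1, the tree requiring the smaller total number of changes.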

8.3.3 Medicine

When discussing Occam’s razor in contemporary medicine, doctors and philosophers of medicine speak of diagnostic parsimony. Diagnostic parsimony advocates that when diagnosing a given injury, ailment, illness, or disease a doctor should strive to look for the fewest possible causes that account for all the symptoms. This philosophy is one of several demonstrated in the popular medical adage “when you hear hoofbeats behind you, think horses, not zebras”.

While diagnostic parsimony might often be beneficial, credence should also be given to the counter-argument known today as Hickam’s dictum, which succinctly states that “patients can have as many diseases as they damn well please.” It is often statistically more likely that a patient has several common diseases rather than a single rarer disease that explains myriad symptoms. Also, independently of statistical likelihood, some patients do in fact turn out to have multiple diseases, which by common sense nullifies the approach of insisting on explaining any given collection of symptoms with one disease. These misgivings emerge from simple probability theory—which is already taken into account in many modern variations of the razor—and from the fact that the loss function is much greater in medicine than in most of general science. Because misdiagnosis can result in the loss of a person’s health and potentially life, it is considered better to test and pursue all reasonable theories even if there is some theory that appears the most likely.

Diagnostic parsimony and the counterbalance it finds in Hickam’s dictum have very important implications in medical practice. Any set of symptoms could be indicative of a range of possible diseases and disease combinations; though at no point is a diagnosis rejected or accepted just on the basis of one disease appearing more likely than another, the continuous flow of hypothesis formulation, testing and modification benefits greatly from estimates regarding which diseases (or sets of diseases) are relatively more likely to be responsible for a set of symptoms, given the patient’s environment, habits, medical history, and so on. For example, if a hypothetical patient’s immediately apparent symptoms include fatigue and cirrhosis and they test negative for hepatitis C, their doctor might formulate a working hypothesis that the cirrhosis was caused by their drinking problem, and then seek symptoms and perform tests to formulate and rule out hypotheses as to what has been causing the fatigue; but if the doctor were to further discover that the patient’s breath inexplicably smells of garlic and they are suffering from pulmonary edema, they might decide to test for the relatively rare condition of selenium poisoning.
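The statistical point can be made with a back-of-the-envelope calculation (the prevalences below are invented purely for illustration and are not clinical figures; real reasoning must use actual prevalences and account for dependence between conditions):

# Hypothetical prevalences, invented for the example (not clinical data).
p_common_a = 0.05    # a common condition explaining one part of the symptoms
p_common_b = 0.04    # another common condition explaining the rest
p_rare     = 0.001   # a single rare condition explaining everything at once

# Treating the two common conditions as (roughly) independent:
p_both_common = p_common_a * p_common_b
print(p_both_common, p_rare)   # 0.002 vs 0.001: the two-disease account is more likely

Which way such a comparison goes depends entirely on the numbers, which is why both the adage and Hickam’s dictum have their place.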

8.3.4 Religion

Main article:

In the philosophy of religion, Occam’s razor is sometimes applied to the existence of God. William of Ockham himself was a Christian. He believed in God, and in the authority of Scripture; he writes that “nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture.”[51] Ockham believed that an explanation has no sufficient basis in reality when it does not harmonize with reason, experience, or the Bible. However, unlike many theologians of his time, Ockham did not believe God could be logically proven with arguments. To Ockham, science was a matter of discovery, but theology was a matter of revelation and faith. He states: “only faith gives us access to theological truths. The ways of God are not open to reason, for God has freely chosen to create a world and establish a way of salvation within it apart from any necessary laws that human logic or rationality can uncover.”[52]

St. Thomas Aquinas, in the Summa Theologica, uses a formulation of Occam’s razor to construct an objection to the idea that God exists, which he refutes directly with a counterargument:[53]

Further, it is superfluous to suppose that what can be accounted for by a few principles has been produced by many. But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist. For all natural things can be reduced to one principle which is nature; and all voluntary things can be reduced to one principle which is human reason, or will. Therefore there is no need to suppose God’s existence.

In turn, Aquinas answers this with the quinque viae, and addresses the particular objection above with the following answer:

Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article.

Rather than argue for the necessity of a god, some theists base their belief upon grounds independent of, or prior to, reason, making Occam’s razor irrelevant. This was the stance of Søren Kierkegaard, who viewed belief in God as a leap of faith that sometimes directly opposed reason.[54] This is also the doctrine of Gordon Clark's presuppositional apologetics, with the exception that Clark never thought the leap of faith was contrary to reason (see also fideism).

Various arguments in favour of God establish God as a useful or even necessary assumption. Contrastingly, some atheists hold firmly to the belief that assuming the existence of God introduces unnecessary complexity (Schmitt 2005, e.g., the Ultimate Boeing 747 gambit). Taking a nuanced position, philosopher Del Ratzsch[55] suggests that the application of the razor to God may not be so simple, least of all when we are comparing that hypothesis with theories postulating multiple invisible universes.[56]

Another application of the principle is to be found in the work of George Berkeley (1685–1753). Berkeley was an idealist who believed that all of reality could be explained in terms of the mind alone. He invoked Occam’s razor against materialism, stating that matter was not required by his metaphysic and was thus eliminable. One potential problem with this belief is that it’s possible, given Berkeley’s position, to find solipsism itself more in line with the razor than a God-mediated world beyond a single thinker.

In his article “Sensations and Brain Processes” (1959), J. J. C. Smart invoked Occam’s razor with the aim of justifying his preference for the mind-brain identity theory over spirit-body dualism. Dualists state that there are two kinds of substances in the universe: physical (including the body) and spiritual, which is non-physical. In contrast, identity theorists state that everything is physical, including consciousness, and that there is nothing nonphysical. Though it is impossible to appreciate the spiritual when limiting oneself to the physical, Smart maintained that identity theory explains all phenomena by assuming only a physical reality. Subsequently, Smart was severely criticized for his use (or misuse) of Occam’s razor and ultimately retracted his advocacy of it in this context.

Paul Churchland (1984) states that by itself Occam’s razor is inconclusive regarding duality. In a similar way, Dale Jacquette (1994) stated that Occam’s razor has been used in attempts to justify eliminativism in the philosophy of mind. Eliminativism is the thesis that the ontology of folk psychology, including such entities as “joy”, “desire”, “fear”, etc., is eliminable in favor of an ontology of a completed neuroscience.

8.3.5 Penal ethics

In penal theory and the philosophy of punishment, parsimony refers specifically to taking care in the distribution of punishment in order to avoid excessive punishment. In the utilitarian approach to the philosophy of punishment, Jeremy Bentham's “parsimony principle” states that any punishment greater than is required to achieve its end is unjust. The concept is related but not identical to the legal concept of proportionality. Parsimony is a key consideration of modern restorative justice, and is a component of utilitarian approaches to punishment, as well as the prison abolition movement.

Bentham believed that true parsimony would require punishment to be individualised to take account of the sensibility of the individual—an individual more sensitive to punishment should be given a proportionately lesser one, since otherwise needless pain would be inflicted. Later utilitarian writers have tended to abandon this idea, in large part due to the impracticality of determining each alleged criminal’s relative sensitivity to specific punishments.[57]

8.3.6 Probability theory and statistics

Marcus Hutter’s universal artificial intelligence builds upon Solomonoff’s mathematical formalization of the razor to calculate the expected value of an action.

There are various papers in scholarly journals deriving formal versions of Occam’s razor from probability theory, applying it in statistical inference, and using it to come up with criteria for penalizing complexity in statistical inference. Papers[58][59] have suggested a connection between Occam’s razor and Kolmogorov complexity.[60]

One of the problems with the original formulation of the razor is that it only applies to models with the same explanatory power (i.e., it only tells us to prefer the simplest of equally good models). A more general form of the razor can be derived from Bayesian model comparison, which is based on Bayes factors and can be used to compare models that don't fit the data equally well. These methods can sometimes optimally balance the complexity and power of a model. Generally, the exact Occam factor is intractable, but approximations such as Akaike information criterion, Bayesian information criterion, Variational Bayesian methods, false discovery rate, and Laplace’s method are used. Many artificial intelligence researchers are now employing such techniques, for instance through work on Occam Learning.

Statistical versions of Occam’s razor have a more rigorous formulation than what philosophical discussions produce. In particular, they must have a specific definition of the term simplicity, and that definition can vary. For example, in the Kolmogorov–Chaitin minimum description length approach, the subject must pick a Turing machine whose operations describe the basic operations believed to represent “simplicity” by the subject. However, one could always choose a Turing machine with a simple operation that happened to construct one’s entire theory and would hence score highly under the razor. This has led to two opposing camps: one that believes Occam’s razor is objective, and one that believes it is subjective.
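As a sketch of what such penalized criteria look like in practice (the data and model family below are invented for the example; the AIC and BIC formulas themselves are standard): both criteria trade goodness of fit against the number of adjustable parameters, and lower values indicate the preferred model.

import numpy as np

def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian information criterion: k ln n - 2 ln L
    return k * np.log(n) - 2 * log_likelihood

# Two nested polynomial models fit to the same synthetic, illustrative data.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 3.0 * x + rng.normal(0.0, 0.3, x.size)

for degree in (1, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)                               # MLE of the noise variance
    ll = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)      # maximized Gaussian log-likelihood
    k = degree + 2                                             # polynomial coefficients + variance
    print(f"degree {degree}: AIC {aic(ll, k):.1f}, BIC {bic(ll, k, x.size):.1f}")

With this synthetic linear data the simpler model typically comes out ahead on both criteria, even though the higher-degree fit has smaller residuals.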

Objective razor

The minimum instruction set of a universal Turing machine requires approximately the same length description across different formulations, and is small compared to the Kolmogorov complexity of most practical theories. Marcus Hutter has used this consistency to define a “natural” Turing machine of small size as the proper basis for excluding arbitrarily complex instruction sets in the formulation of razors.[61] Describing the program for the universal program as the “hypothesis”, and the representation of the evidence as program data, it has been formally proven under Zermelo–Fraenkel set theory that “the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized.”[62] Interpreting this as minimising the total length of a two-part message encoding model followed by data given model gives us the minimum message length (MML) principle.[63][64]

One possible conclusion from mixing the concepts of Kolmogorov complexity and Occam’s razor is that an ideal data compressor would also be a scientific explanation/formulation generator. Some attempts have been made to re-derive known laws from considerations of simplicity or compressibility.[65][66]

According to Jürgen Schmidhuber, the appropriate mathematical theory of Occam’s razor already exists, namely, Solomonoff’s theory of optimal inductive inference[67] and its extensions.[68] See discussions in David L. Dowe’s “Foreword re C. S. Wallace”[69] for the subtle distinctions between the algorithmic probability work of Solomonoff and the MML work of Chris Wallace, and see Dowe’s “MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness”[70] both for such discussions and for (in section 4) discussions of MML and Occam’s razor. For a specific example of MML as Occam’s razor in the problem of decision tree induction, see Dowe and Needham’s “Message Length as an Effective Ockham’s Razor in Decision Tree Induction”.[71]
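Written out, the two-part message described above has the standard MML form (a sketch in the usual notation, not a formula quoted from the cited papers): the preferred hypothesis H for data D is the one minimizing the combined length of a statement of the model and a statement of the data encoded with the model's help,

\[
\mathrm{MessageLength}(H, D) \;=\; \underbrace{-\log_2 P(H)}_{\text{model}} \;+\; \underbrace{-\log_2 P(D \mid H)}_{\text{data given model}},
\]

so an elaborate model is accepted only when the compression it buys in the second term outweighs the cost of stating it in the first.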

8.4 Controversial aspects of the razor

Occam’s razor is not an embargo against the positing of any kind of entity, or a recommendation of the simplest theory come what may.[lower-alpha 3] Occam’s razor is used to adjudicate between theories that have already passed “theoretical scrutiny” tests and are equally well-supported by evidence.[lower-alpha 4] Furthermore, it may be used to prioritize empirical testing between two equally plausible but unequally testable hypotheses, thereby minimizing costs and wastes while increasing chances of falsification of the simpler-to-test hypothesis.

Another contentious aspect of the razor is that a theory can become more complex in terms of its structure (or syntax), while its ontology (or semantics) becomes simpler, or vice versa.[lower-alpha 5] Quine, in a discussion on definition, referred to these two perspectives as “economy of practical expression” and “economy in grammar and vocabulary”, respectively.[73] The theory of relativity is often given as an example of the proliferation of complex words to describe a simple concept.

Galileo Galilei lampooned the misuse of Occam’s razor in his Dialogue. The principle is represented in the dialogue by Simplicio. The telling point that Galileo presented ironically was that if one really wanted to start from a small number of entities, one could always consider the letters of the alphabet as the fundamental entities, since one could construct the whole of human knowledge out of them.

8.5 Anti-razors

Occam’s razor has met some opposition from people who have considered it too extreme or rash. Walter Chatton (c. 1290–1343) was a contemporary of William of Ockham (c. 1287–1347) who took exception to Occam’s razor and Ockham’s use of it. In response he devised his own anti-razor: “If three things are not enough to verify an affirmative proposition about things, a fourth must be added, and so on.” Although a number of philosophers have formulated similar anti-razors since Chatton’s time, no other anti-razor has attained as much notability as Chatton’s, although this could be said of the late Italian motto of unknown attribution Se non è vero, è ben trovato (“Even if it is not true, it is well conceived”) when referred to a particularly artful explanation. For further information, see “Ockham’s Razor and Chatton’s Anti-Razor” (1984) by Armand Maurer.

Anti-razors have also been created by Gottfried Wilhelm Leibniz (1646–1716), Immanuel Kant (1724–1804), and Karl Menger (1902–1985). Leibniz’s version took the form of a principle of plenitude, as Arthur Lovejoy has called it: the idea being that God created the most varied and populous of possible worlds. Kant felt a need to moderate the effects of Occam’s razor and thus created his own counter-razor: “The variety of beings should not rashly be diminished.”[74] Karl Menger found mathematicians to be too parsimonious with regard to variables, so he formulated his Law Against Miserliness, which took one of two forms: “Entities must not be reduced to the point of inadequacy” and “It is vain to do with fewer what requires more.”

A less serious but (some might say) even more extremist anti-razor is 'Pataphysics, the “science of imaginary solutions” developed by Alfred Jarry (1873–1907). Perhaps the ultimate in anti-reductionism, “'Pataphysics seeks no less than to view each event in the universe as completely unique, subject to no laws but its own.” Variations on this theme were subsequently explored by the Argentine writer Jorge Luis Borges in his story/mock-essay “Tlön, Uqbar, Orbis Tertius”. There is also Crabtree’s Bludgeon, which cynically states that “[n]o set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated.”

8.6 See also

• Algorithmic information theory

• Chekhov’s gun

• Common sense

• Cladistics

• Falsifiability

• Greedy reductionism

• Hanlon’s razor

• Hitchens’s razor

• Inductive probability

• KISS principle

• Metaphysical

• Minimum description length

• Minimum message length

• Newton’s flaming laser sword

• Philosophy of science

• Principle of least astonishment

• Razor (philosophy)

• Scientific method

• Scientific reductionism

• Scientific skepticism

• Simplicity

8.7 Notes

[1] “The aim of appeals to simplicity in such contexts seem to be more about shifting the burden of proof, and less about refuting the less simple theory outright.”[1]

[2] “In analyzing simplicity, it can be difficult to keep its two facets – elegance and parsimony – apart. Principles such as Occam’s razor are frequently stated in a way which is ambiguous between the two notions ... While these two facets of simplicity are frequently conflated, it is important to treat them as distinct. One reason for doing so is that considerations of parsimony and of elegance typically pull in different directions.”[1]

[3] “Ockham’s razor does not say that the more simple a hypothesis, the better.”[72]

[4] “Today, we think of the principle of parsimony as a heuristic device. We don't assume that the simpler theory is correct and the more complex one false. We know from experience that more often than not the theory that requires more complicated machinations is wrong. Until proved otherwise, the more complex theory competing with a simpler explanation should be put on the back burner, but not thrown onto the trash heap of history until proven false.”[72]

[5] “While these two facets of simplicity are frequently conflated, it is important to treat them as distinct. One reason for doing so is that considerations of parsimony and of elegance typically pull in different directions. Postulating extra entities may allow a theory to be formulated more simply, while reducing the ontology of a theory may only be possible at the price of making it syntactically more complex.”[1]

8.8 References

[1] Alan Baker (2010) [2004]. “Simplicity”. Stanford Encyclopedia of Philosophy. California: Stanford University. ISSN 1095-5054.

[2] McCall, J. J. (2004). “Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov”. Metroeconomica. Wiley Online Library.

[3] Stork, D. (2001). “Foundations of Occam’s Razor and parsimony in learning”. NIPS 2001 Workshop.

[4] A.N. Soklakov (2002). “Occam’s Razor as a formal basis for a physical theory”. Foundations of Physics Letters (Springer).

[5] J. HERNANDEZ-ORALLO (2000). “Beyond the Turing Test” (PDF). Journal of Logic, Language, and ...

[6] M. Hutter (2003). “On the existence and convergence of computable universal priors”. Springer.

[7] Samuel Rathmanner; Marcus Hutter (2011). “A philosophical treatise of universal induction”. Entropy 13 (6): 1076–1136. doi:10.3390/e13061076.

[8] Hugh G. Gauch, Scientific Method in Practice, Cambridge University Press, 2003, ISBN 0-521-01708-4, ISBN 978-0-521-01708-4.

[9] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham’s Razor and Chemistry, HYLE—International Journal for Philosophy of Chemistry, Vol. 3, pp. 3–28, (1997).

[10] Courtney A, Courtney M (2008). “Comments Regarding “On the Nature Of Science"" (PDF). Physics in Canada 64 (3): 7–8. Retrieved 1 August 2012.

[11] Elliott Sober, Let’s Razor Occam’s Razor, pp. 73–93, from Dudley Knowles (ed.) Explanation and Its Limits, Cambridge University Press (1994).

[12] Vogel Carey, Toni (Oct 2010). Lewis, Rick, ed. “Parsimony (In as few words as possible)". Philosophy Now (UK) (81). Retrieved 27 October 2012.

[13] Johannes Poncius’s commentary on John Duns Scotus’s Opus Oxoniense, book III, dist. 34, q. 1. in John Duns Scotus Opera Omnia, vol.15, Ed. Luke Wadding, Louvain (1639), reprinted Paris: Vives, (1894) p.483a

[14] Aristotle, Physics 189a15, On the Heavens 271a33. See also Franklin, op cit. note 44 to chap. 9.

[15] Charlesworth, M. J. (1956). “Aristotle’s Razor”. Philosophical Studies (Ireland)

[16] Wikipedians, Complexity and Dynamics citing Richard McKeon (tr.) Aristotle’s Posterior Analytics (1963) p.150

[17] James Franklin (2001). The Science of Conjecture: Evidence and Probability before Pascal. The Johns Hopkins University Press. Chap 9. p. 241. 8.8. REFERENCES 69

[18] Alistair Cameron Crombie, Robert Grosseteste and the Origins of Experimental Science 1100–1700 (1953) pp. 85–86

[19] “SUMMA THEOLOGICA: The existence of God (Prima Pars, Q. 2)". Newadvent.org. Retrieved 2013-03-26.

[20] “What Ockham really said”. Boing Boing. 2013-02-11. Retrieved 2013-03-26.

[21] Bauer, Laurie (2007). The linguistics Student’s Handbook. Edinburgh: Edinburgh University Press. p. 155.

[22] Flew, Antony (1979). A Dictionary of Philosophy. London: Pan Books. p. 253.

[23] Alistair Cameron Crombie (1959), Medieval and Early Modern Science, Cambridge, MA: Harvard, Vol. 2, p. 30.

[24] “Ockham’s razor”. Encyclopædia Britannica. Encyclopædia Britannica Online. 2010. Retrieved 12 June 2010.

[25] Hawking, Stephen (2003). On the Shoulders of Giants. Running Press. p. 731. ISBN 0-7624-1698-X.

[26] Primary source: Newton (2011, p. 387) wrote the following two “philosophizing rules” at the beginning of part 3 of the Principia 1726 edition.

Regula I. Causas rerum naturalium non plures admitti debere, quam quæ & veræ sint & earum phænomenis explicandis sufficiant. Regula II. Ideoque effectuum naturalium ejusdem generis eædem assignandæ sunt causæ, quatenus fieri potest.

[27] Stanford Encyclopedia of Philosophy, 'Logical Construction'

[28] On the existence and convergence of computable universal priors from arxiv.org M Hutter – Algorithmic Learning Theory, 2003 – Springer.

[29] Baker, Alan (Feb 25, 2010). Edward N. Zalta, ed. “Simplicity”. The Stanford Encyclopedia of Philosophy (Summer 2011 Edition).

[30] Pegis 1945.

[31] Stanovich, Keith E. (2007). How to Think Straight About Psychology. Boston: Pearson Education, pp. 19–33.

[32] Carroll, Robert T. “Ad hoc hypothesis.” The Skeptic’s Dictionary. 22 June 2008.

[33] Swinburne 1997 and Williams, Gareth T, 2008.

[34] “Information Theory, Inference, and Learning Algorithms” (PDF).

[35] Jefferys, William H.; Berger, James O. (1991). “Ockham’s Razor and Bayesian Statistics (preprint available as “Sharpening Occam’s Razor on a Bayesian Strop”)" (PDF). American Scientist 80: 64–72.

[36] Sober, Elliott (1975). Simplicity. Oxford: Clarendon Press (an imprint of Oxford University Press). ISBN 978-0-19-824407-3.

[37] Sober, Elliott (2004). “What is the Problem of Simplicity?". In Arnold Zellner, Hugo A. Keuzenkamp & Michael McAleer. Simplicity, Inference and Modeling: Keeping it Sophisticatedly Simple. Cambridge, U.K.: Cambridge University Press. pp. 13–31. ISBN 0-521-80361-6. Retrieved 4 August 2012 ISBN 0-511-00748-5 (eBook [Adobe Reader]) paper as pdf

[38] Einstein, Albert (1905). “Annalen der Physik” (in German) (18). pp. 639–41. |chapter= ignored (help).

[39] L Nash, The Nature of the Natural Sciences, Boston: Little, Brown (1963).

[40] de Maupertuis, PLM (1744). “Mémoires de l'Académie Royale” (in French). p. 423..

[41] de Broglie, L (1925). “Annales de Physique” (in French) (3/10). pp. 22–128..

[42] RA Jackson, Mechanism: An Introduction to the Study of Organic Reactions, Clarendon, Oxford, 1972.

[43] BK Carpenter, Determination of Organic Reaction Mechanism, Wiley-Interscience, New York, 1984.

[44] Quote Investigator: “Everything Should Be Made as Simple as Possible, But Not Simpler”

[45] Sober, Eliot (1994). “Let’s Razor Occam’s Razor”. In Knowles, Dudley. Explanation and Its Limits. Cambridge University Press. pp. 73–93..

[46] Naomi Oreskes, Kristin Shrader-Frechette, Kenneth Belitz (Feb 4, 1994). “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences” (PDF). Science, 263 (5147): 641–646. Bibcode:1994Sci...263..641O. doi:10.1126/science.263.5147.641 see note 25 70 CHAPTER 8. OCCAM’S RAZOR

[47] Paul Pojman (2009). “Ernst Mach”. The Stanford Encyclopedia of Philosophy. California: Stanford University. ISSN 1095-5054. [48] Lee, M. S. Y. (2002): “Divergent evolution, hierarchy and cladistics.” Zool. Scripta 31(2): 217–219. doi:10.1046/j.1463- 6409.2002.00101.xPDF fulltext [49] Sober, Elliot (1998). Reconstructing the Past: Parsimony, Evolution, and Inference (2nd ed.). Massacusetts Institute of Technology: The MIT Press. p. 7. ISBN 0-262-69144-2. [50] Crick 1988, p. 146. [51] “Encyclopedia of Philosophy”. Stanford. |chapter= ignored (help). [52] Dale T Irvin & Scott W Sunquist. History of World Christian Movement Volume, I: Earliest to 1453, p. 434. ISBN 9781570753961. [53] “SUMMA THEOLOGICA: The existence of God (Prima Pars, Q. 2)". Newadvent.org. Retrieved 2013-03-26. [54] McDonald 2005. [55] “Ratzsch, Del”. Calvin.. [56] “Encyclopedia of Philosophy”. Stanford. |chapter= ignored (help). [57] Tonry, Michael (2005): Obsolescence and Immanence in Penal Theory and Policy. Columbia Law Review 105: 1233– 1275. PDF fulltext [58] Chris S. Wallace and David M. Boulton; Computer Journal, Volume 11, Issue 2, 1968 Page(s):185–194, “An information measure for classification.” [59] Chris S. Wallace and David L. Dowe; Computer Journal, Volume 42, Issue 4, Sep 1999 Page(s):270–283, “Minimum Message Length and Kolmogorov Complexity.” [60] Nannen, Volker. “A short introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length” (PDF). Retrieved 2010-07-03. [61] Algorithmic Information Theory [62] Paul M. B. Vitányi and Ming Li; IEEE Transactions on Information Theory, Volume 46, Issue 2, Mar 2000 Page(s):446– 464, “Minimum Description Length Induction, Bayesianism and Kolmogorov Complexity.” [63] Chris S. Wallace and David M. Boulton; Computer Journal, Volume 11, Issue 2, 1968 Page(s):185–194, “An information measure for classification.” [64] Chris S. Wallace and David L. Dowe; Computer Journal, Volume 42, Issue 4, Sep 1999 Page(s):270–283, “Minimum Message Length and Kolmogorov Complexity.” [65] 'Occam’s razor as a formal basis for a physical theory' by Andrei N. Soklakov [66] 'Why Occam’s Razor' by Russell Standish [67] Solomonoff, Ray (1964). “A formal theory of inductive inference. Part I.”. Information and Control 7 (1–22): 1964. doi:10.1016/s0019-9958(64)90223-2. [68] J. Schmidhuber (2006) “The New AI: General & Sound & Relevant for Physics.” In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, pp. 177–200 http://arxiv.org/abs/cs.AI/0302012 [69] David L. Dowe (2008): Foreword re C. S. Wallace; Computer Journal, Volume 51, Issue 5, Sept 2008 Pages:523–560. [70] David L. Dowe (2010): “MML, hybrid Bayesian network graphical models, statistical consistency, invariance and unique- ness. A formal theory of inductive inference.” Handbook of the Philosophy of Science – (HPS Volume 7) Philosophy of Statistics, Elsevier 2010 Page(s):901–982. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.709&rep= rep1&type=pdf [71] Scott Needham and David L. Dowe (2001):" Message Length as an Effective Ockham’s Razor in Decision Tree Induction.” Proc. 8th International Workshop on Artificial Intelligence and Statistics (AI+STATS 2001), Key West, Florida, U.S.A., Jan. 2001 Page(s): 253–260 http://www.csse.monash.edu.au/~{}dld/Publications/2001/Needham+Dowe2001_Ockham. pdf [72] Robert T. Carroll. “Occam’s Razor”. The Skeptic’s Dictionary Last updated 18 February 2012 [73] Quine, W V O (1961). “Two of ”. From a logical point of view. 
Cambridge: Harvard University Press. pp. 20–46. ISBN 0-674-32351-3. [74] Immanuel Kant (1929). Norman Kemp-Smith transl, ed. The . Palgrave Macmillan. p. 92. Retrieved 27 October 2012. Entium varietates non temere esse minuendas 8.9. FURTHER READING 71

8.9 Further reading

• Ariew, Roger (1976). Ockham’s Razor: A Historical and Philosophical Analysis of Ockham’s Principle of Parsimony. Champaign-Urbana, University of Illinois.

• Charlesworth, M. J. (1956). “Aristotle’s Razor”. Philosophical Studies (Ireland) 6: 105–112. doi:10.5840/philstudies1956606.

• Churchland, Paul M. (1984). Matter and Consciousness. Cambridge, Massachusetts: MIT Press. ISBN 0-262-53050-3.

• Crick, Francis H. C. (1988). What Mad Pursuit: A Personal View of Scientific Discovery. New York, New York: Basic Books. ISBN 0-465-09137-7.

• Dowe, David L.; Steve Gardner; Graham Oppy (December 2007). “Bayes not Bust! Why Simplicity is no Problem for Bayesians”. British Journal for the Philosophy of Science 58 (4): 709–754. doi:10.1093/bjps/axm033. Retrieved 2007-09-24.

• Duda, Richard O.; Peter E. Hart; David G. Stork (2000). Pattern Classification (2nd ed.). Wiley-Interscience. pp. 487–489. ISBN 0-471-05669-3.

• Epstein, Robert (1984). “The Principle of Parsimony and Some Applications in Psychology”. Journal of Mind and Behavior 5: 119–130.

• Hoffmann, Roald; Vladimir I. Minkin; Barry K. Carpenter (1997). “Ockham’s Razor and Chemistry”. HYLE—International Journal for the Philosophy of Chemistry 3: 3–28. Retrieved 2006-04-14.

• Jacquette, Dale (1994). Philosophy of Mind. Englewood Cliffs, New Jersey: Prentice Hall. pp. 34–36. ISBN 0-13-030933-8.

• Jaynes, Edwin Thompson (1994). “Model Comparison and Robustness”. Probability Theory: The Logic of Science. ISBN 0-521-59271-2.

• Jefferys, William H.; Berger, James O. (1991). “Ockham’s Razor and Bayesian Statistics” (preprint available as “Sharpening Occam’s Razor on a Bayesian Strop”) (PDF). American Scientist 80: 64–72.

• Katz, Jerrold (1998). Realistic Rationalism. MIT Press. ISBN 0-262-11229-9.

• Kneale, William; Martha Kneale (1962). The Development of Logic. London: Oxford University Press. p. 243. ISBN 0-19-824183-6.

• MacKay, David J. C. (2003). Information Theory, Inference and Learning Algorithms. Cambridge University Press. ISBN 0-521-64298-1.

• Maurer, A. (1984). “Ockham’s Razor and Chatton’s Anti-Razor”. Medieval Studies 46: 463–475.

• McDonald, William (2005). “Søren Kierkegaard”. Stanford Encyclopedia of Philosophy. Retrieved 2006-04-14.

• Menger, Karl (1960). “A Counterpart of Ockham’s Razor in Pure and Applied Mathematics: Ontological Uses”. Synthese 12 (4): 415. doi:10.1007/BF00485426.

• Morgan, C. Lloyd (1903). “Other Minds than Ours”. An Introduction to Comparative Psychology (2nd ed.). London: W. Scott. p. 59. ISBN 0-89093-171-2. Retrieved 2006-04-15.

• Newton, Isaac (2011) [1726]. Philosophiæ Naturalis Principia Mathematica (3rd ed.). London: Henry Pemberton. ISBN 978-1-60386-435-0.

• Nolan, D. (1997). “Quantitative Parsimony”. British Journal for the Philosophy of Science 48 (3): 329–343. doi:10.1093/bjps/48.3.329.

• Pegis, A. C., translator (1945). Basic Writings of St. Thomas Aquinas. New York: Random House. p. 129. ISBN 0-87220-380-8.

• Popper, Karl (1992). “7. Simplicity”. The Logic of Scientific Discovery (2nd ed.). London: Routledge. pp. 121–132. ISBN 84-309-0711-4.

• Rodríguez-Fernández, J. L. (1999). “Ockham’s Razor”. Endeavour 23 (3): 121–125. doi:10.1016/S0160-9327(99)01199-0.

• Schmitt, Gavin C. (2005). “Ockham’s Razor Suggests Atheism”. Archived from the original on 2007-02-11. Retrieved 2006-04-15.

• Smart, J. J. C. (1959). “Sensations and Brain Processes”. The Philosophical Review 68 (2): 141–156. doi:10.2307/2182164. JSTOR 2182164.

• Sober, Elliott (1975). Simplicity. Oxford: Oxford University Press.

• Sober, Elliott (1981). “The Principle of Parsimony” (PDF). British Journal for the Philosophy of Science 32 (2): 145–156. doi:10.1093/bjps/32.2.145. Retrieved 4 August 2012.

• Sober, Elliott (1990). “Let’s Razor Ockham’s Razor”. In Dudley Knowles. Explanation and its Limits. Cambridge: Cambridge University Press. pp. 73–94.

• Sober, Elliott (2002). Zellner et al., eds. “What is the Problem of Simplicity?" (PDF). Retrieved 4 August 2012.

• Swinburne, Richard (1997). Simplicity as Evidence for Truth. Milwaukee, Wisconsin: Marquette University Press. ISBN 0-87462-164-X.

• Thorburn, W. M. (1918). “The Myth of Occam’s Razor”. Mind 27 (107): 345–353. doi:10.1093/mind/XXVII.3.345.

• Williams, George C. (1966). Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought. Princeton, New Jersey: Princeton University Press. ISBN 0-691-02615-7.

8.10 External links

• What is Occam’s Razor? This essay distinguishes Occam’s Razor (used for theories with identical predictions) from the Principle of Parsimony (which can be applied to theories with different predictions).

• Skeptic’s Dictionary: Occam’s Razor

• Ockham’s Razor, an essay at The Galilean Library on the historical and philosophical implications, by Paul Newall.

• The Razor in the Toolbox: The history, use, and abuse of Occam’s razor, by Robert Novella

• NIPS 2001 Workshop “Foundations of Occam’s Razor and parsimony in learning”

• Simplicity at Stanford Encyclopedia of Philosophy

• Occam’s Razor at PlanetMath.org.

• Disproof of parsimony as a general principle in science

