Goodman's New Problem of Induction
1. The Old Problem: Recall Hume's argument that, ultimately, inductive inferences are unjustified. For instance, if I observe 1,000 copper wires, and all of them conduct electricity, I might be led to believe that ALL copper wires conduct electricity. But, how can I know this without first examining ALL of the copper in the ENTIRE universe!?

Answer: We don't think we need to examine every piece of copper to know that all of it conducts electricity. We simply assume that future (yet-to-be-made) observations will conform to past ones! i.e., all as-yet-unobserved copper will ALSO be a good conductor.

But, what justifies this assumption that the future will conform to the past? Only our past observation that, every time we arrived at times that were once future but were now present, we observed that the future did in fact conform to the past, in just the way we had expected it to. And we assume that things will keep happening in this way. But what justifies THAT assumption? …And we're off on an infinite regress.

Ultimately, all we ever really observe is one thing and then another, one thing and then another, etc.—but never any sort of NECESSARY connection between the two things. Yet, this repetition of two things found together (e.g., copper and conductivity) leads us to—for better or worse—form a HABIT, where we expect that they will ALWAYS be found together. But, "it's a habit I have" isn't a good justification for belief. So, it looks like all inductive inferences are ultimately unjustified.

Goodman's Solution: What would be required for inductive inferences to be justified? What more is wanted? Some sort of justification for The Assumption (that the future will conform to the past). But, how in the heck do we get that? There must be some way to reason from PARTIAL observation to universal claims. Perhaps it will help to consider the nature of our DE-ductive inferences first. Time for an aside.

***Digression: Achilles and the Tortoise***

In this story by Lewis Carroll (author of Alice in Wonderland), the very method of deductive reasoning is challenged. Consider a triangle whose sides are A, B, and C. If I told you that A=C, and B=C, then you'd reasonably deduce that A=B. Right?

(1) A=C
(2) B=C
(C) Therefore, A=B.

We say that this argument is valid. That is, if (1) and (2) are true, then (C) MUST also be true. But, is the truth of (C) really GUARANTEED by the truth of (1) and (2)? It seems like we're missing a claim—namely, some axiom about transitivity, which TELLS US that (C) is guaranteed by (1) and (2), like this:

(1) A=C
(2) B=C
(3) Transitivity of Identity: If A=C, and B=C, then A=B. In other words: If (1) and (2), then (C).
(C) Therefore, A=B.

Ah. Surely NOW the argument is valid. That is, if (1) and (2) and (3) are true, then (C) MUST also be true. But, is that right? Is the truth of (C) really GUARANTEED by these three statements? It seems like we're missing a claim—namely one which TELLS US that (C) is guaranteed by those three claims, like this:

(1) A=C
(2) B=C
(3) Transitivity of Identity: If A=C, and B=C, then A=B.
(4) If A=C, and B=C, and if Identity is Transitive, then A=B. In other words: If (1) and (2) and (3), then (C).
(C) Therefore, A=B.

Ah. Surely NOW the argument is valid! But wait. Is the truth of (C) really guaranteed by the truth of premises (1)–(4)? It seems that we need a premise (5)… And so on… It seems that we're off on an infinite regress, such that we need an infinite number of premises to be established before we may conclude that A=B.
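[Aside (not in Carroll's text): the regress can be made vivid in a proof-assistant setting such as Lean. Even if we write the transitivity claim in as an extra premise, the proof still has to APPLY that premise, and Carroll's question can be raised again about what licenses that application. A minimal sketch, with premise (3) written as a hypothesis h3:]

    -- Lean 4 sketch (illustrative only). Premise (3) is supplied as hypothesis h3;
    -- the proof then applies h3 to h1 and h2. Carroll's question recurs: what
    -- licenses this very application, if not yet another premise?
    example (A B C : Nat) (h1 : A = C) (h2 : B = C)
        (h3 : A = C → B = C → A = B) : A = B :=
      h3 h1 h2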
The result is that deductive inferences are impossible!

Solution: The paradox above exposes the fact that deductive reasoning requires us to make some assumptions about what sorts of relationships between premises and conclusions count as valid, and which do not. The infinite regress gets started once we insist that this assumption needs to be INCLUDED as an additional PREMISE. The traditional response is to suggest that we resist this move. We should not include claims about which logical relationships are valid and which are not as additional premises. Rather, these claims are just the background, foundational AXIOMS of logic.

***End Digression***

Back to Goodman. Now consider the following argument:

1. If today is Friday, then class meets today.
2. Today is Friday.
3. Therefore, class meets today.

This argument has the following form:

1. If P, then Q.
2. P.
3. Therefore, Q.

This is a good inference. What makes it good? Easy: It has the right FORM. Those who study logic learn that, actually, ANY argument with this form is a good argument. Substitute ANY meaningful statements in for P and Q, and this inference will always be a good one. But, we do not need an additional premise which tells us that the relationship between (1)+(2) and (3) is truth-preserving—i.e., 'If both "if P, then Q" and "P", then "Q"'—rather, it is just taken as a basic rule or axiom of deductive reasoning that the inference above is a good one.

But, what makes the rule above a GOOD rule? How did we determine this? Goodman's answer: The rule conforms to accepted practices of reasoning. [Is that a good reason? What justifies our deductive inference rules? Could we maybe appeal to the PC-Principle?]

A-ha! But, then, surely we can say the same about IN-ductive reasoning! For instance,

1. F1 is a G; F2 is a G; F3 is a G; And so on. (e.g., this copper wire is conductive; etc.)
2. Therefore, all F's are G's. (therefore, all copper wire is conductive.)

What makes an inductive inference a good one? Answer: It has the right FORM, following the RULES of inductive logic. And what makes something a good RULE? Answer: It conforms to accepted practices of reasoning.

[In other words, just as we take it to be permissible to derive 'Q' from 'If P, then Q' and 'P' without stating the inference rule as an additional premise, Goodman seems to think that we may permissibly derive 'All F's are G's' from 'F1 is a G; F2 is a G; F3 is a G; etc.' without stating 'Unobserved instances will conform to observed instances' as an additional premise. Does that seem correct to you?]
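[A minimal sketch, not from Goodman: in a modern proof assistant such as Lean, this 'rule, not premise' point is built into the system. Modus ponens and the transitivity of identity are rules the system supplies, so the conclusions follow from premises like (1) and (2) alone, with no regress of added premises:]

    -- Modus ponens: deriving Q from "If P, then Q" and P is just a rule of the
    -- system (here, bare function application); no extra premise is stated.
    example (P Q : Prop) (hPQ : P → Q) (hP : P) : Q := hPQ hP

    -- Carroll's triangle: A = B follows from A = C and B = C via the built-in
    -- rules Eq.symm and Eq.trans; no premise (3), (4), ... is ever added.
    example (A B C : Nat) (h1 : A = C) (h2 : B = C) : A = B :=
      h1.trans h2.symm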
The Rule for Inductive Inferences: In short, the following inference is inductively valid:

1. F1 is a G; F2 is a G; F3 is a G; etc. In short: All observed F's have been G's.
2. Therefore, all F's are G's.

For instance, imagine that I find an emerald, and observe that it is green. (E1 is green.) I find another, and it is green. (E2 is green.) I find another, and it too is green. (E3 is green.) And so on, such that All OBSERVED emeralds have been green. In practice, we generally accept that this evidence confirms or justifies the claim that ALL emeralds are green. (This statement is not made 100% certain, of course, but all of those observations are thought to make it very LIKELY to be true.)

2. Accidental Regularities: But, can we plug in WHATEVER we want for F and G (as we did for deduction)? Are ALL arguments with the form above good arguments? Consider:

1. Looking at the people in this room: This person goes to W&M; And this person goes to W&M; And so does this one; And this one; And so on.
2. Therefore, Everyone goes to W&M.

Here's another one:

1. The turkey says: This day is a day when the farmer feeds me. And so is this one. And this one. And this one.
2. Therefore, All days are days when the farmer feeds the turkey.

Won't that turkey be surprised when Thanksgiving arrives! (On a related note, does my repeated observation that I have never died justify belief in my future immortality?)

With DE-ductive inferences, for any valid argument form, ALL instances of that form will be valid. Apparently it's not like that for IN-ductive inferences. It seems that SOME instances of the form above are valid, while other instances of that VERY SAME FORM are not! But, what separates the good inductive inferences from the bad ones? It is typically said that the good ones (e.g., about emeralds) are lawlike statements which identify REAL necessary connections in the world, while the bad ones are merely accidental regularities (where any perceived connection is not really a necessary one). …But, what distinguishes lawlike statements from mere accidental regularities‼?

3. Grue and Bleen: The problem just identified gets even worse. Define 'grue' as:

Grue = Something is grue if and only if it is first observed before 2020 and is green, or is first observed after 2020 and is blue.

It turns out that every emerald we have ever observed has been grue. So, by induction:

1. E1 is grue; E2 is grue; E3 is grue; etc.; i.e., All observed emeralds have been grue.
2. Therefore, All emeralds are grue.

Here's the kicker: Our past observations (that every single emerald we have ever observed has been green) EQUALLY support the following two hypotheses:

(a) Every emerald we observe after 2020 will be green.