Goodman’s New Riddle of Induction

1. The Old Problem: Recall Hume’s argument that, ultimately, inductive inferences are unjustified. For instance, if I observe 1,000 copper wires, and all of them conduct electricity, I might be led to believe that ALL copper wires conduct electricity. But, how can I know this without first examining ALL of the copper in the ENTIRE universe!? Answer: We don’t think we need to examine every piece of copper to know that all of it conducts electricity. We simply assume that future (yet-to-be-made) observations will conform to past ones! i.e., all as-yet-unobserved copper will ALSO be a good conductor.

But, what justifies this assumption that the future will conform to the past? Only our past observation that, every time we arrived at times that were once future but were now present, we observed that the future did in fact conform to the past, in just the way we had expected it to. And we assume that things will keep happening in this way. But what justifies THAT assumption? …And we’re off on an infinite regress.

Ultimately, all we ever really observe is one thing and then another, one thing and then another, etc.—but never any sort of NECESSARY connection between the two things. Yet, this repetition of two things found together (e.g., copper and conductivity) leads us to—for better or worse—form a HABIT, where we expect that they will ALWAYS be found together. But, “it’s a habit I have” isn’t a good justification for belief. So, it looks like all inductive inferences are ultimately unjustified.

Goodman’s Solution: What would be required for inductive inferences to be justified? What more is wanted? Some sort of justification for The Assumption (that the future will conform to the past). But, how in the heck do we get that? There must be some way to reason from PARTIAL observation to universal claims. Perhaps it will help to consider the nature of our DE-ductive inferences first. Time for an aside.

***Digression: Achilles and the Tortoise***

In this story, “What the Tortoise Said to Achilles,” by Lewis Carroll (author of Alice in Wonderland), the very method of deductive reasoning is challenged.

Consider a triangle with vertices A, B, and C.

If I told you that A=C, and B=C, then you’d reasonably deduce that A=B. Right?

(1) A=C
(2) B=C
(C) Therefore, A=B.

We say that this argument is valid. That is, if (1) and (2) are true, then (C) MUST also be true. But, is the truth of (C) really GUARANTEED by the truth of (1) and (2)? It seems like we’re missing a claim—namely, some axiom about transitivity, which TELLS US that (C) is guaranteed by (1) and (2), like this:

(1) A=C
(2) B=C
(3) Transitivity of Identity: If A=C, and B=C, then A=B. In other words: If (1) and (2), then (C).
(C) Therefore, A=B.

Ah. Surely NOW the argument is valid. That is, if (1) and (2) and (3) are true, then (C) MUST also be true. But, is that right? Is the truth of (C) really GUARANTEED by these three statements? It seems like we’re missing a claim—namely one which TELLS US that (C) is guaranteed by those three claims, like this:

(1) A=C
(2) B=C
(3) Transitivity of Identity: If A=C, and B=C, then A=B.
(4) If A=C, and B=C, and if Identity is Transitive, then A=B. In other words: If (1) and (2) and (3), then (C).
(C) Therefore, A=B.

Ah. Surely NOW the argument is valid! But wait. Is the truth of (C) really guaranteed by the truth of premises (1) – (4)? It seems that we need a premise (5)… And so on…

It seems that we’re off on an infinite regress, such that we need an infinite number of premises to be established before we may conclude that A=B. The result is that deductive inferences are impossible!

Solution: The above exposes the fact that deductive reasoning requires us to make some assumptions about what sorts of relationships between premises and conclusions count as valid, and which do not. The infinite regress gets started once we insist that this assumption needs to be INCLUDED as an additional PREMISE. The traditional response is to suggest that we resist this move. We should not include claims about which logical relationships are valid and which are not as additional premises. Rather, these claims are just the background, foundational AXIOMS of logic.

***End Digression***


Back to Goodman. Now consider the following argument, together with its form:

1. If today is Friday, then class meets today.   (1. If P, then Q.)
2. Today is Friday.   (2. P.)
3. Therefore, class meets today.   (3. Therefore, Q.)

This is a good inference. What makes it good? Easy: It has the right FORM. Those who study logic learn that, actually, ANY argument with this form is a good argument.

Substitute ANY meaningful statements in for P and Q, and this inference will always be a good one. But, we do not need an additional premise which tells us that the relationship between (1)+(2) and (3) is truth-preserving—i.e., ‘If both “if P, then Q” and “P”, then “Q”’—rather, it is just taken as a basic rule or axiom of deductive reasoning that the inference above is a good one.
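That basic rule can be illustrated with a brute-force check: enumerate every possible truth-value assignment to P and Q and confirm that whenever both premises hold, the conclusion does too. Here is a minimal Python sketch (my own illustration, not part of the handout’s text, treating ‘if P, then Q’ as the material conditional):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Check every possible truth-value assignment to P and Q.
for P, Q in product([True, False], repeat=2):
    premises_true = implies(P, Q) and P  # premise 1: "If P, then Q"; premise 2: "P"
    if premises_true:
        assert Q, "counterexample to modus ponens found"

print("In every case where the premises are true, the conclusion is true too.")
```

Of course, Carroll’s regress point still stands: the check itself relies on rules of inference. The sketch only shows that the rule is truth-preserving; it doesn’t settle why truth-preservation is what makes a rule good.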

But, what makes the rule above a GOOD rule? How did we determine this? Goodman’s answer: The rule conforms to accepted practices of reasoning. [Is that a good reason? What justifies our deductive inference rules? Could we maybe appeal to the PC-Principle?]

A-ha! But, then, surely we can say the same about IN-ductive reasoning! For instance,

1. F1 is a G; F2 is a G; F3 is a G; And so on. (e.g., this copper wire is conductive; etc.)
2. Therefore, all F’s are G’s. (Therefore, all copper wire is conductive.)

What makes an inductive inference a good one? Answer: It has the right FORM, following the RULES of inductive logic. And what makes something a good RULE? Answer: It conforms to accepted practices of reasoning. [In other words, just as we take it to be permissible to derive ‘Q’ from ‘If P, then Q’ and ‘P’ without stating the inference rule as an additional premise, Goodman seems to think that we may permissibly derive ‘All F’s are G’s’ from ‘F1 is a G; F2 is a G; F3 is a G; etc.’ without stating ‘Unobserved instances will conform to observed instances’ as an additional premise. Does that seem correct to you?]

The Rule for Inductive Inferences: In short, the following inference is inductively valid:

1. F1 is a G; F2 is a G; F3 is a G; etc. In short: All observed F’s have been G’s.
2. Therefore, all F’s are G’s.

For instance, imagine that I find an emerald, and observe that it is green. (E1 is green.) I find another, and it is green. (E2 is green.) I find another, and it too is green. (E3 is green.) And so on, such that All OBSERVED emeralds have been green. In practice, we generally accept that this evidence confirms or justifies the claim that ALL emeralds are green. (This statement is not made 100% certain, of course, but all of those observations are thought to make it very LIKELY to be true.)


2. Accidental Regularities: But, can we plug in WHATEVER we want for F and G (as we did for deduction)? Are ALL arguments with the form above good arguments? Consider:

1. Looking at the people in this room: This person goes to W&M; And this person goes to W&M; And so does this one; And this one; And so on.
2. Therefore, everyone goes to W&M.

Here’s another one:

1. The turkey says: This day is a day when the farmer feeds me. And so is this one. And this one. And this one.
2. Therefore, all days are days when the farmer feeds the turkey.

Won’t that turkey be surprised when Thanksgiving arrives! (On a related note, does my repeated observation that I have never died justify belief in my future immortality?)

With DE-ductive inferences, for any valid argument form, ALL instances of that form will be valid. Apparently it’s not like that for IN-ductive inferences. It seems that SOME instances of the form above are valid, while other instances of that VERY SAME FORM are not! But, what separates the good inductive inferences from the bad ones? It is typically said that the good ones (e.g., about emeralds) are lawlike statements which identify REAL necessary connections in the world, while the bad ones are merely accidental regularities (where any perceived connection is not really a necessary one). …But, what distinguishes lawlike statements from mere accidental regularities?!

3. Grue and Bleen: The problem just identified gets even worse. Define ‘grue’ as:

Grue = Something is grue if and only if it is first observed before 2020 and is green, or is first observed after 2020 and is blue.

It turns out that every emerald we have ever observed has been grue. So, by induction:

1. E1 is grue; E2 is grue; E3 is grue; etc.; i.e., All observed emeralds have been grue.
2. Therefore, All emeralds are grue.

Here’s the kicker: Our past observations (that every single emerald we have ever observed has been green) EQUALLY support the following two hypotheses:

(a) Every emerald we observe after 2020 will be green.
(b) Every emerald we observe after 2020 will be blue.

This case is WORSE than those of the W&M students or the turkey above, for—unlike those examples—it UNDERMINES our existing predictions/inductive inferences!


This is the new problem of induction. The old worry is back: NONE of our inductive inferences are justified, because our past observations seem to EQUALLY support mutually exclusive, incompatible conclusions. …That is, unless we can identify some relevant difference between the inference from greenness and the one from grueness.

Solution: Perhaps, in order to count as a lawlike statement, the properties attributed must be properly basic. It just seems like properties such as ‘being grue’ are not BASIC, whereas ‘being green’ IS basic. Thus, ‘All emeralds are grue’ is not a lawlike claim.

Reply: In reply, Goodman defines another term, a relative of grue:

Bleen = Something is bleen if and only if it is first observed before 2020 and is blue, or is first observed after 2020 and is green.

The objection has been that it is GREEN that is the more basic property, and that GRUE is derivative, or non-basic. However, that is only because we defined Grue in terms of green and blue. We might instead have done it the other way around! For instance:

Green = Something is green if and only if it is first observed before 2020 and is grue, or is first observed after 2020 and is bleen.

Blue = Something is blue if and only if it is first observed before 2020 and is bleen, or is first observed after 2020 and is grue.

Therefore, green and blue are only more basic than grue and bleen if we ASSUME that they are—but that begs the question.
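The inter-definability is easy to verify mechanically. Here is a small Python sketch (my own model, not Goodman’s: the CUTOFF year, the color strings, and the function names are all mine) that defines grue and bleen from green and blue, then re-derives green from grue and bleen and checks that the round trip is exact:

```python
from itertools import product

CUTOFF = 2020  # the handout's cutoff year; ">=" stands in for "after"

def grue(color, year):
    """Grue: green if first observed before the cutoff, blue if after."""
    return (year < CUTOFF and color == "green") or (year >= CUTOFF and color == "blue")

def bleen(color, year):
    """Bleen: blue if first observed before the cutoff, green if after."""
    return (year < CUTOFF and color == "blue") or (year >= CUTOFF and color == "green")

def green_from_grue(color, year):
    """Green, defined the 'other way around': grue before the cutoff, bleen after."""
    return (year < CUTOFF and grue(color, year)) or (year >= CUTOFF and bleen(color, year))

# The re-derived predicate picks out exactly the green things, in every case.
for color, year in product(["green", "blue"], [1900, 2100]):
    assert green_from_grue(color, year) == (color == "green")

print("Defining green in terms of grue and bleen recovers exactly the green things.")
```

The point of the symmetry: nothing in the definitions themselves tells you which pair is “really” basic; the asymmetry, if there is one, has to come from somewhere else.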

[Still, there’s SOMETHING intuitive here, right? Scientists strive to “carve nature at its joints”, identifying the REAL categories in nature (or ‘natural kinds’). For instance, when we say that some things are copper, some are ravens, some things are green, etc., don’t we take ourselves to be making claims about the ways things REALLY are, and identifying the categories into which they REALLY fall? Perhaps grue and bleen are not real categories, any more than—to use an example from Ted Sider—it would be true to say that some things are ‘inpieces’ (i.e., things that are indoors) and others are ‘outpieces’ (i.e., things that are outdoors). These might be subjective labels we can put on things, but they’re not REAL categories. Yet, surely COPPER (and other periodic table categories) is a REAL category. …Right? Even so, what distinguishes so-called real categories from made up ones?]

Great video on this topic here.


The Paradox of the Ravens (A Problem for Inductive Confirmation)

1. Intro: Imagine that you’re working in your lab, trying to cure pneumonia. So far, 100 patients with the infection who have been given penicillin have been cured. Based on these observations, you form the hypothesis: Pneumonia is cured by penicillin.

This is an inductive inference. From repeated past observations of pneumonia being cured by penicillin, you speculate that FUTURE observations will CONTINUE to conform to this pattern. In short, you form an inference of the form: All F’s are G’s. In this case,

All [cases of pneumonia] are [illnesses that are cured by penicillin].

Seemingly, your observation of an F (case of pneumonia) being a G (illness cured by penicillin) CONFIRMS this hypothesis, at least a little; i.e., it raises your confidence in it.

But, now consider something that is neither an F nor a G: For instance, blindness. In your lab, you observe that blindness is NOT cured by penicillin. Does this raise your confidence in your earlier hypothesis about pneumonia? Seemingly, it doesn’t, right? Intuitively, this observation is IRRELEVANT, right? WRONG!

2. Three Plausible Assumptions: We have said that inductive inferences have the form: All F’s are G’s. Call statements of this form: P. What counts as confirmation of P? That is, what sorts of observations would RAISE our credence in (or justification for believing) P, and what sorts of observations would LOWER it? The answer seems to be:

(1) Observation (O) of an F that is a G counts as evidence in favor of P (at least a little). That is, O raises one’s justification for believing P by at least a little.

But, now consider the following claims:

(2) If two statements are logically equivalent, then any evidence in favor of one of them counts as evidence in favor of the other.

(3) ‘All F’s are G’s’ is logically equivalent to ‘All non-G’s are non-F’s’.
    In symbolic form: (∀x)(Fx → Gx) ⟺ (∀x)(¬Gx → ¬Fx)
    Examples: All dogs are mammals ⟺ All non-mammals are non-dogs.
    Everyone 21+ can legally drink ⟺ Everyone who CAN’T legally drink is NOT 21+.

Claim (2) is quite plausible, and (3) is a logically necessary truth. Yet, together the three claims above entail the following:


(4) An observation of a non-G that is a non-F counts as evidence in favor of P.
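The equivalence in claim (3), which drives this result, can be checked by brute force at the propositional level: a conditional and its contrapositive never come apart under any truth-value assignment. A minimal Python sketch (my own illustration, not part of the handout’s text):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q'."""
    return (not p) or q

# For every truth-value assignment, 'if F then G' and its contrapositive
# 'if not-G then not-F' have exactly the same truth value.
for F, G in product([True, False], repeat=2):
    assert implies(F, G) == implies(not G, not F)

print("Fx -> Gx and not-Gx -> not-Fx agree in every case: logically equivalent.")
```

Since the two statements are true in exactly the same circumstances, claim (2) then forces any confirmation of one to count as confirmation of the other — which is where the trouble starts.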

3. The Paradox of the Ravens: Claim (4) delivers some extremely counter-intuitive results. To see why, consider the following instance of P: All ravens are black. There are four possible combinations of F and G. Here are some examples:

                      G (black)            Not-G (not black)
F (raven)             (a) A black raven    (b) A red raven
Not-F (not a raven)   (c) A black spider   (d) A red apple

Intuitively, the four boxes above confirm or disconfirm P as follows:

(a) partially confirms P. Observing one black raven raises my confidence in the claim that all ravens are black by at least a little.
(b) totally disconfirms P. If I observe a single red raven, I know for sure that ‘All ravens are black’ is false.
(c) neither confirms nor disconfirms P. Observing a black spider doesn’t make me any more or less confident in the claim that all ravens are black. It is irrelevant.
(d) neither confirms nor disconfirms P. Observing a red apple doesn’t make me any more or less confident in the claim that all ravens are black. It is irrelevant.

WRONG! Claim (4) says the OPPOSITE about box (d). In fact, observation of a red apple DOES confirm the hypothesis that all ravens are black, at least a little bit. How so?

By (3): ‘All ravens are black’ is logically equivalent to ‘All non-black things are non-ravens’.

By (1): Observation of a red apple (i.e., a non-black thing that is also a non-raven) confirms the statement ‘All non-black things are non-ravens’.

By (2): Anything that confirms ‘All non-black things are non-ravens’ ALSO confirms ‘All ravens are black’, since these two statements are logically equivalent.

Therefore, (4): Observation of a red apple DOES partially confirm ‘All ravens are black’.

This seems absurd. And, similarly, because the following are logically equivalent:

(i) All [cases of pneumonia] are [illnesses that are cured by penicillin].
(ii) All [things that are NOT illnesses cured by penicillin] are [NON-cases of pneumonia].

it would also turn out that observing a case of blindness uncured by penicillin (or even a red apple or dirty sock) would—since it confirms (ii)—confirm (i). Again, this is absurd.

The result is absurd because it entails that I can do science from my armchair. I can gain confidence in my hypotheses about gold by examining my socks, or about the Andromeda galaxy by examining the contents of my nose, and so on. That’s crazy!

Reply: Should we just accept the “absurd” implication? Consider: What sort of thing DISconfirms (i.e., REFUTES) ‘All ravens are black’? Answer: a non-black raven, of course!

So, any item which is known to have ONE of these two attributes is a potential falsifier. For instance, here is a raven. Yikes! What color is it? Black? Whew! Our hypothesis is safe. I just went out into the world and made an observation which COULD have, but did NOT, refute my hypothesis. Now I’m a little more confident in my hypothesis, P.

But now, here is a non-black thing. Yikes! What is it? A NON-raven? Whew! Our hypothesis is safe. Again, this observation COULD have refuted my hypothesis (namely, if the non-black thing WAS a raven). So, again, this should raise my credence in P.

The only box which has NEITHER of the potentially falsifying attributes is box (c), full of black non-ravens. (Michael Huemer offers this style of reply in his book, Paradox Lost.)
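The reply above can be put as a tiny classifier. This is my own sketch of the Huemer-style idea, with hypothetical labels: an observation is a potential falsifier of ‘All ravens are black’ just in case the attribute you notice first leaves open that the thing is a non-black raven.

```python
def potential_falsifier(first_seen_attribute):
    """Could this observation still turn out to be a non-black raven?

    'first_seen_attribute' is whichever attribute you notice first:
    'raven', 'non-raven', 'black', or 'non-black' (labels are mine).
    """
    return first_seen_attribute in {"raven", "non-black"}

# A raven might turn out non-black; a non-black thing might turn out to be a raven.
assert potential_falsifier("raven")
assert potential_falsifier("non-black")

# Something already known to be black, or already known to be a non-raven, can no
# longer refute the hypothesis. Box (c) -- black non-ravens -- has both 'safe' attributes.
assert not potential_falsifier("black")
assert not potential_falsifier("non-raven")

print("Only observations that could have refuted P get to raise confidence in P.")
```

On this view, confirmation tracks surviving a possible refutation, which is why box (c), and only box (c), confirms nothing.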

[Alternatively, imagine that you’re in a bar, testing the hypothesis, The legal drinking age is 21. If you see someone who IS 21+ and IS drinking alcohol, this should raise your confidence in your hypothesis. But, it seems plausible that, if you observe, say, a 19-year-old drinking Coke, this TOO should raise your confidence.]

Great video on this topic here.

***

[My Brainstorm on Goodman’s New Riddle of Induction: Imagine a world where grue and bleen were the basic predicates, or properties, where—like our world—every emerald ever pulled out of the ground, for all time, LOOKS exactly the same (namely, what we’d call ‘green’). Now imagine you’re watching a film of someone being the first to observe an emerald in 1900, and then a film of someone being the first to observe an emerald in 2100. The films would look exactly the same. And yet, the FIRST film is of a grue emerald, and the SECOND is of a bleen emerald. In short, with NO qualitatively discernible difference, at the stroke of midnight in 2020, every emerald ever discovered after that would have a totally different property than all of the ones discovered prior to that moment. I propose that a property is not basic (or, is not a ‘natural kind’) if it is such that two qualitatively indiscernible objects or events can differ with respect to it.

Worry: Am I wrong? I’m treating time as if it’s NOT a discernible property. It IS.

Possible Reply: Temporal location is a discernible, but extrinsic/relational property, not intrinsic.]