
The Puzzle of Metacoherence

Michael Huemer

1. The Metacoherence Requirement

Suppose you have a belief–say, the belief that it is raining. And suppose that you come to reflect explicitly on whether your belief constitutes knowledge. What conclusion might you come to? You must either conclude that you know that it is raining, or withhold judgment about whether you know this, or conclude that you do not know it. If you conclude that you do not know it, then it seems that there will be a kind of tension between your first-order belief that it is raining, and your second-order belief about that first-order belief. If you hold on to your first-order belief while simultaneously denying that it constitutes knowledge, then, I think, you are guilty of some sort of irrationality. This is suggested by the fact that, were you to express your attitudes in words, you would utter a Moore-paradoxical sentence along the lines of, “It is raining, but I don’t know whether it is raining.”1 What of the option of withholding judgment concerning one’s knowledge of the weather? Though this case is less clear, it seems to me that you would also evince irrationality if you were to say (or think) something to the effect of, “It is raining, and I may or may not know that.”2 If, then, your belief that it is raining rationally precludes you from taking the view that you don’t know it is raining, and it also precludes you from taking the attitude that you may or may not know this, then it seems that you must hold that you do know that it is raining. This is the idea I want to examine here. More precisely, I propose the following principle, which I call the Metacoherence Requirement (MR):

MR Categorically believing that P commits one, on reflection, to the view that one knows that P.3

1For discussion of the irrationality hinted at in the text, see section 3 below. See Moore 1993 for discussion of Moore-paradoxical sentences.

2I take it that one may express an attitude of suspended judgment regarding P by saying something of the form, “It may or may not be that P.”

3I defend the Metacoherence Requirement in my 2007b, where I formulate the principle more broadly, as requiring comprehensive, epistemic endorsement of one’s own beliefs (p. 148). Since I view knowledge attribution as the most comprehensive, epistemic endorsement of a belief (pp. 148-9), the formulation in the text above results. David Owens (2000, pp. 37-41) also defends the idea, while Klein (2004) criticizes it.

Ordinary coherence is an epistemically desirable relationship that typically holds among beliefs of the same order–for instance, first-order beliefs cohere or clash with other first-order beliefs, second-order beliefs cohere or clash with other second-order beliefs, and so on. Metacoherence, as I use the term, is an epistemically desirable relationship between beliefs and metabeliefs–for instance, first-order beliefs will be metacoherent or meta-incoherent with second-order beliefs about those first-order beliefs. If I believe that P, then the belief that I know that P would fit together with my belief that P; the belief that I do not know that P would clash with my belief that P. So if I think it is raining, I exemplify metacoherence if I accept that I know it is raining, and I exemplify meta-incoherence if I think I do not know it is raining.

A number of aspects of my statement of the Metacoherence Requirement call for comment. First, there is the technical term “categorical belief.” Categorical belief is a strong form of belief, to be contrasted with tentative or qualified belief. In particular, I assume that there is a species of belief of which two things are true: first, that it is the attitude normally expressed by assertion; second, that it is the doxastic attitude required for knowledge. When epistemologists describe knowledge as warranted, true belief, “belief” presumably must be taken in a particularly strong sense–one must have a very firm belief, a belief not beset by doubts, in order to count as knowing what one believes. And when one makes a flat-out assertion of P, one expresses a similarly confident belief that P4–someone harboring doubts about whether P could not appropriately assert P outright, but might instead say only that he believed that P, or that P was most likely correct.
Assuming, then, that there is a doxastic attitude that is expressed by outright assertions and that is implicated in the analysis of knowledge, I call that attitude “categorical belief.” Hereafter, unless otherwise indicated, “belief” shall mean categorical belief. I shall use “acceptance” for the broader notion; thus, I shall say one accepts that P when one has a belief of some kind in P, not necessarily a belief strong enough for knowledge. My assumptions about categorical belief are minimal. I do not assume, for instance, that categorical belief requires absolute certainty, or 100% credence. That is a matter open to epistemological debate: those who hold that knowledge requires 100% credence will maintain that categorical belief requires 100% credence; but those who demur at the strong claim regarding knowledge will likewise demur at the corresponding claim about categorical belief. Nor do I assume that categorical belief requires the sort of dogmatic stance that Peter Unger describes in his account of knowledge–I do not assume that a categorical believer must reject in advance, as not even the least bit

4I take this to be true even when one is lying–in the case of lies, one expresses beliefs one does not have.

relevant, all possible future evidence that might seem to disconfirm P.5 Again, whether categorical belief involves such dogmatism is a matter for epistemological debate.

The second thing to clarify about my formulation of the Metacoherence Requirement is the notion of “commitment.” The notion of commitment invoked in MR is intended to be the same as the notion in play when one says, for example, “Reliabilists are committed to denying internalism.” The statement, “Doing A commits one to doing B,” does not entail that those who do A will in fact do B, nor does it entail that those who do A ought to do B. It entails only that there is some sort of incoherence or irrationality involved in doing A while refusing to do B.6 If one believes that P, and one either denies or withholds that one knows that P, then, according to MR, one exhibits a sort of irrationality in virtue of the clash between one’s two attitudes. Perhaps one ought to accept that one knows that P, or perhaps one ought to withdraw one’s belief that P. But one rationally ought not to maintain the belief that P while at the same time refusing to accept that one knows that P.

Third, why the qualifier “on reflection”? Suppose an individual categorically believes that P but has not reflected on whether his belief constitutes knowledge. This might be because the subject lacks the concept of knowledge, or because the subject is not consciously aware of his belief that P, or simply because it has not occurred to the subject to wonder whether his belief constitutes knowledge. Whatever the reason, the subject in such a case does not take himself to know that P. But intuitively, this need not betoken any irrationality or incoherence on his part. For this reason, the Metacoherence Requirement should be understood as applying only to individuals who can and do consider, of a particular, conscious belief of theirs, whether it is knowledge.

Finally, how strongly should we read “the view that one knows that P”?
When one reflects on one’s categorical belief that P, is one committed to categorically believing that one knows that P, or merely to accepting (believing in the weak sense) that one knows that P? The stronger requirement that the agent’s second-order belief should also be categorical is more interesting than the weaker requirement, and the stronger requirement may well be correct; however, for present purposes I shall remain neutral on this question. The Metacoherence Requirement, as I discuss it herein, should thus be understood merely as positing a commitment to some form of acceptance of the proposition that one knows that P.

5Unger 1975, pp. 30-31, 105.

6Cf. Alston (1993, p. 131) on the notion of commitment.

2. A Puzzle about Metacoherence

Suppose for the sake of argument that MR is a plausible norm. In that case, we have a puzzle. The proposition that one knows that P is much stronger than the proposition that P. It therefore seems that it ought to be possible for an individual to have adequate justification for believing that P, while lacking adequate justification for accepting that he knows that P. Suppose that such an individual reflects, both on whether P and on whether he knows that P: what is the rational set of attitudes for this person to have? Since the subject is justified in believing that P, he should believe that P; since he is not justified in accepting that he knows that P, he should not accept that he knows that P; but because of the Metacoherence Requirement, he ought not to both believe P and refuse to accept that he knows that P.7

The problem can be recast as a challenge for theories of epistemic justification. Suppose one offers an account of the necessary and sufficient conditions for one’s being justified in believing a proposition. It must be impossible for an individual to satisfy those conditions with respect to P, while at the same time lacking (upon reflection) justification for accepting the proposition that he knows that P. A complete defense of a theory of justification must therefore explain why, on the theory, this is impossible. This imposes a nontrivial constraint on theories of justification, one that some otherwise plausible theories may violate.

To illustrate, consider a simple perceptual belief, my present belief that there is a hand here. And consider two views about why this belief is justified:

(1) Perhaps it is justified by virtue of its seeming to me that here is a hand. Or,

(2) Perhaps it is justified by virtue of its having been produced by a reliable mechanism, the cognitive mechanism by which my perceptual beliefs are formed.

On either of these views, it is unclear why I might not have adequate justification for believing that here is a hand, while at the same time lacking justification for thinking that that belief constitutes knowledge. My perceptual experience makes it seem to me that here is a hand, but this does not guarantee that things seem any particular way with regard to my knowing that here is a hand. I may, in other words, have the first-order appearance without any relevant second-order appearance. Similarly, the reliability of my perceptual mechanism does not guarantee that I have a reliable mechanism for determining when I have knowledge. My belief that I know that here is a hand would presumably have to be formed by a different mechanism from my belief that here is a

7I assume hereinafter that it is rational for one to believe that P, on reflection, if and only if one is justified in believing that P.

hand. So there is no obvious reason why the first-order belief could not be justified while at the same time I had no justification for the second-order claim. This is not intended to pose an insuperable obstacle, either for phenomenal conservatism or for reliabilist theories of justification. But it raises an interesting question for partisans of such theories, or of any theory of justification: why is it that, when one justifiedly believes, say, that here is a hand, one also has justification for the proposition that one knows that very fact? In the remainder of this paper, I discuss four salient approaches to this question:

a. The Anti-Metacoherence Approach: We could reject the presupposition of the puzzle, and hold that one could sometimes justifiably believe that here is a hand, while refusing to accept that one knows it.

b. The Skeptical Approach: We could deny that one can be justified in accepting that here is a hand.

c. The Bootstrapping Approach: We could hold that, when one has a justified belief that here is a hand, this somehow gives one a source of justification for the claim that one knows that here is a hand.

d. The Happy Coincidence Theory: We could hold that, when one has a justified belief that here is a hand, it fortunately happens that (usually) one also has a separate source of justification for the claim that one knows this.

There may be other ways of approaching the metacoherence puzzle. But I shall focus on these four, as the ones that occur most naturally. Though I focus on my perceptually justified belief that here is a hand, the issue can be raised for any putative source of justification: why may not this source yield justification for one to believe that P, while one lacks justification for the claim that one knows that P? In the following, I shall try to indicate why I find approaches (a)-(c) unsatisfying, and how approach (d) might be developed.

3. Questioning Metacoherence

Why accept the Metacoherence Requirement? The most obvious motivation is the argument from Moore’s Paradox: suppose that the Metacoherence norm is false. Then it is possible for someone to rationally believe P and at the same time refuse to accept that he knows that P. Since this person rationally believes that P, and since assertion is the canonical expression of belief, it seems that (barring any unusual circumstances) this person could appropriately assert P. At the same time, this person refuses to accept that he knows that P; he either thinks he does not know that P, or withholds judgment concerning whether he knows that P. If one withholds judgment about whether Q, then it seems that one could

appropriately express that attitude of suspended judgment by saying, “It may or may not be that Q.” So it seems that the person who either disbelieves or withholds that he knows that P could appropriately say at least “I may not know that P,” and perhaps “I do not know that P.” Barring any unusual circumstances, then, it should be appropriate for this individual to say either “P but I do not know that” or “P but I may not know that.” But neither assertion seems acceptable; each seems to clash with itself in the same way as Moore’s famous sentence, “He has gone out but I don’t believe it.” I refer to these sentences–that is, sentences of the form “P but I don’t believe that P,” “P but I don’t know that P,” and other sentences with the same paradoxical air–as “Moore-paradoxical sentences.”

Imagine, for example, that I ask you what the weather is like outside; is it still raining? You reply, “I do not know whether it is raining, but as a matter of fact, it is.” Or perhaps you reply, “I may not know whether it is raining, but as a matter of fact, it is.” I propose that the simplest explanation for why these assertions are defective, and for why they sound akin to contradictions, is that they express attitudes that violate the Metacoherence Requirement.

There are other explanations for the absurdity of Moore-paradoxical utterances. G.E. Moore suggested that whenever one asserts that P, one thereby implies that one knows that P. Wittgenstein suggested that the utterance “I believe that P” functions as a tentative assertion of P, rather than as a report of the speaker’s mental state. Williamson argues that knowledge is the norm of assertion.
All of these views can be parlayed into accounts of Moore’s Paradox.8 But all three accounts rely solely on alleged facts about assertions or utterances, and any such account has at least one shortcoming: it fails to explain why it is irrational to think, silently, that one does not know whether it is raining, but that as a matter of fact it is.9 To explain why that thought is irrational, we must introduce a norm governing believing–a norm such as the Metacoherence Requirement.

But what explains the Metacoherence Requirement itself? Why should there be such a norm? I think that the explanation rests on a principle about defeaters: if one has good reason to doubt that one’s belief that P constitutes knowledge, then one thereby has a defeater for one’s belief that P.10 On virtually any account of knowledge,

8Moore 1993; Wittgenstein 1980, pp. 472-8, 501; Williamson 2000, pp. 253-4.

9Shoemaker (1996, pp. 75-6) and de Almeida (2001, p. 33) press this point.

10In my view, MR explains the distinction between rebutting and undercutting defeaters: One has a rebutting defeater for P when one has grounds for doubting that P. One has an undercutting defeater for P when one has grounds for doubting that one satisfies one of the conditions (other than the truth condition) for knowing that P. Rebutting defeaters arise from the need to avoid ordinary incoherence; undercutting defeaters arise from the need to avoid meta-incoherence. Space limitations prevent further

a good reason for doubting that a belief satisfies any one of the conditions for knowledge would constitute a defeater for that belief. For instance, a good reason to doubt that P is true is clearly a defeater for the belief that P; likewise for a good reason to doubt that P is justified, or to doubt that P is fully grounded, or to doubt that there are no genuine defeaters for P, or to doubt that one’s belief forming mechanism is reliable, and so on. And if, for each condition in the analysis of knowledge, any good reason for suspecting that a given belief violates that condition would serve as a defeater for that belief, then it seems that also, any good reason for doubting that a given belief constitutes knowledge must serve as a defeater for that belief.11 Therefore, if one has good reason to doubt that one’s belief that P constitutes knowledge, then one has a defeater for one’s belief that P.

What is meant by a “good reason” for doubting a proposition? I take the notion of a reason for doubt broadly. A reason for doubting Q may involve specific evidence against Q. Or it may simply consist in Q’s having a significant a priori probability of being false, where that initial probability is not overcome by sufficient evidence in favor of Q. In general, any consideration (or set of considerations) that suggests or entails that Q may not be true counts as a reason for doubting Q. A good reason for doubting Q will be a reason that suffices to explain why it in fact makes sense not to accept Q. Thus, in my view, as long as there are considerations in light of which it makes sense not to accept that a given belief constitutes knowledge, those considerations serve as defeaters for any prima facie justification that the given belief may have had.

This takes us close to the Metacoherence Requirement. Assume that one believes that P, and one has reflected on whether one knows that P. Then there are the following possibilities:

a. After reflecting on whether one knows that P, one does not doubt that one knows that P. I think that anyone who reflects on whether Q and thence has no doubt that Q must accept that Q. So I think that in this case, one accepts that one knows that P.

b. After reflecting on whether one knows that P, one doubts that one knows that P, and one has good reason to do so. In this case, as I have suggested, one has a defeater for one’s belief that P. One should not categorically believe that P while having a defeater for that belief; thus, in this case, one should withdraw one’s categorical belief that P.

discussion of this account of defeaters here.

11I assume that if D1 is a defeater for P, and D2 is a defeater for P, then (D1 ∨ D2) is a defeater for P. I also assume that if (D1 ∨ D2) is a defeater for P, and E is analytically equivalent to (D1 ∨ D2), then E is a defeater for P.

c. After reflecting on whether one knows that P, one doubts that one knows that P, but one has no good reason to do so. In this case, I think one ought to accept that one knows that P.

If I am right in my assessment of cases (a), (b), and (c), then there is no case in which one is fully rational in continuing to believe that P while refusing to accept that one knows that P: either one accepts that one knows that P, or one is to some degree irrational in believing that P, or one is to some degree irrational in failing to accept that one knows that P. Thus, the Metacoherence Requirement is vindicated.

Perhaps the most controversial part of this argument is my assessment of case (c). Mightn’t one rationally doubt that Q, without having any specific reason to doubt that Q–perhaps just because one lacks sufficient reasons in favor of Q? To make my view of case (c) more plausible, recall that I take the notion of a reason for doubt very broadly. In case (c), one doubts that one knows that P, but there are no considerations in the light of which it makes sense for one not to accept that one knows that P (including such considerations as there being any significant chance that one doesn’t know that P). It seems to me that this really implies that it does not make sense for one not to accept that one knows P.

All of this is by way of offering an explanation for why the Metacoherence Requirement holds: it holds because whenever it is reasonable to either disbelieve or doubt (and thus withhold12) that one knows that P, one has a defeater for P that precludes one’s justifiably, categorically believing that P. This is because, for any of the ways in which one’s belief that P might violate the conditions for knowledge, any significant chance of one’s lacking knowledge of P in that way, would constitute a defeater for (a categorical belief in) P. To the extent that this explanation is plausible, we have further grounds for accepting the Metacoherence Requirement. Again, the central motivation for MR was its ability to explain such things as why it is irrational to hold that although one does not know whether P is true, P is as a matter of fact true.

4. Metacoherence and Skepticism

The simplest way to comply with the Metacoherence Requirement would be to suspend belief globally. If I neither believe that there is a hand here, nor believe that I know there is a hand here, then I have so far complied with MR; I have not adopted a meta-incoherent set of doxastic attitudes. A number of traditional skeptical arguments can be construed as implicitly

12Perhaps one can doubt that Q while also (tentatively) accepting that Q. But the remark in the text pertains to a case in which one refrains from accepting that one knows P, because of one’s doubts about whether one knows P.

appealing to the Metacoherence Requirement to motivate suspense of judgment. Skepticism is traditionally characterized as the view that no one knows anything (or that no one knows any contingent, external-world propositions, etc.). But skeptics are also known for advocating suspense of judgment; indeed, this is commonly seen as the point of skeptical arguments. The “modes” of the Pyrrhonian skeptics are today seen as involving or suggesting arguments for skepticism, that is, arguments for the conclusion that we lack knowledge; but Sextus describes the skeptics’ aim as that of inducing suspense of judgment. And the famous “skeptical arguments” of Descartes’ First Meditation–today viewed, again, as arguments for the conclusion that most of our beliefs are not knowledge–were explicitly introduced as an expedient for inducing the suspension of belief.13

The Metacoherence Requirement explains how the twin theses of skepticism–that we lack knowledge, and that we ought to suspend judgment–are connected. Suppose that the belief that P commits one to the view that one knows that P. If a belief commits one to accepting something false, then one ought to renounce that belief. Since, in the skeptics’ view, we do not know most of the things we commonly believe, we ought to renounce those beliefs.

The Metacoherence Requirement aids skeptics in another way as well. Skeptics will take MR as imposing an extra burden on believers: to be justified in believing that P, not only must one have sufficient evidence that P, but one must also have sufficient evidence for the stronger claim that one knows that P. MR thus makes it easier to argue for skepticism: if the skeptic can establish that we lack sufficient evidence that we know P, he can conclude–assuming that knowledge requires justified belief–that we in fact do not know P.

I have nothing new to say here on the subject of refuting skepticism. But I will follow the tradition of assuming that skepticism is to be avoided if possible.
It seems that there are some ordinary beliefs that are well justified–that I justifiedly believe that I have hands, for example. Let us consider how I might satisfy the Metacoherence Requirement without giving up that belief.

5. Bootstrapping

The “Bootstrapping Approach,” as I use the expression, includes any attempt to parlay

13Sextus Empiricus 2000, Book I, sections xii-xiii, pp. 10-12; Descartes 1984, p. 12. Hume is an unusual case, since he seems to disavow knowledge while embracing continued confident belief (e.g., Hume 1992, p. 187). It is interesting, however, that others took Hume to be committed to suspense of judgment, as in these remarks from Thomas Reid (1983, p. 8): “It may perhaps be unreasonable to complain of this conduct in an author who neither believes his own existence nor that of his reader . . . . Yet I cannot imagine that the author of the ‘Treatise of Human Nature’ is so sceptical as to plead this apology. He believed, against his principles, that he should be read, and that he should retain his personal identity, till he reaped the honour and reputation justly due to his metaphysical acumen.”

one’s justification for P into justification, or a key part of the justification, for the claim that one has warrant for P (where warrant is the property that makes the difference between true belief and knowledge). For example, one might cite the fact that P, as evidence against there being any genuine defeaters for one’s belief that P. Or one might cite the fact that P, together with the fact that one believes P as a result of the application of a certain belief-forming method, as evidence that one’s belief-forming method is reliable.

Stewart Cohen discusses a case in which one uses one’s belief that P to rebut a potential defeater for that belief: he sees an apparently red table, whereupon he sincerely asserts that the table is red. His son asks, “How do you know that it isn’t a white table, illuminated by red lights?” If the table were white but illuminated by red lights, this would constitute a genuine defeater for Cohen’s belief that the table is red. But, as Cohen imagines, he might reply simply: “The table is red, as you can see. Therefore, it is not white. Therefore, it is not a white table illuminated by red light.” If one can be justified in believing that a table is red simply on the basis of one’s visual experience, then it is unclear why Cohen’s imagined reply here would not be cogent.14

Can this approach be generalized? Suppose that warrant consists in justification in the absence of genuine (non-misleading) defeaters.15 The proposition that Cohen’s table is red entails that a particular would-be defeater–that the table is white but illuminated by red lights–is false. But it does not entail that there are no genuine defeaters for Cohen’s belief that the table is red. There may be an undercutting defeater for Cohen’s belief that the table is red, even if the table is in fact red. For instance, suppose that the table is red, and it is also illuminated by red lights. These lights would make it appear red even if it weren’t red.
The fact that the table was illuminated by these red lights would be an undercutting defeater for Cohen’s belief, preventing Cohen from knowing that the table is red. What justification does Cohen have for holding that neither this nor any other would-be defeater is true? Although the redness of the table does not entail that there are no genuine defeaters for Cohen’s belief that the table is red, perhaps the redness of the table renders it improbable that there are any such defeaters. Perhaps . . . but there is no obvious reason why this should be so. There is no obvious reason why red objects should be less likely to be irradiated by red light than are objects of any other color. Nor, in general, is there any obvious reason why the truth of P should render it less likely that an undercutting defeater for one’s belief that P exists. So it is unclear how one could use P to justify a denial of the existence of undercutting defeaters for P.

14Cohen 2002, p. 314. In my 2000, I suggested a similar approach to refuting the brain-in-a-vat scenario. Cohen regards the imagined reply to his son as clearly inadequate.

15See Klein 1971; Lehrer and Paxson 1969.

Now consider another account of warrant. Perhaps a belief’s being warranted is a matter of its having been formed by a reliable mechanism.16 Assume that Cohen has unproblematic knowledge (perhaps by introspection) of the mechanism by which he formed the belief that the table in front of him is red. The difficult question is what justifies him in believing that that mechanism is reliable. Perhaps Cohen could justify the claim that his color vision is reliable by reasoning along the following lines: “The table is red. My color vision tells me that the table is red. So my color vision got it right this time. This is evidence that my color vision is reliable.” If Cohen collects many such cases in which his color vision got things right, he might construct a strong inductive argument that his color vision is reliable.17

In this approach, citing P would not by itself suffice to justify the claim that one’s belief-forming mechanism (the very mechanism by which one arrived at the belief that P) is reliable. But P would be used as evidence for the claim that one’s belief-forming mechanism is reliable, and it would be just this sort of evidence that (when enough is accumulated) would justify the belief that one’s mechanism is reliable. Thus, I consider this a form of the bootstrapping approach.

Some philosophers contend that track record arguments of this sort can be a legitimate source of justification for their conclusions.18 I cannot do justice to these philosophers’ views here. But I suggest that the widespread intuition that such arguments are illegitimate provides us with fairly strong prima facie grounds for rejecting these philosophers’ theories. For most people who consider the matter, track record arguments seem fallacious in approximately the way that circular reasoning is fallacious. It seems that one cannot use the supposed truth of a belief formed by some mechanism in an argument to show that that very mechanism tends to deliver true beliefs.
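The circularity can be made vivid with a toy simulation (my own illustration, not from the text; the function name and setup are hypothetical). An agent that audits its belief-forming mechanism using only that mechanism's own verdicts as the standard of truth will record a perfect track record no matter how unreliable the mechanism really is:

```python
import random

def self_audited_accuracy(true_reliability, trials=1000):
    """Score a belief-forming mechanism using its own outputs as the standard."""
    hits = 0
    for _ in range(trials):
        fact = random.random() < 0.5                  # how the world actually is
        correct = random.random() < true_reliability  # does the mechanism get it right?
        belief = fact if correct else not fact
        # The track-record step: lacking independent access to the facts,
        # the agent checks each verdict against the verdict itself.
        hits += (belief == belief)
    return hits / trials

# The self-audited score is perfect whatever the real reliability.
print(self_audited_accuracy(0.9))  # 1.0
print(self_audited_accuracy(0.5))  # 1.0
```

The simulated world and the mechanism's real hit rate never enter the score, which is the sense in which the audit is empty: it could not have come out any other way.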
We should look for a theory that accommodates this intuition. One account of the problem with track record arguments is that they fail to enhance the probabilities of their conclusions. More precisely:

1. If J can be a legitimate source of justification for believing P, then it can be rational to raise one’s credence in P as a result of one’s acquiring J.

2. It cannot be rational to raise one’s credence in the proposition that M is reliable, as a result of one’s acquiring a track record argument for that conclusion.

16See Goldman 1979.

17Fumerton (1995, pp. 178-9), Vogel (2000, pp. 613-15), and Cohen (2002, p. 316) discuss track record arguments of this kind, arguing that reliabilism wrongly implies that such arguments are legitimate.

18Van Cleve 1984; Alston 1986; Bergmann 2004. For extended criticisms of the view, see Sanger 2000, chapter 2.

3. Therefore, a track record argument cannot be a legitimate source of justification for believing its conclusion.

Premise (1) sounds like an analytic truth–if J has no impact on the confidence one rationally ought to assign to P, in any possible circumstances, then what would be meant by saying that J provides justification for believing P? It would seem, rather, that J is irrelevant to P.

Consider some examples. Many philosophers believe that sensory experiences are a source of justification for corresponding external-world beliefs. These philosophers also hold that, if one seemingly perceives that P, and one has no grounds for doubting the veridicality of that experience, and one does not already believe that P, it is rational for one to adopt the belief that P. It would be bizarre to hold that sensory experience justifies beliefs about the external world, but yet that, when one has a sensory experience, this never has any impact on the confidence one should have in any external-world proposition.

Similarly, consider the justificatory force of deductive arguments. An interesting issue is raised by the case of deductive arguments for necessary truths. Necessary truths, according to standard probability theory, already have probability 1; thus, one’s learning of an argument for such a conclusion cannot raise the conclusion’s objective probability. Nevertheless, anyone who thinks that a deductive argument can be a source of justification for its conclusion will also hold that it can be rational to raise one’s credence in the conclusion as a result of learning of the argument. For example, it is necessary that there are infinitely many prime numbers, so this proposition has a logical probability of 1. But, lacking logical omniscience, many people do not initially see that this is the case. So they may initially assign a credence much less than 100% to that proposition. But after seeing the proof that there are infinitely many prime numbers, it is rational for such a person to raise their degree of belief in the claim that there are infinitely many prime numbers, to something close to 100%.
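The proof in question is presumably Euclid's: given any finite list of primes, the product of the list plus one must have a prime factor outside the list, so no finite list contains them all. A minimal sketch of the construction (the function name is my own):

```python
def prime_outside(primes):
    # Euclid's construction: n = (product of the listed primes) + 1.
    # Each listed prime divides n - 1, so each leaves remainder 1 when
    # dividing n; hence n's smallest prime factor is not in the list.
    n = 1
    for p in primes:
        n *= p
    n += 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # smallest prime factor of n
        d += 1
    return n  # n is itself prime

print(prime_outside([2, 3, 5]))             # 31 (2*3*5 + 1 is prime)
print(prime_outside([2, 3, 5, 7, 11, 13]))  # 59 (30031 = 59 * 509)
```

Grasping why this always yields a new prime is precisely the sort of thing that rationally moves one's credence toward 100%, even though the conclusion's logical probability was 1 all along.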
Michael Bergmann holds that a track record argument can justify its conclusion, even though it could not rationally persuade anyone who initially doubted the conclusion.19 This is an interesting position. But if one were to go on to say that, not only does a track record argument have no force for those who initially doubt its conclusion, but it also has no force for those who initially believe its conclusion, those who initially withhold its conclusion, nor indeed those who have any initial attitude toward its conclusion–then I think there would be no content left to the insistence that the argument “provided justification” for its conclusion. And that, according to premise (2), is indeed the case. Consider an analogy. I have

19Bergmann 2004.

an urn that I know to be filled with black and white marbles. Suppose that my estimate of the proportion of black marbles is r.20 There is a particular marble named “Bob,” about which I know nothing other than that it is one of the marbles in the urn. I believe, with confidence r, that Bob is black. Now, upon merely reflecting on Bob’s (alleged) blackness, would I be rational to increase my estimate of the proportion of black marbles in the urn? Plainly not–and this is true regardless of what r is. If r is only .1, then I should not use Bob’s alleged blackness as evidence that more than 10% of the marbles are black, since I am only 10% confident that Bob is black. But even if r is very high–say, .9–I still should not use Bob’s alleged blackness as evidence that more than 90% of the marbles are black–again, I am only 90% confident that Bob is black.

What if I start out with an incoherent set of degrees of belief? Suppose my estimate of the proportion of black marbles in the urn is .9, yet I am 95% confident that Bob is black. On reflection, I realize that the sole reason I take Bob to be black is that Bob is in this urn; I have no other source of justification for any claims about Bob’s color. In this case, should I raise my estimate of the proportion of black marbles, to something closer to 95%? It seems to me that the answer is, again, no: what I should do is revise my irrational 95% confidence that Bob is black. I should lower it to 90%.

Now let us apply the lesson to the case of sensory perception. Suppose that my estimate of the objective chance of my sensory perception’s delivering a true belief is 90%. That is to say, roughly, I take my perception to be 90% reliable.21 I come to believe that there is a hand here, and I know that this belief was formed solely on the basis of sensory perception (I have no other relevant evidence about whether there is a hand here).
Should I use the alleged presence of the hand as grounds for raising my estimate of the reliability of perception, above 90%? Plainly not. If I am only 90% confident that there is a hand here, I cannot use that as grounds for increasing my estimate of the reliability of perception above 90%. If, somehow, I am 95% confident that there is a hand here, then as soon as I realize that this 95% confidence is based solely on the application of a belief-forming method that I myself take to be only 90% reliable, I should adjust my confidence downward to 90%. In no case should I take a belief produced by some mechanism, by itself, as grounds for changing my estimate of

20By this, I mean the expected value, based on my subjective probability distribution, of the proportion of black marbles, i.e., r is $\sum_{n=1}^{\infty}\left[P(N=n)\times\sum_{i=0}^{n}\left(\frac{i}{n}\times P(B=i)\right)\right]$, where N is the number of marbles in the urn, B is the number of black marbles in the urn, and P is my subjective probability function.
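The expected value defined in footnote 20 can be computed directly. The following is an illustrative sketch only: the ten-marble urn and the subjective probabilities are my own assumptions, and the number of marbles N is treated as known, so the outer sum over n collapses. The sketch also checks the point made in the text: one's rational credence that Bob is black coincides with r itself, so reflecting on Bob's alleged blackness provides no grounds for raising r.

```python
# Toy urn (assumptions mine, not the paper's): N marbles, a subjective
# distribution over the number B of black marbles, and a marble "Bob"
# about which nothing is known except that it is in the urn.

N = 10  # number of marbles, taken as known for simplicity

# Hypothetical subjective probabilities P(B = i); they must sum to 1.
P_B = {2: 0.2, 5: 0.5, 9: 0.3}

# Expected proportion of black marbles: r = sum_i (i/N) * P(B = i).
r = sum((i / N) * p for i, p in P_B.items())

# Rational credence that Bob is black: by symmetry,
# P(Bob is black | B = i) = i/N, so the law of total probability
# yields exactly the same sum.
p_bob_black = sum((i / N) * p for i, p in P_B.items())

print(r, p_bob_black)  # identical by construction: 0.56 and 0.56
```

With these invented numbers, both quantities come out to 0.56; since the credence that Bob is black just is r, conditioning on Bob's blackness cannot rationally push the estimate of the proportion above r.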

21More precisely, I refer here to the expected value of the reliability of my perception (the objective chance of a given perceptual belief’s being correct), i.e., $\int_{c=0}^{1} c \cdot D(c)\,dc$, where c ranges over the possible degrees of reliability, and D is my subjective probability density function.
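The expected reliability in footnote 21 can likewise be approximated numerically. A minimal sketch under my own assumptions: the subjective density D is taken, purely for illustration, to be a Beta(9, 1) density over the possible reliabilities c, whose mean is 0.9, matching the 90% estimate used in the text.

```python
# Numerical sketch (assumptions mine): expected reliability
# E[c] = integral from 0 to 1 of c * D(c) dc, with D a hypothetical
# Beta(9, 1) subjective density, approximated by a midpoint Riemann sum.

from math import gamma

def beta_pdf(c, a=9.0, b=1.0):
    """Density of a Beta(a, b) distribution at c in (0, 1)."""
    coef = gamma(a + b) / (gamma(a) * gamma(b))
    return coef * c ** (a - 1) * (1 - c) ** (b - 1)

n = 100_000          # grid resolution
dc = 1.0 / n
expected_reliability = sum(
    (k + 0.5) * dc * beta_pdf((k + 0.5) * dc) * dc for k in range(n)
)

print(round(expected_reliability, 3))  # 0.9: the mean of Beta(a, b) is a/(a+b)
```

Any other density could be substituted for the Beta; the point is only that footnote 21's "estimate of the reliability" is this single expected value, not the full distribution.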

that mechanism’s reliability.22 I conclude that track record arguments are not a legitimate source of justification for claims about the reliability of our belief-forming mechanisms.

So much for the track record approach to establishing reliability. There may be other bootstrapping approaches I have not considered–that is, other ways of arguing that a given belief is warranted starting from that very belief or from what justifies that very belief. If there are, however, I suspect that they will be open to similar objections. I suspect that they, too, will strike us intuitively as unacceptable in a manner similar to the way circular arguments are unacceptable, and I suspect that they, too, will rely on an alleged mode of justification that is irrelevant to rational credence.

6. Happy Coincidences

What I have said so far leaves few options open. If we are to satisfy the demands of Metacoherence, without surrendering to skepticism or falling back on epistemic circularity, it looks as though we must rest our faith in some sort of happy coincidence: perhaps when I am justified in believing that there is a hand here, I just happen to also get, from another source, some justification for the additional claim that my belief about the hand is warranted.

A happy coincidence account is likely to be (though it need not be) unsystematic. That is, for different classes of beliefs, the explanation of why we are justified in believing that they constitute knowledge will likely vary; there need be no single, all-purpose account of the source of justification for the meta-beliefs by which we maintain metacoherent belief systems. And a happy coincidence account is also likely to be contingent: that is, other possible agents with justified first-order beliefs similar to our own may not have the same justification for thinking that their beliefs constitute knowledge as we do, and may not have justified second-order beliefs at all.

Consider how a reliabilist might give a happy coincidence theory. I believe that there is a hand here. This belief is justified, because my mechanism of forming beliefs based on my visual experiences is reliable. I also believe that my belief that there is a hand here is warranted. This metabelief is formed by a different mechanism–perhaps a mechanism that involves consulting my own and others’ intuitions about knowledge, testing the coherence of my perceptual beliefs, and otherwise engaging in

22For the sake of simplicity, I focus here on single outputs of a belief-forming method. If we consider sets of propositions, then the coherence of the set may rationally alter one’s estimate of the reliability of the mechanism that generates that set (see Olsson 2005 and my 2007c for conditions under which this occurs). But this, in my view, is very different from the epistemically circular track record arguments, for here we appeal to an independently ascertainable fact–that a certain set of beliefs is coherent–as evidence for the reliability of a belief-forming method. That our perceptual beliefs cohere is not itself a perceptual belief, so the appeal to coherence need not be epistemically circular.

epistemological reasoning. The latter mechanism is not guaranteed to be reliable merely because my vision is reliable–but perhaps, as it happens, the mechanism by which I form the metabelief is also reliable.

Of more interest to me is what a Phenomenal Conservative might say. Phenomenal Conservatism holds that its seeming to one that P, in the absence of defeaters, provides one with some justification for believing that P. It seems to me now that there is a hand in front of me. Assuming I have no defeaters for the claim, this gives me justification for thinking that here is a hand. Now, when I consider the claim that I know that here is a hand, I might experience a second-order appearance: maybe it just seems to me that I know that here is a hand. This isn’t guaranteed always to occur, but in this case, for me at least, the second-order appearance does in fact occur. It does seem to me that I know that here is a hand, and I have no defeaters for the claim that I know this. Thus, I have justification for thinking that I know that here is a hand.23

Is this a viable strategy for addressing the original puzzle? As initially formulated, the Metacoherence Puzzle challenged us to explain why it is impossible for one to have adequate justification to believe that P, while lacking justification to accept that one knows that P. What I have just suggested is an account of why most of us in fact have justification for believing that our perceptual beliefs are knowledge. But suppose, as might happen, that the account fails to apply to some individual. Imagine that S has perceptual experiences just like my own, so that it seems to her that there is a hand in front of her. S then considers whether she knows that there is a hand. But unlike the actual situation with myself, when S considers this, it does not seem to her that she has knowledge. And suppose that S has no other source of justification for thinking that she knows that there is a hand.
In that case, why would S not be in a position to rationally violate the Metacoherence Requirement–why, that is, might S not rationally continue to believe that there is a hand before her, while refusing to accept that she knows it? The answer to this is suggested by my earlier account (section 3 above) of why the metacoherence norm holds: namely, in this situation, S would have a defeater for her belief that there is a hand. When she considers whether she knows that there is a hand, and finds that she does not seem to know this, the very fact that she does not seem to know it (and has no other source of justification for thinking that she knows it) undercuts S’s erstwhile justification for believing that there is in fact a hand there. At that point, S should cease categorically believing that there is a hand. On this account, the Metacoherence norm holds necessarily, since in any case in which one lacks justification, on reflection, for thinking that one knows that P, one is

23Cf. the Reidian account suggested by Bergmann (2006, pp. 206-11).

rationally required to suspend one’s initial belief that P. What is contingent is the avoidance of skepticism: in possible worlds in which, upon reflection, we lack the fortunate second-order appearances telling us that our first-order beliefs are knowledge, our first-order beliefs become unjustified.

How plausible is this contingent anti-skeptical response? We have the requisite anti-skeptical appearances–we seem to ourselves to know things about the external world. Furthermore, when we consider skeptical scenarios (at least for most of us), the scenarios seem obviously false, even ridiculous. But imagine a person who, upon considering the brain in a vat scenario, had no such reaction. To Sue, it seems quite plausible that she is a brain in a vat. Furthermore, she cannot think of any arguments that she isn’t one, and she has no evidence (without relying on epistemic circularity) that she isn’t a brain in a vat. In that case, perhaps Sue really should withhold judgment concerning whether she is a brain in a vat. And after reflecting on this, it seems that she must also withhold judgment on whether she really has hands, whether the physical objects she seems to perceive around her are real, and so on.

Finally, how is the approach I have suggested any better than the bootstrapping approach? In the bootstrapping approach as described in section 5, one uses P in an argument to establish that one’s belief that P is warranted. In the approach I have described in this section, one relies on appearances to establish that first-order beliefs, themselves based on other appearances, are warranted. Assuming that warrant requires reliability, this is very similar to using appearances to establish the reliability of appearances. Why is this not an objectionable form of epistemic circularity? First, it is important that there are two classes of appearances.
One uses second-order, non-perceptual appearances to establish that beliefs based on perceptual appearances are warranted. This is not obviously a form of epistemic circularity. One could go on to ask for evidence that appearances in general are reliable–but it is not obvious why this would be needed. When I have a perceptual belief that P, it is plausible that, for this belief to count as knowledge, my perception must be reliable. But it is not necessary that appearances in general be reliable, in order for a perceptual belief to be warranted. (Nor is it clear that appearances in general must be reliable, in order for a second-order belief about a perceptual belief to be warranted.) So we need not meet the demand for a general vindication of appearances.

Second, the objection to track record arguments that I raised in section 5 does not affect my own approach. In section 5, I said that standard track record arguments do not provide justification for their conclusions, because they never affect the subjective probabilities we should attach to those conclusions. The same cannot be said about the approach of relying on second-order appearances: there is no obvious reason why, when one has the experience of its seeming to one that one knows that P, one would not be rational to increase one’s credence in the claim that one knows that P. There is nothing absurd or even odd about an epistemic practice of that sort.

Of course, we could always demand a positive argument to show that one should increase one’s confidence that a belief constitutes knowledge upon finding that it appears to be knowledge. But this would be raising an issue separate from the Metacoherence Puzzle–the general issue of why one should accept Phenomenal Conservatism. I have discussed that issue elsewhere.24 The concern of the present paper is how one may deal with the Metacoherence Puzzle, assuming that one has a theory of justification that is acceptable in other respects. Assuming that one finds Phenomenal Conservatism plausible to begin with, one should also find it plausible that, when it seems to one that one knows that P, one thereby has some justification, in the absence of defeaters, for thinking that one knows that P.

7. Conclusion

When one believes something in the unqualified way required for knowledge and sincere assertion, one must, to be fully coherent, also take one’s own belief to constitute knowledge. This principle is the best explanation for the absurdity of such utterances as, “Although I may not know whether it is raining or not, as a matter of fact, it is.” But this principle seems to ratchet up the demands on a theory of justification: in explaining why we are justified in believing that P, the epistemologist must also explain why, on reflection, we would be justified in thinking that we know that P. In particular, the epistemologist must explain how we are justified in thinking that our belief that P is warranted.

To give a perfectly general answer to this challenge, we would seem to need to rely on some form of epistemic circularity. That is, the only obvious way to guarantee that our prima facie justification for P is always matched by justification for the second-order claim that we know that P, would seem to be to use P itself, or our justification for P, to justify the claim that our belief that P is warranted. Approaches along those lines, however, strike most people as viciously circular. Our intuitions on this score are strengthened by reflection on the subjective probability that a rational person might attach to the claim that some belief-forming method is reliable. It seems clear that a rational person would not raise his estimation of the reliability of an information source, solely on the basis of some piece of information supported only by that source.

At this point, the least unacceptable approach seems to be to hold that, for most ordinary beliefs, we happen to have some source of justification for thinking that they are warranted, separate from our source of justification for the beliefs themselves.
This need not always be true–and when it is not, the result of our reflection on whether an ordinary belief constitutes knowledge will be to undercut that ordinary belief. This

24See my 2006 and 2007a.

leaves us in a precarious position. Our beliefs remain exposed to attack by skeptics who induce higher-order reflection on their status, and we have no general-purpose way of disqualifying skeptical doubts. My observation that we just seem to have knowledge hardly gives us a knock-down response to skeptics. Then again, perhaps this explains the perennial challenge that philosophers have found in skepticism.

References

Alston, William P. 1986. “Epistemic Circularity,” Philosophy and Phenomenological Research 47: 1-30.
Alston, William P. 1993. The Reliability of Sense Perception. Ithaca, N.Y.: Cornell University Press.
Bergmann, Michael. 2004. “Epistemic Circularity: Malignant and Benign,” Philosophy and Phenomenological Research 69: 709-27.
Bergmann, Michael. 2006. Justification without Awareness: A Defense of Epistemic Externalism. Oxford: Clarendon.
Cohen, Stewart. 2002. “Basic Knowledge and the Problem of Easy Knowledge,” Philosophy and Phenomenological Research 65: 309-29.
de Almeida, Claudio. 2001. “What Moore’s Paradox Is About,” Philosophy and Phenomenological Research 62: 33-58.
Descartes, Rene. 1984. Meditations on First Philosophy in The Philosophical Writings of Descartes, vol. 2, edited by John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge University Press.
Fumerton, Richard. 1995. Metaepistemology and Skepticism. Lanham, Md.: Rowman and Littlefield.
Goldman, Alvin. 1979. “What Is Justified Belief?” pp. 1-23 in Justification and Knowledge, edited by George S. Pappas. Dordrecht, The Netherlands: D. Reidel.
Huemer, Michael. 2000. “Direct Realism and the Brain-in-a-Vat Argument,” Philosophy and Phenomenological Research 61: 397-413.
Huemer, Michael. 2006. “Phenomenal Conservatism and the Internalist Intuition,” American Philosophical Quarterly 43: 147-58.
Huemer, Michael. 2007a. “Compassionate Phenomenal Conservatism,” Philosophy and Phenomenological Research 74: 30-55.
Huemer, Michael. 2007b. “Moore’s Paradox and the Norm of Belief,” pp. 142-57 in Themes from G.E. Moore: New Essays in Epistemology and Ethics, edited by Susana Nuccetelli and Gary Seay. Oxford: Oxford University Press.
Huemer, Michael. 2007c. “Weak Bayesian Coherentism,” Synthese 157: 337-46.
Hume, David. 1992. Treatise of Human Nature. Buffalo, N.Y.: Prometheus.
Klein, Peter. 1971. “A Proposed Definition of Propositional Knowledge,” Journal of Philosophy 68: 471-82.

Klein, Peter. 2004. “Skepticism: Ascent and Assent?” pp. 112-25 in Ernest Sosa and His Critics, edited by John Greco. Malden, Mass.: Blackwell.
Lehrer, Keith and Thomas Paxson. 1969. “Knowledge: Undefeated Justified True Belief,” Journal of Philosophy 66: 225-37.
Moore, G.E. 1993. “Moore’s Paradox,” pp. 207-12 in G.E. Moore: Selected Writings, edited by Thomas Baldwin. New York: Routledge.
Olsson, Erik J. 2005. Against Coherence: Truth, Probability, and Justification. Oxford: Clarendon.
Owens, David. 2000. Reason without Freedom: The Problem of Epistemic Normativity. London: Routledge.
Reid, Thomas. 1983. Inquiry and Essays, edited by Ronald Beanblossom and Keith Lehrer. Indianapolis, Ind.: Hackett.
Sanger, Larry. 2000. Epistemic Circularity: An Essay on the Problem of Meta-justification. Ph.D. dissertation, Ohio State University. URL=, accessed Feb. 1, 2009.
Sextus Empiricus. 2000. Outlines of Scepticism, edited by Julia Annas and Jonathan Barnes. Cambridge: Cambridge University Press.
Shoemaker, Sydney. 1996. “Moore’s Paradox and Self-Knowledge,” pp. 74-93 in The First-Person Perspective and Other Essays. New York: Cambridge University Press.
Unger, Peter. 1975. Ignorance: A Case for Scepticism. Oxford: Clarendon.
Van Cleve, James. 1984. “Reliability, Justification, and the Problem of Induction,” pp. 555-67 in Midwest Studies in Philosophy, vol. 9. Minneapolis: University of Minnesota Press.
Vogel, Jonathan. 2000. “Reliabilism Leveled,” Journal of Philosophy 97: 602-23.
Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
Wittgenstein, Ludwig. 1980. Remarks on the Philosophy of Psychology, vol. 1, edited by G.E.M. Anscombe and G.H. von Wright, translated by G.E.M. Anscombe. Oxford: Basil Blackwell.
