
The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’

On February 12th, 2002, Secretary of Defence Donald Rumsfeld was famously asked in a DoD press conference about the American government’s failure to provide evidence regarding Iraq’s alleged provision of weapons of mass destruction to terrorist groups. His reply, which was lampooned in the media at the time, has since become something of a linguistic icon:

[T]here are known knowns; there are things we know that we know. There are known unknowns; that is to say there are things that we know we don’t know. But there are also unknown unknowns; there are things we don’t know we don’t know.1

In 2003, this comment earned Rumsfeld the ‘Foot in Mouth Award’ from the British-based Plain English Campaign. Despite the scorn and hilarity it occasioned in mainstream culture at the time, the concept of unknown unknowns, or ‘unk-unk’ as it is sometimes called, has enjoyed long-standing currency in military and engineering circles. Only recently has it found its way to business and economics (in large part due to the work of Daniel Kahneman), where it is often referred to as the ‘dreaded unknown unknown.’ For enterprises involving risk, the reason for this dread is quite clear. Even in daily life, we speak of being ‘blind-sided,’ of things happening ‘out of the blue’ or coming ‘out of left field.’ Our institutions, like our brains, have evolved to manage and exploit environmental regularities. Since knowing everything is impossible, we have at our disposal any number of rehearsed responses, precooked ways to deal with ‘known unknowns,’ or irregularities that are regular enough to be anticipated. Unknown unknowns refer to those events that find us entirely unprepared–often with catastrophic consequences. Given that few activities are quite so sedate or ‘risk free’ as consciousness research and the philosophy of mind, unk-unk might seem out of place in that context. But as I hope to show, such is not the case. The unknown unknown, I want to argue, has a profound role to play in developing our understanding of consciousness. Unfortunately, since the unknown unknown itself constitutes an unknown unknown within cognitive science, let alone consciousness research, the route required to make my case is necessarily circuitous. As John Dewey (1958) observed, “We cannot lay hold of the new, we cannot even keep it before our minds, much less understand it, save by the use of ideas and knowledge we already possess” (viii-ix). Blind-siding readers rarely pays. With this in mind, I begin with a critical consideration of Peter Carruthers’ (forthcoming, 2011, 2009a, 2009b, 2008) ‘innate self-transparency thesis,’ the account of introspection entailed by his more encompassing ‘mindreading first thesis’ (or as he calls it in The Opacity of the Mind (2011), Interpretive Sensory-Access Theory (ISA)). I hope to accomplish two things with this reading: 1) illustrate the way explanations in the cognitive sciences so often turn on issues of informatic tracking; and 2) elaborate an alternative to Carruthers’ innate self-transparency thesis that makes, in a preliminary fashion at least, the positive role played by the unknown unknown clear. Since what I propose subsequent to this first leg of the article can only sound preposterous short of this preliminary, I will commit the essayistic sin (and rhetorical virtue) of leaving my final conclusions unstated–as a known unknown, worth mere curiosity, perhaps, but certainly not dread.

1 Retrieved from http://en.wikipedia.org/wiki/There_are_known_knowns

Follow the Information

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them (Bechtel and Abrahamsen 2005, Bechtel 2008). In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process. Let’s call this process of information tracking the ‘Follow the Information Game’ (FIG).2 In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent computation. The theorist quite literally ‘follows the information’ from mechanism to mechanism, using a complex stew of evolutionary rationales, experimental results, and neuropathological case studies to warrant the various specifics of the resulting theoretical account. We see this quite clearly in the mindreading versus metacognition debate, where the driving question is one of how we attribute propositional attitudes (PAs) to ourselves as opposed to others. Do we have direct ‘metacognitive’ access to our beliefs and desires? Is mindreading a function of metacognition? Is metacognition a function of mindreading? Or are they simply different channels of a singular mechanism? Any answer to these questions requires mapping the flow of information, which is to say, playing FIG. This is why, for example, Peter Carruthers’ “How we know our own minds” and the following Open Peer Commentary read like transcripts of the diplomatic feuding behind the Treaty of Versailles. It’s an issue of mapping, but instead of arguing over coal mines in Silesia and ports on the Baltic, the question is one of how the brain’s informatic spoils are divided. Carruthers holds forth a ‘mindreading first’ account, arguing that our self-attributions of PAs rely on the same interpretative mechanisms we use to ‘mind read’ the PAs of others:

There is just a single metarepresentational faculty, which probably evolved in the first instance for purposes of mindreading... In order to do its work, it needs to have access to perceptions of the environment. For if it is to interpret the actions of others, it plainly requires access to perceptual representations of those actions. Indeed, I suggest that, like most other conceptual systems, the mindreading system can receive as input any sensory or quasi-sensory (eg., imagistic or somatosensory) state that gets “globally broadcast” to all judgment-forming, memory-forming, desire-forming, and decision-making systems. (2009b, 3-4)

2 ‘Information’ is here understood in the broadest, nonsemantic sense of ‘systematic differences making systematic differences.’
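Played at this level of abstraction, FIG amounts to bookkeeping over which information (in the nonsemantic sense of note 2) is available to which posited mechanism. A minimal sketch of that bookkeeping follows; the mechanism names echo the discussion of Carruthers below, but the graph itself is invented purely for illustration and carries no empirical weight.

```python
# A toy rendering of the 'Follow the Information Game' (FIG): mechanisms as nodes,
# information channels as directed edges, and a query asking what a given
# mechanism can draw on. The graph is invented for illustration only.

access = {
    "perception": [],
    "global_workspace": ["perception"],      # sensory states that get 'globally broadcast'
    "belief_forming": ["global_workspace"],
    "decision_making": ["global_workspace"],
    "mindreading": ["global_workspace"],     # note: no edge from belief_forming or decision_making
}

def reachable(system, graph):
    """Trace every source of information a system can access, directly or indirectly."""
    seen, stack = set(), list(graph[system])
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print("mindreading can access:", sorted(reachable("mindreading", access)))
# -> ['global_workspace', 'perception']; what is absent from this list (the outputs
# of the belief-forming and decision-making systems) is what must be interpreted.
```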

In this article, he provides a preliminary draft of the informatic map he subsequently fleshes out in The Opacity of the Mind. He takes Baars’ (1988) Global Workspace Theory of Consciousness as a primary assumption, which requires him to distinguish between information that is and is not ‘globally broadcast.’ Consistent with the massive modularity endorsed in The Architecture of the Mind (2006), he posits a variety of informatically ‘encapsulated’ mechanisms operating ‘subpersonally’ or outside conscious access. The ‘mindreading system,’ not surprisingly, is accorded the most attention. Other mechanisms, when not directly recruited from preexisting cognitive scientific sources, are posited to explain various folk-psychological categories, such as belief.3 The tenability of these mechanisms turns on what might be called the ‘Accomplishment Assumption,’ the notion that all aspects of mental life that can be (or, as in the case of folk psychology, already are) individuated are the accomplishments of various discrete neural mechanisms. Given these mechanisms, Carruthers makes a number of ‘access’ claims, each turning on the kinds of information required for each mechanism to discharge its function. To interpret the actions of others, the mindreading system needs access to information regarding those actions, which means it needs access to those systems dedicated to gathering that information. Given the apparently radical difference between self and other interpretation, Carruthers needs to delineate the kind of access characteristic of each:

Although the mindreading system has access to perceptual states, the proposal is that it lacks any access to the outputs of the belief-forming and decision-making mechanisms that feed off those states. Hence, self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self- interpretation. However, it isn’t just the subject’s overt behavior and physical circumstances that provide the basis for the interpretation. Data about perceptions, visual and auditory imagery (including sentences rehearsed in “inner speech”), patterns of attention, and emotional feelings can all be grist for the self-interpretative view. (2009b, 4)

So the brain does possess belief mechanisms and the like, but they are informatically segregated from the suite of mechanisms responsible for generating the self-attributions of PAs. The former, it seems, do not ‘globally broadcast,’ and so their machinations must be gleaned the same way our brains glean the machinations of other brains, via their interpretative mindreading systems. Since, however, the mindreading system has no access to any information globally broadcast by other brains, he has to concede that the mindreading system is privy to additional information in instances of self-attribution, just not any involving direct access to the mechanisms responsible for PAs. So he lists what he presumes is available. The problem, of course, is that it just doesn’t feel that way. Assumptions of unmediated access or self-transparency, Carruthers writes, “seem to be almost universal across times and cultures” (2011, 15), not to mention “widespread in philosophy.” If we are forced to rely on our environmentally-oriented mindreading systems to interpret, as opposed to intuit, the function of our own brains, then why should we have any notion of introspective access to our PAs, let alone the presumption of unmediated access? Why presume an incorrigible introspective access that we simply do not have? Carruthers offers what might be called a ‘less is more’ account. The mindreading system, he proposes, represents its self-application as direct rather than interpretative. Our sense of self-transparency is the product of a mechanism. Once we have a mechanism, however, we require some kind of evolutionary story warranting its development. Carruthers argues that the presumption of incorrigible introspective access spares the brain a complicated series of computations pertaining to reliability without any real gain in reliability. “The transparency of our minds to ourselves,” he explains in an interview, “is a simplifying but false heuristic...”4 Citing Gigerenzer and Todd (1999), he points out that heuristics, even deceptive ones, regularly out-perform more fine-grained computational processes simply because of the relation between complexity and error. So long as self-interpretation via the mindreading system is generally reliable, this ‘Cartesian assumption’ or ‘self-transparency thesis’ (Carruthers 2008) possesses the advantage of simplicity to the extent that it relieves the need for computational estimations of interpretative reliability. The functional adequacy of a direct access model, in other words, more than compensates for its epistemic inadequacy, once one considers the metabolic cost and ‘robustness,’ as they say in ecological rationality circles, of the former versus the latter. This explanation provides us with a clear-cut example of what I called the Accomplishment Assumption above. Given that ‘direct introspective access’ seems to be a discrete feature of mental life, it seems plausible to suppose that some discrete neural mechanism must be responsible for producing it. But there is a simpler explanation, one that draws out some of the problematic consequences of the ‘Follow the Information Game’ as it is presently played in cognitive science. A clue to this explanation can be found when Eric Schwitzgebel (2011) considers the selfsame problem:

3 Carruthers is a realist about beliefs. See “Knowing your own beliefs: a representationalist account” (forthcoming).

Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence. [emphasis added] 130

Given his skepticism of ‘boxological’ mechanistic explanation (2011, 2012), Schwitzgebel can circumvent Carruthers’ dilemma (the mindreading system represents agent access either as direct or as interpretative) and simply pose the question in a far less structured way. Why do we possess unwarranted confidence in our introspective judgements? Well, no one tells us otherwise. But this simply begs the question of why. Why should we require ‘social scolding’ to ‘see decisive evidence of our error’? Why can’t we just see it on our own? The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The problem, in Schwitzgebel’s characterization, is that we have only a single perspective on our conscious experience, one lacking access to information regarding the limitations of introspection. In other words, the near universal presumption of self-transparency is an artifact of the near universal lack of any information otherwise. On this account, you could say the traditional, prescientific assumption of self-transparency is not so different from the traditional, prescientific assumption of geocentrism. We experience ‘vection,’ a sense of bodily displacement, whenever a large portion of our visual field moves. Short of that perceived motion (or other vestibular effects), a sense of motionlessness is the cognitive default. This was why the accumulation of so much (otherwise inaccessible) scientific knowledge was required to overturn geocentrism: not because we possessed an ‘innate representation’ of a motionless earth, but because of the interplay between our sensory limitations and our evolved capacity to detect motion. The self-transparency assumption, on this account, is simply a kind of ‘noocentrism,’ the result of a certain limiting relationship between the information available and the cognitive systems utilized.

4 Retrieved from http://www.philostv.com/peter-carruthers-and-eric-schwitzgebel/.

The problem with geocentrism was that we were all earthbound, literally limited to what meagre extraterrestrial information our native senses could provide. That information, given our cognitive capacities, made geocentrism intuitively obvious. Thus the revolutionary significance of Galileo and his Dutch Spyglass. The problem with noocentrism, on the other hand, is that we are all brainbound, literally limited to what neural information our introspective ‘sense’ can provide. As it turns out, that information, given our cognitive capacities, makes noocentrism intuitively obvious. Why? Because short of any Neural Spyglass, we lack any information regarding the insufficiency of the information at our disposal. We assume self-transparency because there is literally no other assumption to make. One need only follow the information. Adopting a dual process perspective (Stanovich, 1999; Stanovich and Toplak, 2011), the globally broadcast information accessed for System 2 deliberation contains no information regarding its interpretative (and thus limited) status. Given that global broadcasting or integration operates within fixed bounds,5 System 2 has no way of testing, let alone sourcing, the information it provides. Thus, one cannot know whether the information available for introspection is insufficient in this or that respect. But since the information accessed is never flagged for insufficiencies (and why should it be, when it is generally reliable?), this suggests sufficiency will always be the assumptive default. Given that Carruthers’ innate self-transparency account is one that he has developed with great care and ingenuity over the course of several years, a full rebuttal of the position would require an article in its own right. It’s worth noting, however, that many of the advantages that he attributes to his self-transparency mechanism also fall out of the default self-transparency account proposed here, with the added advantage of exacting no metabolic or computational cost whatsoever. You could say it’s a ‘more for even less’ account. But despite its parsimony, there’s something decidedly strange about the notion of default self-transparency. Carruthers himself briefly entertains the possibility in The Opacity of the Mind, stating that “[a] universal or near-universal commitment to transparency may then result from nothing more than the basic principle or ‘law’ that when something appears to be the case one is disposed to form the belief that it is the case, in the absence of countervailing considerations or contrary evidence” (15). How might this ‘basic principle or law’ be characterized? Carruthers, I think, shies from pursuing this line of questioning simply because it presses FIG into hitherto unexplored territory. Parsimony alone motivates a sustained consideration of what lies behind default self-transparency. Emily Pronin (2009), for instance, in her consideration of the ‘introspection illusion,’6 draws an important connection between the assumption of self-transparency and the so-called ‘bias blind spot,’ the fact that biases we find obvious in others are almost entirely invisible to ourselves. She details a number of studies7 where subjects were even more prone to exhibit this ‘blindness’ when provided opportunities to introspect. Now why are these biases invisible to us? Should we assume, as Carruthers does in the case of self-transparency, that some mechanism or mechanisms are required to represent our intuitions as unbiased in each case? 
Or should we exercise thrift and suppose that something structural is implicit in each? In what follows, I propose to pursue the latter possibility, to argue that what I called ‘default sufficiency’ above is an inevitable consequence of mechanistic explanation, or FIG, once we appreciate the systematic role informatic neglect plays in human cognition.

5 Meaning that only some information is broadcast or integrated.

6 The ‘introspective illusion,’ as she defines it, essentially “involves people’s treatment of their introspections as a sovereign (or at least, uniquely valuable) source of information about themselves” (4).

7 Armor & Taylor, 1998; Ehrlinger et al., 2005; Helweg-Larsen & Shepperd, 2001; Taylor & Brown, 1988; Weinstein, 1980.
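Before moving on, the contrast between Carruthers’ innate self-transparency mechanism and the default account sketched above can be put in deliberately trivial computational terms: the default account is the absence of a step, not the addition of one. Everything in the following sketch (the names, the dictionaries, the ‘System 2’ stand-in) is invented purely for illustration.

```python
# A deliberately trivial contrast between an 'innate transparency mechanism'
# (an extra representational step) and the structural default argued for here
# (no step at all). All names and structures are invented for illustration.

def broadcast_with_transparency_mechanism(content):
    # Carruthers-style: a dedicated mechanism tags introspective access as direct.
    return {"content": content, "access": "direct"}   # extra representation, extra cost

def broadcast_structural_default(content):
    # Default account: nothing marks the content as interpreted, sourced, or partial.
    return {"content": content}                       # no flag, no extra cost

def downstream_judgment(broadcast):
    # Deliberation can only consult what is actually broadcast; absent any flag of
    # insufficiency, sufficiency (and directness) is simply the default.
    return broadcast.get("access", "direct")

print(downstream_judgment(broadcast_with_transparency_mechanism("I decided to leave")))
print(downstream_judgment(broadcast_structural_default("I decided to leave")))
# Both print 'direct': the same intuition falls out with or without the dedicated mechanism.
```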

The Invisibility of Ignorance

Which brings us to Daniel Kahneman. In a New York Times (2011, October 19) piece entitled “Don’t Blink! The Hazards of Confidence,” he writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he later terms it, What-You-See-Is-All-There-Is, or WYSIATI. The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum. The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain-specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

These symptoms are almost tailor-made for FIG. Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing.8 As Kahneman might say, since System 1 is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind. Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. And both, I want to argue, are analogous to the default self-transparency thesis I offered in lieu of Carruthers’ innate self-transparency thesis above. Consider the following ‘translation’ of Prigatano’s symptoms, only applied to what might be called ‘Carruthers’ Syndrome’:

First, the philosopher is introspectively blind to his PAs secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspectively access his PAs. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

Here we see how the default self-transparency thesis I offered above is capable of filling the explanatory shoes of Carruthers’ innate self-transparency thesis: it simply falls out of the structure of cognition. In FIG terms, what philosophers call ‘introspection’ possibly provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. Our sense of self-transparency, in other words, is a kind of ‘unk-unk effect,’ what happens when we can’t see that we can’t see. In the absence of information to the contrary, what is globally broadcast (or integrated) for System 2 deliberative uptake, no matter how attenuated, seems to become everything there is to apprehend. But what does it mean to say that default self-transparency ‘falls out of the structure of cognition’? Isn’t this, for instance, a version of ‘belief perseverance’? Prima facie, at least, something like Keith Stanovich’s (1999) ‘knowledge projection argument’ might seem to offer an explanation, the notion that “in a natural ecology where most of our prior beliefs are true, projecting our beliefs onto new data will lead to faster accumulation of knowledge” (Sa, 1999, 506). But as the analogy to Kahneman’s WYSIATI and Anton’s Syndrome should make clear, something considerably more profound than the ‘projection of prior beliefs’ seems to be at work here. The question is what. Consider the following: On Carruthers’ innate self-transparency account, the assumption seems to be that short of the mindreading system telling us otherwise, we would know that something hinky is afoot. But how? To paraphrase Plato, how could we, having never seen otherwise, know that we were simply guessing at a parade of shadows? What kind of cognitive resources could we draw on? We couldn’t source the information back to the mindreading system. Neither could we compare it with some baseline–some introspective yardstick of informatic sufficiency. In fact, it’s difficult to imagine how we might come to doubt introspectively accessed information at all, short of regimented, deliberative inquiry.9 So then why does Carruthers seem to make the opposite assumption? Why does he assume that we would know short of some representational device telling us otherwise? To answer this question we first need to appreciate the ubiquity of ‘unk-unk effects’ in the natural world. The exploitation of cognitive scotoma or blind spots has shaped the evolution of entire species, including our own. Consider the apparently instinctive nature of human censoriousness, the implicit understanding that managing the behaviour of others requires managing the information they have available. Consider mimicry or camouflage. Or consider ‘obligate brood parasites’ such as the cuckoo, which lays its eggs in the nests of other birds to be raised to maturity by them. Looked at in purely biomechanical terms, these are all examples of certain organic systems exploiting (by operating outside) the detection/response thresholds of other organic systems. Certainly the details of these interactions remain a work in progress, but the principle is not at all mysterious. One might say the same of Anton’s syndrome or anosognosia more generally: disabling certain devices systematically impacts the capacities of the system in some dramatic ways, including deficit detection. The lack of information constrains computation, constrains cognition, period. It seems pretty straightforward, mechanically speaking. So why, then, does Anton’s jar against our epistemic intuitions the way it does? Why do we want to assume that somehow, even if we experienced the precise pattern of neural damage, we would be the magical exception, we would say, “Aha! I only think I see!” Because when we are blind to our blindnesses, we think we see, either actually or potentially, all that there is to be seen. Or as Kahneman would put it, because of WYSIATI. We think we would be the one Anton’s patient who would actually cognize their loss of sight, in other words, for the very same reason the Anton’s patient is convinced he can still see! The lack of information not only constrains cognition, it constrains cognition in ways that escape cognition. We possess, not a representational presumption of introspective omniscience, but a structural inability to cognize the limits of metacognition. You might say introspection is a kind of anosognosiac. So why does Carruthers assume the mindreading system needs an incorrigibility device? The Accomplishment Assumption forces his hand, certainly. He thinks he has an apparently discrete intuition–self-transparency–that has to be generated somehow. But in explaining away the intuition he is also paradoxically serving it, because even if we agree with Carruthers, we nonetheless assume we would know something is up if incorrigibility wasn’t somehow signalled. There’s a sense, in other words, in which Carruthers’ argument against self-transparency appeals to it!10 Now this broaches the question of how informatic neglect bears on our epistemic intuitions more generally. My goal here, however, is to simply illustrate that informatic neglect has to have a pivotal role to play in our understanding of cognition through an account of the role it plays in introspection.

8 The Psychological Denial Hypothesis (Weinstein and Kahn, 1955), which offers a quite different model of anosognosia, seems to be discounted by the targeted nature of the deficits that patients refuse to acknowledge. See Heilman and Harciarek, 2010.
Suffice to say the ‘basic principle or law’ that Carruthers considers in passing is actually more basic than the ‘disposition to believe in the absence of countervailing considerations.’ Our cognitive systems simply cannot allow, to use Kahneman’s terms, for information they do not have. This is a brute fact of natural information processing systems.

9 And even then we seem prone to think it incorrigible, if the philosophical record is any indication. See Carruthers 2008, and 2011, Chapter 2.

10 This nicely illustrates what Schooler and Schreiber (2004) call the ‘paradox of introspection,’ and the way the ‘self-evidential’ nature of our introspective intuitions seems an inevitable default.

Sufficiency is the default because information, understood as systematic differences making systematic differences, is effective. This is why, for instance, unknowns must be known, to effect changes in behaviour. And this is what makes research on cognitive biases and the neuropathologies of neglect so unsettling: they clearly show the way we are mere mechanisms, cognitive systems causally bound to the information available. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be. The potential severity of those limits remains to be seen.
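Kahneman’s claim that the associative machine ‘represents only activated ideas’ can be given a toy mechanical rendering: a report assembled solely from what happens to be available contains nothing, anywhere, that registers what was never made available. The channels and contents below are invented for illustration; nothing here is meant as a model of actual neural processing.

```python
# A toy sketch of WYSIATI / the 'unk-unk effect': a report built only from what is
# actually available cannot represent what is missing. Purely illustrative.

available = {
    "inner_speech": "a running commentary",
    "visual_imagery": "fairly vivid",
    "affect": "mild unease",
}

# Channels that exist but are never broadcast; the report function below has no
# access to this dict, and so cannot even represent its contents as absent.
never_broadcast = {
    "interpretative_status": "inferred, not directly accessed",
    "source_reliability": "unknown",
}

def best_possible_story(broadcast):
    """Constructs 'the best possible story' from whatever happens to be activated."""
    lines = [f"I am aware of {name.replace('_', ' ')}: {value}" for name, value in broadcast.items()]
    lines.append("Nothing appears to be missing.")   # sufficiency is the structural default
    return "\n".join(lines)

print(best_possible_story(available))
```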

Introspection and the Bayesian Brain

Since unknown unknowns offer FIG nothing to follow, it should perhaps come as no surprise that the potential relevance of unk-unks has itself remained an unknown unknown in cognitive science. The idea proposed here is that ‘naive introspection’11 be viewed as a kind of natural anosognosia, as a case where we think we see, even though we are largely blind. It stands, therefore, squarely in the ‘introspective unreliability’ camp most forcefully defended by Eric Schwitzgebel (2007, 2008, 2011a, 2011b, 2012). Jacob Hohwy (2011, 2012), however, has offered a novel defence of introspective reliability via a sustained consideration of Karl Friston’s (2006; 2012 for an overview) free energy elaboration of the Bayesian brain hypothesis, an approach which has recently been making inroads due to the apparent comprehensiveness of its explanatory power.12 Hohwy (2011) argues that the introspective unreliability suggested by Schwitzgebel is in fact better explained by phenomenological variability. Introspection only appears as unreliable as it does on Schwitzgebel’s account because it assumes a relatively stable phenomenology. “The evidence,” Hohwy writes, “can be summarized like this: everyday or ‘naive’ introspection tells us that our phenomenology is stable and certain but, surprisingly, calm and attentive introspection tells us our phenomenology is not stable and certain, rather it is variable and uncertain” (265). In other words, either ‘attentive introspection’ is unreliable and phenomenology is stable, or ‘naive introspection’ is unreliable and phenomenology is in fact variable. Hohwy identifies at least three sources of potential phenomenological variability on Friston’s free energy account: 1) attenuation of the ‘prediction error landscape’ through ‘inferences’ that cancel out predictive success and allow unpredicted input to ascend; 2) change through ‘agency’ and movement; and 3) increase in precision and gain via attention. Thus, he argues, “[i]f the brain is this kind of -machine, then it is a fundamental expectation that there is variability in the phenomenology engendered by perceptual inferences, and to which introspection in turn has access” (270). The problem with saving introspective reliability by arguing phenomenal variability, however, is that it becomes difficult to understand what in operational terms is exactly being saved. Is the target too quick? Or is the tracking too slow? Hohwy can adduce evidence and arguments for the variability of conscious experience, and Schwitzgebel can adduce evidence and arguments for the unreliability of introspection, but there is a curious sense in which their conclusions are the same: in a number of respects conscious experience eludes introspective cognition. Setting aside this argument, the real value in Hohwy’s account lies in his consideration of what might be called introspective applicability and introspective interference. Regarding the first, applicability, Hohwy is concerned with distinguishing those instances where the researcher’s request, ‘Please, introspect,’ is warranted from those where it is ‘suboptimal.’ He discusses the so-called ‘default mode network,’ the systems of the brain engaged when the subject’s thoughts and imagery are detached from the world, as opposed to the systems engaged when the subject is directly involved with his or her environment. He then argues that the variance in introspective reliability one finds between experiments can be explained by whether the mental tasks involve the default mode as opposed to the environmental mode. Tasks involving the default mode evince greater reliability when compared to tasks involving the environmental mode, he suggests, simply because the request to introspect is profoundly artificial in the latter. His argument, in other words, is that introspection, as an adaptive, evolutionary artifact, is not a universally applicable form of cognition, and that the apparent unreliability of introspection is potentially a product of researchers asking subjects to apply introspection ‘out of bounds,’ in ways that it simply was not designed to be used. In ecological rationality terms (Todd and Gigerenzer, 2012), one might say introspection is a specialized cognitive tool (or collection of tools), a heuristic like any other, and as such will only function properly to the degree to which it is matched to its ‘ecology.’ This possibility raises a host of questions. If introspection, far from being the monolithic, information-maximizing faculty assumed by the tradition, is actually a kind of cognitive tool box, a collection of heuristics adapted to discharge specific functions, then we seem to be faced with the onerous task of identifying the tools and matching them to the appropriate tasks. Regarding introspective interference, the question, to paraphrase Hohwy, is whether introspection changes or leaves phenomenal states as they are (262). In the course of discussing the likelihood that introspection involves a plurality of processes pertaining to different domains, he provides the following footnote:

11 On this account, any view of introspection that assumes self-transparency.

12 The New Scientist (Huang, 2008) quotes Stanislas Dehaene as saying, “It is the first time we have had a theory of this strength, breadth, and depth in cognitive neuroscience.”

Another tier can potentially be added to this account, directed specifically at the cognitive mechanisms underpinning introspection itself. If introspection is itself a type of internal predictive inference taking phenomenal states as input, then introspective inference would be subject to the similar types of prediction error dynamics as perceptual inference itself. In this way introspective inference about phenomenality would add variability to the already variable phenomenality. This sketch of an approach to introspection is attractive because it treats introspection as also a type of unconscious inference; however, it remains to be seen if it can be worked out in satisfactory detail and I do not here want to defend introspection by subscribing to a particular theory about it. 270

By subscribing to Friston’s free energy account, Hohwy is committed to an account that conceives the brain as a mechanism that extracts information regarding the causal structure of its environment via the sensory effects of that environment. As Hohwy (2012) puts it, a ‘problem of representation’ follows from this, since the brain is stranded with sensory effects and so has no direct access to causes. As a result, it needs to establish causal relations de novo. Sensory input contains patterns as well as noise, the repetition of which allows the formation of predictions, which can be ‘tested’ against further repetitions. Prediction error minimization (PEM) allows the system to automatically adapt to real causal patterns in the environment, which can then be said to ‘supervise’ the system. The idea is that the brain contains a hierarchy of ascending PEM levels, beginning with basic sensory and causal regularities, and with the ‘harder to predict’ signals being passed upward, ultimately producing representations of the world possessing ‘causal depth.’ All these levels exhibit ‘lateral connectivity,’ allowing the refinement of prediction via ‘contextual information.’
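Since the argument that follows leans on the shape of this account, a minimal numerical sketch of prediction error minimization may help: a single estimate is revised, step by step, to reduce the mismatch between top-down prediction and incoming sensory effect. The one-level structure, the fixed learning rate, and all the numbers are simplifications assumed purely for illustration, not Friston’s or Hohwy’s actual formulation.

```python
import numpy as np

# Minimal sketch of prediction error minimization (PEM) for a single hidden cause.
# The fixed learning rate stands in, crudely, for precision-weighted gain.

rng = np.random.default_rng(0)

true_cause = 0.8          # a hidden environmental regularity
noise_sd = 0.2            # sensory noise riding on its effects

estimate = 0.0            # the system's current model of the cause
learning_rate = 0.1

for _ in range(200):
    sensory_effect = true_cause + rng.normal(0.0, noise_sd)   # the cause is only 'seen' via its effects
    prediction = estimate                                      # top-down prediction
    error = sensory_effect - prediction                        # bottom-up prediction error
    estimate += learning_rate * error                          # revise the model to quiet the error

# Note that the traffic consists entirely of predictions and errors about the
# *environment*; nothing in the loop models the machinery doing the predicting.
print(f"estimated cause: {estimate:.2f} (true cause: {true_cause})")
```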

Although the free energy account is not an account of consciousness, it does seem to explain what Floridi (2011) calls the ‘one-dimensionality of experience,’ the way, as he writes, “experience is experience, only experience, and nothing but experience” (296). If the brain is a certain kind of Bayesian causal inference engine, then one might expect the generative models it produces to be utterly lacking any explicit neurofunctional information, given the dedication of neural structure and function to minimizing environmental surprise. One might expect, in other words, that the causal structure of the brain will be utterly invisible to the brain, that it will remain, out of structural necessity, a dreaded unknown unknown–or unk-unk. The brain, on this kind of prediction error minimization account,13 simply has to be ‘blind’ to itself. And this is where, far from being ‘attractive,’ as Hohwy suggests, the mere notion of ‘introspection’ modelled on prediction error minimization becomes exceedingly difficult to understand. Does introspection (or the plurality of processes we label as such) proceed via hierarchical prediction error minimization from sensory effects to build generative models of the causal structure of the human brain? Almost certainly not. Why? Because as a free energy minimizing mechanism (or suite of mechanisms), introspection would seem to be thoroughly hobbled for at least four different reasons:

1) Functional dependence: On the free energy account, the human brain distills the causal structure of its environments from the sensory effects of that causal structure. One might, on this model, isolate two distinct vectors of information, one, which might be called the ‘lateral,’ pertaining to the causal structure of the environment, and another, which might be called the ‘medial,’ pertaining to the causal structure of sensory inputs and the brain. As mentioned above, the brain can only model the lateral vector of environmental causal structure by neglecting the medial vector of its own causal structure. This neglect requires that the brain enjoy a certain degree of functional independence from the causal structure of its environment, simply because ‘medial interference’ will necessarily generate ‘lateral noise,’ thus rendering the causal structure of the environment more difficult, if not impossible, to model.14 The sheer interconnectivity of the brain, however, would likely render substantial medial interference difficult for any introspective device (or suite of devices) to avoid.

2) Structural immobility: Proximity complicates cognition. To get an idea of the kind of modelling constraints any neurally embedded introspective device would suffer, think of the difference between two anthropologists trying to understand a preliterate tribesman from the Amazon, the one ranging freely with her subject in the field, gathering information from a plurality of sources, the other locked with him in a coffin. Since it is functionally implicated–or brainbound–relative to its target, the ability of any introspective device (or suite of devices) to engage in ‘active inference’ would be severely restricted. On Friston’s free energy account, the passive reception of sensory input is complemented by behavioural outputs geared to maximizing information from a variety of positions within the organism’s environment, thus minimizing the likelihood of ‘perspectival’ or angular illusions, false inferences due to the inability to test predictions from alternate angles and positions. Geocentrism is perhaps the most notorious example of such an illusion. Given structural immobility, one might suppose, any introspective device (or suite of devices) would suffer ‘phenomenal’ analogues to this and other illusions pertaining to limits placed on exploratory information-gathering.15

13 And, I would argue, any mechanistic account. See Bakker (2012).

14 The way the ‘observer effect’ in quantum mechanics, for instance, is due to the perturbances of sensory mediums (photons, etc.) that otherwise possess no appreciable ‘lateral effects’ on larger scales.

3) Cognitive resources: If we assume that human introspective capacity is a relatively recent evolutionary adaptation, we might expect any introspective device (or suite of devices) to exploit preexisting cognitive resources, which is to say, cognitive systems primarily adapted to environmental prediction error minimization. For instance, one might argue that both (1) and (2) fairly necessitate the truth of something like Carruthers’ mindreading account, particularly if (as seems to be the case) mindreading antedates introspection. Functional dependence and structural immobility suggest that we are actually in a better position mechanically to accurately predict the behaviour of others than ourselves, as indeed a growing body of evidence indicates (Carruthers (2009) provides an excellent overview). Otherwise, given our apparent ability to attend to the whole of experience, does it make sense, short of severe evolutionary pressure, to presume the evolution of entirely novel cognitive systems adapted to the accurate modelling of second-order, medial information? It seems far more likely that access to this information was incremental across generations, and that it was initially selected for the degree to which it proved advantageous given our preexisting suite of environmentally oriented cognitive abilities.

4) Target complexity: Any introspective device (or suite of devices) modelled on the PEM (or, for that matter, any other mechanistic) account must also cope with the sheer functional complexity of the human brain. It is difficult to imagine, particularly given (1), (2), and (3) above, how the tracking that results could avoid suffering out-and-out astronomical ‘resolution deficits’ and distortions of various kinds.

The picture these complicating factors paint is sobering. Any introspective device (or suite of devices) modelled on free energy Bayesian principles would be almost fantastically crippled: neurofunctionally embedded (which is to say, functionally entangled and structurally imprisoned) in the most complicated machinery known, accessing information for environmentally biased cognitive systems. Far from what Hohwy supposes, the problems of applicability and interference, when pursued through a free energy lens, at least, would seem to preclude introspection as a possibility. But there is another option, one that would be unthinkable were it not for the pervasiveness and profundity of the unk-unk effect: that this is simply what introspection is, a kind of near blindness that we confuse for brilliant vision, simply because it’s the only vision we know. The problem facing any mechanistic account of introspection can be generalized as the question of information rendered and cognitive system applied: to what extent is the information rendered insufficient, and to what extent is the cognitive system activated misapplied? This, I would argue, is the great fork in the FIG road. On the ‘information rendered’ side of the issue, informatic neglect means the assumption of sufficiency. We have no idea, as a rule, whether we have the information we need for effective deliberation or not. One need only consider the staggering complexity of the brain–complex enough to stymie a science that has puzzled through the origins of the universe in the meantime–to realize the astronomical amounts of information occluded by metacognition. On the ‘cognitive system applied’ side, informatic neglect means the assumption of universality. We have no idea, as a rule, whether we’re misapplying ‘introspection’ or not. One need only consider the heuristic nature of human cognition, the fact that heuristics are adaptive and so matched to specific sets of problems, to realize that introspective misapplications, such as those argued for by Hohwy, are likely an inevitability.16 This is the turn where unknown unknowns earn their reputation for dread. Given the informatic straits of introspection, what are the chances that we, blind as we are, have anything approaching the kind of information we require to make accurate introspective judgments regarding the ‘nature’ of mind and consciousness? Given the heuristic limitations of introspection, what are the chances that we, blind as we are, somehow manage to avoid colouring far outside the cognitive lines? Is it fair to assume that the answer is, ‘Not good’? Before continuing to consider this question in more detail, it’s worth noting how this issue of informatic availability and cognitive applicability becomes out-and-out unavoidable once you acknowledge the problem of the ‘dreaded unknown unknowns.’ If the primary symptom of patients suffering neuropathological neglect is the inability to cognize their cognitive deficits, then how do we know that we don’t suffer from any number of ‘natural’ forms of metacognitive neglect? The obvious answer is, We don’t. Could what we call ‘philosophical introspection’ simply be a kind of mitigated version of Anton’s Syndrome? Could this be the reason why we find consciousness so stupendously difficult to understand? Given millennia of assuming the best of introspection and finding only perplexity, perhaps, finally, the time has come to assume the worst, and to reconceptualize the problematic of consciousness in terms of privation, distortion, and neglect.
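The point about heuristics and their ecologies can be made with a toy simulation: a one-cue rule is excellent when the cue actually tracks the target (its ‘matched’ ecology) and near chance when it does not, yet the rule itself computes nothing that could signal the mismatch. The task, the cue, and the validities below are all invented for illustration.

```python
import random

# Toy simulation of heuristic/ecology matching: the same one-cue rule applied
# inside and outside the ecology it is matched to. Everything here is invented.

random.seed(1)

def one_cue_rule(cue):
    """Decide on a single cue and nothing else; the rule never assesses its own fit."""
    return cue > 0.5

def accuracy(cue_validity, trials=20000):
    correct = 0
    for _ in range(trials):
        target = random.random() > 0.5
        if random.random() < cue_validity:
            cue = 0.75 if target else 0.25      # the cue tracks the target
        else:
            cue = random.random()               # the cue is just noise
        correct += (one_cue_rule(cue) == target)
    return correct / trials

print("matched ecology (cue validity 0.9):", round(accuracy(0.9), 3))     # roughly 0.95
print("mismatched ecology (cue validity 0.1):", round(accuracy(0.1), 3))  # roughly 0.55, near chance
```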

15 Pronin (2009) discusses the way the introspection illusion possibly informs ‘lay beliefs in free will,’ not realizing that she is actually describing a kind of perspectival illusion:

When people introspect, they are compelled by feelings of possibilities, intentions, and choice, all providing them with the sense that they have free will. Yet, when they look at others’ actions and outcomes, they are compelled by the notion of determinism. That asymmetry captures the everyday experiences people have when they feel their own choices are very real (and often stressful, thrilling, and heart-wrenching), while at the same time they are surprised by others tormenting themselves over choices for which their ultimate decision seems obvious from the outset. 44

Conclusion: Introspection, Tangled and Blind

Cognitive science and philosophy of mind suffer from a profound scotoma, a blindness to the structural role blindness plays in our intuitive assumptions. As we saw in passing, FIG actually plays into this blindness, encouraging theorists and researchers to conceive of the relationship between information and experience exclusively in what I called Accomplishment terms. If self-transparency is the ubiquitous assumption, then it follows that some mechanism possessing some ‘self-transparency representation’ must be responsible. Informatic neglect, however, allows us to see it in more parsimonious, structural terms, as a positive, discrete feature of human cognition possessing no discrete neurofunctional correlate. And this, I would argue, counts as a game-changer as far as FIG is concerned. The possibility that various discrete features of cognition and consciousness could be structural expressions of various kinds of informatic neglect not only rewrites the rules of FIG, it drastically changes the field of play. That FIG needs to be sensitive to informatic neglect I take as uncontroversial. Informatic neglect seems to be one of those peculiar issues that everyone acknowledges but never quite sees, one that goes without saying because it goes unseen. Schwitzgebel (2012), for instance, provides a number of examples of the complications and ambiguities attending ‘acts of introspection’ to call attention to the artificial division of introspective and non-introspective processes, and in particular, to what might be called the ‘transparency problem,’ the way judgments about experience effortlessly slip into judgments about the objects/contents of experience.17 Given this welter of obscurities and complicating factors, not to mention the “massive interconnection of the brain,” he advocates what might be called a ‘tangled’ account of introspective cognitive processes:

16 On the Blind Brain Theory (Bakker, 2012), this provides the principled grounds for ‘explaining away’ various long-standing conundrums regarding consciousness. It suggests, for instance, that the so-called ‘unity of consciousness’ is an artifact of the same cognitive processes that mistakenly identify aggregates as individuals in suboptimal informatic conditions, the way congregated ants, for instance, appear to be spilled paint from a distance. The difference with introspection, of course, is that structural immobility means it has no way of seeing past the illusion.

What we have, or seem to have, is a cognitive confluence of crazy spaghetti, with aspects of self-detection, self-shaping, self-fulfilment, spontaneous expression, priming and association, categorical assumptions, outward perception, memory, inference, hypothesis testing, bodily activity, and who only knows what else, all feeding into our judgments about current states of mind. To attempt to isolate a piece of this confluence as the introspective process – the one true introspective process, though influenced by, interfered with, supported by, launched or halted by, all the others – is, I suggest, like trying to find the one way in which a person makes her parenting decisions... 19

If you accept his conclusion as a mere possibility (or as I would argue, a distinct probability), you implicitly accept much of what I’m saying here regarding informatic neglect. You accept that introspection could be massively plural while appearing to be unitary. You accept that introspection could be skewed and distorted while appearing to be the very rule. How could this be, short of informatic neglect? Recall Pronin’s (2009) ‘bias blind spots,’ or Hohwy’s (2011) mismatched ‘plurality of processes.’ How could it be that we swap between cognitive systems oblivious, with nothing, no intuition, no feel, to demarcate any transitions, let alone their applicability? As I hope should be clear, this question is simply a version of Carruthers’ question from above: How could it be we once unanimously thought that introspection was incorrigible? Both questions ask the same thing of introspection, namely, To what extent are the various limits of introspection available to introspection? The answer, quite simply, is that they are not. Introspection is out-and-out blind to its internal structure, its cognitive applicability, and its informatic insufficiencies–let alone to its neurofunctionality. To the extent that we fail to recognize these blindnesses, we are effectively introspective anosognosiacs, simply hoping that things are ‘just so.’ And this is just to say that informatic neglect, once acknowledged, constitutes a genuine theoretical crisis, for philosophy of mind as well as for cognitive science, insofar as their operational assumptions turn on interpretations of information gleaned, by hook or by crook, from ‘introspection.’ Of course, the ‘problem of introspection’ is nothing new (in certain circles, at least). The literature abounds with attempts to ‘sanitize’ introspective data for scientific consumption. Given this, one might wonder what distinguishes informatic neglect from the growing army of experimental confounds already identified.18 Perhaps the appropriate methodological precautions will allow us to quarantine the problem. Schooler and Schreiber (2004), for instance, offer one such attempt to ‘massage’ FIG in such a way as to preserve the empirical utility of introspection. After considering a variety of ‘introspective failures,’ they pin the bulk of the blame on what they call ‘translation dissociations’ between consciousness and meta-consciousness, the idea being that the researcher’s demand, ‘Please, introspect,’ forces the subject to translate information available for introspection into action. They categorize three kinds of translation dissociations: 1) detection, where the ‘signal’ to be introspected is too weak or ambiguous; 2) transformation, where tasks “require intervening operations for which the system is ill-equipped” (32); and 3) substitution, where the information rendered has no connection to the information experimentally targeted. Once these ‘myopias’ are identified, the assumption is, methodologies can be designed to act as corrective lenses. The problem that informatic neglect poses for FIG, however, is far and away more profound. To see this, one need only consider the dichotomy of ‘consciousness versus metaconsciousness,’ and the assumption that there is some fact of the matter pertaining to the first that is in principle accessible to the latter. The point isn’t that no principled distinction can be made between the two, but rather that even if it can, the putative target, consciousness, is every bit as susceptible to informatic neglect as any metaconscious attempt to cognize it. The assumption is simply this: Information that finds itself globally broadcast or integrated will not, as a rule, include information regarding its ‘limits.’ Insofar as we can assume this, we can assume that informatic neglect isn’t so much a ‘problem of introspection’ as it is a problem afflicting consciousness as a whole. Our sketch of Friston’s Bayesian brain above demonstrated why this must be the case. Simply ask: What would the brain require to accurately model itself from within itself? On the PEM account, the brain is a dedicated causal inference engine, as it must be, given the difficulties of isolating the causal structure of its environment from sensory effects. This means that the brain has no means of modelling its own causal structure, short of either 1) analogizing from brains found in its environment, or 2) developing some kind of onboard ‘secondary inference’ system, one which, as was argued above, we should expect would face a number of dramatic informatic and cognitive obstacles. Functionally entangled with, structurally immured in, and heuristically mismatched to the most complicated machinery known, such a secondary inference system, one might expect, would suffer any number of deficits, all the while assuming itself incorrigible simply because it lacks any direct means of detecting otherwise. Consciousness could very well be a cuckoo, an imposter with ends or functions all its own, and we would never be able to intuit otherwise. As we have seen, from the mechanistic standpoint this has to be a possibility. And given this possibility, informatic neglect plainly threatens all our assumptions. Once again: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Bracket, as best you can, your introspective assumptions, and ask yourself how many ways these questions can be cogently answered. Far more than is friendly to our intuitive assumptions–these little blind men who wander out of the darkness telling fantastic and incomprehensible tales. Even apparent boilerplate intuitions like efficacy become moot. The argument that the brain is generally efficacious is trivial. Given that the targets of introspective tracking are systematically related to the function of the brain, informatic neglect (and the illusion of sufficiency in particular) suggests that what we introspect or intuit will evince practical efficacy no matter how drastically its actual neural functions differ from or even contradict our manifest assumptions. Neurofunctional dissociations, as unknown unknowns, simply do not exist for metacognition.

17 As he writes, “In such cases, introspection might be best regarded as perception with a twist or with a slightly different aim that can be half forgotten” (10).

18 See Irvine (2012) for a thorough review.
“[T]he absence of representation,” as Dennett (1991) famously writes, “is not the same as the representation of absence” (359).19 Since the ‘unk-unk effect’ has no effect, cognition is stranded with assumptive sufficiency on the one hand, and the efficacy of our practices on the other. Informatic neglect, in other words, means that our manifest intuitions (not to mention our traditional assumptions) of efficacy are all but worthless. The question of the efficacy of what philosophers think they intuit or introspect is what it has always been: a question that only a mature neuroscience can resolve. And given that nothing biases intuition or introspection toward ‘friendly’ outcomes over unfriendly ones, we need to grapple with the fact that any future neuroscience is far more likely to be antagonistic to our intuitive, introspective assumptions than otherwise. There are far more ways for neurofunctionality to contradict our manifest and traditional assumptions than to rescue them. And perhaps this is precisely what we should expect, given the dismal history of traditional discourses once science colonizes their domain.
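Dennett’s slogan can be given an equally minimal, equally hypothetical illustration. In the Python toy below (all names invented for the example), a ‘monitor’ receives only whatever a capacity-limited broadcast forwards to it; because the truncation leaves no marker behind, the monitor’s report is formally identical whether its feed is complete or drastically curtailed.

    # Toy illustration only: a consumer of broadcast information cannot
    # distinguish "nothing was omitted" from "the omissions were never flagged."

    def broadcast(process_log, capacity=3):
        # Forward only a fraction of what actually happened. Crucially, the
        # truncation leaves no trace in what is forwarded: there is no
        # 'missing data' marker for the consumer to detect.
        return process_log[:capacity]

    def introspective_report(available):
        # A consumer that, by default, treats what it receives as all there is.
        return {"events_seen": len(available),
                "seems_complete": True}  # no representation of absence

    full_log = ["edge detection", "motion estimate", "object guess",
                "reward update", "motor plan", "conflict signal"]

    print(introspective_report(broadcast(full_log)))
    # -> {'events_seen': 3, 'seems_complete': True}: the same verdict it would
    #    return had nothing been left out.

The point is not that metacognition works this way in detail, only that sufficiency is the default verdict of any consumer deprived of information about what it has been deprived of.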

It is worth noting that a priori arguments simply beg the question here, since it is entirely possible (indeed probable, given the free energy account) that evolution stranded us with suboptimal metacognitive capacities. One might simply ask, for instance: where do our intuitions regarding the a priori come from?20 Evolutionary arguments, on the other hand, cut both ways. Everyone agrees that our general metacognitive capacities are adaptations of some kind, but adaptations for what? The accurate second-order appraisal of cognitive structure, or of ‘mind’ more generally? That seems unlikely. As far as we know, our introspective capacities could be the result of very specific evolutionary demands that required only gross distortions to be discharged.21 What need did our ancestors have for ‘theoretical descriptions of the mental’? Given informatic neglect (and the spectre of ‘Carruthers’ Syndrome’), evolutionary appeals would actually seem to count against the introspectionist, insofar as any story told would count as ‘just so,’ and thus serve to underscore the improbability of that story. Again, the two questions to be asked are: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Informatic neglect, the dreaded unknown unknown, allows us to see how many ways these questions can be answered. By doing so, it makes plain the dramatic extent of our anosognosia in assuming that we have won the magical introspection lottery. Short of default self-transparency, why would anyone trust intuitions incompatible with those that underwrite the life sciences? If it is the case that evolution stranded us with just enough second-order information and cognitive resources to discharge a relatively limited repertoire of processes, then perhaps the last two millennia of second-order philosophical perplexity should not surprise us. Maybe we should expect that science, when it finally provides a detailed picture of informatic availability and cognitive applicability, will be able to diagnose most traditional philosophical problematics as the result of various, unavoidable cognitive illusions pertaining to informatic depletion, distortion and neglect. Then, perhaps, we will at last be able to see the terrain of perennial philosophical problems as a kind of ‘free energy landscape’ sustained by the misapplication of various, parochial cognitive systems to insufficient information. Perhaps noocentrism, like biocentrism and geocentrism before it, will become the purview of historians, a third and final ‘narcissistic wound.’

19 Much of the argument here is foreshadowed by Dennett’s argument in Chapter 11 of Consciousness Explained. Among other things, however, Dennett failed to see the consequences that obviously arise from an informatic account of neglect. As a result he found the efficacy argument (in the guise of the ‘intentional stance’) quite convincing, even going so far as to assert that “you can get close” (2002, 1) to infallibility regarding consciousness.

20 On the Blind Brain account (Bakker 2012), the ‘a priori’ is itself a cognitive illusion pertaining to introspective neglect, and a particularly pernicious one, given the incontestable power of the formal sciences we think it makes possible. We need to seriously consider the possibility that what we conceive of as ‘inference structures’ are the ‘informatic shadow’ of neural interaction patterns. This would allow us to conceive of mathematics as an empirical domain (a science of hyper-applicable combinatorial patterns, perhaps), one that we explore performatively (thus the ‘distinctive nature’ of mathematical cognition) via the capacities of our own brains (and, more and more, the capacities of our machines). This interpretation (which essentially turns the so-called ‘compatibility proof’ on its head) could go a long way toward explaining what Wigner (1960) termed ‘the unreasonable effectiveness’ of mathematics in natural science. Either way, given the computational trends in mathematics and our near complete ignorance regarding mathematical cognition (Sklar et al., 2012), this naturalistic approach, as extreme as it is, deserves serious consideration.

21 If not outright deception. See von Hippel and Trivers (2011), Lopez and Fuxjager (2012).

References

Armor, D., Taylor, S. (1998). Situated optimism: specific outcome expectancies and self-regulation. In M. P. Zanna (ed.), Advances in Experimental Social Psychology. 30. 309-379. New York, NY: Academic Press.

Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

Bakker, S. (2012). The last magic show: a blind brain theory of the appearance of consciousness. Retrieved from http://www.academia.edu/1502945/The_Last_Magic_Show_A_Blind_Brain_Theory_of_the_Appearance_of_Consciousness

Bechtel, W. and Abrahamsen, A. (2005). Explanation: a mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences. 36. 421-441.

Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. New York, NY: Psychology Press.

Carruthers, P. (forthcoming). On knowing your own beliefs: a representationalist account. Retrieved from http://www.philosophy.umd.edu/Faculty/pcarruthers/On%20knowing%20your%20own%20beliefs.pdf [In Nottelmann (ed.). New Essays on Belief: Structure, Constitution and Content. Palgrave MacMillan]

Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.

Carruthers, P. (2009a). Introspection: divided and partly eliminated. Philosophy and Phenomenological Research. 80(1). 76-111.

Carruthers, P. (2009b). How we know our own minds: the relationship between mindreading and metacognition. Behavioral and Brain Sciences. 1-65. doi:10.1017/S0140525X09000545

Carruthers, P. (2008). Cartesian epistemology: is the theory of the self-transparent mind innate? Journal of Consciousness Studies. 15(4). 28-53.

Carruthers, P. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Clarendon Press.

Dennett, D. C. (2002). How could I be wrong? How wrong could I be? Journal of Consciousness Studies. 9. 1-4.

Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little Brown.

Dewey, J. (1958). Experience and Nature. New York, NY: Dover Publications.

Ehrlinger, J., Gilovich, T., and Ross, L. (2005). Peering into the bias blind spot: people’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31. 680-692.

Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.

Friston, K. (2012). A free energy principle for biological systems. Entropy, 14. doi: 10.3390/e14112100.

Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology - Paris, 100(1-3). 70-87.

Gigerenzer, G., Todd, P. and the ABC Research Group. (1999). Simple Heuristics that Make Us Smart. Oxford: Oxford University Press.

Heilman, K. and Harciarek, M. (2010). Anosognosia and anosodiaphoria of weakness. In G. P. Prigatano (ed.), The Study of Anosognosia. 89-112. Oxford: Oxford University Press.

Helweg-Larsen, M. and Shepperd, J. (2001). Do moderators of the optimistic bias affect personal or target risk estimates? A review of the literature. Personality and Social Psychology Review, 5. 74-95.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology, 3(96). 1-14. doi: 10.3389/fpsyg.2012.00096.

Hohwy, J. (2011). Phenomenal variability and introspective reliability. Mind & Language, 26(3). 261-286.

Huang, G. T. (2008). Is this a unified theory of the brain? The New Scientist. (2658). 30-33.

Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA: MIT Press.

Irvine, E. (2012). Consciousness as a Scientific Concept: A Philosophy of Science Perspective. New York, NY: Springer.

Kahneman, D. (2011, October 19). Don’t blink! The hazards of confidence. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all&_r=0

Kahneman, D. (2011). Thinking, Fast and Slow. Toronto, ON: Doubleday Canada.

Lopez, J. K., and Fuxjager, M. J. (2012). Self-deception’s adaptive value: effects of positive thinking and the winner effect. Consciousness and Cognition. 21. 315-324.

Prigatano, G. and Wolf, T. (2010). Anton’s Syndrome and unawareness of partial or complete blindness. In G. P. Prigatano (ed.), The Study of Anosognosia. 455-467. Oxford: Oxford University Press.

Pronin, E. (2009). The introspection illusion. In M. P. Zanna (ed.), Advances in Experimental Social Psychology, 41. 1-68. Burlington: Academic Press.

Sa, W. C., West, R. F. and Stanovich, K. E. (1999). The domain specificity and generality of belief bias: searching for a generalizable critical thinking skill. Journal of Educational Psychology, 91(3). 497-510.

Schooler, J. W., and Schreiber, C. A. (2004). Experience, meta-consciousness, and the paradox of introspection. Journal of Consciousness Studies. 11. 17-39.

Schwitzgebel, E. (2012). Introspection, what? In D. Smithies & D. Stoljar (eds.), Introspection and Consciousness. Oxford: Oxford University Press.

Schwitzgebel, E. (2011a). Perplexities of Consciousness. Cambridge, MA: MIT Press.

Schwitzgebel, E. (2011b). Self-Ignorance. In J. Liu and J. Perry (eds.), Consciousness and the Self. Cambridge, MA: Cambridge University Press.

Schwitzgebel, E. (2008). The unreliability of naive introspection. Philosophical Review, 117(2). 245-273.

Sklar, A. Y., Levy, N., Goldstein, A., Mandel, R., Maril, A., and Hassin, R. R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. 1-6. doi: 10.1073/pnas.1211645109.

Stanovich, K. E. (1999). Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates.

Stanovich, K. E. and Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind and Society. 11(1). 3-13.

Taylor, S. and Brown, J. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin, 103. 193-210.

There are known knowns. (2012, November 7). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/There_are_known_knowns

Todd, P., Gigerenzer, G., and the ABC Research Group. (2012). What is ecological rationality? Ecological Rationality: Intelligence in the World. 3-30. Oxford: Oxford University Press.

von Hippel, W. and Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34. 1-56.

Weinstein, E. A. and Kahn, R. L. (1955). Denial of Illness: Symbolic and Physiological Aspects. Springfield, IL: Charles C. Thomas.

Weinstein, N. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39. 806-820.

Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959. Communications on Pure and Applied Mathematics. 13. 1-14. doi: 10.1002/cpa.3160130102.