
BIEGANSKI, BRIAN PIOTR, M.A. MAY 2018 PHILOSOPHY

CONSCIOUSNESS RESTRAINED: DOES CONSCIOUSNESS HAVE ANY ADAPTIVE

FUNCTION? (63 pgs.)

Thesis Advisor: Dr. David Pereplyotchik

In this work, I examine the question, “Does consciousness have any necessary known function in our lives?” I look at three specific theories of consciousness: global workspace theory (GWT), the attended intermediate-level representation (AIR) theory, and higher-order thought (HOT) theory. I examine how each of these theories bears on whether consciousness has any known adaptive utility, and I show that, under the strongest theory of consciousness, viz. HOT theory, it does not.

CONSCIOUSNESS RESTRAINED:

DOES CONSCIOUSNESS HAVE ANY ADAPTIVE FUNCTION?

A thesis submitted

to Kent State University in partial

fulfillment of the requirements for the

degree of Master of Arts

by

Brian Piotr Bieganski

May, 2018

© Copyright

All rights reserved

Except for previously published materials

Thesis written by

Brian Piotr Bieganski

B.S., Grand Valley State University, 2014

M.A., Kent State University, 2018

Approved by

_____David Pereplyotchik______, Advisor

_____Deborah Barnbaum______, Chair, Department of Philosophy

_____James L. Blank______, Dean, College of Arts and Sciences

TABLE OF CONTENTS...………………………………………………………………….……iv

CHAPTERS

I. INTRODUCTION………………………………………………………………...1

Varieties of Consciousness………………………………………………………..1

Theories of Consciousness……………………….………………………………..6

II. THEORIES OF CONSCIOUSNESS…………………………………………….19

Global Workspace Theory…………………………………………………….....19

The Attended Intermediate-Level Representation Theory………………...…….21

The Higher-Order Thought Theory………………………………………….…..26

Conclusion…………………………………………………………………….....28

III. PROBLEMS WITH AIR THEORY, GWT, AND HOT THEORY(?)…….……29

Problems with AIR Theory ………………………………………………….…29

Problems with GWT ……………………………………………………………35

Problems with HOT Theory? ……………………………………………………43

IV. CONSCIOUSNESS AND ADAPTIVE FUNCTION………………………..….47

REFERENCES…………………………………………………………………………………..5


Chapter I: Introduction

§1. Introduction: Varieties of Consciousness

One of the most difficult debates in cognitive science concerns the issue of consciousness. What is consciousness? How do we become conscious? Are there different types or degrees of consciousness? What does it mean to be conscious? Several matters that fall under the umbrella of consciousness research become muddied due to how elusive it is to pin down what consciousness is.

Consciousness is something deeply familiar to us, yet it is difficult to characterize accurately and completely. It is present when we are aware of things, such as the world around us, or of how it feels to be oneself. Yet this is an incomplete answer, because what it is like to be me, with my perception and awareness, differs from what it is like to be someone else with theirs. What we are looking for is a complete and accurate characterization of consciousness that would allow us to distinguish objects that have the ability to be aware, mentally responsive, and receptive to the world around them as well as to their own internal states (e.g., human beings) from objects that do not (e.g., teacups).

A panoply of characterizations of consciousness has been offered. Block (1995) draws a controversial distinction between what he calls “access consciousness” and “phenomenal consciousness.” Access consciousness refers to those cases in which information stored in a person’s brain is available to be reported, reflected upon, and rationally acted upon. For example, when I am thinking about what I want for dinner tonight, I can reflect on many foods and on whether eating them would give me satisfaction and satiety, but I am also able to report my mental states—what I am thinking about. As I think about wanting nachos for dinner, I am also able to report that “I want nachos.” By “phenomenal consciousness,” on the other hand, Block intends to pick out “what it is like for us” to experience, e.g., the taste of coffee, the visual experience of a work of art, or the way a symphony sounds. Phenomenal consciousness seems to be what we are really interested in, yet it seems much more mysterious and puzzling than access consciousness.

Many reject this distinction. Prinz (2012) writes, “I don’t believe there is any form of access that deserves to be called consciousness without phenomenality. After all, access is cheap” (p. 5). Rosenthal (2002a) points out that the distinction tacitly favors certain theories of consciousness while disadvantaging others. He argues that the distinction between access consciousness and phenomenal consciousness is ultimately untenable: first, access consciousness does not pick out any property of mental states that we take to constitute their being conscious, and second, it conflates two types of phenomenality: phenomenal experiences in which we are aware of being in a mental state, and phenomenal experiences in which we are not aware of, or even deny, being in particular mental states (see Rosenthal 2002a, p. 655). Despite its influence, Block’s distinction remains a controversial one.

Rosenthal (2002b) advocates a distinction between three types of consciousness. The first type is “creature consciousness” (which I will call “c-consciousness” for short). We say that something has c-consciousness when it is awake and responsive to stimuli. We might say that if I am knocked out, asleep, or in a coma, I lack c-consciousness, because I am neither awake nor responsive; I would lack c-consciousness, but I would still be alive. C-consciousness is a sort of “bare minimum”: concepts such as phenomenality and experience do not enter into the notion of c-consciousness. If we wanted to talk about some being as having any type of consciousness, at a minimum that being would have to be at least c-conscious.

The second type of consciousness Rosenthal distinguishes is transitive consciousness (which I will abbreviate as “t-consciousness”). T-consciousness refers to a being’s being conscious of something, e.g., by sensing, perceiving, or thinking about it as present. When I am sitting at my desk, I am t-conscious of many things, such as my computer, the words on my screen, my cup of tea, the coaster it rests on, my desk, my keyboard, and so on. Importantly, I am also t-conscious of my mental states, such as my beliefs, desires, and intentions, even if I am not aware of all of them at one time. In cases of being aware of one’s own mental states, those states themselves become the stimuli one is conscious of. In other words, t-consciousness is the mental representation of a stimulus, in a sense that does not imply that the mental representation is itself a conscious state. One can think of t-consciousness in terms of receptivity: we are receptive to stimuli even if we aren’t always aware, in a higher-order way, that those stimuli are affecting us.

The final element of Rosenthal’s distinction is state consciousness (which I’ll shorten to “s-consciousness”). It refers to the property of mental states to which we refer when we say, of some particular mental state, that that state is conscious. (Note that s-consciousness applies specifically to mental states, not to the person who bears them: a person can be t-conscious of something without his mental states thereby being s-conscious.) It is also important to note that when one has a mental state that is s-conscious, one is able to report that state. So, for instance, if I am aware that I have a desire for nachos, then I can report that desire by saying, e.g., “I want nachos”—a statement about my desire, which incidentally expresses my awareness of that desire.

Rosenthal (2005) goes on to argue that “[a] mental state’s being conscious consists, at least in part, in one’s being conscious of it.” I will rehearse some of his arguments for this claim in later sections, but for now let us use the term “s-consciousness” in a way that doesn’t beg any questions or favor any particular theory. Here’s what everyone can agree on: when I see the objects in my visual field, I am t-conscious of them, but when there’s something that it’s like for me to have particular sensory, perceptual, emotional, and cognitive states from seeing these objects, those states become s-conscious. S-consciousness doesn’t apply only to one’s sensory states—even my own intentional states (such as my beliefs, desires, intentions, appetites, hopes, and so on) can be s-conscious states.

One can think of this tripartite distinction as a sort of hierarchy. A creature whose states are s-conscious is necessarily t-conscious of something, and a creature that is t-conscious of something is necessarily c-conscious. On the other hand, a creature that merely has c-consciousness doesn’t necessarily have t-consciousness or s-conscious states, and the states in virtue of which a creature is t-conscious aren’t necessarily s-conscious.

Adopting this tripartite distinction allows us to ascribe different kinds or levels of consciousness to a creature while remaining agnostic as to whether it has phenomenal consciousness (lobsters and worms, for example, may not have s-conscious states, but they are often c-conscious and t-conscious). This distinction is a useful tool not only for how we reason and talk about consciousness, but also for our psychological and neuroscientific practices, which thereby gain a more robust notion of consciousness that can be studied.

Some theorists have misrepresented this characterization. Prinz (2012), for example, talks only about c-consciousness and s-consciousness, neglecting to mention t-consciousness. He writes:

State consciousness refers to those mental states, such as supraliminal visual perceptions, that are consciously experienced. Creature consciousness refers not to individual mental states but rather to the global condition of an organism…One might think that creature consciousness is orthogonal to state consciousness, but the former can be readily defined in terms of the latter…If we found a creature that had no conscious states—insects might be examples—I don’t think it would make sense to refer to them as conscious in any sense. When a fly falls after being swatted and then recovers, we sometimes say it was stunned, but there is little temptation to say it was knocked unconscious. (p. 7)

Prinz claims that c-consciousness can be defined in terms of s-consciousness. But this is not so. C-consciousness involves no phenomenal character: we can ascribe this type of consciousness to beings even when we aren’t sure whether they have any phenomenology, so long as they act in ways that might suggest that they do. S-consciousness, on the other hand, has phenomenal character built into it. Having s-consciousness entails having c-consciousness, but not vice versa. Even Prinz’s fly example can be challenged: when we swat a fly and render it dazed, we can ask whether it possesses any type of consciousness. If, at the bare minimum, it lacks wakefulness or responsivity, we couldn’t say that it has even so much as c-consciousness. But when it perks up again, we can say that it has regained c-consciousness (and perhaps even t-consciousness) while denying it any s-consciousness.


Let us define s-consciousness neutrally, such that it doesn’t favor one theory over another. S-consciousness, the seeds of which were first planted by Nagel (1974), is simply “what it is like for one to be in a mental state.” When I hear a symphony, experience the flavor of nachos, or have the visual experience of Malevich’s Black Square, I receive certain perceptions that cause me to be in certain perceptual states. This is the sort of consciousness that many are interested in: these stimuli give rise to “what it is like for me to have the perceptual state” of experiencing an auditory, gustatory, or visual stimulus. S-consciousness, then, is what it is like for me to be in a particular mental state.

These distinctions concern only what we mean by “consciousness.” Failure to converge on a consistent definition of consciousness results in theorists talking past each other, and impedes progress on discovering the underlying neural mechanisms of consciousness. Different types of consciousness may be handled by different neural mechanisms, and with a disjointed definition we couldn’t make any headway on other important issues in consciousness studies.

What I have offered are different definitions of the term, not theories as to what consciousness is, how it arises, when it occurs, and what structures underpin it. To answer these questions, we have to look not at definitions, but rather at theories of consciousness.

A plethora of theories of consciousness has been put forth. I will outline a few of them, and then make explicit which theories I will consider in more detail in subsequent chapters.

§2. Theories of Consciousness

§2.1. Dualism

One theory of consciousness adheres to dualism. The dualist theory of consciousness tries to explain how we can have consciousness when it seems that consciousness can’t be reduced to brain states. Descartes was the archetypal substance dualist, believing that there were two basic substances in the world: the physical and the mental. On this view, whereas objects such as bodies and brains are composed of physical matter, things such as the mind (and, in particular, consciousness) are composed of a nonphysical substance. Another type of dualism is property dualism, on which there are only material objects, but the brain has nonphysical properties.1 Under this view, consciousness is a causal byproduct of brain activity, but in a way that is ineffable and irreducible to brain states. In other words, we know consciousness comes from certain processes of the brain, but we can’t talk about it in terms of those brain states (that is, in terms of their neural aspects); our only handle on the property of consciousness is our ineffable first-person apprehension of it.

Dualist theories face a deluge of problems stemming from their commitments. One of the fatal wounds of dualism is that it has not yet provided an answer as to how the nonphysical interacts with (or arises from) the physical.2 If the mind and body are distinct, how is it that mental properties are affected by changes in the physical when the two are purportedly independent? Descartes thought that the pineal gland was the location where the mind “met up” with the physical, but this explanation offers no real insight. After all, it doesn’t tell us how the mind interacts with the brain: Descartes only thought that it did, and offered a hypothesis about where it did—viz., in the pineal gland.3 Doubtless this doesn’t solve the mind-body problem; it only pushes it back.

1 See Kripke (1980) and Chalmers (1996).
2 See Churchland (1984), pp. 16–36.
3 Perhaps one who accepts a Humean picture of causality might not find this objection sufficient. That is, perhaps there is no causality per se, but only constant conjunction of events that occur prior and events that occur after. I will not argue here as to whether this view is sufficient, but many find the Humean picture of causality counterintuitive and untenable (see Wilson, 2010).


Still, there are those who hold that dualism is true.4 Most contemporary dualists are property dualists, holding that the brain gives rise to nonphysical properties that cannot be reduced to brain states. One such property is consciousness. Property dualists might hold that certain brain states give rise to consciousness, but that consciousness can’t be reduced to those brain states. This view is known as epiphenomenalism. The epiphenomenalist holds that brain states can cause mental states, but not vice versa. In effect, according to the epiphenomenalist, consciousness is not reducible to the physical, which gets around the substance dualist’s problem of explaining how the mental interacts with the physical.

Yet property dualists (and epiphenomenalists) aren’t free of problems. Property dualists must still explain how and why certain properties arise from certain brain states, as well as how and why we have consciousness. Moreover, the problem of causation persists for the property dualist: why can physical states give rise to mental states? While the property dualist avoids having to explain how the nonphysical can communicate with the physical, an explanation is still required as to how physical properties can interact with or give rise to nonphysical properties. The more formidable problem for the property dualist, however, is explaining why these properties can’t be reduced to brain states. If consciousness occurs because of one’s being in a certain brain state, why is it invulnerable to being reduced to that brain state? The main pull of property dualism is the persistent intuition that we have no way to explain in materialistic terms how consciousness arises. Yet, ironically, the biggest obstacle for the property dualist is to explain why we can’t have such an explanation. The property dualist contends that consciousness is irreducible to the brain; the opponent asks: why is consciousness irreducible? What gives it this status? The arguments that have been proposed are unsatisfactory.

4 See Kripke (1980), Chalmers (1996), Jackson (1982), and Nagel (1974).

§2.2. Representationalism

Dualism isn’t the only theory of consciousness available. One can instead adopt a physicalist theory of consciousness. For example, representationalism is the view that the way things seem to us is a function of how we represent them. The specific content of my subjective conscious experience (its “qualia”) is what constitutes my consciousness. It isn’t merely that I see a cup of tea, but that I see it as a cup of tea, and the way I represent it becomes part of my conscious experience. Put another way, how the cup of tea is represented in me makes up my conscious experience of it.

The reason representationalism fits with physicalism is that it is widely believed that we can give a naturalistic account of representation.5 Consciousness is the way it is because the mechanisms in charge of perception and representation interpret items in our experience in certain ways. A detailed description of these mechanisms would constitute a sufficient explanation of why we have the conscious experiences that we do.

Representationalism also faces difficulties. One worry is that because the representation is the privileged source of conscious access, representationalism draws no distinction between representations of actual objects and representations of objects that are not actual. For instance, I can have a particular representation of the actual cup that is in front of me (it is blue, of a particular size and orientation, and so on). However, I could have ingested some substance that gives rise to a hallucination in which I get the same representation of the cup even though the actual cup is absent. In this case, the two representations are identical, but the first comes from an actual object while the second comes from a nonactual object. The issue here is that our experiences of both actual and nonactual objects have the same intentional content, yet the intentional object of one of the experiences is actual and the intentional object of the other is not. We don’t take nonactual things to exactly resemble actual things; yet, under representationalism, both give rise to the same intentional content. Accepting representationalism means accepting this counterintuitive consequence.

5 See Dretske (1995), Dretske (1997), and Lycan (1996).

Finally, and perhaps most problematically, representation can occur nonconsciously. For instance, while driving, I can have representations of things such as the road, the nearby houses on my route, traffic lights, stop signs, etc. Yet I can be consciously unaware of these things: I drive in accordance with the laws and signs but am not consciously aware of the things around me. This is what is known as “highway hypnotism”: one drives as if on autopilot, but is unaware of one’s surroundings.6 It is evident that we can have representations without being aware that we have them. The main criticism of representationalism as a theory of consciousness is that it leaves unclear why we can have unconscious representations, and what exactly distinguishes conscious representations from unconscious ones.

§2.3. Higher-order Representationalism

An alternative theory of consciousness is higher-order representationalism (HOR),7 according to which consciousness occurs because our mental states are represented at higher levels, either by perception or by a thought about the lower level where the phenomenon occurs. For instance, when I perceive the tea cup, I have a first-order (or first-level) mental state of perceiving the cup. In a second-order (or higher-order) mental state, that first-order mental state is represented, viz., I have a higher-order representation of perceiving the cup. At the first level, I am aware of the cup, while at the second level, I am consciously aware of the cup. This differs from representationalism in that the latter does not require higher levels to represent our mental states for those states to be conscious. The tea would be represented as in the above case (such as through the senses; I have a particular perceptual state upon experiencing the cup of tea), but in order for it to be a conscious perception, I would need a corresponding higher-order representation—e.g., the thought “I am perceiving the cup of tea.” I have the perceptual representation of the tea, and a higher-order thought about that representation.

Thus, the main difference between HOR and representationalism concerns the level at which s-consciousness arises. For the representationalist, s-consciousness occurs at the first level. For the higher-order representationalist, s-consciousness occurs when first-order mental states are represented by higher-order mental states.

6 See Williams (1963).
7 There are multiple types of higher-order representationalism (HOR). I will limit discussion to higher-order thought (HOT) theory, but another type of HOR is higher-order perception (HOP) theory. HOP theory holds that a sort of internal mechanism or “inner monitor” represents our mental states perceptually. When the inner monitor perceives our mental states, only then do we become conscious of them. HOR and HOT will be discussed further.
8 See Prinz (2012) and Dehaene (2014).

HOR, specifically higher-order thought (HOT) theory, may seem counterintuitive at the outset. “Wait, does this mean I’m only conscious when I’m aware that I’m conscious?” one might ask the HOT theorist. “That doesn’t seem right! I’m conscious of lots of things without realizing my own awareness of them!” Several theorists have raised this objection to HOT theory,8 which I will call the “concert argument.” Dehaene (2014) writes:


[T]he link between conscious perception and self-knowledge is unnecessary. Attending a concert or watching a gorgeous sunset can put me in a heightened state of consciousness without requiring that I constantly remind myself that “I am in the act of enjoying myself.” My body and self remain in the background, like recurrent sounds or backdrop illumination: they are potential topics for my attention, lying outside my awareness, that I can attend to and bring into focus whenever needed. (p. 24, italics in original)

The objection holds that we don’t need a sense of self when experiencing a stimulus (such as a concert) in order for that perception to be conscious. According to this objection, if HOT theory is true, then our HOTs must be conscious if we are to have any conscious experience of anything at all. This seems problematic because we take ourselves to have conscious experiences even when we aren’t aware of our own HOTs.

§2.4. Global Workspace Theory

Dehaene champions another account of consciousness—the global workspace theory (GWT). It maintains that brain areas can communicate and exchange information with each other, forming a sort of network of systems (that is, a workspace) that assimilates and handles specific information. According to GWT, consciousness arises when information becomes available to this workspace. Information is stored and made available to the various systems in charge of cognition, perception, and so on. When information is retrieved from the workspace by the relevant systems, consciousness happens.

One benefit of GWT is that it draws on an extensive body of empirical evidence. Dehaene and Baars have each done voluminous research on GWT, and their evidence points to the frontal lobe as the store that information enters, with different systems (for example, the visual areas, language centers, and auditory areas) retrieving that information in order to make use of it. Much neuroscientific research in the study of consciousness suggests that the frontal lobe holds incoming information for a period of time so that other cognitive systems can retrieve and utilize it.

One shortcoming of GWT is that, while it offers an account of how consciousness works, it does not tell us what consciousness is or why consciousness is the way it is (“Why does seeing the color green feel the way it does?”). It doesn’t tell us anything about the “what it’s like” aspect of s-consciousness.9 A robust theory of consciousness ought to be able to explain why stimuli give rise to s-consciousness, that is, to the phenomenal properties of our experience. GWT explains that we have consciousness when information is distributed throughout the workspace, but it does not tell us why consciousness feels the way it does.

§2.5. Attended Intermediate-level Representation Theory

The final theory I wish to describe is Prinz’s theory of consciousness, which he calls the attended intermediate-level representation (AIR) theory. According to AIR theory, attention is both necessary and sufficient for consciousness. Prinz writes: “Consciousness arises when and only when intermediate-level representations undergo changes that allow them to become available to working memory” (p. 97). The idea is that, while we experience many items at a time, the only items that make it into consciousness are those to which we attend. When we attend to these things, they become available to working memory, which is how we become s-conscious of them. “Attended” and “intermediate-level” will be explained in the next chapter.

9 This is what Chalmers calls the “Hard Problem” of consciousness: the challenge for a theory of consciousness to explain what it is like to have the phenomenal qualities of experiencing some stimulus. See Chalmers (1995).


It is interesting to note that AIR theory shares similarities with both GWT and HOT theory. Like GWT, it stresses the importance of working memory as the system that makes information available for use (GWT holds that working memory is handled by the frontal lobe and allows information to be accessed by other mental systems). AIR theory shares with HOT theory the idea that consciousness is “above and beyond mere receptivity”: both theories require some mediating mechanism for consciousness to occur. The difference is that HOT theory says that consciousness happens when we have a HOT about our mental states, while AIR theory says that attention is the mechanism that allows consciousness to happen. Prinz writes of the difference: “But the AIR theory is not a higher-order theory; attention does not work by re-representing the attended states” (Prinz, p. 90).

AIR theory is enticing not only because it fits with commonsense intuitions—attention and consciousness are often seen as intimately linked—but also because it has empirical backing. Studies of unconscious perception, for instance, suggest that stimuli which are unattended are also nonconscious. In these studies, a subject is exposed to a briefly flashed stimulus and is later asked whether she saw anything (or, in some studies, the subject engages in another task that shows that the briefly flashed stimulus had a noticeable effect on her). This methodology suggests that a subject can perceive a stimulus that, due to its brief appearance, does not make it into consciousness. Thus, we are conscious of items to which we attend, and, conversely, unconscious of things to which we do not attend.

Despite the evidence supporting AIR theory and the idea that attention is both necessary and sufficient for consciousness, there have been studies concluding that attention and consciousness can occur independently of each other. Studies of blindsight, a rare phenomenon in which a person is cortically blind but still able to navigate and discriminate between objects, suggest that attention is possible without phenomenal consciousness. Such studies purport to show that we can have attention without consciousness (and vice versa), offering seemingly knock-down results against AIR theory. These studies will be explored more fully in the next chapter. In sum, there is empirical evidence that supports AIR theory, but the theory also faces potentially insurmountable empirical challenges.

Contrary to initial appearances that might make these theories seem unsalvageable, many of them have found an audience among psychologists and philosophers. My project in the chapters to come is to examine three of the above theories of consciousness, namely GWT, AIR theory, and HOT theory. These three theories are widely discussed in the sciences, as they attempt to fit consciousness into a physicalistic framework, locating consciousness within our understanding of the brain and its underlying neural mechanisms. Examining these three theories makes it possible to take scientific evidence pertaining to consciousness seriously.

The central question I will be addressing is whether s-consciousness has any known adaptive function. I will examine the three theories of consciousness, determine which offers the most explanatory and accurate characterization of consciousness, and ask whether consciousness is a characteristic that benefits an organism’s biological fitness. If consciousness is necessary, important, or useful for an organism’s adaptability, then presumably one reason for human proliferation is that we are able to have phenomenal experiences. What it’s like to taste meat, to see the colors of a predator, and to feel the sting of fire would thus all be vital to our continued existence. But, as I will argue, there are compelling reasons to think that s-consciousness, in particular, does not have any known adaptive function.

There are a few misconceptions that I must clear up before I go any further. First, the question of whether consciousness has any known adaptive function is the focal point here. This question is not translatable into “Does consciousness exist?” or “Do we even have consciousness?” This investigation examines conditions of fitness and natural selection rather than making the claim that consciousness is merely an illusion. Moreover, the topic here is also not whether consciousness is per se functional. The question “Does consciousness have a function?” has a clear, uncontroversial answer as well, and so would not be a fruitful topic of research. As far as I am concerned, we are not zombies who lack consciousness. I take the answers to these questions to be uncontroversial or unrelated to the topic at hand.

I am not advocating for a sort of eliminativism about consciousness, nor am I advocating for a sort of epiphenomenalism on which we have consciousness but it does not directly affect our behavior. In fact, I hold the opposite—consciousness fulfills certain roles in our lives and often does have causal efficacy over our behavior. Consciousness, in my view, has a function in the “thin” sense, in which it plays a certain role for an organism. However, I am arguing that s-consciousness has no known adaptive function, or function in the “thick” sense, in which it contributes to an organism’s fitness, its cohesiveness with its environment, its ability to survive, and so on.

What I will be arguing, then, is that humans could have survived as well as we did with or without s-consciousness.10

10 Often, an illegitimate move is made in which one argues from mere, basic function (in the thin sense) to adaptive function, claiming that if something has a known function, it therefore has a known adaptive function. I draw a distinction between these two senses of function, and argue that s-consciousness lacks the latter.


Second, my intention here is to explore consciousness through an evolutionary lens. For those who deny the reality of evolution, natural selection, environmental fit, or adaptive function, my discussion will be irrelevant. Because the present debate concerns whether s-consciousness has any known valuable consequences for a species’ persistence and proliferation, those who discard evolutionary theory will find this work neither useful nor persuasive.

The point I am stressing here is that my thesis concerns adaptive function—the advantage that promotes a creature’s cohesiveness with its environment. S-consciousness is evidently useful in several respects, such as in developing phenomenological methodologies or understanding the objects of our consciousness. Without consciousness, perhaps appreciating art, experiencing what it is like to eat flavorful foods, and introspecting sensory states would be impossible. However, the primary topic of debate is whether species that have the capacity for conscious experience are more likely to persevere than creatures that lack that capacity. I will begin this discussion by delving into the details of GWT, AIR, and HOT theories, and examining what each of them says about whether consciousness provides any adaptive function. As such, it is crucial for my research to be situated in an evolutionary context.

Finally, I also wish to stress that I am asking whether the adaptive function of consciousness is known. In the future, new evidence may bear on the question and may overturn my answer. But in this work I am offering an answer as to whether we have any good reason to believe that, as of now, s-consciousness has any adaptive function.

To conclude, I offer a brief tour of the rest of this thesis. In the next chapter, I will present the motivation and empirical evidence in favor of each of the three theories of consciousness. In chapter three, I will highlight some of the negative arguments that attempt to repudiate each view. There I will advocate for HOT theory, bringing out the tripartite distinction between c-consciousness, t-consciousness, and s-consciousness in more detail than I have above.

I will also defend HOT theory from the particularly damning arguments launched against it. Finally, in my fourth chapter, I conclude that HOT theory ought to be endorsed as the most successful theory of consciousness, and that, under HOT theory, s-consciousness has no known adaptive function, contrary to what our intuitions incline us to believe.


Chapter II: Theories of Consciousness

Global workspace, attended intermediate-level representation, and higher-order thought theories of consciousness have gained notable recognition over the years. These three theories attempt to explain the how, why, and what of consciousness, offering solutions to the central questions concerning consciousness. In this chapter, I summarize these theories and present the virtues of each.

§1. The Global Workspace Theory

The global workspace theory of consciousness (GWT) states that information becomes conscious when it enters working memory and comes to be utilized by a wide variety of cognitive mechanisms. “According to this theory, consciousness is just brain-wide information sharing,” (Dehaene, 2014, p. 165). When we are presented with a sensory stimulus, the perceptual information is first processed in specialized brain areas, and then sent to the “global workspace” for access by the systems in charge of memory, judgment, motor control, and language. The information can thus be reported and acted upon. Notice that GWT privileges what Block (1995) calls “access consciousness” over “phenomenal consciousness,” or “what it’s like”. For GWT, conscious states must be reportable. Dehaene (2014) writes, “All this evidence points to an important conclusion…subjective reports can and should be trusted,” (p. 43).

Bernard Baars first proposed GWT in his 1988 book A Cognitive Theory of Consciousness. There, he characterized consciousness as the result of competition between processors: the information that wins the race is broadcast to the entire workspace. Input processors receive information, which is then distributed throughout the workspace and received by a subset of unconscious processors that are specialized for different operations. As I see the tea cup on my desk, I see various aspects of it: the shape, the opacity, the color, its orientation, and so on. The cup is held in my working memory, and I can analyze these various elements and synthesize them because this information is accessed by the various systems in charge of the relevant processing. But I don’t just see the individual properties; I see them as bound together into a single object. I have conscious awareness of my perception of the cup insofar as I have information about it in working memory and the systems in charge of processing information in the workspace (i.e., working memory) can access it.

Dehaene adopts GWT and modifies it slightly from Baars’ view. In Dehaene’s characterization, information that enters the workspace is held in a storage area, which Dehaene identifies as the prefrontal cortex. Instead of different processors competing to access the information, our brains have a sort of neural routing system that disseminates information to the relevant systems. This allows those systems to process the information and make it available for use and reporting. Once the information is conscious, we can talk about it, evaluate it, act on it, save it for future use, deliberate on it, and so on.

GWT also explains why certain items do not make it into consciousness: some sensory information does not make it into the information store. As I become consciously aware of a tea cup, I subliminally see various items in my visual field, but am not consciously aware of them. Seconds from now, I will not be able to, say, recall or act on that information, because it was not encoded by my prefrontal cortex. Another explanation is that the connections between the store and the relevant systems have been disrupted or severed. For instance, if I had a traumatic brain injury that damaged either the prefrontal cortex or the connections between the prefrontal cortex and other brain regions, such as the visual cortex, then we could expect many of the items in my early visual processing areas not to breach the threshold of consciousness.

§2. The Attended Intermediate-Level Representation Theory

Prinz (2012) proposed his own cognitive theory of consciousness. The “attended intermediate-level representation,” or AIR, theory holds that attention is both necessary and sufficient for consciousness: an item of information must be attended to in order for it to become conscious, and conversely, when a state is conscious, it is an object of attention.

It is important to note that, for AIR theory, attention is what allows items to come into consciousness. Under AIR theory, if I attend to an object, I will thereby have a conscious state representing it. As with GWT, working memory also plays an important role: attention acts as a “gatekeeper” to working memory, but under AIR theory, whenever I attend to something, I also become conscious of it. That is, attending to an item allows information about it to be stored in working memory, where it can then be reported or acted upon. According to Prinz, working memory can be understood as “…a short-term capacity that allows for executive control,” (Prinz, 2012, p. 93). Putting this all together, when I pay attention to, say, this cup, my attention encodes the information about the cup in working memory. Because the cup is, effectively, held in my working memory, I can, for example, have a conscious state representing the cup, such as desiring it.

It is also important to note that, according to AIR theory, the relationship between working memory and attention is bidirectional. The items we attend to clearly influence which items make it into working memory, but working memory can also affect our attention: if working memory is full or occupied with some other task at hand, it becomes difficult to pay attention. For instance, if I were to attempt to attend to a particular item in a visual scene, it would take me longer to do so if I were also occupied with a difficult task that required some allocation of my attention. It becomes much more difficult to attend to the cup when other stimuli compete for my attention (such as loud sounds outside, a task that I am already focused on, and so on). The other stimuli are encoded in working memory, making paying attention to the cup more difficult. So while attention is the “gatekeeper” to working memory, attention is also constrained by working memory.

Prinz notes that stimuli sometimes capture our attention, but are sometimes captured by it. If a stimulus, such as a loud noise outside or a glaring visual anomaly, captures our attention, we call this “bottom-up attention.” In other words, information about the stimulus forces its way into our working memory. By contrast, “top-down attention” occurs when we deliberately and effortfully focus or direct our attention toward particular stimuli. Currently, there are several objects competing for my attention, but I can attend to, say, my book instead of the tea cup, or my keyboard instead of the book. I can force my attention toward objects with some deliberate effort.

What about the “intermediate-level” component of AIR theory? Prinz notes that perceptual processes are hierarchically organized. Marr (1980) argued that vision has three stages: low, intermediate, and high. At the low level, the brain computes a piecemeal image, in which an object of vision is composed of dots and lines. At the intermediate level, we get a “2.5D” image of an object, where we see the object as a synthesized whole: features such as shading, boundary distinguishing, and surface textures are combined to give a 2.5D sketch of the object in vision. The high level uses the information of the 2.5D image to determine a 3D form/model of the object, which we utilize for object recognition. Jackendoff (1987) suggested that visual consciousness arises at the intermediate level. Drawing on this idea, Prinz writes, “Visual consciousness arises at a level of processing that is neither too piecemeal nor too abstract. It arises at an intermediate level, which occurs between discrete pixels and abstract models,” (p. 52).

The final aspect of AIR theory is that what is attended to must be a representation of an object of consciousness. That is, as I attend to the cup, for instance, I get a representation of the cup. Thus, if I am to call some experience a conscious one, it is because I get representations of the objects in my experience. Contrast this with the higher-order thought (HOT) theory, according to which a conscious experience of, say, a cup requires both a “first-order” representation of the object (the cup) and a “higher-order” representation of that representation. AIR theory rejects the higher-order view; according to AIR, an object of consciousness is experienced by, say, being perceived—i.e., by applying the mechanisms of attention to a first-order state. In contemporary jargon, AIR (like GWT) is a “first-order theory”, not a higher-order theory.

To summarize, then: AIR theory holds that consciousness arises when one attends to an object and has an intermediate-level representation of it.


The virtues of AIR theory are plentiful. For one, the link between consciousness and attention is an intuitive one. Consciousness is often taken to be “above and beyond mere receptivity,” and attention is a fitting candidate for the missing ingredient that makes for phenomenality. In addition to the intuitive pull of AIR theory, a large body of empirical research has reached similar conclusions. Consider the disorder known as “unilateral neglect”, in which a person is unable to have a conscious perception of items on one side of their visual field. The fact that this disorder is often due to damage to the right inferior parietal cortex—a likely site for the mechanisms of attention—supports the idea that attention is required for consciousness, and vice versa.11 Note that perception in these cases does indeed occur, but subliminally. Prinz writes: “Research on unilateral neglect gives us a candidate mechanism for consciousness: attention. When attention mechanisms are damaged, consciousness is lost, even though perception remains,” (2012, p. 83).

Studies that examine counterintuitive phenomena in intact, healthy brains also bear on this issue. Motion-induced blindness is a phenomenon in which items in the foreground of a visual display seem to pop in and out of existence when the background of the visual scene moves in ways that distract one’s attention.12 The attentional blink is a similar phenomenon, in which a participant is asked to detect two stimuli in a rapid stream of stimuli.13 The first stimulus will be detected, but the second will not if it occurs sufficiently soon after the first, since the first momentarily occupies the perceiver’s attention. Most strikingly, there is inattentional blindness, a phenomenon in which items are not consciously perceived if one does not attend to

11 See Marshall & Halligan (1998); Bisiach & Rusconi (1991); and Doricchi & Galati (2000) for results regarding unilateral neglect.
12 See Bonneh, Cooperman, & Sagi (2001) for results regarding motion-induced blindness.
13 See Luck, Woodman, & Vogel (2000) for results regarding the attentional blink.


them.14 This is well documented by the famous “invisible gorilla” case, in which two teams pass a ball back and forth and viewers are asked to count how many passes one team makes. Sometime during the video, a gorilla shows up, beats its chest, and walks off. Many viewers fail to notice that a gorilla appeared! All of these phenomena suggest that attention is required for consciousness.

Other studies, which show that consciousness is required for attention, also weigh in on the conclusions posited by AIR. Visual pop-out is a phenomenon in which a stimulus’s salience forces one to experience it, such as a black dot on white paper.15 Posner tasks are tests in which an arrow appears in the middle of a screen and either congruently or incongruently predicts where a stimulus will appear. In Posner tasks, if the arrow accurately predicts the stimulus, reaction times significantly decrease.16 Binocular rivalry is a phenomenon in which two separate images are presented, one to each eye. The result is not a blending of the two images, but rather one of the images at a time.17 These phenomena demonstrate that what we attend to can be constrained by consciousness. The conclusion that Prinz draws from these results is that attention is both necessary and sufficient for consciousness.

There are parallels between GWT and AIR theory. First, both place heavy importance on working memory: GWT holds that working memory is the workspace that houses consciousness, while AIR theory holds that attention is the gatekeeper to working memory, and thus to consciousness. Second, in both theories attention plays a substantial role in how items become conscious. AIR theory holds that attention is necessary and sufficient for consciousness, while GWT holds that attention is the process that determines which objects are held in working memory for different cognitive mechanisms to utilize, thus making for conscious experience. Finally, both GWT and AIR theory offer an account of subliminal perception, in which items that do not breach the threshold of consciousness are still processed and registered in the brain. I will say more about these studies in the next chapter.

14 Inattentional blindness has garnered much interest in the psychological sciences. See Rock, Linnett, Grant, & Mack (1992); Mack & Rock (1998); Koivisto, Hyönä, & Revonsuo (2004); and Simons & Chabris (1999) for results regarding inattentional blindness.
15 See Treisman & Gelade (1980) for results regarding visual pop-out.
16 See Posner (1980) for results regarding Posner tasks.
17 See Mitchell, Stoner, & Reynolds (2004) for results regarding binocular rivalry/switching.

§3. The Higher-Order Thought Theory

In a series of publications, Rosenthal has proposed the higher-order thought theory of consciousness (Rosenthal 2005, 2008, 2010, 2011). As the name suggests, this theory holds that consciousness occurs when a higher-order state represents a lower-order one. In particular, HOT theory holds that, for consciousness to occur, the higher-order state must be a thought about the lower-order state, rather than a sensation or perception of it, as “inner sense” theorists such as Dretske (1995) and Armstrong (1968) hold. For instance, if I perceive a tea cup in front of me, the perception of the cup is initially subliminal; but if I then become aware of that perception via a higher-order thought, HOT theory entails that my perception of the cup is conscious. By contrast, if I perceive a book, but am not aware of myself as having done so—because, for instance, I am too focused on the cup—then the perception of the book will remain unconscious. In other words, there is no higher-order representation of the lower-level mental state (of the book). The information about the book is in my visual processing stream, and affects some of my behavior and other mental states, but this perceptual information is not conscious, only subliminal, as I do not have a higher-order thought representing it as part of my mental life. Note that my higher-order thoughts (HOTs) need not themselves be conscious states in order for them to make me aware of my first-order perception. HOTs can become conscious, but this would require third-order thoughts that represent them as part of my mental life. Rosenthal holds that such cases are rare, occurring only in focused introspection.

The primary aim of a theory of s-consciousness is to distinguish between conscious and unconscious processing. Whereas GWT claims that items of information fail to become conscious when they fail to enter the workspace, and AIR theory suggests items of information remain unconscious when they are not attended to, HOT theory holds that items of information fail to become conscious when a higher-order state fails to represent them.

Like AIR theory, HOT theory enjoys its own kind of intuitive pull: when I am consciously aware of the objects in my environment, I am aware of my mental states toward those objects (such as perceiving or desiring them). What it is like for me to have a conscious perception of the cup is determined by how the HOT represents the lower-order state. The HOT is what makes me conscious of my perceiving the cup. State consciousness goes beyond mere receptivity; it is our awareness of how the world appears to us in sensation, in perception, and in thought.

Like GWT and AIR theory, HOT theory is supported by empirical evidence. Lau & Rosenthal (2011) defend HOT theory on empirical grounds, suggesting that conscious awareness can occur outside of attention and that consciousness can still occur even with prefrontal cortex lesions. Lau & Rosenthal (2011) had subjects discriminate between two visually masked figures while receiving transcranial magnetic stimulation (TMS) to the dorsolateral prefrontal cortex, the area they take to be responsible for higher-order awareness. Subjects’ performances remained largely the same in both TMS and non-TMS conditions. However, in the TMS conditions, subjects altered their subjective reports of their awareness of the two figures. In other words, they could still perceive and make discriminations at the first-order level, but lacked consciousness, as they were unable to report on their lower-level states, a condition for having HOTs. Lau & Rosenthal (2011) use this result against global workspace theory, which predicts that prefrontal cortex activity reflects both conscious awareness and task performance. The conclusion that Lau & Rosenthal (2011) draw is consistent with HOT theory and shows that there is neurological evidence supporting this way of thinking about consciousness.

§4. Conclusion

Thus far, I have examined the theories of consciousness that are relevant in the current paradigm of cognitive science. It isn’t hard to see why the three theories discussed in this chapter have a foothold in the cognitive science community. Each gives its own principled account of consciousness and offers empirical support in its favor. Moreover, each is cashed out in a way that seems to fit with our intuitions. In the next chapter, I will argue that HOT theory is preferable, by examining some of the problems that face the alternatives. In the final chapter, I will discuss how HOT theory bears on the question of the adaptive function of consciousness.


Chapter III: Problems with AIR Theory, GWT, and HOT Theory(?)

In this chapter, I offer arguments against each of the three theories discussed in previous chapters, and endorse HOT theory as the most useful way of thinking about consciousness—the theory we ought to adopt. The task of the next chapter will be to apply HOT theory to the question of the adaptive function of (s-)consciousness.

The first theory to be examined is Prinz’s AIR (attended intermediate-level representation) theory, which states that attention is necessary and sufficient for consciousness. On this view, attended intermediate-level representations constitute conscious experience.

§1. Problems with AIR Theory

Despite the intuitive connection between attention and consciousness, empirical evidence has surfaced to show that there is a dissociation between attention and conscious awareness.

Kentridge, Heywood, & Weiskrantz (1999) studied a patient (G.Y.) with unilateral damage to his left striate cortex as a result of a car accident. G.Y. came to have a condition known as “blindsight”: he was able to attend to stimuli but lacked any sort of conscious awareness in his right hemifield. Kentridge et al. (1999) presented G.Y. with a Posner task in which a cue indicates where the target stimulus will appear. Subjects see a fixation point, followed by an arrow (i.e., the cue) that points either left or right. After a short period, the cue disappears and a target appears in the left or right region of the screen, and the participant must decide, as quickly and accurately as possible, whether the target is on the left or on the right. As one would conjecture, a congruent trial, in which the arrow points to the region in which the target later appears, is recognized much faster and more accurately than an incongruent one, where the direction of the cue and the location of the target do not align.

Kentridge et al. (1999) found that G.Y.’s performance on this task is on par with that of normal, undamaged subjects. Thus, conclude the authors, he is able to attend to the stimuli (the targets and the cues). However, when asked whether he perceives the stimuli, G.Y. reports that he has no conscious awareness of the target or the cue. If we take G.Y.’s reports seriously, this suggests that he has attention but lacks conscious awareness.

These results were extended by Kentridge, Heywood, & Weiskrantz (2003), who replicated the study of Kentridge et al. (1999) but added an extra condition to make more precise the dissociation between G.Y.’s attention to the stimuli and his (lack of) awareness. In this study, Kentridge et al. looked at whether G.Y. was able to discriminate between stimuli rather than merely detect them. The researchers recreated the conditions of the prior study, but in addition to having G.Y. respond as quickly as possible to the horizontal location of the target (left or right), a vertical dimension was also introduced. In each trial, a cue was briefly flashed, followed by a target, which appeared either above or below the meridian of the fixation point. G.Y. had to respond as quickly and as accurately as possible to where the target was (e.g., upper-left, lower-right, etc.). What Kentridge et al. found was that G.Y. was accurate in locating the stimulus in both the horizontal and vertical dimensions. Again, when asked whether he was aware of the location of the stimulus, G.Y. reported that he was not.18 Kentridge et al. suggest that these results show that G.Y. is able to attend to stimuli while lacking any consciousness of them, concluding that attention may well be a necessary condition for consciousness, but it is not a sufficient one (p. 831), contra Prinz’s AIR theory.

Further studies reinforce these results. Kentridge, Nijboer, & Heywood (2007) concluded that attention is not sufficient for consciousness by testing neurologically normal, healthy subjects with a low-contrast Posner task, in which the target is virtually invisible on-screen.19 The authors found that even normal subjects are able to attend to particular targets (and regions on-screen) without having any conscious awareness of them.

Schurger, Cowey, & Tallon-Baudry (2006) examined G.Y.’s ability to discriminate between a stimulus’s orientations in various positions on the screen, including G.Y.’s “blind” region. When asked to discriminate between a left- and a right-leaning stimulus that appeared on screen (that is, in both the sighted and blind regions of G.Y.’s hemifields), G.Y. performed above chance in correctly picking out the stimulus’s orientation.20 This suggests that, even though G.Y. had no conscious awareness, he was still able to attend to the stimulus.

Schurger, Cowey, Cohen, Treisman, & Tallon-Baudry (2008) tested G.Y. with a peripheral orientation-discrimination task using an attention-cueing procedure, attempting to dissociate attention and consciousness. Schurger et al. found that G.Y. performed above chance in discriminating between locations of a target stimulus without reporting any sort of awareness of the stimulus, adding further support to the conclusion that attention is not sufficient for consciousness.

18 Reports included “only aware of one or two” (of the targets) in one block, “nothing is happening—I have no awareness or experience,” and “I couldn’t tell, there wasn’t anything there…” (Kentridge et al. 2003, p. 833).
19 Kentridge et al. (2007) ran two experiments. In experiment 1, experimenters debriefed the subjects after the trials. In experiment 2, experimenters did not debrief the subjects after blocks of trials. The reason for the second experiment was that the experimenters believed it was “…possible that they may have had a fleeting experience of the primes which had faded from memory by the time they were interviewed.” (Kentridge et al., 2007, p. 866). The worry, that is, was that subjects may have had a fleeting conscious experience of the primes that faded from memory by the time they were asked for their reports. The second experiment employed a forced-choice technique which assessed subjects’ ability to detect the primes.
20 Following each trial, G.Y. reported whether he was aware of the stimulus or not.

Sumner, Tsai, Yu, & Nachev (2006) contribute to these findings. The experimenters manipulated attention by inserting a pre-cue prime in each trial, which made subjects attend to particular regions of the screen (top or bottom, followed by a left or right cue), followed by a mask and then a target stimulus which the subject was to identify.21 The authors found that attention increased the likelihood of detecting the prime (that is, becoming conscious of it), and that the attentional influence on priming also increased accuracy in detecting the target. Sumner et al. suggest that attention is dissociable from consciousness: attention can modulate sensorimotor processes that are not supported by conscious awareness. Moreover, Bahrami, Lavie, & Rees (2007) examined fMRI scans of humans who performed low-load and high-load foveal tasks requiring substantial allocation of attention. Bahrami et al. found that the response of the voxels (the 3D ‘blocks’ that build up a map of the brain) corresponding to the “invisible” stimuli22 was significantly reduced in the primary visual cortex (V1, V2, and V3) under high load, meaning that attentional capacity constrains the visual processing of those stimuli. In other words, according to the authors, attention cannot be a sufficient condition for awareness of visible but nonconscious (that is, “invisible”) stimuli (Bahrami et al. 2007, p. 509).

AIR claims that attention is necessary and sufficient for consciousness. Clearly, we have seen that at least the sufficiency claim is contentious, if we are to take neurocognitive results seriously. Others have argued that one can have consciousness without attention, nullifying the necessity claim. Norman & Tokarev (2014) examined the question by looking at how faces are processed. Holistic face processing (HFP) suggests that faces are not processed by their individual constitutive elements, but rather by their spatial configuration and the relation of each element to the others. For example, instead of processing an eye, followed by a nose, followed by a mouth, followed by another eye, and so on until each component has been processed, we have a more effective heuristic—we see the spatial configuration of eyes to nose to mouth, and process the entire relationship at once.23 HFP has been supported by the neuropsychological literature (Norman & Tokarev 2014, p. 1341). In their study, Norman & Tokarev used human facial images that were split into two sections (an “upper half” and a “lower half” of a face, divided by a line). The participants saw one image of a face for 250 ms, followed by a distractor cue in the same location as the image or in a different one, and then another image of a face for 100 ms. The participants were to determine as quickly as possible whether the two faces were identical or distinct (categorized as identical, completely different, or composite). The experimenters hypothesized that if attention were necessary for consciousness, then the trials in which the distractor cue appeared in the same location as the image should result in faster reaction times and a greater degree of accuracy (as the cue would not distract from attending to the image). However, the results show that the trials in which the distractor cue aligned with the images (labeled “attention” trials) were not significantly distinct from the trials in which the distractor cue appeared elsewhere, suggesting that faces are processed irrespective of whether we attend to their features. The authors write, “These results add to the argument that HFP is carried out independently of attention,” (Norman & Tokarev, 2014, p. 1341). Specifically, these results show that it is possible to have conscious awareness (here, cashed out as “conscious processing”) without attention, thus endangering the second half of AIR’s core tenet.

21 The study had three experiments. The first was described above. The second altered the duration of the primes (from 10 ms to 70 ms). The third altered the brightness of the prime (by manipulating the contrast of the cue and the background).
22 Determined by a forced-choice paradigm.
23 In fact, there is a brain area dedicated to processing faces in particular. The fusiform face area (FFA) processes groupings of elements in objects that are facelike (such as the “Man in the Moon”, electrical outlets, and smiley emoticons).

Another study, by van Boxtel, Tsuchiya, & Koch (2010), cements the view that attention

and consciousness are independent processes, contra AIR. van Boxtel, et al. manipulated both

influences of consciousness and attention by examining effects of afterimage visibility.

Researchers had participants perform an attention-demanding task while focusing on a stimulus

(a gray Gabor patch with a Gaussian-windowed grating).24 Moreover, the stimulus was either

highly-contrasted (and thus consciously visible) or not (and thus nonconscious) to account for

conscious awareness. The 2 x 2 study had participants focus on the stimulus, which, after a brief

appearance, disappeared and produced an afterimage: participants were to report how long the

afterimage lasted by pressing and releasing a button. The result was that attention and consciousness had inverted effects: attending to the stimulus reduced the duration of the afterimage, whereas consciously perceiving the stimulus increased it. If it

were the case that attention and consciousness were intimately bound, we would expect to see

the effects of both attention and consciousness overlap; however, we do not. This study thus

provides additional evidence against AIR.

Finally, AIR also faces some theoretical criticisms. Mylopoulos (2015) criticizes AIR on

the grounds that it cannot account for conscious intentions. Her criticism stems from AIR’s

commitment that consciousness occurs when an intermediate-level representation is attended to,

and that all consciousness is perceptual.25 So, if all conscious states must be attended to,

24 Participants either had to focus on the patch itself, or focus on the patch while performing a rapid serial visual presentation task: in both cases, attention was employed. 25 See Prinz, 2012, p. 150.


perceptual states, then what about intentional states? Mylopoulos argues that AIR is committed

to saying either that intentions are attended intermediate-level perceptual states, or that intentions

are never conscious. Mylopoulos’s point is that intentions are not plausibly assimilated to

sensory/perceptual states (what would a sensory/perceptual intention even look like?), nor does it

seem as though intentions fail to exist. Both of these options seem to be untenable, says

Mylopoulos, and counter to what we take consciousness to be, thus threatening some of AIR’s commitments. In fact, conscious intending is a normal aspect of our mental lives!

The studies and challenges I have cited pose considerable problems for AIR. Not only

have both sufficiency and necessity claims been challenged, but also, AIR has been shown to

have counterintuitive commitments, such as with intentions. These considerations might not

outright falsify AIR, but they give us sufficient reason to reject it as the canonical or most

promising model of consciousness today. Next, let us examine global workspace theory (GWT).

§2. Problems with GWT

As stated previously, GWT holds that “consciousness is just brain-wide information

sharing” (Dehaene, 2014, p. 165). That is, the brain acts as a sort of workspace: the prefrontal

cortex stores the information, which is later disseminated to other areas of the brain responsible

for relevantly processing it. Working memory plays an important role here: one of the prefrontal

cortex’s tasks is to hold items in working memory, which then makes those items accessible for

further processing. According to GWT, then, consciousness occurs when the relevant cognitive

processes “receive” and process that information.
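The workspace architecture just described, a central store whose contents are broadcast to specialized consumer processes, can be sketched in code. The following is a toy illustration of that structure; the class and the subscriber processes are invented names for exposition, not part of any published model:

```python
# Illustrative sketch of the "global workspace" architecture described above:
# a central store whose contents are broadcast brain-wide to specialized
# consumer processes. All names here are hypothetical, for exposition only.

class GlobalWorkspace:
    def __init__(self):
        self.subscribers = []  # specialized processes (e.g., language, motor)
        self.contents = None   # the item currently held in working memory

    def subscribe(self, process):
        self.subscribers.append(process)

    def encode(self, item):
        # On GWT, encoding an item in working memory and broadcasting it
        # to the subscribed systems is what makes the item conscious.
        self.contents = item
        return [process(item) for process in self.subscribers]

workspace = GlobalWorkspace()
workspace.subscribe(lambda item: f"language system names {item!r}")
workspace.subscribe(lambda item: f"motor system acts on {item!r}")
print(workspace.encode("red apple"))
```

On this toy picture, an item handled by a single module without ever entering the workspace is never broadcast, which is the sketch's analogue of unconscious processing.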


There are, however, challenges to GWT. First, as Dalton (1997) argues, GWT is a theory about how consciousness works: it is able to tell us the neurological underpinnings of consciousness as well as what cognitive processes result in conscious experiences. Yet,

GWT is unable to answer questions such as “what is consciousness?” and “how does phenomenality arise from cognitive processes?” In other words, it is unable to answer the Hard

Problem of consciousness—why do we have such-and-such phenomenal experiences, and how do we explain them? This recategorizes GWT: it is no longer a theory of consciousness, but rather a theory of the neurological processes that result in consciousness. To be sure, GWT isn’t delegitimized on this basis; however, it is important to understand its scope. It is, at best, only able to account for cognitive processes, not consciousness. Indeed, GWT may provide utility in understanding the signatures of consciousness, the underlying mechanisms, the interconnectivity of the brain, and so on, which would make it an invaluable resource in the neuropsychological sciences. At present, however, it is unable to accommodate the relevant questions about consciousness: its nature, what it is like for us to have certain experiences, why sensory objects appear to us the way they do, and so on.26

One criticism, from Lau & Rosenthal (2011), is that global workspace theory makes an assumption about awareness-related activity associated with the prefrontal cortex. As we’ve discussed, global workspace theory takes the prefrontal cortex to be the information store of the workspace. However, Lau & Rosenthal (2011) charge GWT with assuming that consciousness is necessary for certain essential behavior. The authors write that “[A]ccording to the neuronal global workspace theory, the awareness-related activity in the prefrontal and

26 Steven Pinker expresses a similar view in his book How the Mind Works (2009). He writes “So, we may not need a separate theory of where sentience [s-consciousness] occurs in the brain, how it fits into mental computation, or why it evolved…What we do need is a theory of how the subjective qualities of sentience emerge out of mere information access.” (p. 148)


parietal areas is associated with the essential behavioral functions, such as flexible control of behavior, cognitive control and ability to perform various tasks. So, on this view, the potential for good performance, in both perceptual and higher cognitive tasks, is crucial for a representation to be conscious.” (p. 366)

What does this mean? For one, because GWT holds that consciousness arises when stored information is accessed by the rest of the workspace, and because the store for that information (via the prefrontal cortex) is working memory, GWT is committed to saying that all essential and vital higher cognitive processes and behaviors must be conscious. This idea aligns with our intuitions, namely that cognitively demanding tasks must be done consciously (equivalently, that such tasks cannot be done unconsciously). Yet, several studies point to the conclusion that even the most demanding cognitive tasks, such as making judgements or determining the meanings of words, need not be done consciously.

For instance, take Marshall & Halligan (1988), who studied patient P.S., who had right cerebral damage resulting in unilateral neglect of the left side of her visual field. P.S. was able to perceive items in her right visual hemifield, but items on the left side failed to become consciously perceived. Marshall & Halligan (1988) presented P.S. with two pictures of a house: one that was on fire and one that was not. The right side of each image was identical to the other; however, in one image the left side of the house was on fire, while in the other it was not. Thus,

P.S., who was blind in her left hemifield, should have had no preference between the two houses: they are visually identical on the “conscious” right side. Yet, despite reporting no conscious awareness in her left visual field, P.S. reported that she would much rather live in the house that wasn’t on fire. How can this be? How can we make judgements, even complex ones in which multiple


layers of information need to be waded through, without conscious awareness? Simple—we process stimuli unconsciously.

The results of Marshall and Halligan (1988) show that stimuli don’t need to reach conscious awareness in order for us to process them. More than that, the results also show that we can make judgements even when the relevant determinations aren’t conscious. One phenomenon that gets brought up in this context is “highway hypnosis”—the familiar experience of driving for long periods of time and “losing consciousness” (in that one lacks conscious awareness of one’s surroundings, what landmarks one has passed, what driving actions one has taken, etc.), while nevertheless being able to drive competently and correctly. There is a simple explanation: when we drive, much of what we perceive is processed unconsciously. We don’t have to be consciously aware of all our surroundings, because much of what we do is regulated unconsciously. Moreover, not only is a lot of information processing done unconsciously, it seems as though we can be conscious of only a few aspects of, say, our visual field at any given time.

Dehaene (2014) concedes that GWT allows for unconscious processing. However, GWT only accounts for rudimentary or simple stimuli to be able to be processed under the threshold of consciousness. The utility of consciousness for Dehaene is that it acts as a sort of decision maker, allowing us to make meaningful choices based on the relevant stimuli. He says “[c]onsciousness may be the brain’s scale-tipping device—collapsing all unconscious probabilities into a single conscious sample, so that we can move on to further decisions,” (p. 93). However, the cases of

P.S. and highway hypnosis demonstrate that even complex, multilevel information can be unconsciously processed, and that consciousness, in many cases, has virtually no effect on how we act on that information.


Another example of unconscious processing can be found in expert chess players. de

Groot & Gobet (1996) looked at whether chess players can get a “full picture” of a chess board

with a single glance: that is, are chess masters able to assimilate information about a chess

board and the pieces’ positions and their available moves, with just a single look at the board?

The answer is yes. Masters at chess are able to take a quick look at the board and memorize it as

well as all the possible moves each piece can make at that given turn. How is this possible? de

Groot & Gobet found that masters of the game parse the information into meaningful segments

so that they don’t have to remember every detail about the board. Crucially, chess masters

don’t do this consciously—the ability to memorize a board is regulated unconsciously, meaning

that this method of having a “full picture” of a board and all the possible moves is done automatically, and without deliberate effort.

Evidence of unconscious processing is plentiful. Marcel (1983) flashed words very

briefly to subjects who were then told to pick a corresponding patch of color. If the word was

“blue” or “red”, then the subject was much faster at picking the correct color than if they had seen an unrelated word. Note that the flashed words were shown so quickly (<50 ms) that every

subject reported no conscious awareness of them. Marcel (1983) used this to show that we

process the meanings of words without requiring conscious awareness of them. Greenwald,

Draine, & Abrams (1996) likewise found that the meanings of words can be processed unconsciously:

researchers had participants classify emotionally positive or negative words by hitting one of two

keys while a hidden masked prime preceded each target. The prime either lined up with the target, such that a positive word preceded a positive word (for instance, “happy” preceding “joy”) or a negative word preceded a negative word, or else the prime and target were mismatched (such as “death” preceding “joy”). When the trial was congruent (positive/positive or negative/negative


pairings of prime and target), participants performed significantly faster at identifying the

emotional valence of the target than on incongruent trials. So, even if a word is flashed so briefly

that we don’t “catch it” consciously, we still process its meaning.

Naccache & Dehaene (2001) looked at whether numbers could be unconsciously

processed. Participants were told to determine, as fast as possible, whether the target number was

less than or greater than 5. However, unbeknownst to the participants, there was a hidden prime

that was flashed briefly (<50 ms), drawn from the numbers 1, 4, 6, and 9. The primes were

grouped as either smaller than 5 or larger than 5. As one would expect, if the target and the prime

were congruent (for instance, if the target was 4 and the prime was either 1 or 4, i.e., the prime

and target were both less than 5), participants responded significantly quicker than if the trial was

incongruent.
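The congruency logic shared by these priming studies can be made explicit in a short sketch. The trial structure follows the description above, but the reaction times are fabricated purely to illustrate the analysis:

```python
# Sketch of the congruency analysis shared by these masked-priming studies:
# a trial is congruent when prime and target fall on the same side of 5, and
# mean reaction times are compared across trial types. RTs here are made up.

def congruent(prime, target, boundary=5):
    return (prime < boundary) == (target < boundary)

# (prime, target, reaction time in ms) -- fabricated example trials
trials = [(1, 4, 480), (4, 1, 495), (6, 9, 470),   # congruent
          (9, 4, 540), (1, 6, 555), (6, 1, 530)]   # incongruent

def mean_rt(rows):
    return sum(rt for _, _, rt in rows) / len(rows)

cong = [t for t in trials if congruent(t[0], t[1])]
incong = [t for t in trials if not congruent(t[0], t[1])]
print(mean_rt(cong) < mean_rt(incong))  # the predicted congruency effect: True
```

The point of the sketch is simply that the effect is defined over the prime, a stimulus the participants never consciously saw.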

What is the take-away message of these studies? For one, they highlight that many

processes that we are familiar with—perception, deliberation, language comprehension, and so

on—can be done unconsciously. Comprehension of the meanings of words, comparisons of the

values of numbers, and assessment of sensory stimuli can all occur without our being conscious

of the mental states that facilitate them. Moreover, returning to the criticism of Lau & Rosenthal

(2011), GWT holds that certain actions must be performed with conscious awareness of them.

Yet, as we’ve seen, many aspects of our active lives are performed without the presence of

conscious awareness. S-consciousness, under GWT, remains necessary for much of our behavior

and cognitive processing. However, many studies, including the ones I have cited, point to the same conclusion: consciousness isn’t all that necessary.

Most of Dehaene’s research has been focused on understanding unconscious processes

and how much of what we do remains unconscious. Clearly, Dehaene is no stranger to the


conclusion that our “unconscious mind” regulates much of our cognitive tasks. Even so, he holds that consciousness must have some necessary function. He writes:

Is consciousness a mere epiphenomenon? Should it be likened to the loud roar of a jet engine—a

useless and painful but unavoidable consequence of the brain’s machinery, inescapably arising from

its construction? The British psychologist Max Velmans clearly leans toward this pessimistic

conclusion…however, I explore a different road. (p. 91)

Perhaps it is ironic that Dehaene, a pioneer of research on unconscious processing, holds that consciousness must have a utility. In any case, the criticism of Lau and Rosenthal (2011) is that GWT must hold that certain essential behaviors of our lives must be performed consciously: it presupposes that consciousness has a function. Dehaene would not object to this charge—after all, he wants consciousness to have a necessary function. However, much research on unconscious processing has dispelled the old myths of consciousness, namely, that consciousness regulates everything, from thought to perception.

The last criticism of GWT, and perhaps the most damning one, comes from Hassin,

Bargh, Engell, & McCulloch (2009). Recall how GWT works: information gets stored in the frontal lobe and is then “broadcasted” to other systems in the brain, which results in consciousness. Moreover, working memory plays an important role for GWT: when information is encoded in working memory, this broadcasting (and hence, consciousness) can occur. On this model, the frontal lobe is in charge of working memory, and working memory is in charge of conscious awareness. Hassin et al. (2009) argue that “WM [working memory] can operate outside of conscious awareness” (p. 667). Across five studies, Hassin et al. looked at whether working memory can have an “implicit” activation. In the first three studies, participants were shown either empty or filled-in disks, one at a time, at positions on an on-screen grid. The trials consisted of one of three types: pattern set, broken


pattern set, and a control set. In the pattern-set trials, the disks were presented in a predictable sequence. In the broken-pattern-set trials, disks 1–4 were presented in a pattern, and disk 5 “broke” that pattern. In a control-set trial, there was no predictable pattern. Subjects were told to determine as quickly as possible whether the fifth disk was empty or filled in. The first three studies varied in how awareness was tested (by self-report, immediate reconstruction, and immediate recognition, respectively). Study 4 followed the same formula, except that researchers explicitly stated that several types of patterns could be present and that following the patterns could improve performance. Study 5 borrowed the same task as the previous four, except that instead of guessing whether a disk was filled in, participants were told to guess the next number in a sequence (for instance, a pattern set could look like 0, 2, 4, 6 and a broken pattern set could look like 0, 2, 4, 2).

In each of these studies, researchers found that reaction times in broken-pattern-set trials were much longer than in both pattern-set and control-set trials. Moreover, researchers found no evidence that participants intended to extract or find patterns in the trials. In other words, subjects implicitly held each item in working memory, registering the previous items and their relations while predicting where the next item might appear. The authors conclude that there must be an implicit component to working memory, and that it can’t be a fully conscious and intentional operation. This is bad news for GWT, which is committed to the view that working memory allows items to come into consciousness. Yet, as the results of

Hassin et al. show, unconscious processes occur in working memory, and items encoded in working memory may not be fully conscious.
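The three trial types in the disk task can be captured by a small classifier over numeric sequences, modeled on the Study 5 variant described above. This is an illustrative reconstruction, not the authors' actual procedure:

```python
# Classifies five-item numeric sequences in the style of Hassin et al.'s
# Study 5: "pattern" if all five items continue one arithmetic rule,
# "broken pattern" if items 1-4 follow a rule that item 5 violates,
# and "control" otherwise. An illustrative reconstruction, not their code.

def classify(seq):
    assert len(seq) == 5
    step = seq[1] - seq[0]
    # Do the first four items share one constant step?
    first_four_patterned = all(seq[i + 1] - seq[i] == step for i in range(3))
    if first_four_patterned:
        return "pattern" if seq[4] - seq[3] == step else "broken pattern"
    return "control"

print(classify([0, 2, 4, 6, 8]))   # pattern
print(classify([0, 2, 4, 6, 2]))   # broken pattern
print(classify([3, 0, 4, 1, 7]))   # control
```

The slowdown on broken-pattern trials suggests that participants were implicitly computing something like this rule and its prediction for item 5, without any conscious intention to do so.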


GWT has some thorny issues associated with it, and must be revised in light of some of

the criticisms it faces. Although it is a plausible theory, the studies above pose a considerable threat to its

commitments.

§3. Problems with HOT Theory?

In this section, I turn to a defense of HOT theory, which holds that mental states

become conscious when higher-order states represent them. For instance, I am aware of my

desire to drink tea when my desire is represented in a higher-order state. Note that the first-order

state doesn’t require me to be conscious of it—I become conscious through the higher-order

representation. Now recall chapter 1, where the “concert argument” was discussed.

Dehaene (2014) says that one needn’t be represented in one’s own mental states to be

conscious (in other words, I can have conscious awareness of a concert without having to remind

myself that I am perceiving a concert) (p. 24).

Prinz (2012) has a similar (yet in some ways distinct) argument against HOT theory. He

writes:

To have a higher-order thought, we need to deploy mental-state concepts. To think that you are

seeing a sunset, you need a concept of seeing. This implies that people who have difficulty with

mental-state concepts, such as individuals with autism, should suffer from corresponding deficits

in consciousness. (p. 26)

Notice the difference: Dehaene’s argument is that we need not be thinking about ourselves in

order to be conscious of some phenomenon. Prinz’s argument is that to be conscious requires one

to have mental state concepts (e.g. believing, seeing, intending, hoping, etc.).


In both cases, the criticism is effectively the same: HOT theory is committed to saying that, for a person to be conscious, they must represent themselves, using the relevant mental-state concepts, in a higher-order thought. Running with the concert example, the criticism states:

1) According to HOT theory, I can only be consciously aware of the concert if I am aware

of myself being conscious of the concert.

2) I can be consciously aware of the concert without having to be aware of myself being

conscious.

Therefore,

3) HOT theory is false.

Dehaene and Prinz take this to be the most damaging critique of HOT theory.

Unfortunately, both thinkers misunderstand the fundamental tenets of HOT theory.

First, HOT theory does not assert that “nothing is going on” at the first-order level. It isn’t that there is no perception of the concert until I represent it in a higher-order state: in fact, HOT theory holds that there is a perception of the concert at the first-order level (that is, I have t-consciousness of the concert). In order for me to be consciously aware of the concert, I need to represent my first-order state in a higher-order state (for example, “I see the concert”), resulting in s-consciousness. Dehaene clings to the “I” that is posited by the HOT, but fails to realize that the “I” does not occupy a privileged or central position in the HOT. Self-reference is necessary, sure. But what makes the concert conscious is how the HOT represents the first-order state—the “I” is merely the anchor for a HOT that I see the concert. I can still be conscious of the concert even if I don’t represent myself as having a HOT (this would be a HOT


at the third level). Prinz, similarly, thinks that one would need mental-state concepts in order to be

able to have a HOT (such as a concept of “self”). Again, having the thought “I am aware of my

seeing the concert” is a third-level HOT, and only then needs a concept of “self”. One does not

need a concept of the “I” at the second-level representation of a first-level state of a

phenomenon: one needs only a reference to the one who is in the particular mental state.

Second, an anticipatory criticism that one might put forth is this: “What about those

states—the higher-order ones? Don’t I have to be aware of my own first-order states? Doesn’t

this lead to an infinite regress, as each state must be represented by a higher-order mental state?”

This sort of criticism, which famously accompanies theories that invoke levels of mental

representations, isn’t a threat for HOT theory. Here is a picturesque example that might highlight

how HOT gets around this criticism, and how it can account for unconscious mental states: let us

establish that at the first level, I am in a mental state (one that corresponds to perceiving the

concert). I become conscious of that mental state when it becomes represented by another mental

state—a higher-order thought. Now, to become conscious of this higher-order state, it would need to be

represented in a yet higher-order state, namely a third-order thought. This hierarchy would

“stop” when a mental state fails to be represented by a higher-order state.

More generally, let S1 be the first-order state (my perception of the concert). S1 becomes

a conscious state when S2 represents it (S2 becomes the higher-order state of S1). S1 is the state

that corresponds to the concert, and I become aware of S1 by being in S2. In turn, S3 makes S2 conscious by representing it: when I enter S3, S2 becomes a conscious state.

Now, suppose I don’t have S4, the state which represents S3. Thus, S3 will not be a conscious

state, but S1 and S2 will. The hierarchy continues until some state fails to be represented by a higher-order one (there is no Sn to represent Sn-1).
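The hierarchy just described can be modeled directly: a state is conscious exactly when some higher-order state represents it, and the chain halts at the first unrepresented state. The following is my own illustrative sketch, not a formalization drawn from the HOT literature:

```python
# Sketch of the HOT hierarchy described above: a mental state is conscious
# exactly when some higher-order state represents it; the regress halts at
# the first state that nothing represents. An illustration only, not a
# formalization from the HOT literature.

class MentalState:
    def __init__(self, content):
        self.content = content
        self.represented_by = None  # the higher-order state, if any

    def is_conscious(self):
        # On HOT theory, a state's being conscious just is its being
        # the target of a higher-order representation.
        return self.represented_by is not None

def represent(lower, content):
    """Create a higher-order state that represents `lower`."""
    higher = MentalState(content)
    lower.represented_by = higher
    return higher

s1 = MentalState("perceiving the concert")     # first-order state
s2 = represent(s1, "I see the concert")        # makes S1 conscious
s3 = represent(s2, "I am aware of seeing it")  # makes S2 conscious
# No S4 exists, so S3 itself remains unconscious:
print(s1.is_conscious(), s2.is_conscious(), s3.is_conscious())  # True True False
```

The sketch makes the anti-regress point vivid: the topmost state does its representing work while remaining unconscious, so no infinite tower of states is required.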


Thus, I don’t need to “represent myself” in the way Dehaene and Prinz think. I don’t need to posit myself as “being conscious” to have conscious experiences. Nor does HOT theory face an infinite regress, as it can sufficiently explain why we become conscious of certain states while other states—including the majority of our HOTs—remain unconscious.

Lau & Rosenthal (2011) has become the prime study to which HOT theorists defer, but few have taken up the torch.27 Prinz (2012) considers it a detriment to HOT theory that relatively few scientists have studied it and that it hasn’t become a mainstay for the rest of the consciousness-research community.28 Yet, it would be naïve to think that HOT theory is illegitimate merely because it has yet to be examined more carefully in the sciences. On the contrary, while the empirical support isn’t as diverse as that of, say, GWT, there has yet to be a decisive refutation of HOT theory. The strongest objection it faces is the one discussed above, but as we have seen, it can be adequately answered. Currently, there is no reason to reject

HOT if GWT and AIR theory are its competitors, as both GWT and AIR have the difficulties highlighted above. In fact, in light of the criticisms launched against GWT and AIR, we ought to reject those theories and accept HOT, as it is able to explain conscious and unconscious processes, and lacks obvious shortcomings.

In the final chapter, I will apply HOT theory to the question of whether consciousness has any known adaptive function.

27 Michael Graziano, who doesn’t use the phraseology employed by HOT theory, is another cognitive neuroscientist whose findings nevertheless support HOT theory. See Graziano (2013). 28 See Prinz (2012), p. 26.


Chapter IV: Consciousness and Adaptive Function

We now arrive at the central question that initiated this discussion: does consciousness have any known adaptive function? In the previous chapter, I argued that HOT theory is preferable to its rivals, GWT and AIR. But the central issue I am addressing is not merely which theory best characterizes how we ought to understand s-consciousness, but also whether s-consciousness has any sort of known evolutionary benefit to a species, or whether it is just a sort of

“biological decoration”. In order to answer this question, it was necessary to lay out what is meant by “consciousness” and to offer a hypothesis about how certain cognitive processes result in conscious experience. It was also important to delve into competing theories of consciousness, understanding their basic tenets and shortcomings. Without that background, this discussion would be, at best, shortsighted. But we are now in a position to make the case for the main claim of this work—namely, that consciousness has no known adaptive function.

It may seem prima facie that consciousness would have to provide some biological benefit to an organism. We can perhaps even think of aspects of our own lives that might be diminished if we lacked consciousness. Here, it is important to make explicit that there are multiple ways of understanding the question of the function of consciousness, corresponding to the tripartite distinction drawn in chapter 1 between creature consciousness, transitive


consciousness, and state consciousness. That the first and second of these have an adaptive function seems to me uncontroversial: wakefulness, as well as appropriate mental and behavioral responsivity, seems vital for many organisms’ survival in their environments. The question I focus on here is whether s-consciousness has any adaptive function. My claim is that there being something it’s like to have, for example, the visual sensation of green or the gustatory sensation of meat will not benefit an organism or increase the chances of its survival.

This may initially strike us as counterintuitive. After all, isn’t it clear that when I am hungry or angry, I am thereby in a conscious state? Are my hunger or emotion not immediately conscious when I enter into them? And isn’t the same true of all other psychological states?

Yet, as I have argued in previous chapters, there are many cases of nonconscious psychological states. I can be hungry but unaware of my hunger, or angry without being aware of my anger. In such cases, our mental states are not conscious states. They can remain active in our nonconscious mental life, until, for instance, someone else points out that we may be hungry or angry. Rosenthal (2008) makes this point in this passage:

[E]ven when an organism is fully conscious, many of its psychological states may fail to be

conscious states. Fully conscious humans often have many thoughts and desires that are not

conscious states, and sometimes have subliminal perceptions, which are also not conscious.

Doubtless, the same holds for other organisms as well. So we cannot infer from the function of an

organism’s being conscious to a function of the consciousness of its psychological states. The

difference between these two functions is sometimes overlooked perhaps because it is assumed that

the psychological states of an awake organism are invariably conscious, or at least psychological states of

a particular type. But since this is not the case, it is crucial to distinguish these two questions about

function. (p. 830)


This highlights several crucial considerations. Most importantly, we must always take

care to distinguish between conscious psychological states and unconscious ones. Conscious

psychological states are those that we are aware of ourselves being in. Unconscious

psychological states are those that exist within us and play an active role in our cognitive

functioning, but do not make it into our conscious first-person point of view, because we are not

aware of ourselves as being in them. A psychological state has mental properties—e.g. content,

qualitative character, or affective valence—which relate to its causal powers within an organism.

Consciousness is just one property of these psychological states. The question of whether a

person is hungry is not the same as the question of what it is like for that person to experience hunger,

which is a matter of how that person represents their state of hunger to themselves, from the first-

person point of view. Similarly, whether psychological states provide any known evolutionary

benefit to an organism is a different question from whether conscious states do. Psychological

states, which include thinking, emoting, perceiving, believing, desiring, and so on, need not be

conscious. And while this doesn’t show that (s-)consciousness has no adaptive function, it does

show that a distinction must be made between one’s psychological engagements with one’s

environment and one’s awareness of that engagement.

Plainly, it would be foolish to say that psychological states such as perception, reasoning,

belief, desire, and so on provide no known benefit to a creature. One’s ability to sense that there

is a predator nearby, or to reason about courses of action provides an immense benefit, allowing

one to assess the environment and act upon one’s knowledge of that assessment. However,

having a psychological state does not entail that I be conscious of it, for there is a plenitude of

cases in our everyday lives in which I am in a psychological state that is not conscious, for


instance, when I have a belief but am not aware of it. Indeed, it would be difficult to argue that psychological states have no adaptive function if we are to take the idea of evolution seriously.

One might be inclined to think that at least some psychological states must be conscious, or rather, that there are some states that must be conscious insofar as they cannot ever be unconscious. For example, when planning a schedule or solving a complex mathematical problem, we seem to be aware of each cognitive step from beginning to end. We can deliberate about the operations and functions needed to reach a conclusion in a math problem, and give an account of how we go about determining whether a plan fits alongside other events in our schedule. Not only can we become aware of ourselves going through each step and invoking the proper functions and behaviors needed to reach a conclusion, we are also able to vocalize how and why we decided to use those functions and behaviors. If this is correct, it would spell trouble for the idea that consciousness has no known adaptive benefit: after all, if there are some actions that must be done consciously, actions that seem to be necessary for our survival, then the main thesis of this chapter would be false.

In the previous chapter, we saw that contemporary psychological studies of unconscious processing shed light on the utility of nonconscious mental states. G.Y., the patient with blindsight, was still able to attend to stimuli in his blind field and to perceive them, in some cases performing consistently with normal, neurologically healthy subjects. P.S., who had damage to her right cerebral cortex, was unable to become consciously aware of stimuli in her left field, but was still able to differentiate, on the basis of unconscious perception, between an undesirable house (one which was on fire, but in P.S.'s left, neglected field) and a desirable one (no fire).

These cases are among the many types of evidence for the claim that psychological states can occur without s-consciousness. Further, these examples highlight that s-consciousness may be completely absent in an organism while the organism is still able to carry out functions and behaviors (albeit with hindrance).29

This effect is found in neurologically normal people as well. As I pointed out in the last chapter, there are several aspects of our lives that we normally take to be "necessarily conscious," or for which consciousness seems indispensable. Marcel (1983) and Greenwald et al. (1996) demonstrated that word meanings don't require conscious effort to process. van Boxtel et al. (2010) showed that images are still processed even when a person is distracted and does not become consciously aware of the stimulus. Naccache & Dehaene (2001) showed that number processing can occur without consciousness. Even activities that require an immense amount of learning and conditioning, such as playing chess, can ultimately become unconsciously automated, as de Groot & Gobet (1996) have shown.

Perhaps there is reason to raise eyebrows here. "These results only show that some aspects of our lives are automated," a critic might say. "They don't show that all or even most of our lives are regulated through our unconscious, much less that consciousness provides no known benefit to our lives!" Indeed, these studies alone provide evidence only for the weak claim that consciousness provides some known benefit to our lives—perhaps less than we thought. Against the stronger claim, that consciousness provides no benefit, one might object that, even if there are instances of unconscious processing, most aspects of our mental life are s-conscious. After all, aren't we the pilots in the cockpit of the mind, consciously aware of most of what we come across, able to process and intervene?

29 Even in more informal settings, blindsight and hemineglect patients are still able to discriminate between objects and interact with their environment, such as being able to walk through a messy room and avoid bumping into everyday objects. A famous example can be found here: https://www.youtube.com/watch?v=GwGmWqX0MnM.


There are reasons to think that most of our mental states are not s-conscious. When I look across a visual field, I don't consciously take into account every object in the scene: rather, I get the "big picture" of what's going on in the scene. (Imagine how many items one would have to consciously keep track of when viewing a grassy plain!) This is the principle behind phenomena such as motion-induced blindness, inattentional blindness, and change blindness, as discussed in chapter 2. In conscious perception, we employ a useful heuristic for evaluating a visual scene. Some stimuli in a scene are prevalent or salient—for example, a car crash in the road—and we evaluate them as important or crucial to the scene. Other stimuli, on the other hand, are neglected or ignored if they aren't salient in the scene (e.g., what color shoes the driver happened to be wearing).

The biological benefit of this "unconscious evaluation" is obvious. An organism need not consciously process every element in a scene, which would take time and limited cognitive resources, but only those elements that are most likely to affect its goals.

Even though consciousness allows us to focus on salient features, our nonconscious states still keep track of relevant information about individuals and the relations between them. A gazelle doesn't need to consciously identify a predator as a lion, a tiger, or a cheetah in order to run away and survive. Similarly, we don't need to consciously know the specific type of danger that is reaching toward us; we need only nonconsciously register that the danger might affect us. How fast it is moving toward us, whether it is a familiar object or a foreign one, how close it is, and so on are all features relevant to our survival when we come across another creature that is a potential predator. Likewise, having access to food and water is vital to our survival and does not require a higher-order state for us to assess. Instead, these are all regulated unconsciously and automatically. Humans probably wouldn't survive if we had to consciously analyze every aspect of every situation, determining the value of each object in a scene and whether it is relevant, without the aid of our unconscious evaluator.

In fact, there is evidence that we evaluate unconsciously. Pessiglione, Schmidt, Draganski, Kalisch, Lau, Dolan, & Frith (2007) had participants squeeze a handle in order to earn money. Just before each trial, participants were shown a subliminally masked image of either a penny or a pound. Even though the images of the penny or pound weren't reported as being seen (one sign of nonconsciousness), participants were still affected: those who were exposed to the image of the pound exerted more force than those who were exposed to the image of the penny. In addition, participants who viewed the masked pound had sweatier hands in anticipation of their reward than those who viewed the masked penny. Pessiglione et al. (2007) concluded not only that values are unconsciously processed, but also that our motivations and anticipations aren't regulated by consciousness.

In a similar study, participants were rewarded with money if they correctly pressed a button or refrained from pressing it. Participants would see a signal, then press (or refrain from pressing) the button, and were then told whether they had earned or lost money by doing so. The signal contained a subliminal image that appeared when the signal was flashed: one image indicated "go," one indicated "refrain," and one was neutral. What Pessiglione, Petrovic, Daunizeau, Palminteri, Dolan, & Frith (2008) found was that participants, despite not consciously perceiving the subliminal image, were able to correctly press the button or refrain from pressing it, resulting in their being rewarded with a large sum of money. Despite not being consciously aware of their own patterns and motivations for their behavior, participants were able to learn to evaluate nonconsciously.


These experiments support two crucial conclusions. First, they show that, counter to our intuitions, consciousness is not what drives us when we determine our behavior and motivations. Second, they show that evaluations are often made below the threshold of consciousness. Even if we don't consciously perceive a stimulus, our brains are able to account for it, associating a positive or negative value with it.

Perhaps these conclusions aren't so strange, especially if we consider everyday phenomena that modify our behavior without our knowledge. Consider a reflex: when a cup falls off the desk, even though I am conscious of the cup falling, I don't consider whether it is good or bad that the cup is falling, or whether I ought to intervene between the cup and the floor—I just instinctively dive for it. Our brains can account for such phenomena without our having to be in higher-order states. In fact, if we had to become consciously aware of every happening that occurs, we wouldn't be able to react quickly enough to those stimuli, given how long it takes for us to become fully aware of those happenings. In this example, the cup would have broken long before I was able to react, were it not the case that our brains can handle stimuli without our being aware of the mental states responsible.

Furthermore, even our intentions develop below the threshold of consciousness. In a series of ingenious experiments, Libet found that the brain spikes with activity even before we become aware of having any intentions. In one experiment, Libet, Gleason, Wright & Pearl (1983) had subjects look at a clocklike dial that revolved once in about two and a half seconds. At any time during the revolution, a subject was to flex his or her wrist (which counted as a "voluntary act"). After flexing their wrists, subjects were asked to report at what time they had formed the intention to act and flex. Meanwhile, subjects were monitored by EEG to examine brain activity during these trials. What Libet and his colleagues found was that even before subjects were aware of their intentions (as determined by report time), activity was found in the cerebral cortex several hundred milliseconds earlier. In other words, intentions start out life unconsciously.

Perhaps the most compelling case for consciousness having a known, necessary place in our lives, and thus providing an adaptive benefit for a species, is the ability to reason. The ability to plan, hypothesize, and conceptualize seems much too useful for us to simply stash away in the corner of unconsciousness. It almost seems absurd to make such a claim: after all, we can cognize the steps we take when we reason, as in the case of solving a long mathematical problem or creating a schedule. Moreover, it almost seems as though rationality entails a higher awareness of oneself.

Yet, despite these intuitions, rationality can be separated out of consciousness. Consider what Rosenthal (2008) says about the link between consciousness and rationality:

But there is reason to doubt that any such essential connection actually holds between the

consciousness and rationality of thinking. Rational thinking is not always conscious, and behavior

is often rational even when the mental processes that lead to it are not conscious. We sometimes

rationally solve problems and work out plans even when we are not thinking consciously about

those problems or plans. Intuitively it seems that rational solutions "just come to us"; that is our

introspective impression. The best explanation is that these solutions actually come to us as a result

of thinking that is not conscious. (p. 831)

How often does this happen to us? Quite often! We are presented with a problem and are unable to solve it until much later, after we take a break and expunge it from our conscious mind. In fact, phenomena such as "artist's block" or "writer's block" are a perfect example of this: an artist or writer is unable to create or work while trying to put forth conscious and deliberate effort. Rather, artist's and writer's blocks are often overcome unconsciously, after the creator has stopped putting forth conscious effort to overcome them!


Van Opstal, van Gaal, Lamme, & Dehaene (2011) set out to test whether even mathematical calculation can be done unconsciously. The experimenters exposed participants to a series of arrows on a screen pointing left and right, and participants were asked to report whether more arrows pointed left or right. In some trials, the arrows stayed on screen for a few hundred milliseconds. In other trials, the arrows were subliminally masked (and thus not consciously perceivable). The results? Despite not being able to report or perceive the masked arrows, participants were still able to determine correctly (above chance) whether more arrows pointed left or right, suggesting that at least some mathematical reasoning is unconscious.

To be fair, the above experiments don't show that all reasoning is done unconsciously: we can think of cases in which we consciously reason about our actions, such as deciding between two equally viable options or evaluating an argument. But one benefit of subliminal reasoning for an organism's survival is obvious: we don't need to consciously weigh all possibilities when presented with a problem. If a predator is detected, a creature doesn't need to balance each and every alternative in order to escape it—the creature would be dinner long before then—it needs only to nonconsciously process a set of relevant stimuli and take appropriate action to escape to safety. What it's like to experience the fear of being chased by a predator is unlikely to provide any benefit to an organism. In fact, it may even be a detriment, considering the length of time it takes for a mental state to become conscious!

A natural question, then, is why we have consciousness at all. Rosenthal (2008), who agrees that consciousness has no known adaptive function, is of the opinion that consciousness developed as a result of our having useful psychological states, which enable us to express ourselves and to apply a theory of mind to ourselves. He writes that "[t]houghts about one's own thoughts and desires initially occur by inferring in a folk-theoretical way from conscious observations of one's own behavior. These inferences do serve a useful purpose, since they give rise to a general theory of mind, which in turn enables and enhances elaborate social interaction." If Rosenthal is right, then consciousness arose as a byproduct of our sociality, perhaps in a case of gene-culture co-evolution, in which establishing social groups and cultures influenced our neurobiology. Humans whose survival depended upon how they got along and communicated with each other (even about themselves) would be more likely to have developed consciousness than those who were asocial. Despite the importance we place on our conscious lives, we neglect the utility of the unconscious. Our nervous systems are the result of evolutionary development to cope with our environment, and it is understandable why we have such psychological states. Couldn't we just agree with people like Max Velmans, who said that consciousness is basically an accident, one that in this case arises because of how useful our neurological mechanisms are when dealing with threats around us? Given the results from contemporary psychology, neurology, biology, and cognitive science, it becomes difficult to hold that (s-)consciousness was necessary for our survival.

I hope the case I have presented here isn't seen as a pessimistic conclusion. Perhaps consciousness has some known function in other aspects of our lives, such as in art and aesthetics, or perhaps in spirituality. But consciousness is an evolutionary byproduct, not the driving force that guides our actions and lives, contrary to what generations of philosophers have thought.


References

Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University

Press.

Bahrami, B., Lavie, N., Rees, G. (2007). Attentional load modulates responses of human primary

visual cortex to invisible stimuli. Current Biology 17, 509-513.

Bisiach E., Rusconi, M. L. (1991). Remission of somatoparaphrenic delusion through vestibular

stimulation. Neuropsychologia 29 (10): 1029-1031.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain

Sciences 18, 227-287.

Bonneh, Y., Cooperman, A., Sagi, D. (2001). Motion-induced blindness in normal observers.

Nature 411 (6839): 798-801.

Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness

Studies 2, 200-219.

Chalmers, D. (1996). The Conscious Mind: In Search of a Theory of Conscious Experience.

Oxford, UK: Oxford University Press.

Churchland, P. (1984). Matter and Consciousness: A Contemporary Introduction to the

Philosophy of Mind. Cambridge, MA: MIT Press.

Dalton, J. W. (1997). The unfinished theatre. Journal of Consciousness Studies 4 (4): 316.

de Groot, A., Gobet, F. (1996). Perception and Memory in Chess. Assen, Netherlands: Van

Gorcum.

Descartes, R. (1641). Meditations on First Philosophy. In Stephen Cahn (8th ed.), Classics of

Western Philosophy. Indianapolis, IN: Hackett Publishing.


Dehaene, S., Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic

evidence and a workspace framework. Cognition 79 (1-2): 1-37.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering how the Brain Codes our

Thoughts. New York, NY: Penguin.

Doricchi, F., Galati, G. (2000). Implicit semantic evaluation of object symmetry and

contralesional visual denial in a case of left unilateral neglect with damage of the dorsal

paraventricular white matter. Cortex 36 (3): 337-350.

Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.

Dretske, F. (1997). What good is consciousness? Canadian Journal of Philosophy 27 (1): 1-15.

Gennaro, R. (2004). Higher-Order Theories of Consciousness. Amsterdam: John Benjamins.

Graziano, M. (2013). Consciousness and the Social Brain. Oxford, UK: Oxford University Press.

Greenwald, A. G., Draine, S. C., Abrams, R. L. (1996). Three cognitive markers of unconscious

semantic activation. Science 273 (5282): 1699-1702.

Hassin, R., Bargh, J., Engell, A., McCulloch, K. (2009). Implicit working memory.

Consciousness and Cognition 18 (3): 665-678.

Hassin, R. (2011). Consciousness might still be in business, but not in this business.

Consciousness and Cognition 20, 299-311.

Jackendoff, R. (1987). Consciousness and the Computational Mind. Cambridge, MA: MIT

Press.

Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly 32, 127-136.

Kentridge, R. W., Heywood, C. A., Weiskrantz, L. (1999). Attention without awareness in

blindsight. Proceedings of the Royal Society B 266, 1805-1811.

Kentridge, R. W., Heywood, C. A., Weiskrantz, L. (2004). Spatial attention speeds

discrimination without awareness in blindsight. Neuropsychologia 42, 831-835.


Kentridge, R. W., Nijboer, T. C. W., Heywood, C. A. (2008). Attended but unseen: visual

attention is not sufficient for visual awareness. Neuropsychologia 46, 864-869.

Koivisto, M., Hyona, J., Revonsuo, A. (2004). The effects of eye movements, spatial attention,

and stimulus features on inattentional blindness. Vision Research 44 (27): 3211-3221.

Kripke, S. (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.

Libet, B., Gleason, C., Wright, E., Pearl, D. (1983). Time of conscious intention to act in relation

to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely

voluntary act. Brain 106 (3): 623-642.

Lau, H., Rosenthal, D. (2011). Empirical support for higher-order theories of conscious

awareness. Trends in Cognitive Science 15 (8): 365-373.

Luck, S., Woodman, G., Vogel, E. (2000). Event-related potential studies of attention. Trends in

Cognitive Science 4 (11): 432-440.

Lycan, W. (1996). Consciousness and Experience. Cambridge, MA: MIT Press.

Lycan, W. (2015). Representational theories of consciousness. The Stanford Encyclopedia of

Philosophy, accessed at https://plato.stanford.edu/entries/consciousness-representational.

Lyyra, P. (2010). Higher-order theories of consciousness: An appraisal and application.

Jyväskylä Studies in Education, Psychology and Social Research 387.

Mack, A., Rock, I. (1998). Inattentional Blindness. Cambridge, MA: MIT Press.

Marcel, A. (1983). Conscious and unconscious perception: experiments on visual masking and

word recognition. Cognitive Psychology 15, 197-237.

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and

Processing of Visual Information. San Francisco, CA: W. H. Freeman.

Marshall, J., Halligan, P. (1988). Blindsight and insight in visuo-spatial neglect. Nature 336

(6201): 766-767.

Mitchell, J., Stoner, G., Reynolds, J. (2004). Object-based attention determines dominance in

binocular rivalry. Nature 429 (6990): 410-413.

Mylopoulos, M. (2015). Conscious intention: a challenge for AIR theory. Frontiers in

Psychology 6, 1-3.

Naccache, L., Dehaene, S. (2001). The priming method: imaging unconscious repetition priming

reveals an abstract representation of number in the parietal lobes. Cerebral Cortex 11

(10): 966-974.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review 83, 435-451.

Norman, L., Tokarev, A. (2014) Spatial attention does not modulate holistic face processing,

even when multiple faces are present. Perception 43, 1341-1352.

Pessiglione, M., Petrovic, P., Daunizeau, J., Palminteri, S., Dolan, R., Frith, C. (2008). Subliminal

instrumental conditioning demonstrated in the brain. Neuron 59 (4): 561-567.

Pessiglione, M., Schmidt, L., Draganski, B., Kalisch, R., Lau, H., Dolan, R., Frith, C. (2007).

How the brain translates money into force: a neuroimaging study of subliminal

motivation. Science 316 (5826): 904-906.

Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology 32:

3-25.

Prinz, J. (2012). The Conscious Brain: How Attention Engenders Consciousness. Oxford, UK:

Oxford University Press.

Rosenthal, D. (2002a). How many kinds of consciousness? Consciousness and Cognition 11,

653-665.

Rosenthal, D. (2002b). Explaining consciousness. In David J. Chalmers (ed.), Philosophy of

Mind: Classical and Contemporary Readings. Oxford, UK: Oxford University Press, 109-131.

Rosenthal, D. (2005). Consciousness and Mind. Oxford, UK: Clarendon Press.

Rosenthal, D. (2008). Consciousness and its function. Neuropsychologia 46: 829-840.

Rock, I., Linnett, C. M., Grant, P., Mack, A. (1992). Perception without attention: results of a

new method. Cognitive Psychology 24 (4): 502-534.

Sahraie, A., Weiskrantz, L., Barbur, J. L., Simmons, A., Williams, S. C. R., Brammer, M. J.

(1997). Pattern of neuronal activity associated with conscious and unconscious

processing of visual signals. Proceedings of the National Academy of Sciences 94, 9406-9411.

Schurger, A., Cowey, A., Tallon-Baudry, C. (2006). Induced gamma-band oscillations correlate

with awareness in hemianopic patient G.Y. Neuropsychologia 44, 1796-1803.

Schurger, A., Cowey, A., Cohen, J., Treisman, A. Tallon-Baudry, C. (2008). Distinct and

independent correlates of attention and awareness in a hemianopic patient.

Neuropsychologia 46, 2189-2197.

Simons, D., Chabris, C., (1999). Gorillas in our midst: sustained inattentional blindness for

dynamic events. Perception 28 (9): 1059-1074.

Soto, D., Mäntylä T., Silvanto, J. (2011) Working memory without consciousness. Current

Biology 21 (22): R912-R913.

Sumner, P., Tsai, P., Yu, K., Nachev, P. (2006). Attentional modulation of sensorimotor

processes in the absence of perceptual awareness. Proceedings of the National Academy

of Sciences of the United States of America 103 (27): 10520-10525.

Treisman, A., Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology

12: 97-136.


van Boxtel, J. J. A., Tsuchiya, N., Koch, C. (2010). Opposing effects of attention and

consciousness on afterimages. PNAS 107, 8883-8888.

van Gaal, S., Naccache, L., Meuwese, J. D. I., van Loon, A. M., Leighton, A., Cohen, L.,

Dehaene, S. (2014). Can the meaning of multiple unconscious words be integrated

unconsciously? Philosophical Transactions of the Royal Society B, 369: 1-12.

van Opstal, F., de Lange, F., Dehaene, S. (2011). Rapid parallel semantic processing of numbers

without awareness. Cognition 120 (1): 136-147.

Weiskrantz, L., Barbur, J. L., Sahraie, A. (1995). Parameters affecting conscious versus

unconscious visual discrimination with damage to the visual cortex. The National

Academy of Sciences 92, 6122-6126.

Williams, G. (1963). Highway hypnosis. International Journal of Clinical and Experimental

Hypnosis 11 (3): 143-151.

Wilson, J. (2010). What is Hume’s Dictum, and Why Believe It? Philosophy and

Phenomenological Research, 80 (3): 595-637.
