
Consciousness, Metacognition, & Perceptual Reality Monitoring

Hakwan Lau1,2,3,4 [email protected]

1. Department of Psychology, University of California Los Angeles, USA
2. Brain Research Institute, University of California Los Angeles, USA
3. Department of Psychology, The University of Hong Kong, Hong Kong
4. State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong

I introduce an empirically-grounded version of a higher-order theory of conscious perception. Traditionally, theories of consciousness either focus on the global availability of conscious information, or take conscious phenomenology as a brute fact due to some biological or basic representational properties. Here I argue instead that the key to characterizing consciousness lies in its connections to belief formation and epistemic justification on a subjective level1.

An empirical link
Neuroimaging experiments have shown that, at least at the level of functional anatomy, the neural mechanisms for conscious perception and sensory metacognition are similar (Lau and Rosenthal 2011). By sensory metacognition we mean the monitoring of the quality or reliability of internal perceptual signals (Morales, Lau, and Fleming 2018). These two mechanisms both involve neural activity in the prefrontal and parietal cortices, outside of primary sensory regions.

In terms of the cognitive mechanisms and behavior, there also seems to be some connection between the two. When people are asked to detect stimuli in the periphery, they are more likely to say that they see something than when detecting at the fovea (Knotts et al. 2018). This is true even when they are not actually more accurate in their decisions in the periphery compared to detection around the fovea; they just make more false alarms in the periphery. One explanation is that we incorrectly represent the reliability of our internal perceptual signals, i.e. a failure of metacognition (Odegaard et al. 2018). Accordingly, we subjectively feel that we consciously see colorful details in the periphery, even though color sensitivity is poor in the periphery from as early as the retina. The ways in which our conscious perception may be unrealistic (or inflated at times, Knotts et al. 2018) seem similar and related to the ways in which our metacognition behaves (Miyoshi and Lau, forthcoming).
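To see how a metacognitive misrepresentation of reliability could produce this pattern, here is a minimal signal-detection sketch (a toy illustration of the idea, not the actual model or analysis used in the studies cited above). The assumption built into the toy is that the monitoring stage sets its detection criterion as if peripheral signals were as reliable as foveal ones, when in fact they are noisier.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d_prime = 1.5  # true sensitivity (in noise-SD units), matched for fovea and periphery

def detect(noise_sd_true, noise_sd_assumed):
    """Yes/no detection where the criterion is set from a possibly
    wrong assumption about how noisy the internal signal is."""
    absent  = rng.normal(0.0, noise_sd_true, n)                    # stimulus-absent trials
    present = rng.normal(d_prime * noise_sd_true, noise_sd_true, n)  # stimulus-present trials
    # This criterion would be unbiased *if* the assumed noise level were correct:
    criterion = 0.5 * d_prime * noise_sd_assumed
    hits         = np.mean(present > criterion)
    false_alarms = np.mean(absent  > criterion)
    accuracy     = 0.5 * (hits + (1 - false_alarms))
    return hits, false_alarms, accuracy

# Fovea: the monitoring stage knows the true noise level.
print("fovea    ", detect(noise_sd_true=1.0, noise_sd_assumed=1.0))
# Periphery: signals are noisier, but the monitoring stage treats them
# as if they were as reliable as foveal ones -> criterion is too liberal.
print("periphery", detect(noise_sd_true=2.0, noise_sd_assumed=1.0))
```

In this simulation overall accuracy is roughly matched across the two conditions, but the false alarm rate is markedly higher in the 'periphery', mirroring the empirical pattern described above.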

These considerations have led some to think that consciousness and metacognition may be intrinsically related (Brown, Lau, and LeDoux, forthcoming). But it would be too strong to think they are one and the same; it is implausible that consciousness is necessary for any kind of sensory metacognition, as there may be unconscious mechanisms for perceptual monitoring too (Cortese, Lau, and Kawato, forthcoming). So what exactly is the relationship between consciousness and metacognition? Here I offer a theory of conscious perception to account for this link.

1 A short essay related to this article is published in Aeon Magazine: https://aeon.co/ideas/is-consciousness-a-battle-between-your-beliefs-and-

Some 'magical' intuitions
Let us start with some intuitive examples. Intuitions are rarely universal. So they aren't meant to be required assumptions for the arguments below. But they can sometimes help motivate theories.

On the internet there are videos2 showing that monkeys and apes seem capable of appreciating stage magic, i.e. tricks based on sleight of hand. Upon seeing these tricks, they express bewilderment and amusement, just like we do. Intuitively, it seems hard to imagine that these animals aren't seeing the tricks consciously.

To put it another way, I suggest there can never be such a thing as unconscious magic. It makes no sense for a stage magician to specialize in magic tricks that are registered unconsciously in the brains of the audience, but not consciously seen. Even if people giggle uncontrollably upon being presented with such unconscious tricks, this won't be the same thing as genuine magic appreciation.

The fact that stage magic, or at least some specific forms of it, may only work consciously probably has something to do with the very nature of conscious perception. Specifically, conscious seeing tends to lead to certain corresponding beliefs about what we see. There are three important features of these beliefs: i) they are actual personal-level beliefs, not just some representations in the brain unendorsed by the subject as an agent; and yet ii) they tend to happen automatically, without having the agent engage in some effortful cognitive inference; iii) they feel subjectively justified, in the sense that it seems, from the agent's point of view at least, very reasonable to believe them. These features are true only if the relevant seeing is conscious; unconscious sensing doesn't give rise to beliefs with these features, if it gives rise to beliefs at all.

Together these features explain why seeing certain types of stage magic is so amusing. It is so because when you consciously see a magic trick, it tends to cause beliefs (e.g. a person’s turning into a cat) that conflict with other beliefs you have as a rational person (e.g. people don’t ever spontaneously turn into cats). Usually when you entertain such weird beliefs that are incompatible with everything else you believe, you reason your way out, so ultimately you may reject the weird beliefs. But here the beliefs occur automatically, beyond your volitional control. Also, they present themselves to you as justified, as if they are the most reasonable things to believe. That’s why it is so bewildering to see magic, and sometimes you can even see some of the same tricks over and over again without getting tired of them; all of this happens because of what conscious seeing constitutively involves.

Contrast this with cognitively realizing something bizarre, such as that today is Monday but yesterday you were at work, although you never work on Sundays. Upon having such a weird realization, one would reorganize one's thoughts and check them against other beliefs. If we see this unexpected situation happen over and over again, we would finally come to some resolution, such as accepting that we must have been mistaken that today is Monday after all. But certain types of magic tricks, presented consciously to us, remain as visually entertaining as ever (when done properly), even when we fully expect to see something bewildering. There is something about conscious perception that seems to be epistemically stubborn, compared to other usual modes of acquiring evidence for reasoning and behavior.

2 Such as: https://www.youtube.com/watch?v=FIxYCDbRGJc

Consciousness as perceptual reality monitoring
A certain version of a higher-order theory of conscious perception may account for the above three features. According to higher-order theories, having a (first-order) perceptual representation on its own is not sufficient for conscious perception. In addition, one would need to have a relevant higher-order representation. There is some debate among higher-order theorists regarding what should be the content and nature of the higher-order representation, leading to different versions of higher-order theories (Lau and Rosenthal 2011; Brown, Lau, and LeDoux, forthcoming).

Here I propose that, given some necessary basic background functioning of an agent capable of perception and cognition (including some degree of rational decision-making and inference), conscious perception in that agent occurs if there is a relevant higher-order representation with the content that a particular first-order perceptual representation is a reliable reflection of the external world right now. The occurrence of this higher-order representation gives rise to conscious experience with the perceptual content represented by the relevant first-order state.

The agent is typically not conscious of the content of this higher-order representation itself, but the representation is instantiated in the system in such a way as to allow relevant inferences to be drawn (automatically) and to be made available to the agent (on a personal level, in ways that make the inferences feel subjectively justified). To understand why this may be a useful and plausible mechanism to have, let us run it through some key cases. These cases will also help illustrate why in the above paragraph I wrote "if" rather than "if and only if": I specified one scenario in which conscious experience occurs, but there is also another scenario.

According to the theory, blindsight occurs when a first-order representation occurs without the corresponding higher-order representation (see the section on objections and replies below for more details on this point). That's why the perceptual capacity is there (due to the first-order representations), but the phenomenology of conscious perception is missing (Weiskrantz 1999).

In visual working memory (e.g. holding an image 'online' in one's mind for a few seconds), the corresponding early sensory (i.e. first-order) representations in the brain are activated during the memory delay (Harrison and Tong 2010). And yet, despite the activation of similar first-order representations, one does not mistakenly see the memorized image during the delay period, at least not in the normal way of consciously seeing, phenomenologically speaking. The content of working memory does not 'leak out' into the world. According to the theory, this is because the higher-order system (correctly) does not consider the relevant first-order representation to be a reliable reflection of the state of the world right now. Instead, the phenomenology (of imagery) is accounted for by the fact that the higher-order system considers the first-order representation to be a reliable internal representation generated by oneself.

Importantly, there are also cases where first-order representations occur without any conscious experience at all. For example, neurons in sensory areas show spontaneous activity. Literally, your cat-representing neurons are firing regularly now and then, on their own (Moutard, Dehaene, and Malach 2015). And yet you don't have any sense of seeing or imagining a cat nearly as often. The reason offered here is that your higher-order system correctly discards such spontaneous activity as noise. It does not represent anything that immediately and directly concerns our personal-level rational inferences.

But this higher-order system may also make errors, as in cases of dreams and hallucinations. Here similar first-order representations occur (Horikawa et al. 2013), but they aren't meant to be reliable reflections of the world right now. They are generated internally, not exactly as unstructured noise but also not under conscious, volitional control as in the case of imagery or working memory. And yet the higher-order system mistakenly considers these first-order representations to be reliable reflections of the world right now, which leads to the corresponding conscious experiences.

One can thus see how conscious perception and metacognition may be linked. Conscious perception is a matter of perceptual reality monitoring. Just as in reality monitoring in memory (Simons, Garrison, and Johnson 2017), where one distinguishes between the possible sources of a memory trace (e.g. was it something I saw a while ago, or something I imagined myself a while ago?), in perception we also need to know what is causing the relevant first-order perceptual representations occurring at the moment. To be precise, we as agents do not engage in this reality monitoring activity; our higher-order mechanism in the brain does it automatically for us at a subpersonal level. When a perceptual representation is deemed by the higher-order mechanism to be a reliable reflection of the world, rather than just noise, conscious perception occurs. When it is deemed by the same mechanism to be a reliable representation generated by oneself, the corresponding experience is distinctly different from normal perception. Under normal functioning, one can tell the difference without effort, because they feel so different. And when perceptual activity is deemed not to be a reliable or meaningful representation at all, no conscious experience occurs. Because the function of sensory metacognition is exactly to monitor the reliability of first-order perceptual representations, naturally, the mechanism for perceptual reality monitoring can be conveniently employed for this purpose as well.

With this view, the three features of beliefs due to conscious perception can be easily accounted for. If you have the higher-order representation that a certain perceptual state is likely a reliable reflection of the world right now, it would be natural for your cognitive system to form the corresponding beliefs automatically, as a matter of syllogistic inference (a certain metacognitive system X says P is likely reliable; X itself is generally reliable; therefore, I should believe P); the purpose of having such a perceptual reality monitoring system is exactly to allow the results of this kind of inference to be made available to the subject at an agent level. To the extent that your higher-order system is to be trusted (which one can learn over time), the fact that it says your perceptual state is reliable gives you a logical reason to think the corresponding beliefs are justified too.
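To make the proposed architecture concrete, here is a deliberately simplified sketch in Python. It is purely illustrative: the categories, threshold, and function names (e.g. higher_order_monitor, form_beliefs) are my own labels for exposition, not components of an existing computational model.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    WORLD = auto()   # reliable reflection of the world right now -> conscious percept
    SELF = auto()    # reliable self-generated representation -> imagery / working memory
    NOISE = auto()   # spontaneous activity -> no conscious experience

@dataclass
class FirstOrderState:
    content: str            # e.g. "a cat on the wall"
    strength: float         # how strong / structured the sensory activity is
    matches_top_down: bool  # was this activity driven by one's own imagery / memory?

def higher_order_monitor(state: FirstOrderState) -> Verdict:
    """Subpersonal 'reality monitoring': index the likely cause of a
    first-order state without re-representing its content."""
    if state.strength < 0.3:
        return Verdict.NOISE
    if state.matches_top_down:
        return Verdict.SELF
    return Verdict.WORLD

def form_beliefs(state: FirstOrderState, monitor_reliability: float = 0.95) -> list[str]:
    """Automatic, syllogism-like inference at the personal level:
    the monitor says the state reliably reflects the world; the monitor
    is itself generally reliable; therefore believe the content."""
    verdict = higher_order_monitor(state)
    if verdict is Verdict.WORLD and monitor_reliability > 0.5:
        return [f"There is {state.content} out there right now."]
    if verdict is Verdict.SELF:
        return [f"I am imagining / remembering {state.content}."]
    return []   # spontaneous noise leaves no personal-level trace

print(form_beliefs(FirstOrderState("a cat on the wall", strength=0.9, matches_top_down=False)))
print(form_beliefs(FirstOrderState("a cat on the wall", strength=0.9, matches_top_down=True)))
print(form_beliefs(FirstOrderState("a cat on the wall", strength=0.1, matches_top_down=False)))
```

The design point to note is that the monitor only indexes the likely source of the first-order state; the perceptual content itself is never duplicated at the higher-order level, which becomes relevant in the comparison with Rosenthal's view below.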

Trouble for first-order views?
The consideration of the above key cases presents some challenges for first-order theories of consciousness, i.e. views according to which consciousness is explained by the nature of the first-order representations alone. A first-order theorist may say that spontaneous or working memory-driven activations of first-order representations are probably only superficially similar to first-order representations in normal conscious perception. The same neurons may be involved, but the dynamics of such activity are likely different (Moutard, Dehaene, and Malach 2015), because in the case of spontaneous activity or working memory the activity is driven in a purely endogenous fashion, whereas in the normal perception case there is bottom-up input (which likely changes the dynamics of the relevant neural activity). So a first-order theorist may say this difference in the first-order representations itself may explain the difference in phenomenology between these cases.

However, in dreams there is also a lack of meaningful external input, at least in the visual domain. And yet there are often vivid conscious perceptual experiences. Of course, there may be other differences at the first-order level thus far unknown, but we already know that neural activity in the prefrontal and parietal cortices – where higher-order representations likely reside – can clearly distinguish between dreams and dreamless sleep.

Ultimately, I believe first-order theories can be rejected on empirical grounds by considering the case of normal perception alone - to the extent that first-order representations are believed to be representations in sensory cortices. Empirically we know that we need something more for conscious perception to occur. Although the issue remains debated, a strong case can already be made that conscious perception constitutively involves activity in the prefrontal cortex (Brown, Lau, and LeDoux, forthcoming; Lau and Rosenthal 2011). Should that turn out to be confirmed, this motivates a higher-order over a first-order account.

Below I will also offer an account, from an evolutionary perspective, as to why a higher-order mechanism needs to exist for agents with brains like ours. Given that, it is more plausible that consciousness exploits this mechanism than to stipulate that the dynamics of first-order representations somehow contribute directly, without a known mechanism.

Of course, one can also consider a single first-order representation to consist of both neuronal activity in the first-order cortex as well as activity in the prefrontal cortex. The former may determine the perceptual content, and the latter may determine which mode of representation is employed; together they may count as one representation, not a first-order plus a higher-order one. But this would just be a terminological issue, or at best a (tricky) matter of how we individuate representations. These views are not considered further here because they are not adopted by most contemporary philosophers who engage with the relevant empirical literature. Following standard neuroscience traditions, we will speak of neuronal representations in different cortices as having distinct content.

Some limitations of the higher-order view

So, instead, let us focus on other varieties of higher-order theories, which may help us sharpen and further clarify the theory proposed here. That is, would all the above have worked with just any higher-order theory?

Let us first consider Rosenthal's Higher-Order Thought theory (Rosenthal 2004), as there are objections to this prominent view, or at least to 'barefoot' versions of it, which apply also to other higher-order variants. According to Rosenthal's view, the relevant higher-order representation necessary for consciousness is akin to a thought that one is in a certain first-order state, e.g. "I am in a mental state of representing something red".

This kind of view faces one well-known challenge (Neander 1998): what happens when the perceptual content of the higher-order state mismatches the content of the first-order state? That is, what if your higher-order thought says you are in a mental state of representing something red, while your first-order state actually represents something green (or colorless)? This kind of issue does not come up in the same way in the reality monitoring view proposed here, because the relevant higher-order representation does not duplicate the first-order content. Instead, it just indexes which of a number of possible first-order states is a reliable reflection of the current world.

This is not to say this challenge in itself refutes the higher-order thought view. Rosenthal's reply is that in such cases, the conscious phenomenology would simply go with the higher-order content (Rosenthal 2011). But then, at least in the case of conscious perception, why not get rid of the first-order representation and let the higher-order representation do all the work? From a computational standpoint, having these two levels of overlapping content doesn't seem like a very efficient cognitive architecture. But I do not consider this point alone decisive; Mother Nature might well have done an inefficient job here and there.

A somewhat stronger argument is that empirically we know that the prefrontal and parietal cortices, where higher-order representations likely reside, are capacity-limited (Block 2007). They are too busy doing too many things so they can only process a few items reliably at once. Can they support the richness of our perceptual phenomenology? I think this is unlikely, but once again I do not consider this alone a decisive argument because there is currently some debate as to how rich our perceptual phenomenology really is (Knotts et al. 2018).

Fortunately these issues can be empirically resolved. Although we cannot do these experiments easily and decisively given today’s technology, the question is clear – can we decode and predict the content of phenomenology from prefrontal and parietal neuronal activity alone (Rosenthal’s prediction), or do we also need to include activity from the sensory cortices (my prediction)? Answering this question will decisively resolve the above issues, so they will be settled eventually.

More relevant to the present discussion is the consideration of how Rosenthal's theory may handle the key cases discussed above. In particular, in working memory one is aware of being in the relevant first-order state, as one is actively holding the memory content online during the delay. One certainly thinks one is actively representing the relevant content. Just why isn't there the phenomenology of normal conscious seeing? This is not to say we cannot add some extra components or qualifications for the higher-order representation to distinguish between one's being aware of representing something out in the world, versus representing something in one's own mind that isn't necessarily in the world. But ultimately, to go into such further specifications, one may well end up with a theory not much different from the one proposed here – neglecting for the moment the above issue of re-representing versus merely indexing the relevant first-order perceptual content. Let us consider how this exercise may go.

A higher-order belief view?
So the lesson from the consideration of the case of working memory seems to be that the mere thought of being in a certain first-order state isn't always sufficient for normal conscious perception to occur. Instead, something seems to be important regarding whether the relevant first-order state reflects the current state of the world or our ongoing mental activity (as in cases of working memory and imagery), so that we should update our beliefs accordingly.

One could try to build that element into the content of the higher-order representation, such that instead of being a mere thought that one is in a certain first-order state, the higher-order representation could be a belief about the state of the world itself, or a belief about one's inner mental activity. For example, according to this view a first-order representation of something red would not in itself lead to conscious experiences of redness; there needs to be a higher-order belief that the world has something red out there, or that one is actively imagining something red. This can distinguish normal perception from the working memory case, as in the latter case one only believes that there was something red a moment ago, and that one is holding a representation of something red in one's mind - but not out there in the world. Therefore, conscious experience of seeing red does not occur. The phenomenology of remembering or imagining redness would be different, because the corresponding higher-order belief is different.

But this kind of view – let us call it a Higher-Order Belief view – would run into trouble with a particular kind of hallucination. People sometimes ingest psychoactive drugs with full knowledge of their effects, e.g. for recreational purposes. Let's say a person hallucinates a cat on the wall after taking some psychoactive substances for fun. That person may not really believe that there is in fact a cat. Instead, the person may well be amused by such conscious experience of seeing the cat, while fully believing that there is no such cat in the world at all.

In other words, consciousness does not always go with beliefs (Pitcher 1971). To identify the two would be too strong. But to say a mere thought of being in a first-order perceptual state is sufficient for conscious perception would be too weak in terms of specifying the connection between consciousness and beliefs. What is needed is an intermediate kind of view, where conscious perception has a strong tendency to lead to beliefs, but this tendency can be legitimately overridden by other background beliefs.

The proposed view offers exactly this kind of solution. When your higher-order system says that your first-order representation of a cat is likely a correct reflection of the current world, absent other background knowledge, one's rational reasoning system naturally draws the inference and forms the belief that there is a cat. But if there is also the background knowledge that one is likely hallucinating, or that one's higher-order system is otherwise likely temporarily at fault, a rational reasoning system would of course refrain from forming such beliefs.

Davidson's puzzle & perceptual justification
If the key is just to find an intermediate position such that conscious perception tends to, but does not always, lead to the corresponding beliefs, one may ask whether a higher-order view such as the one proposed here is necessary. Why not say that conscious perception is but a perceptual process with the disposition to cause the relevant beliefs (Armstrong 2002/1968; Pitcher 1971)? Why not adopt a traditional functionalist strategy, and identify consciousness with the function of having the tendency to cause these beliefs?

What we need to account for here is a bit more than that. Recall the three features we discussed, concerning beliefs formed due to conscious perception. One of them is that such beliefs introspectively feel justified. Of course, one could in principle specify that the functional role of consciousness is exactly to have the tendency to generate beliefs that feel subjectively justified. But this would be a rather ad hoc way to let the so-called 'functional role' do all the work. This kind of strategy tends not to make for very useful theories. On the other hand, as explained above, the view proposed here can provide a more mechanistic and principled account as to why these beliefs may feel justified. The relevant higher-order representations, as well as the reliability of the higher-order mechanisms by which they are generated, together provide a logical basis for one to give a rational justification of the relevant perceptual beliefs.

This last point may benefit from considering the context of a well-known puzzle of perceptual justification (Davidson 2001/1986). In a much quoted passage, Donald Davidson wrote:

The relation between a sensation and a belief cannot be logical, since sensations are not beliefs or other propositional attitudes. What then is the relation? The answer is, I think, obvious: the relation is causal. Sensations cause some beliefs and in this sense are the basis or ground of those beliefs. But a causal explanation of a belief does not show how or why the belief is justified.

Since this was written, much has changed in the literature on perceptual justification. Today many authors would argue that perception can involve propositional content. But interestingly, the way 'sensation' was used above seems to correspond well with how we have been characterizing first-order perceptual representations so far. In today's neuroscience we speak of the firing of sensory neurons in, e.g., the visual cortex as representing objects and features. But representations of objects and features alone, like a picture of a unicorn on its own, are not statements that can turn out to be true or false. A picture of a unicorn can be demonstrably false if it is meant to depict a real creature in this world right now, that is, if we ascribe to it an assertoric attitude, by e.g. labeling it as such. On the other hand, if we point to the picture and say "this reflects my own imagination", this may well be a true claim.

Thus, although first-order perceptual representations themselves - as we have been characterizing them so far - cannot be true or false, and thus cannot be logically connected to beliefs nor provide the relevant (logical) justifications, such a connection is made possible by the corresponding higher-order representations. For example, the representation that "this particular first-order representation reflects the state of the world right now" itself has truth conditions, and can be logically connected to the relevant perceptual beliefs via the kind of syllogistic inferences described earlier.

This, I argue, is the essential epistemic role played by the higher-order representations for consciousness. To consciously see something is to see something as being out there in this world right now. To experience mental imagery is to be conscious of certain ongoing first-order mental activity. These allow us to form the relevant perceptual beliefs with justification. This is all done by way of having the corresponding higher-order representations, which are available for our syllogistic reasoning mechanisms.

In this sense, what I am defending is a specific and limited version of a higher-order theory. The relevant higher-order mechanism does not re-represent first-order content. All it does is decide whether one should ascribe to some (first-order) sensory representation an assertoric attitude, i.e. to decide that it legitimately reflects the current state of the world, or one's ongoing inner mental activity, thereby making it conscious in the relevant ways; or whether it is merely spontaneous noise, thereby undeserving of entering consciousness.

Evolutionary plausibility & insights from AI
Traditionally, some higher-order views have been criticized for being evolutionarily implausible (Carruthers 2000). Why would the conscious brain develop this 'inner sense organ' or monitoring system to sit on top of the already good perceptual machinery?

I have mentioned at the beginning of the paper that this may well be an empirical brute fact, that we do have a metacognitive system in the brain, specifically in the prefrontal and parietal regions, to monitor the reliability of internal perceptual signals.

However, some also believe that metacognitive signals, i.e. subjective confidence, are represented in the sensory cortices, on the first-order level (van Bergen et al. 2015). If this were true, it would call into question whether or why a separate metacognitive mechanism is necessary.

So instead of suggesting that an independent metacognitive mechanism already existed before it got ‘recycled’ for the purpose of perceptual reality monitoring, I hereby offer what I think is an alternative and possibly more plausible evolutionary scenario.

Contemporary work on artificial intelligence, i.e. 'deep learning' types of neural network modeling, provides clues as to how a somewhat biologically realistic system can develop perceptual capacities close to the human level (Sejnowski 2018). To achieve this, it is generally agreed that predictive coding, i.e. a mechanism of recognition or inference that involves testing hypotheses generated in a top-down fashion, is substantively useful if not utterly necessary (Huang and Rao 2011). However, how to train and develop such a predictive mechanism with a realistic amount of data has been a challenge. One recent breakthrough is an architecture called Generative Adversarial Networks (Goodfellow, Bengio, and Courville 2016), in which a generative network (capable of supporting predictive coding) competes with an adversary, the job of which is to scrutinize the difference between genuine perceptual samples (i.e. images of the actual world) and outputs of the generative network (sometimes called forgeries). This has been touted as "one of the coolest ideas" in deep learning in recent years, because having this adversarial mechanism as a competitor is of tremendous help in developing and training generative models to make good top-down predictions with relatively little data (LeCun, Bengio, and Hinton 2015).
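For readers less familiar with this scheme, here is a minimal, generic sketch of adversarial training (written in PyTorch as a textbook-style toy, not code from any of the cited works): a generator learns to produce samples from a toy one-dimensional 'world', while a discriminator (the adversary) learns to tell genuine samples from the generator's forgeries.

```python
import torch
import torch.nn as nn

# Toy 'world': genuine samples come from a 1-D Gaussian.
def real_samples(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # adversary
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the adversary to tell genuine samples (label 1) from forgeries (label 0).
    real = real_samples(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to produce forgeries that the adversary accepts as genuine.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 2.0
```

The relevant point for the present discussion is only the division of labor: one network generates candidate sensory content, and a separate mechanism judges whether that content came from the world or was generated internally.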

In artificial neural networks, so far the use of Generative Adversarial Networks has been a results-motivated engineering trick. But in the brain, we already know that top-down driven imagery (or working memory) and bottom-up perception recruit the same first-order mechanisms (Harrison and Tong 2010). It would make sense for the brain to distinguish between these causes of firing, in order to correctly infer what the firing means. If this in turn contributes to the evolutionary pressure to develop better predictive coding systems too - to allow an animal to imagine and memorize with better precision - the phylogenetic development of this adversarial mechanism would seem plausible.

From there, this system may in turn be recycled for sensory metacognition as well. If the higher-order system can distinguish between externally-triggered input and endogenous generations, the hypothesis is that it can also help to rule out cases of spontaneous noise. A loose analogy may be this: if we learn to distinguish the taste of red wine from white wine really well, presumably it becomes easy for us to tell wine from water as well.

In other words, consciousness might have emerged as a consequence of our brains’ needing to make sensory predictions, for the sake of improving the capacity of perception itself, which in turn also allows us to engage in top-down mental imagery and working memory, using the same perceptual machinery. Amid evolutionary pressure, a metacognitive / adversarial organ is thus developed, which categorically identifies the various possible causes of the firing of sensory neurons. When it is decided by this mechanism that some sensory activity likely represents the state of the current world in a truthful fashion, we consciously see things as such.

Some objections and replies: blindsight and synaesthesia
Let us now turn to some important objections, which can also help to further illustrate the theory.

Above I suggested that blindsight is due to a failure of forming the corresponding higher-order representations. But classical blindsight patients have lesions in the primary visual cortex (Weiskrantz 1999), wherein first-order representations presumably reside. Does this not suggest that it is a disturbance on a first-order rather than a higher-order level?

The reply is: it is true that classical blindsight patients have lesions in the primary visual cortex. But neuroimaging research has generally shown that blindsight ability is reflected by activity in other extrastriate visual areas. Why does such activity support behavior but not lead to conscious experience? It was found that, comparing blindsight with normal perception at matched visual discrimination performance, there was a difference in brain activity in the prefrontal and parietal cortices (Persaud et al. 2011), where higher-order representations likely reside.

A related challenge concerns learning in blindsight. Imagine there is a group of blindsight subjects who have lost their conscious visual phenomenology, with their ability to perform certain visual tasks preserved. Over time, if we give these subjects correct feedback on their performance, they will learn that they can perform the tasks correctly. From there, after each decision, they may develop the (higher-order) realization that their relevant (first-order) sensory activity must be truthfully reflecting some aspects of the state of the present world. Does this mean that they will learn to consciously see? There have been no reported cases of their regaining visual consciousness through this kind of simple learning.

The answer would be no, because even if they learn to have such a realization, it does not come about as a constitutive part of the perceptual process, which is important for accounting for how the formation of perceptual beliefs is automatic. There is a key difference here: if one learns that one is good at performing visual tasks, and thereby infers that one's own sensory activity is truthfully reflecting the world, one can also easily unlearn that upon being told that one's ability is gone. Learning and inferring at this higher cognitive level is subject to the usual personal-level scrutiny: all background beliefs are relevant, one can change one's mind, reason one's way out as different beliefs come into conflict, and so on. But according to the view proposed here, the relevant higher-order representation is itself generated by a subpersonal-level, automatic mechanism. This is why we can account for the fact that certain magic tricks can be appreciated over and over again without losing their amusement value.

Another objection comes from the case of synaesthesia (Deroy and Spence 2016), which we can consider to be somewhat the reverse scenario. Synaesthetes generally know that synaesthetic experiences aren't truthful reflections of the world, and have long stopped having the tendency to form such beliefs. This seems to contradict the view proposed here.

I insist that synaesthetes are still constantly presented with the temptation to believe their synaesthetic experiences; it is only that this temptation is overridden by the strong background belief about their condition, and by an abundance of training to ignore it. Although this may seem unconvincingly question-begging from the outset, it actually fits well with how one would characterize or report synaesthetic experiences, e.g. "it looks as though all number 3s are yellow". Unlike other beliefs and thoughts, which one can eventually reason one's way out of and get rid of accordingly, synaesthetic experiences remain sources of some kind of conflict with reality. This is because having the tendency to cause the unrealistic beliefs remains a constitutive property of the conscious experience.

Cognitive phenomenology, emotions, & agency

The theory proposed here is one about conscious perception primarily. But the idea is that conscious experience happens whenever certain first-order activity is considered by a higher-order system as not noise. In the normal case of perception, this is because the higher-order system considers it a truthful reflection of the world right now. But conscious experience also occurs when it is considered to be a reliable representation generated by oneself. Just why do the two cases feel so different?

The answer I give here is limited. I will not try to say here why they are different in their specific ways, but will just point out that conscious perception and imagery have to be different, because when the brain is functioning normally, they are distinguished phenomenologically. Unless one is hallucinating or suffering from psychosis, we distinguish the two cases with ease, based on how they feel. And yet these experiences (e.g. imagining or remembering a cat) are somewhat related to the conscious experience of seeing a cat, because of the common or similar content on the first-order level.

This mode of representation associated with imagery and working memory may be what characterizes cognitive phenomenology (Brown and Mandik 2012), that is, the experiences associated with thinking about a cat, or just having other thoughts such as "2+2=4". When we have these experiences, it does not feel as though something is happening in the world right now. Rather, it is about what goes on within one's mind. As to why such cognitive phenomenology may be 'thinner' as compared to normal perception, this may partly be explained by the differences in content on the first-order level. But even for the same perceptual content, imagery also seems less vivid compared to normal perception, in most people. A prediction is that, for individuals whose imagery is rich and vivid, it may be because their higher-order system does not distinguish between the two cases as much as it does in others. Accordingly, one may expect that for such individuals, metacognition may be less efficient, because metacognition depends on the same mechanism as perceptual reality monitoring.

One may also wonder whether the account proposed here can accommodate other conscious experiences, such as emotions. As in the case of cognitive phenomenology, I can only provide a preliminary sketch here: in emotional experiences, there are likewise first-order representations common to cases where one is thinking about others' emotions and cases where one is going through the same emotions oneself. Take for example the case of fear. Similar neural activity may be involved in the amygdala when one is facing a threat, or thinks that another is being threatened (Grabenhorst et al. 2019). The relevant higher-order system may be needed to determine whether such a first-order representation is triggered by the circumstances one is facing, or whether one is only exercising the same first-order representation to think about another individual's potential fear. The proposal is that the conscious experience of fear (or any other emotion) only occurs when the relevant higher-order system deems the first-order emotional representation as applicable to oneself (LeDoux 2019). The corresponding, subjectively justified belief that one would form would be that one is afraid.

Likewise, our first-order motor representations for one's own actions are also exercised when one imagines actions, or when one is observing or imitating actions in others (LeDoux 2019; Iacoboni 2009). The proposal is that one experiences intention first-hand only when the relevant higher-order system deems such representations as being applicable to oneself right now - that is, when one is in fact generating these motor commands to cause certain movements. One thereby forms the corresponding beliefs that one is intending to act in a certain way. The potential functional consequences of having these beliefs are discussed elsewhere (Liu & Lau, forthcoming).

The knowledge argument
So above I have given a theory of conscious perception, and a sketch of how this may extend to other conscious experiences. For a general theory of consciousness, it may be a standard requirement that it addresses certain classic paradoxes.

One such puzzle is the well-known knowledge argument (Jackson 2007). In a variant of this famous puzzle, a talented scientist was born without the sense of smell. Let's call this scientist Frank. Frank studied diligently, and understood all there is to know about the chemistry of stinky tofu, a snack that certain Asian populations enjoy. He went so far as to study the neuroscience behind it too, including the physiology of the olfactory and gustatory system, individual differences, genetic variations, etc. So he understood why certain individuals would be addicted to stinky tofu, while others may absolutely hate it. And yet, he didn't know what stinky tofu smells like, from a subjective point of view. One day, he invented a treatment to restore one's olfaction. After applying it to himself, he regained his sense of smell. Upon encountering stinky tofu, he realized: Ah, this is what stinky tofu smells like. It is truly disgusting. This is what all this fuss is about!

The puzzle is, does he gain some new knowledge upon having this conscious experience? It certainly seems like he learned something new. And yet he had known all there was to know from an objective and scientific perspective. So in a sense, it must mean that having the conscious experience first hand involves having some new information. So consciousness involves new knowledge that is not exhausted by any amount of textbook studying - or so the argument goes.

To address the puzzle, a theory of consciousness must be able to specify what sort of new information Frank gained when smelling stinky tofu for the first time, that was at the same time not expressible in all the textbooks that Frank had already studied. Perhaps this means consciousness outstrips representational knowledge, and thereby any representational understanding of the mind?

The answer based on the current proposal is this: upon experiencing the smell of stinky tofu, Frank learned to recall and imagine this same smell at will. From there, a part of his brain - namely, the higher-order system - had to learn to distinguish the self-generation of this representation from when it is externally triggered. This is self-knowledge, about how one's olfactory neural code behaves, learned at a subpersonal level. Unlike textbook knowledge, this is not something you can unlearn or update by reading other books. This is not knowledge one typically acquires at the rational, personal level. Once it is in your system, you can't reason it away. This is what Frank's new conscious experience involves.

In closing: global availability and other theories
It is often suggested that one key feature of conscious perception is that the perceptual information is globally available to one's central cognitive system, such that different subsystems (e.g. memory, verbal report, motor control) can all access it (Dehaene, Lau, and Kouider 2017). According to some views, there is nothing about the phenomenology of conscious perception that isn't exhausted by this fact. But these views are problematic in light of the key cases considered here. In particular, when holding an image in working memory there is perfectly good global cognitive access to that representation - this is precisely why we actively hold it 'online' in our mind. But it is quite unlike seeing something out there in the world. It is nothing like what we vividly experience in dreams, where the neural activity is also generated endogenously, lacking external input. My point here has been to argue that the critical factor that tracks this phenomenology of normal conscious perception - what I take to be central to the challenge of understanding consciousness - is the assertoric attitude ascribed to the sensory representations. In dreams we see things as if they are out there right now; in working memory we don't.

Besides proposing an empirically plausible mechanism for how this may work, this has been largely an exercise in pinning down the necessary and sufficient conditions for conscious perception. I have not provided an explanation on a conceptual level as to why conscious perception comes with this assertoric force. I have not given reasons for us to think that whenever a percept is taken to reflect the state of the world, there should necessarily be something it is like to have such a percept. I have simply stated that this is likely how it is.

Of course, this much sought-after explanation remains the Holy Grail. But to the extent that we recognize the daunting challenge of solving this Hard Problem, which arguably all current theories fail to meet, a useful approach may be to consider its easier variants. In the interest of making progress, we would do well to consider challenges for which some theories would fail while others succeed. This way we can arbitrate between them.

Consider what I call here the Hard Enough Problem: for any theory that says X is the set of necessary and sufficient conditions for consciousness, we find a maximally simple creature that satisfies X, and challenge the theorist to say whether such a creature is indeed conscious. For instance, it has been claimed that the Integrated Information Theory can explain consciousness in a principled way (Bayne 2018). But according to the theory, an inactive set of (XOR) logic gates connected as a grid would be considered conscious, even if they are not receiving any input to be computed meaningfully. The proponents of the theory suggested we should indeed consider these gates to be conscious3.

So as one can see, the challenge is one of daring to bite the bullet, to admit how implausible a theory is when these extreme cases are considered. In other words, that is to see how ridiculous one would look when these cases are run through, if one wants to maintain one’s theoretical position.

3 See https://www.scottaaronson.com/blog/?p=1799

How does the view proposed here fare? According to the theory, apes are probably conscious, as they seem capable of appreciating stage magic. For other animals, this is less clear4. But to the extent that certain animals have the capacity for such genuine appreciation, this probably means that their perceptual processes have the automatic tendency to lead to beliefs that feel justified, even when things appear to contradict what one otherwise expects. In general, animals in which the same sensory neurons may fire due to both external input and endogenous generation will likely have to distinguish between these cases in order to behave reasonably. To do so they will probably need mechanisms for ascribing assertoric attitudes to sensory activity deemed likely to be reflecting the state of the present world, so that the relevant perceptual representations can influence their rational behavior appropriately. According to the proposed view, such animals, however small and unsophisticated, are all conscious.

Simple robots that just pick up sensory information and act accordingly, on the other hand, may not be conscious. This depends on the exact computational architecture. If perceptual information is simply integrated with other sources of information in an optimal Bayesian fashion, for example, then the theory proposed here would not predict that such a robot is conscious. However intelligent they may look and behave, such robots are just like zombies.

If, on the other hand, the system has to make categorical judgements as to whether an internal perceptual signal likely reflects the current state of the world (rather than being generated internally, or being just noise), this may be a different story. Imagine that the same perceptual machinery can be exercised by different possible inputs, some internal and some external. Once it is decided that a certain internal signal is legitimately reflecting the present world, the relevant percept may acquire a certain stubborn assertoric status, having the tendency to make an impact on a central reasoning system even when other evidence points otherwise. For such a robot, perceiving would be closely tied to believing. Pain doesn't just represent bodily damage; it leads to the inescapable tendency to find it reasonable to believe that there is bodily damage right now, even when circumstantial evidence clearly suggests otherwise. According to the view defended here, such a robot is conscious, however mechanical it may look.
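The contrast between the two robot architectures can be made concrete with a crude sketch (thresholds, names, and numbers are invented for illustration only): the first robot merely folds sensory evidence into a posterior probability, whereas the second first makes a categorical judgement about the source of its internal signal, and the resulting percept keeps urging the corresponding belief even when background knowledge overrides it.

```python
def bayesian_robot(prior_damage: float, likelihood_ratio: float) -> float:
    """Sensory evidence is merged smoothly with all other information;
    there is no separate, categorical 'this is really out there' step."""
    odds = prior_damage / (1 - prior_damage) * likelihood_ratio
    return odds / (1 + odds)          # posterior probability of bodily damage

def reality_monitoring_robot(signal: float, background_says_false_alarm: bool) -> dict:
    """A higher-order stage first makes a categorical call about the signal's
    source; a 'world' verdict yields a stubborn percept that keeps urging the
    belief 'there is bodily damage right now', which background knowledge can
    override but cannot silence."""
    percept = "bodily damage, right now" if signal > 0.6 else None  # categorical commitment
    urged_belief = percept is not None                              # automatic, feels justified
    endorsed_belief = urged_belief and not background_says_false_alarm
    return {"percept": percept,
            "urge_to_believe": urged_belief,
            "endorsed_belief": endorsed_belief}

print(bayesian_robot(prior_damage=0.01, likelihood_ratio=5.0))
print(reality_monitoring_robot(signal=0.9, background_says_false_alarm=True))
```

Only the second robot exhibits the epistemically stubborn, belief-urging character that, on the present view, goes with conscious perception.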

I have no confidence in convincing all my critics that the cases above are all plausible by some absolute standard. This may depend on the intuitions of the individual. But perhaps it would suffice for now if some of you agree that they are relatively plausible compared to the predictions made by some other theories, e.g. those inactive logic gates mentioned above.

4 See this for a useful discussion: https://blogs.scientificamerican.com/illusion-chasers/did-the-baboon-feel-the-magic/?redirect=1

Acknowledgements
Earlier versions of this paper have been presented at seminars at CUNY (hosted by David Rosenthal), Stanford (hosted by Paul Skokowski), the University of Hong Kong (hosted by Joe Lau), and NYU (hosted by Dave Chalmers), at the 2017 meeting of the Society for Philosophy and Psychology in Baltimore, as well as the 2018 meeting of the Association for the Scientific Study of Consciousness in Krakow, Poland. I have also benefited from feedback from a graduate class at UCLA Psychology on this topic in 2017. I thank Matthias Michel, Jorge Morales, Declan Smithies, Bryce Huebner, Susanna Siegel, Wai-hung WONG, Jackson Kernion, Gabe Greenberg, Sam Cumming, Alex Kiefer, Joe Lau, David Rosenthal, Richard Brown, Joe LeDoux, Jacob Berger, Adriana Renero, Cody Cushing, Eric Mandelbaum, and many others, for comments and extremely helpful discussion.

References

Armstrong, D. M. 2002. "A Materialist Theory of the Mind." https://doi.org/10.4324/9780203003237.

Bayne, Tim. 2018. "On the Axiomatic Foundations of the Integrated Information Theory of Consciousness." Neuroscience of Consciousness 2018 (1): niy007.

Bergen, Ruben S. van, Wei Ji Ma, Michael S. Pratte, and Janneke F. M. Jehee. 2015. "Sensory Uncertainty Decoded from Visual Cortex Predicts Behavior." Nature Neuroscience 18 (12): 1728–30.

Block, Ned. 2007. "Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience." The Behavioral and Brain Sciences 30 (5-6): 481–99; discussion 499–548.

Brown, Richard, Hakwan Lau, and Joseph LeDoux. Forthcoming. "The Misunderstood Higher-Order Approach to Consciousness." https://doi.org/10.31234/osf.io/xpy8h.

Brown, Richard, and Pete Mandik. 2012. "On Whether the Higher-Order Thought Theory of Consciousness Entails Cognitive Phenomenology, Or: What Is It Like to Think That One Thinks That P?" Philosophical Topics. https://doi.org/10.5840/philtopics201240211.

Carruthers, Peter. 2000. "Phenomenal Consciousness." https://doi.org/10.1017/cbo9780511487491.

Cortese, Aurelio, Hakwan Lau, and Mitsuo Kawato. Forthcoming. "Metacognition Facilitates the Exploitation of Unconscious Brain States." https://doi.org/10.1101/548941.

Davidson, Donald. 2001/1986. "A Coherence Theory of Truth and Knowledge." Subjective, Intersubjective, Objective. https://doi.org/10.1093/0198237537.003.0010.

Dehaene, Stanislas, Hakwan Lau, and Sid Kouider. 2017. "What Is Consciousness, and Could Machines Have It?" Science 358 (6362): 486–92.

Deroy, Ophelia, and Charles Spence. 2016. "Lessons of Synaesthesia for Consciousness: Learning from the Exception, rather than the General." Neuropsychologia 88 (July): 49–57.

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.

Grabenhorst, Fabian, Raymundo Báez-Mendoza, Wilfried Genest, Gustavo Deco, and Wolfram Schultz. 2019. "Primate Amygdala Neurons Simulate Decision Processes of Social Partners." Cell 177 (4): 986–98.e15.

Harrison, S. A., and F. Tong. 2010. "Decoding the Contents of Visual Working Memory from Activity in the Human Visual Cortex." Journal of Vision. https://doi.org/10.1167/9.8.582.

Horikawa, T., M. Tamaki, Y. Miyawaki, and Y. Kamitani. 2013. "Neural Decoding of Visual Imagery During Sleep." Science. https://doi.org/10.1126/science.1234330.

Huang, Yanping, and Rajesh P. N. Rao. 2011. "Predictive Coding." Wiley Interdisciplinary Reviews: Cognitive Science. https://doi.org/10.1002/wcs.142.

Iacoboni, Marco. 2009. "Imitation, Empathy, and Mirror Neurons." Annual Review of Psychology. https://doi.org/10.1146/annurev.psych.60.110707.163604.

Jackson, Frank. 2007. "The Knowledge Argument, Diaphanousness, Representationalism." Phenomenal Concepts and Phenomenal Knowledge. https://doi.org/10.1093/acprof:oso/9780195171655.003.0003.

Knotts, J. D., Brian Odegaard, Hakwan Lau, and David Rosenthal. 2018. "Subjective Inflation: Phenomenology's Get-Rich-Quick Scheme." Current Opinion in Psychology 29 (November): 49–55.

Lau, Hakwan, and David Rosenthal. 2011. "Empirical Support for Higher-Order Theories of Conscious Awareness." Trends in Cognitive Sciences 15 (8): 365–73.

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. "Deep Learning." Nature 521 (7553): 436–44.

LeDoux, Joseph. 2019. The Deep History of Ourselves: The Four-Billion-Year Story of How We Got Conscious Brains. Penguin.

Miyoshi, Kiyofumi, and Hakwan Lau. Forthcoming. "Realistic Variance Assumptions Favor Metacognitive Heuristics." https://doi.org/10.31234/osf.io/a63kr.

Morales, Jorge, Hakwan Lau, and Stephen M. Fleming. 2018. "Domain-General and Domain-Specific Patterns of Activity Supporting Metacognition in Human Prefrontal Cortex." The Journal of Neuroscience 38 (14): 3534–46.

Moutard, Clément, Stanislas Dehaene, and Rafael Malach. 2015. "Spontaneous Fluctuations and Non-Linear Ignitions: Two Dynamic Faces of Cortical Recurrent Loops." Neuron 88 (1): 194–206.

Neander, Karen. 1998. "The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness." Nous. https://doi.org/10.1111/0029-4624.32.s12.18.

Odegaard, Brian, Min Yu Chang, Hakwan Lau, and Sing-Hang Cheung. 2018. "Inflation versus Filling-in: Why We Feel We See More than We Actually Do in Peripheral Vision." Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 373 (1755). https://doi.org/10.1098/rstb.2017.0345.

Persaud, Navindra, Matthew Davidson, Brian Maniscalco, Dean Mobbs, Richard E. Passingham, Alan Cowey, and Hakwan Lau. 2011. "Awareness-Related Activity in Prefrontal and Parietal Cortices in Blindsight Reflects More than Superior Visual Performance." NeuroImage 58 (2): 605–11.

Pitcher, George. 1971. A Theory of Perception.

Rosenthal, D. 2011. "Exaggerated Reports: Reply to Block." Analysis. https://doi.org/10.1093/analys/anr039.

Rosenthal, David M. 2004. "Varieties of Higher-Order Theory." In Higher-Order Theories of Consciousness. https://doi.org/10.1075/aicr.56.04ros.

Sejnowski, Terrence J. 2018. The Deep Learning Revolution. MIT Press.

Simons, Jon S., Jane R. Garrison, and Marcia K. Johnson. 2017. "Brain Mechanisms of Reality Monitoring." Trends in Cognitive Sciences 21 (6): 462–73.

Weiskrantz, Lawrence. 1999. "Consciousness Lost and Found." https://doi.org/10.1093/acprof:oso/9780198524588.001.0001.
