AI and Consciousness - Interalia Magazine
https://www.interaliamag.org/interviews/keith-frankish/

Keith Frankish is a philosopher and writer. He is an Honorary Reader in Philosophy at the University of Sheffield, a Visiting Research Fellow (formerly Senior Lecturer) at The Open University, and an Adjunct Professor with the Brain and Mind Program in Neurosciences at the University of Crete. His interests lie mainly in philosophy of mind, and he is well known for defending an illusionist view of phenomenal consciousness and a two-level theory of the human mind. In this exclusive interview he discusses his ideas on the relationship between Artificial Intelligence and Consciousness.

Keith Frankish ( https://www.interaliamag.org/author/keithfrankish/ ), February 2018, in Interviews ( https://www.interaliamag.org/category/interviews/ )

Richard Bright: Can we begin by you saying something about your background?

Keith Frankish: I trained as a philosopher of mind, but I think of myself as a cognitive scientist — as someone bringing his particular skills to the cross-disciplinary enterprise of understanding the human mind. Most of my research has been concerned in some way with the conscious mind — with conscious belief and reasoning on the one hand and with conscious experience on the other. I might summarise my views by saying that the conscious mind is less fundamental than we suppose. The conscious mind is very important, of course, and the source of many of our uniquely human abilities, but I see it as a fragile superstructure, shaped by culture and heavily dependent on nonconscious processes. The nonconscious mind is the engine room of cognition. It’s this perspective that I want to bring to thinking about AI.
When we wonder what artificial minds might be like, we naturally think about artificial versions of our own conscious minds, but I think that distorts our view. Artificial intelligences may be very different.

RB: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity (and maybe difficult to answer?), but how would you define intelligence?

KF: I’m inclined to adopt a minimal definition of intelligence as a problem-solving capacity — a capacity to respond to stimuli in ways that further some purpose or task. Even plants have intelligence in this sense. They have been designed by natural selection to perform certain tasks, such as maintaining the levels of vital nutrients, and they respond to stimuli in ways that help them achieve these tasks, such as by moving their leaves to face the sun. Other intelligent systems perform more demanding tasks, such as navigating around an environment, or recognising faces, and they can learn from experience and adapt their responses accordingly. I think that our minds and the minds of other animals are largely composed of special-purpose intelligent systems like this, designed by natural selection to perform specific tasks that are important for survival.

But of course when we speak of ‘intelligence’ we are usually thinking of something much broader — the sort of general cognitive capacity measured by IQ tests. Again, we can think of this as a problem-solving capacity, but this time a general, open-ended one, which can be applied to any task.
(IQ tests have different components — reasoning, knowledge, processing speed, and so on — but these are themselves general abilities.)

Note that I’ve said nothing about the more subjective aspects of intelligence, such as self-awareness, emotion, and consciousness. That’s not because I think they are irrelevant. For example, emotional responses are important for evaluating courses of action and making wise decisions. But I see them as important because they make us better problem solvers, not in their own right. I don’t think we should assume that artificial or alien intelligences will have them, at least in the same form we do. They might find different ways of doing things. So I don’t want to build them into the definition of intelligence.

Another thing to note is that I’ve defined general intelligence in a way that makes it problematic. How on earth could evolution (or human engineers) create a mechanism that could solve any problem? In fact, I don’t think it did. There is no general-purpose reasoning system in the brain. Rather, evolution — both biological and cultural — found tricks for getting the special-purpose systems to work together in ways that approximate to general intelligence. This is what created the conscious mind — the fragile superstructure I referred to.

I’ll say a bit about how I think this happened because it’s central to the way I think of intelligence. (I should note that my views on this are heavily indebted to the work of the philosophers Daniel Dennett and Peter Carruthers, among others. I particularly recommend Dennett’s book Consciousness Explained and Carruthers’s The Centred Mind.)

I think there were three key components to the trick: language, self-stimulation, and mental imagery, which developed separately. Language gives us a universal representational medium, which can combine outputs from different specialist systems.
It has a structure which facilitates complex thought and logical inference, and it can represent abstract ideas and imaginary situations.

By self-stimulation, I mean stimulating one’s own mental subsystems by creating sensory inputs that focus and direct their activities. In particular, we can stimulate ourselves with language, questioning ourselves, instructing ourselves, prompting ourselves, and so on.

The third element is the capacity to produce mental imagery and to imagine ourselves performing actions. (The latter ability probably depends on mechanisms that evolved for the guidance of action.) This enables new kinds of self-stimulation. We can talk to ourselves in inner speech (producing auditory images of the utterances), conjure up images of sights and sounds and other stimuli, and try out actions in imagination before performing them. Such imagistic self-stimulation creates what we call the stream of consciousness, and it forms a new level of mental activity, which Daniel Dennett describes as a soft-wired ‘virtual machine’ running on the hardware of the biological brain.

This virtual mind enables us to tackle problems beyond the scope of our unaided biological brains. When we confront a problem to which our mental processes don’t deliver a spontaneous response, we don’t have to remain baffled. We can do something — start questioning ourselves: How could I tackle this? What would help? Is there another way of looking at it? What if I did this? And we can imagine relevant scenes, objects, conversations, actions, and suchlike. These self-stimulations may then generate a spontaneous response — more inner speech, other sensory imagery, or an emotional reaction — which reframes the problem or provides a partial solution to it, and which prompts another round of self-stimulation, and so on.
In this way, by engaging in cycles of self-stimulation and response, we can work our way through problems that would otherwise be beyond us. It is important to emphasise that the process needn’t be pre-planned. We don’t need to know in advance precisely which self-stimulations will solve our original problem. (If we did, then we would in effect already have solved it.) Rather, it is a process of trial and error, and we may make many false starts and encounter many dead ends before we get to a solution. At the same time, however, it won’t be completely random. We may have picked up useful tricks and developed hunches about what will work, based on past experience. And, of course, we can draw on vast stores of culturally transmitted knowledge and know-how, thanks again to the wonderfully flexible representational medium provided by human language.

I think this distinction between the biological mind and the virtual mind is crucial to understanding human psychology, and I have argued that it corresponds to the distinction drawn by ‘dual-system’ theories of reasoning, as advocated, for example, by Daniel Kahneman in his book Thinking, Fast and Slow. Dual-system theories claim that the human mind has two different reasoning systems: System 1 (actually a large suite of subsystems), whose operations are fast, automatic, and nonconscious, and System 2, which is slow, controlled, and conscious. System 1 corresponds to the biological mind, and System 2 to the virtual mind. (System 2 processes are conscious, since sensory imagery is processed like actual sensory inputs — we are aware of imaged sights and sounds in much the same way that we are aware of real ones.) And it is by installing this virtual System 2 in our heads (that is, by developing regular habits of self-stimulation) that we achieve something close to general intelligence. Of course, you can’t install a System 2 on just any brain.
You need to have a suitably rich suite of biological subsystems in place first, including a language system, before you can get the trick to work.

RB: How does, and how can, machine intelligence enhance human intelligence?

KF: In two very different ways, I think.