
The Connection Between Intelligence and Consciousness
The Challenges We Face

Thomas Pinella
4.20.2016
CSC 242W

Contents

Abstract
Introduction
A Map to General Intelligence
Reinforcement Learning
Computational Creativity
The Question of Consciousness
Easy Problems of Consciousness
Hard Problems of Consciousness
Consciousness as Fundamental: Panpsychism
The Theater of Consciousness
Self-Model Theory of Consciousness
Integrated Information Theory of Consciousness
Calculating Φ
Criticism of IIT
Different Consciousnesses
IIT and Intelligence
Self-Models and Intelligence
Problem I: Mysterianism
Problem II: The Fabric of Reality
Conclusion
References

Abstract

Our intuition tells us that there is a connection between intelligence and consciousness. We assume consciousness in other human beings and generally grant it to other intellectually equipped, high-functioning mammals such as primates and dolphins. But we are more hesitant to attribute consciousness to lower-order beings, such as fruit flies and bacteria, which are comparatively lacking in cognitive capacity. Although it is common for intuitions to prove misleading, this one is not completely unfounded; as we will explore in the following pages, there is indeed a theoretical basis supporting the idea that intelligence and subjective experience are related at a fundamental level. Additionally, we will examine the intrinsic difficulties associated with the task of imbuing intelligence, and by extension consciousness, into a machine.

Introduction

In his 1950 paper “Computing Machinery and Intelligence,” the paper that ignited the field of artificial intelligence, Alan Turing opens with a hypothetical game he called the “Imitation Game” (Turing 1950).
The hypothetical game, which originally involved interactions among three entities (a man, a woman, and an interrogator, with a machine taking on the role of either the man or the woman), has since transformed into the more popular “Turing Test.” This “Turing Test,” as it has come to be known, is a simplified version involving one interrogator and one interrogatee. Through conversation alone, the interrogator’s goal is to correctly identify the interrogatee as human or machine, and the interrogatee’s goal is to fool the interrogator into believing that it is human. Should a machine succeed in convincing the interrogator that it is human, we would consider the machine to have passed the “Turing Test” and, by the test’s definition, we would be forced to attribute “intelligence” to it. Although this test has been popularized over the years, especially through the annual Loebner Prize competition and frequent appearances in Hollywood movies such as The Imitation Game and Ex Machina, it is clearly not very scientific and leaves us with a lacking definition of intelligence. But in some ways this is evidence of how difficult it is to properly formalize intelligence. All this being said, the “Turing Test” is a test for Strong AI, also known as Artificial General Intelligence (AGI). This intelligence is at the level of a human and is called “general” because of its ability to perform at a high level over a wide variety of tasks, as a human does. This is opposed to Weak AI, which has a narrow, non-general scope. In some ways, then, the “Turing Test” is a reasonable test for this type of general intelligence, for what is more unpredictable or general than conversation? The debate over whether it is possible to build Strong AI, or AGI, has gone on for decades and will most likely continue for decades to come (Hawkins 2004).
Those who argue that it is indeed possible often ask the logical question: why should the substrate matter? If life can emerge from carbon, what should stop it from emerging from silicon? After all, what is the brain but a complex organ that manipulates information? We may not fully understand it yet, but that is no reason why mimicking the functions of the brain in a computer should be impossible. In order to find a more concrete connection between intelligence and consciousness beyond mere intuition, we will first need to better understand and better formalize what it means to be intelligent. We will then introduce several theories of consciousness and examine how they relate to intelligence as we will have formally defined it. Finally, we will take a look at the intrinsic difficulties associated with bringing the theories presented in this paper to life.

A Map to General Intelligence

In 1964, Ray Solomonoff, considered one of the founders of algorithmic information theory (along with Andrei Kolmogorov and Gregory Chaitin), developed a mathematical method of universal inductive inference, referred to as Solomonoff Induction. The concept, though uncomputable and impossible to implement on computers, is relatively straightforward: in order to predict what caused the current observation x, we test every possible cause (where each candidate cause is viewed as a program p), and among those cases where the output of p matches x, the shorter p’s representation in code is, the more likely it is to be the hypothesis that explains x (Sunehag and Hutter 2011). This draws upon the idea of Occam’s Razor, which states that the simplest explanation is generally the most likely to be correct. From Solomonoff’s theory of inductive inference came extensions and approximations of it, including one called AIXI, created by Marcus Hutter.
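The Occam weighting described above can be made concrete with a finite toy stand-in for Solomonoff Induction (the real scheme is uncomputable, so this sketch is purely illustrative; the "programs" here are hypothetical source/output pairs rather than actual executable code):

```python
# A finite toy stand-in for Solomonoff Induction. Each candidate
# "program" is a (source, output) pair; a program is a live hypothesis
# if its output extends the observed data, and its prior weight is
# 2^(-|source|), echoing the Occam's-razor weighting described above.

observation = "ababab"

programs = [
    ("print ab forever", "abababababab"),   # short source, consistent
    ("print literal ababab", "ababab"),     # longer source, consistent
    ("print a forever", "aaaaaaaaaaaa"),    # short source, inconsistent
]

# Discard programs whose output contradicts the observation so far.
consistent = [(src, out) for src, out in programs
              if out.startswith(observation)]

# Normalise the 2^(-length) weights into a posterior over hypotheses.
total = sum(2.0 ** -len(src) for src, _ in consistent)
posterior = {src: (2.0 ** -len(src)) / total for src, _ in consistent}

# The shortest consistent program dominates the posterior, so its
# continuation is the preferred prediction for what comes next.
best = max(posterior, key=posterior.get)
```

Because weights fall off exponentially with length, even a few extra characters of source code make a hypothesis dramatically less probable, which is the sense in which "simpler is likelier" here.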
As in Solomonoff Induction, AIXI tests all possible hypotheses (in actual implementation, it tests only a sample), but in addition it uses reinforcement learning to work towards some goal by maximizing reward at every iteration (Sunehag and Hutter 2011). This idea of using reinforcement learning in conjunction with Solomonoff Induction is shared by Schmidhuber’s Gödel Machine. Let’s demystify reinforcement learning and see how it relates to artificial general intelligence.

Reinforcement Learning

This method of machine learning distinguishes between an agent and its environment, and it models how the two affect each other. The important variables to keep track of are states, rewards, and actions. The goal of the agent is to find the sequence of actions that maximizes its total expected sum of rewards. Schmidhuber took this model of reinforcement learning and used it to build creative machines capable of coming up with novel and aesthetically pleasing constructions ranging from the arts to the sciences (Schmidhuber 2010). As we will see, this artificially manufactured creative power is directly linked to artificial general intelligence.

Computational Creativity

Without supervised learning, how is it possible to use reinforcement learning to build machines that possess some level of creativity? Schmidhuber approached this by attempting to formalize beauty. Essentially, the more something can be compressed, the more beauty it has. Beauty, by this definition, is inversely related to the Kolmogorov complexity (the length of the most compressed version of some data) of the object in question. In fact, this brings us back to Solomonoff Induction: the shortest explanations are the most probable predictions. The deep ties between compression and prediction are commonly understood in the field of algorithmic information theory (Franz 2015).
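A toy sketch can make the compression view concrete (the model and all numbers here are illustrative assumptions, not Schmidhuber's actual implementation): treat an agent's "compressor" as a Laplace-smoothed bigram model, let the beauty of a chunk of data be its negated code length under that model, and let the reward be the compression progress gained by letting the model learn from the chunk, a discrete stand-in for the change in beauty over time:

```python
import math
import random
from collections import Counter

class BigramCompressor:
    """A crude stand-in for the agent's improvable compressor."""

    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()

    def code_length(self, data):
        # Bits needed to encode data under the current model
        # (fewer bits = more compressible = more "beauty").
        bits = 0.0
        for prev, cur in zip(data, data[1:]):
            p = (self.bigrams[(prev, cur)] + 1) / (self.unigrams[prev] + 256)
            bits -= math.log2(p)
        return bits

    def update(self, data):
        # Let the compressor learn the statistics of what it just saw.
        self.bigrams.update(zip(data, data[1:]))
        self.unigrams.update(data[:-1])

def compression_progress(model, chunk):
    # Reward = bits saved on this chunk by improving the model on it.
    before = model.code_length(chunk)
    model.update(chunk)
    return before - model.code_length(chunk)

random.seed(0)
patterned = b"ab" * 32                                   # learnable structure
noise = bytes(random.randrange(256) for _ in range(64))  # no structure

reward_pattern = compression_progress(BigramCompressor(), patterned)
reward_noise = compression_progress(BigramCompressor(), noise)
```

The regular pattern yields far more compression progress than the noise: structure the model can predict is structure it can compress, which is the link the next paragraph draws between predictability, compression, and beauty.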
Therefore, that which is most predictable is also the most compressed and, by extension, the most beautiful. In Schmidhuber’s construction of a creative agent, however, he does not set beauty as the reward to maximize. Instead, his reinforcement learning algorithm maximizes the first derivative of beauty, which he refers to as “aesthetic pleasure.” This can be seen in the following equation, where O denotes the observer, or agent, at time t, D is the observed phenomenon, B(D, O(t)) is the compressed observation, or beauty as we defined it previously, and I(D, O(t)) is the “aesthetic pleasure” garnered by the agent as it observes the change in beauty:

    I(D, O(t)) = δB(D, O(t)) / δt

By maximizing the change in beauty, I(D, O(t)), the agent discovers or creates things that are not only beautiful but also novel. Because of this, it does not get stuck creating the same thing, however beautiful, over and over again; it gets “bored” and seeks to make new things. An important detail: the agent maximizes the change in beauty by modifying and improving its compressor, making it ever more efficient. The general intelligence present in this model becomes clearer when we see that the agent is acting like a scientist. What does a scientist do? A scientist strives to distill the world he or she observes into simple rules; the scientist, like our agent, is compressing. And there we have our definition of general intelligence: intelligence is the process of observing the world and compressing it into a simpler model capable of accurate prediction.

The Question of Consciousness

Although the concept of general intelligence can be difficult to grapple with and define, it is easy in comparison to understanding the nature of consciousness.
In consciousness research, a small but growing field, there are commonly understood to be two types of problems: the “easy” problems of consciousness and the “hard” problems of consciousness (Chalmers 1997).

Easy Problems of Consciousness

First, the easy problems: these are problems that neuroscience has been in the process of solving for decades, with considerable success so far. These problems ask for the correlation between observable behavior and activity in the brain; essentially, they ask how brain mechanisms perform functions. For this reason, the scientific method has worked well for neuroscientists interested in finding exactly which brain activities are responsible for certain behaviors. Questions that fall into this category include how entities categorize and react to environmental stimuli, what distinguishes wakefulness from sleep, and how behavior is controlled (Chalmers 1997).