Artificial Intelligence and Consciousness

Total pages: 16

File type: PDF, size: 1020 KB

Artificial Intelligence and Consciousness

Antonio Chella (1), Riccardo Manzotti (2)
(1) Department of Computer Engineering, University of Palermo, Viale delle Scienze, 90128, Palermo, Italy, [email protected]
(2) Institute of Communication, Enterprise Consumption and Behaviour, IULM University, via Carlo Bo, 20143, Milan, Italy, [email protected]

Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Morpheus: We have only bits and pieces of information, but what we know for certain is that at some point in the early twenty-first century all of mankind was united in celebration. We marvelled at our own magnificence as we gave birth to AI.
Neo: AI - you mean Artificial Intelligence?
Morpheus: A singular consciousness that spawned an entire race of machines.
(from The Matrix script, 1999)

Abstract

Consciousness is no longer a threatening notion in the AI community. In human beings, consciousness corresponds to a collection of different features of human cognition. AI researchers are interested in understanding these features and, where achievable, in replicating them in agents. It is fair to claim that there is broad consensus about the distinction between the phenomenal and the cognitive aspects of consciousness (sometimes referred to as P- and A-consciousness). The former is related to the so-called hard problem and requires deep theoretical commitments. The latter problems are more frequently, if not more successfully, addressed by the AI community. By and large, several models compete to suggest the best way to deal with the cognitive aspects of consciousness allegedly missing from traditional AI implementations. The AAAI Symposium on AI and Consciousness presents an overview of the current state of the art of consciousness-inspired AI research.

Introduction

In the last ten years there has been growing interest in the field of consciousness. Several researchers, also from traditional Artificial Intelligence, have addressed the hypothesis of designing and implementing models for artificial consciousness (sometimes referred to as machine consciousness or synthetic consciousness): on one hand there is the hope of being able to design a model for consciousness; on the other hand the actual implementations of such models could be helpful for understanding consciousness (Minsky, 1985; Baars, 1988; Franklin, 1995; McCarthy, 1995; Aleksander, 2000; Edelman and Tononi, 2000; Jennings, 2000; Aleksander, 2001; Baars, 2002; Franklin, 2003; Kuipers, 2005; Adami, 2006; Minsky, 2006; Chella and Manzotti, 2007).

At the same time, trying to implement a conscious machine is a feasible approach to the scientific understanding of consciousness itself. Edelman and Tononi wrote that "to understand the mental we may have to invent further ways of looking at brains. We may even have to synthesize artifacts resembling brains connected to bodily functions in order fully to understand those processes. Although the day when we shall be able to create such conscious artifacts is far off, we may have to make them before we deeply understand the processes of thought itself." (Edelman and Tononi, 2000).

The field of artificial consciousness naturally emerges from the debate on AI and consciousness. According to Holland (Holland, 2003), it is possible to distinguish between Weak Artificial Consciousness and Strong Artificial Consciousness, defined as follows:

1. Weak Artificial Consciousness: the design and construction of machines that simulate consciousness, or cognitive processes usually correlated with consciousness.

2. Strong Artificial Consciousness: the design and construction of conscious machines.

Most of the people currently working in AI could embrace the former definition. However, the boundaries between the two are not easy to define. For instance, if a machine could exhibit all the behaviors normally associated with a conscious being, would it be a conscious machine? Are there behaviors which are uniquely correlated with consciousness?

Other authors claim that the understanding of consciousness could provide a better foundation for complex control whenever autonomy has to be achieved. In this respect, consciousness-inspired architectures could, at least in principle, be applied to all kinds of complex systems, ranging from a petrochemical plant to a network of computers, whose complexity now outstrips traditional control techniques. The complex interaction between the brain, the body and the surrounding environment has likewise been pointed out (Chrisley, 2003; Rockwell, 2005; Chella and Manzotti, 2007; Manzotti, 2007).

In the field of AI there has been considerable interest in consciousness. Marvin Minsky was one of the first to point out that "some machines are already potentially more conscious than are people, and that further enhancements would be relatively easy to make. However, this does not imply that those machines would thereby, automatically, become much more intelligent. This is because it is one thing to have access to data, but another thing to know how to make good use of it." (Minsky, 1991).

According to Sanz, there are three motivations to pursue artificial consciousness (Sanz, 2005):

• implementing and designing machines resembling human beings (cognitive robotics);
• understanding the nature of consciousness (cognitive science);
• implementing and designing more efficient control systems.

The current generation of systems for man-machine interaction shows impressive performances with respect to the mechanics and the control of movements; see for example the anthropomorphic robots produced by Japanese companies and universities. However, these robots, though at the state of the art, present only limited capabilities of perception, reasoning and action in novel and unstructured environments. Moreover, the capabilities of user-robot interaction are standardized and well defined. A new generation of robots and softbots aimed at interacting with humans in unconstrained environments shall need a better awareness of their surroundings and of the relevant events, objects, and agents. In short, the new generation of robots and softbots shall need some form of artificial consciousness.

AI has long avoided facing consciousness in agents. Yet consciousness corresponds to many aspects of human cognition which are essential to our highest cognitive capabilities: autonomy, resilience, phenomenal experience, learning, attention. In the past, it was customary to approach the human mind as if consciousness were an unnecessary gadget that could be added at the end; the prevailing attitude regarded consciousness as a confused name for a set of ill-posed problems. Yet classic AI faced many problems and, in many respects, similar problems now face neural networks, robotics, autonomous agents and control systems. The fact is that AI systems, although far more advanced than in the past, still fall short of human agents. Consciousness could be the missing step in the ladder from current artificial agents to human-like agents. A few years ago, John Haugeland asked (Haugeland, 1985/1997, p. 247): "Could consciousness be a theoretical time bomb ticking away in the belly of AI?"

Consciousness inspired AI research

Consciousness-inspired AI research has appeared in many different contexts, ranging from robotics to control systems, from simulation to autonomous systems.

The target of researchers involved in recent work on artificial consciousness is twofold: the nature of phenomenal consciousness (the so-called hard problem) and the active role of consciousness in controlling and planning the behaviour of an agent. We do not know yet if it is possible to address the two aspects separately. Most mammals, and in particular human beings, seem to show some kind of consciousness. Therefore, it is highly probable that the kind of cognitive architecture responsible for consciousness has some evolutionary advantage. Although it is still difficult to single out a precise functional role for consciousness, many believe that consciousness endorses more robust autonomy, higher resilience, a more general capability for problem-solving, reflexivity, and self-awareness (Atkinson, Thomas et al., 2000; McDermott, 2001; Franklin, 2003; Bongard, Zykov et al., 2006).

A few research areas cooperate and compete to outline the framework of this new field: embodiment; simulation and depiction; environmentalism or externalism; and extended control theory. None of them is completely independent of the others, and they strive to reach a higher level of integration.

Embodiment tries to address the issues of symbol grounding, anchoring, and intentionality. Recent work emphasizing the role of embodiment in grounding conscious experience goes beyond the insights of Brooksian embodied AI and discussions of symbol grounding (Harnad, 1990; Harnad, 1995; Ziemke, 2001; Holland, 2003; Bongard, Zykov et al., 2006). On this view, a crucial role for the body in an artificial consciousness is to provide the unified, meaning-giving locus required to support and justify attributions of coherent experience in the first place.

Simulation and depiction deal with synthetic phenomenology, developing models of mental imagery, attention, and working memory. Progress has been made in understanding how imagination- and simulation-guided action (Hesslow, 2003), along with the virtual reality metaphor (Revonsuo, 1995), are crucial components of epigenetic robotics and synthetic approaches to robotics.
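The idea of simulation-guided action discussed here, in the spirit of Hesslow's simulation hypothesis, lends itself to a compact illustration: an agent that runs its forward model internally to "imagine" the outcome of each candidate action before committing to one. The following is only a toy sketch; the grid world, the evaluation function, and all names are illustrative assumptions, not anything from the paper.

```python
# Minimal illustration of simulation-guided action selection: before acting,
# the agent runs its forward model "offline" to predict the outcome of each
# candidate action, evaluates the predicted outcomes, and picks the best.

GOAL = (2, 2)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def forward_model(state, action):
    """Predict the next state without actually acting (internal simulation)."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def evaluate(state):
    """Inner evaluation of a simulated outcome: negative distance to the goal."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def choose_action(state):
    """Simulate every action internally; pick the best predicted outcome."""
    return max(ACTIONS, key=lambda a: evaluate(forward_model(state, a)))

# The agent "imagines" each move from (0, 0) and selects the most promising one.
state = (0, 0)
while state != GOAL:
    action = choose_action(state)
    state = forward_model(state, action)  # acting matches the model's prediction
print(state)  # → (2, 2)
```

In a fuller architecture the forward model would be learned rather than given, and the inner evaluation would play the role of simulated perception and emotion rather than a hand-coded distance.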
Recommended publications
  • Deep Learning and the Global Workspace Theory
    Deep Learning and the Global Workspace Theory. Rufin VanRullen (1,2) and Ryota Kanai (3). (1) CerCo, CNRS UMR5549, Toulouse, France; (2) ANITI, Université de Toulouse, France; (3) Araya Inc., Tokyo, Japan. Opinion paper, under review. arXiv:2012.10390v2 [cs.AI], 20 Feb 2021. Abstract: Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal global latent workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications. 1. Cognitive neural architectures in brains and machines. Deep learning denotes a machine learning system using artificial neural networks with multiple "hidden" layers between the input and output layers. Although the underlying theory is more than three decades old [1, 2], it is only in the last decade that these systems have started to fully reveal their potential [3]. Many of the recent breakthroughs in AI have been fueled by deep learning. Neuroscientists have been quick to point out the similarities (and differences) between the brain and these deep artificial neural networks [4-9]. The advent of deep learning has allowed the efficient computer implementation of perceptual and cognitive functions that had been so far inaccessible.
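The roadmap proposed in that paper (translation between modality-specific latent spaces feeding a single shared workspace) can be caricatured in a few lines of code. The sketch below is a loose illustration under strong simplifying assumptions: linear maps instead of trained translation networks, vector norm as a crude stand-in for salience, and random weights. None of the class or function names come from the paper.

```python
import numpy as np

# Toy "global latent workspace": specialized modules encode private latent
# vectors into a shared space, the most salient candidate wins access to the
# workspace, and the winning content is broadcast back to every module.

rng = np.random.default_rng(0)
SHARED_DIM = 4

class Module:
    def __init__(self, private_dim):
        # Linear "translations" between the private latent space and the shared one.
        self.encoder = rng.standard_normal((SHARED_DIM, private_dim))
        self.decoder = rng.standard_normal((private_dim, SHARED_DIM))

    def to_shared(self, latent):
        return self.encoder @ latent

    def from_shared(self, shared):
        return self.decoder @ shared

def workspace_step(modules, latents):
    """One cycle: select the most salient shared-space candidate, broadcast it."""
    candidates = [m.to_shared(z) for m, z in zip(modules, latents)]
    winner = max(candidates, key=lambda c: float(np.linalg.norm(c)))
    return winner, [m.from_shared(winner) for m in modules]

vision, language = Module(3), Module(5)
winner, broadcasts = workspace_step([vision, language], [np.ones(3), 0.1 * np.ones(5)])
print(winner.shape, [b.shape for b in broadcasts])
```

A real implementation along the paper's lines would train the encoders and decoders with unsupervised neural-translation objectives (e.g. cycle-consistency) and use learned attention, not vector norms, to arbitrate access to the workspace.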
  • Mind and World: Beyond the Externalism/Internalism Debate
    Research Proposal: Mind and World: Beyond the Externalism/Internalism Debate. Sanjit Chakraborty, Research Scholar, Department of Philosophy, Jadavpur University. Background: For the last few years the concept of natural kind terms has haunted me. My main concern has been the location of the meaning of these terms. Are meanings of natural kind terms in the head or in the world? This question has been among the most pressing in the philosophy of mind and the philosophy of language. I have realized that we cannot separate mind from the world. I had in the beginning only a layman's conception of mind, meaning and the world. When I entered the field of philosophy, inspired by Hilary Putnam, I found that semantic externalism is a vexing issue involving a vast area. The location of content is at the core of the metaphysical debate between internalism and externalism, in the sense that internalists believe that mental properties are intrinsic only if they are preserved across world identity of internal replicas. Externalism is opposed to this thinking: for externalists, mental properties are in many cases dependent on the physical or social environment. The linguistic strategy also maintains a difference between internalism and externalism regarding mental content. Descriptivism focuses on general terms that consist in descriptive content and leads to a mode of presentation of reference through sense. The causal theory of reference, by contrast, refutes descriptivism, holding that there is a causal chain of reference between words and objects that helps us identify an agent's thought through its relation to the external environment.
  • Unity of Consciousness and Bi-Level Externalism
    Unity of Consciousness and Bi-Level Externalism. Bernard W. Kobes. In a conference hotel recently I accompanied a philosophical friend on a shopping expedition for some bold red and green wrapping papers, to be displayed as props in his upcoming talk on sensory qualia. Most examples used in philosophical discussions of consciousness have this static, snapshot character. The vehicle for a conscious perception of red wrapping paper may be a pattern of activation in a region of the brain devoted to visual input. But while walking, a person can turn her head, notice the wrapping paper, reach out, and grasp it. Our survival often depends on our ability to negotiate a changing environment by simultaneous streams of perception and action. What do vehicles of conscious content look like when there is realistically complex interaction with the world? A dynamic view requires, according to Susan Hurley, a 'twisted rope', the strands of which are continuous multi-modal streams of inputs and outputs looping into the environment and back into the central nervous system. The twist in the rope is analogous to the unity of consciousness. The contents of conscious perceptions depend directly (non-instrumentally) on relations among inputs and outputs; similarly for conscious intentions. We are evidently a long way from static red and green sensory qualia. Hurley's book is a work of formidable ambition and multi-disciplinary erudition. She marshals wide-ranging literatures in the philosophy of mind, cognitive psychology, cognitive neuroscience, and dynamic systems theory. Her Kant and Wittgenstein scholarship is also impressive.
  • Knowledge According to Idealism
    Knowledge According to Idealism. Idealism as a philosophy had its greatest impact during the nineteenth century. It is a philosophical approach that has as its central tenet that ideas are the only true reality, the only thing worth knowing. In a search for truth, beauty, and justice that is enduring and everlasting, the focus is on conscious reasoning in the mind. The main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge endure. Idealism was often referred to as "idea-ism". Idealists believe that ideas can change lives; the most important part of a person is the mind, which is to be nourished and developed. Etymologically, its origin is the Greek idea, "form, shape", from the root weid-, which also underlies the "his" in "history" ("wise, learned"). In Latin this root became videre, "to see", and related words. The same root appears in Sanskrit veda, "knowledge", as in the Rig-Veda. The stem entered Germanic as witan, "know", seen in modern German wissen, "to know", and in English "wisdom" and "twit", a shortened form of Middle English atwite, derived from æt "at" + witen "reproach". In short, idealism is the philosophical position that nothing exists except as an idea in the mind of man or the mind of God. The idealist believes that the universe has intelligence and a will, and that all material things are explainable in terms of a mind standing behind them. Philosophical rationale of idealism: (a) The Universe (ontology or metaphysics): to the idealist, the nature of the universe is mind; it is an idea.
  • Artificial Consciousness and the Consciousness-Attention Dissociation
    Consciousness and Cognition 45 (2016) 210–225. Review article: Artificial consciousness and the consciousness-attention dissociation. Harry Haroutioun Haladjian (a, corresponding author), Carlos Montemayor (b). (a) Laboratoire Psychologie de la Perception, CNRS (UMR 8242), Université Paris Descartes, Centre Biomédical des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, France; (b) San Francisco State University, Philosophy Department, 1600 Holloway Avenue, San Francisco, CA 94132, USA. Article history: received 6 July 2016; accepted 12 August 2016. Keywords: artificial intelligence; artificial consciousness; consciousness; visual attention; phenomenology; emotions; empathy. Abstract: Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans.
  • Externalism Is a View About the Conditions for Our Thoughts and Words to Refer to Things
    Semantic Internalism and Externalism, in The Oxford Handbook of the Philosophy of Language, ed. Barry C. Smith and Ernest Lepore, Oxford University Press, 2006, pp. 323-40. Katalin Farkas, Central European University, Budapest. 1. Three claims about meaning. In a sense, the meaning of our words obviously depends on circumstances outside us. 'Elm' in English is used to talk about elms, and though I could decide – perhaps as a kind of code – to use the word 'elm' to talk about beeches, my decision would hardly change the English language. The meaning of 'elm' depends on conventions of the language-speaking community, and these are certainly beyond my control. In this sense, no-one will disagree that meaning is determined by factors outside the individual. At the same time, it seems that it is up to me what I mean by my words; and in fact, the meaning of a word in the language is simply a result of what most of us mean by it. Another way of putting this point is that even if the meaning of an expression is determined by social agreement, grasping the meaning of the word is an individual psychological act. I may grasp the usual public meaning correctly, or I may – willingly or accidentally – mean something different by the word, but it looks as if meaning in this sense depends entirely on me. It is also plausible to assume that in some sense our physical environment contributes to what our words mean. If I am right in assuming that before Europeans arrived in Australia, English had had no word which meant the same as the word 'kangaroo' does nowadays, this is easily explained by the fact that people at that time hadn't encountered kangaroos.
  • Button Putnam Review
    Naturalism, Realism, and Normativity. By Hilary Putnam, edited by Mario de Caro. Harvard University Press, 2016, 248pp., £39.95 (hbk). ISBN: 9780674659698. This review is due to be published in Philosophy. This is a pre-print and may be subject to minor alterations. The authoritative version should be sought from Philosophy. Hilary Putnam's Realism with a Human Face began with a quotation from Rilke, exhorting us to 'try to love the questions themselves like locked rooms and like books that are written in a very foreign tongue'. Putnam followed this advice throughout his life. His love for the questions permanently changed how we understand them. In Naturalism, Realism, and Normativity – published only a few weeks after his death – Putnam continued to explore central questions concerning realism and perception, from the perspective of 'liberal naturalism'. The volume's thirteen papers were written over the past fifteen years (only one paper is new), and they show a man who fully inhabited the questions he loved. But this presents a certain difficulty for the volume's would-be readers. Since Putnam had engaged with these questions so many times before, much of his reasoning takes place 'off-stage', and only appears in the papers in thumbnail form. Compounding this problem, chapters 2, 4, 5 and 8 are replies by Putnam to papers written about him (by Williams, Sosa, Boyd and Wright), but Putnam's replies are printed here without the original papers. Conversely, chapters 7 and 11 are papers that Putnam wrote about specific authors (Dummett and Block), but they are printed here without those authors' replies.
  • Is AI Intelligent, Really?
    Seattle Pacific University, Digital Commons @ SPU, SPU Works, August 23rd, 2019. Is AI intelligent, really? Bruce D. Baker, Seattle Pacific University. Follow this and additional works at: https://digitalcommons.spu.edu/works. Part of the Artificial Intelligence and Robotics Commons, Comparative Methodologies and Theories Commons, Epistemology Commons, Philosophy of Science Commons, and the Practical Theology Commons. Recommended Citation: Baker, Bruce D., "Is AI intelligent, really?" (2019). SPU Works. 140. https://digitalcommons.spu.edu/works/140. This Article is brought to you for free and open access by Digital Commons @ SPU. It has been accepted for inclusion in SPU Works by an authorized administrator of Digital Commons @ SPU. Is AI intelligent, really? Good question. On the surface, it seems simple enough. Assign any standard you like as a demonstration of intelligence, and then ask whether you could (theoretically) set up an AI to perform it. Sure, it seems common sense that given sufficiently advanced technology you could set up a computer or a robot to do just about anything that you could define as being doable. But what does this prove? Have you proven the AI is really intelligent? Or have you merely shown that there exists a solution to your pre-determined puzzle? Hmmm. This is why AI futurist Max Tegmark emphasizes the difference between narrow (machine-like) and broad (human-like) intelligence. And so the question remains: Can the AI be intelligent, really, in the same broad way its creator is? Why is this question so intractable? Because intelligence is not a monolithic property.
  • Artificial Intelligence and National Security
    Artificial Intelligence and National Security. Artificial intelligence will have immense implications for national and international security, and AI's potential applications for defense and intelligence have been identified by the federal government as a major priority. There are, however, significant bureaucratic and technical challenges to the adoption and scaling of AI across U.S. defense and intelligence organizations. Moreover, other nations—particularly China and Russia—are also investing in military AI applications. As the strategic competition intensifies, the pressure to deploy untested and poorly understood systems to gain competitive advantage could lead to accidents, failures, and unintended escalation. The Bipartisan Policy Center and Georgetown University's Center for Security and Emerging Technology (CSET), in consultation with Reps. Robin Kelly (D-IL) and Will Hurd (R-TX), have worked with government officials, industry representatives, civil society advocates, and academics to better understand the major AI-related national and economic security issues the country faces. This June 2020 paper hopes to shed more clarity on these challenges and provide actionable policy recommendations to help guide a U.S. national strategy for AI. BPC's effort is primarily designed to complement the work done by the Obama and Trump administrations, including President Barack Obama's 2016 The National Artificial Intelligence Research and Development Strategic Plan (i), President Donald Trump's Executive Order 13859 announcing the American AI Initiative (ii), and the Office of Management and Budget's subsequent Guidance for Regulation of Artificial Intelligence Applications (iii). The effort is also designed to further advance work done by Kelly and Hurd in their 2018 Committee on Oversight and Government Reform (Information Technology Subcommittee) white paper, Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S.
  • Theory of Mind as Inverse Reinforcement Learning
    Theory of mind as inverse reinforcement learning. Julian Jara-Ettinger (1,2). (1) Department of Psychology, Yale University, New Haven, CT, United States; (2) Department of Computer Science, Yale University, New Haven, CT, United States. Abstract: We review the idea that Theory of Mind—our ability to reason about other people's mental states—can be formalized as inverse reinforcement learning. Under this framework, expectations about how mental states produce behavior are captured in a reinforcement learning (RL) model. Predicting other people's actions is achieved by simulating a RL model with the hypothesized beliefs and desires, while mental-state inference is achieved by inverting this model. Although many advances in inverse reinforcement learning (IRL) did not have human Theory of Mind in mind, here we focus on what they reveal when conceptualized as cognitive theories. We discuss landmark successes of IRL, and key challenges in building a human-like Theory of Mind. ... think or want based on how they behave. Research in cognitive science suggests that we infer mental states by thinking of other people as utility maximizers: constantly acting to maximize the rewards they obtain while minimizing the costs that they incur [3–5]. Using this assumption, even children can infer other people's preferences [6,7], knowledge [8,9], and moral standing [10–12]. Theory of mind as inverse reinforcement learning: computationally, our intuitions about how other minds work can be formalized using frameworks developed in a classical area of AI: model-based reinforcement learning (hereafter reinforcement learning or RL). RL problems focus on how to combine a world model with a reward function to produce a sequence of actions, called a policy, that maximizes agents' rewards while minimizing their costs.
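The framing in that review (a forward RL model of approximately rational action, inverted with Bayes' rule to recover hidden mental states) can be made concrete with a tiny example. This is an illustrative sketch only: the one-dimensional world, the softmax choice rule, and all names are assumptions, not the authors' model.

```python
import math

# Toy inverse planning: the observer predicts actions with a forward model of
# a (near-)rational agent, then inverts it with Bayes' rule to infer the
# agent's hidden goal from an observed action.

ACTIONS = {"left": -1, "right": +1}

def action_value(pos, action, goal):
    """Reward model: moving closer to the goal is better (negative distance)."""
    nxt = min(max(pos + ACTIONS[action], 0), 4)  # positions 0..4
    return -abs(nxt - goal)

def policy(pos, goal, beta=3.0):
    """Forward RL model: softmax over action values (approximate rationality)."""
    expv = {a: math.exp(beta * action_value(pos, a, goal)) for a in ACTIONS}
    total = sum(expv.values())
    return {a: v / total for a, v in expv.items()}

def infer_goal(pos, observed_action, candidate_goals):
    """Inverse step: Bayes' rule with a uniform prior over candidate goals."""
    likelihood = {g: policy(pos, g)[observed_action] for g in candidate_goals}
    total = sum(likelihood.values())
    return {g: l / total for g, l in likelihood.items()}

# Seeing the agent step right from position 2 makes goal 4 far likelier than goal 0.
posterior = infer_goal(2, "right", [0, 4])
print(posterior)
```

Scaling this up to sequential planning, richer reward spaces, and inference over beliefs as well as desires is exactly where the key challenges discussed in the review arise.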
  • Artificial Intelligence in the Context of Human Consciousness
    Running head: AI in the Context of Human Consciousness. Artificial Intelligence in the Context of Human Consciousness. Hannah DeFries. A Senior Thesis submitted in partial fulfillment of the requirements for graduation in the Honors Program, Liberty University, Spring 2019. Acceptance of Senior Honors Thesis: this Senior Honors Thesis is accepted in partial fulfillment of the requirements for graduation from the Honors Program of Liberty University. Kyung K. Bae, Ph.D. (Thesis Chair); Jung-Uk Lim, Ph.D. (Committee Member); Mark Harris, Ph.D. (Committee Member); James H. Nutter, D.A. (Honors Director). Abstract: Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI's development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make achieving an understanding of both natural and artificial intelligence imperative. Researchers are continuing to discover natural processes responsible for essential human skills like decision-making, understanding language, and performing multiple processes simultaneously.
  • The Relationship Between Consciousness and Intentionality
    University of Central Florida, STARS, HIM 1990-2015, 2013. The Relationship Between Consciousness and Intentionality. Jordan Bell, University of Central Florida. Part of the Philosophy Commons. Find similar works at: https://stars.library.ucf.edu/honorstheses1990-2015. University of Central Florida Libraries, http://library.ucf.edu. This Open Access is brought to you for free and open access by STARS. It has been accepted for inclusion in HIM 1990-2015 by an authorized administrator of STARS. For more information, please contact [email protected]. Recommended Citation: Bell, Jordan, "The relationship between consciousness and intentionality" (2013). HIM 1990-2015. 1384. https://stars.library.ucf.edu/honorstheses1990-2015/1384. A thesis submitted in partial fulfillment of the requirements for the Honors in the Major Program in Philosophy, in the College of Arts & Humanities and in The Burnett Honors College at the University of Central Florida, Orlando, Florida, Spring Term 2013. Thesis Chair: Dr. Mason Cash. Abstract: Within the philosophy of mind, two features of our mental life have been acknowledged as the most perplexing: consciousness, the phenomenal "what it is likeness" of our mental states, and intentionality, the aboutness or directedness of our mental states. As such, it has become commonplace to develop theories about these phenomena which seek to explain them naturalistically, that is, without resort to magic or miracles. Traditionally this has been done by analyzing consciousness and intentionality apart from one another. However, in more recent years the tide has turned. In contemporary theories these phenomena are typically analyzed in terms of one another.