Do We Adopt the Intentional Stance Toward Humanoid Robots?


Serena Marchesi1, Davide Ghiglino1,2, Francesca Ciardo1, Ebru Baykara1, Agnieszka Wykowska1*

1Istituto Italiano di Tecnologia, Genoa, Italy
2DIBRIS, Università di Genova

*Corresponding author:
Agnieszka Wykowska
Istituto Italiano di Tecnologia
Centre for Human Technologies
Via E. Melen, 83
16040 Genova, Italy
e-mail: [email protected]

Abstract

In daily social interactions, we explain and predict the behaviours of other humans by referring to mental states that we assume underlie those behaviours. In other words, we adopt the intentional stance (Dennett, 1971) towards other humans. However, today we also routinely interact with artificial agents, from Apple's Siri to GPS navigation systems, and in the near future we will casually interact with robots. Since we consider artificial agents to have no mental states, we might not adopt the intentional stance towards them, and this might result in failing to attune socially with them. This paper addresses the question of whether adopting the intentional stance towards artificial agents is possible. We propose a new method to examine whether people have adopted the intentional stance towards an artificial agent (a humanoid robot). The method consists of a questionnaire that probes participants' stance by requiring them to rate the likelihood of explanations (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents. Addressing this question is important not only for the theoretical purpose of understanding the mechanisms of human social cognition, but also for the practical endeavour of designing social robots.

Introduction

Over the last decades, new technologies have inexorably entered our homes, becoming an integral part of our everyday life. Our constant exposure to digital devices, some of which are seemingly "smart", makes interaction with technology increasingly smooth and dynamic from generation to generation (Baack, Brown & Brown, 1991; Zickuhr & Smith, 2012; Gonzàles, Ramirez & Viadel, 2012). Some studies support the hypothesis that this exposure is only beginning: it seems likely that technologically sophisticated artifacts, such as humanoid robots, will soon be present in our private lives as assistive technologies and housework helpers (for a review see Stone et al., 2017).

Despite the technological habituation we are (and presumably will continue to be) undergoing, little is known about the social cognition processes we put in place during interactions with machines, and with humanoid robots specifically. Several authors have theorized that humans possess a natural tendency to anthropomorphize what they do not fully understand. Epley et al. (2007), for instance, defined anthropomorphism as the attribution of human-like characteristics and properties to non-human agents and/or objects, independent of whether they are imaginary or real.
The likelihood of spontaneous attribution of anthropomorphic characteristics depends on three major conditions (Epley et al., 2007; Waytz et al., 2010): first, the availability of characteristics that activate existing knowledge we have about humans; second, the need for social connection and efficient interaction with the environment; and finally, individual traits (such as the need for control) or circumstances (e.g., loneliness, lack of bonds with other humans). In vol. I of the Dictionary of the History of Ideas, Agassi argues that anthropomorphism is a form of parochialism that allows us to project our limited knowledge onto a world we do not fully understand. Other authors have claimed that we, as humans, are the foremost experts in what it means to be human, but that we have no phenomenological knowledge of what it means to be non-human (Nagel, 1974; Gould, 1996). For this reason, when we interact with entities about which we lack specific knowledge, we commonly choose "human" models to predict their behaviors.

This stance might also be taken when we interact with technology whose operation we cannot understand. Almost everyone has seen someone shouting at a malfunctioning printer, or imploring their phone to work, as if they were interacting with another person. Thinking in rational terms, no healthy adult would ever consider his or her smartphone a sentient being; yet, sometimes, we behave as if the technological devices around us possessed intentions, desires and beliefs.

This has an important theoretical implication: from this perspective, it seems that others' mental states are a matter of external attribution rather than a property of an entity. Waytz et al. (2010) support this hypothesis and consider the attribution of mental states to be the result of an observer's perception of an observed entity, as a sum of motivational, cognitive, physical and situational factors. Other authors found that several cues can foster the attribution of mental states to non-human entities, from motion (Heider & Simmel, 1944; Scholl & Tremoulet, 2000) and the physical properties of an object (Morewedge et al., 2007) to the personality characteristics of the observer (Epley et al., 2007). Furthermore, it seems that the tendency to attribute mental states is ubiquitous and unavoidable for human beings (Parlangeli et al., 2012). Anthropomorphizing seems to be a default mode that we use to interpret and interact with the agents with which we share (or are going to share) our environment (Epley et al., 2007; Waytz et al., 2010; Wiese et al., 2017), unless other ways of interpreting other entities' behavior are available: for example, if a coffee machine is not working because it is not connected to a power socket, we would certainly not explain its failure to deliver coffee with reference to its bad intentions.

Already in the seventies, Dennett proposed that humans use different strategies to explain and predict the behavior of other entities, be they objects, artifacts or conspecifics (Dennett, 1971, 1987). Dennett defines three main strategies, or "stances", that humans use. Consider chemists or physicists in their laboratory, studying a certain kind of molecule: they try to explain (or predict) the molecules' behavior through the laws of physics. This is what Dennett calls the physical stance.
There are cases in which the laws of physics are an inadequate (or not the most efficient) way to predict the behavior of a system. For example, when we drive a car, we can fairly reliably predict that its speed will decrease if we push the brake pedal, since the car is designed this way. To make this kind of prediction, we do not need to know the precise physical mechanisms governing all the atoms and molecules in the car's braking system; it is sufficient to rely on our experience and knowledge of how the car is designed. This is what Dennett describes as the design stance. Dennett also proposes a third strategy, the intentional stance, which relies on the ascription of beliefs, desires, intentions and, more broadly, mental states to a system. Dennett (1987) defines any system whose behaviour can be predicted by others with reference to mental states as an intentional system. Epistemologically speaking, Dennett refers to mental states in a realist manner: he treats them as an objective phenomenon, since adopting the intentional stance is the best strategy only towards truly intentional systems (the "true believers"), while for other systems other strategies might work better. What makes Dennett an interpretationist, however, is that on his account mental states can be discerned only from the point of view of the one who adopts the intentional stance as a predictive strategy, and depend on whether this strategy is successful. In other words, although beliefs are a real phenomenon, it is the success of the predictive strategy that confirms their existence.

Interestingly, adopting the intentional stance towards an artefact (such as a humanoid robot) does not necessarily require that the artefact itself possesses true intentionality. Still, adopting the intentional stance might be a useful, or even default (as argued above), way to explain a robot's behavior. Perhaps we do not attribute mental states to robots, but we treat them as if they had mental states. Breazeal (1999) highlighted that this process does not require endowing machines with mental states in the human sense; rather, the user might be able to intuitively and reliably explain and predict the robot's behavior in these terms. The intentional stance is a powerful tool for interpreting other agents' behavior: it allows us to interpret behavioral patterns in a general and flexible way, and flexibility in changing predictions about others' intentions is a pivotal characteristic of humans. Adopting the intentional stance is effortless for humans but, of course, it is not a perfect strategy: if we realize that it is not the best stance for making predictions, we can resort to the design stance or even the physical stance. The choice of which stance to adopt is free and might be context-dependent: it is a matter of which explanation works best.
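To make the questionnaire method described in the Abstract more concrete, below is a minimal Python sketch of how responses could be scored. The bipolar 0-100 scale (mechanistic explanation at 0, mentalistic at 100), the 50-point midpoint, and the names stance_score and classify are illustrative assumptions for this sketch, not the paper's actual materials or analysis.

from statistics import mean

# Hypothetical response format: for each scenario (a sequence of
# photographs of a robot), the participant positions a slider between
# a mechanistic explanation (0) and a mentalistic explanation (100).
# Scale endpoints and the averaging rule are assumptions for
# illustration only.

def stance_score(slider_responses: list[float]) -> float:
    """Mean slider position across scenarios, in [0, 100].

    Values below 50 suggest a bias toward mechanistic explanations;
    values above 50 suggest a bias toward mentalistic (intentional
    stance) explanations.
    """
    return mean(slider_responses)

def classify(score: float, midpoint: float = 50.0) -> str:
    """Coarse label for a participant's overall stance bias."""
    if score < midpoint:
        return "mechanistic bias"
    if score > midpoint:
        return "mentalistic bias"
    return "no overall bias"

# Example: a participant who mostly prefers mechanistic explanations
# but still gives a substantial number of mentalistic ones.
responses = [20, 35, 70, 10, 55, 40, 30, 65]
print(f"score = {stance_score(responses):.1f} -> {classify(stance_score(responses))}")

Under these assumptions, a sample mean below the midpoint would correspond to the mechanistic bias reported in the Abstract, while individual items above the midpoint would reflect the substantial number of mentalistic explanations also observed.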