Computer Thought: Propositional Attitudes and Meta-Knowledge

Total Pages: 16

File Type: PDF, Size: 1020 KB

U·M·I International

COMPUTER THOUGHT: PROPOSITIONAL ATTITUDES AND META-KNOWLEDGE (ARTIFICIAL INTELLIGENCE, SEMANTICS, PSYCHOLOGY, ALGORITHMS)

Item Type: text; Dissertation-Reproduction (electronic)
Authors: Dietrich, Eric Stanley
Publisher: The University of Arizona
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Download date: 07/10/2021 12:28:25
Link to Item: http://hdl.handle.net/10150/188116

INFORMATION TO USERS

This reproduction was made from a copy of a manuscript sent to us for publication and microfilming. While the most advanced technology has been used to photograph and reproduce this manuscript, the quality of the reproduction is heavily dependent upon the quality of the material submitted. Pages in any manuscript may have indistinct print. In all cases the best available copy has been filmed. The following explanation of techniques is provided to help clarify notations which may appear on this reproduction.

1. Manuscripts may not always be complete. When it is not possible to obtain missing pages, a note appears to indicate this.
2. When copyrighted materials are removed from the manuscript, a note appears to indicate this.
3. Oversize materials (maps, drawings, and charts) are photographed by sectioning the original, beginning at the upper left hand corner and continuing from left to right in equal sections with small overlaps. Each oversize page is also filmed as one exposure and is available, for an additional charge, as a standard 35mm slide or in black and white paper format.*
4. Most photographs reproduce acceptably on positive microfilm or microfiche but lack clarity on xerographic copies made from the microfilm. For an additional charge, all photographs are available in black and white standard 35mm slide format.*

*For more information about black and white slides or enlarged paper reproductions, please contact the Dissertations Customer Services Department.

U·M·I International

8603337
Dietrich, Eric Stanley
COMPUTER THOUGHT: PROPOSITIONAL ATTITUDES AND META-KNOWLEDGE
The University of Arizona, Ph.D., 1985
University Microfilms International, 300 N. Zeeb Road, Ann Arbor, MI 48106

COMPUTER THOUGHT: PROPOSITIONAL ATTITUDES AND META-KNOWLEDGE

by Eric Stanley Dietrich

A Dissertation Submitted to the Faculty of the DEPARTMENT OF PHILOSOPHY, In Partial Fulfillment of the Requirements For the Degree of DOCTOR OF PHILOSOPHY, In the Graduate College, THE UNIVERSITY OF ARIZONA, 1985

THE UNIVERSITY OF ARIZONA GRADUATE COLLEGE

As members of the Final Examination Committee, we certify that we have read the dissertation prepared by Eric Dietrich entitled "Computer Thought: Propositional Attitudes and Meta-Knowledge" and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy. (Committee signatures and dates)

Final approval and acceptance of this dissertation is contingent upon the candidate's submission of the final copy of the dissertation to the Graduate College. I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement. (Director's signature and date)

STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at The University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library. Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.

ACKNOWLEDGMENTS

I wish to thank Kent Bach, Myles Brand, Rob Cummins, Chris Fields, Ron Sauers, and Kathy Yagel for help in understanding the matters discussed herein. I also wish to thank the Clipr staff and Lyle Bourne for computers and funding. Funding was also provided by the Dietrich Family Foundation.

TABLE OF CONTENTS

ABSTRACT ... vii
1. INTRODUCTION ... 1
2. FOUR PROBLEMS FOR A COMPUTATIONAL THEORY OF PROPOSITIONAL ATTITUDES ... 10
   2.1 Introduction ... 10
   2.2 Functionalism ... 10
   2.3 A Taxonomy of Theories of the Mind ... 16
       2.3.1 An Introduction to Dennett's Intentional-System Theory ... 17
       2.3.2 Functional Theories that Deny the Existence of Propositional Attitudes ... 21
       2.3.3 Causal Functionalism and Propositional Attitudes ... 28
   2.4 The Four Problems ... 33
3. SEMANTICS AND MENTAL REPRESENTATION ... 39
   3.1 Introduction ... 39
   3.2 Representation in Computers ... 41
       3.2.1 Data Structures ... 42
       3.2.2 Knowledge Representations: Definitions and Examples ... 46
       3.2.3 The Functional Definition of Knowledge Representations ... 58
   3.3 The Constraints on Intelligent Computer Thought ... 61
       3.3.1 Definitions of Computational Efficiency and Expressive Power ... 62
       3.3.2 On the Search for Universal and Sciential Schemes ... 71
   3.4 Two Theories of Mental Representation ... 80
       3.4.1 Non-mental Representation ... 83
       3.4.2 Mental Representation ... 87
   3.5 Conclusion ... 103
4. A THEORY OF PROPOSITIONAL ATTITUDES ... 106
   4.1 Introduction ... 106
   4.2 Computational Functionalism ... 109
       4.2.1 Propositional Attitudes and Computational Relations ... 110
       4.2.2 Programs, Algorithms, and Functions ... 113
       4.2.3 What Propositional Attitudes Are ... 116
   4.3 Propositional Attitudes and Stephen Stich ... 130
       4.3.1 The Autonomy Principle ... 131
       4.3.2 An Argument for the Autonomy Principle ... 136
       4.3.3 The Tension between the Autonomy Principle and Propositional Attitudes ... 141
   4.4 Conclusion ... 154
5. MACHINES WITH MIND AND CONSCIOUSNESS ... 156
   5.1 Introduction ... 156
   5.2 Implicit Information ... 158
       5.2.1 Control-Implicit Information ... 160
       5.2.2 Domain-Implicit Information ... 161
   5.3 Meta-Level Processing ... 163
       5.3.1 Strategic Meta-Knowledge ... 166
       5.3.2 Descriptive Meta-Knowledge ... 167
       5.3.3 Systemic Meta-Knowledge ... 168
   5.4 Ip-Intelligence ... 169
       5.4.1 Abstract Data Types and Implicit Information ... 173
       5.4.2 Content and Implicit Information ... 178
   5.5 Ip-Intelligence, Propositional Attitudes, Consciousness, and Mind ... 194
BIBLIOGRAPHY ... 206

ABSTRACT

Though artificial intelligence scientists frequently use words such as "belief" and "desire" when describing the computational capacities of their programs and computers, they have completely ignored the philosophical and psychological theories of belief and desire. Hence, their explanations of computational capacities which use these terms are frequently little better than folk-psychological explanations. Conversely, though philosophers and psychologists attempt to couch their theories of belief and desire in computational terms, they have consistently misunderstood the notions of computation and computational semantics. Hence, their theories of such attitudes are frequently inadequate.

A computational theory of propositional attitudes (belief and desire) is presented here. It is argued that the theory of propositional attitudes put forth by philosophers and psychologists entails that propositional attitudes are a kind of abstract data type. This refined computational view of propositional attitudes bridges the gap between artificial intelligence, philosophy and psychology. Lastly, it is argued that this theory of propositional attitudes has consequences for meta-processing and consciousness in computers.

CHAPTER I: INTRODUCTION

I have an acquaintance whose vocation is artificial intelligence. His research involves a sort of robot--a computer equipped with a camera and one manipulator arm--and some colored blocks. The robot arranges the blocks in any manner one wishes: told to pick up a red block and put it on a green one, the robot scans its work area for a red block, extends its arm and grasps the block; then, finding a green block, it puts the former on the latter.

Recently, this acquaintance of mine discovered a mouse in his home kitchen and, characteristically, built an electronic mousetrap which could sense a mouse, exterminate it, and then signal that it had a dead mouse in its chamber. He placed this mousetrap out on the floor in a corner of his kitchen. A few mornings later, awakened by the signal, he found the mouse caught in the trap, dead.

When I heard of this incident a few days later, I remarked that there seemed to be little difference between his mousetrap and the robot in his laboratory. He disagreed, claiming that, in fact, his robot reasoned but the mousetrap did not. The trap was merely an electro-mechanical device obeying the laws of physics. His robot, on the other hand, he claimed, had thoughts, in particular beliefs, because it could perform a cluster of processes and actions associated with manipulating an environment consisting of itself and various blocks. The mousetrap had no beliefs or thoughts of any kind.
As an example, he said, his robot could believe that a red cube was located at such and such a position relative to it, that a block it had picked up was red, that it had been requested to pick up a red block, that there were such things as blocks, and so forth. I asked him what would be required to turn the mousetrap into a thinking mousetrap as opposed to a merely well-functioning physical device. His answer was quite typical of researchers in artificial intelligence. The essential ingredient, he claimed, was a capacity to represent. A thinking mousetrap would need representations and processes defined over them. According to him, computers can have thoughts because they have internal structures the processing of which can count as adding 2 and 2, parsing "please pick up a red block," etc.
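The dissertation's central claim, previewed in the abstract above, is that propositional attitudes are a kind of abstract data type. As a purely illustrative sketch of what that claim amounts to (the names Belief and BlocksWorldRobot are hypothetical, not the dissertation's own notation), a belief can be modeled as a content-bearing structure together with the operations defined over it, so that the robot's "believing that a particular block is red" consists in holding such a structure and deploying it in routines like the stacking task described above:

```python
from dataclasses import dataclass, field

# Illustrative only: a propositional attitude treated as an abstract data type,
# i.e., a content-bearing structure plus the operations defined over it.
# All names here (Belief, BlocksWorldRobot, ...) are hypothetical.

@dataclass(frozen=True)
class Belief:
    predicate: str    # e.g. "color"
    args: tuple       # e.g. ("block7", "red")

@dataclass
class BlocksWorldRobot:
    beliefs: set = field(default_factory=set)

    def perceive(self, predicate: str, *args) -> None:
        """Scanning the work area adds content-bearing structures."""
        self.beliefs.add(Belief(predicate, args))

    def believes(self, predicate: str, *args) -> bool:
        """An operation defined over the representation: query by content."""
        return Belief(predicate, args) in self.beliefs

    def find(self, color: str):
        """Return some block the robot believes to have the given color."""
        for b in self.beliefs:
            if b.predicate == "color" and b.args[1] == color:
                return b.args[0]
        return None

    def stack(self, color_x: str, color_y: str) -> str:
        """'Pick up a red block and put it on a green one', schematically."""
        x, y = self.find(color_x), self.find(color_y)
        if x is None or y is None:
            return "cannot comply"
        self.beliefs.add(Belief("on", (x, y)))
        return f"{x} placed on {y}"

robot = BlocksWorldRobot()
robot.perceive("color", "block7", "red")
robot.perceive("color", "block2", "green")
print(robot.believes("color", "block7", "red"))   # True
print(robot.stack("red", "green"))                # block7 placed on block2
```

On this picture, the difference the acquaintance insists on is that the robot, unlike the mousetrap, has internal structures of this kind together with processes defined over them.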
Recommended publications
  • The Intersubjective Grounding of Language in Bodily Mimesis (Jordan Zlatev)
    Love Reign O'er Me: The intersubjective grounding of language in bodily mimesis. Jordan Zlatev.
    "The beach is a place where a man can feel, he's the only soul in the world that's real." (The Who)
    1. Introduction
    Walking on the stony beach of Brighton, with the long and powerful waves crashing to my side, with sea foam and skirts of rain beginning to drench me, I suddenly knew how I was to introduce my plenary lecture at the "New Directions" conference, the one that was going to make me a "star", in a small community, within a small field in the self-obsessed world of Academia, while the world is busy destroying itself through capitalist competition, environmental depletion and imperialist warfare. As a teenager in Bulgaria in the early 80s, I had inherited from my elder brother the double LP of The Who's "rock opera" Quadrophenia, telling the story of an English "mod" who, rejected by society, his family, his girlfriend and his mates, finally uses his last money for a ticket to Brighton and a bottle of gin. He walks on the empty beach, feeling like "the only soul in the world that's real", and finally steals a boat and takes it to a Rock somewhere off the coast, then lets the boat drift off, and sits in the "pissing rain" waiting for the tide to come and take him, remembering his life... The topic of my lecture, and of this chapter, is that which arguably makes us human beings, separating us from all other species: our extreme capacity for intersubjectivity, the sharing and understanding of others' experiences.
  • Minds, Machines and Qualia: A Theory of Consciousness
    Minds, Machines and Qualia: A Theory of Consciousness, by Christopher Williams Cowell, A.B. (Harvard University) 1992. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Philosophy in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor John R. Searle (Chair), Professor Hans D. Sluga, Professor John F. Kihlstrom. Spring 2001. Copyright 2001 by Christopher Williams Cowell.
    Abstract
    It is clear that there is a problem of consciousness; it is less clear what that problem is. In chapter one I discuss what it might be, bracket off some especially intractable issues, and present two central questions. First, what is the nature of consciousness? Second, what are the prospects for producing consciousness in machines? I then look at various ways one might approach these questions.
    Chapter two focuses on the nature of consciousness. My definition of consciousness centers on qualia, a concept that I discuss in detail. I show that consciousness can be thought of as having three aspects: intransitive creature consciousness, transitive creature consciousness, and state consciousness. The relations between these three facets are explored.
    Chapter three expands on two issues raised in chapter two. First, I argue that qualia are present not just in sense perception but also in ordinary propositional thought.
  • Non-Theosophical Ideas about the Mind, the Brain & Consciousness
    A Compendium of Non-Theosophical Ideas about the Mind, the Brain & Consciousness
    RENÉ DESCARTES: THE MIND-BODY DISTINCTION [From: The Internet Encyclopedia of Philosophy (IEP)]
    One of the deepest and most lasting legacies of Descartes' philosophy is his thesis that mind and body are really distinct—a thesis now called "mind-body dualism." He reaches this conclusion by arguing that the nature of the mind (that is, a thinking, non-extended thing) is completely different from that of the body (that is, an extended, non-thinking thing), and therefore it is possible for one to exist without the other. This argument gives rise to the famous problem of mind-body causal interaction still debated today: how can the mind cause some of our bodily limbs to move (for example, raising one's hand to ask a question), and how can the body's sense organs cause sensations in the mind when their natures are completely different? This article examines these issues as well as Descartes' own response to this problem through his brief remarks on how the mind is united with the body to form a human being. This will show how these issues arise because of a misconception about Descartes' theory of mind-body union, and how the correct conception of their union avoids this version of the problem. The article begins with an examination of the term "real distinction" and of Descartes' probable motivations for maintaining his dualist thesis. …
    CHRISTOF KOCH: WHAT IS CONSCIOUSNESS? [Scientific American, June 1, 2018] https://www.scientificamerican.com/article/what-is-consciousness/
    Consciousness is everything you experience.
  • UBCWPL University of British Columbia Working Papers in Linguistics
    UBCWPL: University of British Columbia Working Papers in Linguistics. Papers for the Interlocution Workshop. Interlocution: Linguistic structure and human interaction. Edited by Anita Szakay, Connor Mayer, Beth Rogers, Bryan Gick and Joel Dunham. Volume 24, July 2009.
    Papers for the Interlocution Workshop, "Interlocution: Linguistic structure and human interaction," Vancouver, British Columbia, Canada, May 15-17, 2009. Hosted by the Department of Linguistics at the University of British Columbia.
    UBCWPL is published by the graduate students of the University of British Columbia. We feature current research on language and linguistics by students and faculty of the department, and we are the regular publishers of two conference proceedings: the Workshop on Structure and Constituency in Languages of the Americas (WSCLA) and the International Conference on Salish and Neighbouring Languages (ICSNL). If you have any comments or suggestions, or would like to place orders, please contact: UBCWPL Editors, Department of Linguistics, Totem Field Studios, 2613 West Mall, V6T 1Z4. Tel: 604 822 4256. Fax: 604 822 9687. E-mail: <[email protected]>
    Since articles in UBCWPL are works in progress, their publication elsewhere is not precluded. All rights remain with the authors.
    Cover artwork by Lester Ned Jr. Contact: Ancestral Native Art Creations, 10704 #9 Highway, Compt. 376, Rosedale, BC V0X 1X0. Phone: (604) 793-5306. Fax: (604) 794-3217. Email: [email protected]
    TABLE OF CONTENTS
    Preface ... v
    BRYAN GICK, Interlocution: an overview ... 1
    FREDERICK J.
  • A Computer Model of Creativity Based on Perceptual Activity Theory
    A Computer Model of Creativity Based on Perceptual Activity Theory
    Author: Blain, Peter J. Published: 2007. Thesis Type: Thesis (PhD Doctorate). School: School of Information and Communication Technology. DOI: https://doi.org/10.25904/1912/134
    Copyright Statement: The author owns the copyright in this thesis, unless stated otherwise. Downloaded from http://hdl.handle.net/10072/366782 (Griffith Research Online, https://research-repository.griffith.edu.au)
    A Computer Model of Creativity Based on Perceptual Activity Theory. Peter J. Blain, School of Information & Communication Technology, Griffith University. Submitted in fulfilment of the requirements of the degree of Doctor of Philosophy, October 2006.
    Abstract
    Perception and mental imagery are often thought of as processes that generate internal representations, but proponents of perceptual activity theory say they are better thought of as guided exploratory activities. The omission of internal representations in the perceptual activity account has led some to see it as computationally implausible. This thesis clarifies perceptual activity theory from a computational perspective, and tests its viability using a computer model called PABLO. The computer model operates in the Letter Spirit domain, which is a framework for creating stylistic variations on the lowercase letters of the Roman alphabet. PABLO is unlike other computer models of perception and mental imagery because it does not use data-structures to represent percepts and mental images. Mental contents are instead modelled in terms of the exploratory activity in which perceptual activity theory says they consist. PABLO also models the flexibility of imagery, and simulates how it can be harnessed and exploited by the system to generate a creative product.
  • Evidence for Information Processing in the Brain
    Evidence for Information Processing in the Brain. Marc Burock.
    Abstract
    Many cognitive and neuroscientists attempt to assign biological functions to brain structures. To achieve this end, scientists perform experiments that relate the physical properties of brain structures to organism-level abilities, behaviors, and environmental stimuli. Researchers make use of various measuring instruments and methodological techniques to obtain this kind of relational evidence, ranging from single-unit electrophysiology and optogenetics to whole brain functional MRI. Each experiment is intended to identify brain function. However, seemingly independent of experimental evidence, many cognitive scientists, neuroscientists, and philosophers of science assume that the brain processes information as a scientific fact. In this work we analyze categories of relational evidence and find that although physical features of specific brain areas selectively covary with external stimuli and abilities, and that the brain shows reliable causal organization, there is no direct evidence supporting the claim that information processing is a natural function of the brain. We conclude that the belief in brain information processing adds little to the science of cognitive science and functions primarily as a metaphor for efficient communication of neuroscientific data.
    Keywords: brain function; cognitive science; experiments; information theory; neuroimaging; neuroscience evidence; philosophy of science
    1. Introduction
    Many of us believe that the brain processes information. Bechtel and Richardson (2010), as philosophers of cognitive science, consider it uncontroversial that … experimental evidence. Like the physicist who can back up the proposition "protons have spin" with a presentation of the experimental evidence, we expect that the cognitive scientist should be able to do the same regarding a statement about the brain.
  • Consciousness As a Phenomenon of Memory
    Journal of Consciousness Exploration & Research | August 2020 | Volume 11 | Issue 5 | pp. 438-453
    Article: Consciousness As a Phenomenon of Memory. Henry Grynnsten*
    Abstract
    Consciousness has been a mystery for thousands of years. The idea behind this article is that consciousness can be explained by way of the phenomenon of déjà vu. This state seems to come about when a synchronization error appears in sense signals going to the brain. From déjà vu, where there seems to be a memory of the present moment, one can work out that consciousness is a function of memory. Data from the senses are separate, while we see the world as a total experience. This means that the data are bound together in some way. Human consciousness can be explained as a phenomenon that arises when sense data are repeated at a short interval in the brain. The reason that humans have consciousness may have to do with language learning. Consciousness possibly appeared simultaneously with language, in a short span of time, and it is consciousness that makes human language possible. There was such a great advantage to using language that all Homo sapiens acquired it.
    Keywords: Consciousness, memory, phenomenon, language, human.
    1. Introduction
    The question of consciousness has been described as a hard problem, a term famously coined by David Chalmers [1]. It often seems that the subjective experience of consciousness influences our theories about it, so that it becomes hard to put your finger on it. Philosophers have come up with various theories through the ages.
  • Consciousness Studies
    Consciousness Studies. Wikibooks.org, March 19, 2013.
    On the 28th of April 2012 the contents of the English as well as German Wikibooks and Wikipedia projects were licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. A URI to this license is given in the list of figures on page 321. If this document is a derived work from the contents of one of these projects and the content was still licensed by the project under this license at the time of derivation, this document has to be licensed under the same, a similar or a compatible license, as stated in section 4b of the license. The list of contributors is included in chapter Contributors on page 319. The licenses GPL, LGPL and GFDL are included in chapter Licenses on page 329, since this book and/or parts of it may or may not be licensed under one or more of these licenses, and thus require inclusion of these licenses. The licenses of the figures are given in the list of figures on page 321.
    This PDF was generated by the LaTeX typesetting software. The LaTeX source code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from the PDF file, we recommend the use of the http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility or clicking the paper clip attachment symbol on the lower left of your PDF viewer and selecting Save Attachment. After extracting it from the PDF file you have to rename it to source.7z. To uncompress the resulting archive we recommend the use of http://www.7-zip.org/.
  • Psychological Concepts in Cognitive Neuroscience: Some Remarks on Bennett & Hacker’S Philosophical Foundations of Neuroscience
    PSYCHOLOGICAL CONCEPTS IN COGNITIVE NEUROSCIENCE: SOME REMARKS ON BENNETT & HACKER'S PHILOSOPHICAL FOUNDATIONS OF NEUROSCIENCE
    Marcelo Carvalho, Universidade Federal de São Paulo
    ABSTRACT: The use of psychological concepts in cognitive neuroscience is heavily criticized by Bennett & Hacker's Philosophical Foundations of Neuroscience. The central objection points to neuroscience's attribution to the brain of psychological concepts that are meaningful only when applied to the entire being. That is supposedly the case of "seeing," "communicating," and "reading." Bennett & Hacker identify in such attributions what they call a mereological fallacy. The critical revision of Bennett & Hacker's argument is an opportunity to present the debate about philosophy and psychological neuroscience and outline a Wittgensteinian perspective about the meaning of psychological concepts, its interest, and its relevance to scientific research.
    KEYWORDS: Cognitive Neuroscience. Philosophy of Mind. Wittgenstein.
    RESUMO: The use of psychological concepts in cognitive neuroscience is strongly criticized by Bennett & Hacker in Philosophical Foundations of Neuroscience. Their central objection is directed at neuroscience's attribution to the brain of psychological concepts that are meaningful only when applied to the whole being. That is supposedly the case of "seeing," "communicating," and "reading." Bennett & Hacker identify in such attributions what they call a mereological fallacy. The critical review of Bennett & Hacker's argument is an opportunity to present the debate on philosophy and psychological neuroscience and to outline a Wittgensteinian perspective on the meaning of psychological concepts, its interest, and its relevance to scientific research.
    PALAVRAS-CHAVE: Cognitive Neuroscience. Philosophy of Mind. Wittgenstein.
    PROMETHEUS – N. 33 – May–August 2020 – E-ISSN: 2176-5960
  • Plato (427-347 BC)
    Consciousness Studies, Edition 2.0. A Wikibook: http://en.wikibooks.org/wiki/Consciousness_studies
    Table of Contents
    Authors ... 5
    Introduction ... 6
    A note on Naive Realism ... 7
    Other uses of the term "Consciousness" ... 8
    Intended audience and how to read this book ... 9
    Part I: Historical Review ... 10
    Early Ideas ... 10
    Aristotle (c. 350 BC), On the Soul ... 10
    Homer (c. 800-900 BC), The Iliad and Odyssey ... 12
    Plato (427-347 BC) ... 13
    Siddhartha Gautama (c. 500 BC), Buddhist Texts ... 15
    Seventeenth and Eighteenth Century
  • Intersubjectivity, Mimetic Schemas and the Emergence of Language
    Intellectica, 2007/2-3, 46-48, pp. 123-151
    Intersubjectivity, Mimetic Schemas and the Emergence of Language. Jordan Zlatev.
    RESUME: Intersubjectivity, mimetic schemas and the emergence of language. The central argument of this article is that intersubjectivity constitutes an essential property of the human mind. The first part develops this argument by contrasting the question of intersubjectivity with the question of theory of mind, which is the very basis of the classical theory of social cognition. The second part of the article proposes a particular version of the thesis of the primacy of the shared mind, a version based on the notion of bodily mimesis, that is, the capacity to use our bodies to feel the emotions of others, to understand their intentions, and finally to understand and express communicative intentions. In the first instance, bodily mimesis takes place between people (and, to a lesser degree, other highly evolved animals such as great apes and dolphins), but it is progressively internalized in the form of mimetic schemas (Zlatev 2005, 2007); these mimetic schemas are preverbal concepts, certain properties of which help to explain the emergence of language as a conventional and normative semiotic system. Dialectically, intersubjectivity is the precondition for the emergence of such a semiotic system, which in turn develops and reconfigures it, thus making human beings the quintessentially "intersubjective species". Finally, the article proposes a partially new way of explaining autism.
    MOTS CLE: social cognition, imitation, bodily mimesis, representation, consciousness, autism
    ABSTRACT: In this article I argue that intersubjectivity constitutes an essential characteristic of the human mind.
  • Yes, AI Can Match Human Intelligence
    Yes, AI Can Match Human Intelligence. William J. Rapaport. Department of Computer Science and Engineering, Department of Philosophy, Department of Linguistics, and Center for Cognitive Science, University at Buffalo, The State University of New York, Buffalo, NY 14260-2500. [email protected], http://www.cse.buffalo.edu/~rapaport/. August 28, 2021.
    Abstract
    This is a draft of the "Yes" side of a proposed debate book, Will AI Match (or Even Exceed) Human Intelligence? (Routledge).[1] The "No" position will be taken by Selmer Bringsjord, and will be followed by rejoinders on each side. AI should be considered as the branch of computer science that investigates whether, and to what extent, cognition is computable. Computability is a logical or mathematical notion. So, the only way to prove that something—including (some aspect of) cognition—is not computable is via a logical or mathematical argument. Because no such argument has met with general acceptance (in the way that other proofs of non-computability, such as the Halting Problem, have been generally accepted), there is no logical reason to think that AI won't eventually match human intelligence. Along the way, I discuss the Turing Test as a measure of AI's success at showing the computability of various aspects of cognition, and I consider the potential roadblocks set by consciousness, qualia, and mathematical intuition.
    [1] Our preferred title is Will AI Succeed?
    Contents
    1 Yes, (Real) Intelligence (Probably) Is (Artificially) Computable ... 5
    2 What Is AI? ... 6
    2.1 A Very, Very Brief History of AI ... 6
    2.1.1 Turing on "Machine" Intelligence ...
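Rapaport's point of comparison, the Halting Problem, is Turing's diagonal argument that no program can decide, for every program and input, whether that program halts. A minimal sketch of that standard argument follows (in Python, with the hypothetical function halts standing in for the decider the proof shows cannot exist; this is not code from Rapaport's draft):

```python
# Sketch of the classical Halting Problem diagonal argument.
# Suppose, for contradiction, that a total, correct decider `halts(prog, arg)`
# existed, reporting whether prog(arg) eventually terminates.

def halts(prog, arg) -> bool:
    # Hypothetical decider; the argument below shows it cannot exist.
    raise NotImplementedError("no total, correct halting decider is possible")

def diag(prog):
    """Do the opposite of whatever `halts` predicts prog does on itself."""
    if halts(prog, prog):
        while True:      # loop forever if prog(prog) is said to halt
            pass
    else:
        return           # halt immediately if prog(prog) is said to loop

# Feeding diag to itself is contradictory either way:
# if halts(diag, diag) is True, then diag(diag) loops forever, so the answer was wrong;
# if halts(diag, diag) is False, then diag(diag) halts at once, so the answer was wrong.
# Hence no such `halts` can be implemented. This is the kind of logical or
# mathematical argument Rapaport says would be needed to show that cognition
# is not computable, and none has been generally accepted.
```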