Intentional Systems and the Artificial Intelligence (AI) Hermeneutic Network: Agency and Intentionality in Expressive Computational Systems


INTENTIONAL SYSTEMS AND THE ARTIFICIAL INTELLIGENCE (AI) HERMENEUTIC NETWORK: AGENCY AND INTENTIONALITY IN EXPRESSIVE COMPUTATIONAL SYSTEMS

A Thesis Presented to The Academic Faculty by Jichen Zhu, in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the School of Literature, Culture, and Communication, Georgia Institute of Technology, August 2009

Approved by:
Dr. D. Fox Harrell, Advisor, School of Literature, Culture, and Communication, Georgia Institute of Technology
Dr. Jay D. Bolter, School of Literature, Culture, and Communication, Georgia Institute of Technology
Dr. Kenneth J. Knoespel, School of Literature, Culture, and Communication, Georgia Institute of Technology
Dr. Michael Mateas, Computer Science Department, University of California, Santa Cruz
Dr. Nick Montfort, Program in Writing and Humanistic Studies, Massachusetts Institute of Technology

Date Approved: July 5, 2009

ACKNOWLEDGEMENTS

I am extremely fortunate to have the guidance and encouragement of my advisor, Fox Harrell. None of the work below would have been possible without his close attention and constant support, down to his painstaking comments. His vigor and open-mindedness have deeply influenced this dissertation and my approach to research. I am also deeply grateful to the members of my committee, Jay Bolter, Ken Knoespel, Michael Mateas, and Nick Montfort. Michael and Ken were among the first to introduce me to the concept of "expressive AI" and to work in science studies, respectively, both of which orient the foundation of this dissertation. Jay and Nick provided valuable comments and advice at the times when I needed them the most. All of them have left firm stamps on this dissertation.

I would especially like to thank the following individuals, who offered discussions, comments, and proofreading at crucial stages of my dissertation: Kurt Belgum, Sooraj Bhat, Jill Coffin, Steve Hodges, Madhur Khandelwal, Özge Samanci, and Geoff Thomas. Many thanks to Santi Ontañón, in particular, for sharing his computer science perspectives and for offering constant support throughout the entire process. Their support has made this dissertation significantly better. I have also benefited greatly from the weekly discussions with the current and past members of the Imagination, Computation, and Expression (ICE) Lab, especially Kenny Chow, Ben Medler, Donna Sammander, Tonguc Sezen, Digdem Sezen, and Daniel Upton.

I am indebted to the diverse, supportive communities in the Digital Media Program and Georgia Tech at large. I have benefited a great deal from my teachers, particularly Carl DiSalvo, Charles Isbell, Janet Murray, Nancy Nersessian, Ashwin Ram, Mark Riedl, Eugene Thacker, Bruce Walker, and Lisa Yaszek. I have learned as much from my fellow colleagues. At the risk of leaving out many names, I would like to thank Calvin Ashmore, Steven Dow, Clara Fernandez, Riccardo Fusaroli, Sergio Goldenberg, Mike Helms, David Jimison, Hartmut Koenitz, Hyun-Jean Lee, Brian Schrank, and Brian Sherwell. I also appreciate the administrative and technical support I received from Matthew Mcintyre, and from Grantley Bailey and Melanie Richard, respectively.
Many thanks to my friends here at Georgia Tech and in Atlanta, many of whom have been mentioned above. Their company has provided balance to my life and made these five years some of the best for me. I am equally grateful to my teachers, colleagues, and friends from Pittsburgh, Montreal, and Shanghai. Finally, I would like to thank my family, particularly my mother, for their unconditional support.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
SUMMARY

I INTRODUCTION: INTENTIONAL SYSTEMS
  1.1 The ELIZA Effect
  1.2 Intentional Systems
  1.3 Motivations
  1.4 A Brief Account of Memory, Reverie Machine
  1.5 Overview

II THEORETICAL FRAMEWORK
  2.1 Perspectives on Computing
    2.1.1 The Computer as a Stand-Alone Thinking Machine
    2.1.2 The Computer as a Tool
    2.1.3 The Computer as a Medium
    2.1.4 Messages of Social Struggle
    2.1.5 Narcissus' Mirror
  2.2 System Intentionality
    2.2.1 The Chinese Room Argument
    2.2.2 The Intentional Stance
    2.2.3 The Ghost outside the Machine: Social Perspectives
  2.3 Reading Systems
    2.3.1 Hermeneutics
    2.3.2 Software Studies
    2.3.3 Cognitive Semantics
    2.3.4 AI and HCI Analyses
  2.4 AI-based Interactive Narrative
    2.4.1 Expressive AI Practice
    2.4.2 Computational Narrative

III THE AI HERMENEUTIC NETWORK
  3.1 Intentional Systems as Text
    3.1.1 What is Text?
    3.1.2 A Hermeneutic Framework
  3.2 Constructing the Discursive Machine
    3.2.1 Lessons from Alife: Discursive Strategies
    3.2.2 The "Epidemic" of Intentional Vocabulary
    3.2.3 The Code Level
    3.2.4 The Presentation Level
    3.2.5 Constructing the Literature
    3.2.6 User's Hermeneutic Reading
  3.3 A Network Model

IV THE SECRET OF HAPPINESS: A CLOSE READING OF THE COPYCAT SYSTEM
  4.1 Method
    4.1.1 Principles of Hermeneutics
    4.1.2 An Integrated Method
  4.2 The Corpus and Potential Limitations
  4.3 The Technical-Social-Cultural Context of Copycat
  4.4 Content Analysis of the Technical Literature
    4.4.1 The Two Aspects of Copycat
    4.4.2 Intentional Vocabulary: Connecting the Two Languages
    4.4.3 Leveraging the Two Languages of AI
  4.5 Ideological Analysis

V AGENCY PLAY AS A SCALE OF INTENTIONALITY
  5.1 Scale of Intentionality and System Agency
    5.1.1 Scale of Intentionality
    5.1.2 System Agency
    5.1.3 System Agency in Interactive Narratives
  5.2 A Situated Approach to Agency
    5.2.1 A Dance of Agency
    5.2.2 Agency as Free Will
    5.2.3 Agency as Resistance
    5.2.4 Absence of Agency
  5.3 Agency Play as an Expressive Tool
    5.3.1 Agency Play
    5.3.2 Agency Relationship
    5.3.3 Agency Scope
    5.3.4 Agency Dynamics
    5.3.5 User Input Direction

VI MEMORY, REVERIE MACHINE: A CASE STUDY
  6.1 Motivation and Historical Context
    6.1.1 Machine Memories, Reveries and Daydreams
    6.1.2 Stream of Consciousness Literature and AI
    6.1.3 Stream of Consciousness Literature and Cognitive Linguistics
    6.1.4 Challenges of Engaging Legacy Forms
    6.1.5 Related Works
  6.2 Deployment of "Scale of Intentionality" and "Agency Play"
    6.2.1 Main Indicators of System Intentionality
    6.2.2 Examples of Various Levels of System Intentionality
  6.3 Major Components
    6.3.1 Dynamic Narration of Affect Using the Alloy Conceptual Blending Algorithm
    6.3.2 The Emotional State Machine
    6.3.3 Memory Structuring and Retrieval

VII CONCLUSION AND FUTURE WORK
  7.1 Revisiting the Major Arguments
    7.1.1 The Formation of System Intentionality
    7.1.2 Design Implications
  7.2 Contributions
  7.3 Future Directions

APPENDIX A: SAMPLE DATA FROM THE TECHNICAL LITERATURE ON COPYCAT

REFERENCES

LIST OF TABLES

1 Framework of Intentional Systems
2 Multiple Readings of ELIZA
3 AI Approach for Analyzing Systems
4 HCI Principles of Usability
5 Comparison between the AI and HCI Analyses
6 Timeline of Major AI Developments between 1980 and 1999
7 Usage of Intentional Vocabulary
8 Usage of Cognitive Faculties in Naming Functions and Structures
9 Comparison of Copycat with Humans and Other Forms of Life
10 Indicators of "Scale of Intentionality" in MRM
11 Use of Intentional Vocabulary from the Paper "The Copycat Project: A Model of Mental Fluidity and Analogy-making"

LIST OF FIGURES

1 George Lewis and His Voyager System
2 Original Transcript of ELIZA (Excerpt)
3 An Early Model of Tamagotchi
4 Four Characteristics of Intentional Systems (Prototype Model)
5 Screenshot of AARON's Output
6 Screenshot of a Simulation of Braitenberg Vehicles
7 Original Transcript of SHRDLU (Excerpt)
8 The Main Fragment of the Antikythera Mechanism
9 Screenshots of Heider and Simmel's 1944 Film Experiments
10 Roomba Vacuum Cleaner with "Spotty Leopard" Costume
11 Mechanical Turkish Chess Player by Baron Wolfgang von Kempelen in 1769
12 CD Cover Art of Cope's 1997 Album Classical Music Composed by Computer: Experiments in Musical Intelligence
13 The AI Hermeneutic Network
14 Sample Code of ELIZA from Norvig's Textbook Paradigms of Artificial Intelligence Programming
15 System Diagram of an Artificial Neural Network
16 A Partial Screenshot of the A.L.I.C.E. Artificial Intelligence Foundation's Website
17 A Network Model of the AI Hermeneutic Network
18 Screenshot of a Java Re-implementation of Copycat by Scott Bolland
19 A Comparison between the Intentional and Technical Descriptions of Copycat
20 Source Code for a Bottom-up Scout Codelet
21 Source Code for One Kind of "Happiness"
22 Occurrences of Intentional Vocabulary in "The Copycat Project: A Model of Mental Fluidity and Analogy-making"
23 Density of Intentional Vocabulary in "The Copycat Project: A Model of Mental Fluidity and Analogy-making"
24 Lumière Brothers' Early Film "Workers Leaving a Factory," Screened Publicly in Paris in 1895
Recommended publications
  • University of Navarra Ecclesiastical Faculty of Philosophy
    UNIVERSITY OF NAVARRA, ECCLESIASTICAL FACULTY OF PHILOSOPHY. Mark Telford Georges, THE PROBLEM OF STORING COMMON SENSE IN ARTIFICIAL INTELLIGENCE: Context in CYC. Doctoral dissertation directed by Dr. Jaime Nubiola. Pamplona, 2000. Table of Contents (excerpt): ABBREVIATIONS; INTRODUCTION; CHAPTER I: A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE; 1.1. THE ORIGIN AND USE OF THE TERM "ARTIFICIAL INTELLIGENCE"; 1.1.1. Influences in AI; 1.1.2. "Artificial Intelligence" in popular culture; 1.1.3. "Artificial Intelligence" in Applied AI; 1.1.4. Human AI and alien AI; 1.1.5. "Artificial Intelligence" in Cognitive Science; 1.2. TRENDS IN AI; 1.2.1. Classical AI ...
  • Abstract: This Paper Argues Against Daniel Dennett's Conception of What
    Abstract: This paper argues against Daniel Dennett's conception of what he labels the Intentional Stance. Daniel Dennett thinks that conceiving of human beings as Intentional Systems will reveal aspects of human action that a physical examination of the composition of their bodies will always fail to capture. I will argue against this claim by reexamining the Martian thought experiment. I will also analyze Dennett's response to the famous Knowledge argument in order to point out a seeming contradiction in his view, which will strengthen my point. Lastly, I will try to conclude with general remarks about my own view. Due to numerous advancements in the field of neurology, philosophers are now reconsidering their views on mental states. This, in turn, has led to a reconsideration of common views on action and intentionality in contemporary philosophy. In this paper I will try to defend a strict version of the Physicalist view of human nature, which states that everything about human action and intentionality can be explained by the study of physical matter, physical laws, and the physical history of humans. This will be attained by exploring the contrast between Daniel Dennett's Intentional and Physical stances, and by showing how the former "fails" in the face of the latter (contra Dennett), especially when considering the evolution of the human species. After that, I will try to point out an inconsistency in Dennett's view, especially in his treatment of Frank Jackson's Knowledge Argument. I will end with a brief conclusion about the view I am advocating. A central claim of Dennett's thesis is that if a system holds true beliefs and could be assigned beliefs and desires, then, given the proper amount of practical reasoning, that system's actions can be predicted with reasonable accuracy, and we can say, then, that the system is an intentional system to whom we can apply the intentional stance.
  • AI, Robots, and Swarms: Issues, Questions, and Recommended Studies
    AI, Robots, and Swarms: Issues, Questions, and Recommended Studies. Andrew Ilachinski, January 2017. Approved for Public Release; Distribution Unlimited. This document contains the best opinion of CNA at the time of issue. It does not necessarily represent the opinion of the sponsor. Distribution: Approved for Public Release; Distribution Unlimited. Specific authority: N00014-11-D-0323. Copies of this document can be obtained through the Defense Technical Information Center at www.dtic.mil or contact CNA Document Control and Distribution Section at 703-824-2123. Photography Credits: http://www.darpa.mil/DDM_Gallery/Small_Gremlins_Web.jpg; http://4810-presscdn-0-38.pagely.netdna-cdn.com/wp-content/uploads/2015/01/Robotics.jpg; http://i.kinja-img.com/gawker-edia/image/upload/18kxb5jw3e01ujpg.jpg. Approved by: January 2017, Dr. David A. Broyles, Special Activities and Innovation, Operations Evaluation Group. Copyright © 2017 CNA. Abstract: The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last "sea change," during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DoD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state-of-the-art of AI, machine-learning, and robot technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems. While no one can predict how AI will evolve or predict its impact on the development of military autonomous systems, it is possible to anticipate many of the conceptual, technical, and operational challenges that DoD will face as it increasingly turns to AI-based technologies.
  • Oral History Interview with Severo Ornstein and Laura Gould
    An Interview with SEVERO ORNSTEIN AND LAURA GOULD OH 258 Conducted by Bruce H. Bruemmer on 17 November 1994 Woodside, CA Charles Babbage Institute Center for the History of Information Processing University of Minnesota, Minneapolis Severo Ornstein and Laura Gould Interview 17 November 1994 Abstract Ornstein and Gould discuss the creation and expansion of Computer Professionals for Social Responsibility (CPSR). Ornstein recalls beginning a listserve at Xerox PARC for those concerned with the threat of nuclear war. Ornstein and Gould describe the movement of listserve participants from e-mail discussions to meetings and forming a group concerned with computer use in military systems. The bulk of the interview traces CPSR's organizational growth, fundraising, and activities educating the public about computer-dependent weapons systems such as proposed in the Strategic Defense Initiative (SDI). SEVERO ORNSTEIN AND LAURA GOULD INTERVIEW DATE: November 17, 1994 INTERVIEWER: Bruce H. Bruemmer LOCATION: Woodside, CA BRUEMMER: As I was going through your oral history interview I began to wonder about your political background and political background of the people who had started the organization. Can you characterize that? You had mentioned in your interview that you had done some antiwar demonstrations and that type of thing which is all very interesting given also your connection with DARPA. ORNSTEIN: Yes, most of the people at DARPA knew this. DARPA was not actually making bombs, you understand, and I think in the interview I talked about my feelings about DARPA at the time. I was strongly against the Vietnam War and never hid it from anyone. I wore my "Resist" button right into the Pentagon and if they didn't like it they could throw me out.
  • IAJ 10-3 (2019)
    Vol. 10, No. 3, 2019. Arthur D. Simons Center for Interagency Cooperation, The Lewis and Clark Center, 100 Stimson Ave., Suite 1149, Fort Leavenworth, Kansas. About The Simons Center: The Arthur D. Simons Center for Interagency Cooperation is a major program of the Command and General Staff College Foundation, Inc. The Simons Center is committed to the development of military leaders with interagency operational skills and an interagency body of knowledge that facilitates broader and more effective cooperation and policy implementation. About the CGSC Foundation: The Command and General Staff College Foundation, Inc., was established on December 28, 2005 as a tax-exempt, non-profit educational foundation that provides resources and support to the U.S. Army Command and General Staff College in the development of tomorrow's military leaders. The CGSC Foundation helps to advance the profession of military art and science by promoting the welfare and enhancing the prestigious educational programs of the CGSC. The CGSC Foundation supports the College's many areas of focus by providing financial and research support for major programs such as the Simons Center, symposia, conferences, and lectures, as well as funding and organizing community outreach activities that help connect the American public to their Army. All Simons Center works are published by the "CGSC Foundation Press." The CGSC Foundation is an equal opportunity provider. InterAgency Journal Features, Vol. 10, No. 3 (2019): "In the beginning..." Special Report by Robert Ulin; "Military Neuro-Interventions: Solving the Right Problems for Ethical Outcomes," Shannon E.
  • 1. A Dangerous Idea
    About This Guide: This guide is intended to assist in the use of the DVD Daniel Dennett, Darwin's Dangerous Idea. The following pages provide an organizational schema for the DVD along with general notes for each section, key quotes from the DVD, and suggested discussion questions relevant to the section. The program is divided into seven parts, each clearly distinguished by a section title during the program. Contents, Seven-Part DVD: A Dangerous Idea; Darwin's Inversion; Cranes: Getting Here from There; Fruits of the Tree of Life; Humans without Skyhooks; Gradualism; Memetic Revolution. Articles by Daniel Dennett: Could There Be a Darwinian Account of Human Creativity?; From Typo to Thinko: When Evolution Graduated to Semantic Norms; In Darwin's Wake, Where Am I? 1. A Dangerous Idea: Dennett considers Darwin's theory of evolution by natural selection the best single idea that anyone ever had. But it has also turned out to be a dangerous one. Science has accepted the theory as the most accurate explanation of the intricate design of living beings, but when it was first proposed, and again in recent times, the theory has met with a backlash from many people. What makes evolution so threatening, when theories in physics and chemistry seem so harmless? One problem with the introduction of Darwin's great idea is that almost no one was prepared for such a revolutionary view of creation. Dennett gives an analogy between this inversion and Sweden's change in driving direction: I'm going to imagine, would it be dangerous if tomorrow the people in Great Britain started driving on the right? It would be a really dangerous place to be because they've been driving on the left all these years….
  • Backmatter In
    Appendices: Bibliography. This book is a little like the previews of coming attractions at the movies; it's meant to whet your appetite in several directions, without giving you the complete story about anything. To find out more, you'll have to consult more specialized books on each topic. There are a lot of books on computer programming and computer science, and whichever I chose to list here would be out of date by the time you read this. Instead of trying to give current references in every area, in this edition I'm listing only the few most important and timeless books, plus an indication of the sources I used for each chapter. Computer science is a fast-changing field; if you want to know what the current hot issues are, you have to read the journals. The way to start is to join the Association for Computing Machinery, 1515 Broadway, New York, NY 10036. If you are a full-time student you are eligible for a special rate for dues, which as I write this is $25 per year. (But you should write for a membership application with the current rates.) The Association publishes about 20 monthly or quarterly periodicals, plus the newsletters of about 40 Special Interest Groups in particular fields. Read These! If you read no other books about computer science, you must read these two. One is an introductory text for college computer science students; the other is intended for a nonspecialist audience. Abelson, Harold, and Gerald Jay Sussman with Julie Sussman, Structure and Interpretation of Computer Programs, MIT Press, Second Edition, 1996.
  • Oliver Knill: March 2000. Literature: Peter Norvig, Paradigms of Artificial Intelligence Programming; Daniel Jurafsky and James Martin, Speech and Language Processing
    ENTRY ARTIFICIAL INTELLIGENCE [ENTRY ARTIFICIAL INTELLIGENCE] Authors: Oliver Knill, March 2000. Literature: Peter Norvig, Paradigms of Artificial Intelligence Programming; Daniel Jurafsky and James Martin, Speech and Language Processing. Adaptive Simulated Annealing [Adaptive Simulated Annealing] A language interface to a neural net simulator. artificial intelligence [artificial intelligence] (AI) is a field of computer science concerned with the concepts and methods of symbolic knowledge representation. AI attempts to model aspects of human thought on computers. Aspects of AI: computer vision • language processing • pattern recognition • expert systems • problem solving • robotics • optical character recognition • artificial life • grammars • game theory • Babelfish [Babelfish] Online translation system from Systran. Chomsky [Chomsky] Noam Chomsky is a pioneer in formal language theory. He is MIT Professor of Linguistics, Linguistic Theory, Syntax, Semantics and Philosophy of Language. Eliza [Eliza] One of the first programs to feature English output as well as input. It was developed by Joseph Weizenbaum at MIT. The paper appears in the January 1966 issue of the "Communications of the Association for Computing Machinery". Google [Google] A search engine emerging at the end of the 20th century. It has AI features: it not only answers questions by pointing to relevant webpages but can also do simple tasks such as arithmetic computations, unit conversions, reading the news, or finding pictures with some content. GPS [GPS] General Problem Solver. A program developed in 1957 by Allen Newell and Herbert Simon. The aim was to write a single computer program which could solve any problem. One reason why GPS was destined to fail is now at the core of computer science.
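    The ELIZA entry above describes a program built from keyword rules and canned response templates. As a minimal illustration only (this is not Weizenbaum's original implementation, and the keywords and templates below are invented for the example), a rule-based responder of that kind can be sketched in a few lines of Python:

      import random
      import re

      # Hypothetical ELIZA-style rules: each pattern captures the text after a
      # keyword, and "{0}" in a template is filled with that captured text.
      RULES = [
          (r"\bi need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
          (r"\bmy (mother|father)\b", ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
          (r"\bi am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
      ]
      DEFAULT_REPLIES = ["Please go on.", "Can you elaborate on that?"]

      def respond(utterance: str) -> str:
          """Return a canned reply by applying the first keyword rule that matches."""
          text = utterance.lower()
          for pattern, templates in RULES:
              match = re.search(pattern, text)
              if match:
                  return random.choice(templates).format(match.group(1))
          return random.choice(DEFAULT_REPLIES)

      print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
      print(respond("I am tired of work"))  # e.g. "How long have you been tired of work?"

    Weizenbaum's actual program added ranked keywords, pronoun reflection, and a memory of earlier inputs, but the ELIZA effect discussed in the thesis above arises even with rules this simple.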
  • The Evolution of Lisp
    The Evolution of Lisp. Guy L. Steele Jr., Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142, Phone: (617) 234-2860, FAX: (617) 243-4444, E-mail: [email protected]. Richard P. Gabriel, Lucid, Inc., 707 Laurel Street, Menlo Park, California 94025, Phone: (415) 329-8400, FAX: (415) 329-8480, E-mail: [email protected]. Abstract: Lisp is the world's greatest programming language, or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the "hacker culture" than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy's paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
  • An Investigation of the Search Space of Lenat's AM Program
    AN ABSTRACT OF THE THESIS OF Prafulla K. Mishra for the degree of Master of Science in Computer Science, presented on March 10, 1988. Title: An Investigation of the Search Space of Lenat's AM Program. Abstract approved: Thomas G. Dietterich. AM is a computer program written by Doug Lenat that discovers elementary mathematics starting from some initial knowledge of set theory. The success of this program is not clearly understood. This work is an attempt to explore the search space of AM in order to understand the success and eventual failure of AM. We show that operators play an important role in AM. Operators of AM can be divided into critical and unnecessary. The critical operators can be further divided into implosive and explosive. Heuristics are needed only for the explosive operators. Most heuristics and operators in AM can be removed without significant effects. Only three operators (Coalesce, Invert, Repeat) and one initial concept (Bag Union) are enough to discover most of the elementary math concepts that AM discovered. An Investigation of the Search Space of Lenat's AM Program, by Prafulla K. Mishra. A THESIS submitted to Oregon State University in partial fulfillment of the requirements for the degree of Master of Science. Completed March 10, 1988. Commencement June 1988. APPROVED: Professor of Computer Science in charge of major; Head of Department of Computer Science; Dean of Graduate School. Date thesis presented: March 10, 1988. Typed by Prafulla K. Mishra. ACKNOWLEDGEMENTS: I am indebted to Tom Dietterich for his help, guidance and encouragement throughout this work.
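    The abstract's claim that a few operators suffice can be made concrete with a small functional sketch. The Python below is not Lenat's frame-based Lisp implementation; under one common reading of the operators, Coalesce identifies the two arguments of an existing operation and Repeat iterates an operation, which is already enough to derive multiplication from addition and squaring from multiplication:

      from operator import add
      from typing import Callable

      def coalesce(op: Callable[[int, int], int]) -> Callable[[int], int]:
          """Coalesce: identify a binary operation's two arguments, f(x, y) -> f(x, x)."""
          return lambda x: op(x, x)

      def repeat(op: Callable[[int, int], int], identity: int) -> Callable[[int, int], int]:
          """Repeat: fold a binary operation n times over x, starting from an identity."""
          def repeated(x: int, n: int) -> int:
              acc = identity
              for _ in range(n):
                  acc = op(acc, x)
              return acc
          return repeated

      times = repeat(add, 0)     # multiplication as repeated addition
      square = coalesce(times)   # squaring by coalescing multiplication's arguments

      print(times(3, 4))   # 12
      print(square(5))     # 25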
  • System Intentionality and the Artificial Intelligence Hermeneutic Network
    System Intentionality and the Artificial Intelligence Hermeneutic Network: the Role of Intentional Vocabulary. Jichen Zhu and D. Fox Harrell, Georgia Institute of Technology, 686 Cherry Street, Suite 336, Atlanta, GA, USA. [email protected], [email protected]

    ABSTRACT: Computer systems that are designed explicitly to exhibit intentionality embody a phenomenon of increasing cultural importance. In typical discourse about artificial intelligence (AI) systems, system intentionality is often seen as a technical and ontological property of a program, resulting from its underlying algorithms and knowledge engineering. Influenced by hermeneutic approaches to text analysis and drawing from the areas of actor-network theory and philosophy of mind, this paper proposes a humanistic framework for analysis of AI systems stating that system intentionality is narrated and interpreted by its human creators and users. We pay special attention to the discursive strategies embedded in source code and technical literature of software systems that include such narration and interpretation.

    ... Lewis's interactive music system Voyager, Gil Weinberg & Scott Driscoll's robotic drummer Haile), visual arts (e.g., Harold Cohen's painting program AARON), and storytelling (e.g., Michael Mateas's drama manager in Façade). The relatively unexplored phenomenon of system intentionality also provides artists and theorists with novel expressive possibilities [26]. To fully explore the design space of intentional systems, whether in the forms of art installations, consumer products, or otherwise, requires a thorough understanding of how system intentionality is formed. Many technologists regard the phenomenon of system intentionality as a technical property of a system, directly proportional to the complexity of
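    One discursive strategy the abstract points to, intentional vocabulary embedded in source code, can be illustrated with a contrived fragment. The Python below is hypothetical and is not drawn from Copycat, Voyager, or any other system named above; it simply writes the same computation twice, once in neutral mechanism-level terms and once in the intentional language a developer might use to narrate it:

      # Hypothetical example: the same computation described two ways.

      # Neutral, mechanism-level vocabulary.
      def rule_match_ratio(matched_rules: int, total_rules: int) -> float:
          """Return the fraction of rules that matched the current input."""
          return matched_rules / total_rules if total_rules else 0.0

      # Intentional vocabulary: the identifiers narrate the system as having desires.
      def happiness(satisfied_desires: int, desires: int) -> float:
          """Return how 'happy' the agent is with its current situation."""
          return satisfied_desires / desires if desires else 0.0

      # Both functions compute the same number; only the narration differs,
      # which is the kind of discursive strategy the paper examines.
      print(rule_match_ratio(3, 4), happiness(3, 4))  # 0.75 0.75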
  • The Intentional Stance Toward Robots
    The Intentional Stance Toward Robots: Conceptual and Methodological Considerations. Sam Thellman ([email protected]) and Tom Ziemke ([email protected]), Department of Computer & Information Science, Linköping University, Linköping, Sweden.

    Abstract: It is well known that people tend to anthropomorphize in interpretations and explanations of the behavior of robots and other interactive artifacts. Scientific discussions of this phenomenon tend to confuse the overlapping notions of folk psychology, theory of mind, and the intentional stance. We provide a clarification of the terminology, outline different research questions, and propose a methodology for making progress in studying the intentional stance toward robots empirically. Keywords: human-robot interaction; social cognition; intentional stance; theory of mind; folk psychology; false-belief task.

    Introduction: The use of folk psychology in interpersonal interactions has been described as "practically indispensable" (Dennett, 1989, p.

    ... intentional stance toward a robot. Marchesi et al. (2018) developed a questionnaire-based method specifically for assessing whether people adopt the intentional stance toward robots. These studies all provide insight into people's folk-psychological theories about robots. However, none of them assessed how such theories affect people's predictions of behavior to shed light on the usefulness of taking the intentional stance in interactions with robots. Moreover, research that has so far explicitly addressed the intentional stance toward robots in many cases conflated the intentional stance with overlapping but different notions, such as folk psychology and theory of mind. In particular, the question whether it is useful for people to predict robot behavior by attributing it to mental states (what we in the present paper will call "the intentional stance question") tends to