On John McCarthy's 80th Birthday, in Honor of His Contributions


Patrick J. Hayes and Leora Morgenstern

Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

John McCarthy's contributions to computer science and artificial intelligence are legendary. He invented Lisp, made substantial contributions to early work in timesharing and the theory of computation, and was one of the founders of artificial intelligence and knowledge representation. This article, written in honor of McCarthy's 80th birthday, presents a brief biography, an overview of the major themes of his research, and a discussion of several of his major papers.

Introduction

Fifty years ago, John McCarthy embarked on a bold and unique plan to achieve human-level intelligence in computers. It was not his dream of an intelligent computer that was unique, or even first: Alan Turing (Turing 1950) had envisioned a computer that could converse intelligently with humans back in 1950; by the mid 1950s, there were several researchers (including Herbert Simon, Allen Newell, Oliver Selfridge, and Marvin Minsky) dabbling in what would be called artificial intelligence. What distinguished McCarthy's plan was his emphasis on using mathematical logic both as a language for representing the knowledge that an intelligent machine should have, and as a means for reasoning with that knowledge. This emphasis on mathematical logic was to lead to the development of the logicist approach to artificial intelligence, as well as to the development of the computer language Lisp.

John McCarthy's contributions to computer science and artificial intelligence are legendary. He revolutionized the use of computers with his innovations in timesharing; he invented Lisp, one of the longest-lived computer languages in use; he made substantial contributions to early work in the mathematical theory of computation; he was one of the founders of the field of artificial intelligence; and he foresaw the need for knowledge representation before the field of AI was even properly born.

[Figure 1: John McCarthy]

John McCarthy turned 80 on September 4, 2007. This article is a celebration of that milestone. It includes a glimpse of McCarthy's life, an overview of the major themes of his work, and a discussion of several of his major papers. The aim is to introduce a more complete picture of McCarthy's long-term research to the AI Magazine readership. We hope it will help readers appreciate the range and depth of innovation in McCarthy's research.

Background

McCarthy was born in Boston in 1927 to John Patrick and Ida Glatt McCarthy, immigrants from, respectively, Ireland and Lithuania. The Depression started a few years after his birth; McCarthy's parents lost their house, and the family — which now included a second child — became briefly peripatetic. They lived for a short while in New York and then in Cleveland before finally settling in Los Angeles, where the senior McCarthy worked as an organizer for the Amalgamated Clothing Workers.

Like many child prodigies, John McCarthy was partly self-educated. Due to childhood illness, he began school a year late, but he quickly made up the time on his own, skipped several grades, and wound up graduating from high school two years early. When he was a teenager he developed an interest in mathematics and decided he wanted to go to the California Institute of Technology. At fifteen, he bought the calculus textbook then used in the Cal Tech freshman calculus course and taught it to himself in its entirety. In the hubris of youth, he applied only to Cal Tech, writing a one-sentence statement of purpose on his college application: "I intend to be a professor of mathematics." When he arrived at Cal Tech, he discovered that the textbook for the course had been changed. But the self-study paid off. Before classes started, he bumped into one of the instructors for freshman calculus, asked him a detailed question related to one of the textbook problems on which he had been working, and showed him his notebook with worked-out problems. "I don't want you in my class," that instructor said, and arranged for him to be given credit for freshman calculus. The same thing happened when he met the instructor for sophomore calculus. Ultimately, he was plunked down, at the age of 16, in a graduate math class.

Birth of AI

After McCarthy received his Bachelor of Science from Cal Tech, he began his graduate studies there. For McCarthy, the intellectual ferment at the time was palpable. The excitement generated by the use of early computers to decode enemy messages — and thus help win World War II — was still running high. The theoretical foundations for computing had recently been laid by Church and Turing. There was a growing interest in the workings of the mind, in what would later be called cognitive science.

In September 1948, McCarthy went to the Hixon Symposium on Cerebral Mechanisms in Behavior (Jeffress 1951), a conference that joined together leading researchers in different areas related to cognitive science, including mathematicians Alan Turing and Claude Shannon, and psychologist Karl Lashley. As he listened to the discussions comparing computers and the brain, McCarthy had a watershed moment. From that time on, his chief interests related to the development of machines that could think like people. Indeed, some of the example problems that are present in McCarthy's papers, such as the monkeys and bananas problem, come from the Hixon Symposium: Karl Lashley described an ape pulling over a box and climbing on it in order to get a bunch of bananas, and McCarthy took that as a base level for intelligent reasoning.

McCarthy began to consider the possibility of constructing intelligent computers. This was all at a rather abstract level, since at that point, no computers were available to him. He conceived of two interacting finite automata, one representing the human brain, and one representing the environment. When he arrived at Princeton in 1949, he spoke to John von Neumann about his ideas; von Neumann was interested and told him to write them up. But McCarthy, upon further reflection, was dissatisfied with his original idea. He realized that he needed to somehow represent the knowledge in the human brain, and that a pair of finite automata, even if they could accurately represent the behavior of a human brain interacting with the environment, didn't have enough structure to represent human knowledge. Although he hadn't yet named it, McCarthy had at this point already identified the need for knowledge representation. Twenty years later, McCarthy was to formalize this intuition when he made the distinction between a metaphysically adequate representation and an epistemologically adequate representation (McCarthy & Hayes 1969). (McCarthy's ideas on interacting automata were independently invented and developed by Larry Fogel some years later; see (Fogel 1962).)

McCarthy finished his Ph.D. in two years, on a problem in partial differential equations that he had discovered. He stayed on for a couple of years at Princeton as an instructor. It was during this time that he met Marvin Minsky, who began his graduate studies at Princeton just as McCarthy was starting his instructorship. McCarthy and Minsky discovered a shared passion for research into constructing intelligent machines; the two were to collaborate on many projects over the next decade.

In 1952, McCarthy spent a summer working at Bell Labs. He approached Claude Shannon with the idea of collecting papers on the topic of intelligent machines. Claude Shannon, in addition to his seminal contributions in information theory, analog circuit design, and computational linguistics, had recently published a groundbreaking paper on how one could program a machine to play chess. Shannon agreed to collaborate on such a collection, but, recalls McCarthy, didn't want the title to be too provocative. As a result, they called the planned volume Automata Studies (Shannon & McCarthy 1956). As the papers began to come in, McCarthy (at this point an acting assistant professor at Stanford) was disappointed when he realized that the papers, while all related to automata studies in some way, had little to do with what he regarded as getting machines to do intelligent reasoning.

So he decided to nail the flag to the mast and be explicit about the sort of research and the sort of papers that he wanted to encourage. He therefore came up with "artificial intelligence" — a term that would make it clear that the goal was to construct machines that acted in a truly intelligent manner. It was this term that he used in 1955 when he began writing — with Minsky, Shannon, and Nathaniel Rochester, a researcher at IBM — the proposal to fund the first conference dedicated to the topic, the famous Dartmouth conference on artificial intelligence.

[...] project: figuring out how to get computers to do commonsense reasoning. The work was radically different from that of his AI colleagues in at least two ways. First, previous work in AI had focused on getting a computer to replicate activities that are challenging for humans, such as playing chess and proving theorems of mathematics. In contrast, McCarthy was concerned with mundane and seemingly trivial tasks, such as constructing a plan to get to the airport. Second, he was proposing using the tools of mathematical logic for proving something other than theorems in mathematical domains. His paper, "Programs With Common Sense" (McCarthy 1959) (often referred to as the Advice-Taker paper), articulated the need for a computer to be able to perform commonsense reasoning and the need for a computer to have a formal representation
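The airport example mentioned above is easy to sketch in the logicist style the paper advocates. The facts and rules below are purely illustrative (the predicate names are ours, not the axioms of the 1959 paper), but they convey how a plan can be read off a chain of inferences:

    at(I, desk) \quad at(desk, home) \quad at(car, home) \quad at(home, county) \quad at(airport, county)
    walkable(home) \quad drivable(county)
    \forall x\, \forall y\, \forall z\; [\, at(x, y) \wedge at(y, z) \rightarrow at(x, z) \,]
    \forall x\, \forall y\, \forall p\; [\, walkable(p) \wedge at(x, p) \wedge at(y, p) \rightarrow can(go(x, y, walking)) \,]
    \forall x\, \forall y\, \forall c\; [\, drivable(c) \wedge at(x, c) \wedge at(y, c) \rightarrow can(go(x, y, driving)) \,]

Chaining through the transitivity axiom gives at(I, home) and at(I, county), from which can(go(I, car, walking)) and can(go(I, airport, driving)) both follow; a program given the goal at(I, airport) can then read off the two-step plan of walking to the car and driving to the airport.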
Recommended publications
  • John McCarthy – Father of Artificial Intelligence
    Asia Pacific Mathematics Newsletter. John McCarthy – Father of Artificial Intelligence. V Rajaraman.

    Introduction. I first met John McCarthy when he visited IIT, Kanpur, in 1968. During his visit he saw that our computer centre, which I was heading, had two batch processing second generation computers — an IBM 7044/1401 and an IBM 1620 — both of which were being used for "production jobs". The IBM 1620 was used primarily to teach programming to all students of IIT, and the IBM 7044/1401 was used by research students and faculty, besides a large number of guest users from several neighbouring universities and research laboratories. There was no interactive computer available for computer science and electrical engineering students to do hardware and software research. McCarthy was a great believer in the power of time-sharing computers. In fact one of his first important contributions was a memo he wrote in 1957 urging the Director of the MIT Computer Centre to modify the IBM 704 into a time-sharing machine [1]. He later persuaded Digital Equipment Corporation (who made the first minicomputers and the PDP series of computers) to design a minicomputer with a time-sharing operating system.

    In this article we summarise the contributions of John McCarthy to Computer Science. Among his contributions are: suggesting that the best method of using computers is in an interactive mode, a mode in which computers become partners of users enabling them to solve problems. This logically led to the idea of time-sharing of large computers by many users and computing becoming a utility — much like a power utility.
  • HUMAN-TYPE COMMON SENSE NEEDS EXTENSIONS TO LOGIC, John McCarthy, Stanford University
    HUMAN-TYPE COMMON SENSE NEEDS EXTENSIONS TO LOGIC. John McCarthy, Stanford University.

    • Logical AI (artificial intelligence) is based on programs that represent facts about the world in languages of mathematical logic and decide what actions will achieve goals by logical reasoning. A lot has been accomplished with logic as is.

    • This was Leibniz's goal, and I think we'll eventually achieve it. When he wrote "Let us calculate", maybe he imagined that the AI problem would be solved and not just that of a logical language for expressing common sense facts. We can have a language adequate for expressing common sense facts and reasoning before we have the ideas needed for human-level AI.

    • It's a disgrace that logicians have forgotten Leibniz's goal, but there's an excuse. Non-monotonic reasoning is needed for common sense, but it can yield conclusions that aren't true in all models of the premises—just the preferred models.

    • Almost 50 years' work has gone into logical AI and its rival, AI based on imitating neurophysiology. Both have achieved some success, but neither is close to human-level intelligence.

    • The common sense informatic situation, in contrast to bounded informatic situations, is key to human-level AI.

    • First order languages will do, especially if a heavy duty axiomatic set theory is included, e.g. A × B, A^B, list operations, and recursive definition are directly included. To make reasoning as concise as human informal set-theoretic reasoning, many theorems of set theory need to be taken as axioms.
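    The contrast between "all models" and "preferred models" can be made concrete with the textbook flying-birds default (a standard illustration, not taken from the slides above):

        \forall x\; [\, bird(x) \wedge \neg ab(x) \rightarrow flies(x) \,] \qquad bird(Tweety)

    Classical consequence alone does not yield flies(Tweety), since there are models in which ab(Tweety) holds. A nonmonotonic policy that prefers models minimizing the extension of ab (circumscription is one such policy) does yield it, and the conclusion is withdrawn if penguin(Tweety) and \forall x\, [penguin(x) \rightarrow ab(x)] are later added.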
  • Cyc: a Midterm Report
    AI Magazine Volume 11 Number 3 (1990) (© AAAI). Articles.

    After explicating the need for a large commonsense knowledge base spanning human consensus knowledge, we report on many of the lessons learned over the first five years of attempting its construction. We have come a long way in terms of methodology, representation language, techniques for efficient inferencing, the ontology of the knowledge base, and the environment and infrastructure in which the knowledge base is being built. We describe the evolution of Cyc and its current state and close with a look at our plans and expectations for the coming five years, including an argument for how and why the project might conclude at the end of this time.

    We have come a long way in this time, and this article presents some of the lessons learned and a description of where we are and briefly discusses our plans for the coming five years. We chose to focus on technical issues in representation, inference, and ontology rather than infrastructure issues such as user interfaces, the training of knowledge enterers, or existing collaborations and applications of Cyc.

    The majority of work in knowledge representation has dealt with the technicalities of relating predicate calculus to other formalisms and with the details of various schemes for default reasoning. There has almost been an aversion to addressing the problems that arise in actually representing large bodies of knowledge with content. However, deep, important issues must be
  • Some Alternative Formulations of the Event Calculus
    In: "Computational Logic: Logic Programming and Beyond - Essays in Honour of Robert A. Kowalski", Lecture Notes in Artificial Intelligence, Vol. 2408, pub. Springer Verlag, 2002, pages 452–490. Some Alternative Formulations of the Event Calculus Rob Miller1 and Murray Shanahan2 1 University College London, London WC1E 6BT, U.K. [email protected] http://www.ucl.ac.uk/~uczcrsm/ 2 Imperial College of Science, Technology and Medicine, London SW7 2BT, U.K. [email protected] http://www-ics.ee.ic.ac.uk/~mpsha/ Abstract. The Event Calculus is a narrative based formalism for rea- soning about actions and change originally proposed in logic program- ming form by Kowalski and Sergot. In this paper we summarise how variants of the Event Calculus may be expressed as classical logic ax- iomatisations, and how under certain circumstances these theories may be reformulated as “action description language” domain descriptions using the Language . This enables the classical logic Event Calculus to E inherit various provably correct automated reasoning procedures recently developed for . E 1 Introduction The “Event Calculus” was originally introduced by Bob Kowalski and Marek Ser- got [33] as a logic programming framework for representing and reasoning about actions (or events) and their effects, especially in database applications. Since then many alternative formulations, implementations and applications have been developed. The Event Calculus has been reformulated in various logic program- ming forms (e.g. [11], [12], [21], [23], [29], [53], [58], [71], [72], [73],[74]), in classical logic (e.g. [62], [42], [43]), in modal logic (e.g. [2], [3], [4], [5], [6], [7]) and as an “action description language” ([21], [22]).
  • Conference Program
    Program & Exhibit Guide AAAI 96 Thirteenth National Conference on Artificial Intelligence IAAI 96 Eighth Conference on Innovative Applications of Artificial Intelligence KDD 96 Second International Conference on Knowledge Discovery and Data Mining Sponsored by the American Association for Artificial Intelligence August 2-8, 1996 • Portland, Oregon IAAI-96 Contents Program Chair: Howard E. Shrobe, Mas- Acknowledgements / 2 sachusetts Institute of Technology AAAI-96 Invited Talks / 26 Program Cochair: Ted E. Senator, National As- AAAI-96 Technical Program / 32 sociation of Securities Dealers Conference at a Glance / 8 Corporate Sponsorship / 2 KDD-96 Exhibition / 38 General Conference Chair: Usama M. Fayyad, Fellows / 2 Microsoft Research General Information / 3 Program Cochairs: Jiawei Han, Simon Fraser IAAI-96 Program / 22 University and Evangelos Simoudis, IBM KDD-96 Program / 9 Almaden Research Center Robot Competition and Exhibition / 43 Publicity Chair: Padhraic Smyth, University of SIGART/AAAI Doctoral Consortium / 30 California, Irvine Special Events/Programs / 30 Sponsorship Chair: Gregory Piatetsky-Shapiro, Special Meetings / 31 GTE Laboratories Technical Program Tuesday / 32 Demo Session and Exhibits Chair: Tej Anand, Technical Program Wednesday / 34 NCR Corporation Technical Program Thursday / 36 A complete listing of the AAAI-96, IAAI-96 Tutorial Program / 20 and KDD-96 Program Committee members Workshop Program / 19 appears in the AAAI-96/IAAI-96 and KDD-96 Proceedings. Thanks to all! Acknowledgements The American Association
  • Applications of Circumscription to Formalizing Common Sense Knowledge
    APPLICATIONS OF CIRCUMSCRIPTION TO FORMALIZING COMMON SENSE KNOWLEDGE. John McCarthy, Computer Science Department, Stanford University, Stanford, CA 94305. [email protected], http://www-formal.stanford.edu/jmc/. 1986.

    Abstract. We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of is-a hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.

    1 INTRODUCTION AND NEW DEFINITION OF CIRCUMSCRIPTION. (McCarthy 1980) introduces the circumscription method of nonmonotonic reasoning and gives motivation, some mathematical properties and some examples of its application. The present paper is logically self-contained, but motivation may be enhanced by reading the earlier paper. We don't repeat its arguments about the importance of nonmonotonic reasoning in AI, and its examples are instructive. Here we give a more symmetric definition of circumscription and applications to the formal expression of common sense facts. Our long term goal (far from realized in the present paper) is to express these facts in a way that would be suitable for inclusion in a general purpose database of common sense knowledge. We imagine this database to be used by AI programs written after the initial preparation of the database.
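    The "more symmetric" definition referred to in the abstract can be sketched in second-order form. The rendering below is a standard one; the paper's own notation differs slightly (it circumscribes a formula and lets further predicates Z vary):

        Circum(A; P; Z) \;\equiv\; A(P, Z) \wedge \neg \exists p, z\; [\, A(p, z) \wedge (p < P) \,]
        \text{where } p \le P \text{ abbreviates } \forall x\, [p(x) \rightarrow P(x)] \text{ and } p < P \text{ abbreviates } (p \le P) \wedge \neg (P \le p)

    Minimizing an abnormality predicate ab in this way, while letting the conclusions it guards vary as part of Z, is what the abstract means by minimizing the abnormality of different aspects of various entities.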
  • The Frame Problem, Then and Now
    The Frame Problem, Then and Now. Vladimir Lifschitz, University of Texas at Austin.

    My collaboration with John McCarthy started in 1984. He was interested then in what he called the commonsense law of inertia. That idea is related to actions, such as moving an object to a different location, or, for instance, toggling a light switch. According to this law, whatever we know about the state of affairs before executing an action can be presumed, by default, to hold after the action as well. Formalizing this default would resolve the difficulty known in AI as the frame problem. Reasoning with defaults is nonmonotonic. John proposed a solution to the frame problem based on the method of nonmonotonic reasoning that he called circumscription [1]. And then something unpleasant happened. Two researchers from Yale University discovered that John's proposed solution was incorrect [2]. Their counterexample involved the actions of loading a gun and shooting, and it became known as the "Yale Shooting Scenario." Twenty years later their paper received the AAAI Classic Paper Award [3]. But that was not all. Automated reasoning is notoriously difficult, and the presence of defaults adds yet another level of complexity. Even assuming that the problem with Yale Shooting is resolved, was there any hope, one could ask, that the commonsense law of inertia would ever become part of usable software? So John's proposal seemed unsound and non-implementable. It also seemed unnecessary, because other researchers had proposed approaches to the frame problem that did not require nonmonotonic reasoning [4, 5, 6]. There were all indications that his project just wouldn't fly.
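    The commonsense law of inertia described above is usually written as a default frame axiom over situations; the schematic version below (notation simplified, not McCarthy's or Lifschitz's exact text) shows where circumscription enters:

        \forall f, a, s\; [\, holds(f, s) \wedge \neg ab(f, a, s) \rightarrow holds(f, result(a, s)) \,]

    Circumscribing ab is meant to make fluents persist by default. In the Yale Shooting Scenario (load, then wait, then shoot), the minimization can be satisfied in an unintended way: instead of the victim ceasing to be alive after the shot, the gun can cease to be loaded during the wait, and both models minimize abnormality. That is exactly the flaw the Yale counterexample exposed.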
  • The Event Calculus Explained Murray Shanahan
    The Event Calculus Explained Murray Shanahan Department of Electrical and Electronic Engineering, Imperial College, Exhibition Road, London SW7 2BT, England. Email: [email protected] Abstract This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change. Introduction Central to many complex computer programs that take decisions about how they or other agents should act is some form of representation of the effects of actions, both their own and those of other agents. To the extent that the design of such programs is to be based on sound engineering principles, rather than ad hoc methods, it’s vital that the subject of how actions and their effects are represented has a solid theoretical basis. Hence the need for logical formalisms for representing action, which, although they may or may not be realised directly in computer programs, nevertheless offer a theoretical yardstick against which any actually deployed system of representation can be measured. This article presents one such formalism, namely the event calculus. There are many others, the most prominent of which is probably the situation calculus [McCarthy & Hayes, 1969], and the variant of the event calculus presented here should be thought of as just one point in a space of possible action formalisms which the community has yet to fully understand.
  • 1956 and the Origins of Artificial Intelligence Computing
    Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing. Rebecca E. Skinner. Copyright 2012 by Rebecca E. Skinner. ISBN 978-0-9894543-1-5.

    Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing. Chapter .5. Preface. Chapter 1. Introduction. Chapter 2. The Ether of Ideas in the Thirties and the War Years. Chapter 3. The New World and the New Generation in the Forties. Chapter 4. The Practice of Physical Automata. Chapter 5. Von Neumann, Turing, and Abstract Automata. Chapter 6. Chess, Checkers, and Games. Chapter 7. The Impasse at End of Cybernetic Interlude. Chapter 8. Newell, Shaw, and Simon at the Rand Corporation. Chapter 9. The Declaration of AI at the Dartmouth Conference. Chapter 10. The Inexorable Path of Newell and Simon. Chapter 11. McCarthy and Minsky begin Research at MIT. Chapter 12. AI's Detractors Before its Time. Chapter 13. Conclusion: Another Pregnant Pause. Chapter 14. Acknowledgements. Chapter 15. Bibliography. Chapter 16. Endnotes.

    Chapter .5. Preface. Introduction. Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing is a history of the origins of AI. AI, the field that seeks to do things that would be considered intelligent if a human being did them, is a universal of human thought, developed over centuries. Various efforts to carry this out appear in the forms of robotic machinery and more abstract tools and systems of symbols intended to artificially contrive knowledge. The latter sounds like alchemy, and in a sense it certainly is. There is no gold more precious than knowledge. That this is a constant historical dream, deeply rooted in the human experience, is not in doubt.
  • Rigor Mortis: a Response to Nilsson's "Logic and Artificial Intelligence"
    Artificial Intelligence 47 (1991) 57-77. Elsevier.

    Rigor mortis: a response to Nilsson's "Logic and artificial intelligence". Lawrence Birnbaum, The Institute for the Learning Sciences and Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60201, USA. Received June 1988; revised December 1989.

    Abstract. Birnbaum, L., Rigor mortis: a response to Nilsson's "Logic and artificial intelligence", Artificial Intelligence 47 (1991) 57-77. Logicism has contributed greatly to progress in AI by emphasizing the central role of mental content and representational vocabulary in intelligent systems. Unfortunately, the logicists' dream of a completely use-independent characterization of knowledge has drawn their attention away from these fundamental AI problems, leading instead to a concentration on purely formalistic issues in deductive inference and model-theoretic "semantics". In addition, their failure to resist the lure of formalistic modes of expression has unnecessarily curtailed the prospects for intellectual interaction with other AI researchers.

    1. Introduction. A good friend recently told me about a discussion he had with one of his colleagues about what to teach in a one-semester introductory artificial intelligence course for graduate students, given the rather limited time available in such a course. He proposed what he assumed were generally agreed to be central issues and techniques in AI--credit assignment in learning, means-ends analysis in problem-solving, the representation and use of abstract planning knowledge in the form of critics, and so on. Somewhat to his surprise, in his colleague's view one of the most important things for budding AI scientists to learn was..
  • Default Reasoning, Nonmonotonic Logics, and the Frame Problem
    From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.

    Default Reasoning, Nonmonotonic Logics, and the Frame Problem. Steve Hanks and Drew McDermott, Department of Computer Science, Yale University, Box 2158 Yale Station, New Haven, CT 06520.

    Abstract. Nonmonotonic formal systems have been proposed as an extension to classical first-order logic that will capture the process of human "default reasoning" or "plausible inference" through their inference mechanisms, just as modus ponens provides a model for deductive reasoning. But although the technical properties of these logics have been studied in detail and many examples of human default reasoning have been identified, for the most part these logics have not actually been applied to practical problems to see whether they produce the expected results.

    [...] we must jump to the conclusion that an observed situation or object is typical. For example, I may know that I typically meet with my advisor on Thursday afternoons, but I can't deduce that I will actually have a meeting next Thursday because I don't know whether next Thursday is typical. While certain facts may allow me to deduce that next Thursday is not typical (e.g. if I learn he will be out of town all next week), in general there will be no way for me to deduce that it is. What we want to do in cases like this is to jump to the conclusion that next Thursday is typical based on two pieces of information: first that most Thursdays are typical, and second that
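    The Thursday example lends itself to a small formalization. One way to write it is as a Reiter-style normal default (one of several nonmonotonic systems of the period; this encoding is illustrative rather than the authors' own):

        \frac{thursday(d) \;:\; typical(d)}{typical(d)} \qquad \forall d\; [\, thursday(d) \wedge typical(d) \rightarrow meeting(d) \,]

    Given thursday(next) and nothing implying \neg typical(next), the default fires and meeting(next) follows; adding away(next) together with \forall d\, [away(d) \rightarrow \neg typical(d)] blocks the default and the conclusion is withdrawn, which is precisely the nonmonotonic jump described in the excerpt.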
  • Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution
    FOUNDATIONAL ISSUES IN ARTIFICIAL INTELLIGENCE AND COGNITIVE SCIENCE IMPASSE AND SOLUTION Mark H. Bickhard Lehigh University Loren Terveen AT&T Bell Laboratories forthcoming, 1995 Elsevier Science Publishers Contents Preface xi Introduction 1 A PREVIEW 2 I GENERAL CRITIQUE 5 1 Programmatic Arguments 7 CRITIQUES AND QUALIFICATIONS 8 DIAGNOSES AND SOLUTIONS 8 IN-PRINCIPLE ARGUMENTS 9 2 The Problem of Representation 11 ENCODINGISM 11 Circularity 12 Incoherence — The Fundamental Flaw 13 A First Rejoinder 15 The Necessity of an Interpreter 17 3 Consequences of Encodingism 19 LOGICAL CONSEQUENCES 19 Skepticism 19 Idealism 20 Circular Microgenesis 20 Incoherence Again 20 Emergence 21 4 Responses to the Problems of Encodings 25 FALSE SOLUTIONS 25 Innatism 25 Methodological Solipsism 26 Direct Reference 27 External Observer Semantics 27 Internal Observer Semantics 28 Observer Idealism 29 Simulation Observer Idealism 30 vi SEDUCTIONS 31 Transduction 31 Correspondence as Encoding: Confusing Factual and Epistemic Correspondence 32 5 Current Criticisms of AI and Cognitive Science 35 AN APORIA 35 Empty Symbols 35 ENCOUNTERS WITH THE ISSUES 36 Searle 36 Gibson 40 Piaget 40 Maturana and Varela 42 Dreyfus 42 Hermeneutics 44 6 General Consequences of the Encodingism Impasse 47 REPRESENTATION 47 LEARNING 47 THE MENTAL 51 WHY ENCODINGISM? 51 II INTERACTIVISM: AN ALTERNATIVE TO ENCODINGISM 53 7 The Interactive Model 55 BASIC EPISTEMOLOGY 56 Representation as Function 56 Epistemic Contact: Interactive Differentiation and Implicit Definition 60 Representational