Computationalism: New Directions

edited by Matthias Scheutz

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England

© 2002 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in Sabon by Achorn Graphic Services, Inc., on the Miles 33 system and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data
Computationalism : new directions / edited by Matthias Scheutz.
p. cm. "A Bradford book."
Includes bibliographical references and index.
ISBN 0-262-19478-3 (hc: alk. paper)
1. Computer science. 2. Artificial intelligence. I. Scheutz, Matthias.
QA76.C54747 2002
004—dc21
2002019570

Contents

Authors vii
Preface ix
1 Computationalism—The Next Generation 1
  Matthias Scheutz
2 The Foundations of Computing 23
  Brian Cantwell Smith
3 Narrow versus Wide Mechanism 59
  B. Jack Copeland
4 The Irrelevance of Turing Machines to Artificial Intelligence 87
  Aaron Sloman
5 The Practical Logic of Computer Work 129
  Philip E. Agre
6 Symbol Grounding and the Origin of Language 143
  Stevan Harnad
7 Authentic Intentionality 159
  John Haugeland
Epilogue 175
References 187
Index 199

Authors

Philip E. Agre, Department of Information Studies, University of California, Los Angeles, Los Angeles, CA 90095-1520, USA
[email protected]
http://dlis.gseis.ucla.edu/pagre/

B. Jack Copeland, Philosophy Department, University of Canterbury, Christchurch, New Zealand
[email protected]
http://www.phil.canterbury.ac.nz/jack_copeland/

Stevan Harnad, Cognitive Sciences Center, ECS, Southampton University, Highfield, Southampton SO17 1BJ, United Kingdom
[email protected]
http://cogsci.soton.ac.uk/harnad/

John Haugeland, Department of Philosophy, University of Chicago, Chicago, IL 60637, USA
[email protected]

Matthias Scheutz, Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
[email protected]
http://www.nd.edu/~mscheutz/

Aaron Sloman, School of Computer Science, The University of Birmingham, Birmingham B15 2TT, UK
[email protected]
http://www.bham.ac.uk/~axs

Brian Cantwell Smith, Departments of Philosophy and Computer Science, Duke University, Durham, NC 27708-0402, USA
[email protected]
http://www.ageofsig.org/people/bcsmith/

Preface

Are minds computers? Or, to put it in more philosophical jargon, are mental states computational states? And if so, can human cognition then be understood in terms of programs? Computationalism—the view that mental states are computational states—is based on the conviction that there are program descriptions of mental processes and that, at least in principle, it is possible for computers, that is, machines of a particular kind, to possess mentality.

In its early days cognitive science rallied around computationalism, but in recent times this paradigmatic computational view of mind has come increasingly under attack. Connectionists and dynamicists have tried to replace it with alternative models. Biologists and neuroscientists have attempted to understand the mind directly at the level of the brain, thus skipping the "computational level." Social theorists and roboticists have argued that the essence of intelligence is to be found in situated interaction with the external world, rather than in a purely internal world of symbol manipulation.
Philosophers have argued that traditional conceptions of computationalism (and more generally functionalism) are at best conceptually inadequate, if not vacuous (e.g., leading to the absurd view that any physical system can be viewed as implementing any computation).

Many of these critiques share a common theme. Computation fails as an explanatory notion for mind, the critics claim, because computation, assumed to be defined solely in abstract syntactic terms, necessarily neglects the real-time, embodied, real-world constraints with which cognitive systems intrinsically cope.

Although these views have led some researchers to abandon computationalism altogether, an increasing number are willing to reconsider the very notion of computation, motivated in part by the recognition that real-world computers, like minds, must also deal with issues of embodiment, interaction, physical implementation, and semantics. This recognition raises the possibility that classical computationalism failed not because computing is irrelevant to mind, but because purely "logical" or "abstract" theories of computation fail to deal with issues that are vital to both real-world computers and minds. Perhaps the problem is not with computing per se, but with our present understanding of computing, in which case the situation can be repaired by developing a successor notion of computation that not only respects the classical (and critical) limiting results about algorithms, grammars, complexity bounds, and so on, but also does justice to real-world concerns of daily computational practice. Such a notion, one that takes computing to be not abstract, syntactic, disembodied, isolated, or nonintentional, but concrete, semantic, embodied, interactive, and intentional, offers a much better chance of serving as a possible foundation for a realistic theory of mind.

Computationalism: New Directions is a first attempt to stake out the territory for computationalism based on a "successor" notion of computation. It covers a broad intellectual territory, from historical developments of the notions of computation and mechanism in the computationalist paradigm, to questions about the role of Turing machines and computational practice in artificial intelligence research; from different construals of computation and their role in the computational theory of mind, to the nature of intentionality and the origin of language.

The first chapter serves both as a historical overview of computationalist thinking and as an introduction to the later chapters. It attempts to extract a historical trajectory that ties the mechanist views of past centuries to present perspectives on computation. Various references to later chapters point to places where the arguments are developed in more detail.

In the second chapter, Brian Smith examines various attempts to answer the question "what is computation?" Focusing on formal symbol manipulation and effective computability—two out of about a dozen different ways of construing "computation"—he shows that neither of them can do justice to the three conceptual criteria he sets forth. His investigation leads to the claim that "computation" is not a subject matter, and eventually to the demand for a new metaphysics.
B. Jack Copeland also points to a crucial distinction in chapter 3, that between a narrow and a wide construal of "mechanism." The wide conception countenances the possibility of information-processing machines that cannot be mimicked by a universal Turing machine, allowing in particular the mind to be such a machine. Copeland shows that arguments for narrow mechanism—the view that the mind is a machine equivalent to a Turing machine—are vitiated by various closely related fallacies, including the "equivalence fallacy" and the "simulation fallacy."

Chapter 4 takes on the issue of whether minds are computational in the Turing-machine sense from a quite different perspective. Here, Aaron Sloman criticizes the common view that the notion of a Turing machine is directly relevant to artificial intelligence. He shows that computers are the result of a convergence of two strands of historical development of machines and discusses their relevance to artificial intelligence as well as their similarity to various aspects of the brain. Although these historical developments have nothing to do with Turing machines or the mathematical theory of computation, he claims they have everything to do with the task of understanding, modeling, or replicating human as well as animal intelligence.

In chapter 5 Phil Agre reveals five "dissociations," that is, intellectual tensions between two opposing conceptions such as "mind versus body," that have accompanied artificial intelligence (and computationalism) from its very beginning. He shows that although it is recognized that the two concepts underwriting each opposition are distinct, they are unintentionally conflated in the writings of the field. To overcome these difficulties, Agre advocates a "critical" technical practice that may be able to listen to and learn from reality by building systems and understanding the ways in which they do and do not work.

In chapter 6 Stevan Harnad, advocating a narrow conception of meaning, shows how per se meaningless symbols for categories are connected to what they mean: they are grounded in the capacity to sort, label, and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations. He points out that not all categories need to be grounded this way and that language allows us to "steal" categories quickly and effortlessly through hearsay instead of having to earn them through risky and time-consuming sensorimotor trial-and-error learning. It is through language that an agent (e.g., a robot) can acquire categories it could not have acquired through its sensors.

John Haugeland, then, broadens the discussion about meaning
