PARALLEL COMPUTATION AND COMPUTERS FOR ARTIFICIAL INTELLIGENCE

THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE
PARALLEL PROCESSING AND FIFTH GENERATION COMPUTING
Consulting Editor: Doug DeGroot

edited by
JANUSZ S. KOWALIK
Boeing Computer Services, Bellevue, Washington
and University of Washington, Seattle, Washington

KLUWER ACADEMIC PUBLISHERS
Boston / Dordrecht / Lancaster

Distributors for North America: Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, Massachusetts 02061, USA
Distributors for the UK and Ireland: Kluwer Academic Publishers, MTP Press Limited, Falcon House, Queen Square, Lancaster LA1 1RN, United Kingdom
Distributors for all other countries: Kluwer Academic Publishers Group, Distribution Centre, Post Office Box 22, 3300 AH Dordrecht, The Netherlands

Library of Congress Cataloging-in-Publication Data
Parallel computation and computers for artificial intelligence.
(The Kluwer international series in engineering and computer science; SECS. Parallel processing and fifth generation computing)
Bibliography: p.
1. Parallel processing (Electronic computers)  2. Artificial intelligence.  I. Kowalik, Janusz S.  II. Series.
QA76.5.P3147  1987  006.3  87-3749

ISBN-13: 978-1-4612-9188-6
e-ISBN-13: 978-1-4613-1989-4
DOI: 10.1007/978-1-4613-1989-4

Copyright © 1988 by Kluwer Academic Publishers
Softcover reprint of the hardcover 1st edition 1988

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher, Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, Massachusetts 02061.

Typeset by Macmillan India Ltd, Bangalore 25.

CONTENTS

Preface  vii
Introduction  ix
Contributors  xix

PART I: PARALLEL COMPUTATION

1. Parallel Processing in Artificial Intelligence  3
   Scott E. Fahlman
2. Parallel Computing Using Multilisp  21
   Robert H. Halstead, Jr.
3. Execution of Common Lisp Programs in a Parallel Environment  51
   Patrick F. McGehearty and Edward J. Krall
4. Qlisp  63
   Richard P. Gabriel and John McCarthy
5. Restricted AND-Parallel Execution of Logic Programs  91
   Doug DeGroot
6. Parlog: Parallel Programming in Logic  109
   Keith Clark and Steve Gregory
7. Data-driven Processing of Semantic Nets  131
   Lubomir Bic

PART II: PARALLEL COMPUTERS  151

8. Application of the Butterfly Parallel Processor in Artificial Intelligence  153
   Donald C. Allen and N. S. Sridharan
9. On the Range of Applicability of an Artificial Intelligence Machine  165
   David E. Shaw
10. Low-level Vision on Warp and the Apply Programming Model  185
    Leonard G. C. Hamey, Jon A. Webb, and I-Chen Wu
11. AHR: A Parallel Computer for Pure Lisp  201
    Adolfo Guzman
12. FAIM-1: An Architecture for Symbolic Multiprocessing  223
    Alan L. Davis
13. Overview of AI Application-Oriented Parallel Processing Research in Japan  247
    Ryutarou Ohbuchi

APPENDIX  261

A Survey on Special Purpose Computer Architecture for AI  263
   Benjamin W. Wah and Guo-Jie Li

PREFACE

It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications.
The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing.

It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience.

Janusz S. Kowalik

INTRODUCTION

Artificial intelligence (AI) computer programs can be very time-consuming. Researchers in the field and users of AI software hope that it will be possible to find and exploit high degrees of parallelism in large AI programs in order to reduce their processing time. We have reason to believe that this hope is justified. Parallel computation may prove useful in shortening the processing time in AI applications that require substantial but not excessive speedups. By this we mean that parallel processing alone cannot overcome the exponential complexity that characterizes very hard AI problems. We should also keep in mind that some AI problems involve large amounts of numerical processing. In such applications both the numerical and the symbolic components of these hybrid software systems have to be executed in parallel to achieve significant speedups.

Many AI computer programs are pattern-directed. In pattern-directed computer programs, distinct computational modules, representing chunks of knowledge, are activated by successful pattern matches that occur in data bases. In contrast to conventional computer programs, the pattern-directed modules do not call each other explicitly but cooperate indirectly via commonly accessible data bases. This structure of knowledge-based computer programs has two major consequences:

1. The programs are very flexible; the system components are loosely connected, and each module, such as an if-then rule, can be added or dropped without necessarily destroying the rest of the system.
2. Multiple modules can be processed in parallel, since the conditions that trigger execution may be satisfied for more than one module at a time.

The reader interested in pattern-directed programming is referred to Bratko [1], who presents a lucid discussion of the topic. For our purposes, it suffices to observe that the organization of AI computer programs often lends itself naturally to parallel computation.
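To make this concrete, the following minimal sketch (in plain Scheme, in the spirit of the Lisp dialects discussed later in this book) shows the pattern-directed regime with a hypothetical data base and two hypothetical if-then rules; none of these names come from the chapters themselves. Each rule pairs a condition with an action, no rule calls another, and all cooperation goes through the shared data base, so every rule whose condition matches in a given cycle could in principle be fired in parallel.

    ; Hypothetical working memory and rules, for illustration only.
    (define data-base '((temperature high) (door open)))

    (define rules
      (list (cons (lambda (db) (member '(temperature high) db))   ; condition
                  (lambda (db) (cons '(fan on) db)))              ; action
            (cons (lambda (db) (member '(door open) db))
                  (lambda (db) (cons '(alarm on) db)))))

    ; One recognize-act cycle: fire the action of every rule whose
    ; condition currently matches the shared data base.
    (define (recognize-act db)
      (let loop ((rs rules) (acc db))
        (cond ((null? rs) acc)
              (((caar rs) acc) (loop (cdr rs) ((cdar rs) acc)))
              (else (loop (cdr rs) acc)))))

    (recognize-act data-base)
    ; => ((alarm on) (fan on) (temperature high) (door open))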
Some specialized AI systems, such as blackboard architectures, also offer a very natural opportunity for large-grain parallelism. Nii [2] enumerated and described three methods for using multiple processors in blackboard systems:

1. Partitioning the solution space on the blackboard into separate, loosely coupled regions.
2. Placing the blackboard data in a shared memory and distributing the knowledge sources across different processors.
3. Partitioning the problem into independent subproblems and solving each subproblem on a separate processor.

Still another source of parallelism can be found in AI programming languages such as Prolog and Lisp. Prolog clauses can be regarded as pattern-directed modules of the kind just discussed. Parallel Lisps, on the other hand, allow parallel execution through special constructs and extensions to sequential dialects of the language. We hope that by now our reader suspects that parallel processing may indeed play an increasingly important role in AI research and applications, and is willing to take a closer look at some chapters of the book.

The first part of the book, entitled Parallel Computation, opens with Scott Fahlman's chapter on "Parallel Processing in Artificial Intelligence." He divides parallel approaches to AI into three broad categories:

1. General programming approaches, such as current dialects of Lisp offering small-granularity parallelism, or blackboards, which utilize larger modules of computation.
2. Specialized programming languages, such as Prolog or OPS5.
3. The active memory approach, which attempts to apply massive parallelism to the problems of locating relevant information in large knowledge bases, doing simple inferences, and identifying stored descriptions that match given inputs.

The last approach is the most radical departure from current knowledge-based methodologies. It replaces clever, hand-crafted programming with a massively parallel brute-force method. It also offers some hope for fundamental advances in AI and may help us to understand how the human brain functions.

The general programming approach is represented in the book by the next three chapters. Chapter 2, by Robert Halstead, describes Multilisp, a version of the Lisp-like language Scheme, developed at MIT and extended to specify parallel execution using the future construct. In Multilisp a function may return a promissory note instead of an actual value and then attempt to find a processor for performing the actual computation. The future construct creates parallelism by allowing the program to manipulate partly computed data. Multilisp has been implemented on Concert, a 28-processor shared-memory machine, and on the Butterfly (see Chapter 8). The performance of Concert Multilisp on two parallel test programs, tree insertion and Quicksort, is presented and discussed. One of the author's conclusions is that constructs such as future are only part of what is needed to fully exploit different levels of parallelism. In particular, we should have constructs that allow various parts of a program to execute concurrently, that is, a means to do parallel programming.
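As a minimal illustration of this idea, the sketch below uses Multilisp's Scheme-based syntax; the function is a hypothetical example, not taken from Halstead's chapter. The call to future returns a promissory note at once, so the recursion on one half of a tree can proceed on another processor while the caller sums the other half; the strict addition waits for the note to be redeemed.

    ; Hypothetical example using Multilisp's future construct (not standard
    ; Scheme): (future expr) returns a placeholder immediately while expr
    ; may be evaluated concurrently on another processor.
    (define (tree-sum tree)
      (if (pair? tree)
          (let ((left  (future (tree-sum (car tree))))   ; promissory note for the left half
                (right (tree-sum (cdr tree))))           ; the caller keeps working on the rest
            (+ left right))                              ; + is strict, so it waits for the note if necessary
          (if (null? tree) 0 tree)))                     ; a leaf is a number; an empty subtree counts as 0

    (tree-sum '((1 2) (3 4)))   ; => 10

Because strict operations touch futures implicitly, replacing (future expr) by expr yields the ordinary sequential program with the same result, which is what makes the construct attractive as an incremental annotation.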