Test & Evaluation of AI-Enabled and Autonomous Systems: A Literature Review

Total Pages: 16

File Type: PDF, Size: 1020 KB

INSTITUTE FOR DEFENSE ANALYSES

Test & Evaluation of AI-enabled and Autonomous Systems: A Literature Review

Heather M. Wojton, Project Leader
Daniel J. Porter
John W. Dennis

September 2020

This publication has not been approved by the sponsor for distribution and release. Reproduction or use of this material is not authorized without prior permission from the responsible IDA Division Director.

IDA Document NS-D-14331
Log: H 2020-000326

INSTITUTE FOR DEFENSE ANALYSES
4850 Mark Center Drive
Alexandria, Virginia 22311-1882

The Institute for Defense Analyses is a nonprofit corporation that operates three Federally Funded Research and Development Centers. Its mission is to answer the most challenging U.S. security and science policy questions with objective analysis, leveraging extraordinary scientific, technical, and analytic expertise.

About This Publication

This work was conducted by the Institute for Defense Analyses (IDA) under contract HQ0034-19-D-0001, Task 229990, “Test Science,” for the Office of the Director, Operational Test and Evaluation. The views, opinions, and findings should not be construed as representing the official position of either the Department of Defense or the sponsoring organization.

Acknowledgments

The IDA Technical Review Committee was chaired by Mr. Robert R. Soule and consisted of Rachel A. Haga, John T. Haman, Mark R. Herrera, and Brian D. Vickers from the Operational Evaluation Division, and Nicholas J. Kaminski from the Science & Technology Division.

For more information:
Heather M. Wojton, Project Leader, [email protected], (703) 845-6811
Robert R. Soule, Director, Operational Evaluation Division, [email protected], (703) 845-2482

Copyright Notice

© 2020 Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882, (703) 845-2000. This material may be reproduced by or for the U.S. Government pursuant to the copyright license under the clause at DFARS 252.227-7013 [Feb. 2014].

Executive Summary

This paper summarizes a subset of the literature regarding the challenges to and recommendations for the test, evaluation, verification, and validation (TEV&V) of autonomous military systems. This literature review is meant for informational purposes only and does not make any recommendations of its own.

A synthesis of the literature identified the following categories of TEV&V challenges:

1. Problems arising from the complexity of autonomous systems;
2. Challenges imposed by the structure of the current acquisition system;
3. Lack of methods, tools, and infrastructure for testing;
4. Novel safety and security issues;
5. A lack of consensus on policy, standards, and metrics;
6. Issues around how to integrate humans into the operation and testing of these systems.

Recommendations for how to test autonomous military systems can be sorted into five broad groups:

1. Use certain processes for writing requirements, or for designing and developing systems;
2. Make targeted investments to develop methods or tools, improve our test infrastructure, or enhance our workforce's AI skillsets;
3. Use specific proposed test frameworks;
4. Employ novel methods for system safety or cybersecurity;
5. Adopt specific proposed policies, standards, or metrics.
Table of Contents

Introduction

T&E Challenges for Autonomy
  Challenge #1: System Complexity
    Task Complexity
    The State-Space Explosion
    Stochastic/Non-Deterministic/Chaotic Processes
    Learning Systems
    Multi-Agent/Component Evaluation
    Novelty
    System Opacity
  Challenge #2: Acquisition System Limitations
    Requirements
    Processes
    Stovepiping
  Challenge #3: Lack of Methods, Tools, or Infrastructure
    Method Needs
    Scalability
    Instrumentation
    Simulation
    Ranges and Infrastructure
    Personnel
  Challenge #4: Safety & Security
    Safety
    Security
  Challenge #5: Lack of Policy, Standards, or Metrics
    Policy
    Metrics
    Standards
  Challenge #6: Human-System Interaction
    Trust
    Teaming
    Human Judgment & Control
  Summary of Challenges

T&E Recommendations for Autonomy
  Recommendation #1: Requirements, Design, & Development Pipeline
    Requirements
    Assurance Aiding Designs
      Safety Middleware
      Built-in Transparency and Traceability
      Explainability
      “Common, Open Architectures with Reusable Modules”
      Modularity
      Open Architectures
      Capability Reuse
    Design & Development Processes
    Coordinate, Integrate, & Extend Activities
  Recommendation #2: Methods, Tools, Infrastructure, & Workforce
    Leverage Existing Methods
Recommended publications
  • The Problem of Storing Common Sense in Artificial Intelligence: Context in CYC (University of Navarra, Ecclesiastical Faculty of Philosophy)
    UNIVERSITY OF NAVARRA, ECCLESIASTICAL FACULTY OF PHILOSOPHY. Mark Telford Georges, THE PROBLEM OF STORING COMMON SENSE IN ARTIFICIAL INTELLIGENCE: Context in CYC. Doctoral dissertation directed by Dr. Jaime Nubiola. Pamplona, 2000. Table of Contents: Abbreviations; Introduction; Chapter I: A Short History of Artificial Intelligence; 1.1. The Origin and Use of the Term “Artificial Intelligence”; 1.1.1. Influences in AI; 1.1.2. “Artificial Intelligence” in popular culture; 1.1.3. “Artificial Intelligence” in Applied AI; 1.1.4. Human AI and alien AI; 1.1.5. “Artificial Intelligence” in Cognitive Science; 1.2. Trends in AI; 1.2.1. Classical AI ...
  • AI, Robots, and Swarms: Issues, Questions, and Recommended Studies
    AI, Robots, and Swarms Issues, Questions, and Recommended Studies Andrew Ilachinski January 2017 Approved for Public Release; Distribution Unlimited. This document contains the best opinion of CNA at the time of issue. It does not necessarily represent the opinion of the sponsor. Distribution Approved for Public Release; Distribution Unlimited. Specific authority: N00014-11-D-0323. Copies of this document can be obtained through the Defense Technical Information Center at www.dtic.mil or contact CNA Document Control and Distribution Section at 703-824-2123. Photography Credits: http://www.darpa.mil/DDM_Gallery/Small_Gremlins_Web.jpg; http://4810-presscdn-0-38.pagely.netdna-cdn.com/wp-content/uploads/2015/01/ Robotics.jpg; http://i.kinja-img.com/gawker-edia/image/upload/18kxb5jw3e01ujpg.jpg Approved by: January 2017 Dr. David A. Broyles Special Activities and Innovation Operations Evaluation Group Copyright © 2017 CNA Abstract The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last “sea change,” during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DoD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state-of-the-art of AI, machine-learning, and robot technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems. While no one can predict how AI will evolve or predict its impact on the development of military autonomous systems, it is possible to anticipate many of the conceptual, technical, and operational challenges that DoD will face as it increasingly turns to AI-based technologies.
  • IAJ 10-3 (2019)
    Vol. 10 No. 3 2019, Arthur D. Simons Center for Interagency Cooperation, Fort Leavenworth, Kansas. About The Simons Center: The Arthur D. Simons Center for Interagency Cooperation is a major program of the Command and General Staff College Foundation, Inc. The Simons Center is committed to the development of military leaders with interagency operational skills and an interagency body of knowledge that facilitates broader and more effective cooperation and policy implementation. About the CGSC Foundation: The Command and General Staff College Foundation, Inc., was established on December 28, 2005 as a tax-exempt, non-profit educational foundation that provides resources and support to the U.S. Army Command and General Staff College in the development of tomorrow’s military leaders. The CGSC Foundation helps to advance the profession of military art and science by promoting the welfare and enhancing the prestigious educational programs of the CGSC. The CGSC Foundation supports the College’s many areas of focus by providing financial and research support for major programs such as the Simons Center, symposia, conferences, and lectures, as well as funding and organizing community outreach activities that help connect the American public to their Army. All Simons Center works are published by the “CGSC Foundation Press.” The CGSC Foundation is an equal opportunity provider. InterAgency Journal, Vol. 10, No. 3 (2019), Features: In the beginning... (Special Report by Robert Ulin); Military Neuro-Interventions: Solving the Right Problems for Ethical Outcomes (Shannon E. ...). Arthur D. Simons Center for Interagency Cooperation, The Lewis and Clark Center, 100 Stimson Ave., Suite 1149.
  • An Investigation of the Search Space of Lenat's AM Program
    AN ABSTRACT OF THE THESIS OF Prafulla K. Mishra for the degree of Master of Science in Computer Science, presented on March 10, 1988. Title: An Investigation of the Search Space of Lenat's AM Program. Abstract approved: Thomas G. Dietterich. AM is a computer program written by Doug Lenat that discovers elementary mathematics starting from some initial knowledge of set theory. The success of this program is not clearly understood. This work is an attempt to explore the search space of AM in order to understand the success and eventual failure of AM. We show that operators play an important role in AM. Operators of AM can be divided into critical and unnecessary. The critical operators can be further divided into implosive and explosive. Heuristics are needed only for the explosive operators. Most heuristics and operators in AM can be removed without significant effects. Only three operators (Coalesce, Invert, Repeat) and one initial concept (Bag Union) are enough to discover most of the elementary math concepts that AM discovered. An Investigation of the Search Space of Lenat's AM Program, by Prafulla K. Mishra. A THESIS submitted to Oregon State University in partial fulfillment of the requirements for the degree of Master of Science. Completed March 10, 1988. Commencement June 1988. APPROVED (signatures redacted for privacy): Professor of Computer Science in charge of major; Head of Department of Computer Science; Dean of Graduate School. Date thesis presented: March 10, 1988. Typed by Prafulla K. Mishra for Prafulla K. Mishra. ACKNOWLEDGEMENTS: I am indebted to Tom Dietterich for his help, guidance and encouragement throughout this work.
  • Deep Learning in Neural Networks: An Overview (arXiv:1404.7828v4 [cs.NE], 8 Oct 2014)
    Deep Learning in Neural Networks: An Overview. Technical Report IDSIA-03-14 / arXiv:1404.7828 v4 [cs.NE] (88 pages, 888 references). Jürgen Schmidhuber, The Swiss AI Lab IDSIA, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, University of Lugano & SUPSI, Galleria 2, 6928 Manno-Lugano, Switzerland. 8 October 2014. Abstract: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. LaTeX source: http://www.idsia.ch/˜juergen/DeepLearning8Oct2014.tex Complete BibTeX file (888 kB): http://www.idsia.ch/˜juergen/deep.bib Preface: This is the preprint of an invited Deep Learning (DL) overview. One of its goals is to assign credit to those who contributed to the present state of the art. I acknowledge the limitations of attempting to achieve this goal. The DL research community itself may be viewed as a continually evolving, deep network of scientists who have influenced each other in complex ways. Starting from recent DL results, I tried to trace back the origins of relevant ideas through the past half century and beyond, sometimes using “local search” to follow citations of citations backwards in time.
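    To illustrate the depth distinction the survey draws, the following is a minimal sketch (not taken from the survey itself) of a small feedforward network in Python/NumPy: each added hidden layer lengthens the credit assignment path that backpropagation must traverse from the output error back to the first layer's weights. The layer sizes, activation, and learning rate are arbitrary choices for the example.

```python
import numpy as np

# Minimal sketch: the "credit assignment path" from the loss back to the first
# layer's weights grows with depth, since each layer adds one learnable,
# causal link in the chain. Sizes and learning rate are illustrative only.
rng = np.random.default_rng(0)
depth, width = 5, 16                      # 5 hidden layers of 16 units
sizes = [8] + [width] * depth + [1]       # input dim 8, scalar output
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def tanh(x): return np.tanh(x)
def dtanh(y): return 1.0 - y ** 2         # derivative expressed via the output

x = rng.normal(size=(32, 8))              # a batch of 32 random inputs
target = rng.normal(size=(32, 1))

# Forward pass: keep every activation so credit can be assigned backwards.
acts = [x]
for W in weights:
    acts.append(tanh(acts[-1] @ W))

# Backward pass: the error signal is propagated through every learnable link.
delta = (acts[-1] - target) * dtanh(acts[-1])
for i in reversed(range(len(weights))):
    grad = acts[i].T @ delta / len(x)
    delta = (delta @ weights[i].T) * dtanh(acts[i])
    weights[i] -= 0.05 * grad             # one SGD step

print("credit assignment path length (learnable links):", len(weights))
```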
  • Computational Creativity: Three Generations of Research and Beyond
    Computational Creativity: Three Generations of Research and Beyond. Debasis Mitra, Department of Computer Science, Florida Institute of Technology, [email protected]. Abstract: In this article we have classified computational creativity research activities into three generations. Although the respective system developers were not necessarily targeting their research for computational creativity, we consider their works as contribution to this emerging field. Possibly, the first recognition of the implication of intelligent systems toward the creativity came with an AAAI Spring Symposium on AI and Creativity (Dartnall and Kim, 1993). We have here tried to chart the progress of the field by describing some sample projects. Our hope is that this article will provide some direction to the interested researchers and help creating a vision for the community. 1. Introduction: One of the meanings of the word “create” is “to produce by imaginative skill” and that of the word “creativity” is “the ability to create,” according to the Webster Dictionary. 2. Philosophical Angles: Philosophers try to understand creativity from the historical perspectives – how different acts of creativity (primarily in science) might have happened. Historical investigation of the process involved in scientific discovery relied heavily on philosophical viewpoints. Within philosophy there is an ongoing old debate regarding whether the process of scientific discovery has a normative basis. Within the computing community this question transpires in asking if analyzing and computationally emulating creativity is feasible or not. In order to answer this question artificial intelligence (AI) researchers have tried to develop computing systems to mimic scientific discovery processes (e.g., BACON, KEKADA, etc. that we will discuss), almost since the beginning of the inception of
  • Evolution of Complexity in Real-World Domains
    Evolution of Complexity in Real-World Domains. A Dissertation Presented to The Faculty of the Graduate School of Arts and Sciences, Brandeis University, Department of Computer Science. Jordan B. Pollack, Advisor. In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy, by Pablo Funes, May 2001. This dissertation, directed and approved by Pablo Funes’s committee, has been accepted and approved by the Graduate Faculty of Brandeis University in partial fulfillment of the requirements for the degree of: DOCTOR OF PHILOSOPHY. Dean of Arts and Sciences. Dissertation Committee: Jordan B. Pollack, Dept. of Computer Science, Chair; Martin Cohn, Dept. of Computer Science; Timothy J. Hickey, Dept. of Computer Science; Dario Floreano, ISR, École Polytechnique Fédérale de Lausanne. © Copyright by Pablo Funes 2001. In memoriam Everé Santiago Funes (1913-2000). Acknowledgments: Elizabeth Sklar collaborated on the work on coevolving behavior with live creatures (chapter 3). Hugues Juillé collaborated with the Tron GP architecture (section 3.3.3) and the novelty engine (section 3.3.7). Louis Lapat collaborated on EvoCAD (section 2.9). Thanks to Jordan Pollack for the continuing support and for being there when it really matters. Thanks to Betsy Sklar, my true American friend. And to Richard Watson for the love and the focus on the real science. Also to all the people who contributed in one way or another, in no particular order: José Castaño, Adriana Villella, Edwin De Jong, Barry Werger, Ofer Melnik, Isabel Ennes, Sevan Ficici, Myrna Fox, Miguel Schneider, Maja Mataric, Martin Cohn, Aroldo Kaplan, Otilia Vainstok. And mainly to my family and friends, among them: María Argüello, Santiago Funes, Soledad Funes, Carmen Argüello, María Josefa González, Faustino Jorge, Martín Levenson, Inés Armendariz, Enrique Pujals, Carlos Brody, Ernesto Dal Bo, Martín Galli, Marcelo Oglietti.
  • QKSA: Quantum Knowledge Seeking Agent
    QKSA: Quantum Knowledge Seeking Agent – motivation, core thesis and baseline framework. Aritra Sarkar, Department of Quantum & Computer Engineering, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft, The Netherlands. 1 Introduction: In this article we present the motivation and the core thesis towards the implementation of a Quantum Knowledge Seeking Agent (QKSA). QKSA is a general reinforcement learning agent that can be used to model classical and quantum dynamics. It merges ideas from universal artificial general intelligence, constructor theory and genetic programming to build a robust and general framework for testing the capabilities of the agent in a variety of environments. It takes the artificial life (or, animat) path to artificial general intelligence, where a population of intelligent agents is instantiated to explore valid ways of modeling the perceptions. The multiplicity and survivability of the agents are defined by the fitness, with respect to the explainability and predictability, of a resource-bounded computational model of the environment. This general learning approach is then employed to model the physics of an environment based on subjective observer states of the agents. A specific case of quantum process tomography as a general modeling principle is presented. The various background ideas and a baseline formalism are discussed in this article, which sets the groundwork for the implementations of the QKSA that are currently in active development. Section 2 presents a historic overview of the motivation behind this research. In Section 3 we survey some general reinforcement learning models and bio-inspired computing techniques that form a baseline for the design of the QKSA.
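    The abstract's notion of fitness, tied to how explainable and predictive a resource-bounded model of the environment is, can be illustrated with a rough sketch. This is an assumption-laden reading, not the QKSA's actual scoring rule: candidate models are penalized both for prediction error and for description length, with a hard budget standing in for the resource bound; the environment, model names, and weights below are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

# Hypothetical illustration of a fitness rule trading off predictability
# (low prediction error) against explainability (short description),
# under a resource bound. This is NOT the QKSA's actual formula.

@dataclass
class CandidateModel:
    name: str
    predict: Callable[[Sequence[int]], int]  # predicts the next percept
    description_length: int                   # proxy for explainability
    step_cost: int                            # proxy for compute used per call

def fitness(model: CandidateModel, history: Sequence[int],
            resource_budget: int = 1000, length_weight: float = 0.1) -> float:
    """Lower is better: prediction errors plus a complexity penalty."""
    if model.step_cost * max(len(history) - 1, 0) > resource_budget:
        return float("inf")                   # violates the resource bound
    errors = sum(model.predict(history[:t]) != history[t]
                 for t in range(1, len(history)))
    return errors + length_weight * model.description_length

# Toy environment: a percept stream that simply alternates 0, 1, 0, 1, ...
history = [0, 1] * 10
population: List[CandidateModel] = [
    CandidateModel("always-zero", lambda h: 0, description_length=1, step_cost=1),
    CandidateModel("repeat-last", lambda h: h[-1] if h else 0,
                   description_length=3, step_cost=1),
    CandidateModel("alternate", lambda h: 1 - h[-1] if h else 0,
                   description_length=4, step_cost=1),
]
ranked = sorted(population, key=lambda m: fitness(m, history))
print([m.name for m in ranked])               # the alternating model survives
```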
  • A Paradigm for Genetically Breeding Populations of Computer Programs to Solve Problems
    Computer Science Department, June 1990. GENETIC PROGRAMMING: A PARADIGM FOR GENETICALLY BREEDING POPULATIONS OF COMPUTER PROGRAMS TO SOLVE PROBLEMS. John R. Koza ([email protected]), Computer Science Department, Stanford University, Margaret Jacks Hall, Stanford, CA 94305. Table of Contents: 1. Introduction and Overview; 1.1. Examples of Problems Requiring Discovery of a Computer Program; 1.2. Solving Problems Requiring Discovery of a Computer Program; 2. Background on Genetic Algorithms; 3. The “Genetic Programming” Paradigm; 3.1. The Structures Undergoing Adaptation; 3.2. The Search Space; 3.3. The Initial Structures; 3.4. The Fitness Function; 3.5. The Operations That Modify the Structures; 3.5.1. The Fitness Proportionate Reproduction Operation; 3.5.2. The Crossover (Recombination) Operation; 3.6. The State of the System; 3.7. Identifying the Results and Terminating the Algorithm; 3.8. The Parameters That Control the Algorithm; 4. Experimental Results; 4.1. Machine Learning of a Function; 4.1.1. Boolean 11-Multiplexer Function; 4.1.2. The Boolean 6-Multiplexer and 3-Multiplexer Functions; 4.1.3. Non-Randomness of These Results; 4.2. Planning; 4.2.1. Block Stacking; 4.2.2. Correctly Stacking Blocks; 4.2.3. Efficiently Stacking Blocks; 4.2.4. A Parsimonious Expression for Stacking Blocks; 4.2.5. Artificial Ant – Traversing the John Muir Trail; 4.3. Symbolic Function Identification; 4.3.1. Sequence Induction; 4.3.1.1. ...
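    The two operations the table of contents highlights, fitness-proportionate reproduction and crossover on program structures, can be sketched roughly as follows. This is a simplified Python illustration, not Koza's LISP implementation; the prefix-tuple program encoding, the symbolic-regression target, and all parameters are invented for the example.

```python
import random

# Rough sketch of genetic programming on tiny prefix-notation program trees
# like ("+", "x", ("*", "x", "x")). Target and parameters are illustrative.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1, 2])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def subtrees(tree, path=()):
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    if not path:
        return new
    parts = list(tree)
    parts[path[0]] = replace(parts[path[0]], path[1:], new)
    return tuple(parts)

def crossover(a, b):
    # Swap randomly chosen subtrees between the two parent programs.
    pa, sa = random.choice(list(subtrees(a)))
    pb, sb = random.choice(list(subtrees(b)))
    return replace(a, pa, sb), replace(b, pb, sa)

def fitness(tree):  # higher is better: closeness to the target x*x + x
    err = sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))
    return 1.0 / (1.0 + err)

random.seed(0)
pop = [random_tree() for _ in range(50)]
for gen in range(20):
    weights = [fitness(t) for t in pop]
    # Fitness-proportionate reproduction: parents are sampled with probability
    # proportional to fitness, then recombined by subtree crossover.
    parents = random.choices(pop, weights=weights, k=len(pop))
    pop = [child for i in range(0, len(pop), 2)
           for child in crossover(parents[i], parents[i + 1])]
best = max(pop, key=fitness)
print(best, fitness(best))
```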
  • An Unfair Review of Margaret Boden's The Creative Mind (Artificial Intelligence)
    Artificial Intelligence 79 (1995) 97-109. Book Review: An unfair review of Margaret Boden’s The Creative Mind from the perspective of creative systems. David Perkins, Project Zero, Harvard Graduate School of Education, Cambridge, MA 02138, USA. Changing the rules of the game is often unsettling, but it can be fruitful. Consider for instance the tale of Alexander the Great and the Gordian knot. According to Greek legend, the knot itself was an intricate tangle tied by Gordius, king of Phrygia. An oracle forecast that whosoever could undo the Gordian knot would rule all of Asia. Alexander wanted that prophecy working for him. Not one to hover over niceties, he drew his sword and cut it through, thus “untying” the knot. Alexander’s move both impresses and annoys. It impresses with its decisiveness of action and its creativity. It annoys because Alexander changed the tacit rules. To cut the knot through is not what one would imagine undoing the knot should amount to. It’s unfair! All this connects in a couple of ways to Margaret Boden’s 1991 book The Creative Mind. First of all, in her exploration of creative thought, Boden addresses the key question: What really makes the difference between genuinely creative thought and not-so-creative, albeit effective, thought? At the heart of her answer lies the idea of changing the rules. Often, highly productive thinking occurs within the boundaries of established tacit or explicit rule systems.
  • Evolution of Cooperative Problem Solving in an Artificial Economy
    ARTICLE Communicated by Terrence Sejnowski. Evolution of Cooperative Problem Solving in an Artificial Economy. Eric B. Baum, Igor Durdanovic, NEC Research Institute, Princeton, NJ 08540. We address the problem of how to reinforce learning in ultracomplex environments, with huge state-spaces, where one must learn to exploit a compact structure of the problem domain. The approach we propose is to simulate the evolution of an artificial economy of computer programs. The economy is constructed based on two simple principles so as to assign credit to the individual programs for collaborating on problem solutions. We find empirically that starting from programs that are random computer code, we can develop systems that solve hard problems. In particular, our economy learned to solve almost all random Blocks World problems with goal stacks that are 200 blocks high. Competing methods solve such problems only up to goal stacks of at most 8 blocks. Our economy has also learned to unscramble about half a randomly scrambled Rubik’s cube and to solve several commercially sold puzzles. 1 Introduction: Reinforcement learning (cf. Sutton & Barto, 1998) formalizes the problem of learning from interaction with an environment. The two standard approaches are value iteration and policy iteration. At the current state of the art, however, neither appears to offer much hope for addressing ultracomplex problems. Standard approaches to value iteration depend on enumerating the state-space and are thus problematic when the state-space is huge. Moreover, they depend on finding an evaluation function showing progress after a single action, and such a function may be extremely hard to learn or even to represent.
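    The abstract's point that standard value iteration enumerates the state-space can be made concrete with a small sketch. This is a generic textbook-style example, not the authors' artificial-economy method: even this toy gridworld (layout, rewards, and discount invented for illustration) loops over every state on every sweep, which is exactly what becomes intractable when states number in the billions, as in large Blocks World instances.

```python
# Minimal value iteration on a toy 4x4 gridworld: each sweep enumerates the
# entire state-space, which is what breaks down for ultracomplex domains.
GRID = 4
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GAMMA = 0.95

states = [(r, c) for r in range(GRID) for c in range(GRID)]
V = {s: 0.0 for s in states}

def step(state, action):
    """Deterministic move; bumping into a wall leaves the state unchanged."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if 0 <= nr < GRID and 0 <= nc < GRID:
        return (nr, nc)
    return state

for sweep in range(100):
    delta = 0.0
    for s in states:                       # full state-space enumeration
        if s == GOAL:
            continue
        best = max(-1.0 + GAMMA * V[step(s, a)] for a in ACTIONS)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-6:                       # stop once values have converged
        break

print("optimal cost-to-go from (0, 0):", round(V[(0, 0)], 2))
```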
  • The International Dictionary of Artificial Intelligence
    1 The International Dictionary of Artificial Intelligence. William J. Raynor, Jr. Glenlake Publishing Company, Ltd., Chicago • London • New Delhi. Amacom, American Management Association, New York • Atlanta • Boston • Chicago • Kansas City • San Francisco • Washington, D.C. • Brussels • Mexico City • Tokyo • Toronto. This book is available at a special discount when ordered in bulk quantities. For information, contact Special Sales Department, AMACOM, a division of American Management Association, 1601 Broadway, New York, NY 10019. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional service. If legal advice or other expert assistance is required, the services of a competent professional person should be sought. © 1999 The Glenlake Publishing Company, Ltd. All rights reserved. Printed in the United States of America. ISBN: 0-8144-0444-8. This publication may not be reproduced, stored in a retrieval system, or transmitted in whole or in part, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printing number 10 9 8 7 6 5 4 3 2 1. Table of Contents: About the Author; Acknowledgements; List of Figures, Graphs, and Tables; Definition of Artificial Intelligence (AI) Terms; Appendix: Internet Resources. About the Author: William J. Raynor, Jr. earned a Ph.D.