Artificial Intelligence Illuminated

Total Pages: 16

File Type: PDF, Size: 1020 KB

Artificial Intelligence Illuminated
Ben Coppin

JONES AND BARTLETT PUBLISHERS

World Headquarters
Jones and Bartlett Publishers
40 Tall Pine Drive
Sudbury, MA 01776
978-443-5000
[email protected]
www.jbpub.com

Jones and Bartlett Publishers Canada
2406 Nikanna Road
Mississauga, ON L5C 2W6
CANADA

Jones and Bartlett Publishers International
Barb House, Barb Mews
London W6 7PA
UK

Copyright © 2004 by Jones and Bartlett Publishers, Inc.
Cover image © Photodisc

Library of Congress Cataloging-in-Publication Data
Coppin, Ben.
Artificial intelligence illuminated / by Ben Coppin.--1st ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-7637-3230-3
1. Artificial intelligence. I. Title.
Q335.C586 2004
006.3--dc22
2003020604

All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without written permission from the copyright owner.

Acquisitions Editor: Stephen Solomon
Production Manager: Amy Rose
Marketing Manager: Matthew Bennett
Editorial Assistant: Caroline Senay
Manufacturing Buyer: Therese Bräuer
Cover Design: Kristin E. Ohlin
Text Design: Kristin E. Ohlin
Composition: Northeast Compositors
Technical Artist: George Nichols
Printing and Binding: Malloy, Inc.
Cover Printing: Malloy, Inc.

Printed in the United States of America
08 07 06 05 04   10 9 8 7 6 5 4 3 2 1

For Erin

Preface

Who Should Read This Book

This book is intended for students of computer science at the college level, or students of other subjects that cover Artificial Intelligence. It is also intended to be an interesting and relevant introduction to the subject for other students or individuals who simply have an interest in the subject.

The book assumes very little knowledge of computer science, but does assume some familiarity with basic concepts of algorithms and computer systems. Data structures such as trees, graphs, and stacks are explained briefly in this book, but if you do not already have some familiarity with these concepts, you should probably seek out a suitable book on algorithms or data structures.

It would be an advantage to have some experience in a programming language such as C++ or Java, or one of the languages commonly used in Artificial Intelligence research, such as PROLOG and LISP, but this experience is neither necessary nor assumed.

Many of the chapters include practical exercises that require the reader to develop an algorithm or program in a programming language of his or her choice. Most readers should have no difficulty with these exercises. However, if any reader does not have the necessary skills, he or she should simply describe in words (or in pseudocode) how his or her programs work, giving as much detail as possible.

How to Read This Book

This book can be read in several ways. Some readers will choose to read the chapters in order from Chapter 1 through Chapter 21. Any chapter that uses material presented in another chapter gives a clear reference to that chapter, and readers following the book from start to finish should not need to jump forward at any point, as the chapter dependencies tend to work in a forward direction.

Another perfectly reasonable way to use this book is as a reference.
When a reader needs to know more about a particular subject, he or she can pick up this book and select the appropriate chapter or chapters, and can be illuminated on the subject (at least, that is the author's intent!). Chapter 12 contains a diagram that shows how the dependencies between chapters work (Section 12.6.2). This diagram shows, for example, that if a reader wants to read Chapter 8, it would be a good idea to have read Chapter 7 already.

This book is divided into six parts, each of which is further divided into a number of chapters. The chapters are laid out as follows:

Part 1: Introduction to Artificial Intelligence
Chapter 1: A Brief History of Artificial Intelligence
Chapter 2: Uses and Limitations
Chapter 3: Knowledge Representation

Part 2: Search
Chapter 4: Search Methodologies
Chapter 5: Advanced Search
Chapter 6: Game Playing

Part 3: Logic
Chapter 7: Propositional and Predicate Logic
Chapter 8: Inference and Resolution for Problem Solving
Chapter 9: Rules and Expert Systems

Part 4: Machine Learning
Chapter 10: Introduction to Machine Learning
Chapter 11: Neural Networks
Chapter 12: Probabilistic Reasoning and Bayesian Belief Networks
Chapter 13: Artificial Life: Learning through Emergent Behavior
Chapter 14: Genetic Algorithms

Part 5: Planning
Chapter 15: Introduction to Planning
Chapter 16: Planning Methods

Part 6: Advanced Topics
Chapter 17: Advanced Knowledge Representation
Chapter 18: Fuzzy Reasoning
Chapter 19: Intelligent Agents
Chapter 20: Understanding Language
Chapter 21: Machine Vision

Each chapter includes an introduction that explains what the chapter covers, a summary of the chapter, some exercises and review questions, and some suggestions for further reading. There is a complete bibliography at the back of the book.

This book also has a glossary, which includes a brief definition of most of the important terms used in this book. When a new term is introduced in the text it is highlighted in bold, and most of these words are included in the glossary. The only such terms that are not included in the glossary are the ones that are defined in the text but are not used elsewhere in the book.

The use of third-person pronouns is always a contentious issue for authors of textbooks, and this author has chosen to use "he" and "she" interchangeably. In some cases the word "he" is used, and in other cases "she." This is not intended to follow any particular pattern, or to make any representations about the genders, but is simply in the interests of balance.

The first few chapters of this book provide introductory material, explaining the nature of Artificial Intelligence and providing a historical background, as well as describing some of the connections with other disciplines. Some readers will prefer to skip these chapters, but it is advisable to at least glance through Chapter 3 to ensure that you are familiar with the concepts of that chapter, as they are vital to the understanding of most of the rest of the book.

Acknowledgments

Although I wrote this book single-handedly, it was not without help.
I would like to thank, in chronological order, Frank Abelson; Neil Salkind and everyone at Studio B; Michael Stranz, Caroline Senay, Stephen Solomon, and Tracey Chapman at Jones & Bartlett; also a number of people who read chapters of the book: Martin Charlesworth, Patrick Coyle, Peter and Petra Farrell, Robert Kealey, Geoffrey Price, Nick Pycraft, Chris Swannack, Edwin Young, my parents, Tony and Frances, and of course Erin—better late than never.

Thanks also to:

The MIT Press for the excerpt from 'Learning in Multiagent Systems' by Sandip Sen and Gerhard Weiss, © 2001, The MIT Press.

The MIT Press for the excerpt from 'Adaptation in Natural and Artificial Systems' by John H. Holland, © 1992, The MIT Press.

The MIT Press for the excerpt from 'The Artificial Life Roots of Artificial Intelligence' by Luc Steels, © 1994, the Massachusetts Institute of Technology.

The IEEE for the excerpt from 'Steps Towards Artificial Intelligence' by Marvin Minsky, © 2001, IEEE.

I have attempted to contact the copyright holders of all copyrighted quotes used in this book. If I have used any quotes without permission, then this was inadvertent, and I apologize. I will take all measures possible to rectify the situation in future printings of the book.

Contents

Preface v

PART 1 Introduction to Artificial Intelligence 1

Chapter 1 A Brief History of Artificial Intelligence 3
1.1 Introduction 3
1.2 What Is Artificial Intelligence? 4
1.3 Strong Methods and Weak Methods 5
1.4 From Aristotle to Babbage 6
1.5 Alan Turing and the 1950s 7
1.6 The 1960s to the 1990s 9
1.7 Philosophy 10
1.8 Linguistics 11
1.9 Human Psychology and Biology 12
1.10 AI Programming Languages 12
1.10.1 PROLOG 13
1.10.2 LISP 14
1.11 Chapter Summary 15
1.12 Review Questions 16
1.13 Further Reading 17

Chapter 2 Uses and Limitations 19
2.1 Introduction 19
2.2 The Chinese Room 20
2.3 HAL—Fantasy or Reality? 21
2.4 AI in the 21st Century 23
2.5 Chapter Summary 24
2.6 Review Questions 24
2.7 Further Reading 25

Chapter 3 Knowledge Representation 27
3.1 Introduction 27
3.2 The Need for a Good Representation 28
3.3 Semantic Nets 29
3.4 Inheritance 31
3.5 Frames 32
3.5.1 Why Are Frames Useful? 34
3.5.2 Inheritance 34
3.5.3 Slots as Frames 35
3.5.4 Multiple Inheritance 36
3.5.5 Procedures 37
3.5.6 Demons 38
3.5.7 Implementation 38
3.5.8 Combining Frames with Rules 40
3.5.9 Representational Adequacy 40
3.6 Object-Oriented Programming 41
3.7 Search Spaces 42
3.8 Semantic Trees 44
3.9 Search Trees 46
3.9.1 Example 1: Missionaries and Cannibals 47
3.9.2 Improving the Representation 49
3.9.3 Example 2: The Traveling Salesman 50
3.9.4 Example 3: The Towers of Hanoi 54
3.9.5 Example 4: Describe and Match 56
3.10 Combinatorial Explosion 57
3.11 Problem Reduction 57
3.12 Goal Trees 58
3.12.1 Top Down or Bottom Up? 60
3.12.2 Uses of Goal Trees 61
Example 1: Map Coloring
Example 2: Proving Theorems
Example 3: Parsing Sentences 63
Example 4: Games
3.13 Chapter Summary 64
3.14 Review Questions 65
3.15 Exercises 65
3.16 Further Reading 66

PART 2 Search 69

Chapter 4 Search Methodologies
Recommended publications
  • Decision-Making and Problem Solving Methods in Automation Technology (NASA Technical Memorandum 83216)
https://ntrs.nasa.gov/search.jsp?R=19830020606

NASA Technical Memorandum 83216 (NASA-TM-83216)
DECISION-MAKING AND PROBLEM SOLVING METHODS IN AUTOMATION TECHNOLOGY
W. W. Hankins, J. E. Pennington, and L. K. Barker, May 1983
National Aeronautics and Space Administration, Langley Research Center, Hampton, Virginia 23665

TABLE OF CONTENTS
SUMMARY (p. 1)
INTRODUCTION (p. 2)
DETECTION AND RECOGNITION (p. 3): Types of Sensors; Computer Vision; CONSIGHT-1 System; SRI Vision Module; Touch and Contact Sensors
PLANNING AND SCHEDULING (p. 7): Operations Research Methods; Linear Systems; Nonlinear Systems; Network Models; Queueing Theory; Dynamic Programming; Simulation; Artificial Intelligence (AI) Methods; Nets of Action Hierarchies (NOAH); System Monitoring; Resource Allocation; Robotics
LEARNING (p. 13): Expert Systems; Learning Concept and Linguistic Constructions Through English Dialogue; Natural Language Understanding; Robot Learning; The Computer as an Intelligent Partner; Learning Control
THEOREM PROVING (p. 15): Logical Inference; Disproof by Counterexample; Proof by Contradiction; Resolution Principle; Heuristic Methods; Nonstandard Techniques; Mathematical Induction; Analogies; Question-Answering Systems; Man-Machine Systems
  • AI-KI-B Heuristic Search
AI-KI-B Heuristic Search
Ute Schmid & Diedrich Wolter; Practice: Johannes Rabold
Cognitive Systems and Smart Environments, Applied Computer Science, University of Bamberg
Last change: June 3, 2019, 10:05

Outline: introduction of heuristic search algorithms, based on foundations of problem spaces and search.
• Uninformed Systematic Search: Depth-First Search (DFS), Breadth-First Search (BFS), Complexity of Blocks-World
• Cost-based Optimal Search: Uniform Cost Search, Cost Estimation (Heuristic Function)
• Heuristic Search Algorithms: Hill Climbing (Depth-First, Greedy), Branch and Bound Algorithms (BFS-based), Best First Search, A*
• Designing Heuristic Functions
• Problem Types

Cost and Cost Estimation
• The "real" cost is known for each operator.
• The accumulated cost g(n) for a leaf node n on a partially expanded path can be calculated.
• For problems where each operator has the same cost, or where no information about costs is available, all operator applications have equal cost values. For cost values of 1, the accumulated cost g(n) is equal to the path length d.
• Sometimes available: heuristics for estimating the remaining cost to reach the final state.
• ĥ(n): estimated cost to reach a goal state from node n
• "Bad" heuristics can misguide search!

Cost and Cost Estimation (cont.)
Evaluation function: f̂(n) = g(n) + ĥ(n)

Cost and Cost Estimation (cont.)
• The "true cost" of an optimal path from an initial state s to a final state is f(s).
• For a node n on this path, f can be decomposed into the already performed steps with cost g(n) and the steps yet to be performed with true cost h(n).
• ĥ(n) can be an estimate that is greater or smaller than the true cost.
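A small sketch may help make the evaluation function in the excerpt above concrete. The following best-first (A*-style) search orders frontier nodes by f(n) = g(n) + h(n); the example graph, step costs, and heuristic values are hypothetical placeholders for illustration, not taken from the slides.

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n).

    graph: dict mapping node -> list of (neighbor, step_cost)
    h:     dict mapping node -> estimated remaining cost h(n)
    Returns (path, cost), or (None, inf) if the goal is unreachable.
    """
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

# Hypothetical example graph and heuristic (not from the slides).
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # -> (['S', 'A', 'B', 'G'], 4)
```

With an estimate ĥ that never overestimates the true remaining cost, this kind of search returns a cheapest path, which is the property the slides associate with A*.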
  • 2011-2012 ABET Self-Study Questionnaire for Engineering
ABET Self-Study Report for Computer Engineering, July 1, 2012

CONFIDENTIAL: The information supplied in this Self-Study Report is for the confidential use of ABET and its authorized agents, and will not be disclosed without authorization of the institution concerned, except for summary data not identifiable to a specific institution.

Table of Contents
BACKGROUND INFORMATION (p. 5)
A. Contact Information (p. 5)
B. Program History (p. 6)
C. Options (p. 7)
D. Organizational Structure (p. 8)
E. Program Delivery Modes (p. 8)
F. Program Locations (p. 9)
G. Deficiencies, Weaknesses or Concerns from Previous Evaluation(s) (p. 9)
H. Joint Accreditation (p. 9)
CRITERION 1. STUDENTS (p. 10)
A. Student Admissions
  • Adversarial Search
Artificial Intelligence, CS 165A, Oct 29, 2020. Instructor: Prof. Yu-Xiang Wang
• Examples of heuristics in A*-search
• Games and adversarial search

Recap: Search algorithms
• State-space diagram vs. search tree
• Uninformed search algorithms: BFS / DFS, Depth-Limited Search, Iterative Deepening Search, Uniform Cost Search
• Informed search (with a heuristic function h): Greedy Best-First Search (not complete / optimal), A* Search (complete / optimal if h is admissible)

Recap: Summary table of uninformed search (Section 3.4.7 in the AIMA book)

Recap: A* Search (pronounced "A-star")
• Uniform-cost search minimizes g(n) ("past" cost)
• Greedy search minimizes h(n) ("expected" or "future" cost)
• A* Search combines the two: minimize f(n) = g(n) + h(n), accounting for the "past" and the "future", and estimating the cheapest solution (complete path) through node n.

function A*-SEARCH(problem, h) returns a solution or failure
    return BEST-FIRST-SEARCH(problem, f)

Recap: Avoiding repeated states using A* Search
• Is GRAPH-SEARCH optimal with A*? Try it with TREE-SEARCH and GRAPH-SEARCH.
[Figure: a small example graph with nodes A, B, C, D, E, edge costs, and heuristic values h(n).]
Graph Search: Step 1: among B, C, E, choose C. Step 2: among B, E, D, choose B. Step 3: among D, E, choose E (you are not going to select C again).

Recap: Consistency (monotonicity) of heuristic h
• A heuristic is consistent (or monotonic) provided that, for any node n and for any successor n' generated by action a with cost c(n, a, n'):
  h(n) ≤ c(n, a, n') + h(n')
  (akin to the triangle inequality).
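The consistency condition at the end of this excerpt is easy to check mechanically. Below is a small, illustrative helper that verifies h(n) ≤ c(n, a, n') + h(n') on every edge of an explicit graph; the graph representation, names, and heuristic values are assumptions for the example, not material from the lecture.

```python
def is_consistent(graph, h):
    """Check the consistency (monotonicity) condition for a heuristic.

    graph: dict mapping node -> list of (successor, step_cost)
    h:     dict mapping node -> heuristic estimate h(n)
    Returns True if h(n) <= c(n, a, n') + h(n') holds for every edge.
    """
    for n, successors in graph.items():
        for n_prime, cost in successors:
            if h[n] > cost + h[n_prime]:
                return False
    return True

# Hypothetical graph and heuristic values (placeholders for illustration).
graph = {"A": [("B", 1), ("C", 1)], "B": [("D", 4)], "C": [("D", 4)], "D": []}
consistent_h   = {"A": 5, "B": 4, "C": 4, "D": 0}
inconsistent_h = {"A": 5, "B": 1, "C": 4, "D": 0}   # h(A) > c(A,B) + h(B)
print(is_consistent(graph, consistent_h))    # True
print(is_consistent(graph, inconsistent_h))  # False
```

A consistent heuristic guarantees that f values never decrease along a path, which is what makes plain GRAPH-SEARCH with A* safe in the sense the slides discuss.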
  • Search Algorithms: Breadth-First Search (BFS), Uniform-Cost Search, Depth-First Search (DFS), Depth-Limited Search, Iterative Deepening, Best-First Search
9/04/2013

Uninformed (blind) search algorithms: Breadth-First Search (BFS), Uniform-Cost Search, Depth-First Search (DFS), Depth-Limited Search, Iterative Deepening; Best-First Search.
HW#1 due today. HW#2 due Monday, 9/09/13, in class. Continue reading Chapter 3.

Formulate — Search — Execute
1. Goal formulation
2. Problem formulation
3. Search algorithm
4. Execution

A problem is defined by four items:
1. initial state
2. actions or successor function
3. goal test (explicit or implicit)
4. path cost (the sum of step costs, ∑ c(x, a, y))
A solution is a sequence of actions leading from the initial state to a goal state.

Search algorithms have the following basic form:
do until terminating condition is met:
    if no more nodes to consider, then return fail
    select node   {choose a node (leaf) on the tree}
    if chosen node is a goal, then return success
    expand node   {generate successors & add to tree}

Analysis:
• b = branching factor
• d = depth
• m = maximum depth
• g(n) = the total cost of the path on the search tree from the root node to node n
• h(n) = the straight-line distance from n to G

n      S    A    B    C    G
h(n)   5.8  3    2.2  2    0

Uninformed search strategies use only the information available in the problem definition: breadth-first search (and uniform-cost search), depth-first search (and depth-limited search, iterative deepening search).

Breadth-first search:
• Expand the shallowest unexpanded node.
• Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.

Best-first search idea: order the branches under each node so that the "most promising" ones are explored first.
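To make the FIFO-queue implementation note above concrete, here is a minimal breadth-first search sketch; the example graph and the start/goal labels are illustrative assumptions, not taken from the handout.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS: expand the shallowest unexpanded node; the fringe is a FIFO queue."""
    fringe = deque([[start]])   # queue of paths; new successors go at the end
    visited = {start}
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                fringe.append(path + [successor])
    return None  # no path found

# Hypothetical graph for illustration.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # -> ['S', 'B', 'G']
```

Swapping the deque for a stack gives depth-first search, and replacing it with a priority queue keyed on accumulated path cost gives uniform-cost search.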
  • Guru Nanak Dev University Amritsar
FACULTY OF ENGINEERING & TECHNOLOGY
SYLLABUS FOR B.TECH. COMPUTER ENGINEERING (Credit Based Evaluation and Grading System)
(Semesters I to VIII), Session 2020-21, Batch from Year 2020 to Year 2024
GURU NANAK DEV UNIVERSITY, AMRITSAR

Note: (i) Copyrights are reserved. Nobody is allowed to print it in any form. Defaulters will be prosecuted. (ii) Subject to change in the syllabi at any time. Please visit the University website from time to time.

CSA1: B.Tech. (Computer Engineering) Semester System (Credit Based Evaluation and Grading System), Batch from Year 2020 to Year 2024

SCHEME, SEMESTER I (Course Code, Course Title, L-T-P, Credits):
1. CEL120 Engineering Mechanics, 3-1-0, 4
2. MEL121 Engineering Graphics & Drafting Using AutoCAD, 3-0-1, 4
3. MTL101 Mathematics-I, 3-1-0, 4
4. PHL183 Physics, 4-0-1, 5
5. MEL110 Introduction to Engg. Materials, 3-0-0, 3
6. Elective-I, 2-0-0, 2
7. SOA-101 Drug Abuse: Problem, Management and Prevention (Interdisciplinary Course, Compulsory), 2-0-0, 2

List of Electives I:
1. PBL121 Punjabi (Compulsory), 2-0-0, 2; OR
2. PBL122* Mudhli Punjabi (Basic Punjabi), 2-0-0, 2; OR
3. HSL101* Punjab History & Culture (1450-1716), 2-0-0, 2
Total: L 20, T 2, P 2; Credits 24

SEMESTER II (Course Code, Course Title, L-T-P, Credits):
1. CYL197 Engineering Chemistry, 3-0-1, 4
2. MTL102 Mathematics-II, 3-1-0, 4
3. ECL119 Basic Electrical & Electronics Engineering, 4-0-1, 5
4. CSL126 Fundamentals of IT & Programming using Python, 2-1-1, 4
5.
  • Lecture Notes
Lecture Notes, Ganesh Ramakrishnan, August 15, 2007

Contents
1 Search and Optimization 3
1.1 Search 3
1.1.1 Depth First Search 5
1.1.2 Breadth First Search (BFS) 6
1.1.3 Hill Climbing 6
1.1.4 Beam Search 8
1.2 Optimal Search 9
1.2.1 Branch and Bound 10
1.2.2 A* Search 11
1.3 Constraint Satisfaction 11
1.3.1 Algorithms for map coloring 13
1.3.2 Resource Scheduling Problem 15

Chapter 1: Search and Optimization

Consider the following problem:

Cab Driver Problem. A cab driver needs to find his way in a city from one point (A) to another (B). Some routes are blocked in gray (probably because they are under construction). The task is to find the path(s) from (A) to (B). Figure 1.1 shows an instance of the problem.

Figure 1.2 shows the map of the United States. Suppose you are set the following task:

Map Coloring Problem. Color the map such that states that share a boundary are colored differently.

A simple-minded approach to solve this problem is to keep trying color combinations, facing dead ends, backtracking, etc. The program could run for a very, very long time (estimated at a lower bound of 10^10 years) before it finds a suitable coloring. On the other hand, we could try another approach which will perform the task much faster and which we will discuss subsequently. The second problem is actually isomorphic to the first problem and to problems of resource allocation in general.
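For the map-coloring problem mentioned in the excerpt, a basic backtracking search is the usual starting point. The sketch below is a generic, illustrative map-colorer; the adjacency map of four hypothetical regions is an assumption, not the example from the notes.

```python
def color_map(neighbors, colors):
    """Backtracking search: assign a color to each region so that no two
    adjacent regions share a color. Returns an assignment dict or None."""
    regions = list(neighbors)
    assignment = {}

    def consistent(region, color):
        return all(assignment.get(nb) != color for nb in neighbors[region])

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for color in colors:
            if consistent(region, color):
                assignment[region] = color
                if backtrack(i + 1):
                    return True
                del assignment[region]   # dead end: undo and try the next color
        return False

    return assignment if backtrack(0) else None

# Hypothetical four-region map for illustration.
neighbors = {"WA": ["OR", "ID"], "OR": ["WA", "ID", "CA"],
             "ID": ["WA", "OR"], "CA": ["OR"]}
print(color_map(neighbors, ["red", "green", "blue"]))
```

A full constraint-satisfaction treatment would add variable ordering and constraint propagation on top of this basic backtracking, which is where the large speedups come from.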
  • Lecture 9: Games I
Lecture 9: Games I. CS221 / Spring 2018 / Sadigh

Course plan: search problems, Markov decision processes, constraint satisfaction problems, adversarial games, Bayesian networks; reflex, states, variables, logic; from "low-level intelligence" to "high-level intelligence"; machine learning.

This lecture will be about games, which have been one of the main testbeds for developing AI programs since the early days of AI. Games are distinguished from the other tasks that we've considered so far in this class in that they make explicit the presence of other agents, whose utility is not generally aligned with ours. Thus, the optimal strategy (policy) for us will depend on the strategies of these agents. Moreover, their strategies are often unknown and adversarial. How do we reason about this?

A simple game (Example: game 1). You choose one of the three bins; I choose a number from that bin. Your goal is to maximize the chosen number.
Bin A: {-50, 50}   Bin B: {1, 3}   Bin C: {-5, 15}

Which bin should you pick? It depends on your mental model of the other player (me). If you think I'm working with you (unlikely), then you should pick A in hopes of getting 50. If you think I'm against you (likely), then you should pick B so as to guard against the worst case (get 1). If you think I'm just acting uniformly at random, then you should pick C so that on average things are reasonable (get 5 in expectation).

Roadmap: games, expectimax; minimax, expectiminimax; evaluation functions; alpha-beta pruning.

Game tree. Key idea: in a game tree, each node is a decision point for a player.
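The bin example in the excerpt can be computed directly. The short sketch below is an illustrative reconstruction (not code from the lecture) that evaluates each bin under an adversarial (minimax) opponent and under a uniformly random opponent.

```python
def best_bin(bins, opponent):
    """Return (bin_name, value), maximizing the value the opponent yields.

    opponent: function mapping a list of numbers to the value the second
    player would produce from that bin (min for adversarial, mean for random).
    """
    values = {name: opponent(numbers) for name, numbers in bins.items()}
    choice = max(values, key=values.get)
    return choice, values[choice]

bins = {"A": [-50, 50], "B": [1, 3], "C": [-5, 15]}

# Adversarial (minimax) opponent: pick B, guaranteeing at least 1.
print(best_bin(bins, min))                            # ('B', 1)
# Uniformly random opponent: pick C, worth 5 in expectation.
print(best_bin(bins, lambda xs: sum(xs) / len(xs)))   # ('C', 5.0)
```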
  • An English-Persian Dictionary of Algorithms and Data Structures
An English-Persian Dictionary of Algorithms and Data Structures

Sample entries: algorithm; all-pairs shortest path; absolute performance guarantee; alphabet; abstract data type; alternating path; accepting state; alternating Turing machine; Ackerman's function; alternation; Ackermann's function; amortized cost; acyclic digraph; amortized worst case; acyclic directed graph; ancestor; acyclic graph; and; adaptive heap sort; ANSI; adaptive k-d tree; antichain; adaptive sort; antisymmetric; address-calculation sort; approximation algorithm; adjacency-list representation; arc; adjacency-matrix representation; array; array index; adjacency list; array merging; adjacency matrix; articulation point; adjacent; articulation vertex; admissible vertex; assignment problem; adversary; association list; associative binary function; associative array; binary GCD algorithm; asymptotic bound; binary insertion sort; asymptotic lower bound; binary priority queue; asymptotic space complexity; binary relation; binary search; asymptotic time complexity; binary trie; asymptotic upper bound; bingo sort; asymptotically equal to; bipartite graph; asymptotically tight bound; bipartite matching
  • Uninformed Search Strategies
Chapter 1: Uninformed Search Strategies

Exercise 1.1. Given the search problem defined by the following graph, where A is the initial state, L is the goal state, and the cost of each action is indicated on the corresponding edge:
[Figure: a graph over states A, B, C, D, E, F, G, H, I, L with edge costs between 1 and 4.]
Solve the search problem using the following search strategies:
1. breadth-first with elimination of repeated states
2. uniform-cost with elimination of repeated states
3. depth-first with elimination of repeated states

Answer of Exercise 1.1.
Breadth-first search with elimination of repeated states: [search tree omitted].
Uniform-cost search with elimination of repeated states: [search tree omitted]. In case of ties, nodes have been considered in lexicographic order.
Depth-first search with elimination of repeated states: [search tree omitted].

Exercise 1.2. You have three containers that may hold 12 liters, 8 liters, and 3 liters of water, respectively, as well as access to a water faucet. You can fill a container from the faucet, pour it into another container, or empty it onto the ground.
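Exercise 1.2 is a classic state-space formulation. As a sketch of how one might encode it, here is a breadth-first search over (12-liter, 8-liter, 3-liter) container states; since the excerpt cuts off before stating the actual goal, the goal used below (measuring out exactly 1 liter) is an assumed example.

```python
from collections import deque

CAPACITIES = (12, 8, 3)

def successors(state):
    """All states reachable in one action: fill, empty, or pour between containers."""
    results = []
    for i in range(3):
        # Fill container i from the faucet.
        filled = list(state); filled[i] = CAPACITIES[i]; results.append(tuple(filled))
        # Empty container i onto the ground.
        emptied = list(state); emptied[i] = 0; results.append(tuple(emptied))
        # Pour container i into container j.
        for j in range(3):
            if i != j:
                amount = min(state[i], CAPACITIES[j] - state[j])
                poured = list(state)
                poured[i] -= amount
                poured[j] += amount
                results.append(tuple(poured))
    return results

def solve(start=(0, 0, 0), goal_amount=1):
    """BFS for a state in which some container holds exactly goal_amount liters."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if goal_amount in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve())  # a shortest sequence of fills/pours ending with 1 liter somewhere
```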
  • Prathyusha Engineering College Lecture Notes
PRATHYUSHA ENGINEERING COLLEGE, LECTURE NOTES
Course code: CS8691; Name of the course: ARTIFICIAL INTELLIGENCE; Regulation: 2017; Course faculty: Dr. Kavitha V. R.
Department of Computer Science and Engineering, VI Semester (EVEN 2019-20)

CS8691 ARTIFICIAL INTELLIGENCE

OBJECTIVES:
• To understand the various characteristics of intelligent agents
• To learn the different search strategies in AI
• To learn to represent knowledge in solving AI problems
• To understand the different ways of designing software agents
• To know about the various applications of AI

UNIT I: INTRODUCTION (9)
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.

UNIT II: PROBLEM SOLVING METHODS (9)
Problem Solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games.

UNIT III: KNOWLEDGE REPRESENTATION (9)
First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining – Backward Chaining – Resolution – Knowledge Representation – Ontological Engineering – Categories and Objects – Events – Mental Events and Mental Objects – Reasoning Systems for Categories – Reasoning with Default Information.

UNIT IV: SOFTWARE AGENTS (9)
Architecture for Intelligent Agents – Agent Communication – Negotiation
  • Minimax Search, Alpha-Beta Search, Games with an Element of Chance, State of the Art
Foundations of AI, 6. Adversarial Search: Search Strategies for Games, Games with Chance, State of the Art
Wolfram Burgard, Luc De Raedt & Bernhard Nebel

Contents
• Game Theory
• Board Games
• Minimax Search
• Alpha-Beta Search
• Games with an Element of Chance
• State of the Art

Games & Game Theory
• When there is more than one agent, the future is no longer easily predictable for the agent.
• In competitive environments (when there are conflicting goals), adversarial search becomes necessary.
• Mathematical game theory gives the theoretical framework (even for non-competitive environments).
• In AI, we usually consider only a special type of games, namely board games, which can be characterized in game-theory terminology as extensive, deterministic, two-player, zero-sum games with perfect information.

Why Board Games?
• Board games are one of the oldest branches of AI (Shannon, Turing, Wiener, and Shannon 1950).
• Board games present a very abstract and pure form of competition between two opponents and clearly require a form of "intelligence".
• The states of a game are easy to represent.
• The possible actions of the players are well defined.
• Realization of the game as a search problem: the world states are fully accessible.
• It is nonetheless a contingency problem, because the characteristics of the opponent are not known in advance.
Note: Nowadays, we also consider sports games.

Problems
• Board games are not only difficult because they are contingency problems, but also because the search trees can become astronomically large.
• Example, chess: on average 35 possible actions from every position and 100 possible moves, giving 35^100 nodes in the search tree (with "only" approx.
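Since the slides above list both minimax and alpha-beta search, here is a compact, generic sketch of minimax with alpha-beta pruning over an explicit game tree; the tiny example tree is a made-up illustration, not one taken from the slides.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning.

    node: either a numeric leaf value or a list of child nodes.
    Returns the minimax value of the node.
    """
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cutoff: MIN will avoid this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:             # alpha cutoff: MAX will avoid this branch
                break
        return value

# Hypothetical depth-2 game tree: MAX chooses a branch, MIN then picks a leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))   # -> 3
```

On this toy tree, pruning skips the remaining leaves of the second branch as soon as the value 2 is found there, because MAX already has 3 available from the first branch.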