Prathyusha Engineering College Lecture Notes


PRATHYUSHA ENGINEERING COLLEGE
Department of Computer Science and Engineering

LECTURE NOTES
Course code: CS8691
Name of the course: ARTIFICIAL INTELLIGENCE
Regulation: 2017
Course faculty: Dr. KAVITHA V. R.
Semester: VI (EVEN 2019-20)

CS8691 ARTIFICIAL INTELLIGENCE

OBJECTIVES:
To understand the various characteristics of intelligent agents
To learn the different search strategies in AI
To learn to represent knowledge in solving AI problems
To understand the different ways of designing software agents
To know about the various applications of AI

UNIT I INTRODUCTION 9
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.

UNIT II PROBLEM SOLVING METHODS 9
Problem Solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games

UNIT III KNOWLEDGE REPRESENTATION 9
First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining – Backward Chaining – Resolution – Knowledge Representation – Ontological Engineering – Categories and Objects – Events – Mental Events and Mental Objects – Reasoning Systems for Categories – Reasoning with Default Information

UNIT IV SOFTWARE AGENTS 9
Architecture for Intelligent Agents – Agent Communication – Negotiation and Bargaining – Argumentation among Agents – Trust and Reputation in Multi-agent Systems.

UNIT V APPLICATIONS 9
AI Applications – Language Models – Information Retrieval – Information Extraction – Natural Language Processing – Machine Translation – Speech Recognition – Robot – Hardware – Perception – Planning – Moving

TOTAL: 45 PERIODS

OUTCOMES:
Upon completion of the course, the students will be able to:
Use appropriate search algorithms for any AI problem
Represent a problem using first order and predicate logic
Provide the apt agent strategy to solve a given problem
Design software agents to solve a problem
Design applications for NLP that use Artificial Intelligence

TEXT BOOKS:
1. S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Prentice Hall, Third Edition, 2009.
2. I. Bratko, "Prolog: Programming for Artificial Intelligence", Fourth Edition, Addison-Wesley Educational Publishers Inc., 2011.

REFERENCES:
1. M. Tim Jones, "Artificial Intelligence: A Systems Approach (Computer Science)", Jones and Bartlett Publishers, Inc., First Edition, 2008.
2. Nils J. Nilsson, "The Quest for Artificial Intelligence", Cambridge University Press, 2009.
3. William F. Clocksin and Christopher S. Mellish, "Programming in Prolog: Using the ISO Standard", Fifth Edition, Springer, 2003.
4. Gerhard Weiss, "Multi Agent Systems", Second Edition, MIT Press, 2013.
5. David L. Poole and Alan K. Mackworth, "Artificial Intelligence: Foundations of Computational Agents", Cambridge University Press, 2010.

UNIT I INTRODUCTION 9
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.

1. INTRODUCTION
Artificial intelligence is already all around us, from web search to video games. AI methods plan driving directions, filter email spam, and focus digital cameras on faces. AI lets us guide our phones with our voice and read foreign newspapers in English. Beyond today's applications, AI is at the core of many new technologies that will shape our future. From self-driving cars to household robots, advancements in AI help transform science fiction into real systems.

The Foundations of Artificial Intelligence
The disciplines that contributed ideas, viewpoints, and techniques to AI are outlined below.

Philosophy (428 B.C. – present)
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which allowed one to generate conclusions mechanically, given initial premises.
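As a small added illustration (not part of the original notes), the classic syllogism "All men are mortal; Socrates is a man; therefore Socrates is mortal" can be written as two Prolog clauses, Prolog being the language prescribed for Unit III, so that the conclusion is derived mechanically from the premises:

    % Premise 1: every man is mortal.
    mortal(X) :- man(X).

    % Premise 2: Socrates is a man.
    man(socrates).

    % Conclusion, derived mechanically by the interpreter:
    % ?- mortal(socrates).
    % true.

The interpreter answers the query purely by matching it against the rule and the fact, which is exactly the sense in which Aristotle's syllogisms allow conclusions to be generated mechanically from premises.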
                      Computer                          Human Brain
Computational units   1 CPU, 10^8 gates                 10^11 neurons
Storage units         10^10 bits RAM, 10^11 bits disk   10^11 neurons, 10^14 synapses
Cycle time            10^-9 sec                         10^-3 sec
Bandwidth             10^10 bits/sec                    10^14 bits/sec
Memory updates/sec    10^9                              10^14

Table 1.1 A crude comparison of the raw computational resources available to computers (circa 2003) and brains. The computer's numbers have increased by a factor of at least 10 every few years; the brain's numbers have not changed for the last 10,000 years.

Brains and digital computers perform quite different tasks and have different properties. Table 1.1 shows that there are about 1,000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.
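The "around 2020" date can be checked with a rough back-of-the-envelope sketch (added here, not from the original notes): going from 10^8 gates to 10^11 neurons is a factor of about 1,000 ≈ 2^10, i.e. roughly ten doublings; assuming a doubling period of about 18 months from the circa-2003 baseline gives about 15 years, landing near 2018-2020. The Prolog fragment below encodes the same arithmetic (the predicate name years_to_parity and the 18-month doubling period are illustrative assumptions):

    % Sketch only: assumes the gate count doubles every DoublingYears years,
    % starting from the circa-2003 figures in Table 1.1.
    years_to_parity(GatesNow, TargetUnits, DoublingYears, Years) :-
        Doublings is log(TargetUnits / GatesNow) / log(2),  % doublings needed
        Years is Doublings * DoublingYears.

    % ?- years_to_parity(1.0e8, 1.0e11, 1.5, Years).
    % Years = 14.95...   (about 15 years after 2003, i.e. around 2018-2020)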
Psychology (1879 – present)
The origins of scientific psychology are traced back to the work of the German physiologist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig. In the US, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop held at MIT in September 1956.

Computer engineering (1940 – present)
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice. AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs.

Control theory and cybernetics (1948 – present)
Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that kept the flow of water running through it at a constant, predictable pace. Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time.

Linguistics (1957 – present)
Modern linguistics and AI were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.

1.1.3 The History of Artificial Intelligence

The gestation of artificial intelligence (1943-1955)
There were a number of early examples of work that can be characterized as AI, but it was Alan Turing who first articulated a complete vision of AI in his 1950 article "Computing Machinery and Intelligence." Therein, he introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning.

The birth of artificial intelligence (1956)
McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. Perhaps the longest-lasting thing to come out of the workshop was an agreement to adopt McCarthy's new name for the field: artificial intelligence.

Early enthusiasm, great expectations (1952-1969)
The early years of AI were full of successes, in a limited way. The General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell as an attempt at a universal problem-solving machine. The order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. Thus, GPS was probably the first program to embody the "thinking humanly" approach. At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky. Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford. Tom Evans's ANALOGY program (1968) solved geometric analogy problems of the kind that appear in IQ tests, such as the one in Figure 1.1.

Figure 1.1 Tom Evans's ANALOGY program could solve geometric analogy problems of this kind.

A dose of reality (1966-1973)
From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted: "It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied."

Knowledge-based systems: The key to power? (1969-1979)
Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the computer software expert system that it produced. Its primary aim was to help organic chemists identify unknown organic molecules by analyzing their mass spectra and using knowledge of chemistry. It was carried out at Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and Carl Djerassi.

AI becomes an industry (1980-present)
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog.