
Solving Problems by Searching

Chapter 3

Outline

♦ Problem-solving agents
♦ Problem types
♦ Problem formulation
♦ Example problems
♦ Basic search algorithms

Problem-solving agents

function Simple-Problem-Solving-Agent(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation

    state ← Update-State(state, percept)
    if seq is empty then
        goal ← Formulate-Goal(state)
        problem ← Formulate-Problem(state, goal)
        seq ← Search(problem)
        if seq is failure then return a null action
    action ← First(seq)
    seq ← Rest(seq)
    return action

Note: this is offline problem solving; the solution is executed "eyes closed." Online problem solving involves acting without complete knowledge.

Example: Romania

On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.

Formulate goal:
    be in Bucharest
Formulate problem:
    states: various cities
    actions: drive between cities
Find solution:
    sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

[Figure: map of Romania showing the cities and road distances, e.g., Arad–Sibiu 140, Sibiu–Fagaras 99, Fagaras–Bucharest 211]

Example: vacuum world

[Figure: the eight vacuum-world states, numbered 1–8]

Single-state, start in #5. Solution?? [Right, Suck]
Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}; e.g., Right goes to {2, 4, 6, 8}. Solution??

Problem types

Deterministic, fully observable ⇒ single-state problem
    Agent knows exactly which state it will be in; solution is a sequence
Non-observable ⇒ conformant problem
    Agent may have no idea where it is; solution (if any) is a sequence
Nondeterministic and/or partially observable ⇒ contingency problem
    percepts provide new information about current state
    solution is a contingent plan or a policy
    often interleave search, execution
Unknown state space ⇒ exploration problem ("online")

Example: vacuum world (continued)

Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}. Solution?? [Right, Suck, Left, Suck]
Contingency:
♦ Murphy's Law (non-deterministic): Suck can dirty a clean carpet; start in #5
♦ Local sensing (partially observable): dirt, location only; start in {#5, #7}
Solution??

Single-state problem formulation

A problem is defined by four items:
♦ initial state, e.g., "at Arad"
♦ successor function S(x) = set of action–state pairs, e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, ...}
♦ goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
♦ path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state.

Selecting a state space

Real world is absurdly complex
⇒ state space must be abstracted for problem solving

♦ (Abstract) state = set of real states
♦ (Abstract) action = complex combination of real actions; e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
♦ For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
♦ (Abstract) solution = set of real paths that are solutions in the real world
♦ Each abstract action should be "easier" than the original problem!

Vacuum world contingency solutions (to the question above): [Right, while dirt do Suck]; [Right, if dirt then Suck]

Example: vacuum world state space graph

[Figure: the eight vacuum-world states connected by Left (L), Right (R), and Suck (S) transitions]

states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)

Example: The 8-puzzle

[Figure: start state 7 2 4 / 5 _ 6 / 8 3 1 and goal state 1 2 3 / 4 5 6 / 7 8 _]

states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move

[Note: optimal solution of n-Puzzle family is NP-hard]

Example: robotic assembly

[Figure: a multi-jointed robot arm]

states??: real-valued coordinates of robot joint angles and of parts of the object to be assembled
actions??: continuous motions of robot joints
goal test??: complete assembly with no robot included!
path cost??: time to execute

Tree search algorithms

Basic idea: offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)

function Tree-Search(problem, strategy) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier based on strategy
        if the node contains a goal state then return the corresponding solution
        else expand the chosen node and add the resulting nodes to the frontier
    end

Tree search example

[Figure: search tree rooted at Arad; expanding Arad yields Sibiu, Timisoara, Zerind; expanding Sibiu yields Arad, Fagaras, Oradea, Rimnicu Vilcea; expanding Timisoara yields Arad, Lugoj; expanding Zerind yields Arad, Oradea]
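The Tree-Search pseudocode above can be sketched in Python. The problem interface (`initial`, `goal_test`, `successors`) and the small road-map fragment are illustrative assumptions, not from the slides, and back-roads are omitted so the plain tree search terminates; the strategy is reduced to which end of the frontier is popped.

```python
# Minimal sketch of Tree-Search, assuming a problem object with
# `initial`, `goal_test(state)`, and `successors(state)`
# (these names are illustrative, not from the slides).

def tree_search(problem, pop_index):
    """pop_index=0 gives a FIFO frontier (breadth-first);
    pop_index=-1 gives a LIFO frontier (depth-first)."""
    frontier = [(problem.initial, [problem.initial])]  # (state, path) pairs
    while frontier:                                # empty frontier -> failure
        state, path = frontier.pop(pop_index)      # choose a leaf per strategy
        if problem.goal_test(state):               # goal test on the chosen node
            return path                            # the corresponding solution
        for succ in problem.successors(state):     # expand the chosen node
            frontier.append((succ, path + [succ]))
    return None                                    # failure

class RomaniaFragment:
    """Tiny fragment of the Romania map; back-edges are omitted so the
    tree search terminates without cycle checking."""
    initial = "Arad"
    roads = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
             "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
             "Rimnicu Vilcea": ["Pitesti"],
             "Fagaras": ["Bucharest"], "Pitesti": ["Bucharest"],
             "Timisoara": [], "Zerind": [], "Bucharest": []}
    def goal_test(self, state): return state == "Bucharest"
    def successors(self, state): return self.roads[state]
```

With a FIFO frontier this reproduces the slides' example solution Arad, Sibiu, Fagaras, Bucharest.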
Implementation: states vs. nodes

A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost g(x).
States do not have parents, children, depth, or path cost!

[Figure: a node with fields parent, action, depth = 6, path cost g = 6, and a pointer to its state]

The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

Search strategies

A strategy is defined by picking the order of node expansion.

Strategies are evaluated along the following dimensions:
    completeness—does it always find a solution if one exists?
    time complexity—number of nodes generated/expanded
    space complexity—maximum number of nodes in memory
    optimality—does it always find a least-cost solution?

Time and space complexity are measured in terms of
    b—maximum branching factor of the search tree
    d—depth of the shallowest solution
    m—maximum depth of the state space (may be ∞)

Uninformed search strategies

Uninformed strategies use only the information available in the problem definition:
♦ Breadth-first search
♦ Uniform-cost search
♦ Depth-first search
♦ Depth-limited search
♦ Iterative deepening search

Breadth-first search

Expand shallowest unexpanded node.

Implementation: frontier is a FIFO queue, i.e., new successors go at the end

[Figure: breadth-first expansion of a small binary tree: A; then B, C; then D, E, F, G]
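The FIFO-queue implementation above can be sketched in Python; the small binary tree matches the figure, and the function name is an illustrative assumption.

```python
from collections import deque

# Breadth-first expansion order on the figure's small binary tree.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}

def bfs_expansion_order(tree, root):
    frontier = deque([root])          # FIFO queue
    order = []
    while frontier:
        node = frontier.popleft()     # expand shallowest unexpanded node
        order.append(node)
        frontier.extend(tree[node])   # new successors go at the end
    return order
```

Expanding level by level gives A, then B, C, then D, E, F, G.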
Properties of breadth-first search

Complete?? Yes (if b is finite)
Time (number of visited nodes)?? 1 + b + b² + b³ + ⋯ + b^d = O(b^d), i.e., exponential in d
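To make the growth concrete, a quick sketch (function name illustrative) of the visited-node count as a function of branching factor b and solution depth d:

```python
# Number of nodes visited by breadth-first search to depth d with
# branching factor b: 1 + b + b^2 + ... + b^d, which is O(b^d).
def bfs_nodes(b, d):
    return sum(b ** i for i in range(d + 1))

# With b = 10, each extra level multiplies the count by roughly b:
# bfs_nodes(10, 2) -> 111, bfs_nodes(10, 4) -> 11111.
```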