9-04-2013

 Uninformed (blind) search algorithms
◦ Breadth-First Search (BFS)
◦ Uniform-Cost Search
◦ Depth-First Search (DFS)
◦ Depth-Limited Search
◦ Iterative Deepening
 Best-First Search

HW#1 due today

HW#2 due Monday, 9/09/13, in class

Continue reading Chapter 3

Formulate — Search — Execute

1. Goal formulation
2. Problem formulation
3. Search algorithm
4. Execution

A problem is defined by four items:

1. initial state
2. actions or successor function
3. goal test (explicit or implicit)
4. path cost (∑ c(x,a,y) – sum of step costs)

A solution is a sequence of actions leading from the initial state to a goal state

 Search algorithms have the following basic form:

do until terminating condition is met
    if no more nodes to consider then return fail;
    select node;    {choose a node (leaf) on the search tree}
    if chosen node is a goal then return success;
    expand node;    {generate successors & add to tree}
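A minimal Python sketch of this loop (the problem interface names initial_state, actions(s), result(s, a), and is_goal(s) are assumptions, not prescribed by the slides; the choice of which node to select is what distinguishes the algorithms that follow):

    from collections import deque

    def generic_search(problem, pop):
        """The loop above: select a leaf node, test it, expand it.
        `pop` chooses which node to select and therefore which algorithm this is:
        pop from the front of the deque for BFS, from the back for DFS."""
        fringe = deque([(problem.initial_state, [])])   # leaves: (state, action path)
        while fringe:                                   # no more nodes to consider -> fail
            state, path = pop(fringe)                   # select node
            if problem.is_goal(state):                  # goal test
                return path                             # success: sequence of actions
            for action in problem.actions(state):       # expand node: generate successors
                fringe.append((problem.result(state, action), path + [action]))
        return None                                     # fail

For example, generic_search(p, lambda f: f.popleft()) behaves as breadth-first search, and generic_search(p, lambda f: f.pop()) as depth-first search.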

 Analysis
◦ b = branching factor
◦ d = depth of the shallowest goal node
◦ m = maximum depth of the search tree

 g(n) = the total cost of the path on the search tree from the root node to node n
 h(n) = the straight-line distance from n to G

    n    | S   | A | B   | C | G
    h(n) | 5.8 | 3 | 2.2 | 2 | 0

Uninformed search strategies use only the information available in the problem definition

 Breadth-first search
◦ Uniform-cost search
 Depth-first search
◦ Depth-limited search
◦ Iterative deepening search

Breadth-First Search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at end

Idea: order the branches under each node so that the “most promising” ones are explored first
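For concreteness, a BFS sketch over a small hypothetical adjacency list loosely based on the S/A/B/C/G example above (only the node names come from the slides; the edges are invented for illustration):

    from collections import deque

    # Hypothetical edges; only the node names come from the example above.
    toy_graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C'], 'C': ['G'], 'G': []}

    def bfs(graph, start, goal):
        """Breadth-first search: the fringe is a FIFO queue, so new successors go at
        the end and the shallowest unexpanded node is always selected next."""
        fringe = deque([[start]])                # the fringe holds whole paths
        while fringe:
            path = fringe.popleft()              # shallowest unexpanded node
            node = path[-1]
            if node == goal:
                return path
            for succ in graph[node]:
                fringe.append(path + [succ])
        return None

    print(bfs(toy_graph, 'S', 'G'))              # ['S', 'A', 'C', 'G']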

Uniform-Cost Search
 g(n) is the total cost of the path on the search tree from the root node to node n
 sort the open list by increasing g(n), that is, consider the shortest partial path first

Depth-First Search
 Expand deepest unexpanded node
 Implementation:
◦ fringe = LIFO queue (a stack), i.e., put successors at front
 Complete? No: fails in infinite-depth spaces and in spaces with loops
◦ Modify to avoid repeated states along the path → complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
◦ but if solutions are dense, may be much faster than BFS
 Space? O(bm), i.e., linear space!
 Optimal? No

Depth-Limited Search
= depth-first search with depth limit l, i.e., nodes at depth l have no successors
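An iterative sketch of depth-first search with a LIFO fringe; passing a depth limit l turns it into depth-limited search (same assumed problem interface as in the earlier sketch, and no repeated-state check, so it can loop forever in spaces with cycles):

    def depth_first_search(problem, limit=None):
        """DFS: the fringe is a LIFO stack, so the deepest node is expanded first.
        With `limit` set, nodes at depth `limit` get no successors (depth-limited search)."""
        fringe = [(problem.initial_state, [])]          # stack of (state, action path)
        while fringe:
            state, path = fringe.pop()                  # deepest unexpanded node
            if problem.is_goal(state):
                return path
            if limit is not None and len(path) >= limit:
                continue                                # depth limit reached: no successors
            for action in problem.actions(state):
                fringe.append((problem.result(state, action), path + [action]))
        return None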

 Recursive implementation:
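A sketch of the recursive form under the same assumed interface; a 'cutoff' sentinel distinguishes hitting the depth limit from genuine failure (None), which iterative deepening relies on below:

    def depth_limited_search(problem, limit):
        """Recursive depth-limited search: returns a list of actions, 'cutoff' if the
        depth limit was reached somewhere, or None if the space was exhausted."""
        def recurse(state, limit):
            if problem.is_goal(state):
                return []
            if limit == 0:
                return 'cutoff'                               # nodes at depth l have no successors
            cutoff_occurred = False
            for action in problem.actions(state):
                result = recurse(problem.result(state, action), limit - 1)
                if result == 'cutoff':
                    cutoff_occurred = True
                elif result is not None:
                    return [action] + result                  # prepend action to the sub-solution
            return 'cutoff' if cutoff_occurred else None
        return recurse(problem.initial_state, limit)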

 Number of nodes generated in a depth-limited search to depth d with branching factor b:

N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

 Number of nodes generated in an iterative deepening search to depth d with branching factor b:

N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d

 For b = 10, d = 5:
◦ N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
◦ N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
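These totals can be checked directly:

    b, d = 10, 5
    n_dls = sum(b**i for i in range(d + 1))                   # 1 + 10 + ... + 100,000 = 111,111
    n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))     # 6*1 + 5*10 + ... + 1*100,000 = 123,456
    print(n_dls, n_ids, round(100 * (n_ids - n_dls) / n_dls)) # 111111 123456 11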

 Overhead = (123,456 - 111,111)/111,111 ≈ 11%

Iterative Deepening Search properties:
 Complete? Yes
 Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
 Space? O(bd)
 Optimal? Only if step cost = 1; otherwise no
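Iterative deepening is then just depth-limited search run with increasing limits (reusing the depth_limited_search sketch above):

    from itertools import count

    def iterative_deepening_search(problem):
        """Run DLS with limits 0, 1, 2, ... until a solution is found, or until a
        limit produces a genuine failure (no cutoff), meaning the space is exhausted."""
        for limit in count():
            result = depth_limited_search(problem, limit)
            if result != 'cutoff':
                return result            # a list of actions, or None on failure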

 Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored

 Variety of uninformed search strategies

 Iterative deepening search uses only linear space and not much more time than other uninformed algorithms

Best-First Search
 Idea: use an evaluation function f(n) for each node
◦ estimate of “desirability”
◦ expand the most desirable unexpanded node
 Implementation: order the nodes in the Open List (fringe) in decreasing order of desirability
 Special cases:
◦ greedy best-first search
◦ A* search

 g(n) path-cost function = cost of the path from the root to node n found so far (greater than or equal to the optimal cost g*(n))

 h(n) “heuristic” function: estimates the cost of a path from node n to the “closest” goal node
 f(n) evaluation function: a measure of how likely node n is to be part of a solution
◦ one possibility: f(n) = g(n) + h(n)

Possible evaluation functions:
 f(n) = probability that a node is on the right path
 f(n) = distance function (a measure of the difference between node n and the nearest goal node)
 f(n) = g(n) ≡ Uniform Cost
 f(n) = h(n) ≡ Greedy
 f(n) = g(n) + h(n) ≡ A*, which estimates the total cost of a solution path that goes through node n
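All of these fit one best-first skeleton in which the open list is kept ordered by f; uniform cost, greedy, and A* then differ only in the f that is passed in. A sketch, assuming the same hypothetical problem interface plus an assumed step_cost(s, a, s') method:

    import heapq
    from itertools import count

    def best_first_search(problem, f):
        """Best-first search: always expand the open-list node with the best (lowest)
        evaluation f(g, state). Uniform cost: f = lambda g, s: g; greedy with a
        heuristic h: f = lambda g, s: h(s); A*: f = lambda g, s: g + h(s)."""
        tie = count()                           # tie-breaker so the heap never compares states
        fringe = [(f(0, problem.initial_state), next(tie), 0, problem.initial_state, [])]
        while fringe:
            _, _, g, state, path = heapq.heappop(fringe)   # most desirable node first
            if problem.is_goal(state):
                return path
            for action in problem.actions(state):
                succ = problem.result(state, action)
                g2 = g + problem.step_cost(state, action, succ)
                heapq.heappush(fringe, (f(g2, succ), next(tie), g2, succ, path + [action]))
        return None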

 Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal

 e.g., hSLD(n) = straight-line distance from n to Bucharest
 Greedy best-first search expands the node that appears to be closest to the goal

 Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
 Time? O(b^m), but a good heuristic can give dramatic improvement
 Space? O(b^m) -- keeps all nodes in memory
 Optimal? No

A* Search
 Idea: avoid expanding paths that are already expensive
◦ prune longer paths (if there is more than one path from the root to node n, only keep the shortest on the search tree)
 Evaluation function f(n) = g(n) + h(n)
◦ g(n) = lowest cost so far to reach n
◦ h(n) = estimated cost from n to goal
◦ f(n) = estimated total cost of the path through n to the goal
 f(n) estimates the total cost of a solution path which goes through node n:

f(n) = g(n) + h(n)

where g(n) = the lowest-cost path from S to n found so far, and h(n) = the “heuristic” estimate of the cost from n to G.

[Diagram notation: each node N in the accompanying figures is drawn with its heuristic value h(N) as a superscript and its path-cost value g(N) as a subscript.]
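A sketch of A* as graph search over a plain weighted adjacency list (graph maps node -> list of (successor, step cost) pairs; h is a function giving the heuristic value of a node; both formats are illustrative assumptions). It keeps only the cheapest path found so far to each node, i.e., it prunes the longer duplicates described above:

    import heapq

    def a_star(graph, h, start, goal):
        """A* search ordered by f(n) = g(n) + h(n)."""
        fringe = [(h(start), 0, start, [start])]           # (f, g, node, path)
        best_g = {start: 0}                                # cheapest g found so far per node
        while fringe:
            f_val, g, node, path = heapq.heappop(fringe)
            if node == goal:
                return g, path
            for succ, cost in graph.get(node, []):
                g2 = g + cost
                if g2 < best_g.get(succ, float('inf')):    # keep only the shortest path to succ
                    best_g[succ] = g2
                    heapq.heappush(fringe, (g2 + h(succ), g2, succ, path + [succ]))
        return None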

 A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
 An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic

 Example: hSLD(n) (never overestimates the actual road distance)
 Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
 A heuristic is consistent if, for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n')

 If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n)
      = f(n)
i.e., f(n) is non-decreasing along any path.

 Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
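Both properties can be checked mechanically on a small explicit graph (same illustrative formats as in the A* sketch above; true_cost would have to come from, e.g., running uniform-cost search from each node to the goal):

    def is_consistent(graph, h):
        """h is consistent if h(n) <= c(n, a, n') + h(n') for every arc (n, n')."""
        return all(h(n) <= cost + h(succ)
                   for n, arcs in graph.items()
                   for succ, cost in arcs)

    def is_admissible(h, true_cost, nodes):
        """h is admissible if it never overestimates the true cost h*(n) to the goal."""
        return all(h(n) <= true_cost(n) for n in nodes)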

 The following figure shows a portion of a partially expanded search tree. Each arc between nodes is labeled with the cost of the corresponding operator, and the leaves are labeled with the value of the heuristic function, h.

Which node (use the node’s letter) will be expanded next by each of the following search algorithms?
(a) Depth-first search
(b) Breadth-first search
(c) Uniform-cost search
(d) Greedy search
(e) A* search

[Figure: partially expanded search tree with root A (h=20), interior nodes B (h=14), C, and D (h-values 18 and 15 appear on this level), and leaves E (h=10), F (h=12), G (h=8), H (h=10); arcs are labeled with step costs 3, 5, 19, 6, 4, 5, 5.]

Summary of search algorithms:
 DFS
◦ Depth-Limited
◦ Iterative Deepening
 BFS
 Uniform Cost: f(n) = g(n)
 BM* (British Museum Algorithm, i.e., exhaustive search)
 Best-First Search: f(n)
◦ Greedy: f(n) = h(n)
◦ A*: f(n) = g(n) + h(n)

cf: Animated Search Algorithms at http://www.cs.rmit.edu.au/AI-Search/Product/