Informed (or Heuristic) Search Methods


US02 Artificial Intelligence, M. Koubarakis

Informed (or Heuristic) Search Methods
- Heuristics
- Best-first search
- The algorithm A*
- Properties of heuristic functions
- Branch-and-bound

Heuristics

All blind search algorithms that we discussed have time complexity of order O(b^d) or something similar. This is unacceptable in real problems! In large search spaces, one can do a lot better by using domain-specific information to speed up search. Heuristics are "rules of thumb" for selecting the next node to be expanded by a search algorithm.

Best-First Search

A blind search algorithm could be improved if we knew the best (or "seemingly best") node to expand.

function BestFirstSearch(problem, EvalFn) returns a solution sequence
  QueuingFn ← a function that orders nodes in ascending order of EvalFn
  return TreeSearch(problem, QueuingFn)

The function EvalFn is called the evaluation function. Note: GraphSearch can be used instead of TreeSearch.

Evaluation Functions and Heuristic Functions

There is a whole family of best-first search algorithms with different evaluation functions. A key component of many of these algorithms is a heuristic function h such that

  h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

h can be any function such that h(n) = 0 if n is a goal node. But in order to find a good heuristic function, we need domain-specific information.

Greedy Best-First Search

GreedyBestFirstSearch tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus nodes are evaluated using the heuristic function h, i.e., f(n) = h(n).
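The BestFirstSearch scheme above can be sketched in Python as a tree search whose fringe is a priority queue ordered by EvalFn. This is a minimal illustration, not code from the slides: the problem is assumed to be given as a start state, a successor function, and a goal test, all with illustrative names.

```python
import heapq
from itertools import count

def best_first_search(start, successors, is_goal, eval_fn):
    """Tree search that always expands the fringe node with the
    smallest eval_fn value (the EvalFn of BestFirstSearch)."""
    tie = count()  # tie-breaker so heapq never has to compare states
    fringe = [(eval_fn(start), next(tie), start, [start])]
    while fringe:
        _, _, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path                # solution: the sequence of states
        for succ in successors(state):
            heapq.heappush(fringe,
                           (eval_fn(succ), next(tie), succ, path + [succ]))
    return None                        # failure
```

Passing the heuristic h itself as eval_fn gives greedy best-first search; passing g + h (discussed below) gives A*.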
function GreedyBestFirstSearch(problem) returns a solution or failure
  return BestFirstSearch(problem, h)

The algorithm is greedy because it prefers to take the biggest possible bite out of the remaining cost to reach the goal.

Example: On the Road to Bucharest

[Figure: the Romania road map, with driving distances between neighbouring cities, e.g. Arad-Sibiu 140, Arad-Timisoara 118, Arad-Zerind 75, Sibiu-Fagaras 99, Sibiu-Rimnicu Vilcea 80, Rimnicu Vilcea-Pitesti 97, Pitesti-Bucharest 101, Fagaras-Bucharest 211.]

Example (cont'd)

hSLD(n) = straight-line distance between n and the goal location. Distances to Bucharest are shown below:

  Arad 366, Bucharest 0, Craiova 160, Dobreta 242, Eforie 161, Fagaras 176, Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234, Oradea 380, Pitesti 100, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329, Urziceni 80, Vaslui 199, Zerind 374

Example (cont'd)

(a) The initial state: Arad (366).
(b) After expanding Arad: Sibiu (253), Timisoara (329), Zerind (374).
(c) After expanding Sibiu: Arad (366), Fagaras (176), Oradea (380), Rimnicu Vilcea (193).
(d) After expanding Fagaras: Sibiu (253), Bucharest (0). Bucharest is a goal node.

Greedy Best-First Search (cont'd)

Evaluation:
- Complete? No (consider the problem of getting from Iasi to Fagaras; search can oscillate between Iasi and Neamt).
- Time: O(b^m), where m is the maximum depth of the search space.
- Space: O(b^m).
- Optimal? No. (The path Arad-Sibiu-Rimnicu Vilcea-Pitesti-Bucharest is optimal with cost 418. The path through Sibiu and Fagaras has cost 450.)

A good choice of h can reduce space and time substantially.
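The Arad-to-Bucharest trace can be reproduced with a few lines of Python. The hSLD values are copied from the table above and the step costs from the road-map figure, restricted to the cities on the two competing routes; greedy_route is a simplified greedy walk that always moves to the neighbour with the smallest hSLD, which coincides with the fringe-based trace on this example.

```python
# Straight-line distances to Bucharest (hSLD), from the table above.
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Bucharest': 0}

# Neighbour lists and step costs for the relevant map fragment.
ROADS = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
         'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea'],
         'Fagaras': ['Sibiu', 'Bucharest'],
         'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
         'Pitesti': ['Rimnicu Vilcea', 'Bucharest']}
COSTS = {('Arad', 'Sibiu'): 140, ('Sibiu', 'Fagaras'): 99,
         ('Fagaras', 'Bucharest'): 211, ('Sibiu', 'Rimnicu Vilcea'): 80,
         ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101}

def greedy_route(start, goal):
    """Repeatedly move to the neighbour with the smallest hSLD."""
    path = [start]
    while path[-1] != goal:
        nxt = min(ROADS[path[-1]], key=H_SLD.get)
        if nxt in path:           # greedy search can oscillate; give up
            return None
        path.append(nxt)
    return path

def path_cost(path):
    """Sum of the step costs along a path."""
    return sum(COSTS[a, b] for a, b in zip(path, path[1:]))
```

Greedy search returns the route through Fagaras with cost 450, while the route through Rimnicu Vilcea and Pitesti costs 418, confirming the non-optimality claim above.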
Greedy Best-First Search (cont'd)

The problem with greedy best-first search is that it does not take into account the cost of getting to a node n that has minimum h(n). Shall we have an improved algorithm if we take this cost into account?

Uniform-Cost Search (UCS): Revision

Modifies breadth-first search by always expanding the lowest-cost node on the fringe (as measured by the path cost g).

[Figure: a small example graph with start S, successors A, B, C reached at costs 1, 5, 15, and a goal G reachable through A with total cost 11 and through B with total cost 10; UCS expands nodes in order of g and returns the cheaper path, of cost 10, through B.]

Uniform-Cost Search (cont'd)

Evaluation:
- Complete? Yes.
- Time: O(b^⌈C*/ε⌉), where b is the branching factor, C* is the cost of the optimal solution, and every action costs at least ε > 0.
- Space: same as time.
- Optimal? Yes.

Completeness and optimality hold under the assumption that the branching factor is finite and the cost never decreases as we go along a path, i.e., g(Successor(n)) ≥ g(n) for every node n. The last condition holds, e.g., when each action costs at least ε > 0.

The A* Search Algorithm

Greedy best-first search:
- Searches by minimizing the estimated cost h(n) to the goal.
- Neither optimal nor complete.

Uniform-cost search:
- Searches by minimizing the cost g(n) of the path so far.
- Optimal and complete.

A* combines the above algorithms.

The A* Algorithm (cont'd)

A* is a best-first search algorithm with evaluation function f(n) = g(n) + h(n). In this case f(n) is the estimated cost of the cheapest solution through n.

function A*Search(problem) returns a solution or failure
  return BestFirstSearch(problem, g + h)

A* Goes to Bucharest

See illustration in the accompanying file astar-progress.ps or in AIMA.
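The combination f(n) = g(n) + h(n) can be sketched as follows. This is a minimal illustration, assuming the graph is an adjacency dict mapping each state to (successor, step-cost) pairs and h is a callable; with h ≡ 0 the very same code behaves exactly like uniform-cost search.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: always expand the fringe node minimizing f(n) = g(n) + h(n).
    With h = lambda n: 0 this reduces to uniform-cost search."""
    fringe = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {}                                # cheapest g found per state
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue        # a cheaper path to this state was already expanded
        best_g[state] = g
        for succ, cost in graph.get(state, []):
            heapq.heappush(fringe,
                           (g + cost + h(succ), g + cost, succ, path + [succ]))
    return None             # failure
```

On a toy graph in the spirit of the UCS example above (S reaches A, B, C at costs 1, 5, 15, and A, B each reach G), the h ≡ 0 version returns the cheapest path rather than the first one generated.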
The A* Algorithm (cont'd)

Let us assume that A* uses TreeSearch as its main subroutine and also:
- The function h is chosen such that it never overestimates the cost to reach a goal. Such an h is called an admissible heuristic. If h is admissible then f(n) never overestimates the actual cost of the best solution through n.
- The branching factor b is finite.
- Every action costs at least δ > 0.

The A* Algorithm (cont'd)

Evaluation (under the previous assumptions):
- Complete? Yes.
- Time: exponential, unless the error in the heuristic function h grows no faster than the logarithm of the actual path cost. For most heuristics used in practice, the error is at least proportional to the path cost. But even when A* takes exponential time, it offers a huge improvement compared to blind search.

The A* Algorithm (cont'd)

Evaluation (cont'd):
- Space: O(b^d). This is the main drawback of A*. The algorithm iterative deepening A* (IDA*) addresses the large space requirements of A*.
- Optimal? Yes.

Optimality and Completeness of A*

Proposition. A* is optimal.

Proof: Let us assume that the cost of the optimal solution is C* and a suboptimal goal node G2 appears on the fringe. Then, because G2 is suboptimal and h(G2) = 0, we have:

  f(G2) = g(G2) + h(G2) = g(G2) > C*

Now consider a fringe node n which is on the optimal path. Because h does not overestimate the cost to the goal, we have:

  f(n) = g(n) + h(n) ≤ C*

So G2 will not be chosen for expansion!

Optimality and Completeness of A* (cont'd)

The proof of optimality breaks down when A* uses GraphSearch as its main subroutine, because GraphSearch can discard the optimal path to a repeated state if it is not the first one to be generated.
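Admissibility can be checked mechanically on a finite graph: compute the true cost-to-go h*(n) for every node (here via Dijkstra's algorithm run backwards from the goal) and verify h(n) ≤ h*(n) everywhere. This is an illustrative sketch with assumed names, not code from the slides; the graph format is an adjacency dict of (successor, cost) pairs and h a dict of heuristic values.

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Exact cost-to-go h*(n) for every node from which the goal is
    reachable, via Dijkstra over the reversed edges."""
    rev = {}
    for n, edges in graph.items():
        for m, c in edges:
            rev.setdefault(m, []).append((n, c))
    dist = {goal: 0}
    fringe = [(0, goal)]
    while fringe:
        d, n = heapq.heappop(fringe)
        if d > dist.get(n, float('inf')):
            continue                    # stale queue entry
        for m, c in rev.get(n, []):
            if d + c < dist.get(m, float('inf')):
                dist[m] = d + c
                heapq.heappush(fringe, (dist[m], m))
    return dist

def is_admissible(graph, h, goal):
    """h is admissible iff h(n) <= h*(n) wherever the goal is reachable."""
    hstar = true_costs_to_goal(graph, goal)
    return all(h.get(n, 0) <= hstar[n] for n in hstar)
```

For example, on a graph where S reaches G directly at cost 3 or via A at cost 1 + 1 = 2, the heuristic h(A) = 5 overestimates the true remaining cost 1 and is flagged as inadmissible; an inadmissible h is exactly what lets A* return the suboptimal cost-3 solution.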
To guarantee optimality in this case, we have two options:
- Modify GraphSearch so that, of the paths found to a node, it discards the more expensive one.
- Impose an extra requirement of consistency (or monotonicity) on h.

Consistent Heuristics

Definition. A heuristic h is called consistent if for all nodes n, n' such that n' is a successor of n generated by any action a,

  h(n) ≤ c(n, a, n') + h(n').

This is a form of the general triangle inequality: the sum of the lengths of any two sides of a triangle is greater than the length of the remaining side.

Proposition. Every consistent heuristic is also admissible.

Most admissible heuristics that one can think of are also consistent (e.g., hSLD)!

Optimality and Completeness of A* (cont'd)

Proposition. If h is consistent then the values of f for nodes expanded by A* along any path are non-decreasing.

Proof: Let n be a node and n' its successor. Then g(n') = g(n) + c(n, a, n') for some action a, and we have

  f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n).

Thus we can conceptually draw contours in the state space, like contours in a topographic map.

The Behaviour of A*

[Figure: the Romania map with f-contours labelled 380, 400 and 420 drawn around Arad; the nodes inside a contour have f-values no greater than the contour's label.]

Optimality and Completeness of A* (cont'd)

A* search is complete: as we add contours of increasing f, we must eventually reach a contour where f is equal to the cost of the path to a goal state. In fact, A* works as follows:
- It expands all nodes with f(n) < C*.
- It may then expand some of the nodes right on the "goal contour", for which f(n) = C*, before selecting a goal node.

Optimality and Completeness of A* (cont'd)

A* expands no nodes with cost f(n) > C*, where C* is the cost of the optimal solution.
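The consistency condition is easy to verify edge by edge. The sketch below checks h(n) ≤ c(n, n') + h(n') over every edge of a graph; the road fragment and hSLD values are taken from the earlier tables (edges directed towards Bucharest only, for brevity), while the function names are illustrative.

```python
# Road-map fragment with step costs c(n, n'), directed towards Bucharest,
# and the straight-line-distance heuristic values from the earlier table.
GRAPH = {'Arad': [('Sibiu', 140)],
         'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
         'Fagaras': [('Bucharest', 211)],
         'Rimnicu Vilcea': [('Pitesti', 97)],
         'Pitesti': [('Bucharest', 101)]}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
         'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def is_consistent(graph, h):
    """h is consistent iff h(n) <= c(n, n') + h(n') for every edge."""
    return all(h[n] <= c + h[m]
               for n, edges in graph.items()
               for m, c in edges)
```

hSLD passes the check on this fragment, as claimed above; inflating, say, h(Rimnicu Vilcea) to 350 makes the Rimnicu Vilcea to Pitesti edge violate the inequality (350 > 97 + 100), so the check reports inconsistency.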
