
Chapter 4: Informed Search & Exploration



Presentation Transcript


  1. Chapter 4: Informed Search & Exploration Prepared by: Ece UYKUR

  2. Outline • Evaluation of Search Strategies • Informed Search Strategies • Best-first search • Greedy best-first search • A* search • Admissible Heuristics • Memory-Bounded Search • Iterative-Deepening A* Search • Recursive Best-First Search • Simplified Memory-Bounded A* Search • Summary

  3. Evaluation of Search Strategies • Search algorithms are commonly evaluated according to the following four criteria: • Completeness: does it always find a solution if one exists? • Time complexity: how long does it take, as a function of the number of nodes generated? • Space complexity: how much memory does it require? • Optimality: does it guarantee the least-cost solution? • Time and space complexity are measured in terms of: • b – max branching factor of the search tree • d – depth of the least-cost solution • m – max depth of the search tree (may be infinite)

  4. Before Starting… • A solution is a sequence of operators that brings you from the current state to the goal state. • The search strategy is determined by the order in which the nodes are expanded. • Uninformed Search Strategies use only the information available in the problem formulation: • Breadth-first • Uniform-cost • Depth-first • Depth-limited • Iterative deepening

  5. Uninformed search strategies look for solutions by systematically generating new states and checking each of them against the goal. • This approach is very inefficient in most cases. • Most successor states are “obviously” a bad choice. • Such strategies do not know that because they have minimal problem-specific knowledge.

  6. Informed Search Strategies • Informed search strategies exploit problem-specific knowledge as much as possible to drive the search. • They are almost always more efficient than uninformed searches and often also optimal. • Also called heuristic search. • Uses the knowledge of the problem domain to build an evaluation function f. • The search strategy: for every node n in the search space, f(n) quantifies the desirability of expanding n in order to reach the goal.

  7. Uses the desirability value of the nodes in the fringe to decide which node to expand next. • f is typically an imperfect measure of the goodness of the node; the right choice of node is not always the one suggested by f. • In principle, a perfect evaluation function would always suggest the right choice, but computing it is generally as hard as solving the search problem itself. • The general approach is called “best-first search”.

  8. Best-First Search • Choose the most desirable (seemingly best) node for expansion based on an evaluation function. • Lowest cost / highest probability evaluation. • Implementation: • Fringe is a priority queue kept in order of desirability (best first). • Evaluation function f(n): desirability of node n; used to select a node for expansion (usually the lowest-cost node). • Heuristic function h(n): estimates the cheapest path cost from node n to a goal node. • g(n) = cost from the initial state to the current state n.

  9. Best-First Search Strategy • Pick the “best” element of Q (measured by the heuristic value of the state). • Add path extensions anywhere in Q (it may be more efficient to keep Q ordered in some way, so as to make it easier to find the “best” element). • There are many possible approaches to finding the best node in Q: • Scanning Q to find the lowest value, • Sorting Q and picking the first element, • Keeping Q sorted by doing “sorted” insertions, • Keeping Q as a priority queue.
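The priority-queue scheme above can be sketched generically in Python. The graph encoding and function names below are my own illustration, not code from the slides; the evaluation function f is passed in as a parameter so that greedy search (f = h) and A* (f = g + h) fall out as special cases. Note that closing a state on first expansion, as done here, is only guaranteed safe for evaluation functions like uniform cost or A* with a consistent heuristic.

```python
import heapq

def best_first_search(graph, f, start, goal):
    """Generic best-first search sketch.

    graph: dict mapping a state to a list of (neighbor, step_cost) pairs
    f: evaluation function f(state, g) -> desirability (lower = better)
    Returns (path_cost, path) on success, (None, None) on failure.
    """
    frontier = [(f(start, 0), 0, start, [start])]   # Q kept as a priority queue
    expanded = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)  # pick "best" element of Q
        if node == goal:
            return g, path
        if node in expanded:
            continue                                # already expanded this state
        expanded.add(node)
        for nbr, cost in graph[node]:
            if nbr not in expanded:                 # add path extensions to Q
                heapq.heappush(frontier,
                               (f(nbr, g + cost), g + cost, nbr, path + [nbr]))
    return None, None

# Tiny hypothetical graph; with f(n) = g(n) this behaves as uniform-cost search.
toy_graph = {"A": [("B", 1), ("C", 3)], "B": [("C", 1)], "C": []}
print(best_first_search(toy_graph, lambda s, g: g, "A", "C"))  # (2, ['A', 'B', 'C'])
```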

  10. Several kinds of best-first search will be introduced: • Greedy best-first search • A* search

  11. Greedy Best-First Search • Estimation function: expand the node that appears to be closest to the goal, based on the heuristic function only; • f(n) = h(n) = estimate of the cost from n to the closest goal. • Ex: the straight-line distance heuristic • hSLD(n) = straight-line distance from n to Bucharest • Greedy search expands first the node that appears to be closest to the goal, according to h(n). • “Greedy” – at each search step the algorithm always tries to get as close to the goal as it can.

  12. Romania with step costs in km

  13. hSLD(In(Arad)) = 366 • Notice that the values of hSLD cannot be computed from the problem description itself. • It takes some experience to know that hSLD is correlated with actual road distances, and is therefore a useful heuristic.
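A minimal sketch of greedy best-first search on a hand-picked fragment of the Romania map (step costs and hSLD values as in the slides; the subset of cities and the function name are my own choices):

```python
import heapq

# Fragment of the Romania road map: state -> [(neighbor, step cost in km), ...]
GRAPH = {
    "Arad": [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}

# Straight-line distances to Bucharest (hSLD)
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def greedy_best_first(graph, h, start, goal):
    """Always expand the node with the smallest h(n); path cost g(n) is ignored."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, _cost in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first(GRAPH, H, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

On this fragment greedy search reaches Bucharest via Fagaras (450 km), even though the route via Rimnicu Vilcea and Pitesti is shorter (418 km), illustrating why greedy search is not optimal.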

  14. Properties of Greedy Search • Complete? • No – can get stuck in loops / follows a single path to the goal. Ex: Iasi > Neamt > Iasi > Neamt > … • Complete in finite spaces with repeated-state checking. • Time? • O(b^m), but a good heuristic can give dramatic improvement. • Space? • O(b^m) – keeps all nodes in memory. • Optimal? • No.

  15. A* Search • Idea: avoid expanding paths that are already expensive. • Evaluation function: f(n) = g(n) + h(n), with: g(n) – cost so far to reach n; h(n) – estimated cost from n to the goal; f(n) – estimated total cost of the path through n to the goal. • A* search uses an admissible heuristic, that is, h(n) ≤ h*(n), where h*(n) is the true cost from n. Ex: hSLD(n) never overestimates the actual road distance. • Theorem: A* search is optimal.
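A* can be sketched the same way on the Romania fragment; the only change from the greedy sketch is ordering the queue by f(n) = g(n) + h(n). The graph subset and function name are again my own illustration:

```python
import heapq

# Fragment of the Romania road map: state -> [(neighbor, step cost in km), ...]
GRAPH = {
    "Arad": [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}

# Straight-line distances to Bucharest (hSLD), an admissible heuristic
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {}                                  # cheapest g seen per expanded state
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue                             # already expanded more cheaply
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return float("inf"), None

print(a_star(GRAPH, H, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike greedy search, A* finds the optimal 418 km route: the 450 km path via Fagaras is generated but never popped, because its f-value stays above 418.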

  16. When h(n) = actual cost to goal • Only nodes in the correct path are expanded • Optimal solution is found • When h(n) < actual cost to goal • Additional nodes are expanded • Optimal solution is found • When h(n) > actual cost to goal • Optimal solution can be overlooked

  17. Optimality of A* (standard proof) • Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1.

  18. Consistent Heuristics • A heuristic is consistent if, for every node n and every successor n' of n generated by any action a, • h(n) ≤ c(n,a,n') + h(n') • n' = successor of n generated by action a • The estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n'. • If h is consistent, we have • f(n') = g(n') + h(n') • = g(n) + c(n,a,n') + h(n') • ≥ g(n) + h(n) • = f(n) • If h(n) is consistent, then the values of f(n) along any path are nondecreasing. • Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
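The consistency condition h(n) ≤ c(n,a,n') + h(n') is easy to check mechanically. As a sketch, the check below runs over the same hand-picked Romania fragment used earlier (the subset of cities is my own choice; the distances and hSLD values are those from the slides):

```python
# Fragment of the Romania road map: state -> [(neighbor, step cost in km), ...]
GRAPH = {
    "Arad": [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}

# Straight-line distances to Bucharest (hSLD)
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

# Consistency: h(n) <= c(n, a, n') + h(n') for every edge (n, n')
consistent = all(H[n] <= cost + H[nbr]
                 for n, nbrs in GRAPH.items()
                 for nbr, cost in nbrs)
print(consistent)  # True
```

On this fragment hSLD satisfies the triangle-inequality condition on every edge, so f-values along any path are nondecreasing, as the derivation above shows.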

  19. Optimality of A* (more useful proof)

  20. f-contours • How do the contours look when h(n) = 0? • Uninformed search: bands circle around the initial state. • A* search: bands stretch toward the goal, and are more narrowly focused around the optimal path the more accurate the heuristic is.

  21. Properties of A* Search • Complete? • Yes – unless there are infinitely many nodes with f ≤ f(G). • Time? • O(b^d) – exponential in [(relative error in h) × (length of solution)]. • Space? • O(b^d) – keeps all nodes in memory. • Optimal? • Yes – A* cannot expand an f(i+1) contour until the f(i) contour is finished.

  22. Admissible Heuristics

  23. Relaxed Problem • Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. • If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution. • If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution. • Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

  24. Dominance • Definition: if h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1. • For the 8-puzzle, h2 indeed dominates h1: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance • If h2 dominates h1, then h2 is better for search. • For the 8-puzzle, typical search costs: • d = 14: IDS = 3,473,941 nodes (IDS = Iterative Deepening Search); A*(h1) = 539 nodes; A*(h2) = 113 nodes • d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
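Both 8-puzzle heuristics can be computed directly. The state encoding below (a 9-tuple read row by row, 0 for the blank, blank in the top-left corner of the goal) is my own choice; the sample state is the standard AIMA example, for which h1 = 8 and h2 = 18:

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # 0 marks the blank; tile t belongs at index t

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal_tile in zip(state, GOAL)
               if tile != 0 and tile != goal_tile)

def h2(state):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile                       # tile t's goal position is index t
        total += (abs(idx // 3 - goal_idx // 3)
                  + abs(idx % 3 - goal_idx % 3))
    return total

# 7 2 4
# 5 _ 6     (a scrambled state, read row by row)
# 8 3 1
state = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(state), h2(state))  # 8 18
```

For this state every tile is misplaced (h1 = 8) while the Manhattan distances sum to 18, consistent with h2 dominating h1.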

  25. Memory-Bounded Search • Iterative-Deepening A* Search • Recursive Best-First Search • Simplified Memory-Bounded A* Search

  26. Iterative-Deepening A* Search (IDA*) • The idea of iterative deepening is adapted to the heuristic search context to reduce memory requirements. • At each iteration, DFS is performed using the f-cost (g + h) as the cutoff rather than the depth. • The cutoff for the next iteration is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
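The iteration scheme above can be sketched in Python on the same Romania fragment used earlier (the graph subset and function names are my own illustration; skipping states already on the current path is a simplification to avoid cycles):

```python
import math

# Fragment of the Romania road map: state -> [(neighbor, step cost in km), ...]
GRAPH = {
    "Arad": [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def ida_star(graph, h, start, goal):
    """Iterative-deepening A*: DFS with an f-cost cutoff, raised each round."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f, None            # exceeded cutoff: report f for the next bound
        if node == goal:
            return g, list(path)
        next_bound = math.inf
        for nbr, cost in graph[node]:
            if nbr in path:           # simplification: avoid cycles on this path
                continue
            t, found = dfs(path + [nbr], g + cost, bound)
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h[start]                  # first cutoff: f(start) = h(start)
    while True:
        t, found = dfs([start], 0, bound)
        if found is not None:
            return t, found           # cost and path of the solution
        if t == math.inf:
            return math.inf, None     # no solution
        bound = t                     # smallest f that exceeded the old cutoff

print(ida_star(GRAPH, H, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Between iterations only the cutoff number is kept, matching the property on the next slide; on this fragment the bound grows 366 → 393 → 413 → 415 → 417 → 418 before the optimal route is found.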

  27. Properties of IDA* • IDA* is complete and optimal. • Space complexity: O(b·f(G)/δ) ≈ O(bd) • δ: the smallest step cost • f(G): the optimal solution cost • Time complexity: O(α·b^d) • α: the number of distinct f-values smaller than that of the optimal goal • Between iterations, IDA* retains only a single number: the f-cost cutoff. • IDA* has difficulties in implementation when dealing with real-valued costs.

  28. Recursive Best-First Search (RBFS) • Attempts to mimic best-first search but uses only linear space. • Can be implemented as a recursive algorithm. • Keeps track of the f-value of the best alternative path from any ancestor of the current node. • If the current node exceeds the limit, the recursion unwinds back to the alternative path. • As the recursion unwinds, the f-value of each node along the path is replaced with the best f-value of its children.
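The unwinding-with-backed-up-f-values mechanism can be sketched recursively, following the shape of the AIMA pseudocode. The graph encoding is the same Romania fragment as before, and skipping successors already on the current path is my own simplification to avoid cycles:

```python
import math

# Fragment of the Romania road map: state -> [(neighbor, step cost in km), ...]
GRAPH = {
    "Arad": [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def rbfs_search(graph, h, start, goal):
    """Recursive best-first search sketch: linear space, backed-up f-values."""
    def rbfs(node, path, g, f_node, f_limit):
        if node == goal:
            return path, f_node
        # Each successor carries max(g + c + h, parent's f) as its backed-up value.
        succs = [[max(g + c + h[nbr], f_node), nbr, g + c]
                 for nbr, c in graph[node] if nbr not in path]
        if not succs:
            return None, math.inf
        while True:
            succs.sort()
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]          # unwind; report best f seen below
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            # Recurse on the best child; its backed-up f replaces best[0].
            result, best[0] = rbfs(best[1], path + [best[1]], best[2],
                                   best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    path, cost = rbfs(start, [start], 0, h[start], math.inf)
    return cost, path

print(rbfs_search(GRAPH, H, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Only the current recursion path and its siblings are in memory at any time, which is where the O(bd) space bound on the next slide comes from.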

  29. Ex: The Route-Finding Problem

  30. Properties of RBFS • RBFS is complete and optimal. • Space complexity: O(bd) • Time complexity: worst case O(b^d) • Depends on the heuristic and on the frequency of “mind changes” • The same states may be explored many times.

  31. Simplified Memory-Bounded A* Search (SMA*) • Makes use of all available memory M to carry out A*. • Expands the best leaf, like A*, until memory is full. • When memory is full, drops the worst leaf node (the one with the highest f-value). • Like RBFS, backs up the value of the forgotten node to its parent if it is the best in its parent's subtree. • When all of a node's children have been dropped, puts the parent node back on the fringe for further expansion.

  32. Properties of SMA* • Complete if M ≥ d • Optimal if M ≥ d • Space complexity: O(M) • Time complexity: worst case O(b^d)

  33. Summary • This chapter has examined the application of heuristics to reduce search costs. • We have looked at a number of algorithms that use heuristics, and found that optimality comes at a stiff price in terms of search cost, even with good heuristics. • Best-first search is just GENERAL-SEARCH where the minimum-cost nodes (according to some measure) are expanded first. • If we minimize the estimated cost to reach the goal, h(n), we get greedy search. The search time is usually decreased compared to an uninformed algorithm, but the algorithm is neither optimal nor complete. • Minimizing f(n) = g(n) + h(n) combines the advantages of uniform-cost search and greedy search. If we handle repeated states and guarantee that h(n) never overestimates, we get A* search.

  34. A* is complete, optimal, and optimally efficient among all optimal search algorithms. Its space complexity is still prohibitive. • The time complexity of heuristic algorithms depends on the quality of the heuristic function. • Good heuristics can sometimes be constructed by examining the problem definition or by generalizing from experience with the problem class. • We can reduce the space requirement of A* with memory-bounded algorithms such as IDA* (iterative-deepening A*) and SMA* (simplified memory-bounded A*).

  35. Thanks for your patience
