
Efficient Informed Search Strategies for AI Algorithms

Discover how informed search strategies in artificial intelligence utilize problem-specific knowledge to drive efficient search processes, compared to uninformed methods. Explore techniques such as best-first search, A*, and heuristics, optimizing solutions through evaluation functions and heuristic functions. Dive into examples like hill climbing and beam search, understanding the importance of admissible heuristics for optimal results.


Presentation Transcript


  1. Chapter 03: Artificial Intelligence – Informed (Heuristic) Search Algorithms and Heuristic Functions

  2. Material • Chapter 3 • Section 3.5 Informed (Heuristic) Search Strategies • Section 3.6 Heuristic Functions (excluding memory-bounded heuristic search) • Chapter 4 • Section 4.1 Local Search Algorithms and Optimization Problems

  3. Outline • Informed (Heuristic) Search Strategies (Ch 03 Sect 3.5 – 3.6) • Best-first search • Greedy best-first search • A* search • Heuristics • Local Search Algorithms (Ch 04 Sect 4.1) • Hill-climbing search • Simulated annealing search • Local beam search • Genetic algorithms

  4. Review: Tree search
     function TREE-SEARCH(problem, fringe) returns a solution, or failure
         fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
         loop do
             if fringe is empty then return failure
             node ← REMOVE-FRONT(fringe)
             if GOAL-TEST[problem] applied to STATE[node] succeeds then return node
             fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
  • A search strategy is defined by picking the order of node expansion
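A minimal Python sketch of this loop, assuming a hypothetical problem object with initial_state, goal_test, and expand (these names are illustrative, not a fixed API). FIFO insertion is shown; any other insertion order gives a different strategy:

```python
from collections import deque

def tree_search(problem):
    """Generic tree search; the fringe's insertion/removal order
    defines the strategy (FIFO here gives breadth-first search)."""
    fringe = deque([problem.initial_state])
    while fringe:                             # empty fringe means failure
        node = fringe.popleft()               # REMOVE-FRONT
        if problem.goal_test(node):
            return node
        fringe.extend(problem.expand(node))   # INSERT-ALL
    return None
```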

  5. Graph search Fig 3.7 An informal description of the general graph-search algorithm.

  6. Informed Search Strategies • Uninformed search strategies look for solutions by systematically generating new states and checking each of them against the goal • This approach is inefficient in most cases: most successor states are “obviously” a bad choice, but such strategies cannot tell, because they have minimal problem-specific knowledge • Informed search strategies exploit problem-specific knowledge (e.g., the number of tiles out of place in the 8-puzzle) as much as possible to drive the search • They are almost always more efficient than uninformed searches and often also optimal

  7. Informed (Heuristic) Search Strategies • An informed search strategy uses problem-specific knowledge beyond the definition of the problem itself to find solutions more efficiently than an uninformed strategy • Examples: • Best-first search • Hill climbing • Beam search • A* • IDA* • RBFS • SMA*

  8. Informed Searches • New terms • Heuristics – an evaluation function f(n) that incorporates a heuristic function h(n) • Optimal solution – expand the node with the lowest evaluation • Informedness – some guidance on where to look for solutions • Admissibility – h(n) must be an admissible heuristic; required for optimality • New parameters • g(n) = cost of the path from the initial state to node n • h(n) = estimated cost (distance) from state n to the closest goal; h(n) is a heuristic function • In robot path planning, h(n) could be the Euclidean distance; in the 8-puzzle, h(n) could be the number of tiles out of place • Heuristic search algorithms use h(n) to guide the search

  9. The Euclidean distance between points p and q is the length of the line segment connecting them. In Cartesian coordinates, if p = (p1, p2, ..., pn) and q = (q1, q2, ..., qn) are two points in Euclidean n-space, then the distance d from p to q (or from q to p) is given by the Pythagorean formula: • d(p, q) = √((q1 − p1)² + (q2 − p2)² + … + (qn − pn)²)
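A direct transcription of the formula as a sketch (Python's built-in math.dist computes the same value in 3.8+):

```python
import math

def euclidean(p, q):
    """d(p, q) = sqrt of the sum of (q_i - p_i)^2 over all coordinates."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

# e.g. euclidean((0, 0), (3, 4)) == 5.0
```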

  10. Informed Search Strategies • Main Idea • Use the knowledge of the problem domain to build an evaluation function f • For every node n in the search space, f(n) quantifies the desirability of expanding n in order to reach the goal • Then, use the desirability value of the nodes in the fringe to decide which node to expand next

  11. Informed Search Strategies • The evaluation function f is typically an imperfect measure of the goodness of a node • i.e., the right choice of node is not always the one suggested by f • Note: • Is it possible to build a perfect evaluation function, which would always suggest the right choice? How? • Why don't we use perfect evaluation functions, then?

  12. Standard Assumptions on Search Spaces • The cost of a node increases with the node's depth • Transition costs are non-negative and bounded below, i.e., there is an ε > 0 such that the cost of each transition is at least ε (this guarantees that f(n) is a non-decreasing function) • Each node has only finitely many successors • Note: • There are problems that do not satisfy one or more of these assumptions

  13. Best-First Search • Idea: use an evaluation function f(n) for each node • an estimate of “desirability” for each node • Strategy: always expand the most desirable unexpanded node • Implementation: order the nodes in the fringe in decreasing order of desirability • Special cases: • greedy best-first search • A* search • Note: • Since f is only an approximation, “best-first” is a misnomer • Each time, we choose the node that at that point appears to be the best
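A sketch of this ordering using a priority queue, assuming the same hypothetical problem interface as the tree-search sketch above. The node with the lowest f-value (i.e., most desirable) is expanded first; the counter only breaks ties between equal f-values:

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Best-first search: repeatedly expand the fringe node with the
    lowest value of the evaluation function f."""
    counter = itertools.count()          # tie-breaker so states are never compared
    start = problem.initial_state
    fringe = [(f(start), next(counter), start)]
    while fringe:
        _, _, node = heapq.heappop(fringe)        # most "desirable" node
        if problem.goal_test(node):
            return node
        for child in problem.expand(node):
            heapq.heappush(fringe, (f(child), next(counter), child))
    return None                                   # failure
```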

  14. Best-First Search Strategies • Best-first is a family of search strategies, each with a different evaluation function • These strategies use estimates of the cost of reaching the goal and try to minimize it • Uniform-cost search also tries to minimize a cost measure. Is it a best-first search strategy? • Not in spirit, because the evaluation function should incorporate a cost estimate of going from the current state to the closest goal state, whereas UCS measures the cost from the root to the current node

  15. Best-First Search Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm: select a node for expansion based on an evaluation function f(n). The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first. The choice of f determines the search strategy (use f to order the priority queue). Most best-first algorithms include as a component of f a heuristic function h(n): h(n) = estimated cost of the cheapest path from the state at node n to a goal state. If n is a goal state, then h(n) = 0. For example, h(Arad) is the straight-line distance from Arad to Bucharest (the destination). Greedy best-first search evaluates nodes by using the heuristic function alone: f(n) = h(n).

  16. Best-First Search • The queueing function is sort-by-h • Best-first search is only as good as its heuristic • Example: heuristics for the 8-puzzle: • The number of misplaced tiles, h1, is an admissible heuristic • At the start state, all eight tiles are out of position, so h1 = 8 • Manhattan distance (i.e., city-block distance) – the sum of the distances of the tiles from their goal positions, h2 • Tiles 1 to 8 in the start state give a Manhattan distance h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18 • The average solution cost for a randomly generated 8-puzzle is about 22 steps • The solution cost for this 8-puzzle instance is 26 steps

  17. Admissible Heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • = 8 (at the start state, all eight tiles are out of position) • h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location) • Tiles 1 to 8 in the start state give a Manhattan distance • h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
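Both heuristics are easy to compute. A sketch, assuming boards are encoded as 9-tuples read row by row with 0 for the blank (the blank is conventionally not counted as a misplaced tile):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal layout; 0 is the blank

def h1(state, goal=GOAL):
    """Number of misplaced tiles (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)  # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```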

  18.–26. Example – Best-First Search (these slides step through an animated figure; each entry below is a fringe state with its h-value, best first): • Initial fringe: C(21) • After expanding C: T(5), O(7), E(12), B(14), P(15) • After expanding T: O(7), E(12), B(14), P(15) • After expanding O: I(4), E(12), B(14), P(15), N(44) • After expanding I: Z(0), E(12), B(14), P(15), N(44) • Z has h = 0, so the goal is reached

  27. Comparison of Search Techniques (table shown in the figure), where m is the maximum depth of the search space, b is the branching factor, and d is the depth of the shallowest solution.

  28. Greedy Best-First Search

  29. Romania with Step Costs in km – Fig 3.22: Values of hSLD

  30. Greedy Best-First Search • Evaluation function f(n) = h(n) (uses the heuristic function only) • h(n) = estimate of the cost of the cheapest path from n to the closest goal • e.g., if the goal is Bucharest, the straight-line-distance heuristic hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that appears to be closest to the goal
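As a usage sketch, greedy search just plugs the heuristic in as the whole evaluation function of the hypothetical best_first_search above. The hSLD values are the Fig 3.22 straight-line distances to Bucharest for a few cities; romania_problem is an assumed problem object, not defined here:

```python
# Straight-line distances to Bucharest (subset of Fig 3.22)
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def greedy_f(city):
    return h_sld[city]  # f(n) = h(n): ignore the path cost so far

# solution = best_first_search(romania_problem, greedy_f)
```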

  31. Greedy Best-First Search - Example Arad(366)

  32. Greedy Best-First Search - Example Sibiu(253) Timisoara(329) Zerind(374)

  33. Greedy Best-First Search - Example Fagaras(176) Rimnicu Vilcea(193) Timisoara(329) Arad(366) Zerind(374)

  34. Greedy Best-First Search - Example Bucharest(0) Rimnicu Vilcea(193) Sibiu(253) Timisoara(329) Arad(366) Zerind(374)

  35. Greedy Best-First Search • For this problem, greedy best-first search using hSLD finds a solution without ever expanding a node that is not on the solution path; hence the search cost is minimal • It is called “greedy” because at each step it tries to get as close to the goal as it can • It is not optimal, because the path via Sibiu and Fagaras to Bucharest (310 km from Sibiu) is longer than the path through Rimnicu Vilcea and Pitesti (278 km from Sibiu) • Much like depth-first search, greedy best-first search is incomplete even in a finite state space. For example: • Consider the problem of getting from Iasi to Fagaras. hSLD suggests that Neamt be expanded first because it is closest to Fagaras, but it is a dead end

  37. For example: • Consider the problem of getting from Iasi to Fagaras. hSLD suggests that Neamt be expanded first because it is closest to Fagaras, but it is a dead end • The solution is to go first to Vaslui – a step that is actually farther from the goal according to the heuristic – and then continue to Urziceni, Bucharest, and Fagaras. The algorithm will never find this solution, because expanding Neamt puts Iasi back into the frontier. Iasi is closer to Fagaras than Vaslui is, so Iasi will be expanded again, leading to an infinite loop • The graph-search version is complete in finite spaces, but not in infinite ones • The worst-case time and space complexity for the tree version is O(b^m), where m is the maximum depth of the search space • With a good heuristic, the complexity can be reduced substantially. The amount of the reduction depends on the particular problem and on the quality of the heuristic • (Figure: frontier snapshots of the loop – Iasi(200), Neamt(180), Vaslui(210); then Iasi(200), Vaslui(210) again)

  39. Properties of Greedy Best-First Search • Complete? No • Complete only in finite spaces with repeated-state checking • Otherwise, it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ... • Time? O(b^m) – may have to expand all nodes, but a good heuristic can give dramatic improvement • Space? O(b^m) – keeps all nodes in memory • Optimal? No • A good heuristic can nonetheless produce dramatic time/space improvements in practice

  40. A*: A Better Best-First Strategy • g(n) = cost of the path from the initial state to node n • h(n) = estimated cost (distance) from state n to the closest goal • Greedy best-first search • minimizes the estimated cost h(n) from the current node n to the goal • is informed but almost always suboptimal and incomplete • Uniform-cost search • minimizes the actual cost g(n) from the initial state to the current node n • is, in most cases, optimal and complete, but uninformed • A* search • combines the two by minimizing f(n) = g(n) + h(n) • is, under reasonable assumptions, optimal and complete, and • also informed

  41. A* Search • Idea: avoid expanding paths that are already expensive • Evaluation function f(n) = g(n) + h(n) • g(n) = cost so far to reach n • h(n) = estimated cost from n to the goal (cheapest path) • f(n) = estimated total cost of the path through n to the goal (cheapest solution) • A* search uses an admissible heuristic: • …

  42. A* Search • Idea: avoid expanding paths that are already expensive • Evaluation function f(n) = g(n) + h(n) • … • f(n) = estimated total cost of the path through n to the goal (cheapest solution) • A* search uses an admissible heuristic: • for all n, h(n) ≤ h∗(n), where h∗(n) is the true/actual cost from n (i.e., an admissible heuristic is one that never overestimates the cost to reach the goal; this implies f(n) never overestimates the true cost of a solution through n) • e.g., hSLD(n) never overestimates the actual road distance
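A minimal graph-search A* sketch, assuming a hypothetical problem object with initial_state, goal_test, and successors(state) yielding (child, step_cost) pairs (illustrative names, not a fixed API). It keeps the cheapest known g-value per state and re-queues a state when a cheaper path is found:

```python
import heapq
import itertools

def a_star(problem, h):
    """A*: expand the node with the lowest f(n) = g(n) + h(n)."""
    counter = itertools.count()              # tie-breaker for equal f-values
    start = problem.initial_state
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                      # cheapest known cost to each state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path, g                   # solution path and its cost
        for child, step_cost in problem.successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(child, float("inf")):  # keep only cheapest path
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(counter), g2, child, path + [child]))
    return None, float("inf")                # failure
```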

  43. Conditions for Optimality: Admissibility and Consistency The first condition required for optimality of A*: h(n) must be an admissible heuristic. An admissible heuristic h(n) is one that never overestimates the cost to reach the goal through n. The estimated cost, f(n), of the cheapest solution through the current node n is f(n) = g(n) + h(n), where g(n) is the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal. E.g., the straight-line distance hSLD is admissible.

  44. Conditions for Optimality: Admissibility and Consistency The second condition required for optimality of A*: consistency, which is required only when A* is applied to graph search. A heuristic h(n) is consistent if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′: h(n) ≤ c(n, a, n′) + h(n′). This is a form of the general triangle inequality. E.g., hSLD is a consistent heuristic.
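The triangle inequality can be checked mechanically over a graph's edges. A sketch, assuming h is a dict of heuristic values and edges is a list of (n, n2, cost) triples (an assumed representation, not a standard API):

```python
def is_consistent(h, edges):
    """True if h(n) <= c(n, a, n') + h(n') holds for every edge."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# e.g. with the h_sld table above:
# is_consistent(h_sld, [("Arad", "Sibiu", 140), ("Sibiu", "Fagaras", 99)])
# 366 <= 140 + 253 and 253 <= 99 + 176, so this returns True
```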

  45. Conditions for Optimality: Admissibility and Consistency • Every consistent heuristic is admissible. • Consistency is a stricter requirement than admissibility. • It is hard to concoct heuristics that are admissible but not consistent.

  46. Romania with Step Costs in Km – Fig 3.22 Values of hSLD

  47. A* Search Example

  48. A* Search Example
