Introduction to Artificial Intelligence: Heuristic Search
Ruth Bergman, Fall 2002
Search Strategies
• Uninformed search (= blind search)
  • has no information about the number of steps or the path cost from the current state to the goal
• Informed search (= heuristic search)
  • has some domain-specific information
  • we can use this information to speed up search
  • e.g., Bucharest is southeast of Arad
  • e.g., the number of tiles that are out of place in an 8-puzzle position
  • e.g., for the missionaries and cannibals problem, prefer moves that move people across the river quickly
Heuristic Search
• Suppose we have one piece of information: a heuristic function
  • h(n) = 0 if n is a goal node
  • h(n) > 0 if n is not a goal node
  • we can think of h(n) as a "guess" at how far n is from the goal

Best-First-Search(state, h)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
  return failure
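A minimal Python sketch of the pseudocode above. The callables succ, goal_test, and h are assumed to be supplied by the problem; their names are illustrative, not part of the original slides.

  import heapq
  import itertools

  def best_first_search(start, succ, goal_test, h):
      """Greedy best-first search: always expand the node with the lowest h."""
      counter = itertools.count()            # tie-breaker so states are never compared
      frontier = [(h(start), next(counter), start)]
      while frontier:
          _, _, node = heapq.heappop(frontier)
          if goal_test(node):
              return node
          for child in succ(node):
              heapq.heappush(frontier, (h(child), next(counter), child))
      return None                            # failure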
Heuristics: Example
• Travel: h(n) = distance(n, goal)
Heuristics: Example
• 8-puzzle: h(n) = number of tiles out of place
[Figure: an example 8-puzzle position with h(n) = 3]
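This heuristic is easy to code. A sketch, assuming states are flat 9-tuples read row by row with 0 marking the blank (an encoding chosen here for illustration):

  def tiles_out_of_place(state, goal):
      """Count tiles (not the blank) that are not in their goal position."""
      return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)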
Example - cont h(n) = 3 h(n) = 2 h(n) = 4 h(n) = 3
h(n) = 3 h(n) = 2 h(n) = 4 h(n) = 3 h(n) = 1 h(n) = 3
h(n) = 3 h(n) = 2 h(n) = 4 h(n) = 3 h(n) = 1 h(n) = 3 h(n) = 2 h(n) = 0 h(n) = 2
Best-First-Search Performance
• Completeness
  • complete if either the search depth is finite or there is a minimum drop in h value for each operator
• Time complexity
  • depends on how good the heuristic function is
  • a "perfect" heuristic function leads the search directly to the goal
  • we rarely have a "perfect" heuristic function
• Space complexity
  • maintains the fringe of the search in memory
  • high storage requirement
• Optimality
  • solutions are not necessarily optimal; e.g., suppose the heuristic drops to 1 everywhere except along the path on which the solution lies
Iterative Improvement Algorithms
• Start with a complete configuration and make modifications to improve its quality
• Consider the states laid out on the surface of a landscape
• Keep track of only the current state => a simplification of Best-First-Search
• Do not look ahead beyond the immediate neighbors of that state
• Ex: an amnesiac climbing to the summit in a thick fog
Hill-Climbing
"Like climbing Everest in thick fog with amnesia"
• A simple loop that continually moves in the direction of increasing value
• Does not maintain a search tree, so the node data structure need only record the state and its evaluation
• Always tries to make a change that improves the current state
• Steepest ascent: pick the highest-valued successor

Hill-Climbing(state, h)
  current = state
  do forever
    next = maximum-valued successor of current
    if (value(next) < value(current)) return current
    current = next
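A Python sketch of steepest-ascent hill climbing, with succ and value assumed to be problem-specific callables. One deliberate deviation from the pseudocode: the test uses <= rather than <, so the loop also terminates on a plateau instead of cycling forever.

  def hill_climbing(state, succ, value):
      """Steepest-ascent hill climbing; `value` is maximized."""
      current = state
      while True:
          neighbors = list(succ(current))
          if not neighbors:
              return current
          best = max(neighbors, key=value)   # steepest-ascent choice
          if value(best) <= value(current):  # no strictly uphill neighbor: stop
              return current
          current = best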
Drawbacks
• Local maxima: the search halts at a peak that is lower than the global maximum
• Plateaux: the search wanders in a random walk on a flat region of the landscape
• Ridges: the search oscillates from side to side, making little progress
• Random-restart hill-climbing: conducts a series of hill-climbing searches from randomly generated initial states (see the sketch below)
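A sketch of random-restart hill climbing, reusing the hill_climbing function from the previous slide. random_state is an assumed problem-specific generator of initial states, and the restart count is an arbitrary illustrative default.

  def random_restart_hill_climbing(random_state, succ, value, restarts=25):
      """Run hill climbing from several random starts; keep the best result."""
      best = None
      for _ in range(restarts):
          candidate = hill_climbing(random_state(), succ, value)
          if best is None or value(candidate) > value(best):
              best = candidate
      return best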
Hill-Climbing Performance
• Completeness
  • not complete; does not use a systematic search method
• Time complexity
  • depends on the heuristic function
• Space complexity
  • very low storage requirement
• Optimality
  • solutions are not necessarily optimal
  • often results in a locally optimal solution
Simulated Annealing
• Takes some uphill steps to escape a local minimum
• Instead of picking the best move, it picks a random move
• If the move improves the situation, it is executed; otherwise, the move is made with some probability less than 1
• Physical analogy with the annealing process:
  • allowing a liquid to cool gradually until it freezes
  • the heuristic value is the energy, E
  • the temperature parameter, T, controls the speed of convergence
Simulated-Annealing Algorithm

Simulated-Annealing(state, schedule)
  current = state
  for t = 1, 2, ...
    T = schedule(t)
    if (T = 0) return current
    next = a randomly selected successor of current
    ΔE = value(next) - value(current)
    if (ΔE > 0) current = next
    else current = next with probability e^(ΔE/T)
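A Python sketch of the loop above for a maximization problem, using the standard acceptance probability e^(ΔE/T) for downhill moves; succ, value, and schedule are assumed problem-specific callables.

  import itertools
  import math
  import random

  def simulated_annealing(state, succ, value, schedule):
      """Accept uphill moves always, downhill moves with probability e^(dE/T)."""
      current = state
      for t in itertools.count(1):
          T = schedule(t)
          if T <= 0:
              return current
          nxt = random.choice(list(succ(current)))
          dE = value(nxt) - value(current)
          if dE > 0 or random.random() < math.exp(dE / T):
              current = nxt

For example, the linear schedule from the next slide would be passed as schedule=lambda t: 100 - 5 * t.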
Simulated Annealing Solution Quality
• The schedule determines the rate at which the temperature is lowered
• If the schedule lowers T slowly enough, the algorithm will find a global optimum
• A high temperature T is characterized by a large proportion of accepted uphill moves, whereas at a low temperature only downhill moves are accepted
[Figure: solution quality over time under the linear schedule T = 100 - 5t]
• => If a suitable annealing schedule is chosen, simulated annealing has been found capable of finding a good solution, though this is not guaranteed to be the absolute optimum
Beam Search
• Overcomes the storage complexity of Best-First-Search
• Maintains the k best nodes in the fringe of the search tree (sorted by the heuristic function)
• When k = 1, beam search is equivalent to Hill-Climbing
• When k is infinite, beam search is equivalent to Best-First-Search
• If you add a check to avoid repeated states, the memory requirement remains high
• Incomplete: the search may delete the path to the solution
Beam Search Algorithm

Beam-Search(state, h, k)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
      if (size(nodes) > k) delete last item in nodes
  return failure
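A Python sketch of the same idea: best-first search whose frontier is pruned back to the k best nodes after each expansion. A plain sorted list stands in for the priority queue to keep the pruning step obvious; succ, goal_test, and h are assumed problem-specific callables.

  def beam_search(start, succ, goal_test, h, k):
      """Best-first search with the frontier truncated to the k best nodes."""
      frontier = [start]                     # kept sorted by h, best first
      while frontier:
          node = frontier.pop(0)
          if goal_test(node):
              return node
          frontier.extend(succ(node))
          frontier.sort(key=h)
          del frontier[k:]                   # prune: keep only the k best
      return None                            # failure (pruning may drop the solution path)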
Search Performance
8-puzzle heuristics:
• Heuristic 1: tiles out of place
• Heuristic 2: Manhattan distance*
*Manhattan distance = the total number of horizontal and vertical moves required to move each tile from its current position to its position in the goal state
Example position: h1 = 7, h2 = 2+1+1+2+1+1+1+0 = 9
=> The choice of heuristic is critical to the performance of a heuristic search algorithm.
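A sketch of the Manhattan-distance heuristic, assuming the same flat 9-tuple state encoding used in the earlier tiles_out_of_place sketch (0 = blank):

  def manhattan_distance(state, goal):
      """Sum, over tiles 1..8, of the grid distance to each tile's goal cell."""
      total = 0
      for tile in range(1, 9):               # the blank (0) is not counted
          r, c = divmod(state.index(tile), 3)
          gr, gc = divmod(goal.index(tile), 3)
          total += abs(r - gr) + abs(c - gc)
      return total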