Heuristic search

Points:
• Definitions
• Best-first search
• Hill climbing
• Problems with hill climbing
• An example: local and global heuristic functions
• Simulated annealing
• The A* procedure
• Means-ends analysis
Definitions

Heuristics (Greek heuriskein = find, discover): "the study of the methods and rules of discovery and invention".

We use our knowledge of the problem to consider some (not all) successors of the current state (preferably just one, as with an oracle). This means pruning the state space: we gain speed, but perhaps miss the solution!

• In chess: consider one (apparently best) move, maybe a few -- but not all possible legal moves.
• In the travelling salesman problem: select the nearest city at each step, giving up complete search (the greedy technique). This gives us, in polynomial time, an approximate solution of an inherently exponential problem; it can be proven that the approximation error is bounded.
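As a concrete illustration, here is a minimal Python sketch of the greedy nearest-city technique; the city names and coordinates are invented for the example:

```python
import math

def nearest_neighbour_tour(cities, start):
    """Greedy TSP heuristic: always visit the nearest unvisited city.

    cities: dict mapping city name to (x, y) coordinates.
    Returns an approximate tour; complete search is given up.
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        return math.hypot(x1 - x2, y1 - y2)

    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        # The heuristic step: consider only the nearest city, prune the rest.
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]        # return to the start city

# Hypothetical instance:
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6)}
print(nearest_neighbour_tour(cities, "A"))
```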
Definitions (2)

For heuristic search to work, we must be able to rank the children of a node. A heuristic function takes a state and returns a numeric value -- a composite assessment of this state. We then choose the child with the best score (this could be a maximum or a minimum).

A heuristic function can help gain or lose a lot, but finding the right function is not always easy.

• The 8-puzzle: how many misplaced tiles? how many slots away from the correct place? and so on.
• Water jugs: ???
• Chess: no simple counting of pieces is adequate.
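The two 8-puzzle rankings just mentioned are easy to write down. A sketch, assuming states are flat 9-tuples listed row by row, with 0 for the empty slot:

```python
def misplaced_tiles(state, goal):
    """Count tiles that are not in their goal position (ignore the blank)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """Sum, over all tiles, of how many slots each tile is from its goal place."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal))     # 2: tiles 5 and 8 are misplaced
print(manhattan_distance(state, goal))  # 2: each is one slot away
```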
Definitions (3) The principal gain -- often spectacular -- is the reduction of the state space. For example, the full tree for Tic-Tac-Toe has 9! leaves. If we consider symmetries, the tree becomes six times smaller, but it is still quite large. With a fairly simple heuristic function we can get the tree down to 40 states. (More on this when we discuss games.) Heuristics can also help speed up exhaustive, blind search, such as depth-first and breadth-first search.
Best-first search

The algorithm:

select a heuristic function (e.g., distance to the goal);
put the initial node(s) on the open list;
repeat
    select N, the best node on the open list;
    succeed if N is a goal node;
    otherwise put N on the closed list
        and add N's children to the open list;
until we succeed or the open list becomes empty (we fail);

A closed node reached on a different path is made open.
NOTE: "the best" only means "currently appearing the best"...
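A minimal Python sketch of this algorithm, assuming the caller supplies successors, the heuristic h (lower = closer to the goal), and a goal test. Re-opening of closed nodes is omitted here, since with a fixed h a re-reached node gets the same score:

```python
import heapq
from itertools import count

def best_first_search(start, successors, h, is_goal):
    """Greedy best-first search: repeatedly expand the open node that
    currently appears the best according to h."""
    tie = count()                      # tie-breaker so nodes never get compared
    open_list = [(h(start), next(tie), start)]
    parents = {start: None}            # also serves as the "seen" set
    closed = set()
    while open_list:
        _, _, node = heapq.heappop(open_list)
        if is_goal(node):              # succeed: reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        closed.add(node)
        for child in successors(node):
            if child not in closed and child not in parents:
                parents[child] = node
                heapq.heappush(open_list, (h(child), next(tie), child))
    return None                        # open list empty: we fail
```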
Hill climbing

This is a greedy algorithm: go as high up as possible as fast as possible, without looking around too much.

The algorithm:

select a heuristic function;
set C, the current node, to the highest-valued initial node;
loop
    select N, the highest-valued child of C;
    return C if its value is better than the value of N;
    otherwise set C to N;
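A sketch in Python, assuming children(n) yields the successors of n and value(n) is the heuristic score (higher = better):

```python
def hill_climbing(initial, children, value):
    """Greedy local search: keep moving to the highest-valued child
    until no child improves on the current node."""
    current = initial
    while True:
        best_child = max(children(current), key=value, default=None)
        if best_child is None or value(best_child) <= value(current):
            return current             # a (possibly local) maximum
        current = best_child
```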
Problems with hill climbing • Local maximum, or the foothill problem: there is a peak, but it is lower than the highest peak in the whole space. • The plateau problem: all local moves are equally unpromising, and all peaks seem far away. • The ridge problem: almost every move takes us down. Random-restart hill climbing is a series of hill-climbing searches with a randomly selected start node whenever the current search gets stuck. See also simulated annealing -- in a moment.
A hill climbing example (2)

A local heuristic function: count +1 for every block that sits on the correct thing, and -1 for every block that sits on an incorrect thing. The goal state has the value +8.

In the initial state blocks C, D, E, F, G, H count +1 each; blocks A and B count -1 each, for a total of +4.
Move 1 gives the value +6 (A is now on the correct support).
Moves 2a and 2b both give +4 (B and H are wrongly situated).
This means we have a local maximum of +6.
A hill climbing example (3)

A global heuristic function: count +N for every block that sits on a correct stack of N things, and -N for every block that sits on an incorrect stack of N things. That is, there is a large penalty for blocks tied up in a wrong structure. The goal state has the value +28.

In the initial state C, D, E, F, G, H count -1, -2, -3, -4, -5, -6; A counts -7, for a total of -28.
A hill climbing example (4)

Move 1 gives the value -21 (A is now on the correct support).
Move 2a gives -16, because C, D, E, F, G, H count -1, -2, -3, -4, -5, -1.
Move 2b gives -15, because C, D, E, F, G count -1, -2, -3, -4, -5.
There is no local maximum!

Moral: sometimes changing the heuristic function is all we need.
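Both heuristic functions can be checked in code. A sketch, assuming a state is a list of stacks (each listed bottom-to-top) and the single-stack initial and goal configurations implied by the counts on these slides:

```python
def supports(stacks, block):
    """Return the blocks below `block`, bottom-to-top ([] = on the table)."""
    for stack in stacks:
        if block in stack:
            return stack[:stack.index(block)]
    raise ValueError(f"unknown block {block}")

def local_h(state, goal):
    """+1 for every block on the correct immediate support, -1 otherwise."""
    score = 0
    for stack in state:
        for block in stack:
            here = supports(state, block)[-1:]    # immediate support only
            there = supports(goal, block)[-1:]
            score += 1 if here == there else -1
    return score

def global_h(state, goal):
    """+N for a block on a correct stack of N things, -N for an incorrect one."""
    score = 0
    for stack in state:
        for block in stack:
            below = supports(state, block)
            sign = 1 if below == supports(goal, block) else -1
            score += sign * len(below)
    return score

# States implied by the slides' counts: one stack, bottom-to-top.
goal    = [list("ABCDEFGH")]    # A on the table, H on top: local +8, global +28
initial = [list("BCDEFGHA")]    # B on the table, A on top
print(local_h(initial, goal))   # +4, as on the slide
print(global_h(initial, goal))  # -28, as on the slide
```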
Simulated annealing The intuition for this search method, an improvement on hill-climbing, is the process of gradually cooling a liquid. There is a schedule of "temperatures", changing in time. Reaching the "freezing point" (temperature 0) stops the process. Instead of a random restart, we use a more systematic method.
Simulated annealing (2)

The algorithm (one of the versions):

select a heuristic function f;
set C, the current node, to any initial node;
set t, the current time, to 0;
let schedule(x) be a table of "temperatures";
loop
    t = t + 1;
    if schedule(t) = 0, return C;
    select at random N, any child of C;
    if f(N) > f(C), set C to N,
    otherwise set C to N with probability e^((f(N) - f(C)) / schedule(t));
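The same version as a Python sketch; children(n) and the heuristic f come from the caller, and the cooling schedule shown is an invented example:

```python
import math
import random

def simulated_annealing(initial, children, f, schedule):
    """Accept any uphill move; accept a downhill move to N with
    probability e^((f(N) - f(C)) / temperature)."""
    current = initial
    t = 0
    while True:
        t += 1
        temperature = schedule(t)
        if temperature == 0:                  # the "freezing point": stop
            return current
        neighbour = random.choice(children(current))
        delta = f(neighbour) - f(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = neighbour

# An invented linear cooling schedule: temperatures 99, 98, ..., 1, 0.
schedule = lambda t: max(0, 100 - t)
```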
A few other search ideas

Stochastic beam search: select w nodes at random; nodes with higher values have a higher probability of selection.

Genetic algorithms: generate nodes as in stochastic beam search, but from two parents rather than one. (This topic is worthy of a separate lecture...)
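A sketch of stochastic beam search, assuming the heuristic values are positive so they can serve directly as selection weights:

```python
import random

def stochastic_beam_search(initial_nodes, children, value, w, steps):
    """Keep w nodes; at each step pool all children and sample w of
    them with probability proportional to their (positive) values."""
    beam = list(initial_nodes)
    for _ in range(steps):
        pool = [c for n in beam for c in children(n)]
        if not pool:
            break
        beam = random.choices(pool, weights=[value(c) for c in pool], k=w)
    return max(beam, key=value)
```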
The A* procedure

Hill climbing (and its clever versions) may miss an optimal solution. Here is a search method that ensures the optimality of the solution.

The algorithm:

keep a list of partial paths (initially root to root, length 0);
repeat
    succeed if the first path P reaches the goal node;
    otherwise remove path P from the list;
    extend P in all possible ways, add new paths to the list;
    sort the list by the sum of two values: the real cost of P
        till now, and an estimate of the remaining distance;
    prune the list by leaving only the shortest path for each
        node reached so far;
until success or the list of paths becomes empty;
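A Python sketch of this procedure; successors(n) is assumed to yield (child, cost) pairs, and the example graph is the one reconstructed from the following slides (kept directed here for brevity):

```python
import heapq
from itertools import count

def a_star(start, successors, h, is_goal):
    """A*: always extend the partial path that minimizes g (real cost so
    far) + h (optimistic estimate of the remaining distance)."""
    tie = count()
    open_list = [(h(start), 0, next(tie), start, [start])]
    best_g = {start: 0}   # pruning: keep only the cheapest known path per node
    while open_list:
        f, g, _, node, path = heapq.heappop(open_list)
        if is_goal(node):
            return path, g
        if g > best_g.get(node, float("inf")):
            continue       # a cheaper path to this node was found meanwhile
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(open_list,
                               (g2 + h(child), g2, next(tie), child, path + [child]))
    return None            # the list of paths is empty: failure

# Edge costs and estimates reconstructed from the step-by-step slides:
graph = {"S": [("A", 3), ("D", 4)], "A": [("B", 4), ("D", 5)],
         "B": [("C", 4), ("E", 5)], "C": [], "D": [("A", 5), ("E", 2)],
         "E": [("B", 5), ("F", 4)], "F": [("G", 3.5)], "G": []}
est = {"S": 11.5, "A": 10.1, "B": 5.8, "C": 3.4, "D": 9.2,
       "E": 7.1, "F": 3.5, "G": 0.0}
print(a_star("S", lambda n: graph[n], est.get, lambda n: n == "G"))
# -> (['S', 'D', 'E', 'F', 'G'], 13.5)
```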
The A* procedure (2) A* requires a lower-bound estimate of the distance of any node to the goal node. A heuristic that never overestimates is also called optimistic or admissible. We consider three functions with values ≥ 0: • g(n) is the actual cost of reaching node n, • h(n) is the actual unknown remaining cost, • h'(n) is the optimistic estimate of h(n).
The A* procedure (3)

Here is a sketchy proof that the goal node found by A* is optimal. Suppose A* returns a non-optimal goal m, so that g(m) > g(nk), where n0, n1, ..., nk is an optimal path to the optimal goal nk. Since m is a goal, h(m) = 0 and thus h'(m) = 0.

At every step, some node ni of the optimal path is still on the list, and
g(ni) + h'(ni) ≤ g(ni) + h(ni) = g(nk) < g(m) = g(m) + h'(m).
In other words, g(ni) + h'(ni) < g(m) + h'(m), so A* must select ni before m: it cannot have selected m.
The A* procedure (4)

A simple search problem: S is the start node, G is the goal node, and the real distances are shown. A lower-bound estimate of the distance to G could be as follows (note that we do not need it for F).

[Figure: a graph with edge costs S–A = 3, S–D = 4, A–B = 4, A–D = 5, B–C = 4, B–E = 5, D–E = 2, E–F = 4, F–G = 3.5, and estimates h'(S) = 11.5, h'(A) = 10.1, h'(B) = 5.8, h'(C) = 3.4, h'(D) = 9.2, h'(E) = 7.1, h'(F) = 3.5.]
The A* procedure (5)

Hill climbing happens to succeed here:
from S, the children are A (10.1) and D (9.2): choose D;
from D, the children are S (11.5), A (10.1), and E (7.1): choose E;
from E, the children are B (5.8), D (9.2), and F (3.5): choose F;
from F, the children are E (7.1) and G (0.0): the goal G is reached.
The path found is S–D–E–F–G.
The A* procedure (6)

A*, step 1: extend S.
S–A: f = 3 + 10.1 = 13.1
S–D: f = 4 + 9.2 = 13.2
The A* procedure (7)

A*, step 2: extend the best path, S–A (13.1).
S–A–B: f = 3 + 4 + 5.8 = 12.8
S–A–D: f = 3 + 5 + 9.2 = 17.2
The A* procedure (8)

A*, step 3: extend the best path, S–A–B (12.8).
S–A–B–C: f = 3 + 4 + 4 + 3.4 = 14.4
S–A–B–E: f = 3 + 4 + 5 + 7.1 = 19.1
The A* procedure (9)

A*, step 4: extend the best path, S–D (13.2).
S–D–A: f = 4 + 5 + 10.1 = 19.1
S–D–E: f = 4 + 2 + 7.1 = 13.1
The A* procedure (10)

A*, step 5: extend the best path, S–D–E (13.1).
S–D–E–B: f = 4 + 2 + 5 + 5.8 = 16.8
S–D–E–F: f = 4 + 2 + 4 + 3.5 = 13.5
The A* procedure (11)

A*, step 6: extend the best path, S–D–E–F (13.5).
S–D–E–F–G reaches the goal with cost 4 + 2 + 4 + 3.5 = 13.5, the best value on the list, so A* succeeds with the path S–D–E–F–G.
The A* procedure (12)

The 8-puzzle with optimistic heuristics. [Figure: a search tree with states A, ..., N.] m(x) = the number of misplaced tiles in x; d(x) = the depth of x; f(x) = m(x) + d(x).
Means-ends analysis

The idea is due to Newell and Simon (1957): work by reducing the difference between states, and so approach the goal state. There are procedures, indexed by differences, that change states.

The General Problem Solver algorithm:

set C, the current node, to any initial node;
loop
    succeed if the goal state has been reached;
    otherwise find the difference between C and the goal state;
    choose a procedure that reduces this difference,
        apply it to C to produce the new current state;
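A toy Python sketch of this loop for the trip-planning example on the next slide; the difference measure and the operator table are invented for illustration:

```python
def gps(initial, goal, difference, operators):
    """General Problem Solver loop: find the difference between the
    current state and the goal, pick a procedure indexed by that
    difference, apply it, and repeat until the goal is reached."""
    state = initial
    while (d := difference(state, goal)) > 0:
        procedure = next(op for limit, op in operators if d <= limit)
        state = procedure(state)
    return state

# Invented trip example: a state is the distance (in km) still to cover.
operators = [
    (1,     lambda s: 0),             # walk: covers the last kilometre
    (500,   lambda s: min(s, 1)),     # drive: gets within walking range
    (1e9,   lambda s: min(s, 500)),   # fly: gets within driving range
]
print(gps(6000, 0, lambda s, g: s - g, operators))   # 0: goal reached
```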
Means-ends analysis (2)

An example: planning a trip. [Table: procedures indexed by distance, matching each difference (the distance still to cover) with a procedure that reduces it.]