1. 1 Heuristic (Informed) Search
2. 2 Iterative Deepening A* (IDA*) Idea: Reduce memory requirement of A* by applying cutoff on values of f
Consistent heuristic function h
Algorithm IDA*:
Initialize cutoff to f(initial-node)
Repeat:
Perform depth-first search by expanding all nodes N such that f(N) ≤ cutoff
Reset cutoff to smallest value f of non-expanded (leaf) nodes
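As a rough illustration (not part of the original slides), here is a minimal Python sketch of IDA*; it assumes the caller supplies a heuristic h(state), a successors(state) function yielding (child, step_cost) pairs, and an is_goal(state) test:

import math

def ida_star(start, h, successors, is_goal):
    # Sketch of IDA*: repeated depth-first searches with an increasing f-cutoff.
    cutoff = h(start)                          # f(initial-node), since g = 0
    while True:
        next_cutoff = math.inf
        stack = [(start, 0, [start])]          # (node, g-cost, path so far)
        while stack:                           # depth-first search
            node, g, path = stack.pop()
            f = g + h(node)
            if f > cutoff:
                next_cutoff = min(next_cutoff, f)   # smallest f among cut-off leaves
                continue
            if is_goal(node):
                return path
            for child, step_cost in successors(node):
                if child not in path:          # avoid cycles along the current path
                    stack.append((child, g + step_cost, path + [child]))
        if next_cutoff == math.inf:            # nothing was cut off: search space exhausted
            return None
        cutoff = next_cutoff                   # reset cutoff to smallest f of non-expanded nodes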
3.–14. 8-Puzzle (figure-only slides stepping through a worked example)
15. 15 Advantages/Drawbacks of IDA* Advantages:
Still complete and optimal
Requires less memory than A*
Avoids the overhead of sorting the fringe
Drawbacks:
Can't avoid revisiting states not on the current path
Essentially a DFS
Available memory is poorly used (→ memory-bounded search, see R&N pp. 101–104)
16. 16 Another approach
Local Search Algorithms
Hill-climbing or Gradient descent
Simulated Annealing
Genetic Algorithms, others
17. 17 Local Search Light-memory search method
No search tree; only the current state is represented!
Only applicable to problems where the path is irrelevant (e.g., 8-queen), unless the path is encoded in the state
Many similarities with optimization techniques
18. 18 Hill-climbing search If there exists a successor s for the current state n such that
h(s) < h(n)
h(s) <= h(t) for all the successors t of n,
then move from n to s. Otherwise, halt at n.
Looks one step ahead to determine if any successor is better than the current state; if there is, move to the best successor.
Similar to Greedy search in that it uses h, but does not allow backtracking or jumping to an alternative path since it doesn't remember where it has been.
Not complete since the search will terminate at "local minima," "plateaus," and "ridges."
19. 19 Hill climbing on a surface of states (figure; height is defined by the evaluation function)
20. 20 Robot Navigation
21. 21 Drawbacks of hill climbing Problems:
Local Maxima: peaks that aren't the highest point in the space
Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
Ridges: flat like a plateau, but with dropoffs to the sides; steps to the North, East, South and West may go down, but a step to the NW may go up.
Remedy:
Introduce randomness
Random restart.
Some problem spaces are great for hill climbing and others are terrible.
22. 22 Examples of problems with HC
http://www.ndsu.nodak.edu/instruct/juell/vp/cs724s00/hill_climbing/hill_climbing.html
23. 23 Hill climbing example
24. 24 Example of a local maximum
25. 25 Steepest Descent S ← initial state
Repeat:
S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
if GOAL?(S') return S'
if h(S') < h(S) then S ← S' else return failure
Similar to:
- hill climbing with h
- gradient descent over continuous space
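A direct Python translation of this pseudocode (a minimal sketch; h, successors, and is_goal are assumed callables, with successors returning a non-empty list of states):

def steepest_descent(initial_state, h, successors, is_goal):
    # Always move to the best (lowest-h) successor; stop when no successor improves on S.
    S = initial_state
    while True:
        best = min(successors(S), key=h)   # S' = arg min over SUCCESSORS(S) of h(S')
        if is_goal(best):
            return best
        if h(best) < h(S):
            S = best                       # strictly better: keep descending
        else:
            return None                    # stuck at a local minimum (failure)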
26. Application: 8-Queen Repeat n times:
Pick an initial state S at random with one queen in each column
Repeat k times:
If GOAL?(S) then return S
Pick an attacked queen Q at random
Move Q in its column to minimize the number of attacking queens → new S [min-conflicts heuristic]
Return failure
27. Application: 8-Queen Repeat n times:
Pick an initial state S at random with one queen in each column
Repeat k times:
If GOAL?(S) then return S
Pick an attacked queen Q at random
Move Q in its column so that the number of attacking queens is minimum → new S
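A minimal Python sketch of this random-restart repair search; the limits n_restarts and k are illustrative choices, not values from the slides:

import random

def attacks(board, col):
    # Number of queens attacking the queen in column `col`; board[c] = row of queen in column c.
    return sum(1 for c in range(len(board))
               if c != col and (board[c] == board[col]
                                or abs(board[c] - board[col]) == abs(c - col)))

def eight_queens_min_conflicts(n_restarts=50, k=100, n=8):
    for _ in range(n_restarts):
        board = [random.randrange(n) for _ in range(n)]    # one queen per column, random row
        for _ in range(k):
            attacked = [c for c in range(n) if attacks(board, c) > 0]
            if not attacked:                               # GOAL?(S): no queen is attacked
                return board
            q = random.choice(attacked)                    # pick an attacked queen at random
            # Move it within its column to the row that minimizes the number of attacks.
            board[q] = min(range(n),
                           key=lambda r: attacks(board[:q] + [r] + board[q + 1:], q))
    return None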
28. 28 Steepest Descent S ← initial state
Repeat:
S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
if GOAL?(S') return S'
if h(S') < h(S) then S ← S' else return failure
may easily get stuck in local minima
Random restart (as in n-queen example)
Monte Carlo descent
29. 29 Monte Carlo Descent S ← initial state
Repeat k times:
If GOAL?(S) then return S
S' ← successor of S picked at random
if h(S') ≤ h(S) then S ← S'
else
Δh = h(S') − h(S)
with probability ~ exp(−Δh/T), where T is called the temperature, do: S ← S' [Metropolis criterion]
Return failure
Simulated annealing lowers T over the k iterations.
It starts with a large T and slowly decreases T
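A Python sketch of Monte Carlo descent at a fixed temperature T (h, successors, and is_goal are assumed, with successors returning a list of states); simulated annealing would additionally shrink T each iteration:

import math
import random

def monte_carlo_descent(initial_state, h, successors, is_goal, k=1000, T=1.0):
    # Downhill moves are always accepted; uphill moves (dh > 0) with probability exp(-dh/T).
    S = initial_state
    for _ in range(k):
        if is_goal(S):
            return S
        S2 = random.choice(successors(S))      # successor of S picked at random
        dh = h(S2) - h(S)
        if dh <= 0 or random.random() < math.exp(-dh / T):
            S = S2                             # Metropolis criterion
    return None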
30. 30 Simulated annealing Simulated annealing (SA) exploits an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum [or maximum] in a more general system.
SA can avoid becoming trapped at local minima.
SA uses a random search that accepts changes that increase objective function f, as well as some that decrease it.
SA uses a control parameter T, which by analogy with the original application is known as the system temperature.
T starts out high and gradually decreases toward 0.
Applet http://www.heatonresearch.com/articles/64/page1.html
31. 31 Simulated annealing (cont.) A bad move from A to B is accepted with probability
e^((f(B) − f(A)) / T)
The higher the temperature, the more likely it is that a bad move can be made.
As T tends to zero, this probability tends to zero, and SA becomes more like hill climbing
If T is lowered slowly enough, SA is complete and admissible.
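To make the acceptance rule concrete, here is a small sketch; the geometric cooling schedule and its constants are assumptions for illustration (the slides only say that T decreases gradually toward 0):

import math
import random

def accept_bad_move(f_A, f_B, T):
    # Accept with probability exp((f(B) - f(A)) / T); for a bad move f(B) < f(A), this is < 1.
    return random.random() < math.exp((f_B - f_A) / T)

T, alpha = 10.0, 0.95          # assumed initial temperature and decay factor
for step in range(200):
    # ... propose a move here; apply accept_bad_move(f_A, f_B, T) to worsening moves ...
    T *= alpha                 # T tends to 0, so bad moves become ever less likely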
32. 32 The simulated annealing algorithm
33. 33 Parallel Local Search Techniques They perform several local searches concurrently, but not independently:
Beam search
Genetic algorithms
See R&N, pages 115-119
34. 34 Local Beam Search Idea: Keep track of k states rather than just one
Start with k randomly generated states
Repeat
At each iteration, all the successors of all k states are generated
If any one is a goal state
stop
Else
select the k best successors from the complete list and repeat
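A minimal Python sketch of this loop (random_state, successors, is_goal, and h are assumed callables; max_iters is an added safety bound):

def local_beam_search(k, random_state, successors, is_goal, h, max_iters=1000):
    # Keep k states; each iteration generates all their successors and keeps the k best.
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        candidates = [s2 for s in states for s2 in successors(s)]
        for s in candidates:
            if is_goal(s):                     # stop if any successor is a goal state
                return s
        if not candidates:
            return None
        states = sorted(candidates, key=h)[:k] # select the k best successors
    return None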
35. 35 Local Beam Search Not the same as k searches run in parallel!
Searches that find good states recruit other searches to join them
Problem
quite often, all k states end up on same local hill
Solution
choose k successors randomly biased towards good ones
Close analogy to natural selection
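One way to implement the randomized, fitness-biased selection step; the 1/(1 + h) weighting is an assumed choice (any scheme giving lower-h states higher weight would do, provided h ≥ 0):

import random

def stochastic_selection(candidates, h, k):
    # Sample k successors at random, biased toward good (low-h) ones, instead of the k best.
    weights = [1.0 / (1.0 + h(s)) for s in candidates]
    return random.choices(candidates, weights=weights, k=k)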
36. 36 Genetic Algorithm (GA) GA = stochastic local beam search + generate successors from pairs of states
State = a string over a finite alphabet (e.g., a string of 0s and 1s)
E.g., for 8-queen, the position of the queen in each column is denoted by a number
Crossover and mutation
http://www.heatonresearch.com/articles/65/page1.html
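A sketch of crossover and mutation on this string encoding; the example parent states and the mutation rate are illustrative, not from the slides:

import random

def crossover(parent_a, parent_b):
    # One-point crossover on two equal-length strings (e.g. 8-queen column encodings).
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(individual, alphabet="12345678", rate=0.1):
    # With probability `rate`, replace each character by a random symbol from the alphabet.
    return "".join(random.choice(alphabet) if random.random() < rate else ch
                   for ch in individual)

child = mutate(crossover("32752411", "24748552"))   # one digit per column = queen's row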
37. 37 Genetic Algorithm (GA)
38. 38 Genetic Algorithm (GA) Crossover helps iff substrings are meaningful components