
Local Search Algorithms


Presentation Transcript


  1. Local Search Algorithms CMPT 420 / CMPG 720

  2. Outline • Introduction to local search • Hill-climbing search • Simulated annealing • Local beam search

  3. Local search algorithms • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. • e.g., n-queens • e.g., integrated-circuit design • job scheduling • …

  4. 8-Queens Problem • Put 8 queens on an 8 × 8 board with no two queens attacking each other. • No two queens share the same row, column, or diagonal.

  5. 8-Queens Problem • Incremental formulation (start with an empty board and add queens one at a time) • Complete-state formulation (start with all 8 queens on the board and move them around)

  6. Local search algorithms • We can use local search algorithms: • keep a single "current" state and try to improve it • generally move only to neighbors of the current state • The paths followed are not retained

  7. Example: n-queens • Move a queen to reduce the number of conflicts

  8. Advantages of local search • Use very little memory • Can often find reasonable solutions in large state spaces • Note, however, that local search algorithms can't backtrack

  9. Hill-climbing search (steepest-ascent version) • A simple loop that continually moves in the direction of increasing value – uphill • Terminates when it reaches a "peak" • Does not look ahead beyond the immediate neighbors and does not maintain a search tree
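
A minimal Python sketch of the steepest-ascent loop, assuming hypothetical problem-specific helpers `value(state)` (higher is better) and `neighbors(state)`; for 8-queens, `value` could simply be the negated number of attacking pairs:

```python
def hill_climbing(start, value, neighbors):
    """Steepest-ascent hill climbing: keep moving to the best neighbor
    until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current  # a "peak": no neighbor is strictly better
        current = best
```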

  10. 8-queens problem: complete-state formulation vs. incremental formulation • How many successors can we derive from one state? • Each state has 8 × 7 = 56 successors.
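
A sketch of the complete-state successor function, assuming a state is a tuple of 8 row indices where `state[col]` is the row of the queen in column `col` (an assumed encoding, not spelled out on the slide):

```python
def successors(state):
    """All states reachable by moving one queen within its own column."""
    n = len(state)
    result = []
    for col, row in enumerate(state):
        for new_row in range(n):
            if new_row != row:
                result.append(state[:col] + (new_row,) + state[col + 1:])
    return result  # for 8 queens: 8 columns x 7 other rows = 56 successors
```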

  11. 8-queens problem • h = number of pairs of queens that are attacking each other (h = 0 for a solution) • h = 17 for the state shown on the slide
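
Under the same tuple-of-rows encoding, h can be computed by counting queen pairs that share a row or a diagonal (a sketch; columns can never clash in this encoding):

```python
from itertools import combinations

def h(state):
    """Number of pairs of queens attacking each other; h = 0 is a solution."""
    attacks = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            attacks += 1
    return attacks
```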

  12. Hill-climbing search • “Greedy local search” • grabs a good neighbor state without thinking ahead about where to go next • makes rapid progress

  13. Hill climbing search: 8-queens problem • Only 5 steps from h = 17 to h = 1

  14. What we think hill-climbing looks like vs. what we learn hill-climbing is usually like

  15. Hill-climbing search • Problem: depending on initial state, can get stuck in local maxima.

  16. Problems for hill climbing • A local maximum with h = 1

  17. Problems for hill climbing • Plateaux: flat areas of the state-space landscape

  18. Hill-climbing search • Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets stuck 86% of the time. • It takes 4 steps on average when it succeeds and 3 when it gets stuck. • The steepest-ascent version halts if the best successor has the same value as the current state.

  19. Some solutions • Allow sideways moves, in the hope that the plateau is really a shoulder (uphill progress becomes possible further on) • Risk: a flat local maximum that is not a shoulder, where sideways moves can wander forever

  20. Some solutions • Solution: put a limit on the number of consecutive sideways moves • E.g., allow up to 100 consecutive sideways moves in the 8-queens problem • Success rate rises from 14% to 94% • Cost: 21 steps on average for each successful instance, 64 for each failure
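
One way to implement the sideways-move allowance is to accept equal-valued successors up to a fixed budget (100 on the slide); a sketch modifying the earlier loop, with the budget as an illustrative parameter:

```python
def hill_climbing_sideways(start, value, neighbors, max_sideways=100):
    """Steepest ascent that tolerates up to `max_sideways` consecutive
    moves to equally good states, in case the plateau is a shoulder."""
    current, sideways = start, 0
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) < value(current):
            return current              # strict local maximum
        if value(best) == value(current):
            sideways += 1
            if sideways > max_sideways:
                return current          # give up: likely a flat maximum
        else:
            sideways = 0                # uphill progress resets the budget
        current = best
```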

  21. Some more solutions (variants of hill climbing) • Stochastic hill climbing • chooses at random from among the uphill moves • usually converges more slowly, but sometimes finds better solutions • First-choice hill climbing • generates successors randomly until one is better than the current state • a good strategy when a state has many (e.g., thousands of) successors

  22. Some more solutions (variants of hill climbing) • Random-restart hill climbing • "If at first you don't succeed, try, try again." • Keep restarting from randomly generated initial states, stopping when a goal is found
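
A sketch of random-restart hill climbing, reusing the `hill_climbing` loop above; `random_state()` and `is_goal()` are hypothetical problem-specific helpers (for 8-queens, `is_goal` could test h == 0):

```python
def random_restart(random_state, value, neighbors, is_goal, max_restarts=1000):
    """Run hill climbing from fresh random states until a goal is found."""
    best = None
    for _ in range(max_restarts):
        result = hill_climbing(random_state(), value, neighbors)
        if is_goal(result):
            return result
        if best is None or value(result) > value(best):
            best = result
    return best  # best local maximum found if no goal within the budget
```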

  23. Simulated Annealing • A hill-climbing algorithm that never makes “downhill” moves is guaranteed to be incomplete. • Idea: escape local maxima by allowing some “bad” moves

  24. Simulated Annealing • Picks a random move (instead of the best one) • If it is a "good" move, it is accepted • Otherwise, it is accepted with some probability • The probability decreases exponentially with the "badness" of the move • It also decreases as the temperature T goes down
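
A sketch of this acceptance rule: a random successor is always taken when it improves the value, and otherwise taken with probability exp(ΔE / T), which shrinks both with the badness ΔE and as the temperature T cools; the cooling schedule shown in the comment is an illustrative assumption:

```python
import math
import random

def simulated_annealing(start, value, neighbors, schedule, max_steps=100000):
    """`schedule(t)` returns the temperature at step t; stop when it reaches 0."""
    current = start
    for t in range(1, max_steps + 1):
        T = schedule(t)
        if T <= 0:
            break
        candidate = random.choice(neighbors(current))
        delta = value(candidate) - value(current)
        # Good moves are always accepted; bad moves with probability exp(delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
    return current

# Illustrative geometric cooling schedule (an assumption, not from the slides):
# anneal = lambda t: 2.0 * (0.99 ** t)
```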

  25. Local Beam Search • Idea: keep k states instead of 1; choose the top k of all their successors • Not the same as k searches run in parallel! • Searches that find good states recruit other searches to join them • moves resources to where the most progress is being made
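
A sketch of local beam search: keep k states, pool the successors of all of them, and keep the best k from the pool, so slots migrate toward the states making the most progress:

```python
import heapq

def local_beam_search(starts, value, neighbors, k, max_steps=1000):
    """`starts` is an iterable of k initial states."""
    beam = list(starts)
    for _ in range(max_steps):
        pool = [s for state in beam for s in neighbors(state)]
        if not pool:
            break
        candidates = heapq.nlargest(k, pool, key=value)
        # Stop once the pooled successors no longer improve on the best state.
        if value(max(candidates, key=value)) <= value(max(beam, key=value)):
            break
        beam = candidates
    return max(beam, key=value)
```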

  26. Genetic Algorithms (GA) • A successor state is generated by combining two parent states

  27. Genetic Algorithms • Start with k randomly generated states (the population) • Evaluation function (fitness function): higher values for better states • Produce the next generation of states by selection, crossover, and mutation

  28. Genetic algorithms • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28) • Selection probability is proportional to fitness: 24/(24+23+20+11) ≈ 31%, 23/(24+23+20+11) ≈ 29%, etc.
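
A sketch of one GA generation for 8-queens, using the fitness above (non-attacking pairs, computed via the `h` sketched earlier), fitness-proportional selection, single-point crossover, and a small mutation rate; the mutation rate and population handling are illustrative assumptions:

```python
import random

def fitness(state):
    """Non-attacking pairs of queens: 28 minus the attacking pairs h(state)."""
    return 28 - h(state)

def crossover(a, b):
    """Single-point crossover of two parent states (tuples of row indices)."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(state, rate=0.1):
    """With probability `rate`, move one randomly chosen queen to a random row."""
    if random.random() < rate:
        col = random.randrange(len(state))
        state = state[:col] + (random.randrange(len(state)),) + state[col + 1:]
    return state

def next_generation(population):
    """Selection (proportional to fitness), crossover, then mutation."""
    weights = [fitness(s) for s in population]
    return [mutate(crossover(*random.choices(population, weights=weights, k=2)))
            for _ in range(len(population))]
```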

  29. Crossover can produce a state that is a long way from either parent state.
