Local Search Algorithms CMPT 420 / CMPG 720
Outline • Introduction to local search • Hill-climbing search • Simulated annealing • Local beam search • Genetic algorithms
Local search algorithms • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. • e.g., n-queens, integrated-circuit design, job scheduling, …
8-Queens Problem • Put 8 queens on an 8 × 8 board with no two queens attacking each other. • No two queens share the same row, column, or diagonal.
8-Queens Problem • Incremental formulation: start with an empty board and add one queen at a time • Complete-state formulation: start with all 8 queens on the board and move them around
Local search algorithms • We can use local search algorithms: • keep a single "current" state and try to improve it • generally move to neighboring states • the paths followed by the search are not retained
Example: n-queens • Move a queen to reduce the number of conflicts
Advantages of local search • Uses very little memory • Can often find reasonable solutions in large state spaces • Trade-off: local search algorithms can't backtrack
Hill-climbing search (steepest-ascent version) • A simple loop that continually moves in the direction of increasing value – uphill • Terminates when it reaches a "peak" • does not look ahead beyond the immediate neighbors and does not maintain a search tree
8-queens problem: complete-state vs. incremental formulation • How many successors can we derive from one state? • In the complete-state formulation, each state has 8 × 7 = 56 successors (each queen can be moved to any of the 7 other squares in its column).
8-queens problem • h = number of pairs of queens that are attacking each other (h = 0 for a solution) • h = 17 for the state shown
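The heuristic h is easy to compute directly. A minimal sketch in Python, assuming the usual complete-state encoding (a tuple of 8 row indices, one per column; the function name is ours):

```python
from itertools import combinations

def attacking_pairs(state):
    """h = number of pairs of queens attacking each other.

    state[col] is the row of the queen in column col, so queens can
    only clash on a row or a diagonal, never on a column.
    """
    return sum(
        1
        for i, j in combinations(range(len(state)), 2)
        if state[i] == state[j]               # same row
        or abs(state[i] - state[j]) == j - i  # same diagonal
    )

# All eight queens on one row: every pair attacks, h = 8 * 7 / 2 = 28.
print(attacking_pairs((0, 0, 0, 0, 0, 0, 0, 0)))  # 28
# A known solution: h = 0.
print(attacking_pairs((4, 2, 0, 6, 1, 7, 5, 3)))  # 0
```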
Hill-climbing search • “Greedy local search” • grabs a good neighbor state without thinking ahead about where to go next • makes rapid progress
Hill-climbing search: 8-queens problem • Only 5 steps from h = 17 to h = 1
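The steepest-ascent loop itself is only a few lines. A sketch using the 56-successor neighborhood from the previous slide (each queen moved to one of the 7 other rows in its column); since we minimize h rather than maximize a value, "uphill" here means a strictly smaller h. Helper names are ours:

```python
from itertools import combinations

def attacking_pairs(state):
    """The heuristic h: number of attacking pairs of queens."""
    return sum(
        1
        for i, j in combinations(range(len(state)), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )

def successors(state):
    """All 8 * 7 = 56 states reachable by moving one queen in its column."""
    n = len(state)
    return [
        state[:col] + (row,) + state[col + 1:]
        for col in range(n)
        for row in range(n)
        if row != state[col]
    ]

def steepest_ascent(state):
    """Climb until no successor strictly improves h (a 'peak')."""
    while True:
        best = min(successors(state), key=attacking_pairs)
        if attacking_pairs(best) >= attacking_pairs(state):
            return state  # peak: maybe h = 0, maybe a local optimum
        state = best

# Start from a maximally conflicted diagonal board (h = 28).
result = steepest_ascent((0, 1, 2, 3, 4, 5, 6, 7))
print(attacking_pairs(result))
```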
(Figure: what we think hill climbing looks like vs. what hill climbing is usually like.)
Hill-climbing search • Problem: depending on the initial state, it can get stuck in local maxima.
Problems for hill climbing • A local maximum with h = 1
Problems for hill climbing • Plateau: a flat area of the state-space landscape
Hill-climbing search • Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets stuck 86% of the time. • It takes 4 steps on average when it succeeds and 3 when it gets stuck. • The steepest-ascent version halts if the best successor has the same value as the current state.
Some solutions • Allow a sideways move, in the hope that the plateau is really a shoulder • Danger: an infinite loop on a flat local maximum that is not a shoulder
Some solutions • Solution: put a limit on the number of consecutive sideways moves • E.g., 100 consecutive sideways moves in the 8-queens problem • success rate: rises from 14% to 94% • cost: 21 steps on average for each successful instance, 64 for each failure
Some more solutions (variants of hill climbing) • Stochastic hill climbing • chooses at random from among the uphill moves • usually converges more slowly, but sometimes finds better solutions • First-choice hill climbing • generates successors randomly until one is better than the current state • a good strategy when a state has many (e.g., thousands of) successors
Some more solutions (variants of hill climbing) • Random-restart hill climbing • "If at first you don't succeed, try, try again." • Keep restarting from randomly generated initial states, stopping when a goal is found
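Random restart is a thin wrapper around any hill climber. A sketch, assuming the tuple-of-rows encoding and a steepest-ascent inner loop as on the earlier slides; with roughly 14% success per attempt, a few hundred restarts all but guarantee a solution:

```python
import random
from itertools import combinations

def attacking_pairs(state):
    """The heuristic h: number of attacking pairs of queens."""
    return sum(
        1
        for i, j in combinations(range(len(state)), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )

def steepest_ascent(state):
    """Plain steepest-ascent hill climbing on h (no sideways moves)."""
    n = len(state)
    while True:
        succ = [
            state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]
        ]
        best = min(succ, key=attacking_pairs)
        if attacking_pairs(best) >= attacking_pairs(state):
            return state
        state = best

def random_restart(n=8, max_restarts=1000):
    """Restart from random states until a goal (h = 0) is found."""
    for _ in range(max_restarts):
        start = tuple(random.randrange(n) for _ in range(n))
        result = steepest_ascent(start)
        if attacking_pairs(result) == 0:
            return result
    return None  # extremely unlikely with this many restarts
```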
Simulated Annealing • A hill-climbing algorithm that never makes "downhill" moves is guaranteed to be incomplete: it can get stuck on a local maximum. • Idea: escape local maxima by allowing some "bad" moves
Simulated Annealing • Picks a random move (instead of the best) • If the move improves the situation ("good move") • it is always accepted; • else • it is accepted with some probability < 1 • The probability decreases exponentially with the "badness" of the move • It also decreases as the temperature T goes down
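The acceptance rule on this slide can be written down directly. A sketch of the probability test, with ΔE = value(next) − value(current), so a bad move has ΔE < 0; the function name is ours:

```python
import math
import random

def accept(delta_e, temperature):
    """Always take uphill moves; take a downhill move with probability
    exp(delta_e / T), which shrinks both as the move gets worse
    (delta_e more negative) and as the temperature T cools toward 0."""
    if delta_e > 0:
        return True
    return random.random() < math.exp(delta_e / temperature)

# A mildly bad move at high temperature is often accepted...
print(accept(-0.1, temperature=10.0))
# ...while a very bad move at low temperature essentially never is:
# exp(-100 / 0.01) underflows to 0.0, so this is always False.
print(accept(-100.0, temperature=0.01))
```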
Local Beam Search • Idea: keep k states instead of 1; choose the top k of all their successors • Not the same as k searches run in parallel! • Searches that find good states recruit other searches to join them • moves resources to where the most progress is being made
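A sketch of the pooled-successors idea on a toy one-dimensional landscape (the function names and the toy problem are ours, not the slides'). The key line is that the k survivors are chosen from the combined pool of everyone's successors, so searches that find good states pull the whole beam toward them:

```python
def local_beam_search(starts, value, successors, k, steps=100):
    """Keep k states; each step, rank the pooled successors of all of
    them and keep the best k. Returns the best state seen overall."""
    beam = list(starts)
    best = max(beam, key=value)
    for _ in range(steps):
        # Pool successors of the whole beam, not of each state separately.
        pool = [s for state in beam for s in successors(state)]
        beam = sorted(pool, key=value, reverse=True)[:k]
        top = max(beam, key=value)
        if value(top) > value(best):
            best = top
    return best

# Toy landscape: integers, single peak at x = 10.
peak = local_beam_search(
    starts=[0, 20, 3],
    value=lambda x: -abs(x - 10),
    successors=lambda x: [x - 1, x + 1],
    k=3,
)
print(peak)  # 10
```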
Genetic Algorithms (GA) • A successor state is generated by combining two parent states
Genetic Algorithms • Start with k randomly generated states (the population) • Evaluation function (fitness function): higher values for better states • Produce the next generation of states by selection, crossover, and mutation
Genetic algorithms • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28) • 24/(24+23+20+11) = 31% • 23/(24+23+20+11) = 29%, etc.
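The percentages on this slide are fitness-proportionate selection. A sketch computing the fitness (non-attacking pairs, using the tuple-of-rows encoding from earlier slides) and the selection probabilities; the function names are ours:

```python
from itertools import combinations

def fitness(state):
    """Number of NON-attacking pairs: pairs total minus attacking pairs."""
    n = len(state)
    attacking = sum(
        1
        for i, j in combinations(range(n), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )
    return n * (n - 1) // 2 - attacking  # max = 8 * 7 / 2 = 28 for n = 8

def selection_probabilities(fitnesses):
    """Fitness-proportionate ('roulette-wheel') selection probabilities."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

# The population from the slide: fitnesses 24, 23, 20, 11.
probs = selection_probabilities([24, 23, 20, 11])
print([round(100 * p) for p in probs])  # [31, 29, 26, 14]
```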
Crossover can produce a state that is a long way from either parent state.
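A single-point crossover makes this concrete: because a whole block of queens changes at once, the child can differ from each parent in many positions, unlike a single-queen move. A sketch (encoding and names as in the earlier examples):

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: prefix of one parent, suffix of the other.

    The crossover point is chosen so each parent contributes at least
    one position.
    """
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

child = crossover((3, 2, 7, 5, 2, 4, 1, 1), (2, 4, 7, 4, 0, 5, 5, 2))
print(child)
```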