Local Search and Continuous Search
Local search algorithms • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution • In such cases, we can use local search algorithms • keep a (sometimes) single "current" state, try to improve it
Example: n-queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal
Local Search • Operates by keeping track of only the current node and moving only to neighbors of that node • Often used for: • Optimization problems • Scheduling • Task assignment • …many other problems where the goal is to find the best state according to some objective function
Hill-climbing search • Consider next possible moves (i.e. neighbors) • Pick the one that improves things the most • “Like climbing Everest in thick fog with amnesia”
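Not on the original slides: a minimal Python sketch of the steepest-ascent step just described, assuming problem-specific neighbors(state) and value(state) helpers.

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: keep moving to the best neighbor."""
    current = start
    while True:
        # consider the next possible moves (the neighbors of the current state)
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current        # no neighbor improves things: stop here
        current = best            # take the move that improves things the most
```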
Hill-climbing search: 8-queens problem • h = number of pairs of queens that are attacking each other, either directly or indirectly • h = 17 for the above state
Hill-climbing search: 8-queens problem • 5 steps later… • A local minimum with h = 1 (a common problem with hill climbing)
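For concreteness, a sketch of the h used on the 8-queens slides above, assuming the common complete-state encoding where board[c] is the row of the queen in column c (this encoding is an assumption, not stated on the slides).

```python
def attacking_pairs(board):
    """h = number of pairs of queens attacking each other (same row or diagonal)."""
    h = 0
    n = len(board)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == c2 - c1
            if same_row or same_diagonal:
                h += 1
    return h
```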
Drawbacks of hill climbing • Problem: depending on the initial state, hill climbing can get stuck in local optima (local maxima or minima)
Approaches to local optima • Try again • Sideways moves
Try, try again • Run the algorithm some number of times and return the best solution • The initial start location is usually chosen randomly • If you run it “enough” times, you will get the answer (in the limit) • Drawback: takes lots of time
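A sketch of the “try, try again” idea (random-restart hill climbing); random_state, hill_climb, and value are assumed helpers, not defined on the slides.

```python
def random_restart(num_restarts, random_state, hill_climb, value):
    """Run hill climbing from several random starts and keep the best result."""
    best = None
    for _ in range(num_restarts):
        result = hill_climb(random_state())   # initial location chosen randomly
        if best is None or value(result) > value(best):
            best = result
    return best
```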
Sideways moves • If stuck on a ridge, waiting a while and allowing flat (sideways) moves may get us unstuck • Questions • How long is “a while”? • How likely are we to become unstuck?
Any other extensions? • First-choice hill climbing • Generate successors randomly until a good one is found (move quality: as good as or better than the current state); see the sketch below • Look three moves ahead • Can get unstuck from certain areas • More inefficient • Might not be any better
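A sketch of first-choice hill climbing as described above: sample random successors until one is as good as or better than the current state (random_successor and value are assumed helpers).

```python
def first_choice_step(current, random_successor, value, max_tries=100):
    """Take the first randomly generated successor that is as good or better."""
    for _ in range(max_tries):
        candidate = random_successor(current)
        if value(candidate) >= value(current):
            return candidate       # move quality: as good or better
    return current                 # give up after max_tries samples
```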
Comparison of approaches for the 8-queens problem • Trade-off between success rate and number of moves • As the success rate approaches 100%, the number of moves increases rapidly
Nice properties of local search • Can often get “close” • When is this useful? • Can trade off time and performance • Can be applied to continuous problems • E.g. first-choice hill climbing • More on this later…
Simulated annealing • Insight: all of the modifications to hill climbing are really about injecting variance • We don’t want to get stuck in local maxima or on plateaus • Idea: explicitly inject variability into the search process
Properties of simulated annealing • More variability at the beginning of search • Since you have little confidence you’re in the right place • Variability decreases over time • Don’t want to move away from a good solution • Probability of picking a move is related to how good it is • Sideways moves or slight decreases are more likely than major decreases
How simulated annealing works • At each step, we have a temperature T • Pick the next action semi-randomly • Higher temperature increases randomness • Select the action according to its goodness and the temperature • Decrease the temperature slightly at each time step until it reaches 0 (no randomness)
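A sketch of the loop just described; the geometric cooling schedule and the exp(delta / T) acceptance rule are standard choices, not taken from the slides.

```python
import math
import random

def simulated_annealing(start, neighbors, value, T0=1.0, cooling=0.995, T_min=1e-3):
    """Pick moves semi-randomly; worse moves become less likely as T drops."""
    current, T = start, T0
    while T > T_min:                 # T_min stands in for "until it reaches 0"
        candidate = random.choice(neighbors(current))
        delta = value(candidate) - value(current)
        # better moves are always taken; worse moves are taken with a
        # probability that shrinks as T decreases or as delta gets worse
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
        T *= cooling                 # decrease temperature slightly each step
    return current
```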
Local Beam Search • Keep track of k states rather than just one • Start with k randomly generated states • At each iteration, all the successors of all k states are generated • If any one is a goal state, stop; else select the k best successors from the complete list and repeat. • Results in states getting closer together over time
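A sketch of local beam search with assumed successors, value, and is_goal helpers.

```python
def local_beam_search(start_states, successors, value, is_goal, iterations=1000):
    """Keep the k best states among all successors of the current k states."""
    states = list(start_states)                  # k randomly generated states
    k = len(states)
    for _ in range(iterations):
        pool = [s for state in states for s in successors(state)]
        for s in pool:
            if is_goal(s):
                return s                         # stop if any successor is a goal
        # select the k best successors from the complete list and repeat
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)
```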
Stochastic Local Beam Search • Designed to prevent all k states from clustering together • Instead of choosing the k best, choose k successors at random, with a higher probability of choosing better states. Terminology: stochastic means random.
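A sketch of the stochastic selection step: instead of sorting and taking the k best, sample k successors weighted by value (sampling here is with replacement, a simplification).

```python
import random

def pick_k_stochastic(pool, value, k):
    """Choose k successors at random, better states being more likely."""
    weights = [value(s) for s in pool]      # assumes non-negative values
    return random.choices(pool, weights=weights, k=k)
```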
Genetic algorithms • Inspired by nature • New states generated from two parent states. Throw some randomness into the mix as well…
Genetic Algorithms • Initialize population (k random states) • Select a subset of the population for mating • Generate children via crossover (sketched below) • Continuous variables: interpolate between the parents' values • Discrete variables: replace parts of the parents' representations • Mutation (add randomness to the children's variables) • Evaluate fitness of children • Replace the worst parents with the children
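A sketch of the crossover and mutation steps listed above, for states stored as lists of numbers; the helper names are illustrative, not from the slides.

```python
import random

def crossover_continuous(a, b):
    """Continuous variables: interpolate between the two parents."""
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def crossover_discrete(a, b):
    """Discrete variables: replace part of one parent's representation with the other's."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(child, scale=0.1):
    """Add a little randomness to each of the child's variables (continuous case)."""
    return [x + random.uniform(-scale, scale) for x in child]
```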
Genetic algorithms • [Figure: example 8-queens population; each state is encoded as a digit string such as 32752411, one digit per column giving that queen's row]
Genetic algorithms • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28) • 24/(24+23+20+11) = 31% • 23/(24+23+20+11) = 29% • … etc.
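The selection probabilities above come from normalizing the fitness values, e.g.:

```python
fitness = [24, 23, 20, 11]
total = sum(fitness)                          # 78
probabilities = [f / total for f in fitness]  # approximately [0.31, 0.29, 0.26, 0.14]
```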
Genetic algorithms Probability of selection is weighted by the normalized fitness function. Crossover from the top two parents.
Genetic Algorithms • Initialize population (k random states) • Calculate fitness function • Select pairs for crossover • Apply mutation • Evaluate fitness of children • From the resulting population of 2*k individuals, probabilistically pick k of the best. • Repeat.
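A self-contained sketch of this whole loop for digit-list states such as the 8-queens encoding; note that the last step here keeps the k fittest deterministically, a simplification of “probabilistically pick k of the best”.

```python
import random

def genetic_algorithm(population, fitness, generations=100, mutation_rate=0.1):
    """One possible GA loop for states encoded as equal-length lists of digits."""
    k = len(population)
    n = len(population[0])
    for _ in range(generations):
        weights = [fitness(p) for p in population]
        children = []
        for _ in range(k):
            a, b = random.choices(population, weights=weights, k=2)  # select pair
            cut = random.randrange(1, n)                             # crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(n) if random.random() < mutation_rate else g
                     for g in child]                                 # mutation
            children.append(child)
        # keep k of the resulting 2*k individuals (here: simply the fittest k)
        population = sorted(population + children, key=fitness, reverse=True)[:k]
    return max(population, key=fitness)
```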
Searching Continuous Spaces • Continuous: Infinitely many values • Discrete: A limited number of distinct, clearly defined values • In continuous space, cannot consider all next possible moves (infinite branching factor) • Makes classic hill climbing impossible
Example • Want to put 3 airports in Romania, such that the sum of squared distances from each city on the map to its closest airport is minimized. • State: the airport coordinates $(x_1,y_1), (x_2,y_2), (x_3,y_3)$ • Objective function: $f(x_1,y_1,x_2,y_2,x_3,y_3) = \sum_{i=1}^{3} \sum_{c \in C_i} (x_i - x_c)^2 + (y_i - y_c)^2$, where $C_i$ is the set of cities whose closest airport is airport $i$
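A sketch of that objective function in code; cities and airports are assumed to be lists of (x, y) pairs.

```python
def objective(airports, cities):
    """Sum of squared distances from each city to its closest airport."""
    total = 0.0
    for (cx, cy) in cities:
        # squared distance from this city to whichever airport is closest
        total += min((cx - ax) ** 2 + (cy - ay) ** 2 for (ax, ay) in airports)
    return total
```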
Example • What can we do to solve this problem?
Searching Continuous Space • Discretize the state space • Turn it into a grid and do what we’ve always done.
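One way to discretize the airport example so the earlier hill-climbing code applies: only allow each airport to move one grid cell at a time (the grid spacing delta is an arbitrary choice).

```python
def grid_neighbors(airports, delta=10.0):
    """Neighbors of a state: move one airport by one grid step in x or y."""
    result = []
    for i, (x, y) in enumerate(airports):
        for dx, dy in [(delta, 0), (-delta, 0), (0, delta), (0, -delta)]:
            moved = list(airports)
            moved[i] = (x + dx, y + dy)
            result.append(moved)
    return result
```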
Searching Continuous Space • Calculate the gradient of the objective function at the current state • Take a step of size α in the direction of the steepest slope: x ← x + α∇f(x) • Problem: the gradient can be hard or impossible to calculate analytically • Solution: approximate the gradient through sampling
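A sketch of one gradient step, approximating the gradient numerically by sampling nearby points (finite differences); f is assumed to take a list of coordinates, and alpha is the step size.

```python
def numerical_gradient(f, x, eps=1e-5):
    """Estimate each partial derivative by nudging one coordinate at a time."""
    grad = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grad.append((f(bumped) - f(x)) / eps)
    return grad

def gradient_step(f, x, alpha):
    """Move in the direction of steepest ascent (use -alpha to minimize)."""
    g = numerical_gradient(f, x)
    return [xi + alpha * gi for xi, gi in zip(x, g)]
```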
Step size • A very small α takes a long time to reach the peak • A very large α can overshoot the goal • What can we do…? • Start α high and decrease it with time • Make α larger for flatter parts of the space
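A tiny sketch of “start high and decrease with time” for the step size (the geometric decay is just one common choice).

```python
def step_size(alpha0, t, decay=0.99):
    """Step size at time t: starts at alpha0 and shrinks geometrically."""
    return alpha0 * (decay ** t)
```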
Summary • Local search often finds an approximate solution • (i.e. it ends in “good” but not “best” states) • Can inject randomness to avoid getting stuck in local maxima • Can trade off time for a higher likelihood of success
Real World Problems • “many real world problems have a landscape that looks more like a widely scattered family of balding porcupines on a flat floor, with miniature porcupines living on the tip of each porcupine needle, ad infinitum.” -Russell and Norvig
Dear Student: I Don't Lie Awake At Night Thinking of Ways to Ruin Your Life Art Carden, for Forbes.com “One of the popular myths of higher education is that professors are sadists who live to inflict psychological trauma on undergraduates. …” … “I do not ‘take off’ points. You earn them. The difference is not merely rhetorical, nor is it trivial. In other words, you start with zero points and earn your way to a grade.” … “this means that the burden of proof is on you to demonstrate that you have mastered the material. It is not on me to demonstrate that you have not.” Link to the Article