Beyond Classical Search Instructor: Kris Hauser http://cs.indiana.edu/~hauserk
Agenda • Local search, optimization • Branch and bound search • Online search
Local Search • Light-memory search methods • No search tree; only the current state is represented! • Applicable to problems where the path is irrelevant (e.g., 8-queen) • For other problems, must encode entire paths in the state • Many similarities with optimization techniques
Idea: Minimize h(N) • …Because h(G)=0 for any goal G • An optimization problem!
Steepest Descent • S ← initial state • Repeat: • S' ← argmin{S'∈SUCCESSORS(S)} h(S') • if GOAL?(S') return S' • if h(S') < h(S) then S ← S' else return failure • Similar to: • hill climbing with −h • gradient descent over continuous space
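The loop above can be sketched in Python; `successors`, `h`, and `is_goal` are problem-specific callbacks (hypothetical names, since the slide leaves the problem abstract):

```python
def steepest_descent(start, successors, h, is_goal):
    """Steepest descent: repeatedly move to the best successor.

    Returns a goal state, or None on failure (a local minimum of h).
    """
    s = start
    while True:
        succ = successors(s)
        if not succ:
            return None
        s2 = min(succ, key=h)       # S' = argmin over successors of h(S')
        if is_goal(s2):
            return s2
        if h(s2) < h(s):
            s = s2                   # strict improvement: move there
        else:
            return None              # stuck: no successor improves h
```

Note the failure case: if no successor strictly decreases h, the search stops, which is exactly the local-minimum problem discussed below.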
Application: 8-Queen • Repeat n times: • Pick an initial state S at random with one queen in each column • Repeat k times: • If GOAL?(S) then return S • Pick an attacked queen Q at random • Move Q in its column to minimize the number of attacking queens → new S [min-conflicts heuristic] • Return failure
Why does it work? • There are many goal states, well distributed over the state space • If no solution has been found after a few steps, it is better to start over from scratch; building a search tree would be much less efficient because of the high branching factor • Running time is almost independent of the number of queens
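A minimal sketch of this random-restart min-conflicts procedure in Python (the limits `restarts` and `k` are illustrative choices, not fixed by the slide):

```python
import random

def attacks(board, c):
    """Number of queens attacking the queen in column c.
    board[i] = row of the queen in column i."""
    return sum(1 for i in range(len(board)) if i != c and
               (board[i] == board[c] or abs(board[i] - board[c]) == abs(i - c)))

def min_conflicts(n=8, restarts=50, k=100):
    """Random-restart min-conflicts search for the n-queens problem."""
    for _ in range(restarts):
        # initial state: one queen per column, rows chosen at random
        board = [random.randrange(n) for _ in range(n)]
        for _ in range(k):
            attacked = [c for c in range(n) if attacks(board, c) > 0]
            if not attacked:
                return board          # goal: no queen is attacked
            q = random.choice(attacked)
            # move q within its column to the row minimizing conflicts
            board[q] = min(range(n),
                           key=lambda r: attacks(board[:q] + [r] + board[q+1:], q))
    return None
```

Each restart throws away the current board entirely, matching the slide's point that restarting beats building a deep search tree here.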
Steepest Descent • S ← initial state • Repeat: • S' ← argmin{S'∈SUCCESSORS(S)} h(S') • if GOAL?(S') return S' • if h(S') < h(S) then S ← S' else return failure • May easily get stuck in local minima • Remedies: • Random restart (as in n-queen example) • Monte Carlo descent
Gradient Descent in Continuous Space • Minimize y = f(x) • Move in the opposite direction of the derivative df/dx(x) • [Figure: successive iterates x1, x2, x3 step downhill along the curve y = f(x), each following −df/dx at the current point]
Gradient: the analogue of the derivative for multivariate functions f(x1,…,xn) • ∇f = (∂f/∂x1, …, ∂f/∂xn) is the direction in which moving x1,…,xn produces the steepest increase in f
[Figures: one objective f on which gradient descent works well, and another on which it works poorly]
Algorithm for Gradient Descent • Input: continuous objective function f, initial point x0 = (x1⁰,…,xn⁰) • For t = 0,…,N−1: • Compute the gradient vector gt = (∂f/∂x1(xt), …, ∂f/∂xn(xt)) • If the length of gt is small enough [convergence], return xt • Pick a step size αt • Let xt+1 = xt − αt·gt • Return failure [convergence not reached] • "Industrial strength" optimization software uses more sophisticated techniques: higher derivatives, constraint handling, methods specialized to particular function classes, etc.
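A minimal fixed-step version of this loop in Python (the quadratic test function and the constant step size are illustrative; real solvers pick αt by line search, as the slide notes):

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-6, max_iters=1000):
    """Fixed-step gradient descent. `grad` returns the gradient of f at x."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:  # ||g_t|| small enough
            return x                                # [convergence]
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return None                                     # convergence not reached

# minimize f(x, y) = (x - 1)^2 + (y + 2)^2; its gradient is (2(x-1), 2(y+2))
xmin = gradient_descent(lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)], [0.0, 0.0])
```

On this convex quadratic the iterates converge to the unique minimum (1, −2); on the "GD works poorly" objectives above, the same loop would stall in a local minimum.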
Problems for Discrete Optimization • Plateaus • Ridges • NP-hard problems typically have an exponential number of local minima
Monte Carlo Descent • S ← initial state • Repeat k times: • If GOAL?(S) then return S • S' ← successor of S picked at random • if h(S') < h(S) then S ← S' • else • ∆h = h(S') − h(S) • with probability ~ exp(−∆h/T), where T is called the "temperature", do: S ← S' [Metropolis criterion] • Return failure • Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases it
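A sketch of this loop in Python, at a fixed temperature T (simulated annealing would instead decrease T across the k iterations; the cooling schedule is a design choice not fixed by the slide):

```python
import math
import random

def monte_carlo_descent(start, successors, h, is_goal, k=10000, T=1.0):
    """Monte Carlo descent with the Metropolis acceptance criterion."""
    s = start
    for _ in range(k):
        if is_goal(s):
            return s
        s2 = random.choice(successors(s))
        dh = h(s2) - h(s)
        # accept downhill moves always; uphill moves with prob exp(-dh/T)
        if dh < 0 or random.random() < math.exp(-dh / T):
            s = s2
    return None
```

The occasional uphill acceptance is what lets the search escape the local minima that trap plain steepest descent.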
“Parallel” Local Search Techniques • They perform several local searches concurrently, but not independently: • Beam search • Genetic algorithms • Tabu search • Ant colony/particle swarm optimization
Empirical Successes of Local Search • Satisfiability (SAT) • Vertex Cover • Traveling salesman problem • Planning & scheduling • Many others…
Relation to Numerical Optimization • Optimization techniques usually operate on a continuous state space • Example: stitch point clouds together into a global model • Same major issues, e.g., local minima, apply
Classical search assumes that: • World states are perfectly observable: the current state is exactly known • Action representations are perfect: states are exactly predicted • How can an agent cope with adversaries, uncertainty, and imperfect information?
Distance, speed, acceleration? Intent? Personality?
On-Line Search • Sometimes uncertainty is so large that actions need to be executed for the agent to know their effects • On-line search: repeatedly observe effects, and replan • A proactive approach to planning • A reactive approach to uncertainty • Example: A robot must reach a goal position. It has no prior map of the obstacles, but its vision system can detect all the obstacles visible from the robot's current position
Assuming no obstacles in the unknown region and taking the shortest path to the goal is similar to searching with an admissible (optimistic) heuristic Just as with classical search, on-line search may detect dead-ends and move to a more promising position (~ node of search tree)
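The optimistic replanning idea above can be sketched on a grid world (a hypothetical setup: a 4-connected grid, BFS as the shortest-path planner, and a sensing radius of 1 cell standing in for the robot's vision):

```python
from collections import deque

def bfs_path(start, goal, size, blocked):
    """Shortest 4-connected path, treating unknown cells as free
    (the optimistic / admissible assumption from the slide)."""
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                q.append(nxt)
    return None

def online_navigate(start, goal, size, true_obstacles, sense_range=1):
    """Repeatedly: sense nearby obstacles, replan optimistically, take one step."""
    known = set()
    pos = start
    trace = [pos]
    while pos != goal:
        # sense every true obstacle within Chebyshev distance sense_range
        known |= {o for o in true_obstacles
                  if max(abs(o[0] - pos[0]), abs(o[1] - pos[1])) <= sense_range}
        path = bfs_path(pos, goal, size, known)
        if path is None:
            return None              # dead end given everything known so far
        pos = path[1]                # execute only the first planned step
        trace.append(pos)
    return trace
```

Each iteration plans as if the unknown region were empty, executes one step, and replans once new obstacles come into view, which is exactly the observe-and-replan loop described above.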
D* algorithm for Mobile Robots Tony Stentz
Next class • Uncertain and partially observable environments • Game playing • Read R&N 5.1-4 • HW1 due at end of next class