
Beyond Classical Search



  1. Beyond Classical Search Instructor: Kris Hauser http://cs.indiana.edu/~hauserk

  2. Agenda • Local search, optimization • Branch and bound search • Online search

  3. Local Search • Light-memory search methods • No search tree; only the current state is represented! • Applicable to problems where the path is irrelevant (e.g., 8-queen) • For other problems, must encode entire paths in the state • Many similarities with optimization techniques

  4. Idea: Minimize h(N) • …Because h(G)=0 for any goal G • An optimization problem!

  5. Steepest Descent • S ← initial state • Repeat: • S' ← argmin_{S'∈SUCCESSORS(S)} h(S') • if GOAL?(S') return S' • if h(S') < h(S) then S ← S' else return failure • Similar to: • hill climbing with −h • gradient descent over continuous space
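A minimal Python sketch of this pseudocode (the `successors`, `h`, and `is_goal` callables are assumptions supplied by the caller, not part of the slide):

```python
def steepest_descent(s, successors, h, is_goal):
    """Greedy local search: repeatedly move to the best successor.

    s: initial state; successors(s) -> iterable of states;
    h: heuristic with h(goal) == 0; is_goal(s) -> bool.
    Returns a goal state, or None on failure (local minimum).
    """
    while True:
        s_next = min(successors(s), key=h)   # argmin over successors
        if is_goal(s_next):
            return s_next
        if h(s_next) < h(s):                 # strict improvement only
            s = s_next
        else:
            return None                      # stuck in a local minimum
```

For example, minimizing h(x) = |x| over the integers with successors x−1 and x+1 descends directly to the goal x = 0.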

  6. Application: 8-Queen [figure: board labeled with the number of attacking queens for each candidate move] • Pick an initial state S at random with one queen in each column • Repeat k times: • If GOAL?(S) then return S • Pick an attacked queen Q at random • Move Q in its column to minimize the number of attacking queens → new S [min-conflicts heuristic] • Return failure

  7. Application: 8-Queen [figure: board labeled with the number of attacking queens for each candidate move] Repeat n times: • Pick an initial state S at random with one queen in each column • Repeat k times: • If GOAL?(S) then return S • Pick an attacked queen Q at random • Move Q in its column to minimize the number of attacking queens → new S [min-conflicts heuristic] • Return failure

  8. Application: 8-Queen [figure: board labeled with the number of attacking queens for each candidate move] • Why does it work ??? • There are many goal states that are well-distributed over the state space • If no solution has been found after a few steps, it's better to start it all over again. Building a search tree would be much less efficient because of the high branching factor • Running time almost independent of the number of queens Repeat n times: • Pick an initial state S at random with one queen in each column • Repeat k times: • If GOAL?(S) then return S • Pick an attacked queen Q at random • Move Q in its column to minimize the number of attacking queens → new S [min-conflicts heuristic] • Return failure
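The min-conflicts procedure above, with the outer random-restart loop, can be sketched in Python (a minimal illustration; the list-of-rows board representation and the step limits are assumptions):

```python
import random

def conflicts(rows, col):
    """Number of queens attacking the queen in column col.
    rows[c] is the row of the queen in column c (one queen per column)."""
    return sum(1 for c in range(len(rows))
               if c != col and (rows[c] == rows[col]
                                or abs(rows[c] - rows[col]) == abs(c - col)))

def min_conflicts(n=8, k=1000):
    """One inner run: repeat k times from a random start (slide 6)."""
    rows = [random.randrange(n) for _ in range(n)]
    for _ in range(k):
        attacked = [c for c in range(n) if conflicts(rows, c) > 0]
        if not attacked:                   # GOAL?: no queen is attacked
            return rows
        col = random.choice(attacked)      # pick an attacked queen at random
        # move it within its column to minimize the number of attackers
        rows[col] = min(range(n),
                        key=lambda r: conflicts(rows[:col] + [r] + rows[col + 1:], col))
    return None

def solve(n=8, restarts=20):
    """Outer 'Repeat n times' restart loop (slide 7)."""
    for _ in range(restarts):
        sol = min_conflicts(n)
        if sol is not None:
            return sol
    return None
```

Each inner run either finds a conflict-free placement quickly or is abandoned, matching the slide's observation that restarting beats persisting.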

  9. Steepest Descent • S ← initial state • Repeat: • S' ← argmin_{S'∈SUCCESSORS(S)} h(S') • if GOAL?(S') return S' • if h(S') < h(S) then S ← S' else return failure • May easily get stuck in local minima • Random restart (as in n-queen example) • Monte Carlo descent

  10. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: curve y=f(x) with the derivative df/dx(x1) at point x1]

  11. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: step from x1 to x2 along −df/dx(x1)]

  12. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: the derivative df/dx(x2) at point x2]

  13. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: step from x2 to x3 along −df/dx(x2)]

  14. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: the derivative df/dx(x3) at point x3]

  15. Gradient Descent in Continuous Space • Minimize y=f(x) • Move in opposite direction of derivative df/dx(x) [figure: the sequence x1, x2, x3 descending toward the minimum]

  16. Gradient: analogue of derivative in multivariate functions f(x1,…,xn) • Direction that you would move x1,…,xn to make the steepest increase in f [figure: surface f over x1, x2 with the gradient direction]

  17. [figure: a function f on which GD works well] [figure: a function f on which GD works poorly]

  18. Algorithm for Gradient Descent • Input: continuous objective function f, initial point x0=(x10,…,xn0) • For t=0,…,N−1: • Compute gradient vector gt=(∂f/∂x1(xt),…,∂f/∂xn(xt)) • If the length of gt is small enough [convergence], return xt • Pick a step size αt • Let xt+1 = xt − αt·gt • Return failure [convergence not reached] • "Industrial strength" optimization software uses more sophisticated techniques to use higher derivatives, handle constraints, deal with particular function classes, etc.
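A minimal Python sketch of this algorithm, assuming a fixed step size rather than a per-iteration αt (the tolerance and iteration limit are also assumptions):

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-6, max_iters=10000):
    """Plain gradient descent with a fixed step size.

    grad(x) -> gradient vector at x (list of floats); x0: start point.
    Returns a point with a small gradient, or None if convergence
    is not reached within max_iters iterations.
    """
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:     # ||g|| small: converged
            return x
        x = [xi - step * gi for xi, gi in zip(x, g)]  # x <- x - alpha * g
    return None

# Example objective (an assumption for illustration):
# f(x, y) = (x - 1)^2 + (y + 2)^2, with gradient (2(x - 1), 2(y + 2)).
```

On this convex example the iterates contract geometrically toward the minimizer (1, −2).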

  19. Problems for Discrete Optimization… [figures: a plateau; ridges] • NP-hard problems typically have an exponential number of local minima

  20. Monte Carlo Descent • S ← initial state • Repeat k times: • If GOAL?(S) then return S • S' ← successor of S picked at random • if h(S') ≤ h(S) then S ← S' • else • ∆h = h(S')−h(S) • with probability ~ exp(−∆h/T), where T is called the "temperature", do: S ← S' [Metropolis criterion] • Return failure • Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases T
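A minimal Python sketch of simulated annealing as described above (the linear cooling schedule and the `successor`, `h`, and `is_goal` callables are assumptions):

```python
import math
import random

def simulated_annealing(s, successor, h, is_goal, k=10000, t0=1.0):
    """Monte Carlo descent with a cooling schedule.

    successor(s) -> one random successor; h: heuristic to minimize;
    t0: initial temperature, lowered linearly over the k iterations.
    """
    for i in range(k):
        if is_goal(s):
            return s
        t = t0 * (1 - i / k)               # temperature decreases toward 0
        s2 = successor(s)
        dh = h(s2) - h(s)
        if dh <= 0:                        # downhill moves always accepted
            s = s2
        elif t > 0 and random.random() < math.exp(-dh / t):
            s = s2                         # uphill move: Metropolis criterion
    return None                            # failure: no goal found in k steps
```

Early on (large T) uphill moves are accepted often, letting the search escape local minima; as T falls the behavior approaches pure descent.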

  21. “Parallel” Local Search Techniques • They perform several local searches concurrently, but not independently: • Beam search • Genetic algorithms • Tabu search • Ant colony/particle swarm optimization
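Of these, local beam search is the simplest to illustrate: keep the w best states found so far and expand them all together. A minimal Python sketch (the width, iteration limit, and helper callables are assumptions):

```python
def beam_search(starts, successors, h, is_goal, width=5, max_iters=100):
    """Local beam search: keep the `width` best states each iteration."""
    beam = sorted(starts, key=h)[:width]
    for _ in range(max_iters):
        for s in beam:
            if is_goal(s):
                return s
        # pool all successors of all beam states, keep the best `width`
        pool = {s2 for s in beam for s2 in successors(s)}
        if not pool:
            return None
        beam = sorted(pool, key=h)[:width]
    return None
```

Unlike independent restarts, the beam states compete: successors of a promising state can crowd out the others, which is what makes the searches concurrent but not independent.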

  22. Empirical Successes of Local Search • Satisfiability (SAT) • Vertex Cover • Traveling salesman problem • Planning & scheduling • Many others…

  23. Relation to Numerical Optimization • Optimization techniques usually operate on a continuous state space • Example: stitch point clouds together into a global model • Same major issues, e.g., local minima, apply

  24. Dealing with Imperfect Knowledge

  25. Classical search assumes that: • World states are perfectly observable ⇒ the current state is exactly known • Action representations are perfect ⇒ states are exactly predicted • How can an agent cope with adversaries, uncertainty, and imperfect information?

  26. Distance, speed, acceleration? Intent? Personality?

  27. On-Line Search • Sometimes uncertainty is so large that actions need to be executed for the agent to know their effects • On-line search: repeatedly observe effects, and replan • A proactive approach to planning • A reactive approach to uncertainty • Example: A robot must reach a goal position. It has no prior map of the obstacles, but its vision system can detect all the obstacles visible from the robot's current position

  28. Assuming no obstacles in the unknown region and taking the shortest path to the goal is similar to searching with an admissible (optimistic) heuristic

  29. Assuming no obstacles in the unknown region and taking the shortest path to the goal is similar to searching with an admissible (optimistic) heuristic

  30. Assuming no obstacles in the unknown region and taking the shortest path to the goal is similar to searching with an admissible (optimistic) heuristic Just as with classical search, on-line search may detect dead-ends and move to a more promising position (~ node of search tree)
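The optimistic replanning loop described on these slides can be illustrated on a grid. This is a minimal sketch, not the D* algorithm: the grid world, the contact-style sensing model (obstacles are discovered only when the robot tries to step into them), and replanning from scratch with BFS are all assumptions:

```python
from collections import deque

def bfs_path(start, goal, blocked, n):
    """Shortest 4-connected path on an n x n grid avoiding `blocked` cells."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk back-pointers to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nb[0] < n and 0 <= nb[1] < n
                    and nb not in blocked and nb not in prev):
                prev[nb] = cell
                queue.append(nb)
    return None                            # goal unreachable in known map

def online_navigate(start, goal, true_obstacles, n):
    """Optimistic online search: assume unknown cells are free, take one
    step along the shortest assumed path, sense, and replan."""
    known = set()                          # obstacles discovered so far
    pos, visited = start, [start]
    while pos != goal:
        path = bfs_path(pos, goal, known, n)
        if path is None:
            return None                    # dead end given current knowledge
        step = path[1]
        if step in true_obstacles:         # sensing reveals the obstacle
            known.add(step)                # update the map and replan
        else:
            pos = step
            visited.append(pos)
    return visited
```

Because the planner treats unknown cells as free, each planned path is a lower bound on the true cost, mirroring the admissible-heuristic analogy on the slide; every surprise shrinks the optimistic map until the robot either reaches the goal or proves it unreachable.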

  31. D* Algorithm for Mobile Robots (Tony Stentz)

  32. Real-time replanning among unpredictably moving obstacles

  33. Next class • Uncertain and partially observable environments • Game playing • Read R&N 5.1-4 • HW1 due at end of next class
