To reopen or not to reopen (this is the question)
Different approaches of re-opening nodes in heuristic search
Vitali Sepetnitsky
Advisors: Prof. Ariel Felner, Dr. Roni Stern
Ben-Gurion University of the Negev, Department of Information Systems Engineering
Outline: Background • Motivation • Previous Work • Our Research • Summary
Heuristic Search & A* (reminder)
• The Heuristic Search field deals with the problem of finding a path between two vertices in a graph
• A* is one of the general-purpose graph search algorithms based on best-first search:
  • It uses two data structures, OPEN and CLOSED
  • Initially OPEN contains only the start state and CLOSED is empty
  • Iteratively, one of the states of OPEN is selected, moved to CLOSED and expanded, and its generated successor states are inserted into OPEN
  • The state expanded at each stage is the one that minimizes the evaluation function f(n) = g(n) + h(n) (a minimal sketch is given below)
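To make the scheme above concrete, here is a minimal Python sketch of the best-first loop with OPEN and CLOSED. It is an illustrative sketch, not the implementation used in this work; the weight parameter w and the reopen flag are added here so the same function can also serve as the WA* and the re-opening policies discussed on the following slides.

```python
import heapq

def wa_star(graph, h, start, goal, w=1.0, reopen=True):
    """Best-first search on f(n) = g(n) + w*h(n).

    w = 1.0 gives classical A*; w > 1.0 gives Weighted A* (WA*).
    reopen=True  -> "Always Reopen" (AR): closed states can move back to OPEN.
    reopen=False -> "Never Reopen" (NR): duplicates of closed states are dropped.

    graph: dict mapping each state to a list of (successor, edge_cost) pairs.
    h:     dict mapping each state to its heuristic estimate.
    Returns (solution_cost, path) or (None, None) if the goal is unreachable.
    """
    g = {start: 0.0}                        # best known cost from the start state
    parent = {start: None}
    open_list = [(w * h[start], start)]     # OPEN: priority queue ordered by f
    closed = set()                          # CLOSED: states already expanded

    while open_list:
        _, n = heapq.heappop(open_list)
        if n in closed:                     # stale OPEN entry, skip it
            continue
        if n == goal:                       # goal chosen for expansion: reconstruct the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return g[goal], path[::-1]
        closed.add(n)
        for succ, cost in graph.get(n, []): # expand n: generate its successors
            new_g = g[n] + cost
            if succ not in g or new_g < g[succ]:
                if succ in closed:
                    if not reopen:          # NR: ignore the better path to a closed state
                        continue
                    closed.discard(succ)    # AR: re-open the state
                g[succ] = new_g
                parent[succ] = n
                heapq.heappush(open_list, (new_g + w * h[succ], succ))
    return None, None
```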
Heuristic Search & A* (reminder)
• Usually, the resources available for the search (e.g., time and memory) are limited
• Therefore, optimal search can be infeasible for real-world problems like:
  • Robotics (motion planning, outdoor navigation, etc.)
  • Video games
  • Touring
• ... and for combinatorial games like the famous Rubik's cube
Bounded Search
• When optimally solving a problem is impractical, bounded search can be a practical alternative
• Sub-optimal algorithms sacrifice solution optimality in an attempt to reduce the run-time and memory required for solving
• Different approaches exist:
  • Bounded Suboptimal Search
  • Bounded Cost Search
  • …
  • Anytime Heuristic Search
Bounded Search (cont.)
• When optimally solving a problem is impractical, bounded search can be a practical alternative
• Sub-optimal algorithms sacrifice solution optimality in an attempt to reduce the run-time and memory required for solving
• Different approaches exist:
  • Bounded Suboptimal Search (we will focus on this type of search)
  • Bounded Cost Search
  • …
  • Anytime Heuristic Search
Suboptimal Search
• Find a solution whose cost is within a bounded factor of the optimal solution cost
• What can be done? (one of the approaches)
• Relax some conditions of A* by weighting the terms g(n) and h(n) in the node evaluation function:
  • WA* (Pohl, 1970)
  • Dynamic Weighting (Pohl, 1973)
  • A*ε (Pearl and Kim, 1982)
  • …
WA*: Closer View
• WA* applies a constant inflation of the heuristic function by a fixed weight w ≥ 1, so the cost function is now f(n) = g(n) + w · h(n)
• Using the weighted heuristic accelerates the search, since nodes closer to the goal seem more attractive (a short numeric illustration follows below)
• Let's see an example …
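A small numeric illustration of the effect of the weight; the value w = 3 and the g/h pairs are illustrative numbers (they mirror the two successors of the start state in the upcoming example), not values taken from the slides:

```python
w = 3.0
f = lambda g, h: g + w * h   # WA* evaluation function
print(f(2, 4))   # 14.0 -- small g, large h (a node close to the start)
print(f(4, 3))   # 13.0 -- larger g, smaller h: preferred by WA* once w > 2
```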
WA*: Closer View (cont.)
The example graph: S is the start state and G is the goal state; the edge costs are S-A = 2, S-B = 4, A-B = 1, A-G = 13, B-G = 11; the heuristic values are h(A) = 4, h(B) = 3.
It is clear that S-A-B-G is the shortest path (it costs 14).
WA*: Closer View (cont.)
First, let's find the shortest path using the classical A* …
WA*: Closer View (cont.)
The classical A* trace (steps 1-7):
• S is expanded first; its successors A (g = 2, f = 6) and B (g = 4, f = 7) are generated and placed in OPEN
• A, having the lowest f, is expanded next; note the updated g value of the state B (3 instead of 4, now reached via A); G is generated with g = 15
• B is expanded; note the updated value of the goal state G (g = 14, reached via the path S-A-B)
WA*: Closer View (cont.)
G is chosen for expansion and we have that S-A-B-G is the shortest path (it costs 14); see the runnable check below.
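The classical A* run above can be reproduced with the wa_star sketch from the beginning. The graph below is a reconstruction of the example figure (the edge-cost assignment is inferred from the drawing and from the stated optimal cost of 14); h(S) is not shown on the slide and is set to 0 here, which does not affect the search since S is expanded first anyway.

```python
# Reconstructed example graph: S->A (2), S->B (4), A->B (1), A->G (13), B->G (11)
graph = {
    "S": [("A", 2), ("B", 4)],
    "A": [("B", 1), ("G", 13)],
    "B": [("G", 11)],
    "G": [],
}
h = {"S": 0, "A": 4, "B": 3, "G": 0}   # h(S) assumed; it is irrelevant here

cost, path = wa_star(graph, h, "S", "G", w=1.0)   # classical A*
print(cost, path)   # 14.0 ['S', 'A', 'B', 'G']
```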
WA*: Closer View (cont.)
And now, let's find the shortest path using the WA* algorithm …
WA*: Closer View (cont.)
The WA* trace (steps 1-13):
• S is expanded first; its successors A and B are generated and placed in OPEN
• Note that the state B was chosen for expansion despite its distance from the start state (g = 4) being bigger than that of A (g = 2): the weighted heuristic makes B look more attractive; G is generated via B with g = 15
• Here we have an interesting phenomenon: A is expanded next and regenerates B, but now we have a shorter path to B (g = 3 instead of 4), although B has already been expanded
• This is called Re-Opening: the state B is now removed from CLOSED and inserted back into OPEN
• B is re-expanded; note the updated value of the goal state G (g = 14, reached via the path S-A-B)
WA*: Closer View (cont.)
G is chosen for expansion and we have that S-A-B-G is the shortest path (it costs 14); a runnable check of this WA* run follows below.
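The WA* walkthrough can be reproduced on the same reconstructed graph with the wa_star sketch. The slides do not state which weight was used; any w > 2 makes B look more attractive than A at the start, so w = 3 is assumed here purely for illustration. With re-opening allowed (AR), B is re-opened after the shorter path through A is found, and the optimal solution of cost 14 is still returned.

```python
cost, path = wa_star(graph, h, "S", "G", w=3.0, reopen=True)   # WA*, Always Reopen
print(cost, path)   # 14.0 ['S', 'A', 'B', 'G'] -- B was re-opened along the way
```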
Re-opening
• The phenomenon of moving a state that has already been expanded (and is currently in CLOSED) back to OPEN is called re-opening
• Re-openings are continuously propagated down the search tree, causing degradation of the performance
• Preventing re-opening (disallowing re-expansions) is a possible solution (but sounds bad!)
• It sounds reasonable that any solution found using the "Always Reopen" (AR) policy is at least as good as any solution found using the "Never Reopen" (NR) policy (a small illustration follows below)
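Running the same WA* instance (reconstructed graph, assumed w = 3) with the Never Reopen policy illustrates the intuition stated above on this particular graph: the better path to B is dropped, so the goal keeps the more expensive route found through the first expansion of B. As the next slides show, this intuition does not hold in general.

```python
cost, path = wa_star(graph, h, "S", "G", w=3.0, reopen=False)  # WA*, Never Reopen
print(cost, path)   # 15.0 ['S', 'B', 'G'] -- the improved path to B was discarded
```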
Re-opening (cont.)
• And here is what researchers say (partial):
  • "Dropping duplicates can reduce the search effort needed to find a goal at the cost of finding more expensive solutions." (Ruml, 2013)
  • Algorithms that "... do not re-expand duplicate states, ... this can decrease solution quality, and even quality bounds ... ignoring duplicates allows generating orders of magnitude fewer nodes." (Thayer and Ruml, 2010)
• "Dropping the duplicates" is a possibility, but ... no systematic theoretical study was ever done!
Exploring Re-opening
• A simple experiment we conducted shows the misconception!
• 100 instances of the 15-puzzle were taken
• WA* was run with:
  • the Manhattan-Distance heuristic
  • 40 different weights
• The "Always Reopen" and "Never Reopen" policies were compared
• In the results we can see many runs in which WA* with the NR policy outperforms WA* with the AR policy in terms of solution cost!
The Anomaly
Moreover, we've discovered that the solutions found by WA* under the two policies fall into the four possible cases with a similar distribution.
[Figure: distribution over the four cases; the callout marks "Case 4" as the case reported by researchers.]
The Anomaly (cont.)
Furthermore, we can get each one of the four cases by using close but different weights when solving a single graph …
[Figure: an example graph with start S, goal G, and several intermediate states (heuristic values such as h = 4, h = 3, h = 2 and edge costs including 1, 2, 4, 40) on which nearby weights produce each of the four cases.]
Our Research: Part I
• Studying the existing policies of re-opening:
  • Running WA* on different search domains:
    • Tile puzzles
    • Grid worlds (e.g., maps, mazes, etc.)
    • Rubik's cube
    • Robotic arm movement
  • Looking for the above anomaly in different suboptimal algorithms
  • Trying different heuristic functions
• Theoretical analysis (rather than empirical study)
Our Research: Part II
• Devising new re-opening policies that lie between the two extremes:
  • Always Reopen policy (AR)
  • Never Reopen policy (NR)
Success Measures
• We will evaluate our success by several measures (compared to existing algorithms):
  • Solution cost
  • Numbers of generated and expanded states
  • Number of re-opened states
  • Time and memory consumed
• The bottom line of this research is to provide re-opening policies that reduce the number of re-opened states but not the solution cost (compared to AR)
Literature (partial)
• Likhachev, M., Gordon, G., & Thrun, S. (2004). ARA*: Anytime A* with provable bounds on sub-optimality. In Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference (NIPS-03). MIT Press.
• Pohl, I. (1970). First results on the effect of error in heuristic search. Machine Intelligence, 5, 219-236.
• Pohl, I. (1973). The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem-solving. In Proceedings of the 3rd International Joint Conference on Artificial Intelligence (IJCAI-73), pp. 12-17. Morgan Kaufmann.
• Thayer, J. T., & Ruml, W. (2008). Faster than weighted A*: An optimistic approach to bounded suboptimal search. In Proceedings of the Eighteenth International Conference on Automated Planning and Scheduling (ICAPS-08).
The End (Or The Start?)