
Planning as Heuristic Forward Search


Presentation Transcript


  1. Planning as Heuristic Forward Search Brian C. Williams Sept. 30th, 2002 16.412J/6.834J

  2. Outline • Introduction to FF • FF Search Algorithm • FF Heuristic Fn

  3. Planning as Forward Heuristic Search • Planning can be seen as a state-space search for a path from the initial state to a goal state. • Planning has largely not been concerned with finding optimal solutions, • although there is a heuristic preference for shorter plans. • Planning has largely used incomplete or uninformed search methods: • Breadth-first search • Meta search rules • The size of most state spaces requires informative heuristics to guide the search.

  4. Readings in Planning as Forward Heuristic Search • “Planning as Heuristic Search,” by Blai Bonet and Hector Geffner, Artificial Intelligence Journal, 2001. • “The FF Planning System: Fast Plan Generation Through Heuristic Search,” by Jorg Hoffmann and Bernhard Nebel, Journal of Artificial Intelligence Research, 2001.

  5. Review: Search Strategies • Breadth-first search (Uninformed) • Systematic search of the state space in layers. • A* search (Informed) • Expands the search node with the best estimated cost. • Estimated cost = cost-so-far + optimistic-cost-to-go • Greedy search • Expands the search node closest to the goal according to a heuristic function. • Hill-climbing search • Moves towards the goal by random selection from among the best children. • To apply informed search to planning, we need a heuristic function.
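
To make the distinction concrete, here is a tiny Python sketch (the states and cost values are made up for illustration) showing that A* and greedy search differ only in how they rank frontier nodes:

```python
def astar_priority(g, h):
    # A*: estimated total cost = cost-so-far + optimistic cost-to-go
    return g + h

def greedy_priority(g, h):
    # Greedy: rank nodes purely by the heuristic distance to the goal
    return h

# Toy frontier of (cost-so-far g, heuristic h, state label) triples.
frontier = [(1, 4, "s1"), (6, 1, "s2"), (3, 3, "s3")]

print(min(frontier, key=lambda n: astar_priority(n[0], n[1]))[2])   # s1 (g + h = 5)
print(min(frontier, key=lambda n: greedy_priority(n[0], n[1]))[2])  # s2 (h = 1)
```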

  6. Fast Forward (FF) • Forward-chaining heuristic search planner • Basic principle: Hill-climb through the space of problem states, starting at the initial state. • Each child state applies a single plan operator. • Always moves to the first child state found that is closer to the goal. • Record the transitions applied along the path. • The transitions leading to the goal constitute a plan.

  7. Outline • Introduction to FF • FF Search Algorithm • FF Heuristic Fn

  8. Planning Problem and State Space • A planning problem is a tuple <P, A, I, G>: • Propositions P, • Ground actions A, which are instantiated operators, • Initial state I, a subset of P, and • Goal state G, a subset of P. • The state space of a problem consists of all subsets of the propositions P. • A transition between two states is any valid application of an action, that is, an action whose preconditions are satisfied.
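
As a purely illustrative Python sketch of these definitions, states can be represented as sets of propositions and ground actions as precondition/add/delete triples; the Action class and the Move example below are hypothetical, not FF's internal representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset       # preconditions that must hold for the action to apply
    add: frozenset       # propositions made true by the action
    delete: frozenset    # propositions made false by the action

def applicable(state, action):
    # A transition is valid only if the action's preconditions are satisfied.
    return action.pre <= state

def apply_action(state, action):
    # Successor state: remove the delete effects, assert the add effects.
    return (state - action.delete) | action.add

# Hypothetical single-action example: moving from room A to room B.
move = Action("Move(A,B)",
              pre=frozenset({"In(A)"}),
              add=frozenset({"In(B)"}),
              delete=frozenset({"In(A)"}))

init = frozenset({"In(A)", "Closed"})
goal = frozenset({"In(B)"})

if applicable(init, move):
    successor = apply_action(init, move)
    print(successor)            # frozenset({'In(B)', 'Closed'})
    print(goal <= successor)    # True: all goal propositions hold
```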

  9. FF Search Strategy FF uses a strategy called enforced hill-climbing: • Obtain heuristic estimate of the value of the current state. • Find action(s) transitioning to a better state. • Move to the better state. • Append actions to plan head. • Never backtrack over any choice.
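
A minimal Python sketch of this loop; h and find_better_state are assumed helpers standing in for FF's heuristic and for the breadth-first lookahead described on the next slide:

```python
def enforced_hill_climbing(init, goal, h, find_better_state):
    """Sketch of FF's outer loop: commit to each improvement, never backtrack."""
    state, plan = frozenset(init), []
    best = h(state)
    while not goal <= state:
        result = find_better_state(state, best)   # breadth-first lookahead (next slide)
        if result is None:
            return None        # stuck; FF would fall back to best-first search
        state, connecting_actions, best = result
        plan.extend(connecting_actions)           # append the actions to the plan head
    return plan
```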

  10. [Diagram: an example search from Init using actions A and B over states S1–S6, with heuristic values ordered h(S1) < h(S4) < h(init) < h(S2) < h(S3) < h(S5) = h(S6); the plan head is shown growing from B to A, B.]

  11. Finding a Better State: Plateaus • Perform breadth-first search from the current state, • over states reachable by action applications, • stopping as soon as a strictly better state is found. [Diagram: a plateau of states S6–S10 with equal heuristic values, h(S6) = h(S7) = … = h(S10), searched via actions C and D until a strictly better state is reached.]
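
A sketch of this plateau-escaping breadth-first search; successors and h are hypothetical helpers (the successor generator and the heuristic), passed in here only to keep the sketch self-contained:

```python
from collections import deque

def find_better_state(state, best_h, successors, h):
    """Breadth-first lookahead sketch: explore states reachable by action
    applications, stopping at the first state strictly better than best_h.
    `successors(s)` is assumed to yield (action, next_state) pairs."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, path = frontier.popleft()
        for action, nxt in successors(current):
            if nxt in seen:
                continue
            seen.add(nxt)
            if h(nxt) < best_h:                      # strictly better: plateau escaped
                return nxt, path + [action], h(nxt)
            frontier.append((nxt, path + [action]))
    return None                                      # no better state reachable
```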

  12. Enforced Hill-Climbing (cont.) • The success of this strategy depends on how informative the heuristic is. • FF uses a heuristic found to be informative in a large class of benchmark planning domains. • The strategy is not complete. • Never backtracking means that some parts of the search space are never explored. • If FF fails to find a solution using this strategy, it switches to standard best-first search • (e.g., greedy or A* search).

  13. Outline • Introduction to FF • FF Search Algorithm • FF Heuristic Fn

  14. FF’s Heuristic Estimate • The value of a state is a measure of how close it is to a goal state. • This cannot be determined exactly (it is too hard in general), but it can be approximated. • One way of approximating is to use the relaxed problem. • Relaxation is achieved by ignoring the negative (delete) effects of the actions. • The relaxed action set A' is defined by: A' = { <pre(a), add(a), ∅> | a ∈ A }
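
In code, the relaxation amounts to emptying each action's delete list; a sketch using the same illustrative Action representation as before:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

def relax(actions):
    # A' = { <pre(a), add(a), empty> | a in A }: keep preconditions and
    # positive effects, ignore every negative (delete) effect.
    return [Action(a.name, a.pre, a.add, frozenset()) for a in actions]
```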

  15. Relaxed Distance Estimate • Current state: In(A), Closed; Goal: In(B) • Layers correspond to successive time points. • The number of layers indicates the minimum time needed to achieve the goals. [Diagram: a three-layer relaxed planning graph (Layers 1–3) over the propositions In(A), Closed, Opened, In(B), connected by the actions Open, Close, Move and by noops.]

  16. Building the Relaxed Plan Graph • Start at the initial state. • Repeatedly apply all relaxed actions whose preconditions are satisfied. • Their (positive) effects are asserted at the next layer. • If no new propositions can be added and the goals are still not all present in the final graph layer, then the problem is unsolvable.
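
A sketch of this layer-building loop, reusing the illustrative Action representation from the earlier slides (FF's real implementation uses a more compact graph data structure):

```python
def build_relaxed_graph(state, goal, relaxed_actions):
    """Sketch: build proposition layers P0, P1, ... by applying all relaxed
    actions whose preconditions hold, until the goals appear or no new
    propositions can be added (in which case the problem is unsolvable)."""
    layers = [frozenset(state)]
    applied = [[]]                                  # actions applied between layers
    while not goal <= layers[-1]:
        current = layers[-1]
        usable = [a for a in relaxed_actions if a.pre <= current]
        nxt = current | frozenset(p for a in usable for p in a.add)
        if nxt == current:
            return None                             # fixed point reached, goals missing
        layers.append(nxt)
        applied.append(usable)
    return layers, applied
```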

  17. Extracting a Relaxed Solution • When a layer containing all of the goals is reached, FF searches backwards for a plan. • The earliest possible achiever is always used for any goal. • This maximizes the possibility of exploiting actions already in the relaxed plan. • The relaxed plan may contain many actions happening concurrently at a layer. • The number of actions in the relaxed plan is an estimate of the true cost of achieving the goals.
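
A simplified sketch of the backward extraction; it works over the layers and applied lists produced by the previous sketch and ignores some of FF's refinements (such as preferring actions already selected and treating noops explicitly):

```python
def extract_relaxed_plan(layers, applied, goal):
    """Backward-extraction sketch: each open goal is achieved by its earliest
    possible achiever, whose preconditions then become new subgoals."""
    chosen = set()                                   # (layer index, action name) pairs
    agenda = list(goal)
    while agenda:
        g = agenda.pop()
        if g in layers[0]:
            continue                                 # already true in the evaluated state
        layer = next(i for i in range(1, len(layers)) if g in layers[i])
        achiever = next(a for a in applied[layer] if g in a.add)   # earliest achiever
        if (layer, achiever.name) not in chosen:
            chosen.add((layer, achiever.name))
            agenda.extend(achiever.pre)              # subgoal the achiever's preconditions
    return len(chosen)                               # relaxed plan length = heuristic value
```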

  18. How FF Uses the Heuristic • FF uses the heuristic to estimate how close each state is to a goal state • any state satisfying the goal propositions. • The actions in the relaxed plan are used as a guide to which actions to explore when extending the plan. • All actions in the relaxed plan at layer i that achieve at least one of the goals required at layer i+1 are considered helpful. • FF restricts attention to the helpful actions when searching forward from a state.
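
A one-line sketch of the helpful-action filter; layer_actions and next_layer_goals are hypothetical names for the relaxed-plan actions considered at layer i and the goals the relaxed plan requires at layer i+1:

```python
def helpful_actions(layer_actions, next_layer_goals):
    """Helpful actions: those whose add effects achieve at least one of the
    goals required at the next layer of the relaxed plan."""
    return [a for a in layer_actions if a.add & next_layer_goals]
```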

  19. Properties of the Heuristic • The relaxed plan that is extracted is not guaranteed to be the optimal relaxed plan, • so the heuristic is not admissible, • and FF can produce non-optimal solutions. • Focusing only on helpful actions is not completeness preserving. • Enforced hill-climbing is not completeness preserving.

  20. Getting Out of Dead-Ends • Because FF does not backtrack, it can get stuck in dead-ends. • This arises when an action cannot be reversed: having entered a bad state, there is no way to improve. • When no search progress can be made, FF switches to best-first search from the initial state. • Detecting a dead-end can be expensive if the plateau is large.

  21. Fast Forward (FF) • Forward-chaining heuristic search planner • Basic principle: Hill-climb through the space of problem states, starting at the initial state. • Each child state applies a single plan operator. • Always moves to the first child state found that is closer to the goal. • Record the transitions applied along the path. • The transitions leading to the goal constitute a plan.

  22. Other Distance Estimates • Distance to the goal can be estimated without building a relaxed reachability graph and extracting a relaxed plan. • See the HSP paper: • An alternative is to estimate the cost of achieving a goal as the cost of achieving the preconditions of a suitable action, plus one.
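
A rough Python sketch of this style of estimate (modeled on HSP's additive heuristic as described above, not the paper's exact algorithm): proposition costs are computed to a fixed point, and the cost of the goal set is the sum of its propositions' costs.

```python
import math

def additive_cost(state, goal, actions):
    """Sketch: a proposition costs 0 if it already holds, otherwise 1 plus the
    summed cost of the preconditions of the cheapest action achieving it;
    the goal estimate is the sum over the goal propositions."""
    cost = {p: 0.0 for p in state}
    changed = True
    while changed:
        changed = False
        for a in actions:
            if any(p not in cost for p in a.pre):
                continue                              # preconditions not yet reachable
            c = 1.0 + sum(cost[p] for p in a.pre)
            for p in a.add:
                if c < cost.get(p, math.inf):
                    cost[p] = c
                    changed = True
    return sum(cost.get(g, math.inf) for g in goal)
```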
