Chapter 10 (07): AI Planning. Alireza Yousefpour (yousefpour@shomal.ac.ir)
The planning problem • Inputs: 1. A description of the world state 2. A description of the goal state 3. A set of actions • Output: a sequence of actions that, when applied to the initial state, transforms the world into the goal state
An example – Blocks world • Blocks sit on a table • They can be stacked, but only one block may sit directly on top of another • A robot arm can pick up a block and move it to another position: on the table, or on another block • The arm can pick up only one block at a time • It cannot pick up a block that has another block on top of it
STRIPS Representation • State is a conjunction of positive ground literals On(B, Table) Λ Clear (A) • Goal is a conjunction of positive ground literals Clear(A) Λ On(A,B) Λ On(B, Table) • STRIPS Operators • Conjunction of positive literals as preconditions • Conjunction of positive and negative literals as effects
More on action schema • Example: Move(b, x, y) • Precondition: Block(b) Λ Clear(b) Λ Clear(y) Λ On(b,x) Λ (b ≠ x) Λ (b ≠ y) Λ (y ≠ x) • Effect: ¬Clear(y) Λ ¬On(b,x) Λ Clear(x) Λ On(b,y) • The negative effects form the delete list; the positive effects form the add list • An action is applicable in any state that satisfies its precondition
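As a concrete illustration of the schema above, here is a minimal Python sketch of how a ground STRIPS action could be represented; the `Action` dataclass and its field names are illustrative choices, not part of the original STRIPS formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A ground STRIPS action: positive preconditions plus add/delete lists."""
    name: str
    precond: frozenset   # positive literals that must hold
    add: frozenset       # positive effects (add list)
    delete: frozenset    # negated effects (delete list)

# Ground instance of Move(b, x, y) with b=A, x=C, y=B
move_A_C_B = Action(
    name="Move(A,C,B)",
    precond=frozenset({"Block(A)", "Clear(A)", "Clear(B)", "On(A,C)"}),
    add=frozenset({"On(A,B)", "Clear(C)"}),
    delete=frozenset({"Clear(B)", "On(A,C)"}),
)
```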
STRIPS assumptions • Closed-world assumption: unmentioned literals are false (no need to list them explicitly) • STRIPS assumption: every literal not mentioned in the effect of an action remains unchanged • Atomic time: actions are instantaneous
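Under these two assumptions, checking applicability and computing the successor state reduce to simple set operations. A minimal sketch (literals are plain strings; the helper names are hypothetical):

```python
def applicable(state, precond):
    """Closed-world assumption: a positive literal holds iff it is in the state set."""
    return precond <= state

def apply_action(state, add, delete):
    """STRIPS assumption: every literal not mentioned in the effects is unchanged."""
    return (state - delete) | add

s0 = frozenset({"On(A,C)", "On(C,Table)", "On(B,Table)", "Clear(A)", "Clear(B)"})
# Effect of Move(A, C, B):
s1 = apply_action(s0,
                  add=frozenset({"On(A,B)", "Clear(C)"}),
                  delete=frozenset({"Clear(B)", "On(A,C)"}))
```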
STRIPS expressiveness • Literals are function-free: terms like Move(Block(x), y, z) are not allowed, so operators can be propositionalized; with 3 blocks plus the table, Move(b,x,y) expands to 3 × 4 × 4 = 48 purely propositional actions • No disjunctive goals: On(B, Table) V On(B, C) cannot be expressed • No conditional effects: On(B, Table) if ¬On(A, Table) cannot be expressed
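The count of 48 comes from 3 choices of block for b and 4 objects (3 blocks plus the table) for each of x and y; the inequality preconditions simply make the ill-formed instances inapplicable. A quick sketch of the enumeration:

```python
from itertools import product

blocks = ["A", "B", "C"]
objects = blocks + ["Table"]

# One propositional action per choice of b, x, y.
ground_moves = [f"Move({b},{x},{y})" for b, x, y in product(blocks, objects, objects)]
print(len(ground_moves))   # 3 * 4 * 4 = 48
```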
Planning algorithms • Planning algorithms are search procedures • Which space do we search? • State-space search: each node is a state of the world; a plan is a path through the states • Plan-space search: each node is a set of partially instantiated operators plus a set of constraints; a plan is a node
State-space search • Search the space of situations, connected by operator instances • The sequence of operator instances is the plan • We have both preconditions and effects available for each operator, so we can search in either direction: forward vs. backward
Planning: Search Space • [Diagram: the blocks-world state space for three blocks A, B, C; each node is a block configuration, connected by move operations]
Forward state-space search (1) • Progression • Initial state: the initial state of the problem • Actions: an action can be applied to a state if all of its preconditions are satisfied; the successor state is built by updating the current state with the add and delete lists • Goal test: the state satisfies the goal of the problem
Progression (forward search) ProgWS(world-state, goal-list, PossibleActions, path)
• If world-state satisfies all goals in goal-list, then return path
• Else Act = choose an action whose precondition is true in world-state
• If no such action exists, then fail
• Else return ProgWS(result(Act, world-state), goal-list, PossibleActions, concatenate(path, Act))
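A minimal Python rendering of ProgWS, assuming states and goals are frozensets of ground literals and each action carries name/precond/add/delete fields; the Action namedtuple and the cycle check are additions, and the pseudocode's nondeterministic "choose" becomes an explicit backtracking loop.

```python
from collections import namedtuple

Action = namedtuple("Action", "name precond add delete")  # sets of ground literals

def prog_ws(state, goals, actions, path=(), visited=frozenset()):
    """Forward (progression) search; call with frozenset states and goals."""
    if goals <= state:
        return list(path)
    if state in visited:                       # avoid revisiting states
        return None
    for act in actions:
        if act.precond <= state:               # applicable?
            nxt = frozenset((state - act.delete) | act.add)
            plan = prog_ws(nxt, goals, actions,
                           path + (act.name,), visited | {state})
            if plan is not None:
                return plan
    return None                                # fail
```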
Forward state-space search (2) • Advantages • No functions in the goal declarations, so the search space is finite • Sound • Complete (if the underlying search algorithm is complete) • Limitations • Considers many irrelevant actions, so it is not efficient • Needs a heuristic or pruning procedure
Backward state-space search (1) • Regression • Initial search state: the goal state of the problem • Actions: choose an action that is relevant (has one of the goal literals in its effect set) and consistent (does not negate another goal literal) • Construct the new search state: remove all positive effects of the action that appear in the goal, and add each of its preconditions unless it already appears • Goal test: the search state is satisfied by the initial world state
Regression (backward search) RegWS(initial-state, current-goals, PossibleActions, path)
• If initial-state satisfies all of current-goals, then return path
• Else Act = choose an action whose effect matches one of current-goals
• If no such action exists, or the effects of Act contradict some of current-goals, then fail
• G = (current-goals – goals-added-by(Act)) + preconds(Act)
• If G contains all of current-goals, then fail
• Return RegWS(initial-state, G, PossibleActions, concatenate(Act, path))
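The same style of sketch for RegWS: relevance and consistency are the set tests from the slide above, and the "no progress" test mirrors the "G contains all of current-goals" failure case (same illustrative Action namedtuple as in the forward-search sketch).

```python
from collections import namedtuple

Action = namedtuple("Action", "name precond add delete")

def reg_ws(init, goals, actions, path=()):
    """Backward (regression) search from the goals toward the initial state."""
    if goals <= init:
        return list(path)
    for act in actions:
        relevant = bool(act.add & goals)        # adds at least one current goal
        consistent = not (act.delete & goals)   # does not negate a current goal
        if relevant and consistent:
            new_goals = frozenset((goals - act.add) | act.precond)
            if new_goals >= goals:              # no progress: would loop forever
                continue
            plan = reg_ws(init, new_goals, actions, (act.name,) + path)
            if plan is not None:
                return plan
    return None                                 # fail
```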
Backward state-space search (2) • Advantages • Considers only relevant actions, so the branching factor is much smaller • Limitations • Still needs a heuristic to be efficient
Comparing ProgWS and RegWS • Both algorithms are • sound (they always return a valid plan) • complete (if a valid plan exists they will find one) • Running time is O(b^n), where b = branching factor and n = number of “choose” steps
Blocks world: STRIPS operators (ae = arm empty)
• UnStack(x,y): Pre: on(x,y), clear(x), ae; Del: on(x,y), ae; Add: holding(x), clear(y)
• Stack(x,y): Pre: holding(x), clear(y); Del: holding(x), clear(y); Add: on(x,y), ae
• Pickup(x): Pre: on(x,Table), clear(x), ae; Del: on(x,Table), ae; Add: holding(x)
• Putdown(x): Pre: holding(x); Del: holding(x); Add: on(x,Table), ae
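A sketch of these four operators as ground Python actions, generating one instance per block or ordered pair of blocks; "ae" stands for arm empty, and the Action namedtuple is the same illustrative structure used in the earlier sketches.

```python
from collections import namedtuple

Action = namedtuple("Action", "name precond add delete")

def blocks_world_actions(blocks):
    acts = []
    for x in blocks:
        acts.append(Action(f"Pickup({x})",
                           {f"on({x},Table)", f"clear({x})", "ae"},
                           {f"holding({x})"},
                           {f"on({x},Table)", "ae"}))
        acts.append(Action(f"Putdown({x})",
                           {f"holding({x})"},
                           {f"on({x},Table)", "ae"},
                           {f"holding({x})"}))
        for y in blocks:
            if x != y:
                acts.append(Action(f"UnStack({x},{y})",
                                   {f"on({x},{y})", f"clear({x})", "ae"},
                                   {f"holding({x})", f"clear({y})"},
                                   {f"on({x},{y})", "ae"}))
                acts.append(Action(f"Stack({x},{y})",
                                   {f"holding({x})", f"clear({y})"},
                                   {f"on({x},{y})", "ae"},
                                   {f"holding({x})", f"clear({y})"}))
    return acts

actions = blocks_world_actions(["A", "B", "C", "D"])
```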
STRIPS Planning
• Current state: on(A,table), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D), ae
• Goal: on(A,C), on(D,A) [figure: initial configuration with C on B, and the goal tower D on A on C]
STRIPS Planning (step 1: work on on(A,C))
• Goal: on(A,C), on(D,A); plan so far: empty
• Goal stack (bottom to top): on(A,C); Stack(A,C); holding(A), clear(C); holding(A); Pickup(A); on(A,Table), clear(A), ae
• Current state: on(A,table), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D), ae
STRIPS Planning (step 2: apply Pickup(A))
• Goal stack: on(A,C); Stack(A,C); holding(A), clear(C); holding(A); Pickup(A)
• Pickup(x): Pre: on(x,Table), clear(x), ae; Del: on(x,Table), ae; Add: holding(x)
• State before: on(A,table), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D), ae
• State after Pickup(A): holding(A), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D)
STRIPS Planning (step 3: apply Stack(A,C))
• Plan so far: Pickup(A); goal stack: on(A,C); Stack(A,C)
• Stack(x,y): Pre: holding(x), clear(y); Del: holding(x), clear(y); Add: on(x,y), ae
• State before: holding(A), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D)
• State after Stack(A,C): on(A,C), on(C,B), on(B,table), on(D,table), clear(A), clear(D), ae
STRIPS Planning (step 4: work on on(D,A), apply Pickup(D))
• Plan so far: Pickup(A), Stack(A,C)
• Goal stack (bottom to top): on(D,A); Stack(D,A); holding(D), clear(A); holding(D); Pickup(D); on(D,Table), clear(D), ae
• Current state: on(A,C), on(C,B), on(B,table), on(D,table), clear(A), clear(D), ae
• State after Pickup(D): on(A,C), on(C,B), on(B,table), holding(D), clear(A), clear(D)
STRIPS Planning (step 5: apply Stack(D,A))
• Plan so far: Pickup(A), Stack(A,C), Pickup(D); goal stack: on(D,A); Stack(D,A)
• State before: on(A,C), on(C,B), on(B,table), holding(D), clear(A), clear(D)
• State after Stack(D,A): on(A,C), on(C,B), on(B,table), on(D,A), clear(D), ae. Both goals now hold; the final plan is Pickup(A), Stack(A,C), Pickup(D), Stack(D,A).
STRIPS Planning: Getting it Wrong!
• This time, work on on(D,A) first
• Goal stack (bottom to top): on(D,A); Stack(D,A); holding(D), clear(A); holding(D); Pickup(D); on(D,Table), clear(D), ae
• Current state: on(A,table), on(C,B), on(B,table), on(D,table), clear(A), clear(C), clear(D), ae
• State after Pickup(D): on(A,table), on(C,B), on(B,table), holding(D), clear(A), clear(C), clear(D)
STRIPS Planning: Getting it Wrong!
• Plan so far: Pickup(D); goal stack: on(D,A); Stack(D,A)
• State before: on(A,table), on(C,B), on(B,table), holding(D), clear(A), clear(C), clear(D)
• State after Stack(D,A): on(A,table), on(C,B), on(B,table), on(D,A), clear(C), clear(D), ae
STRIPS Planning: Getting it Wrong!
• Now what? We chose the wrong goal to work on first
• A is no longer clear: stacking D on A destroys the preconditions of the actions needed to achieve on(A,C)
• We either have to backtrack, or undo the previous actions
• Current state: on(A,table), on(C,B), on(B,table), on(D,A), clear(C), clear(D), ae
STRIPS planning (goal-stack planning)
• Works on one subgoal at a time
• Insists on completely achieving that subgoal before considering other subgoals
• May have to backtrack: if it chooses the wrong order in which to work on the subgoals, or the wrong action to achieve a subgoal
• Searches backwards from the goal: the goal guides the choice of actions
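To make the push/pop mechanics concrete, here is a deliberately simplified goal-stack loop: no backtracking and no re-checking of earlier subgoals, so it can go wrong in exactly the way shown in the trace above. Names and structure are illustrative, not a faithful STRIPS implementation.

```python
from collections import namedtuple

Action = namedtuple("Action", "name precond add delete")

def goal_stack_plan(state, goals, actions):
    """Illustrative only: works on one subgoal at a time and never backtracks,
    so a bad subgoal order can clobber goals achieved earlier."""
    plan, stack = [], list(goals)          # end of the list = top of the stack
    state = set(state)
    while stack:
        top = stack.pop()
        if isinstance(top, Action):        # preconditions handled, apply the action
            state = (state - top.delete) | top.add
            plan.append(top.name)
        elif top in state:                 # literal already satisfied
            continue
        else:                              # take the first action that adds the literal
            act = next(a for a in actions if top in a.add)
            stack.append(act)              # applied after its preconditions are achieved
            stack.extend(act.precond)
    return plan
```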
Limitation of state-space search
• This is linear planning, or total-order planning
• Example: initial state: all the blocks are clear and on the table; goal: On(A,B) Λ On(B,C)
• If the search achieves On(A,B) first, it then has to undo it in order to achieve On(B,C)
• In the worst case it has to try all possible orderings (permutations) of the subgoals
Search through the space of plans
• Nodes are partial plans, links are plan-refinement operations, and a solution is a node (not a path)
• This can be powerful if the plan representation and the refinement operations give a better-structured search space
• POP creates partial-order plans following a “least commitment” principle
Total Order Plans vs. Partial Order Plans
• [Diagram: the six total-order plans for the socks-and-shoes example (every interleaving of Right Sock before Right Shoe and Left Sock before Left Shoe, from Start to Finish) next to the single partial-order plan, in which the two sock/shoe pairs are left unordered with respect to each other]
Partial-order plans in POP
• Plan = (A, O, L), where A is the set of actions in the plan, O is a set of temporal orderings between actions, and L is a set of causal links linking actions via a literal
• A causal link Ap --Q--> Ac means that Ac has a precondition Q that is established in the plan by Ap
• Example: move-a-from-b-to-table --(clear b)--> move-c-from-d-to-b
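A sketch of the (A, O, L) triple as Python data, using the causal-link example from this slide; the class names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalLink:
    producer: str   # Ap: the action that establishes Q
    literal: str    # Q
    consumer: str   # Ac: the action whose precondition Q is

@dataclass
class PartialPlan:
    actions: set      # A: actions in the plan
    orderings: set    # O: pairs (before, after)
    links: set        # L: causal links

plan = PartialPlan(
    actions={"move-a-from-b-to-table", "move-c-from-d-to-b"},
    orderings={("move-a-from-b-to-table", "move-c-from-d-to-b")},
    links={CausalLink("move-a-from-b-to-table", "(clear b)", "move-c-from-d-to-b")},
)
```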
Threats to causal links
• A step At threatens the causal link Ap --Q--> Ac if:
• At has ¬Q as an effect, and
• At could come between Ap and Ac, i.e., O is consistent with Ap < At < Ac
Threat Removal
• Threats must be removed to prevent the plan from failing
• Demotion adds the constraint At < Ap, i.e., pushes the clobberer before the producer
• Promotion adds the constraint Ac < At, i.e., pushes the clobberer after the consumer
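Demotion and promotion each just add one ordering constraint and then check that the constraints are still consistent. A small sketch, where consistency is a plain acyclicity test over the (before, after) pairs; the function names are illustrative.

```python
from collections import defaultdict

def consistent(orderings):
    """True if the set of (before, after) pairs contains no cycle."""
    graph = defaultdict(set)
    for a, b in orderings:
        graph[a].add(b)
    def reaches(node, target, seen):
        return any(nxt == target or (nxt not in seen and reaches(nxt, target, seen | {nxt}))
                   for nxt in graph[node])
    return not any(reaches(a, a, frozenset()) for a in list(graph))

def resolve_threat(orderings, a_t, a_p, a_c):
    """Try demotion (At < Ap), then promotion (Ac < At); return the extended
    ordering set, or None if neither choice is consistent."""
    for extra in ((a_t, a_p), (a_c, a_t)):
        candidate = orderings | {extra}
        if consistent(candidate):
            return candidate
    return None
```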
Initial (Null) Plan
• The initial plan has A = {A0, A∞}, O = {A0 < A∞}, L = {}
• A0 (Start) has no preconditions and has all facts of the initial state as effects
• A∞ (Finish) has the goal conditions as preconditions and no effects
POP algorithm
POP((A, O, L), agenda, PossibleActions):
• If the agenda is empty, return (A, O, L)
• Pick an open precondition (Q, Ac) from the agenda
• Ad = choose an action (new or already in A) that adds Q; if no such action exists, fail
• Add the causal link Ad --Q--> Ac to L and the ordering Ad < Ac to O; if Ad is new, add it to A
• Remove (Q, Ac) from the agenda; if Ad is new, for each of its preconditions P add (P, Ad) to the agenda
• For every action At that threatens any causal link Ap --Q--> Ac: choose to add At < Ap or Ac < At to O; if neither choice is consistent, fail
• Recurse: POP((A, O, L), agenda, PossibleActions)
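A structural sketch of this loop in Python, assuming the illustrative Action namedtuple used earlier and representing causal links as (Ad, Q, Ac) triples. Both "choose" steps are real backtrack points in a full planner; this sketch just takes the first candidate and resolves every threat by demotion, so it outlines the control flow rather than being a complete POP implementation.

```python
from collections import namedtuple

Action = namedtuple("Action", "name precond add delete")  # sets of ground literals

def pop(A, O, L, agenda, possible_actions):
    """agenda: set of (Q, Ac) open preconditions; A, O, L: sets as on the slide."""
    if not agenda:
        return A, O, L
    Q, Ac = agenda.pop()                                  # an open precondition
    candidates = [a for a in list(A) + possible_actions if Q in a.add]
    if not candidates:
        return None                                       # fail
    Ad = candidates[0]                                    # choice point 1: the achiever
    L = L | {(Ad, Q, Ac)}                                 # causal link Ad --Q--> Ac
    O = O | {(Ad, Ac)}                                    # ordering Ad < Ac
    if Ad not in A:
        A = A | {Ad}
        agenda |= {(P, Ad) for P in Ad.precond}
    for At in A:                                          # threat resolution
        for (Ap, lit, Acons) in L:
            if At not in (Ap, Acons) and lit in At.delete:
                O = O | {(At, Ap)}                        # choice point 2: demote
                # (a full planner also tries promotion and checks consistency)
    return pop(A, O, L, agenda, possible_actions)
```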
Sussman Anomaly
• Initial plan: A0 (Start) with effects (on C A), (on-table A), (on-table B), (clear C), (clear B); A∞ (Finish) with preconditions (on A B), (on B C), (on-table C)
Work on open precondition (on B C)
• Add A1: move B from Table to C
• Preconditions: (clear B), (clear C), (on-table B)
• Effects: (on B C), ¬(on-table B), ¬(clear C)
Work on open precondition (on A B)
• Add A2: move A from Table to B
• Preconditions: (clear A), (clear B), (on-table A)
• Effects: (on A B), ¬(on-table A), ¬(clear B)
Work on open precondition (on-table C)
• Add A3: move C from A to Table
• Preconditions: (on C A), (clear C)
• Effects: (on-table C), ¬(on C A), (clear A)
Analysis • POP can be much faster than the state-space planners because it doesn’t need to backtrack over goal orderings (so less branching is required). • Although it is more expensive per node, and makes more choices than RegWS, the reduction in branching factor makes it faster, i.e., n is larger but b is smaller!
More analysis
• Does POP make the least possible amount of commitment?
• Lifted POP: uses operator schemata instead of ground actions; unification is then required
POP in the Blocks world
• PutOn(x,y): Pre: Cl(x), Cl(y), On(x,z); Effects: On(x,y), Cl(z), ¬Cl(y), ¬On(x,z)
• PutOnTable(x): Pre: On(x,z), Cl(x); Effects: On(x,Table), Cl(z), ¬On(x,z)
Example 2: shopping domain
• Operators: Go(x,y): Pre: At(x); Effects: At(y), ¬At(x). Buy(y,x): Pre: At(x), Sells(x,y); Effect: Have(y)
• A0 (Start) effects: At(Home), Sells(SM,Banana), Sells(SM,Milk), Sells(HWS,Drill)
• A∞ (Finish) preconditions: Have(Drill), Have(Milk), Have(Banana), At(Home)
POP Example
• [Diagram: the initial null plan, containing only the Start and Finish actions]