Situation Calculus for Action Descriptions We talked about STRIPS representations for actions. Another common representation is the Situation Calculus, which adds expressive power by allowing full FOL expressions (with quantifiers) for preconditions and effects.
Situation Calculus Example: Corralling Sheep Constants: B, S1, S2, C, F (same as before) Relations: Sheep(x) – same as before Robot(x) – same as before At(x, y, s) – true if object x is in location y in situation s Holding(x, y, s) – true if object x is holding object y in situation s Relations that can change over time are called fluents in Situation Calculus. They get an extra argument, which will specify the situation.
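To make the fluent convention concrete, here is a minimal sketch in Python that encodes ground fluents as tuples whose last element is a situation term (S0, or a nested RESULT term). The tuple encoding is an illustrative assumption for these notes, not part of the formal language.

    # Situations are terms: the constant S0, or RESULT(s, a) built from a
    # previous situation and an action term.
    S0 = "S0"
    grab_s1 = ("Grab", "s1", "F")                 # an action term
    s1_sit = ("RESULT", S0, grab_s1)              # the situation after Grab

    # Fluents carry the situation as an extra, final argument.
    facts = {
        ("At", "s1", "F", S0),
        ("At", "B", "F", S0),
        ("Holding", "B", "s1", s1_sit),
    }
    print(("At", "s1", "F", S0) in facts)         # True
    print(("Holding", "B", "s1", S0) in facts)    # False: only holds after Grab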
Situation Calculus Actions Actions in Situation Calculus have 3 parts: • Action functions, e.g. Grab(sheep, location), which just specify the name and set of arguments for an action. Syntactically, action functions return an “action object” representing the appropriate action. • A possibility axiom, a FOL sentence that specifies the situations in which an action is possible (like preconditions in STRIPS). • Successor state axioms, FOL sentences that say how relations change when actions happen (similar to effects in STRIPS).
Example Situation Calculus Actions • Action function: a = Grab(sh, loc) This specifies the name of the action (Grab), and the set of arguments (sh and loc). Notice that the action function returns an object, which I’m calling “a”, that represents an action.
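As a quick illustration, here is one way to realize action functions as term constructors in Python, using namedtuples. This encoding is an assumption for the sketches in these notes, not the slides’ formal machinery.

    from collections import namedtuple

    # Action functions just build "action objects" naming the action and its
    # arguments; they have no behavior of their own.
    Grab = namedtuple("Grab", ["sh", "loc"])
    Ungrab = namedtuple("Ungrab", ["sh", "loc"])

    a = Grab("s1", "F")      # the action object the slide calls "a"
    print(a)                 # Grab(sh='s1', loc='F')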
Example Situation Calculus Actions • Action function: a = Grab(sh, loc) • Possibility Axiom: This uses the special POSS(action, situation) relation, and has the typical format: ∀s, args . preconditions(s) ⇒ POSS(action-function(args), s) For Grab, here is how it would look: ∀s, sh, loc . [ Sheep(sh) ∧ At(sh, loc, s) ∧ At(B, loc, s) ∧ (∀t . Sheep(t) ⇒ ¬Holding(B, t, s)) ] ⇒ POSS(Grab(sh, loc), s)
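For a finite world we can evaluate this axiom directly. Below is a minimal sketch, assuming a situation is encoded as a set of ground fluents (the situation argument is left implicit); poss_grab and the object names are illustrative, not a standard API.

    def poss_grab(state, sh, loc):
        """POSS(Grab(sh, loc), s): sh is a sheep at loc, B is at loc,
        and B is not already holding any sheep."""
        sheep = {f[1] for f in state if f[0] == "Sheep"}
        return (sh in sheep
                and ("At", sh, loc) in state
                and ("At", "B", loc) in state
                and all(("Holding", "B", t) not in state for t in sheep))

    s0 = {("Sheep", "s1"), ("Sheep", "s2"),
          ("At", "s1", "F"), ("At", "s2", "F"), ("At", "B", "F")}
    print(poss_grab(s0, "s1", "F"))   # True: all preconditions hold
    print(poss_grab(s0, "s1", "C"))   # False: neither s1 nor B is at C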
Quiz: Possibility Axiom Write a possibility axiom for action function Left.
Answer: Possibility Axiom Write a possibility axiom for action function Left. ∀s, args . preconditions(s) ⇒ POSS(action-function(args), s) ∀s . At(B, C, s) ⇒ POSS(Left, s)
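In the same sketch style as poss_grab above (a situation as a set of ground fluents), the Left axiom becomes a one-line check:

    def poss_left(state):
        """POSS(Left, s): the robot is at the corral."""
        return ("At", "B", "C") in state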
Example Situation Calculus Actions • Action function: a = Grab(sh, loc) • Possibility Axiom • Successor state axioms: This uses the special function RESULT: situation’ = RESULT(situation, action) It has the typical format: ∀s, a . POSS(a, s) ⇒ [ fluent-true-in-situation(RESULT(s, a)) ⇔ (a makes fluent true) ∨ (fluent was true ∧ a didn’t undo it) ] For Grab, we need successor state axioms for Holding and At, since Grab can change both of these. Here is how it would look for Holding: ∀s, a, sh . POSS(a, s) ⇒ [ Holding(B, sh, RESULT(s, a)) ⇔ (∃loc . a=Grab(sh, loc)) ∨ (Holding(B, sh, s) ∧ ¬∃loc . a=Ungrab(sh, loc)) ]
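Here is the Holding axiom as an executable sketch, reusing the namedtuple action terms and the set-of-fluents encoding from earlier; holding_after is a hypothetical helper that decides the fluent’s truth value in RESULT(s, a).

    from collections import namedtuple

    Grab = namedtuple("Grab", ["sh", "loc"])
    Ungrab = namedtuple("Ungrab", ["sh", "loc"])

    def holding_after(state, a, sh):
        """Holding(B, sh, RESULT(s, a)) per the successor state axiom."""
        made_true = isinstance(a, Grab) and a.sh == sh    # ∃loc . a = Grab(sh, loc)
        undone    = isinstance(a, Ungrab) and a.sh == sh  # ∃loc . a = Ungrab(sh, loc)
        return made_true or (("Holding", "B", sh) in state and not undone)

    s = {("At", "B", "F"), ("At", "s1", "F")}
    print(holding_after(s, Grab("s1", "F"), "s1"))    # True: Grab makes it true
    print(holding_after(s, Ungrab("s1", "F"), "s1"))  # False: was false, and Ungrab undoes it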
Quiz: Successor State Axiom Write a successor state axiom for fluent At. This is a tricky one. Keep in mind all of the actions that can affect At: L, R, G, and U.
Answer: Successor State Axiom Write a successor state axiom for fluent At. ∀s, a . POSS(a, s) ⇒ [ fluent-true-in-situation(RESULT(s, a)) ⇔ (a makes fluent true) ∨ (fluent was true ∧ a didn’t undo it) ] ∀s, a, x, loc . POSS(a, s) ⇒ [ At(x, loc, RESULT(s, a)) ⇔ (loc=C ∧ a=R) ∨ (loc=F ∧ a=L) ∨ a=Ungrab(x, loc) ∨ ( At(x, loc, s) ∧ ¬((loc=F ∧ a=R) ∨ (loc=C ∧ a=L)) ∧ ¬a=Grab(x, loc) ) ] The first disjuncts say a moved x to loc (R ends at C, L ends at F) or dropped x there; the frame clause keeps At(x, loc) unless a moved x away from loc or grabbed x.
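The At axiom can be sketched the same way; the helper below mirrors the formula above, with L and R as plain constants (an assumed encoding).

    from collections import namedtuple

    Grab = namedtuple("Grab", ["sh", "loc"])
    Ungrab = namedtuple("Ungrab", ["sh", "loc"])
    L, R = "L", "R"

    def at_after(state, a, x, loc):
        """At(x, loc, RESULT(s, a)) per the successor state axiom above."""
        moved_here = (loc == "C" and a == R) or (loc == "F" and a == L)
        moved_away = (loc == "F" and a == R) or (loc == "C" and a == L)
        dropped    = isinstance(a, Ungrab) and (a.sh, a.loc) == (x, loc)
        grabbed    = isinstance(a, Grab) and (a.sh, a.loc) == (x, loc)
        return (moved_here or dropped
                or (("At", x, loc) in state and not moved_away and not grabbed))

    s = {("At", "B", "F"), ("At", "s1", "C")}
    print(at_after(s, R, "B", "C"))                 # True: R ends at C
    print(at_after(s, R, "B", "F"))                 # False: R moves away from F
    print(at_after(s, Grab("s1", "C"), "s1", "C"))  # False: grabbed sheep are held, not At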
Initial State and Goal State Examples Initial states and goal states are described with any FOL sentences you like, usually using S0 for the situation of the initial state, and ∃S to quantify the situation of the goal state. For example: Initial: Sheep(sh1) ∧ Sheep(sh2) ∧ Robot(B) ∧ At(sh1, F, S0) ∧ At(sh2, F, S0) ∧ At(B, F, S0) Goal: ∃S . ∀sh . Sheep(sh) ⇒ At(sh, C, S)
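As a small sketch, the body of the goal sentence can be checked against any one candidate situation (the ∃S is what the planner searches over); the helper and default sheep names are assumptions:

    def goal_holds(state, sheep=("sh1", "sh2")):
        """∀sh . Sheep(sh) ⇒ At(sh, C, S), for one candidate situation S."""
        return all(("At", sh, "C") in state for sh in sheep)

    print(goal_holds({("At", "sh1", "C"), ("At", "sh2", "C")}))  # True
    print(goal_holds({("At", "sh1", "C"), ("At", "sh2", "F")}))  # False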
Quiz: Describing states Write a situation calculus description of the initial state which includes 4 sheep in the corral, and the robot in the field. Also, write a goal state in which all sheep are in the field.
Answer: Describing states Write a situation calculus description of the initial state which includes 4 sheep in the corral, and the robot in the field. Sheep(sh1) ∧ Sheep(sh2) ∧ Sheep(sh3) ∧ Sheep(sh4) ∧ Robot(B) ∧ At(sh1, C, S0) ∧ At(sh2, C, S0) ∧ At(sh3, C, S0) ∧ At(sh4, C, S0) ∧ At(B, F, S0) Also, write a goal state in which all sheep are in the field. ∃S . ∀sh . Sheep(sh) ⇒ At(sh, F, S)
Planning in Situation Calculus Situation Calculus is just FOL with some extra conventions. Planning amounts to doing inference in FOL. In other words, planning is equivalent to finding a proof of this FOL statement: possibility-axioms ∧ successor-state-axioms ∧ initial-state ⇒ goal
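For the finite sheep world, this inference can be done by brute-force search over situations: a plan is exactly a constructive proof that some situation satisfying the goal is reachable. Below is a minimal breadth-first sketch; the fluent encoding and action names are the same illustrative assumptions as before, and this stands in for, rather than implements, a real FOL theorem prover.

    from collections import deque

    SHEEP = ("s1", "s2")

    def successors(state):
        """Yield (action, next-state) pairs for all possible actions."""
        loc = next(l for (p, x, l) in state if p == "At" and x == "B")
        held = [x for (p, b, x) in state if p == "Holding"]
        if loc == "C":                                  # Left: C -> F
            yield "L", frozenset(state - {("At", "B", "C")} | {("At", "B", "F")})
        if loc == "F":                                  # Right: F -> C
            yield "R", frozenset(state - {("At", "B", "F")} | {("At", "B", "C")})
        if not held:                                    # Grab a co-located sheep
            for sh in SHEEP:
                if ("At", sh, loc) in state:
                    yield f"Grab({sh},{loc})", frozenset(
                        state - {("At", sh, loc)} | {("Holding", "B", sh)})
        for sh in held:                                 # Ungrab at current location
            yield f"Ungrab({sh},{loc})", frozenset(
                state - {("Holding", "B", sh)} | {("At", sh, loc)})

    def plan(initial, goal_test):
        frontier, seen = deque([(initial, [])]), {initial}
        while frontier:
            state, steps = frontier.popleft()
            if goal_test(state):
                return steps
            for act, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [act]))

    s0 = frozenset({("At", "B", "F"), ("At", "s1", "F"), ("At", "s2", "F")})
    print(plan(s0, lambda s: all(("At", sh, "C") in s for sh in SHEEP)))
    # e.g. ['Grab(s1,F)', 'R', 'Ungrab(s1,C)', 'L', 'Grab(s2,F)', 'R', 'Ungrab(s2,C)']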
FOL Inference, or “Theorem-Proving” There is a lot of active research on doing inference in FOL. For finite domains (finite sets of objects and relations in the possible worlds), the problem is NP-hard. Automated provers rely on lots of heuristic search techniques, which we won’t get into in this class. You will try using one of these systems in your second programming assignment.
Handling Uncertainty So far, we have assumed that our environments: • Are fully observable • Have deterministic actions • Don’t change unless the agent causes them to change • Are discrete • Have a single intelligent agent Let’s see what happens when we break some of these assumptions. For now, we’ll consider the first 3.
Unobservable Sheep World Instead of modeling world states, in unobservable or partially-observable environments we will model belief states. A belief state is a set of possible worlds, which constitute all of the worlds that the agent believes it might be in. For example, here is a belief state in the sheep world: [Figure: a belief state shown as a set of four possible sheep-world diagrams.] Quiz: 1. Intuitively, what does this belief state represent? 2. Write an FOL statement that describes this belief state.
Partially-observable Sheep World Instead of modeling world states, in unobservable or partially-observable environments we will model belief states. A belief state is a set of possible worlds, which constitute all of the worlds that the agent believes it might be in. For example, here is a belief state in the sheep world: Answer: 1. It represents the belief that the robot is in the field, but it doesn’t know where the sheep are. 2. Sheep(s1) ∧ Sheep(s2) ∧ Robot(B) ∧ At(B, F) ∧ (At(s1, F) ∨ At(s1, C)) ∧ (At(s2, F) ∨ At(s2, C))
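A minimal sketch of this belief state in Python, treating each possible world as a frozenset of ground fluents (the situation argument is dropped here, as in the FOL statement above):

    from itertools import product

    # B is known to be in F; each sheep could be in F or C, giving 4 worlds.
    belief = {
        frozenset({("At", "B", "F"), ("At", "s1", l1), ("At", "s2", l2)})
        for l1, l2 in product(("F", "C"), repeat=2)
    }
    print(len(belief))   # 4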
Actions in a Partially Observable World Because an agent in a partially observable world doesn’t know exactly what the state of the world is, it might try to take an action whose preconditions aren’t satisfied. Instead of saying that this causes the robot to explode, we will simply say that in such cases the action does nothing.
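A minimal sketch of that rule: update a belief state by applying the action in every possible world, leaving unchanged any world where the preconditions fail. grab_s1_F is a hypothetical transition function for Grab(s1, F), over the same world encoding as above.

    from itertools import product

    def grab_s1_F(world):
        if ("At", "B", "F") in world and ("At", "s1", "F") in world:
            return frozenset(world - {("At", "s1", "F")} | {("Holding", "B", "s1")})
        return world                       # preconditions unsatisfied: no-op

    def update(belief, action):
        return {action(w) for w in belief}

    belief = {frozenset({("At", "B", "F"), ("At", "s1", l1), ("At", "s2", l2)})
              for l1, l2 in product(("F", "C"), repeat=2)}
    after = update(belief, grab_s1_F)
    print(len(after))                      # 4 worlds; in 2 of them B holds s1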
Quiz: Actions in a PO Sheep World Suppose an agent takes the action Grab(s1, F). What is the robot’s belief state after this action? [Figure: the four-world belief state from before, an arrow labeled Grab(s1, F), and a question mark.]
Answer: Actions in a PO Sheep World Suppose an agent takes the action Grab(s1, F). What is the robot’s belief state after this action? [Figure: the resulting four-world belief state: in the worlds where s1 was in F, B is now holding s1; in the worlds where s1 was in C, nothing changed.]
Sensors in a PO Sheep World Suppose our robot can’t see whether a sheep is in F or C, but it has a sensor to determine whether it is holding a sheep. We model sensors as events that transform the belief state, just like actions. [Figure: the belief state after Grab(s1, F), then split by a Sense Holding event into two belief states: one where B is holding s1, one where it is not.]
Sensors in a PO Sheep World Notice that actions result in a single belief state. Sensing events split the belief state into 2 (or more) possible belief states, depending on what is perceived. We will need conditional plans to handle these sensing events, since the rest of the plan depends on what was sensed. [Figure: the same Grab(s1, F) / Sense Holding diagram as the previous slide.]
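A minimal sketch of a sensing event over the same encoding: the Holding sensor partitions the belief state into the worlds consistent with each possible percept.

    def sense(belief, percept):
        """Split a belief state on a boolean percept."""
        yes = {w for w in belief if percept(w)}
        return yes, belief - yes

    # The post-Grab(s1, F) belief state: s1 held if it was in F, else unchanged.
    belief = {frozenset({("At", "B", "F"), ("Holding", "B", "s1"), ("At", "s2", l)})
              for l in ("F", "C")} | \
             {frozenset({("At", "B", "F"), ("At", "s1", "C"), ("At", "s2", l)})
              for l in ("F", "C")}

    holding, not_holding = sense(belief, lambda w: ("Holding", "B", "s1") in w)
    print(len(holding), len(not_holding))   # 2 2: the sensor splits the 4 worlds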
Quiz: Planning with Sensing Suppose the robot doesn’t know where it is, and doesn’t know where the sheep are, but knows it isn’t holding any sheep. Describe this initial state, and come up with a plan to put all the sheep in the corral.
Answer: Planning with Sensing Suppose the robot doesn’t know where it is, and doesn’t know where the sheep are, but knows it isn’t holding any sheep. Describe this initial state, and come up with a plan to put all the sheep in the corral. Initial belief state: [Figure: the eight possible worlds, with B in F or C and each of s1, s2 in F or C.] A possible plan: L; G(s1, F); if Holding(s1): { R; U(s1, C); L; G(s2, F); if Holding(s2): { R; U(s2, C) } } else: { G(s2, F); if Holding(s2): { R; U(s2, C) } }
Quiz: Dynamic Sheep World Now let’s suppose the world is dynamic (in addition to PO): it can change even when the robot does nothing. We will assume that sometimes a sheep can get out of the corral by itself. So if At(s, C) is true in one time step, it might change to At(s, F) in the next time step (unless the robot’s action was to grab s). For each of the plans below to put all of the sheep in the corral, determine whether the plan will always work, sometimes work, or never work: • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C) • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C), L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C)
Answer: Dynamic Sheep World Now let’s suppose the world is dynamic (in addition to PO): it can change even when the robot does nothing. We will assume that sometimes a sheep can get out of the corral by itself. So if At(s, C) is true in one time step, it might change to At(s, F) in the next time step (unless the robot’s action was to grab s). For each of the plans below to put all of the sheep in the corral, determine whether the plan will always work, sometimes work, or never work: • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C) Sometimes: s1 may get out of C by the time s2 is in C. • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C), L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C) Sometimes, for the same reason: repeating the plan helps, but a sheep may still get out of C before the plan finishes.
Quiz: Stochastic Sheep World Suppose instead of a dynamic world, we have a world with stochastic actions: sometimes when L and R happen, they succeed, and sometimes they don’t. Since the world is PO, we don’t know when an action succeeds and when it doesn’t. For each of the plans below to put all of the sheep in the corral, determine whether the plan will always work, sometimes work, or never work: • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C) • L, L, G(s1, F), R, R, U(s1, C), L, L, G(s2, F), R, R, U(s2, C) • L, L, L, G(s1, F), R, R, R, U(s1, C), L, L, L, G(s2, F), R, R, R, U(s2, C)
Answer: Stochastic Sheep World Suppose instead of a dynamic world, we have a world with stochastic actions: sometimes when L and R happen, they succeed, and sometimes they don’t. Since the world is PO, we don’t know when an action succeeds and when it doesn’t. For each of the plans below to put all of the sheep in the corral, determine whether the plan will always work, sometimes work, or never work: • L, G(s1, F), R, U(s1, C), L, G(s2, F), R, U(s2, C) Sometimes: L and R may not happen correctly, in which case the rest of the plan will fail. • L, L, G(s1, F), R, R, U(s1, C), L, L, G(s2, F), R, R, U(s2, C) Sometimes, for the same reason: repeating each move makes success more likely, but every move can still fail. • L, L, L, G(s1, F), R, R, R, U(s1, C), L, L, L, G(s2, F), R, R, R, U(s2, C) Same thing: no fixed number of repetitions guarantees success.
Plans with loops and conditionals To handle dynamic worlds and worlds with stochastic actions, we need plans with loops. For example, a plan might include this: Repeat until successful: L. To handle sensing, we need plans with conditionals.
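A minimal sketch of the “repeat until successful” idiom against a stochastic Left action. The success probability, and the assumption that the loop’s exit condition can be sensed, are both illustrative.

    import random

    def stochastic_left(loc, p_success=0.7):
        """Left moves C -> F, but only succeeds with probability p_success."""
        if loc == "C" and random.random() < p_success:
            return "F"
        return loc

    loc, steps = "C", 0
    while loc != "F":            # Repeat until successful: L
        loc = stochastic_left(loc)
        steps += 1
    print(f"reached F after {steps} attempt(s)")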
Plans with loops and conditionals Plans without loops and conditionals were represented as a path, e.g.: L → G → R → U. To represent plans with loops and conditionals, we will use a directed graph. Edges will include not just actions, but also sensing events. E.g.: [Figure: a plan graph with action edges (G(s1), G(s2), R) and sensing edges (Holding / ¬Holding, At(B, C)), including a loop.]
Finding plans with loops and conditionals We won’t go into detail about planning algorithms for these kinds of plans. The basic idea is the same as for ordinary planning: we perform a search through a state-space graph for the goal state, starting from the initial state. However, the plan may involve branches and loops, and it must guarantee that ALL leaf nodes are goal states (thus guaranteeing that the plan reaches the goal for any possible set of conditions).
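One standard way to realize this idea is AND-OR graph search (in the style of Russell and Norvig): OR nodes choose an action, AND nodes require a sub-plan for every possible outcome of that action, so every leaf of a returned plan is a goal state. The sketch below uses a hypothetical problem interface (goal, actions, outcomes) and a toy nondeterministic problem; all names are assumptions, not the slides’ notation.

    def or_search(state, problem, path):
        if problem["goal"](state):
            return []                       # empty plan: already at a goal
        if state in path:
            return None                     # cycle without progress: give up
        for action in problem["actions"](state):
            plan = and_search(problem["outcomes"](state, action),
                              problem, [state] + path)
            if plan is not None:
                return [action, plan]
        return None

    def and_search(states, problem, path):
        plans = {}
        for s in states:                    # EVERY outcome needs a sub-plan
            plan = or_search(s, problem, path)
            if plan is None:
                return None
            plans[s] = plan
        return plans

    # Toy problem: "go" nondeterministically lands left or right; either way,
    # "fix" then reaches the goal.
    problem = {
        "goal":     lambda s: s == "done",
        "actions":  lambda s: {"start": ["go"], "left": ["fix"], "right": ["fix"]}.get(s, []),
        "outcomes": lambda s, a: {"start": {"left", "right"}}.get(s, {"done"}),
    }
    print(or_search("start", problem, []))
    # e.g. ['go', {'left': ['fix', {'done': []}], 'right': ['fix', {'done': []}]}]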
Quiz: Plans with loops and conditionals Assume the robot starts in F. • How many steps does this plan take to execute? • If the goal is for the robot to be At(B, C), does this plan achieve the goal? [Figure: the plan graph from the previous slide.]
Answer: Plans with loops and conditionals Assume the robot starts in F. • How many steps does this plan take to execute? It could take 2 or more action steps. There is no upper bound on the number of steps it might take, because of the loop. • If the goal is for the robot to be At(B, C), does this plan achieve the goal? No, it doesn’t achieve the goal. For a plan with conditionals and loops, ALL branches must end in goal states. Here, only the bottom branch ends in a goal state, so the plan as a whole does not achieve the goal. [Figure: the plan graph from the previous slide.]