Reasoning about Actions Artificial Intelligence and Lisp #9
Lab statistics 2009-11-02

Registration: 50 students

Number of:          lab2a  lab2b  lab3a  lab3b
----------------------------------------------
Reg. downloads        45     42     18      8
Orphan downloads       3      2      1      0
Lab completed         37     14     10      1
Incomplete upload      0      0      0

Other information: Notice that two lectures have been rescheduled! Please check the schedule on the official (university) webpage for the course.
Repeat: Strengths and weaknesses of state-space-based action planning • Strength: systematic algorithms exist • Strength: complexity results exist for some restricted cases, and some of them are computationally tractable • Strength: progressive planning integrates well with prediction, which is needed in many cognitive-robotics applications • Strength and weakness: actions with arguments must (and often can) be converted to variable-free form, which leads to a large problem size • Weakness: expressivity is insufficient for many practical situations • Weakness: computational properties are not always good enough in practice for large problems • Use of logic-based methods is a way of obtaining higher expressivity • Use of constraint-based methods often obtains better results for very large planning problems • These approaches will be addressed in the next three lectures
Example of Scenario that is Unsuitable for State-Space-based Planning • Scenario with a mobile robot and the action of the robot moving from one place to another • The feasibility of the movement action depends on the floorplan, for example, whether the robot has to pass doorways, whether the door is open or closed, locked or not, etc. • In principle it is possible to do state-space planning in such a scenario by defining a state containing all the possible relevant aspects of the scenario, but it is inconvenient • It is more practical to describe the scenario using a set of propositions (= statements), and to bring such statements into the planning process if and when they are needed.
Purpose of Using Logic for Reasoning about Actions • Specify current state of the world using logic formulas • Specify the immediate effects of actions using logic formulas • Specify indirect effects of actions and other, similar information for the world using logic formulas • Specify policies and goals using logic formulas • Use well-defined operations on logic formulas for deriving conclusions, identifying plans for achieving goals, etc • This is worthwhile if one obtains better expressivity and/or better computational methods
Predicates (revised and extended from before) • [H t f v] the feature f has the value v at time t • [P t a] the action a is performable at time t • [G t a] the agent attempts action a at time t • [D t t+1 a] the action a is actually performed from time t to time t+1 • The predicate D can be generalized to allow actions with extended duration • The predicate symbols are abbreviations for Holds, Performable, Go (or Goal), and Do or Done • These are intended for use both about the past and about the future in the agent's world
Actions and Plans • [P t a] the action a is performable at time t • [G t a] the agent attempts action a at time t • [D t t+1 a] the action a is actually performed from time t to time t+1 • The action argument in these is an action term • An action term may be a verb with its arguments, for example [pour glass4 mug3] • The sequential composition of action terms is an action term, e.g. [seq [pour g4 m3][pour m3 g7]] • Conditional expressions and repetition may also be used, but they are a later step for the logic • Composite action terms can be used in plans, and as scripts within the agent (e.g. precond advice), more...
Examples of Action Laws
• [P t [fill g b]] <-> [H t (subst-in g) empty]
• [D t-1 t [fill g b]] -> [H t (subst-in g) b]
• [P t [pour g1 g2]] <-> [H t (subst-in g2) empty]
                       & (not [H t (subst-in g1) empty])
• Compare the previous notation (scripts):
  [Do .t [pour .fr .to]] =
    [if [and [H- .t (substance-in: .to) empty]
             (not [H- .t (substance-in: .fr) empty])]
        [coact [H! .t (substance-in: .to) (cv (substance-in: .fr))]
               [H! .t (substance-in: .fr) empty]]]
Execution of a plan = composite action • If a is an elementary action, [G t a] holds, [P t a] holds, and [G t a'] does not hold for any other elementary action a', then [D t t+1 a] holds • If [G t [seq a1 a2 …]] holds, [P t a1] holds, and [G t a'] does not hold for any other elementary action a', then [D t t+1 a1] and [G t+1 [seq a2 …]] hold • [G t [seq]] (empty sequence) has no effect, and [seq] is not an elementary action in the above • Leave the definition of concurrent actions for another time • The plan [seq a1 a2 …] is performable iff all the successive steps shown above are performable (actually it's a bit more complicated)
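To make the stepping rule concrete, here is a minimal Lisp sketch of executing a (seq ...) plan against a simple state-transition model. This is not the course's Leonardo machinery: the plan syntax, the action table (an alist from elementary action names to a performability test and an effect function), and execute-plan itself are all assumptions made for this sketch.

(defun execute-plan (plan state actions)
  "Run a plan of the form (SEQ A1 A2 ...) step by step from STATE.
   Returns the final state, or NIL if some step is not performable
   (cf. [P t a] failing).  ACTIONS is an alist from elementary action
   names to a cons of a performability test and an effect function,
   both taking the current state (an alist of feature/value pairs)."
  (if (and (consp plan) (eq (first plan) 'seq))
      (let ((steps (rest plan)))
        (if (null steps)
            state                                 ; [G t [seq]] has no effect
            (let* ((a (first steps))
                   (entry (cdr (assoc a actions))))
              (if (and entry (funcall (car entry) state))
                  ;; [D t t+1 a1] holds; continue with [G t+1 [seq a2 ...]]
                  (execute-plan (cons 'seq (rest steps))
                                (funcall (cdr entry) state)
                                actions)
                  nil))))
      nil))

;; Hypothetical usage with one invented action FILL-GLASS:
;; (execute-plan '(seq fill-glass)
;;               '((glass . empty))
;;               (list (cons 'fill-glass
;;                           (cons (lambda (s) (eq (cdr (assoc 'glass s)) 'empty))
;;                                 (lambda (s) (acons 'glass 'beer s))))))
;; =>  ((GLASS . BEER) (GLASS . EMPTY))   ; ACONS shadows the old value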
Now time to bring in logic • Recall: the purpose of bringing in logic in the context of actions and plans is to have systematic and well-founded methods for manipulating formulas like the ones shown on the previous slides
From Decision Tree to Logic Formula

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

(a&b&c&red&-green&-blue&-white) v
(a&b&-c&-red&green&-blue&-white) v
(a&-b&c&-red&-green&blue&-white) v …

If you know a&b&c then conclude red
If you know b&red conclude c v -c (trivial)
If you know green conclude -c
If you know white conclude (-b&-c) v (b&c), which can be expressed as b <-> c
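As a concrete illustration, here is a minimal Lisp sketch of the tree-to-formula step, assuming the decision tree is written as nested lists (a? then else) with plain symbols as leaves; test-node-p and tree-to-dnf are names invented for this sketch, not part of the course software. It produces the simplified disjuncts (the path literals plus the chosen color), i.e. the Fd form used on a later slide, rather than the full formula with the other colors negated.

(defun test-node-p (node)
  "True if NODE is a test node like (A? then else); leaves are plain symbols."
  (and (consp node) (= (length node) 3)))

(defun tree-to-dnf (node &optional path)
  "Return a list of conjunctions (lists of literals), one per branch of
   the decision tree NODE.  PATH accumulates the literals chosen so far;
   a negative literal is written (NOT X)."
  (if (test-node-p node)
      (destructuring-bind (test then else) node
        (let ((var (intern (string-right-trim "?" (symbol-name test)))))
          (append (tree-to-dnf then (cons var path))
                  (tree-to-dnf else (cons (list 'not var) path)))))
      (list (reverse (cons node path)))))        ; leaf: the path plus the color

;; (tree-to-dnf '(a? (b? (c? red green) (c? blue white))
;;                   (b? (c? white red) (c? blue green))))
;; =>  ((A B C RED) (A B (NOT C) GREEN) (A (NOT B) C BLUE) (A (NOT B) (NOT C) WHITE)
;;      ((NOT A) B C WHITE) ((NOT A) B (NOT C) RED)
;;      ((NOT A) (NOT B) C BLUE) ((NOT A) (NOT B) (NOT C) GREEN))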
Entailment

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

F = (a&b&c&red&-green&-blue&-white) v
    (a&b&-c&-red&green&-blue&-white) v
    (a&-b&c&-red&-green&blue&-white) v …

If you know white conclude (-b&-c) v (b&c), which can be expressed as b <-> c

F, white |= (-b&-c) v (b&c)

The symbol |= is pronounced “entails” (“innebär”).
It is a relation between (sets of) formulas, not a symbol within a formula!
Definition of entailment • For an entailment statement A, B, … |= G • An interpretation is an assignment of truth values to the proposition symbols in A, B, … G • The models for A are those interpretations where the value of A is true. This set is written Mod[A]. (More precisely, the classical models). • The model set for a set of formulas is the intersection of their model sets • The entailment statement expresses that Mod[{A, B, …}] ⊆ Mod[G]
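For propositional logic this definition can be checked directly by enumerating interpretations. Below is a minimal sketch, assuming formulas are written with the Lisp connectives AND, OR and NOT and proposition symbols are plain symbols; prop-symbols, eval-formula and entails-p are names invented for this sketch.

(defun prop-symbols (formula)
  "All proposition symbols occurring in FORMULA."
  (if (symbolp formula)
      (list formula)
      (remove-duplicates (mapcan #'prop-symbols (rest formula)))))

(defun eval-formula (formula interp)
  "Truth value of FORMULA under INTERP, an alist from symbols to T or NIL."
  (cond ((symbolp formula) (cdr (assoc formula interp)))
        ((eq (first formula) 'not)
         (not (eval-formula (second formula) interp)))
        ((eq (first formula) 'and)
         (every (lambda (f) (eval-formula f interp)) (rest formula)))
        ((eq (first formula) 'or)
         (some (lambda (f) (eval-formula f interp)) (rest formula)))))

(defun entails-p (premises goal)
  "T iff every interpretation that makes all PREMISES true also makes GOAL true."
  (let ((syms (remove-duplicates (mapcan #'prop-symbols (cons goal premises)))))
    (labels ((check (syms interp)
               (if (null syms)
                   (or (notevery (lambda (p) (eval-formula p interp)) premises)
                       (eval-formula goal interp))
                   (and (check (rest syms) (acons (first syms) t interp))
                        (check (rest syms) (acons (first syms) nil interp))))))
      (check syms nil))))

;; Usage:
;; (entails-p '((or a b) (not a)) 'b)  =>  T
;; (entails-p '((or a b)) 'b)          =>  NIL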
Entailment – example of definition

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

F = (a&b&c&red&-green&-blue&-white) v
    (a&b&-c&-red&green&-blue&-white) v
    (a&-b&c&-red&-green&blue&-white) v …

If you know white conclude (-b&-c) v (b&c), which can be expressed as b <-> c

F, white |= (-b&-c) v (b&c)

The symbol |= is pronounced “entails” (“innebär”).
It is a relation between (sets of) formulas, not a symbol within a formula!
Model sets

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

F = (a & b & c & red & -green & -blue & -white) v
    (a & b & -c & -red & green & -blue & -white) v
    (a & -b & c & -red & -green & blue & -white) v …

F, a&white |= (-b&-c) v (b&c)

Mod[a&b&c&red& …] = {{a:T, b:T, c:T, red:T, green:F, blue:F, white:F}}
Mod[F] = {{a:T, b:F, c:T, red:F, green:F, blue:T, white:F}, ...}
Mod[a&white] = {{a:T, white:T, (any combination of the others)}, ...}
Mod[(-b&-c) v (b&c)] = {{b:F, c:F, (any combination of the others)},
                        {b:T, c:T, (any combination of the others)}}
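A minimal sketch of computing such model sets by enumeration, reusing prop-symbols and eval-formula from the entailment sketch above; models is a name invented here. Symbols not mentioned in the formula stay unconstrained, which corresponds to "(any combination of the others)"; the optional extra-symbols argument can force them into the enumeration.

(defun models (formula &optional extra-symbols)
  "All interpretations (alists) over the symbols of FORMULA, plus
   EXTRA-SYMBOLS, in which FORMULA is true."
  (let ((syms (remove-duplicates (append extra-symbols (prop-symbols formula)))))
    (labels ((gen (syms interp)
               (if (null syms)
                   (when (eval-formula formula interp) (list interp))
                   (append (gen (rest syms) (acons (first syms) t interp))
                           (gen (rest syms) (acons (first syms) nil interp))))))
      (gen syms nil))))

;; Usage:
;; (models '(and a white))  =>  (((WHITE . T) (A . T)))
;; Passing extra symbols, e.g. (models '(and a white) '(b)), additionally
;; enumerates the unconstrained B with both truth values.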
Simplified formulation

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

F  = (a&b&c&red&-green&-blue&-white) v
     (a&b&-c&-red&green&-blue&-white) v
     (a&-b&c&-red&-green&blue&-white) v …

Fd = (a&b&c&red) v (a&b&-c&green) v (a&-b&c&-red&blue) v …

F2 = -(red&green v red&blue v red&white v green&blue v green&white v blue&white)

It is “easy” to see that Mod[F] = Mod[Fd, F2], and the latter formulation is much more compact.
Equivalence between formulas

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

Fd = (a&b&c&red) v (a&b&-c&green) v (a&-b&c&-red&blue) v ...
Fc = (-a v -b v -c v red) & (-a v -b v c v green) & …

One can see that Mod[Fc] = Mod[Fd], since each combination of a, b and c allows exactly one color.

The first conjunct in Fc can be rewritten as
  ((-a v -b v -c) v red)
  (-(a & b & c) v red)
  (a & b & c) -> red
These formulas are equivalent – they have the same set of models!

Fc – conjunctive normal form, Fd – disjunctive normal form
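Equivalence can be tested with the entailment sketch from above: two formulas have the same model set exactly when each entails the other. equivalent-p is a name invented for this sketch.

(defun equivalent-p (f1 f2)
  "T iff F1 and F2 have the same set of models (mutual entailment)."
  (and (entails-p (list f1) f2)
       (entails-p (list f2) f1)))

;; Usage, for the rewriting of the first conjunct of Fc:
;; (equivalent-p '(or (not a) (not b) (not c) red)
;;               '(or (not (and a b c)) red))
;; =>  T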
Inference

Fc = (-a v -b v -c v red) & (-a v -b v c v green) & …
F2 = -(red&green v red&blue v red&white v green&blue v green&white v blue&white)

F2 can be equivalently replaced by
  F3 = -(red&green) & -(red&blue) & ...
and in turn by
  F4 = (-red v -green) & (-red v -blue) & ...

Both Fc and F4 can be replaced by the set of their conjuncts (= the components of the and-expression) for the purpose of inference.

Now suppose we have Fc, F4, and the proposition white.
We observed above that F, white |= (-b&-c) v (b&c).
Try to obtain this using inference in a strict way!
Details of proof

[a? [b? [c? red green] [c? blue white]]
    [b? [c? white red] [c? blue green]]]

Given clauses                    Derived clauses
white
-a v -b v -c v red            |  -red
-a v -b v c v green           |  -a v -b v -c
-a v b v -c v blue            |  a v -b v c
-a v b v c v white            |  -green
a v -b v -c v white           |  -a v -b v c
a v -b v c v red              |  a v b v c
a v b v -c v blue             |  -blue
a v b v c v green             |  -a v b v -c
-white v -red                 |  a v b v -c
-white v -green               |  -b v c
-white v -blue (and more...)  |  b v -c
                              |  (-b v c) & (b v -c)
                              |  (b -> c) & (c -> b)
                              |  b <-> c
Resolution Rule (Simple Case) • The resolution rule is an inference rule • It applies to a combination of two propositions, each of which must be a clause, i.e. a disjunction of literals, where a literal is an atomic proposition or the negation of one (no other subexpressions) • Given (a v B) and (-a v C), it forms (B v C), where B and C may each be zero, one or more literals • Conversion from or to logic formulas that are not clauses must be done using other rules
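A minimal Lisp sketch of this rule, assuming a clause is represented as a list of literals and a negative literal as (NOT symbol); complement-lit and resolve are names invented for this sketch.

(defun complement-lit (lit)
  "The complementary literal: A <-> (NOT A)."
  (if (and (consp lit) (eq (first lit) 'not))
      (second lit)
      (list 'not lit)))

(defun resolve (clause1 clause2)
  "All resolvents of CLAUSE1 and CLAUSE2 (lists of literals)."
  (let ((resolvents nil))
    (dolist (lit clause1 resolvents)
      (when (member (complement-lit lit) clause2 :test #'equal)
        (push (remove-duplicates
               (append (remove lit clause1 :test #'equal)
                       (remove (complement-lit lit) clause2 :test #'equal))
               :test #'equal)
              resolvents)))))

;; Usage, matching (a v B) and (-a v C) giving (B v C):
;; (resolve '(a b) '((not a) c))  =>  ((B C))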
Inference Operator • Let A,B, … and G be clauses • If G can be obtained from A,B,... using repeated application of the resolution operator, then we write A,B,... |- G and say that A,B,... lead to G • The same notation is used for other choices of inference rules as well • Recall that if Mod[A,B,...] ⊆ Mod[G] then we say that A,B,... entails G • Completeness result (for propositional logic): A,B,... |- G if and only if A,B,... |= G • One can view |- as an implementation of |=
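A minimal sketch of |- by saturation: keep resolving pairs of clauses (using resolve from the sketch above) until the goal clause appears or nothing new can be derived. derives-p and clause-equal are names invented here, and the search is deliberately naive.

(defun clause-equal (c1 c2)
  "Clauses are compared as sets of literals."
  (and (subsetp c1 c2 :test #'equal) (subsetp c2 c1 :test #'equal)))

(defun derives-p (clauses goal)
  "T iff the clause GOAL can be obtained from CLAUSES by repeated resolution."
  (loop
    (when (member goal clauses :test #'clause-equal)
      (return t))
    (let ((new nil))
      ;; one round: all resolvents of all pairs of known clauses
      (dolist (c1 clauses)
        (dolist (c2 clauses)
          (dolist (r (resolve c1 c2))
            (unless (or (member r clauses :test #'clause-equal)
                        (member r new :test #'clause-equal))
              (push r new)))))
      (when (null new) (return nil))          ; nothing new can be derived: give up
      (setf clauses (append new clauses)))))

;; Usage:
;; (derives-p '((a) ((not a) b)) '(b))  =>  T
;; With the clause set of the "Details of proof" slide above as input,
;; it also derives ((NOT B) C), i.e. -b v c.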
From here to reasoning about actions • Now we have seen how logical inference can be done by systematic manipulation of logic formulas • Three things are needed in order to apply this to reasoning about actions: • Already done: Replace single propositional symbols (a, b, etc.) by composite expressions involving a predicate and its arguments, for example [H time feature value] • Extend the definition of models: not merely an assignment of truth-values to proposition symbols • Generalize the resolution operator to the larger repertoire of logic formulas
Definition of Models for Actions, I • Need to define interpretation and evaluation • An interpretation for a logic of actions consists of a development (~ episode) Dev + a mapping Act from elementary action terms to actions + an invocation log Inv • An action is here a set of pairs of partial states, as in the previous lecture • An invocation log is here a set of pairs of timepoints and action terms. Each pair specifies an attempt to begin executing the action term at that timepoint • A formula [H t f v] is true in an interpretation <Dev Act Inv> iff [: f v] is a member of Dev(t), meaning that v holds as the value of f • Must also define D and related predicates; see a later slide
Repeat: State-space Ontology of Actions • The structures in lab2 implement this ontology • Purpose: unified treatment for several ways of representing the preconditions and effects of actions • It is also the basis for planning algorithms • Start from a set Fea of features and an assignment of a range for each feature • Consider the space of all feature states Sta; each of them is a mapping from features to values, and there may be restrictions on what combinations of values are possible • A complete feature state assigns a value to all members of Fea, a partial feature state does not. Complete is the default. • A development Dev is a mapping from integers from 0 and up, to feature states. (Similar to episodes in Leonardo)
Example 1 • The set of models for [H t lightning true] → [H t+1 thunder true] consists of those interpretations where, in the development, each state containing [: lightning true] is followed by a state containing [: thunder true] • Notice that only the Dev part of the interpretation is used in this simple example; the other two parts may be empty sets, or whatever else
Definition of Models for Actions, II • Consider an interpretation <Dev Act Inv> • A formula [P t a] is true iff pre ⊆ Dev(t) for some member <pre post> of Act(a), meaning a is performable at time t • A formula [D t t+1 a] is true in the interpretation iff pre ⊆ Dev(t) and post ⊆ Dev(t+1) for some member <pre post> of Act(a), meaning the action is done then • A formula [G t a] is true in the interpretation iff the pair of t and a is a member of Inv • A formula [D s t a] where s ≠ t-1 is false, so actions with extended duration are not admitted (in the present, simplified version of the formalism) • Repeat: If A is a set of propositions, then Mod[A] is the set of interpretations where all members of A are true.
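These truth conditions are directly computable for a finite development. Below is a minimal Lisp sketch, assuming a development is a list of states indexed by time, a state or partial state is a list of (feature value) pairs, Act is an alist from action terms to lists of (pre post) pairs, and Inv is a list of (time action) pairs; all function names are invented for this sketch.

(defun state-subsumes-p (partial state)
  "T iff every (feature value) pair of PARTIAL also occurs in STATE."
  (subsetp partial state :test #'equal))

(defun h-holds-p (dev time feature value)
  "[H t f v]: FEATURE has VALUE in the state at TIME."
  (member (list feature value) (nth time dev) :test #'equal))

(defun p-holds-p (act dev time action)
  "[P t a]: some <pre post> of Act(a) has its pre contained in Dev(t)."
  (some (lambda (pair) (state-subsumes-p (first pair) (nth time dev)))
        (cdr (assoc action act :test #'equal))))

(defun d-holds-p (act dev time action)
  "[D t t+1 a]: some <pre post> of Act(a) matches the states at TIME and TIME+1."
  (some (lambda (pair)
          (and (state-subsumes-p (first pair) (nth time dev))
               (state-subsumes-p (second pair) (nth (1+ time) dev))))
        (cdr (assoc action act :test #'equal))))

(defun g-holds-p (inv time action)
  "[G t a]: the invocation log INV contains the pair (TIME ACTION)."
  (member (list time action) inv :test #'equal))

;; Hypothetical usage, with one feature GLASS and one action FILL:
;; (let ((dev '(((glass empty)) ((glass beer))))            ; states at t=0 and t=1
;;       (act '((fill (((glass empty)) ((glass beer)))))))  ; one <pre post> pair
;;   (d-holds-p act dev 0 'fill))                           ; => true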
Interpretations • In order to merely characterize what has happened during an episode, it is sufficient to use the predicates H and D, and interpretations can have the form <Dev Act {}> • In order to characterize the intention or the plan of an agent, and the execution of that plan, use all of H, D, P and G, and make use of interpretations of the form <Dev Act Inv> with non-empty Inv
Return to Action Laws • [P t [fill g b]] <-> [H t (subst-in g) empty] • [D t-1 t [fill g b]] -> [H t (subst-in g) b] • [G t a] & [P t a] -> [D t t+1 a] (simplified) • [H 14 (subst-in glass4) empty] • [G 14 [fill glass4 beer]] • We expect to be able to obtain the consequence • [H 15 (subst-in glass4) beer]
Rewrite as
  1  -[P t [fill g b]] v [H t (subst-in g) empty]
  2  [P t [fill g b]] v -[H t (subst-in g) empty]
  3  -[D t-1 t [fill g b]] v [H t (subst-in g) b]
  4  -[G t a] v -[P t a] v [D t t+1 a]
  5  [H 14 (subst-in glass4) empty]
  6  [G 14 [fill glass4 beer]]
Consequences using the resolution rule + instantiation:
  7  -[P 14 [fill glass4 beer]] v [D 14 15 [fill glass4 beer]]         from 4,6
  8  -[H 14 (subst-in glass4) empty] v [D 14 15 [fill glass4 beer]]    from 2,7
  9  [D 14 15 [fill glass4 beer]]                                      from 5,8
  10 [H 15 (subst-in glass4) beer]                                     from 3,9
Instantiation = replacement of variable by some expression
Resolution and instantiation can be combined: unification
Just a complicated way of doing a simulation?? • In this simple example, yes, but the same method also works in cases that require more expressivity, like: • If you don't know whether the agent's intention is to pour beer or juice into the glass • If you just know that the intention concerns the glass that is in the robot's hand, without knowing its identifier • If the intention is to fill all the glasses that are on a tray (after some generalization of the above) • and many other cases • The important thing is that the current state of the environment, its expected future states, actions, and intentions for actions are all expressed in a uniform manner and can be combined using the inference operator
Revisit the Premises • [P t [fill g b]] <-> [H t (subst-in g) empty] • [D t-1 t [fill g b]] -> [H t (subst-in g) b] • [G t a] & [P t a] -> [D t t+1 a] (simplified) • [H 14 (subst-in glass4) empty] • [G 14 [fill glass4 beer]] • We expect to be able to obtain the consequence • [H 15 (subst-in glass4) beer] • Notice that the premises fall into three kinds: domain specific (= application specific), ontology based, and current situation • The full set of 'red' and 'blue' premises (colour-coded on the original slide) can be made to characterize the Act part of the interpretation completely (details next time)
Reasoning about Plans – two cases • Given the logical rules, action laws, initial state of the world, and one or more intentions by the agent – predict future states of the world: done by inference for obtaining logic formulas that refer to later timepoints • rules, initstate, plan |= effects • Given the logical rules, action laws, initial state of the world, and an action term expressing a goal for the agent – obtain a plan (an intention) that will achieve that goal • rules, initstate, plan |= goal • where the desired result (marked in red on the original slide) is 'effects' in the first case and 'plan' in the second • Planning is therefore an inverse problem wrt inference. In formal logic this operation is called abduction. This will be the main topic of the next lecture.