
Action, Change and Evolution: from single agents to multi-agents

Explores the historical and contemporary significance of action and change in knowledge representation and reasoning (KR & R), tracing the evolution from single agents to multi-agents. Covers diverse domains, including robotics, distributed systems, cell behavior, and engineering. Emphasizes the importance of specifying, controlling, and modeling change in various contexts, with a focus on non-monotonicity, probabilistic reasoning, and causal inference. Traces the frame problem's origins in AI and its impact on problem-solving systems. Highlights key milestones and quotes from influential figures such as McCarthy and Hayes, underscoring the wide applicability and interdisciplinary nature of the subject.


Presentation Transcript


  1. Action, Change and Evolution: from single agents to multi-agents Chitta Baral Professor, School of Computing, Informatics & Decision Systems Engineering Key faculty, Center for Evolutionary Medicine & Informatics Arizona State University Tempe, AZ 85287

  2. Action, Change and Evolution: importance to KR & R • Historical importance • Applicability to various domains • Various knowledge representation aspects • Various kinds of reasoning

  3. Heracleitos/Herakleitos/Heraclitus of Ephesus (c. 500 BC), as interpreted by Plato in Cratylus: "No man ever steps in the same river twice, for it is not the same river and he is not the same man." Panta rhei kai ouden menei: all things are in motion and nothing is at rest.

  4. Alternate interpretation of what Heraclitus said • … different waters flow in rivers staying the same. • In other words, though the waters are always changing, the rivers stay the same. • Indeed, it must be precisely because the waters are always changing that there are rivers at all, rather than lakes or ponds. • The message is that rivers can stay the same over time even though, or indeed because, the waters change. The point, then, is not that everything is changing, but that the fact that some things change makes possible the continued existence of other things.

  5. Free will and choosing one's destiny

  6. Where does that line of thought lead us? • Change is ubiquitous • But one can shape the change in a desired way • Some emerging KR issues • How to specify change • How to specify our desires/goals regarding the change • How to construct/verify ways to control the change

  7. “Action and Change” is encountered often in Computing as well as other fields • Robots and Agents • Updates to a database • Becomes more interesting when updates trigger active rules • Distributed Systems • Computer programs • … • Modeling cell behavior • Ligand coming in contact with a receptor • Construction Engineering • …

  8. Various KR aspects encountered • Need for non-monotonicity • Probabilistic reasoning • Modal logics • Open and closed domains • Causality • Hybrid reasoning

  9. Various kinds of reasoning • Prediction • Plan verification; control verification • Narratives • Counterfactuals • Causal reasoning • Planning; control generation • Explanation • Diagnosis • Hypothesis generation

  10. Initial Key Issue: Frame Problem • Motivation: How to specify the transition between states of the world due to actions? • A full state-transition table would be too space consuming! • Assume by default that properties of the world normally do not change, and specify as exceptions what does change. • How to precisely state the above? • Many finer issues! • To be elaborated upon as we proceed further.
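The default the slide describes can be sketched in a few lines of Python (an illustrative encoding, not from the talk): an action lists only the fluents it changes, and every other property persists by default.

```python
def apply_action(state, effects):
    """Successor state under default persistence: start from the old state
    (nothing changes by default), then override only the listed effects."""
    new_state = dict(state)    # inertia: every fluent keeps its old value
    new_state.update(effects)  # exceptions: the fluents the action changes
    return new_state

s0 = {"alive": True, "loaded": False}
s1 = apply_action(s0, {"loaded": True})  # effect of a `load` action
```

With n actions and m fluents, this replaces the mn explicit frame conditions McCarthy and Hayes describe with a single copy-then-override step.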

  11. Origin of the AI “frame” problem • Leibniz, c.1679 • "everything is presumed to remain in the state in which it is" • Newton, 1687 (Philosophiae Naturalis Principia Mathematica) • An object will remain at rest, or continue to move at a constant velocity, unless a resultant force acts on it.

  12. Early work in AI on action and change • 1959 McCarthy (Programs with common sense) • 1969 McCarthy and Hayes (Some philosophical problems from the standpoint of AI) – origin of the "frame problem" in AI • 1971 Raphael – The frame problem in problem-solving systems (defines the frame problem nicely) • 1972 Sandewall – An approach to the frame problem • 1972 Hewitt – PLANNER • 1973 Hayes – The frame problem and related problems in AI • 1977 Hayes – The logic of frames • 1978 Reiter – On reasoning by default

  13. Quotes from McCarthy & Hayes 1969 • In the last section of part 3, in proving that one person could get into conversation with another, we were obliged to add the hypothesis that if a person has a telephone he still has it after looking up a number in the telephone book. If we had a number of actions to be performed in sequence we would have quite a number of conditions to write down that certain actions do not change the values of certain fluents. In fact with n actions and m fluents we might have to write down mn such conditions. • We see two ways out of this difficulty. The first is to introduce the notion of frame, like the state vector in McCarthy (1962). A number of fluents are declared as attached to the frame and the effect of an action is described by telling which fluents are changed, all others being presumed unchanged.

  14. In summary … • Action and change is an important topic in KR & R • Its historical basis goes back to pre-Plato and Aristotle days • In AI it goes back to the founding days of the field • It has wide applicability • It involves various KR aspects • It involves various kinds of reasoning

  15. Outline of the rest of the talk • Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental) • Other aspects of action and change: here we will talk about mostly our work • Specifying Goals • Agent architecture • Applications • A future direction • Interesting issues with multiple agents

  16. The Yale Shooting Problem: Hanks & McDermott (AAAI 1986) • Nonmonotonic formal systems have been proposed as an extension to classical first-order logic that will capture the process of human “default reasoning” or “plausible inference” through their inference mechanisms, just as modus ponens provides a model for deductive reasoning. … • We provide axioms for a simple problem in temporal reasoning which has long been identified as a case of default reasoning, thus presumably amenable to representation in nonmonotonic logic. Upon examining the resulting nonmonotonic theories, however, we find that the inferences permitted by the logics are not those we had intended when we wrote the axioms, and in fact are much weaker. This problem is shown to be independent of the logic used; nor does it depend on any particular temporal representation. • Upon analyzing the failure we find that the nonmonotonic logics we considered are inherently incapable of representing this kind of default reasoning.

  17. Reiter 1991: A simple solution (sometimes) to the frame problem • Combines earlier proposals by Schubert (1990) and Pednault (1989) together with a suitable closure assumption. • Intermediate point: • Poss(a,s) ∧ preR+(a,s) → R(do(a,s)) • Poss(a,s) ∧ preR−(a,s) → ¬R(do(a,s)) • Closure: Poss(a,s) → [ R(do(a,s)) ↔ preR+(a,s) ∨ (R(s) ∧ ¬preR−(a,s)) ]
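The closure axiom has a direct operational reading, sketched below (the boolean inputs stand for the truth of the effect conditions preR+ and preR− in the current situation):

```python
def holds_after(pre_pos, pre_neg, held_before):
    """R holds in do(a,s) iff the positive effect condition preR+ fires,
    or R already held in s and the negative condition preR- does not fire."""
    return pre_pos or (held_before and not pre_neg)

# Yale-shooting-style readings of the three cases:
after_load = holds_after(pre_pos=True, pre_neg=False, held_before=False)
after_shoot = holds_after(pre_pos=False, pre_neg=True, held_before=True)
after_wait = holds_after(pre_pos=False, pre_neg=False, held_before=True)
```

The third case is the frame default: with neither effect condition firing, the fluent simply keeps its old value.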

  18. Lin & Shoham 1991: Provably correct theories of actions • “… argued that a useful way to tackle the frame problem is to consider a monotonic theory with explicit frame axioms first, and then to show that a succinct and provably equivalent representation using, for example, nonmonotonic logics, captures the frame axioms concisely”

  19. Sandewall – Features and Fluents • 1991/1994 book; IJCAI 1993; 1994 JLC: The range of applicability of some non-monotonic logics for strict inertia • Proposes a systematic methodology to analyze a proposed theory in terms of its selection function • When • Y is a scenario description (expressed using logical formulae), • Int(Y) is the set of intended models of Y, • S(Y) is the set of models of Y selected by the selection function S • Validation of S means showing • S(Y) = Int(Y) for an interesting and sufficiently large class of Y. • The range of applicability is the set Z such that Y ∈ Z → S(Y) = Int(Y)

  20. The language A – 1992 • 1992. Gelfond & Lifschitz. Representing actions in extended logic programs. Journal of Logic Programming version in 1993. • Syntax • Value propositions: F after A1; …; Am and initially F • Effect propositions: A causes F if P1, …, Pm • Domain description: a set of propositions • Semantics • Entailment between domain descriptions and value propositions • Entailment defined by models of domain descriptions • Models defined in terms of initial states and transitions between states due to actions • Sound translation to logic programs
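The transition semantics of effect propositions can be sketched as a small interpreter (our own encoding, not Gelfond & Lifschitz's): states are sets of true fluents, and a precondition is a (fluent, sign) literal.

```python
def transition(state, action, effect_props):
    """Apply all effect propositions "A causes F if P1, ..., Pm" whose
    action matches and whose preconditions hold; other fluents persist."""
    add, delete = set(), set()
    for act, fluent, sign, pre in effect_props:
        if act == action and all((f in state) == s for f, s in pre):
            (add if sign else delete).add(fluent)
    return (state | add) - delete  # inertia for everything unaffected

props = [("shoot", "alive", False, [("loaded", True)]),
         ("load", "loaded", True, [])]
s1 = transition({"alive", "loaded"}, "shoot", props)
```

If the precondition fails (e.g. shooting with an unloaded gun), no effect fires and the state is unchanged, which is exactly the default-persistence reading of the semantics.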

  21. Kartha 93: Soundness and completeness of three formalizations of actions • Used A as the base language • Proposed translations to • Pednault’s scheme • Reiter’s scheme • A circumscriptive scheme based on a method by Baker • Proved the soundness and completeness of the translations.

  22. 1990-91-92 • 1990: I first learn about Frame problem from Don Perlis • 1991-92: Learn more about it from Michael Gelfond

  23. Initial frame problem: succinctly specifying the state transition due to an action • What if we allow actions to be executed in parallel? • Do we explicitly specify the effects of each possible subset of actions executed in parallel? Too many! • Do we just add their effects? May not match reality: l_lift causes spilled; r_lift causes spilled; {l_lift, r_lift} causes ~spilled if ~spilled; {l_lift, r_lift} causes lifted; paint causes painted; initially ~spilled, ~lifted • Effect of actions executed in parallel: IJCAI 93; JLP 97 (with Gelfond)

  24. Our Solution and similar work • Inherit from subsets under normal circumstances; and • use specified exceptions when necessary. • High level language: syntax and semantics • Logic programming formulation • Correctness theorem • Similar work by Lin and Shoham in 1992.

  25. Our Solution: Excerpts from the high level language semantics • Execution of an action a in a state s causes a fluent literal f if • a immediately causes f (defined as: there is a proposition a causes f if p1, …, pn such that p1, …, pn hold in s), or • a inherits the effect f from its subsets in s (i.e., there is a b ⊂ a such that execution of b in s immediately causes f and there is no c such that b ⊆ c ⊆ a and execution of c in s immediately causes ¬f). • E+(a, s) = {f : f is a fluent and execution of a in s causes f} • E−(a, s) = {f : f is a fluent and execution of a in s causes ¬f} • F(a, s) = (s ∪ E+(a, s)) \ E−(a, s).
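A rough executable reading of this inheritance semantics, on the soup-lifting domain from the earlier slide (the encoding of literals and effect propositions is our own):

```python
from itertools import combinations

def subsets(a):
    """All non-empty subsets of a compound action, as frozensets."""
    return [frozenset(c) for r in range(1, len(a) + 1)
            for c in combinations(sorted(a), r)]

def direct(action, state, props):
    """Literals (fluent, sign) immediately caused by `action` in `state`."""
    return {lit for lit, pre in props.get(frozenset(action), [])
            if all((f in state) == s for f, s in pre)}

def effects(a, state, props):
    """Direct effects, plus effects inherited from subsets unless some
    larger subset (up to a itself) directly causes the opposite literal."""
    a = frozenset(a)
    caused = set(direct(a, state, props))
    for b in subsets(a):
        for (f, sign) in direct(b, state, props):
            cancelled = any((f, not sign) in direct(c, state, props)
                            for c in subsets(a) if b <= c)
            if not cancelled:
                caused.add((f, sign))
    return caused

# Lifting with one hand spills the soup; lifting with both does not,
# and lifts the pot (initially ~spilled, ~lifted).
props = {
    frozenset({"l_lift"}): [(("spilled", True), [])],
    frozenset({"r_lift"}): [(("spilled", True), [])],
    frozenset({"l_lift", "r_lift"}): [(("spilled", False), [("spilled", False)]),
                                      (("lifted", True), [])],
}
got = effects({"l_lift", "r_lift"}, set(), props)
```

Here {l_lift, r_lift} would inherit spilled from each single-handed lift, but its own ~spilled effect cancels that inheritance, so only the specified exception survives.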

  26. Our Solution: Excerpts from the logic programming axiomatization (F′ denotes the complement of F) • Inertia • holds(F, res(A,S)) ← holds(F,S), not may_i_cause(A, F′, S), atomic(A), not undefined(A,S). • Translating "a causes f if p1, …, pn" • may_i_cause(a, f, S) ← not h′(p1,S), …, not h′(pn,S). • cause(a, f, S) ← h(p1,S), …, h(pn,S). • Effect axioms • holds(F, res(A,S)) ← cause(A,F,S), not undefined(A,S). • undefined(A,S) ← may_i_cause(A,F,S), may_i_cause(A,F′,S). • Inheritance axioms • holds(F, res(A,S)) ← subset(B,A), holds(F, res(B,S)), not noninh(F,A,S), not undefined(A,S). • cancels(X,Y,F,S) ← subset(X,Z), subseteq(Z,Y), cause(Z,F′,S). • noninh(F,A,S) ← subseteq(U,A), may_i_cause(U,F′,S), not cancels(U,A,F′,S). • undefined(A,S) ← noninh(F,A,S), noninh(F′,A,S).

  27. Effect of actions in presence of specifications relating fluents in the world • Examples of "state constraints": • dead iff ~alive. • at(X) ∧ at(Y) → X = Y. • Winslett 1988: s′ ∈ F(a,s) if • s′ satisfies the direct effects (E) of the action plus the state constraints (C), and • there is no other state s″ that satisfies E and C and is closer (defined using symmetric difference) to s than s′. • But?

  28. Problems in using classical logic to express state constraints • Lin’s suitcase example (Lin – IJCAI 95) • flip1 causes up1 • flip2 causes up2 • State constraint: up1 ∧ up2 → open • initially up1, ~up2, ~open. • What happens if we do flip2? • But up1 ∧ up2 → open is equivalent to up2 ∧ ~open → ~up1 • Marrying and moving (me – IJCAI 95) • at(X) ∧ at(Y) → X = Y. • married_to(X) ∧ married_to(Y) → X = Y. • Ramification vs. Qualification.
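Winslett's minimization from the previous slide can be run on Lin's suitcase in a few lines (a sketch with our own state encoding): enumerate states consistent with the direct effects and the constraint, then keep those at minimal symmetric difference from the old state.

```python
from itertools import product

FLUENTS = ["up1", "up2", "open"]

def satisfying_states(constraint, effects):
    """All states (frozensets of true fluents) consistent with the
    action's direct effects and the state constraint."""
    out = []
    for bits in product([False, True], repeat=len(FLUENTS)):
        s = frozenset(f for f, b in zip(FLUENTS, bits) if b)
        if all((f in s) == v for f, v in effects.items()) and constraint(s):
            out.append(s)
    return out

def update(state, constraint, effects):
    """Winslett-style update: keep candidates minimally different
    (symmetric difference) from the old state."""
    cands = satisfying_states(constraint, effects)
    best = min(len(c ^ state) for c in cands)
    return [c for c in cands if len(c ^ state) == best]

# Suitcase: up1 & up2 -> open; initially only up1 holds; do flip2.
constraint = lambda s: not ({"up1", "up2"} <= s) or "open" in s
result = update(frozenset({"up1"}), constraint, {"up2": True})
```

Both survivors are minimal: the intended state where open becomes true, and an unintended one where up1 is mysteriously retracted instead. That tie is exactly why the slide argues that minimization over classical constraints cannot replace a causal reading.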

  29. Causal connection between fluents • We suggested in IJCAI 95 that a causal specification (in particular, Marek and Truszczynski’s revision programs) be used to specify "state constraints" • out(at_B) ← in(at_A). out(at_A) ← in(at_B). • ← in(married_to_A), in(married_to_B). • Presented a way to translate it to logic programs. • Thus a logic programming solution to the frame problem in the presence of "state constraints" that can express causality and that distinguishes between ramification and qualification. • We proved soundness and completeness theorems. • McCain and Turner presented a conditional-logic-based solution at the same IJCAI (1995). • Lin 1995: Embracing causality in specifying the indirect effects of actions • Thielscher 1996 • Used in the RCS-Advisor system developed at Texas Tech University.

  30. Knowledge and Sensing • Moore 1979, 1984 • For any two possible worlds w1 and w2 such that w2 is the result of executing a in w1, the worlds that are compatible with what the agent knows in w2 are exactly the worlds that result from executing a in some world that is compatible with what the agent knows in w1. • Suppose sense_f is an action that the agent can perform to know whether f is true. Then for any two worlds w1 and w2 such that w2 is the result of sense_f happening in w1, the worlds that are compatible with what the agent knows in w2 are exactly those worlds that result from sense_f happening in some world that is compatible with what the agent knows in w1 and in which f has the same truth value as in w2. • Scherl & Levesque 1993

  31. Knowledge and Sensing • Effect specifications • push_door causes open if ~locked, ~jammed • push_door causes jammed if locked • flip_lock causes locked if ~locked • flip_lock causes ~locked if locked • initially ~jammed, ~open • Goal: make open true • P1: if ~locked then push_door else (flip_lock; push_door) • P2: sense_locked; if ~locked then push_door else (flip_lock; push_door)
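A small sketch of why P2 works (the door encoding below is improvised from the slide's effect specifications): the sensing action reveals the actual value of locked, so the conditional branch can be chosen correctly in every possible world.

```python
def do(state, action):
    """Apply one effect specification from the slide to a concrete world."""
    s = dict(state)
    if action == "push_door":
        if not s["locked"] and not s["jammed"]:
            s["open"] = True
        elif s["locked"]:
            s["jammed"] = True
    elif action == "flip_lock":
        s["locked"] = not s["locked"]
    return s

def run_p2(world):
    """P2: sense_locked; if ~locked then push_door else (flip_lock; push_door)."""
    if not world["locked"]:  # value obtained by the sensing action
        return do(world, "push_door")
    return do(do(world, "flip_lock"), "push_door")
```

Without sense_locked, the agent cannot evaluate the test in P1 when its knowledge leaves locked undetermined; after sensing, P2 opens the door whether the initial world is locked or not.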

  32. Formalizing sensing actions: a transition function based approach (with Son, AIJ 2001) [Diagram: an ordinary action maps a state to a state, while a sensing action sense_f maps the set of possible states s1, s2, s3, s4, … to the subset s1′, s2′, s3′, … consistent with the sensed value of f.]

  33. Combining narratives with hypothetical reasoning: planning from the current situation • With Gelfond & Provetti, JLP 1997 – the language L • Besides effect axioms of the type • a causes f if p1, …, pn • we have occurrence and precedence facts of the form • f at si • a occurs_at si • si precedes sj

  34. An example • rent causes has_car • hit causes ~has_car • drive causes at_airport if has_car • drive causes ~at_home if has_car • pack causes packed if at_home • at_home at s0 • ~at_airport at s0 • has_car at s0 • PLAN; EXECUTE: s0 precedes s1, pack occurs_at s1 • OBSERVE: s1 precedes s2, ~has_car at s2 • The agent needs to make a new PLAN from the CURRENT situation

  35. From sensing and narratives to dynamic diagnosis: basic ideas (with McIlraith, Son: KR 2000) • Diagnosis: Reiter defined a diagnosis to be a fault assignment to the various components of the system that is consistent with (or explains) the observations; Thielscher extended it to dynamic diagnosis. • Dynamic diagnosis using L and sensing: • Necessity of diagnosis: when the observations are inconsistent with the assumption that all components were initially fine and no action that could break one of those components occurred, i.e., (SD \ SDab, OBS ∪ OK0) does not have a model • Diagnostic model M: a model of the narrative (SD, OBS ∪ OK0) • Narrative example • OBS: s0 < s1 < s2 < s3; ~light_on at s0; light_on at s1; ~light_on at s2; ~light_on at s3; turn_on occurs_at s0; turn_off occurs_at s1; turn_on occurs_between s2, s3 • OK0: ~ab(bulb) at s0. • Diagnostic plan: a conditional plan with sensing actions which, when executed, gives sufficient information to reach a unique diagnosis.

  36. Golog: JLP 1997 (Levesque, Reiter, Lesperance, Lin, Scherl) • A logic-based language to program robots/agents • Allows programs to reason about the state of the world and consider the effects of various possible courses of action before committing to a particular behavior • I.e., a program unfolds to an executable sequence of actions • Based on theories of action and an extended version of the situation calculus

  37. Features of Golog • Primitive actions • Test actions (fluent formulas to be tested in a situation) • Sequence • Non-deterministic choice of two actions • Non-deterministic choice of action arguments • Non-deterministic iteration (conditionals and while loops can be defined using it) • Procedures

  38. Lots of follow-up on Golog • Work at Toronto • Work at York • Work at Aachen • Etc.

  39. Other aspects of action description languages • Non-deterministic effects of actions • Probabilistic effects of actions with causal relationships; counterfactual reasoning • Defeasible specification of effects • Presence of triggers • Characterizing active databases • Actions with durations • Hybrid effects of actions • Thielscher’s fluent calculus • Event calculus • Modular action description • Learning action models • …

  40. Issues studied so far • Mostly about describing how actions may change the world

  41. Outline of the rest of the talk • Highlights of some important results and turning points in describing the world and how actions change the world (physical as well as mental) • Other aspects of action and change: mostly presenting our work • Specifying Goals and directives • Agent architecture • Applications • A future direction • Interesting issues with multiple agents

  42. Specifying goals and directives

  43. What are maintenance goals? • Always f, also written □f • Too strong for many kinds of maintainability (e.g., maintaining the room clean) • Always Eventually f, also written □◊f • Weak in the sense that it gives no estimate of when f will be made true • May not be achievable in the presence of continuous interference by belligerent agents • Spectrum from strongest to weakest: □f, then □◊k f, then □◊f • □◊3 f is shorthand for □(f ∨ ◯f ∨ ◯◯f ∨ ◯◯◯f) • But if an external agent keeps interfering, how is one supposed to guarantee □◊3 f?
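The bounded reading □◊k f can be checked on a finite trace with the k-step disjunction directly, as in this sketch (finite traces are an approximation: positions near the end of the trace see a truncated window):

```python
def always_eventually_k(trace, f, k):
    """From every position i, f must hold somewhere in positions i..i+k
    (the k-step disjunction f or Next f or ... or Next^k f)."""
    return all(any(f(s) for s in trace[i:i + k + 1])
               for i in range(len(trace)))

# f recurs every 4 steps: satisfies the 3-bounded goal but not the 2-bounded one.
trace = [True, False, False, False, True]
f = lambda s: s
```

Shrinking k slides the goal along the spectrum on the slide, from the weak □◊f toward the unforgiving □f.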

  44. Definition of k-maintainability: AAAI 00 • Given • A system A = (S, A, Φ), where • S is the set of system states • A is the union of agent actions Aag and environmental actions Aenv • Φ : S × A → 2^S • a set of initial states S0 ⊆ S, a set of maintenance states E ⊆ S, a parameter k, and a function exo : S → 2^Aenv describing exogenous action occurrences • we say that a control K k-maintains S0 with respect to E if • for each state s reachable from S0 via K and exo, and each sequence σ = s, s1, …, sr (r ≤ k) that unfolds within k steps by executing K, we have {s, s1, …, sr} ∩ E ≠ ∅.
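A checker for this definition can be sketched for the special case of a deterministic transition function (the paper's Φ is set-valued; the encoding and the toy system below are illustrative, and constructing controls is what the later SAT/Horn approach does):

```python
def k_maintains(control, step, exo, starts, E, k):
    """Check that `control` (state -> agent action) k-maintains `starts`
    w.r.t. maintenance states E. `step(s, a)` is a deterministic
    transition; `exo(s)` lists exogenous actions possible in s."""
    # 1. States reachable from the start states via the control and exo.
    reach, frontier = set(starts), list(starts)
    while frontier:
        s = frontier.pop()
        for t in [step(s, control[s])] + [step(s, a) for a in exo(s)]:
            if t not in reach:
                reach.add(t)
                frontier.append(t)
    # 2. From every reachable state, executing the control must pass
    #    through E within k steps.
    for s in reach:
        cur, ok = s, s in E
        for _ in range(k):
            cur = step(cur, control[cur])
            ok = ok or cur in E
        if not ok:
            return False
    return True

# Toy system: states 0..2, E = {0}; the agent's `tick` moves down one
# state, and an exogenous `kick` in state 0 throws the system to state 2.
step = lambda s, a: max(s - 1, 0) if a == "tick" else 2
exo = lambda s: ["kick"] if s == 0 else []
control = {0: "tick", 1: "tick", 2: "tick"}
```

Here the control 2-maintains {0}: however often the environment kicks the system to state 2, ticking returns it to E within two steps, but not within one.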

  45. No 3-maintainable policy for S = {b} with respect to E = {h} [Diagram: a transition graph over states b, c, d, e, f, g, h with agent actions a and a′ and exogenous transitions.]

  46. 3-maintainable policy for S = {b} with respect to E = {h}: Do a in b, c and d. [Diagram: the same transition graph, with action a selected in states b, c and d.]

  47. Finding k-maintainable policies (if one exists): an overview (joint work with T. Eiter): ICAPS 04 • Encode the problem in SAT; the models, if any exist, encode the k-maintainable policies. • This SAT encoding can be recast as a Horn logic program whose least model encodes the maximal control. • (Maintainability is closely related to Dijkstra’s self-stabilization in distributed systems.)

  48. Motivational goal: Try your best to reach a state where p is true. [Diagram: a transition system over states s1–s5 labeled with literals over p, q, r, s and actions a1–a7; p holds only in s5.]

  49. Try your best to reach p: Policy p1 [Diagram: the same transition system with the policy’s action choices marked; p holds only in s5.]

  50. LTL, CTL* and p-CTL* • LTL: Next, Always, Eventually, Until • For plans that are action sequences • CTL*: exists a path, for all paths • For plans that are action sequences • p-CTL*: exists a path following the policy under consideration, for all paths following the policy under consideration (ECAI 04) • For policies (mappings from states to actions)
