

  1. Artificial Intelligence Chapter 23. Multiple Agents

  2. Outline
  • Interacting Agents
  • Models of Other Agents
  • A Modal Logic of Knowledge
  • Additional Readings and Discussion

  3. 23.1 Interacting Agents
  • Agents' objectives
    • To predict what another agent will do: need methods to model the other agent
    • To affect what another agent will do: need methods to communicate with the other agent
  • Focus: distributed artificial intelligence (DAI)

  4. 23.2 Models of Other Agents
  • Varieties of models
    • Need for a model: to predict the behavior of other agents and processes
    • Models are focused, high-level models (e.g., a T-R model)
    • The model, together with the apparatus for using it to select actions, is called a cognitive structure
    • A cognitive structure often includes an agent's goals and intentions
    • Our focus within a cognitive structure: its model of its environment and of the cognitive structures of other agents

  5. 23.2 Models of Other Agents
  • Modeling strategies
    • Iconic models: attempt to simulate relevant aspects of the environment
    • Feature-based models: attempt to describe the environment

  6. 23.2 Models of Other Agents
  • Simulation strategies
    • Often useful, but suffer from the difficulty of representing ignorance or uncertainty
  • Simulation databases (a small sketch follows below)
    • Build a hypothetical DB of formulas presumed to be the same formulas that our agent thinks actually populate the other agent's world model
    • Have the same deficiencies as iconic models
    • Uncertainty: all that can be represented is whether or not our agent's hypothetical DB contains a formula it thinks the other agent has
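
  A minimal Python sketch (not from the slides; the class name SimulationDB and the string encoding of formulas are invented for illustration) of such a simulation database. Note that the only distinction it can draw is whether a formula is present or absent, which is exactly the difficulty with ignorance and uncertainty noted above.

      # Hypothetical simulation database: the formulas our agent presumes
      # populate the other agent's world model (illustrative names only).
      class SimulationDB:
          def __init__(self, formulas=None):
              self.formulas = set(formulas or [])

          def believes(self, formula):
              # The only distinction available: the formula is present or it is not.
              return formula in self.formulas

          def observe(self, formula):
              # Update our model of the other agent after it (presumably) perceives something.
              self.formulas.add(formula)

      # Our agent's model of Sam's world model.
      sam = SimulationDB({"On(A,B)", "Clear(A)"})
      print(sam.believes("On(A,B)"))  # True
      print(sam.believes("On(A,C)"))  # False -- indistinguishable from "unknown"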

  7. 23.2 Models of Other Agents
  • The intentional stance
    • Capable of describing another agent's knowledge and beliefs about the world
    • Describing another agent's knowledge and beliefs is called taking the intentional stance
    • Three possibilities for constructing intentional-stance models (a sketch of the first two follows below):
      1. Reify the other agent's beliefs [McCarthy 1979]: Bel( On(A,B) )
      2. Have our agent assume that the other agent actually represents its beliefs about the world by predicate-calculus formulas in its DB: Bel( Sam, On(A,B) )
      3. Use modal operators
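
  A rough Python sketch (illustrative only; believed formulas are reified as plain strings so they can appear as terms) contrasting the first two notations. Option 3, modal operators, is developed in Section 23.3.

      # Two ways of recording "Sam believes that A is on B" (illustrative encoding).
      def Bel(*args):
          # Option 1: Bel("On(A,B)")        -- the believing agent is left implicit.
          # Option 2: Bel("Sam", "On(A,B)") -- the believing agent is named explicitly.
          return ("Bel",) + args

      db = {Bel("On(A,B)"), Bel("Sam", "On(A,B)")}
      print(("Bel", "Sam", "On(A,B)") in db)  # True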

  8. 23.3 A Modal Logic of Knowledge
  • Modal operators
    • A modal operator constructs a formula whose intended meaning is that a certain agent knows a certain proposition
    • e.g., K( Sam, On(A,B) ); in general K(α, φ), also written Kα(φ)
  • Knowledge and belief
    • Whereas an agent can believe a false proposition, it cannot know anything that is false
    • Hence the logic of knowledge is simpler than the logic of belief

  9. 23.3 A Modal Logic of Knowledge
  • A modal first-order language using the operator K
  • Syntax (a small checker sketch follows below):
    1. All of the wffs of ordinary first-order predicate calculus are also wffs of the modal language.
    2. If φ is a closed wff of the modal language, and if α is a ground term, then K(α, φ) is a wff of the modal language.
    3. As usual, if φ and ψ are wffs, then so are any expressions that can be constructed from φ and ψ by the usual propositional connectives.
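
  A small Python sketch (my own illustration, not part of the chapter) of these three formation rules. Formulas are nested tuples such as ("K", "Agent1", ("On", "A", "B")); the ground-term and closed-wff tests are deliberately simplified stand-ins.

      # Simplified check of the three formation rules for the modal language.
      CONNECTIVES = {"and", "or", "not", "implies"}

      def is_ground(term):
          # Rule 2 requires a ground agent term; here any plain string counts.
          return isinstance(term, str)

      def is_modal_wff(f):
          if isinstance(f, tuple) and f[0] == "K":
              # Rule 2: K(alpha, phi), alpha a ground term, phi a (closed) wff.
              _, alpha, phi = f
              return is_ground(alpha) and is_modal_wff(phi)
          if isinstance(f, tuple) and f[0] in CONNECTIVES:
              # Rule 3: propositional combinations of wffs are wffs.
              return all(is_modal_wff(sub) for sub in f[1:])
          # Rule 1: anything else is treated as an ordinary first-order wff.
          return True

      print(is_modal_wff(("K", "Agent1", ("K", "Agent2", ("On", "A", "B")))))  # True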

  10. 23.3 A Modal Logic of Knowledge
  • Examples:
    • K(Agent1, K(Agent2, On(A,B))) : Agent1 knows that Agent2 knows that A is on B.
    • K(Agent1, On(A,B)) ∨ K(Agent1, On(A,C)) : Either Agent1 knows that A is on B, or it knows that A is on C.
    • K(Agent1, On(A,B) ∨ On(A,C)) : Agent1 knows that either A is on B or A is on C.
    • K(Agent1, On(A,B)) ∨ K(Agent1, ¬On(A,B)) : Agent1 knows whether or not A is on B.
    • ¬K(Agent1, On(A,B)) : Agent1 does not know that A is on B.
    • (∃x)K(Agent1, On(x,B)) : not a legal wff, since the rules above do not allow quantifying into the K operator.

  11. 23.3 A Modal Logic of Knowledge
  • Knowledge axioms
    • The ordinary connectives (∧, ∨, ¬, ⊃) have compositional semantics; the semantics of K is not compositional.
    • The truth value of Kα(φ) does not depend compositionally on α and φ.
    • Even when φ ≡ ψ, Kα(φ) ≡ Kα(ψ) need not hold for all α, since α might not know that φ is equivalent to ψ.
  • Axiom schemas
    • Distribution axiom: [Kα(φ) ∧ Kα(φ ⊃ ψ)] ⊃ Kα(ψ) … (1)  (equivalently, Kα(φ ⊃ ψ) ⊃ [Kα(φ) ⊃ Kα(ψ)] … (2))
    • Knowledge axiom: Kα(φ) ⊃ φ … (3). An agent cannot possibly know something that is false.
    • Positive-introspection axiom: Kα(φ) ⊃ Kα(Kα(φ)) … (4)

  12. 23.3 A Modal Logic of Knowledge
  • Negative-introspection axiom: ¬Kα(φ) ⊃ Kα(¬Kα(φ)) … (5)
  • Epistemic necessitation: from ├ φ, infer Kα(φ) … (6)
  • Logical omniscience: from φ ├ ψ and Kα(φ), infer Kα(ψ) … (7)  (equivalently, from ├ (φ ⊃ ψ), infer Kα(φ) ⊃ Kα(ψ) … (8))
  • From logical omniscience: K(α, φ ∧ ψ) ≡ [K(α, φ) ∧ K(α, ψ)] … (9)
  (A small sketch using schemas (1) and (3) as inference rules follows below.)
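
  A small Python sketch (illustrative; the tuple encoding of formulas is my own) of the distribution axiom (1) and the knowledge axiom (3) used as forward inference rules over a set of K-formulas:

      # K(alpha, phi) is written ("K", alpha, phi); phi -> psi as ("implies", phi, psi).
      def apply_axioms(facts):
          derived = set(facts)
          changed = True
          while changed:
              changed = False
              new = set()
              for f in derived:
                  if isinstance(f, tuple) and f[0] == "K":
                      _, alpha, phi = f
                      # Knowledge axiom (3): K_alpha(phi) implies phi.
                      new.add(phi)
                      # Distribution (1): K_alpha(phi -> psi) and K_alpha(phi) give K_alpha(psi).
                      if isinstance(phi, tuple) and phi[0] == "implies":
                          _, ant, cons = phi
                          if ("K", alpha, ant) in derived:
                              new.add(("K", alpha, cons))
              if not new <= derived:
                  derived |= new
                  changed = True
          return derived

      facts = {("K", "Sam", ("implies", "On(A,B)", "Above(A,B)")),
               ("K", "Sam", "On(A,B)")}
      closure = apply_axioms(facts)
      print(("K", "Sam", "Above(A,B)") in closure)  # True, via distribution
      print("On(A,B)" in closure)                   # True, via the knowledge axiom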

  13. 23.3 A Modal Logic of Knowledge
  • Reasoning about other agents' knowledge
    • Our agent can carry out proofs of some statements about the knowledge of other agents using only the axioms of knowledge, epistemic necessitation, and its own reasoning ability (modus ponens, resolution).
  • Example: the Wise-Man puzzle
    • Setup: among three wise men, at least one has a white spot on his forehead. Each wise man can see the others' foreheads but not his own. Two of them have said, "I don't know whether I have a white spot."
    • Proof of K(A, White(A)), where A is the third man:
      1. KA[¬White(A) ⊃ KB(¬White(A))] (given)
      2. KA[KB(¬White(A) ⊃ White(B))] (given)
      3. KA(¬KB(White(B))) (given)
      4. ¬White(A) ⊃ KB(¬White(A)) (1, and axiom 3)
      5. KB(¬White(A) ⊃ White(B)) (2, and axiom 3)

  14. 23.3 A Modal Logic of Knowledge
      6. KB(¬White(A)) ⊃ KB(White(B)) (5, and axiom 2)
      7. ¬White(A) ⊃ KB(White(B)) (resolution on the clause forms of 4 and 6)
      8. ¬KB(White(B)) ⊃ White(A) (contrapositive of 7)
      9. KA[¬KB(White(B)) ⊃ White(A)] (1-5, 8, and rule 7)
      10. KA(¬KB(White(B))) ⊃ KA(White(A)) (9, and axiom 2)
      11. KA(White(A)) (modus ponens using 3 and 10)
  (A resolution sketch of the propositional core, steps 4-8, follows below.)
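
  A small Python sketch (my own illustration) of the propositional core of this proof, steps 4-8, by binary resolution; the modal steps 9-11 then wrap the result using rule (7) and axiom (2) as shown above. Atom names such as "W_A" abbreviate the modal subformulas.

      # Atoms: "W_A" = White(A), "KB_notW_A" = K_B(not White(A)), "KB_W_B" = K_B(White(B)).
      # A clause is a frozenset of (atom, sign) literals.
      def resolve(c1, c2):
          # Return all binary resolvents of two clauses.
          out = []
          for atom, sign in c1:
              if (atom, not sign) in c2:
                  out.append(frozenset((c1 - {(atom, sign)}) | (c2 - {(atom, not sign)})))
          return out

      clauses = {
          frozenset({("W_A", True), ("KB_notW_A", True)}),     # step 4 in clause form
          frozenset({("KB_notW_A", False), ("KB_W_B", True)}), # step 6 in clause form
          frozenset({("KB_W_B", False)}),                      # step 3, the part inside K_A
      }

      changed = True
      while changed:                      # saturate by binary resolution
          changed = False
          for c1 in list(clauses):
              for c2 in list(clauses):
                  for r in resolve(c1, c2):
                      if r not in clauses:
                          clauses.add(r)
                          changed = True

      print(frozenset({("W_A", True)}) in clauses)  # True: White(A) follows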

  15. 23.3 A Modal Logic of Knowledge
  • Predicting actions of other agents
    • In order to predict what another agent, A1, will do:
    • If A1 is not too complex, our agent may assume that A1's actions are controlled by a T-R program. Suppose the conditions in that program are γi, for i = 1, …, k. To predict A1's future actions, our agent needs to reason about how A1 will evaluate these conditions.
    • It is often appropriate for our agent to take an intentional stance toward A1 and attempt to establish whether or not KA1(γi), for i = 1, …, k (a small sketch follows below).
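
  A minimal Python sketch (illustrative; the conditions, actions, and the knows_A1 set are invented) of this prediction scheme: the assumed T-R program is an ordered list of (condition, action) pairs, and our agent predicts A1's next action by finding the first condition γi for which, in its model of A1, KA1(γi) holds.

      # Assumed T-R program controlling A1 (ordered: the first satisfied condition fires).
      tr_program = [
          ("Holding(Block)", "put_down"),
          ("Clear(Block)",   "pick_up"),
          ("True",           "wander"),   # default rule
      ]

      # Our agent's model of what A1 knows (the intentional stance toward A1).
      knows_A1 = {"Clear(Block)", "True"}

      def predict_action(program, known):
          # Return the action paired with the first condition gamma_i such that K_A1(gamma_i).
          for condition, action in program:
              if condition in known:
                  return action
          return None

      print(predict_action(tr_program, knows_A1))  # "pick_up"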

  16. Additional Readings and Discussion
  • [Shoham 1993] : "agent-oriented programming"
  • [Minsky 1986] : Society of Mind
  • [Hintikka 1962] : modal logic of knowledge
  • [Kripke 1963] : possible-worlds semantics for modal logics
  • [Moore 1985b] : possible-worlds semantics within first-order logic
  • [Levesque 1984b], [Fagin & Halpern 1985], [Konolige 1986]
  • [Cohen & Levesque 1990] : modal logic for the relationship between intention and commitment
  • [Bond & Gasser 1988] : DAI paper collection
  (C) 2000, 2001 SNU CSE Biointelligence Lab
