
Modeling Belief Reasoning in Multi-Agent Systems*


Presentation Transcript


  1. Modeling Belief Reasoning in Multi-Agent Systems*
     Thomas R. Ioerger
     Department of Computer Science, Texas A&M University
     *funding provided by a MURI grant through DoD/AFOSR

  2. Motivation
     - Many interactions in collaborative MAS require reasoning about the beliefs of others
     - There are no efficient, complete inference procedures for modal logics of belief
     - Need a practical way of maintaining models of other agents' beliefs
     - Want it to work much like JARE
     - Correct representation of "unknown" is a must
     - New issue: how do you know what others believe? (there are various reasons, of different strengths)

  3. Approach
     - BOA - Beliefs of Other Agents
     - Define a hierarchy of justification types, including: rules, defaults, persistence
       - also incorporate observability, a major source of info about others' beliefs
     - Represent truth-values explicitly (T, F, ?)
     - Update cycle (given current beliefs, senses, ...)
     - Like JARE: syntax, binding envs, API, query, assert; forward-chaining
     - Single level of nesting: (bel joe (open door)) (see the sketch after this slide)
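To make the representation concrete, here is a minimal Python sketch of explicit truth-values with single-level nesting. BOA itself is built alongside JARE (a Java system); the names below (TruthValue, BeliefBase) are illustrative assumptions, not BOA's actual API:

    # Sketch only: explicit truth-values and single-level belief nesting.
    from enum import Enum

    class TruthValue(Enum):
        T = "T"        # believed true
        F = "F"        # believed false
        UNKNOWN = "?"  # explicitly unknown

    class BeliefBase:
        """Maps (agent, proposition) pairs to truth values. Nesting is
        single-level: (bel joe (open door)) is representable,
        (bel joe (bel sam ...)) is not."""
        def __init__(self):
            self.beliefs = {}

        def assert_belief(self, agent, proposition, value):
            self.beliefs[(agent, proposition)] = value

        def query(self, agent, proposition):
            # Anything never asserted is also reported as unknown.
            return self.beliefs.get((agent, proposition), TruthValue.UNKNOWN)

    kb = BeliefBase()
    kb.assert_belief("joe", ("open", "door"), TruthValue.T)
    print(kb.query("joe", ("open", "door")))  # TruthValue.T
    print(kb.query("sam", ("open", "door")))  # TruthValue.UNKNOWN

The explicit "?" value is what lets "unknown" be asserted, queried, and defaulted directly, rather than merely inferred from the absence of a fact.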

  4. Justifications
     Rule types (with strengths for resolving conflicts; see the sketch below):
     8 - assertions (e.g., from shell, perceptions, messages)
     7 - facts (static, never change truth value)
     6 - direct observation (self)
     5 - effects of actions (for whoever is aware of it...)
     4 - inferences (by any agent)
     3 - observability (of others)
     2 - persistence ("memory": certain beliefs persist)
     1 - default assumptions (given no other evidence)
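As a hedged illustration of how these strengths resolve conflicts: if several justification types support different truth-values for the same belief, the strongest one determines the result. The table mirrors the slide's scale; the resolve function is a sketch, not BOA's implementation:

    # Sketch: resolving conflicting support by justification strength.
    STRENGTH = {
        "assertion": 8, "fact": 7, "direct-observation": 6, "effect": 5,
        "inference": 4, "observability": 3, "persistence": 2, "default": 1,
    }

    def resolve(supports):
        """supports: list of (justification_type, truth_value) pairs.
        Returns the truth-value backed by the strongest justification."""
        _, value = max(supports, key=lambda s: STRENGTH[s[0]])
        return value

    # A default leaves the wumpus's status unknown, but an inference
    # concluded it is dead: inference (strength 4) beats default (strength 1).
    print(resolve([("default", "?"), ("inference", "F")]))  # -> F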

  5. BOA Syntax
     (infer (bel ?a (not (tank empty ?car)))   ; consequent
            (bel ?a (running ?car)))           ; antecedent
     (persist (bel ?a (light-on ?room)))
     (obs ?a (light-on ?r) (in ?a ?r) (light-on ?r))
     (effect (running ?c) (do ?a (start ?c)) (have ?a (keys ?c)))   ; context condition
     (default (unknown (wumpus alive)))
     (init (val (num-arrows) 3))   ; no (bel ...) wrapper: believer assumed to be 'self'; (num-arrows) is a function
     (infer (can-shoot) (val (num-arrows) ?x) (> ?x 0))   ; (> ?x 0) is a procedural attachment
     (obs (bel ?a (whether (light-on ?rm))) (in ?a ?rm))
     (fact (bel archer (value-of (num-arrows))))
     (fact (bel archer (whether (can-shoot))))

  6. Prioritized Inference
     - What conclusions can be drawn from the KB?
     - Update cycle (hence forward chaining):
       KB' = update(KB, senses, action?, justification rules)
     - If multiple rules relevant to a predicate can fire, we want the strongest to determine the truth-value
     - Must control the order of firing, to avoid premature conclusions...
     - Semantics based on prioritized logic programs (Brewka & Eiter; Sakama & Inoue; Delgrande & Schaub)
     - Sort predicates by antecedent dependencies (no circularities allowed); fire all rules for the least-dependent predicate first (sketched below)
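A rough Python sketch of one such update cycle, assuming each rule is a tuple (strength, head predicate, body predicates, conclude function) over string truth-values "T"/"F"/"?"; this is one reading of the slide, not BOA's actual code:

    # Sketch: prioritized forward chaining over dependency-sorted predicates.
    from graphlib import TopologicalSorter  # Python 3.9+

    def update(kb, senses, rules):
        kb = dict(kb)
        kb.update(senses)  # strength-8 assertions enter first
        # No circularities allowed, so predicates can be ordered such that a
        # predicate is processed only after everything its rules depend on.
        deps = {}
        for _, head, body, _ in rules:
            deps.setdefault(head, set()).update(body)
        for pred in TopologicalSorter(deps).static_order():
            # Fire every applicable rule for this predicate; the strongest
            # applicable rule fixes the truth-value.
            applicable = [(strength, conclude(kb))
                          for strength, head, body, conclude in rules
                          if head == pred and all(kb.get(b) == "T" for b in body)]
            if applicable:
                kb[pred] = max(applicable)[1]
        return kb

Because circular dependencies are disallowed, the topological order always exists, and each predicate's truth-value is settled exactly once per cycle, least-dependent predicates first.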

  7. Queries in BOA
     - You get back a Vector of JareEnv's with variable bindings for alternative solutions (as usual; toy illustration below):
       (query (threat enemy-unit-17))
       (query (val (target enemy-unit-17) ?target))
       (query (bel sam (light-on room-1)))
       (query (bel joe (light-on ?r)))
       (query (bel joe (not (light-on room-1))))
       (query (bel joe (whether (light-on room-1))))
     - Can't use a variable for the agent name:
       (query (bel ?a (has-weapon ?a))) does not work!
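As a toy illustration of the "one binding environment per solution" behavior (plain Python; not the actual JareEnv class, which is part of JARE's Java API):

    # Sketch: matching a query pattern against facts yields one binding
    # environment (here, a dict) per alternative solution.
    def match(pattern, fact):
        if len(pattern) != len(fact):
            return None
        env = {}
        for p, f in zip(pattern, fact):
            if isinstance(p, str) and p.startswith("?"):
                if env.get(p, f) != f:   # repeated variables must agree
                    return None
                env[p] = f
            elif p != f:
                return None
        return env

    facts = [("light-on", "room-1"), ("light-on", "room-2")]
    envs = [e for f in facts if (e := match(("light-on", "?r"), f)) is not None]
    print(envs)  # [{'?r': 'room-1'}, {'?r': 'room-2'}]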

  8. Integration into CAST (?)
     - Useful for writing plans that depend on what others believe
     - Challenges: interaction with JARE, and variable binding in conditions
     - Preliminary experiments:
       - Method 1: separate JARE and BOA KBs
         (beliefs can't be used as conditions in IF, WHILE, ...; use bq-if instead)
       - Method 2: replace JARE completely
         (efficiency? assert/retract is used for many things in CAST)

  9. Example task: inform gunners of an enemy's location when they don't already know it.
     (task inform-others-of-loc (?enemy)
       (seq (bupdate)
            (foreach ((agent ?ag))
              (if (cond (gunner ?ag) (not (radio-silence)))
                  (bq-if (bel ?ag (unknown (val (loc ?enemy))))
                         (bq-if (val (loc ?enemy) ?loc)
                                (seq (send ?ag (val (loc ?enemy) ?loc))
                                     (bassert (bel ?ag (val (loc ?enemy) ?loc))))))))))

  10. Concluding Remarks
      - Ryan's experience was useful for implementing Proactive Information Exchange in CAST-PM
        (Master's thesis online: http://faculty.cs.tamu.edu/ioerger/thesi/rozich-thesis.pdf)
      - Reflections on belief reasoning...
        - awkward to have to say that everything persists!
        - not as expressive as modal logic, but efficient (no nested beliefs)
        - the real issue is managing the various reasons for beliefs about others' beliefs (observability, actions, inference, defaults, persistence, ...)
