
Reasoning About Beliefs, Observability, and Information Exchange in Teamwork


Presentation Transcript


  1. Reasoning About Beliefs, Observability, and Information Exchange in Teamwork. Thomas R. Ioerger, Department of Computer Science, Texas A&M University

  2. The Need for Reasoning about Beliefs of Others in MAS
The traditional interpretation of BDI: the Beliefs, Desires, and Intentions of self. What about the beliefs of others? These matter for agent interactions, because decision-making depends on beliefs:
• Does the other driver see me?
• Does the other driver seem to be in a hurry?
• Did the other driver see who arrived at the intersection first?
• Does the other driver see my turn signal?
• Does the other driver allow a gap to open for changing lanes?

  3. The Need for Reasoning about Beliefs of Others in Teams
• Proactive Information Exchange: automatically share information with others; makes teamwork more efficient.
• Infer relevance from the pre-conditions of others' goals in the team plan.
• Should try to avoid redundancy.
Ideal conditions for agent A to send information I to agent B:
Bel(A,I) ∧ ¬Bel(A,Bel(B,I)) ∧ Bel(A,Goal(B,G)) ∧ Precond(I,G)
so that the receiver comes to believe I (Bel(B,I)) and can carry out its goal (Done(B,G)).
Example team-plan: catch-thief
(do B (turn-on light-switch))
(do A (enter-room))
(do A (jump-on thief))
Should B tell A the light is now on?
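As a concrete illustration, here is a minimal Python sketch of testing these send conditions. The beliefs/goals/preconds structures are hypothetical stand-ins for queries against the sender's belief database; this is not the paper's implementation.

```python
# Hedged sketch: should `sender` proactively tell `receiver` about `fact`?
# Data structures are illustrative assumptions, not the paper's API.

def should_tell(sender, receiver, fact, beliefs, goals, preconds):
    """beliefs: set of (agent, fact) pairs the sender believes; a fact may be
    a nested ('Bel', agent, fact) term for beliefs about other agents.
    goals: {agent: [goal, ...]} as believed by the sender.
    preconds: {goal: [fact, ...]} taken from the team plan."""
    believes_it = (sender, fact) in beliefs                               # Bel(A, I)
    thinks_receiver_knows = (sender, ('Bel', receiver, fact)) in beliefs  # Bel(A, Bel(B, I))
    relevant = any(fact in preconds.get(g, ())                            # Precond(I, G)
                   for g in goals.get(receiver, ()))                      # for some goal G of the receiver
    return believes_it and not thinks_receiver_knows and relevant

# catch-thief example: B (the sender) has turned on the light; A still has to enter the room.
beliefs = {('B', 'lightOn(room1)')}
goals = {'A': ['enter-room']}
preconds = {'enter-room': ['lightOn(room1)']}
print(should_tell('B', 'A', 'lightOn(room1)', beliefs, goals, preconds))  # True
```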

  4. Observability
Obs(a,f,y): agent a will observe f under conditions y (i.e. the "context").
Example: ∀x Obs(A1, broken(x), holding(A1,x))
Similarity to VSK logic (Wooldridge): V(f) = accessible, S(f) = perceives, K(f) = knows; Obs(a,f,y) corresponds to y → Sa(f).
Assumption: agents believe what they see: Sa(f) → Ka(f).
Small difference: we use Belief instead of Knowledge: Sa(f) → Ba(f). B is a weak-S5 modal logic (KD45, without the T axiom, so B(f) does not entail f).
Agents only believe whether f is true (or false):
Obs(a,f,y) ≡ y → [(f → Sa(f)) ∧ (¬f → Sa(¬f))]
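To make the "believe whether" reading concrete, here is a small Python sketch of an Obs(a,f,y) rule, assuming ground propositional facts and a context test over a world state; the ObsRule name and dict-based world are illustrative assumptions, not the paper's representation.

```python
# Illustrative sketch of Obs(a, f, y): if the context y holds, agent a
# perceives whether f is true or false and (by S_a(f) -> B_a(f)) believes it.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ObsRule:
    agent: str
    fact: str                                     # the proposition f
    context: Callable[[Dict[str, bool]], bool]    # the condition y

def observe(rule: ObsRule, world: Dict[str, bool]) -> Dict[str, bool]:
    """Return the beliefs the agent acquires from this rule in this world."""
    if rule.context(world):
        return {rule.fact: world[rule.fact]}      # sees whether f holds
    return {}                                     # context false: sees nothing

# Example from the slide: A1 observes broken(x) while holding x.
world = {'holding(A1,vase)': True, 'broken(vase)': False}
rule = ObsRule('A1', 'broken(vase)', lambda w: w['holding(A1,vase)'])
print(observe(rule, world))                       # {'broken(vase)': False}
```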

  5. Belief Database
Tuples: <A, F, V>, where A ∈ agents, F ∈ facts (propositions), V ∈ valuations.
valuations = {true, false, unknown, whether}
unknown: the agent believes neither the fact nor its negation.
whether: the agent believes either the fact or its negation, but we do not know which.
Example tuples:
<A1, in(gold,room1), true>  <A1, lightOn(room1), false>  <A1, in(A1,room1), true>  <A1, in(A1,room2), false>  <A1, in(A2,room1), false>  <A1, in(A2,room2), true>
<A2, in(gold,room1), unknown>  <A2, lightOn(room1), false>  <A2, in(A1,room1), true>  <A2, in(A1,room2), false>  <A2, in(A2,room1), whether>  <A2, in(A2,room2), whether>
Update Algorithm: Di+1 = Update(Di, P, J), where P is the set of perceptions and J the set of justification rules.
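A minimal sketch of how such a belief database could be held in memory, assuming a Python dict keyed by (agent, fact); the helper names are made up for illustration, and the prioritized update algorithm on the following slides decides which valuation actually wins.

```python
# Sketch of the <Agent, Fact, Valuation> belief database as a dict keyed by
# (agent, fact). Valuations follow the slide; helper names are assumptions.

VALUATIONS = {'true', 'false', 'unknown', 'whether'}

def make_db(tuples):
    return {(agent, fact): val for agent, fact, val in tuples}

def set_belief(db, agent, fact, val):
    assert val in VALUATIONS
    db[(agent, fact)] = val

db = make_db([
    ('A1', 'in(gold,room1)', 'true'),
    ('A2', 'in(gold,room1)', 'unknown'),
    ('A2', 'in(A2,room1)',   'whether'),
])
set_belief(db, 'A2', 'in(gold,room1)', 'true')   # e.g. after being told by A1
```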

  6. Justifications for Belief Updates (type / representation / priority / notes):
• direct observation: (sense s) ∧ y → f, priority 6 (self only)
• observability: (obs a f y), priority 5 (observations of others)
• effects of actions: (effect x q), priority 4 (if aware of x)
• inferences: (infer f y), priority 3 (y → f)
• memory: (persist f), priority 2 (f true OR false)
• assumptions: (default f), priority 1
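The priority ordering itself is easy to encode; the sketch below just picks the conclusion backed by the strongest justification type. The tuple format (type, fact, value) for a justification is an assumption made for illustration, not the paper's code.

```python
# Priorities follow the table above; the justification tuple layout is assumed.

JUSTIFICATION_PRIORITY = {
    'sense':   6,   # direct observation (self only)
    'obs':     5,   # observability of others
    'effect':  4,   # effects of actions (if aware of the action)
    'infer':   3,   # inference rules y -> f
    'persist': 2,   # memory: f stays true or false
    'default': 1,   # assumptions
}

def strongest(justifications):
    """Return the justification with the highest priority."""
    return max(justifications, key=lambda j: JUSTIFICATION_PRIORITY[j[0]])

# A fresh observation (priority 5) overrides persistence of an old value (2).
print(strongest([('persist', 'lightOn(room1)', 'true'),
                 ('obs',     'lightOn(room1)', 'false')]))
```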

  7. Belief Update Algorithm
Updating beliefs is not so simple...
Prioritized logic programs: Horn clauses annotated with strengths; semantics based on models in which facts are supported by the strongest rule.
Implementation (assuming rules are not cyclic):
• create a DAG of propositions
• topologically sort: P1..Pn
• determine truth values in order; Pi depends at most on the truth values of {P1..Pi-1}
Example rules (priority in parentheses): A ∧ B → C (1), G → C (2), C ∧ D → E (1), A ∧ F → E (3), giving a dependency DAG over the propositions A, B, C, D, E, F, G.
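Under the acyclicity assumption the slide makes, this evaluation can be sketched in a few lines of Python: build the dependency DAG, order propositions topologically, then let the strongest applicable rule fix each value. The rule format and the use of graphlib are my assumptions, and the conflicting rule pair in the example is made up to show the priorities at work.

```python
# Sketch of prioritized, acyclic rule evaluation. A rule is
# (body, head, value, priority), with body a list of (prop, required_value).
from graphlib import TopologicalSorter

def evaluate(rules, initial):
    """initial: truth values already fixed, e.g. by perception."""
    deps = {}
    for body, head, _, _ in rules:
        deps.setdefault(head, set()).update(p for p, _ in body)
    values = dict(initial)
    for prop in TopologicalSorter(deps).static_order():    # P1..Pn, deps first
        if prop in values:
            continue                                       # already fixed
        fired = [(pri, val) for body, head, val, pri in rules
                 if head == prop and all(values.get(p) == v for p, v in body)]
        if fired:
            values[prop] = max(fired)[1]                   # strongest rule wins
    return values

# Two conflicting rules for C: A & B -> C (priority 1), G -> not C (priority 2).
rules = [([('A', True), ('B', True)], 'C', True, 1),
         ([('G', True)],              'C', False, 2)]
print(evaluate(rules, {'A': True, 'B': True, 'G': True}))  # C comes out False
```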

  8. PIEX Algorithm
PIEX = Proactive Information EXchange
Given: belief database D, perceptions P, and justification rules J
D' ← BeliefUpdateAlg(D, P, J)
for each agent Ai ∈ Agents and each G ∈ Goals(Ai)
  for each C ∈ PreConditions(G)
    if C is a positive literal, let v ← true
    if C is a negative literal, let v ← false
    if <Ai, C, not(v)> ∈ D' or <Ai, C, unknown> ∈ D'
      Tell(Ai, C, v)
      Update(D', <Ai, C, v>)
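The pseudocode translates almost line-for-line into Python. In the sketch below the belief database is the (agent, fact) → valuation dict from slide 5, belief_update is the prioritized update from slide 7, and tell is whatever messaging primitive the agent architecture provides; all of these bindings are assumptions, not the paper's API.

```python
# Minimal transcription of PIEX. Preconditions are given as
# (fact, 'true'|'false') pairs so positive and negative literals are uniform.

def piex(db, perceptions, rules, goals, preconds, belief_update, tell):
    """db: {(agent, fact): 'true'|'false'|'unknown'|'whether'}
    goals: {agent: [goal, ...]}; preconds: {goal: [(fact, value), ...]}"""
    db = belief_update(db, perceptions, rules)            # D' <- BeliefUpdateAlg(D, P, J)
    for agent, agent_goals in goals.items():              # for each Ai and G in Goals(Ai)
        for goal in agent_goals:
            for fact, needed in preconds.get(goal, []):   # for each C in PreConditions(G)
                opposite = 'false' if needed == 'true' else 'true'
                if db.get((agent, fact)) in (opposite, 'unknown'):
                    tell(agent, fact, needed)             # Tell(Ai, C, v)
                    db[(agent, fact)] = needed            # Update(D', <Ai, C, v>)
    return db
```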

  9. Experiment: Wumpus Roundup!

  10. Issues
The current formalism does not allow for nested beliefs:
Bel(A1, Bel(A2, lightOn(room5)))
Bel(A1, Bel(A2, Bel(A1, lightOn(room5))))
see Isozaki and Katsuno (1996).
We are working on a representation of modal logic in Prolog that allows nested beliefs and rules; it uses backward-chaining rather than forward-chaining (e.g. UpdateAlg) and, of course, it is not complete.
Better reasoning about knowledge of actions: assert pre-conditions before effects? uncertainty about the do-er and the time?
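For illustration only, nested beliefs can be written as recursive terms once the "fact" field of a tuple is allowed to be a belief itself; this is a guess at one possible encoding, not the Prolog representation the slide refers to.

```python
# Illustrative nested-belief terms; the flat <Agent, Fact, Valuation> tuples
# above cannot express these.
from typing import NamedTuple, Union

class Bel(NamedTuple):
    agent: str
    fact: Union[str, 'Bel']

nested = Bel('A1', Bel('A2', 'lightOn(room5)'))
deeper = Bel('A1', Bel('A2', Bel('A1', 'lightOn(room5)')))
```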

  11. Conclusions
1. Modeling beliefs of others is important for multi-agent interactions.
2. Observability is a key to modeling others' beliefs.
3. It must be integrated properly with other justifications, such as inference, persistence...
4. Different strengths can be managed using prioritized inference (Prioritized Logic Programs).
5. Proactive information exchange can improve the performance of teams.
6. Message traffic can be intelligently reduced by reasoning about beliefs.
