Reasoning About Beliefs, Observability, and Information Exchange in Teamwork
Thomas R. Ioerger
Department of Computer Science, Texas A&M University
The Need for Reasoning about Beliefs of Others in MAS
• The traditional interpretation of BDI: the Beliefs, Desires, and Intentions of self
• What about the beliefs of others? They are important for agent interactions
• Decision-making depends on beliefs:
  - Does the other driver see me?
  - Does the other driver seem to be in a hurry?
  - Did the other driver see who arrived at the intersection first?
  - Does the other driver see my turn signal?
  - Does the other driver allow a gap to open for changing lanes?
The Need for Reasoning about Beliefs of Others in Teams
• Proactive Information Exchange:
  - automatically share info with others
  - makes teamwork more efficient
  - infer relevance from pre-conditions of others' goals in the team plan
  - should try to avoid redundancy
• Ideal conditions for agent A to send agent B a message about information I that is a precondition of B's goal G (sketched in code below):
  Bel(A,I) ∧ Bel(A,¬Bel(B,I)) ∧ Bel(A,Goal(B,G)) ∧ Precond(I,G)
  (the message establishes Bel(B,I), which enables Done(B,G))
• Example team-plan: catch-thief
  (do B (turn-on light-switch))
  (do A (enter-room))
  (do A (jump-on thief))
  Should B tell A the light is now on?
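A minimal Python sketch of this send condition. The predicate names (bel, goal, precond) and the tuple encoding of nested beliefs are illustrative assumptions, not the paper's API:

```python
# Hypothetical sketch of the "ideal conditions to send message" test.
# bel/goal/precond are placeholder predicates over the sender's beliefs.

def should_tell(bel, goal, precond, A, B, I, G):
    return (bel(A, I)                          # Bel(A, I)
            and bel(A, ("not-bel", B, I))      # Bel(A, ¬Bel(B, I)): avoid redundancy
            and goal(B, G)                     # Bel(A, Goal(B, G))
            and precond(I, G))                 # Precond(I, G)

# Toy usage: B knows the light is on and believes A does not know it yet.
bel = lambda a, f: (a, f) in {("B", "lightOn"), ("B", ("not-bel", "A", "lightOn"))}
print(should_tell(bel, lambda b, g: True, lambda i, g: True,
                  "B", "A", "lightOn", "jump-on-thief"))   # True
```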
Observability
• Obs(a,φ,ψ): agent a will observe φ under conditions ψ (i.e. the "context")
  - example: ∀x. Obs(A1, broken(x), holding(A1,x))
• Similarity to VSK logic (Wooldridge): V(φ)=accessible, S(φ)=perceives, K(φ)=knows
  - Obs(a,φ,ψ) roughly corresponds to ψ → S_a(φ)
  - Assumption: agents believe what they see: S_a(φ) → K_a(φ)
• Small differences:
  - we use Belief instead of Knowledge: S_a(φ) → B_a(φ)
    (B is a weak-S5 modal logic: KD45, without the T axiom, so B(φ) ⊭ φ)
  - agents only believe whether φ is true (or false):
    Obs(a,φ,ψ) ≡ ψ → [(φ → S_a(φ)) ∧ (¬φ → S_a(¬φ))]
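To make the semantics concrete, here is a minimal Python sketch (my own illustration, with invented names like ObsRule and derive_beliefs) of evaluating Obs rules against a ground world state. Because Obs yields belief whether φ holds, the observing agent learns the actual truth value either way:

```python
# Sketch: deriving beliefs from Obs(a, f, psi) rules over a ground world.
from dataclasses import dataclass

@dataclass
class ObsRule:
    agent: str      # the observing agent a
    fact: str       # the proposition f the agent can observe
    context: str    # the condition psi under which observation occurs

def derive_beliefs(world: dict, rules: list) -> dict:
    """world maps ground propositions to True/False; returns
    {(agent, fact): truth value} for every observation that fires."""
    beliefs = {}
    for r in rules:
        if world.get(r.context, False):          # context psi holds
            beliefs[(r.agent, r.fact)] = world.get(r.fact, False)
    return beliefs

# A1 observes whether the vase is broken while holding it.
world = {"holding(A1,vase)": True, "broken(vase)": False}
rules = [ObsRule("A1", "broken(vase)", "holding(A1,vase)")]
print(derive_beliefs(world, rules))   # {('A1', 'broken(vase)'): False}
```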
Belief Database
• tuples: <A,F,V>, where A ∈ agents, F ∈ facts (propositions), V ∈ valuations
• valuations = {true, false, unknown, whether}
  - information ordering: unknown below whether, whether below true/false
    ("whether" = the agent believes either f or ¬f, but which one is not recorded)
• example (beliefs attributed to agents A1 and A2):
  <A1, in(gold,room1), true>     <A2, in(gold,room1), unknown>
  <A1, lightOn(room1), false>    <A2, lightOn(room1), false>
  <A1, in(A1,room1), true>       <A2, in(A1,room1), true>
  <A1, in(A1,room2), false>      <A2, in(A1,room2), false>
  <A1, in(A2,room1), false>      <A2, in(A2,room1), whether>
  <A1, in(A2,room2), true>       <A2, in(A2,room2), whether>
• Update Algorithm: D_{i+1} = Update(D_i, P, J), given perceptions P and justification rules J
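A minimal sketch of such a database in Python, assuming (my reading of the lattice) that more informative valuations may overwrite less informative ones; the function names are illustrative:

```python
# Sketch: a belief database keyed by (agent, fact), four-valued.
UNKNOWN, WHETHER, TRUE, FALSE = "unknown", "whether", "true", "false"

def strength(v: str) -> int:
    # information ordering of the valuation lattice
    return {UNKNOWN: 0, WHETHER: 1, TRUE: 2, FALSE: 2}[v]

def update(db: dict, percepts: dict) -> dict:
    """One step of D_{i+1} = Update(D_i, P): fold perceptions into the
    database; newer information wins when at least as informative."""
    new_db = dict(db)
    for key, v in percepts.items():
        if strength(v) >= strength(new_db.get(key, UNKNOWN)):
            new_db[key] = v
    return new_db

db = {("A2", "in(gold,room1)"): UNKNOWN, ("A2", "in(A2,room1)"): WHETHER}
db = update(db, {("A2", "in(gold,room1)"): TRUE})
print(db[("A2", "in(gold,room1)")])   # true
```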
Justifications for Belief Updates

  Justification type    Representation    Priority   Notes
  direct observation    (sense s)         6          self only
  observability         (obs a f y)       5          obs of others
  effects of actions    (effect x q)      4          if aware of x
  inferences            (infer f y)       3          y → f
  memory                (persist f)       2          f true OR false
  assumptions           (default f)       1
Belief Update Algorithm
• Updating beliefs is not so simple...
• Prioritized logic programs:
  - Horn clauses annotated with strengths
  - semantics based on models in which facts are supported by the strongest rule
• Implementation (assuming rules are not cyclic...):
  - create a DAG of propositions
  - topologically sort them: P1..Pn
  - determine truth values in that order; each Pi depends at most on the truth values of {P1..Pi-1}
• Example rules: A ∧ B → C (1), G → C (2), C ∧ D → E (1), A ∧ F → E (3)
  (the slide's DAG spans the propositions A, B, C, D, E, F, G; a code sketch of the evaluation follows)
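A minimal Python sketch of this evaluation, assuming acyclic rules. To give the priorities a conflict to resolve, I flip two of the example rules to derive negative conclusions; that variation is my illustrative assumption, not the slide's exact rule set:

```python
# Sketch: evaluating acyclic prioritized Horn rules in topological order;
# the strongest applicable rule fixes each proposition's value.
from graphlib import TopologicalSorter

# (body, head, derived value, priority); e.g. A ∧ B → C with strength 1
rules = [
    ({"A", "B"}, "C", True, 1),
    ({"G"},      "C", False, 2),   # stronger rule deriving ¬C (my variation)
    ({"C", "D"}, "E", True, 1),
    ({"A", "F"}, "E", False, 3),   # stronger rule deriving ¬E (my variation)
]
base = {"A": True, "B": True, "D": True, "F": True, "G": True}

# dependency graph: each head depends on its body propositions
deps = {}
for body, head, _, _ in rules:
    deps.setdefault(head, set()).update(body)

truth = dict(base)
for p in TopologicalSorter(deps).static_order():
    if p in truth:
        continue                   # base fact, already valued
    applicable = [(pri, val) for body, head, val, pri in rules
                  if head == p and all(truth.get(b, False) for b in body)]
    if applicable:
        truth[p] = max(applicable)[1]   # strongest rule decides (ties arbitrary)

print(truth)   # C becomes False (priority 2 beats 1), then E becomes False
```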
PIEX Algorithm
PIEX = Proactive Information EXchange
given: belief database D, perceptions P, and justification rules J

  D' ← BeliefUpdateAlg(D, P, J)
  for each agent Ai ∈ Agents and each G ∈ Goals(Ai):
    for each C ∈ PreConditions(G):
      if C is a positive literal, let v ← true
      if C is a negative literal, let v ← false
      if <Ai, C, not(v)> ∈ D' or <Ai, C, unknown> ∈ D':
        Tell(Ai, C, v)
        Update(D', <Ai, C, v>)
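A runnable Python sketch of this loop, under the same assumptions as the earlier sketches (dictionary belief database; names like piex and the tell callback are invented):

```python
# Sketch of the PIEX loop: tell agent Ai any goal precondition it is
# missing, then record the effect of the message in the database.
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def negate(v):
    return FALSE if v == TRUE else TRUE

def piex(db, agents, goals, preconds, tell):
    """db: {(agent, fact): valuation}; goals: {agent: [goal, ...]};
    preconds: {goal: [(fact, is_positive), ...]}; tell: message callback."""
    for ai in agents:
        for g in goals.get(ai, []):
            for fact, positive in preconds.get(g, []):
                v = TRUE if positive else FALSE
                if db.get((ai, fact), UNKNOWN) in (negate(v), UNKNOWN):
                    tell(ai, fact, v)            # proactively inform Ai
                    db[(ai, fact)] = v           # model the message's effect

# Toy run on the catch-thief example: A needs the light on.
db = {("A", "lightOn(room1)"): UNKNOWN}
goals = {"A": ["jump-on-thief"]}
preconds = {"jump-on-thief": [("lightOn(room1)", True)]}
piex(db, ["A", "B"], goals, preconds,
     lambda a, f, v: print(f"Tell({a}, {f}, {v})"))   # Tell(A, lightOn(room1), true)
```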
Issues
• The current formalism does not allow for nested beliefs:
  - Bel(A1, Bel(A2, lightOn(room5)))
  - Bel(A1, Bel(A2, Bel(A1, lightOn(room5))))
  - see Isozaki and Katsuno (1996)
• We are working on a representation of modal logic in Prolog:
  - allows nested beliefs and rules
  - backward-chaining rather than forward (e.g. the UpdateAlg)
  - of course, not complete
• Better reasoning about knowledge of actions:
  - assert pre-conds before effects?
  - uncertainty about the do-er/time?
Conclusions
1. Modeling the beliefs of others is important for multi-agent interactions.
2. Observability is a key to modeling others' beliefs.
3. Observability must be integrated properly with other justifications, such as inference and persistence.
4. Different strengths can be managed using prioritized inference (Prioritized Logic Programs).
5. Proactive information exchange can improve the performance of teams.
6. Message traffic can be intelligently reduced by reasoning about beliefs.