This paper presents a graphical model approach to solving interactive POMDPs (I-POMDPs), specifically focusing on finitely nested I-POMDPs. It introduces the use of Influence Diagrams to represent the problem and proposes a method for updating beliefs over models of other agents over time. The paper also discusses related work on Multiagent Influence Diagrams and Networks of Influence Diagrams. The application of this approach is demonstrated in the context of the emergence of social behaviors in multiagent systems.
International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007)
Graphical Models for Online Solutions to Interactive POMDPs
Prashant Doshi (University of Georgia, USA), Yifeng Zeng (Aalborg University, Denmark), Qiongyu Chen (National University of Singapore)
Decision-Making in Multiagent Settings
[Figure: two agents, i and j, interacting through a shared physical state S. Each agent takes actions (Ai, Aj), receives observations (Oi, Oj), and holds a belief over the state and a model of the other agent. Each agent acts to optimize its preferences given its beliefs]
Finitely Nested I-POMDP (Gmytrasiewicz & Doshi, 05)
• A finitely nested I-POMDP of agent i with a strategy level l: I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩
• Interactive states: ISi,l = S × Mj,l-1, coupling the physical state with models of the other agent
• Beliefs about physical environments: ISi,0 = S, so bi,0 ∈ Δ(S)
• Beliefs about other agents in terms of their preferences, capabilities, and beliefs: bi,l ∈ Δ(ISi,l) for l ≥ 1
• Type: θj,l-1 = ⟨bj,l-1, θ̂j⟩
• A = Ai × Aj: joint actions
• Ωi: possible observations
• Ti: transition function, S × A × S → [0, 1]
• Oi: observation function, S × A × Ωi → [0, 1]
• Ri: reward function, S × A → ℝ
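For reference, the exact belief update that a level-l I-POMDP agent must perform, and which the graphical approach below re-expresses with influence-diagram machinery, is sketched here in the notation of Gmytrasiewicz & Doshi (2005), up to minor notational differences; β is a normalizing constant and τ(·) equals 1 exactly when j's belief update with the given action-observation pair produces the belief in its last argument:

```latex
b_{i,l}^{t}(is^{t}) \;=\; \beta \sum_{is^{t-1}:\, \hat{\theta}_j^{t-1} = \hat{\theta}_j^{t}}
  b_{i,l}^{t-1}(is^{t-1})
  \sum_{a_j^{t-1}} \Pr\!\big(a_j^{t-1} \mid \theta_{j,l-1}^{t-1}\big)\,
  O_i\big(s^{t}, a^{t-1}, o_i^{t}\big)\,
  T_i\big(s^{t-1}, a^{t-1}, s^{t}\big)
  \sum_{o_j^{t}} O_j\big(s^{t}, a^{t-1}, o_j^{t}\big)\,
  \tau\big(b_{j,l-1}^{t-1}, a_j^{t-1}, o_j^{t}, b_{j,l-1}^{t}\big)
```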
Forget It!
• A different approach
• Representation: use the language of Influence Diagrams (IDs) to represent the problem more transparently
• Belief update: carried out within the graphical model
• Solution: use standard ID algorithms to solve it
Challenges
• Representation of nested models for other agents
• Influence diagrams are a single-agent oriented language
• Updating beliefs over the models of other agents
• Over time, agents revise their beliefs over the models of others as they receive observations, giving rise to new models of the other agents
Related Work
• Multiagent Influence Diagrams (MAIDs) (Koller & Milch, 2001)
• Use IDs to represent incomplete-information games
• Compute Nash equilibrium solutions efficiently by exploiting conditional independence
• Networks of Influence Diagrams (NIDs) (Gal & Pfeffer, 2003)
• Allow uncertainty over the game
• Allow multiple models of an individual agent
• Solution involves collapsing models into a MAID or ID
• Both model static, single-play games
• They do not consider agent interactions over time (sequential decision-making)
Level l I-ID: Introduce Model Node and Policy Link
[Figure: a generic level l I-ID for agent i, with chance nodes S, Oi and Aj, the model node Mj,l-1, decision node Ai, and utility node Ri]
• A generic level l Interactive ID (I-ID) for agent i situated with one other agent j
• Model node Mj,l-1: models of agent j at level l-1
• Policy link (dashed line): distribution over the other agent's actions given its models
• Beliefs on Mj,l-1: P(Mj,l-1|s)
• How are these beliefs updated?
Details of the Model Node
• Members of the model node: the different chance nodes are solutions of the models mj,l-1
• Mod[Mj] represents the different models of agent j
• The CPT of the chance node Aj is a multiplexer: it assumes the distribution of each of the action nodes (Aj1, Aj2) depending on the value of Mod[Mj]
[Figure: the model node Mj,l-1 expanded into the chance node Mod[Mj] and the action nodes Aj1 and Aj2, which hold the solutions of the models mj,l-11 and mj,l-12 and feed the chance node Aj. mj,l-11 and mj,l-12 could themselves be I-IDs or IDs]
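To make the multiplexer concrete, here is a minimal numerical sketch; the node names follow the slide, while all distributions are made-up placeholders rather than values from the paper:

```python
import numpy as np

# Hypothetical solutions of j's two level l-1 models: each yields a distribution
# over j's actions for every physical state s (shape |S| x |Aj|).
# The numbers are illustrative placeholders.
P_Aj_model1 = np.array([[0.9, 0.1],    # policy of model mj,l-1^1
                        [0.2, 0.8]])
P_Aj_model2 = np.array([[0.5, 0.5],    # policy of model mj,l-1^2
                        [0.6, 0.4]])

# Multiplexer CPT of the chance node Aj: P(Aj | Mod[Mj], S).
# Conditioned on Mod[Mj] = k, Aj simply assumes the distribution of action node Aj^k.
cpt_Aj = np.stack([P_Aj_model1, P_Aj_model2])       # shape |Mod[Mj]| x |S| x |Aj|

# Agent i's belief over j's models given the state, P(Mod[Mj] | S); placeholder values.
P_mod_given_s = np.array([[0.7, 0.3],
                          [0.4, 0.6]])              # shape |S| x |Mod[Mj]|

# Marginalizing out Mod[Mj] gives the action distribution of j that enters
# agent i's decision problem: P(Aj | S).
P_Aj_given_s = np.einsum('sk,ksa->sa', P_mod_given_s, cpt_Aj)
print(P_Aj_given_s)    # each row sums to 1
```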
Whole I-ID
[Figure: the complete level l I-ID, with decision node Ai, utility node Ri, chance nodes S, Oi and Aj, and the model node expanded into Mod[Mj] with action nodes Aj1 and Aj2 for the models mj,l-11 and mj,l-12, which could themselves be I-IDs or IDs]
Interactive Dynamic Influence Diagrams (I-DIDs)
[Figure: two time slices of a level l I-DID. The time t slice contains Ait, St, Ajt, Oit, the utility node Ri and the model node Mj,l-1t; the time t+1 slice contains the corresponding nodes Ait+1, St+1, Ajt+1, Oit+1, Ri and Mj,l-1t+1. The model nodes of consecutive slices are connected by the model update link]
Semantics of Model Update Link
[Figure: expansion of the model update link between Mj,l-1t and Mj,l-1t+1. The two models at time t (mj,l-1t,1 and mj,l-1t,2, with action nodes Aj1, Aj2 and observation nodes Oj1, Oj2) give rise to four models at time t+1 (mj,l-1t+1,1 through mj,l-1t+1,4, with action nodes Aj1 through Aj4), selected via the chance nodes Mod[Mjt], Oj and Mod[Mjt+1]]
These updated models differ in their initial beliefs, each of which is the result of j updating its beliefs due to its actions and possible observations.
Notes
• The updated set of models at time step t+1 will have at most |Mj,l-1t| |Aj| |Ωj| models
• |Mj,l-1t|: number of models at time step t
• |Aj|: largest space of actions
• |Ωj|: largest space of observations
• The new distribution over the updated models uses
• the original distribution over the models,
• the probability of the other agent performing the action, and
• the probability of receiving the observation that led to the updated model
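A minimal sketch of this model update step under illustrative assumptions; the data structures and probability functions below are placeholders, not the paper's implementation, and in the full I-DID the observation likelihood also conditions on the next physical state:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Model:
    """A level l-1 model of agent j, identified here only by a belief label."""
    belief: str

def update_models(models, actions_j, observations_j, p_action, p_observation):
    """Enumerate j's candidate models at t+1 and their normalized weights.

    models              : dict Model -> probability at time t
    actions_j           : iterable of j's actions
    observations_j      : iterable of j's observations
    p_action(m, a)      : probability that model m's policy selects action a
    p_observation(a, o) : probability that j observes o after acting a

    An updated model is identified by (originating model, action, observation),
    so there are at most |M^t| * |Aj| * |Omega_j| of them. Its weight combines the
    original model probability, the action probability, and the observation probability.
    """
    updated = {}
    for (m, prior), a, o in product(models.items(), actions_j, observations_j):
        weight = prior * p_action(m, a) * p_observation(a, o)
        if weight > 0:
            updated[(m, a, o)] = weight
    total = sum(updated.values())
    return {k: w / total for k, w in updated.items()} if total else updated

# Illustrative usage with two models of j in the tiger problem (placeholder numbers):
models = {Model("tiger-left"): 0.5, Model("tiger-right"): 0.5}
new_models = update_models(
    models, ["L", "OL", "OR"], ["GL", "GR"],
    p_action=lambda m, a: 1.0 if a == "L" else 0.0,   # both models listen here
    p_observation=lambda a, o: 0.5,                   # placeholder likelihood
)
print(len(new_models), "updated models")
```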
[Figure: the two-time-slice I-DID with the model update link expanded in place. The time t and t+1 slices (Ait, St, Oit, Ajt, Ri and their t+1 counterparts) are connected through Mod[Mjt], Oj and Mod[Mjt+1], with the time t models mj,l-1t,1 and mj,l-1t,2 giving rise to the updated models mj,l-1t+1,1 through mj,l-1t+1,4]
Example Applications: Emergence of Social Behaviors
• Followership and Leadership in the persistent multiagent tiger problem
• Altruism and Reciprocity in the public good problem with punishment
• Strategies in a simple version of two-player Poker
Followership and Leadership in Multiagent Persistent Tiger
• Experimental setup:
• Agent j has a better hearing capability (95% accurate) than agent i (65% accurate)
• Agent i has no initial information about the tiger's location
• Agent i considers two models of agent j, which differ in j's level 0 initial beliefs:
• Agent j likely thinks that the tiger is behind the left door
• Agent j likely thinks that the tiger is behind the right door
• Solve the corresponding level 1 I-DID expanded over three time steps to obtain the normative behavioral policy of agent i
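A small configuration sketch of this setup; the hearing accuracies are from the slide, while the numeric priors standing in for "likely left" and "likely right" and the uniform prior over j's models are illustrative assumptions:

```python
# Hearing (observation) accuracies stated on the slide
hearing_accuracy = {"i": 0.65, "j": 0.95}

# Agent i's initial belief over the physical state: no information about the tiger
belief_i_over_state = {"tiger-left": 0.5, "tiger-right": 0.5}

# Two level 0 models of agent j, differing only in j's initial belief
# (the 0.9 / 0.1 split is a placeholder for "likely")
models_of_j = {
    "j-believes-left":  {"tiger-left": 0.9, "tiger-right": 0.1},
    "j-believes-right": {"tiger-left": 0.1, "tiger-right": 0.9},
}

# Agent i's prior over j's models (uniform prior assumed for illustration)
belief_i_over_models = {"j-believes-left": 0.5, "j-believes-right": 0.5}
```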
Level 1 I-ID in the Tiger Problem
[Figure: the level 1 I-ID for the multiagent tiger problem, expanded over three time steps, with agent j's decision nodes mapped to chance nodes]
Policy Tree 1: Agent i has hearing accuracy of 65%
[Figure: agent i's three-step policy tree. Nodes are i's actions (L = listen, OL = open left door, OR = open right door); branches carry i's observations, a growl (GL or GR) paired with a creak or silence from j (CL, CR or S). Agent i opens the same door as j only when j's creak is consistent with its own growls; otherwise it keeps listening. Conditional followership]
Policy Tree 2: Agent i loses hearing ability (accuracy is 0.5)
[Figure: agent i's three-step policy tree when its own hearing is uninformative. Agent i listens (L) regardless of its growl observations (*) and acts only on j's creaks: on *,CL it opens the left door (OL), on *,CR the right door (OR), and on *,S it keeps listening (L). Unconditional (blind) followership]
Example 2: Altruism and Reciprocity in the Public Good Problem
• Public good game
• Two agents are initially endowed with XT amount of resources
• Each agent may choose to
• contribute (C) a fixed amount of the resources to a public pot, or
• not contribute, i.e., defect (D)
• Agents' actions and the pot are not observable, but agents receive an observation symbolizing the state of the public pot:
• plenty (PY)
• meager (MR)
• The value of resources in the public pot is discounted by ci (< 1) for each agent i, where ci is the marginal private return
• To encourage contributions, contributing agents punish free riders (punishment P) but incur a small cost cp for administering the punishment
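A minimal sketch of the one-step payoffs implied by this description; the contribution amount, the split of the endowment, and the exact functional form are assumptions for illustration, not the paper's parameterization:

```python
def payoff(my_action, other_action, x_t=2.0, contrib=1.0, c_i=0.9, P=1.0, c_p=0.2):
    """One-step payoff of an agent in the public good game with punishment.

    my_action, other_action : 'C' (contribute) or 'D' (defect)
    x_t     : the agent's share of the initial endowment XT (assumed)
    contrib : fixed amount a contributor places in the public pot
    c_i     : marginal private return on the pot (< 1)
    P       : punishment imposed on a defector by a contributing opponent
    c_p     : cost to a contributor of administering the punishment
    """
    my_contrib = contrib if my_action == 'C' else 0.0
    other_contrib = contrib if other_action == 'C' else 0.0
    value = (x_t - my_contrib) + c_i * (my_contrib + other_contrib)
    if my_action == 'D' and other_action == 'C':
        value -= P        # punished by the contributing opponent
    if my_action == 'C' and other_action == 'D':
        value -= c_p      # pays the cost of punishing the free rider
    return value

# With c_i close to 1 (the altruistic type) and the punishment in place,
# contributing against a contributor beats defecting against one:
print(payoff('C', 'C'), payoff('D', 'C'))   # 2.8 vs 1.9 under these placeholders
```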
Agent Types
• Altruistic and non-altruistic types
• An altruistic agent has a high marginal private return (ci is close to 1) and does not punish others who defect
• Optimal behavior
• One action remaining: both types of agents choose to contribute to avoid being punished
• Two actions to go: the altruistic type chooses to contribute, while the other defects
• Why?
• Three steps to go: the altruistic agent contributes to avoid punishment and the non-altruistic type defects
• Greater than three steps: the altruistic agent continues to contribute to the public pot depending on how close its marginal return is to 1, while the non-altruistic type's policy prescribes defection
Level 1 I-ID in the Public Good Game
[Figure: the level 1 I-ID for the public good game, expanded over three time steps]
Policy Tree 1: Altruism in PG
[Figure: agent i's three-step policy tree, contributing (C) at every step regardless of observations (*)]
• If agent i (altruistic type) believes with a probability 1 that j is altruistic, i chooses to contribute for each of the three steps
• This behavior persists when i is unaware of whether j is altruistic, and when i assigns a high probability to j being the non-altruistic type
Policy Tree 2: Reciprocal Agents
[Figure: agent i's three-step policy tree. Agent i defects (D) at the first step; after observing plenty (PY) it contributes (C), after meager (MR) it defects (D); at the final step it contributes (C) regardless of observations (*)]
• Reciprocal type
• The reciprocal type's marginal private return is lower, and it obtains a greater payoff when its action is similar to that of the other agent
• Experimental setup
• Consider the case when the reciprocal agent i is unsure of whether j is altruistic and believes that the public pot is likely to be half full
• Optimal behavior
• From this prior belief, i chooses to defect
• On receiving an observation of plenty, i decides to contribute, while an observation of meager makes it defect
• With one action to go, i, believing that j will contribute, chooses to contribute too to avoid punishment, regardless of its observations
Conclusion and Future Work
• I-DIDs: a general ID-based formalism for sequential decision-making in multiagent settings
• Online counterparts of I-POMDPs
• Solving I-DIDs approximately for computational efficiency (see AAAI ’07 paper on model clustering)
• Apply I-DIDs to other application domains
Visit our poster on I-DIDs today for more information