
Dynamic Games of Incomplete Information: Exercising the PBE



  1. Dynamic Games of Incomplete Information: Exercising the PBE (APEC 8205: Applied Game Theory)

  2. Objectives
• Learn How to Find Perfect Bayesian Equilibria (PBE)
  • Finite Types & Actions
  • Continuous Types & Actions

  3. Example 1 [Game tree] Nature draws Player A's type: type 1 with probability P = 0.4, type 2 with probability 1 − P = 0.6. Player A, knowing its type, chooses L or R. L ends the game: type 1 gets (40, 25), type 2 gets (30, 30). After R, Player B, observing R but not A's type, holds belief μ that A is type 1 and chooses U or D. Payoffs (A, B): type 1: U → (50, 50), D → (0, 0); type 2: U → (50, 0), D → (0, 50).

  4. How can we solve this game for the PBE?
• As with subgame perfection, it is useful to work backwards.
• Find B's best response given R.
• Find A's best response given A's type & the probability B plays U.
• Use Bayes rule to determine μ given A's best responses.
• Check μ for consistency with B's best response.

  5. Find B's best response given R.
B's payoff to choosing U given R: π_B(U|R) = 50μ + 0(1 − μ) = 50μ
B's payoff to choosing D given R: π_B(D|R) = 0μ + 50(1 − μ) = 50(1 − μ)
π_B(U|R) >/=/< π_B(D|R) implies μ >/=/< 1/2.
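The comparison above can be checked with a short script; the payoff numbers (B earns 50 from U against type 1 and 50 from D against type 2, zero otherwise) are read off the Example 1 game tree:

```python
# B's expected payoffs after observing R in Example 1, as a function of
# B's belief mu that A is type 1 (payoffs taken from the game tree).

def payoff_B(action, mu):
    if action == "U":
        return 50 * mu            # U pays 50 against type 1, 0 against type 2
    return 50 * (1 - mu)          # D pays 0 against type 1, 50 against type 2

def best_response_B(mu):
    u, d = payoff_B("U", mu), payoff_B("D", mu)
    if u > d:
        return "U"
    if u < d:
        return "D"
    return "indifferent"

print(best_response_B(0.7))   # U
print(best_response_B(0.3))   # D
print(best_response_B(0.5))   # indifferent
```

The cutoff at μ = 1/2 reflects B's symmetric stakes: U and D each pay 50 against exactly one type.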

  6. Graphically [Figure: B's best-response correspondence σ_B(U|R) as a function of μ: σ_B(U|R) = 0 for μ < 0.5, any value in [0, 1] at μ = 0.5, and σ_B(U|R) = 1 for μ > 0.5.]

  7. Find A's best response given A's type & the probability B plays U.
Type t A's payoff to choosing R given σ_B(U|R): π_A(R|t) = 50σ_B(U|R) + 0(1 − σ_B(U|R)) = 50σ_B(U|R)
Type 1 A's payoff to choosing L: π_A(L|1) = 40
Type 2 A's payoff to choosing L: π_A(L|2) = 30
π_A(R|1) >/=/< π_A(L|1) implies σ_B(U|R) >/=/< 0.8
π_A(R|2) >/=/< π_A(L|2) implies σ_B(U|R) >/=/< 0.6

  8. Graphically [Figure: A's best responses as a function of σ_B(U|R): both types play R (σ_A(R|1) = σ_A(R|2) = 1) for σ_B(U|R) > 0.8; σ_A(R|1) ∈ [0,1] and σ_A(R|2) = 1 at σ_B(U|R) = 0.8; σ_A(R|1) = 0 and σ_A(R|2) = 1 for 0.6 < σ_B(U|R) < 0.8; σ_A(R|1) = 0 and σ_A(R|2) ∈ [0,1] at σ_B(U|R) = 0.6; both types play L (σ_A(R|1) = σ_A(R|2) = 0) for σ_B(U|R) < 0.6.]

  9. Use Bayes rule to determine μ given A's best responses: μ = 0.4σ_A(R|1) / (0.4σ_A(R|1) + 0.6σ_A(R|2)). By region of σ_B(U|R):
• σ_B(U|R) > 0.8: σ_A(R|1) = σ_A(R|2) = 1, so μ = 0.4.
• σ_B(U|R) = 0.8: σ_A(R|1) ∈ [0,1], σ_A(R|2) = 1, so μ ∈ [0, 0.4].
• 0.6 < σ_B(U|R) < 0.8: σ_A(R|1) = 0, σ_A(R|2) = 1, so μ = 0.
• σ_B(U|R) = 0.6: σ_A(R|1) = 0, σ_A(R|2) ∈ [0,1], so μ = 0 if σ_A(R|2) > 0 and μ ∈ [0,1] otherwise.
• σ_B(U|R) < 0.6: σ_A(R|1) = σ_A(R|2) = 0, so R is off the equilibrium path and μ ∈ [0,1].
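The Bayes-rule step can be sketched directly; `bayes_mu` returns `None` when R is off the equilibrium path (both types play L), in which case any belief in [0, 1] is admissible:

```python
def bayes_mu(sA_R1, sA_R2, p=0.4):
    """Belief that A is type 1 after observing R, given the prior p on
    type 1 and each type's probability of playing R. None if R off path."""
    denom = p * sA_R1 + (1 - p) * sA_R2
    if denom == 0:
        return None               # R never played: mu unrestricted in [0, 1]
    return p * sA_R1 / denom

print(bayes_mu(1, 1))   # 0.4  (both types play R: belief equals the prior)
print(bayes_mu(0, 1))   # 0.0  (only type 2 plays R)
print(bayes_mu(0, 0))   # None (R off path)
```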

  10. Check μ for consistency with B's best response.
• σ_A(R|1) = σ_A(R|2) = 0, σ_B(U|R) ∈ [0, 0.6], & μ = 0.5.
• σ_A(R|1) = σ_A(R|2) = 0, σ_B(U|R) = 0, & μ < 0.5.
Note that all these equilibria are observationally equivalent!

  11. Example 2 [Game tree] Nature draws Player A's type: type 1 with probability P = 1/4, type 2 with probability 1 − P = 3/4. A, knowing its type, chooses L or R. B observes A's action but not its type, holding belief μ_R after R and μ_L after L, and chooses U or D at each information set. Payoffs (A, B): type 1: after L, U → (10, 50), D → (30, 0); after R, U → (50, 50), D → (0, 0). Type 2: after L, U → (0, 0), D → (50, 50); after R, U → (10, 0), D → (30, 50).

  12. How can we solve this game for the PBE?
• Again, it is useful to work backwards.
• Find B's best response given R & given L.
• Find A's best response given A's type & the probability B plays U following R & U following L.
• Use Bayes rule to determine μ_R & μ_L given A's best responses.
• Check μ_R & μ_L for consistency with B's best responses.

  13. Find B's best response given R & given L.
B's payoff to choosing U given action a = R, L: π_B(U|a) = 50μ_a
B's payoff to choosing D given a: π_B(D|a) = 50(1 − μ_a)
π_B(U|a) >/=/< π_B(D|a) implies μ_a >/=/< 1/2.

  14. Graphically [Figure: B's best responses in (σ_B(U|L), σ_B(U|R)) space: σ_B(U|R) = 1 if μ_R > 1/2, 0 if μ_R < 1/2, and anything in [0, 1] if μ_R = 1/2; likewise σ_B(U|L) = 1 if μ_L > 1/2, 0 if μ_L < 1/2, and anything in [0, 1] if μ_L = 1/2.]

  15. Find Type 1 A's best response given the probability B plays U following R & U following L.
Type 1 A's payoff to choosing R given σ_B(U|R): π_A(R|1) = 50σ_B(U|R)
Type 1 A's payoff to choosing L given σ_B(U|L): π_A(L|1) = 10σ_B(U|L) + 30(1 − σ_B(U|L)) = 30 − 20σ_B(U|L)
π_A(R|1) >/=/< π_A(L|1) implies σ_B(U|R) >/=/< 0.6 − 0.4σ_B(U|L)

  16. Find Type 2 A's best response given the probability B plays U following R & U following L.
Type 2 A's payoff to choosing R given σ_B(U|R): π_A(R|2) = 10σ_B(U|R) + 30(1 − σ_B(U|R)) = 30 − 20σ_B(U|R)
Type 2 A's payoff to choosing L given σ_B(U|L): π_A(L|2) = 0σ_B(U|L) + 50(1 − σ_B(U|L)) = 50 − 50σ_B(U|L)
π_A(R|2) >/=/< π_A(L|2) implies σ_B(U|L) >/=/< 0.4 + 0.4σ_B(U|R)

  17. Graphically [Figure: A's best responses in (σ_B(U|L), σ_B(U|R)) space. Type 1's indifference line σ_B(U|R) = 0.6 − 0.4σ_B(U|L) runs from 0.6 down to 0.2: σ_A(R|1) = 1 above it, 0 below it, ∈ (0,1) on it. Type 2's indifference line σ_B(U|L) = 0.4 + 0.4σ_B(U|R) runs from 0.4 to 0.8: σ_A(R|2) = 1 to its right, 0 to its left, ∈ (0,1) on it.]

  18. Use Bayes rule to determine μ_R & μ_L given A's best responses. With prior P = 1/4, μ_R = σ_A(R|1) / (σ_A(R|1) + 3σ_A(R|2)) and μ_L = (1 − σ_A(R|1)) / (4 − σ_A(R|1) − 3σ_A(R|2)) whenever the denominators are positive. By region:
• σ_A(R|1) = 1, σ_A(R|2) = 0: μ_R = 1, μ_L = 0.
• σ_A(R|1) = 1, σ_A(R|2) ∈ (0,1): μ_R = 1 / (1 + 3σ_A(R|2)), μ_L = 0.
• σ_A(R|1) = 1, σ_A(R|2) = 1: μ_R = 1/4, μ_L ∈ [0,1] (L off path).
• σ_A(R|1) ∈ (0,1), σ_A(R|2) = 1: μ_R = σ_A(R|1) / (σ_A(R|1) + 3), μ_L = 1.
• σ_A(R|1) = 0, σ_A(R|2) = 1: μ_R = 0, μ_L = 1.
• σ_A(R|1) = 0, σ_A(R|2) ∈ (0,1): μ_R = 0, μ_L = 1 / (4 − 3σ_A(R|2)).
• σ_A(R|1) = 0, σ_A(R|2) = 0: μ_R ∈ [0,1] (R off path), μ_L = 1/4.
• σ_A(R|1) ∈ (0,1), σ_A(R|2) = 0: μ_R = 1, μ_L = (1 − σ_A(R|1)) / (4 − σ_A(R|1)).

  19. Check μ_R & μ_L for consistency with B's best responses.
• σ_A(R|1) = σ_A(R|2) = 0, σ_B(U|R) = σ_B(U|L) = 0, μ_R ≤ ½, and μ_L = ¼.
• σ_A(R|1) = 0, σ_A(R|2) = 1, σ_B(U|R) = 0, σ_B(U|L) = 1, μ_R = 0, and μ_L = 1.
• σ_A(R|1) = 1, σ_A(R|2) = 0, σ_B(U|R) = 1, σ_B(U|L) = 0, μ_R = 1, and μ_L = 0.
• σ_A(R|1) = 0, σ_A(R|2) = 0, σ_B(U|R) ∈ (0, 0.6), σ_B(U|L) = 0, μ_R = ½, and μ_L = ¼.
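These candidate equilibria can be verified mechanically. The sketch below checks the separating equilibrium in the third bullet (type 1 plays R, type 2 plays L, B plays U after R and D after L); the payoff numbers are read off the Example 2 game tree, so treat them as reconstructed:

```python
P = 0.25  # prior probability that A is type 1

def payoff_A(typ, action, sR, sL):
    """A's expected payoff; sR = sigma_B(U|R), sL = sigma_B(U|L).
    Payoff numbers are read off the Example 2 game tree."""
    if typ == 1:
        return 50 * sR if action == "R" else 10 * sL + 30 * (1 - sL)
    return 10 * sR + 30 * (1 - sR) if action == "R" else 50 * (1 - sL)

# Candidate separating equilibrium: type 1 plays R, type 2 plays L;
# B plays U after R (sR = 1) and D after L (sL = 0).
sR, sL = 1.0, 0.0
muR = P / (P + (1 - P) * 0)        # = 1: only type 1 ever plays R
muL = 0 / (0 + (1 - P) * 1)        # = 0: only type 2 ever plays L

assert payoff_A(1, "R", sR, sL) >= payoff_A(1, "L", sR, sL)   # 50 vs 30
assert payoff_A(2, "L", sR, sL) >= payoff_A(2, "R", sR, sL)   # 50 vs 10
assert muR > 0.5 and muL < 0.5     # so U after R and D after L are optimal
print("separating PBE checks out")
```

The same loop of checks (sequential rationality for each type, then Bayes consistency) verifies the other three bullets.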

  20. Example: SLAPP GAME
• Players?
  • Homeowner (H) & Developer (D)
• Who does what when?
  • Nature chooses Homeowner's value to preserving green space (VH) & Developer's value to developing green space (VD).
  • Homeowner gets to choose whether to Fight (F) or Whine (W).
  • If the homeowner chooses F, H invests effort first in a Stackelberg contest.
  • If the homeowner chooses W, D invests effort first in a Stackelberg contest.
• Who knows what when?
  • H gets to see its value & D's value, while D only gets to see its own value.
  • If H chooses F, D gets to see H's effort before choosing its own.
  • If H chooses W, H gets to see D's effort before choosing its own.
• How are Players Rewarded?
  • π_i = V_i x_i / (x_i + x_j) − x_i for i, j = H, D and i ≠ j.

  21. Characterization of Beliefs
• Nature chooses the Homeowner's value v ∈ [v′, v″] with density f(v) and distribution F(v).
• Developer can update its beliefs:
  • After F: v ∈ R_F with density f_F(v) & distribution F_F(v)
  • After W: v ∈ R_W with density f_W(v) & distribution F_W(v)

  22. How can we solve this game using the PBE?
• H Leads:
  • Figure out D's effort given H's choice of effort.
  • Figure out H's effort & expected reward knowing how D will respond.
• D Leads:
  • Figure out H's effort given D's choice of effort.
  • Figure out D's effort knowing how H will respond.
  • Figure out H's expected reward.
• Compare H's expected rewards to determine when it should choose F & when it should choose W.
• Check for consistency of beliefs with H's and D's best responses.

  23. H Leads: D's optimal effort given H's effort x_H solves max over x_D of V_D x_D / (x_D + x_H) − x_D, with first-order condition V_D x_H / (x_D + x_H)² = 1. Which yields the best response: x_D = max{0, (V_D x_H)^0.5 − x_H}.

  24. H Leads: H's optimal effort given D's best response: substituting x_D = (V_D x_H)^0.5 − x_H gives π_H = V_H (x_H / V_D)^0.5 − x_H, with first-order condition V_H / (2(V_D x_H)^0.5) = 1. Which yields: x_H = V_H² / (4V_D) and U_H(F) = V_H² / (4V_D).
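As a numerical check of the closed form (x_H = V_H²/(4V_D), reconstructed here since the slide's formulas are images), a grid search over H's effort against D's follower response recovers the same optimum; the values V_H = 10 and V_D = 8 are illustrative, not from the slides:

```python
import math

def xD_response(xH, VD):
    # Follower D's best response in the effort contest (zero if not profitable)
    return max(0.0, math.sqrt(VD * xH) - xH)

def payoff_H_lead(xH, VH, VD):
    xD = xD_response(xH, VD)
    total = xH + xD
    return VH * xH / total - xH if total > 0 else 0.0

VH, VD = 10.0, 8.0  # illustrative values, not from the slides
grid = [i / 10000 for i in range(1, 80001)]
xH_star = max(grid, key=lambda x: payoff_H_lead(x, VH, VD))
print(abs(xH_star - VH**2 / (4 * VD)) < 1e-3)                        # True
print(abs(payoff_H_lead(xH_star, VH, VD) - VH**2 / (4 * VD)) < 1e-3) # True
```

With these numbers the leader's effort and payoff coincide at V_H²/(4V_D) = 3.125, matching the claim that U_H(F) equals H's optimal effort level.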

  25. D Leads: H's optimal effort given D's effort x_D solves max over x_H of V_H x_H / (x_H + x_D) − x_H. Which yields the best response: x_H = max{0, (V_H x_D)^0.5 − x_D}.

  26. D Leads: D's optimal effort given H's best response: because D does not observe V_H, it substitutes x_H = (V_H x_D)^0.5 − x_D into π_D = V_D x_D / (x_D + x_H) − x_D and maximizes its expected payoff over its posterior beliefs f_W(v). Which yields the interior first-order condition: E_W[V_D / (2(V_H x_D)^0.5)] = 1.
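The leader-with-beliefs step can be sketched numerically under an assumed two-point posterior; the posterior values (6, 12) and V_D = 8 are illustrative, not from the slides, and the first-order condition checked at the end is the reconstructed one:

```python
import math

def xH_response(xD, VH):
    # Follower H's best response (zero if fighting is not worthwhile)
    return max(0.0, math.sqrt(VH * xD) - xD)

def payoff_D_lead(xD, VH, VD):
    xH = xH_response(xD, VH)
    total = xD + xH
    return VD * xD / total - xD if total > 0 else 0.0

VD = 8.0
posterior = [(6.0, 0.5), (12.0, 0.5)]   # assumed beliefs over VH after W

def expected_payoff_D(xD):
    return sum(w * payoff_D_lead(xD, VH, VD) for VH, w in posterior)

grid = [i / 1000 for i in range(1, 12001)]
xD_star = max(grid, key=expected_payoff_D)
# Interior first-order condition: E[ VD / (2*sqrt(VH*xD)) ] = 1
foc = sum(w * VD / (2 * math.sqrt(VH * xD_star)) for VH, w in posterior)
print(abs(foc - 1) < 0.01)   # True
```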

  27. Compare H's expected rewards to determine when it should choose F & when it should choose W. With r_H = v / V_D, U_H(W) >/=/< U_H(F) as E(r_D|W) </=/> 2(r_H)^0.5 − r_H, or equivalently as v lies inside/on the boundary of/outside the interval (V_D(1 − (1 − E(r_D|W))^0.5)², V_D(1 + (1 − E(r_D|W))^0.5)²).

  28. Implications
• If E(r_D|W) ≥ 1, the homeowner should choose F regardless of its value.
• If E(r_D|W) < 1,
  • the homeowner should choose W if V_D(1 − (1 − E(r_D|W))^0.5)² < v < V_D(1 + (1 − E(r_D|W))^0.5)², such that R_W = (V_D(1 − (1 − E(r_D|W))^0.5)², V_D(1 + (1 − E(r_D|W))^0.5)²) and f_W(v) = f(v) / (F(V_D(1 + (1 − E(r_D|W))^0.5)²) − F(V_D(1 − (1 − E(r_D|W))^0.5)²));
  • otherwise it should choose F, such that R_F = {2(x_H V_D)^0.5} and Pr(v = 2(x_H V_D)^0.5) = 1.
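The decision rule in these bullets can be written out directly; E(r_D|W) is D's expected normalized effort following W, and the numerical values in the usage lines are illustrative:

```python
import math

def homeowner_choice(v, VD, E_rD_W):
    """F or W, following the slide's condition; E_rD_W is E(r_D|W)."""
    if E_rD_W >= 1:
        return "F"                          # fight regardless of value
    s = math.sqrt(1 - E_rD_W)
    lo, hi = VD * (1 - s) ** 2, VD * (1 + s) ** 2
    return "W" if lo < v < hi else "F"      # whine only for middling values

# With VD = 8 and E(r_D|W) = 0.75, the W-interval is (2, 18):
print(homeowner_choice(10, 8, 0.75))   # W
print(homeowner_choice(1, 8, 0.75))    # F
print(homeowner_choice(10, 8, 1.2))    # F
```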

  29. Case where 1 < E(r_D|W) [Figure: U_H(W) − U_H(F) plotted against r_H; the difference is negative for all r_H > 0, so H chooses F for every value.]

  30. Case where 1 > E(r_D|W) [Figure: U_H(W) − U_H(F) plotted against r_H; the difference is positive between the roots r_H = 1 − (1 − E(r_D|W))^0.5 and r_H = 1 + (1 − E(r_D|W))^0.5 and negative outside them.]

  31. Comparison of Complete and Incomplete Information Games When 1 > E(r_D|W) [Figure: U_H(W) − U_H(F) plotted against r_H under complete information and under incomplete information on the same axes; the incomplete-information curve crosses zero at r_H = 1 − (1 − E(r_D|W))^0.5 and r_H = 1 + (1 − E(r_D|W))^0.5.]
