
Communication Networks



Presentation Transcript


  1. Communication Networks A Second Course Jean Walrand Department of EECS University of California at Berkeley

  2. Concave, Learning, Cooperative • Concave Games • Learning in Games • Cooperative Games

  3. Concave Games Motivation • In many applications, the possible actions belong to a continuous set. For instance, one chooses prices, transmission rates, or power levels. • In such situations, one specifies reward functions instead of a matrix of rewards. • We explain results on Nash equilibria for such games.

  4. Concave Games: Preliminaries Many situations are possible: 3 NE, 1 NE, or no NE. J.B. Rosen, “Existence and Uniqueness of Equilibrium Points for Concave N-Person Games,” Econometrica, 33, 520–534, July 1965.

  5. Concave Game Definition: Concave Game Definition: Nash Equilibrium

  6. Concave Game Proof Theorem: Existence

  7. Concave Game Specialized Case:

  8. Concave Game Definition: Diagonally Strictly Concave

  9. Concave Game Theorem: Uniqueness

  10. Concave Game Theorem: Uniqueness - Bilinear Case:

  11. Concave Game Local Improvements
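The local-improvement idea of slide 11 can be sketched as gradient play: each player repeatedly moves in the direction of its own payoff gradient. The quadratic payoffs below are an illustrative assumption, not an example from the slides; they satisfy Rosen's diagonal strict concavity condition, so the dynamics converge to the unique NE at (0, 0).

```python
# Gradient play: each player ascends the gradient of its OWN payoff.
# Hypothetical quadratic payoffs (diagonally strictly concave, NE at (0, 0)):
#   u1(x1, x2) = -x1**2 + x1*x2/2,   u2(x1, x2) = -x2**2 + x1*x2/2

def gradient_play(x1=1.0, x2=-1.0, step=0.1, iters=500):
    """Simultaneous gradient ascent by both players."""
    for _ in range(iters):
        g1 = -2.0 * x1 + 0.5 * x2   # du1/dx1
        g2 = -2.0 * x2 + 0.5 * x1   # du2/dx2
        x1, x2 = x1 + step * g1, x2 + step * g2
    return x1, x2

x1, x2 = gradient_play()  # converges to the unique NE (0, 0)
```

The update map is linear here with spectral radius below 1, which is why the simple constant step size suffices.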

  12. Learning in Games Motivation Examples Models Fictitious Play Stochastic Fictitious Play Fudenberg D. and D.K. Levine (1998), The Theory of Learning in Games. MIT Press, Cambridge, Massachusetts. Chapters 1, 2, 4.

  13. Motivation Explain equilibrium as the result of players “learning” over time (instead of as the outcome of fully rational players with complete information).

  14. Examples: 1 • Fixed Player Model: If P1 is patient and knows that P2 chooses her play based on her forecast of P1’s plays, then P1 should always play U to lead P2 to play R. A sophisticated and patient player who faces a naïve opponent can develop a reputation for playing a fixed strategy and obtain the rewards of a Stackelberg leader. • Large Population Models: Most of the theory avoids the possibility above by assuming random pairings in a large population of anonymous users. In such a situation, P1 cannot really teach much to the rest of the population, so that myopic play (D, L) is optimal. • Naïve play: ignoring that you affect the other players’ strategies.

  15. Examples: 2 Cournot Adjustment Model: Each player selects the best response to the other player’s strategy in the previous period. Converges to the unique NE in this case. This adjustment is a bit naïve … [Figure: best-response curves BR1 and BR2 of players 1 and 2, intersecting at the NE.]
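A minimal sketch of the Cournot adjustment above, assuming a linear model with price P = a − b(q1 + q2) and unit cost c (the parameter values are illustrative): iterated best responses converge to the unique NE quantity q* = (a − c)/(3b).

```python
# Cournot adjustment: each firm best-responds to the rival's quantity
# from the previous period. Assumed: P = A - B*(q1 + q2), unit cost C.

A, B, C = 10.0, 1.0, 1.0  # illustrative parameters

def best_response(q_other):
    """argmax over q of q*(A - B*(q + q_other)) - C*q, clipped at 0."""
    return max(0.0, (A - C - B * q_other) / (2.0 * B))

def cournot_adjustment(q1=0.0, q2=0.0, steps=50):
    for _ in range(steps):
        q1, q2 = best_response(q2), best_response(q1)
    return q1, q2

q1, q2 = cournot_adjustment()
q_star = (A - C) / (3.0 * B)  # unique NE quantity: 3.0
```

The best-response map contracts (each curve has slope −1/2), which is why this naïve adjustment converges here.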

  16. Models • Learning Model: specifies the rules of the individual players and examines their interactions in a repeated game. Usually the same game is repeated (there is some work on learning from similar games). • Fictitious Play: players observe the result of their own match and play a best response to the historical frequency of play. • Partial Best-Response Dynamics: in each period, a fixed fraction of the population switches to a best response to the aggregate statistics from the previous period. • Replicator Dynamics: the share of the population using each strategy grows at a rate proportional to that strategy’s current payoff.
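The replicator dynamics in the last bullet can be sketched in discrete time; the Hawk-Dove payoff matrix below (V = 2, C = 4, shifted by +2 so all payoffs are positive) is an illustrative assumption, not from the slides. Its mixed equilibrium has a Hawk share of V/C = 1/2.

```python
# Discrete-time replicator dynamics: each strategy's population share
# grows in proportion to its current payoff. Hypothetical Hawk-Dove
# payoffs, shifted to be positive so shares stay valid probabilities.

A = [[1.0, 4.0],   # Hawk's payoff vs (Hawk, Dove)
     [2.0, 3.0]]   # Dove's payoff vs (Hawk, Dove)

def replicator_step(x):
    """x_i <- x_i * f_i / f_bar, with f_i the payoff of strategy i."""
    f = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]
    f_bar = x[0] * f[0] + x[1] * f[1]
    return [x[i] * f[i] / f_bar for i in range(2)]

x = [0.9, 0.1]  # start from a mostly-Hawk population
for _ in range(500):
    x = replicator_step(x)
# x approaches the mixed equilibrium (0.5, 0.5)
```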

  17. Fictitious Play Each player computes the frequency of the actions of the other players (with initial weights). Each player selects a best response to the empirical distribution (which need not be a product distribution). Theorem: Strict NE are absorbing for FP; if a pure strategy profile s is a steady state for FP, then s is a NE. Proof: Assume s(t) = s = strict NE. Then, with a := a(t) …, p(t+1) = (1 – a)p(t) + a d(s), so that u(p(t+1), r) = (1 – a)u(p(t), r) + a u(d(s), r), which is maximized by r = s if u(p(t), r) is maximized by r = s. Converse: if FP converges, the players do not want to deviate, so the limit must be a NE …

  18. Fictitious Play Assume initial weights (1.5, 2) and (2, 1.5). Then (T, T) → (1.5, 3), (2, 2.5) → (T, H), (T, H) → (H, H), (H, H), (H, H) → (H, T) … Theorem: If under FP the empirical frequencies converge, then their product converges to a NE. Proof: If the strategies converge, the players do not want to deviate, so the limit must be a NE … Theorem: Under FP, the empirical frequencies converge if one of the following holds: 2x2 with generic payoffs; zero-sum; solvable by iterated strict dominance; … Note: The empirical distributions need not converge.
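The FP rule of slides 17-18 can be simulated directly. The sketch below uses matching pennies (P1 wins on a match, P2 on a mismatch) with the slide's initial weights (1.5, 2) and (2, 1.5); its first rounds match the slide's (T, T), (T, H) trace, and the empirical frequency of H approaches the mixed-NE value 1/2.

```python
# Fictitious play in matching pennies. Each player best-responds to
# the opponent's empirical frequencies, started from initial weights.

def fictitious_play(rounds=100_000):
    c1 = [1.5, 2.0]  # P1's (weighted) counts of P2 playing (H, T)
    c2 = [2.0, 1.5]  # P2's (weighted) counts of P1 playing (H, T)
    heads1 = 0
    for _ in range(rounds):
        a1 = 'H' if c1[0] > c1[1] else 'T'  # P1 matches the likelier play
        a2 = 'T' if c2[0] > c2[1] else 'H'  # P2 mismatches it
        c1[0 if a2 == 'H' else 1] += 1
        c2[0 if a1 == 'H' else 1] += 1
        heads1 += (a1 == 'H')
    return heads1 / rounds

freq = fictitious_play()  # P1's empirical frequency of H, near 1/2
```

As the slides note, the realized play cycles (H, H), (H, T), (T, T), (T, H) even though the empirical frequencies converge.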

  19. Fictitious Play Assume initial weights (1, 20.5) for P1 and P2. Then (A, A) → (2, 20.5) → (B, B) → (A, A) → (B, B) → (A, A), etc. The empirical frequencies converge to a NE; however, the players get 0: the realized plays are correlated, not independent. (Fix: randomize …)

  20. Stochastic Fictitious Play Motivation: Avoid the discontinuity in FP. Hope for a stronger form of convergence: not only of the marginals, but also of the intended plays.

  21. Stochastic Fictitious Play Definitions: Reward of i = u(i, s) + n(i, si), where n has positive support on an interval. BR(i, s)(si) = P[n(i, si) is such that si = BR to s]. Nash Distribution: s such that si = BR(i, s) for all i. Harsanyi’s Purification Theorem: For generic payoffs, ND → NE as the support of the perturbation → 0. Key feature: the smoothed BR is continuous and close to the original BR. Example: Matching Pennies.

  22. Stochastic Fictitious Play Theorem (Fudenberg and Kreps, 93): Assume a 2x2 game has a unique mixed NE. If the smoothing is small enough, then the NE is globally stable for SFP. Theorem (K&Y 95, B&H 96): Assume a 2x2 game has a unique strict NE; then the unique intersection of the smoothed BRs is a global attractor for SFP. Assume a 2x2 game has 2 strict NE and one mixed NE; then SFP converges to one of the strict NE, w.p. 1. Note: Cycling is possible for SFP in multi-player games.
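A sketch of SFP with logit smoothing on matching pennies (the temperature tau and the initial weights are illustrative assumptions): the smoothed best response is continuous in the beliefs, and, consistent with the Fudenberg-Kreps theorem, the beliefs settle near the mixed NE (1/2, 1/2).

```python
import math
import random

# Stochastic fictitious play on matching pennies: the exact best
# response is replaced by a logit response, continuous in the beliefs.

def logit_prob_H(u_H, u_T, tau=0.2):
    """Logit rule: P(H) = exp(u_H/tau) / (exp(u_H/tau) + exp(u_T/tau))."""
    m = max(u_H, u_T)  # subtract the max for numerical stability
    eH = math.exp((u_H - m) / tau)
    eT = math.exp((u_T - m) / tau)
    return eH / (eH + eT)

def sfp(rounds=50_000, seed=0):
    rng = random.Random(seed)
    c1, c2 = [1.0, 1.0], [1.0, 1.0]  # counts of opponent's (H, T)
    for _ in range(rounds):
        p2H = c1[0] / (c1[0] + c1[1])  # P1's belief that P2 plays H
        p1H = c2[0] / (c2[0] + c2[1])  # P2's belief that P1 plays H
        # P1 gets +1 on a match, -1 otherwise; P2 the opposite
        q1 = logit_prob_H(2 * p2H - 1, 1 - 2 * p2H)
        q2 = logit_prob_H(1 - 2 * p1H, 2 * p1H - 1)
        a1_H = rng.random() < q1
        a2_H = rng.random() < q2
        c1[0 if a2_H else 1] += 1
        c2[0 if a1_H else 1] += 1
    return p2H, p1H  # final beliefs, near the mixed NE (1/2, 1/2)

b1, b2 = sfp()
```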

  23. Stochastic Fictitious Play Other justification for randomization: protection against the opponent’s mistakes. Learning rules should be • Safe: average utility ≥ minmax. • Universally consistent: utility ≥ the utility achievable if the frequency of plays were known (but not their order). Randomization can achieve universal consistency (e.g., SFP).

  24. Stochastic Fictitious Play Stimulus-Response (Reinforcement Learning): Increase the probability of plays that give good results. General observation: It is difficult to discriminate between learning models on the basis of experimental data; SFP, SR, etc. all seem about comparable.

  25. Cooperative Games • Motivation • Notions of Equilibrium • Nash Bargaining Equilibrium • Shapley Value

  26. Cooperative Games: Motivation • The Nash equilibria may not be the most desirable outcome for the players. • Typically, players benefit by cooperating. • We explore some notions of equilibrium that players achieve under cooperation.

  27. Cooperative Games: Equilibria

  28. Cooperative Games: Nash B.E. Definition: Nash Bargaining Equilibrium Interpretation: Fact
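As a sketch of the definition above: the Nash Bargaining Equilibrium maximizes the product of the players' gains over their disagreement payoffs. The feasible set u1 + u2 ≤ 1 and disagreement point (0, 0) below are illustrative assumptions, not the slide's example; under them the NBE is the even split.

```python
# Nash bargaining sketch: maximize (u1 - d1)*(u2 - d2) over the
# assumed feasible set u1 + u2 <= 1, u >= 0, with disagreement d.
# A grid search over the efficient frontier recovers the NBE.

def nash_bargaining(d=(0.0, 0.0), n=10_000):
    best_prod, best_u = -1.0, None
    for k in range(n + 1):
        u1 = k / n
        u2 = 1.0 - u1  # stay on the frontier u1 + u2 = 1
        prod = (u1 - d[0]) * (u2 - d[1])
        if prod > best_prod:
            best_prod, best_u = prod, (u1, u2)
    return best_u

u1, u2 = nash_bargaining()  # -> (0.5, 0.5)
```

Shifting the disagreement point, e.g. d = (0.2, 0.0), shifts the NBE in player 1's favor, as the definition predicts.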

  29. Cooperative Games: Nash B.E. Example: NBE: Social:

  30. Cooperative Games: Nash B.E. Axiomatic Justification: At the NE, the sum of the relative increases is zero.

  31. Shapley Value Example: Shapley Value:
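The Shapley value can be computed by averaging each player's marginal contribution over all orders of arrival. The 3-player majority game below (a coalition is worth 1 iff it has at least 2 members) is an illustrative choice, not the slide's example; by symmetry each player's value is 1/3.

```python
from itertools import permutations

# Shapley value: a player's value is its marginal contribution,
# averaged over all arrival orders of the players.

def shapley(players, v):
    value = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            value[i] += v(coalition | {i}) - v(coalition)
            coalition = coalition | {i}
    return {i: value[i] / len(orders) for i in players}

majority = lambda S: 1.0 if len(S) >= 2 else 0.0
phi = shapley([1, 2, 3], majority)  # each player gets 1/3
```

Enumerating all n! orders is fine for small n; for larger games one samples random orders instead.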

  32. Fixed Point Theorems Theorem (Brouwer):

  33. Brouwer [Figure: Sperner-labeled triangulation, with boundary labels (1, 3) and (2, 3).] One path through the doors (1, 2) must end up in a triangle labeled (1, 2, 3). [Indeed: there is an odd number of boundary doors.]

  34. Brouwer Take a small triangle (1, 2, 3). Divide it into triangles as before; it contains another (1, 2, 3) triangle. Continue in this way. Pick z(n) in triangle n. Let z = lim z(n). Claim: f(z) = z. Proof: If f(z) is not z, then z(n) and f(z(n)) are in different small triangles at stage n; but then z(n) cannot be in a (1, 2, 3) triangle …
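In one dimension, Brouwer's theorem follows from the intermediate value theorem, and a fixed point can be located by bisection on g(x) = f(x) − x; the map f(x) = cos(x) on [0, 1] is an illustrative choice.

```python
import math

# Brouwer in 1-D: a continuous f : [0, 1] -> [0, 1] has a fixed point.
# Since g(x) = f(x) - x has g(0) >= 0 and g(1) <= 0, bisection finds one.

def brouwer_fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection on g(x) = f(x) - x; assumes f maps [lo, hi] into itself."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) - mid >= 0.0:
            lo = mid  # keep the invariant g(lo) >= 0
        else:
            hi = mid  # keep the invariant g(hi) <= 0
    return 0.5 * (lo + hi)

f = math.cos  # cos maps [0, 1] into [cos 1, 1], a subset of [0, 1]
z = brouwer_fixed_point(f)  # the unique fixed point of cos, about 0.739
```

Bisection is special to one dimension; the Sperner-triangle argument on the slides is what generalizes the existence proof to higher dimensions.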

  35. Notes on Kakutani Theorem:

  36. Kakutani

  37. Kakutani
