
Algorithmic Game Theory (because a game is nice when it’s not too long!)



Presentation Transcript


  1. Algorithmic Game Theory (because a game is nice when it’s not too long!) Guido Proietti Dipartimento di Informatica, Università degli Studi dell’Aquila & Istituto di Analisi dei Sistemi ed Informatica – CNR Roma

  2. Roadmap • Nash Equilibria (NE) • Does a NE always exist? • Can a NE be feasibly computed, once it exists? • Which is the “quality” of a NE? • How long does it take to converge to a NE? • Algorithmic Mechanism Design • Which social goals can be (efficiently) implemented in a non-cooperative selfish distributed system? • VCG-mechanisms and one-parameter mechanisms • Mechanism design for some basic network design problems

  3. FIRST PART: Nash equilibria

  4. Two Research Traditions • Theory of Algorithms: computational issues • What can be feasibly computed? • How long does it take to compute a solution? • Which is the quality of a computed solution? • Centralized or distributed computational models • Game Theory: interaction between self-interested individuals • What is the outcome of the interaction? • Which social goals are compatible with selfishness?

  5. Different Assumptions • Theory of Algorithms (in DCMs): • Processors are obedient, faulty, or adversarial • Large systems, limited comp. resources • Game Theory: • Players are strategic(selfish) • Small systems, unlimited comp. resources

  6. The Internet World • Agents are often autonomous (users) • Users have their own individual goals • Network components are owned by providers • Often involves “Internet” scales • Massive systems • Limited communication/computational resources ⇒ Both strategic and computational issues!

  7. Fundamental Questions • What are the computational aspects of a game? • What does it mean to design an algorithm for a strategic distributed system? Algorithmic Game Theory = Theory of Algorithms + Game Theory

  8. Game Theory • Given a game, predict the outcome by analyzing the individual behavior of the players (agents) • Game: • N players • Rules of encounter: Who should act when, and what are the possible actions • Outcomes of the game

  9. Normal Form Games • N rational and non-cooperative players • Si = strategy set of player i • The strategy combination (s1, s2, …, sN) gives payoff (p1, p2, …, pN) to the N players • All the above information is known to all the players, and it is common knowledge • Simultaneous move: each player i chooses a strategy si ∈ Si (nobody can observe the others’ moves)

  10. Equilibrium • An equilibrium s* = (s1*, s2*, …, sN*) is a strategy combination consisting of a best strategy for each of the N players in the game • What is a best strategy? It depends on the game… informally, it is a strategy that a player selects in trying to maximize his individual payoff, knowing that the other players are doing the same

  11. Dominant Strategy Equilibrium: Prisoner’s Dilemma [Payoff matrix: each prisoner’s strategy set is {Implicate, Don’t Implicate}; each cell reports the two prisoners’ payoffs (years in jail).]

  12. Prisoner I’s decision • Prisoner I’s decision: • If II chooses Don’t Implicate then it is best to Implicate • If II chooses Implicate then it is best to Implicate • It is best to Implicate for I, regardless of what II does: Dominant Strategy

  13. Prisoner II’s decision • Prisoner II’s decision: • If I chooses Don’t Implicate then it is best to Implicate • If I chooses Implicate then it is best to Implicate • It is best to Implicate for II, regardless of what I does: Dominant Strategy

  14. Hence… • It is best for both to implicate regardless of what the other one does • Implicate is a Dominant Strategy for both • (Implicate, Implicate) becomes the Dominant Strategy Equilibrium • Note: if they could collude, then it would be beneficial for both to Not Implicate, but this is not an equilibrium, as both have an incentive to deviate

  15. Dominant Strategy Equilibrium • Dominant Strategy Equilibrium: a strategy combination s* = (s1*, s2*, …, sN*) such that si* is a dominant strategy for each i, namely, for each s = (s1, s2, …, si, …, sN): pi(s1, s2, …, si*, …, sN) ≥ pi(s1, s2, …, si, …, sN) • A dominant strategy is a best response to any strategy of the other players • It is good for an agent, as it need not deliberate about the other agents’ strategies • Not all games have a dominant strategy equilibrium
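
A minimal sketch of how this definition can be checked mechanically for a two-player game in normal form. The Prisoner’s Dilemma payoffs below are assumptions (expressed as negative years in jail); the deck only fixes the totals of 10 years at the equilibrium and 2 years when neither implicates.

```python
# Hedged sketch: brute-force search for dominant strategies in a 2-player game.
# The off-diagonal payoffs (0 and -6) are assumed, not taken from the slides.

STRATEGIES = ["Implicate", "Don't Implicate"]

# PAYOFF[(s1, s2)] = (payoff of player I, payoff of player II), as negative years in jail
PAYOFF = {
    ("Implicate", "Implicate"): (-5, -5),
    ("Implicate", "Don't Implicate"): (0, -6),
    ("Don't Implicate", "Implicate"): (-6, 0),
    ("Don't Implicate", "Don't Implicate"): (-1, -1),
}

def is_dominant(player, candidate):
    """True iff `candidate` is a best response to every strategy of the opponent."""
    for opp in STRATEGIES:
        for other in STRATEGIES:
            cand_profile = (candidate, opp) if player == 0 else (opp, candidate)
            other_profile = (other, opp) if player == 0 else (opp, other)
            if PAYOFF[cand_profile][player] < PAYOFF[other_profile][player]:
                return False
    return True

for p in (0, 1):
    for s in STRATEGIES:
        if is_dominant(p, s):
            print(f"Player {'I' if p == 0 else 'II'}: dominant strategy = {s}")
# Both players' dominant strategy is Implicate, so (Implicate, Implicate) is the DSE.
```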

  16. A Beautiful Mind: Nash Equilibrium • Nash Equilibrium: a strategy combination s* = (s1*, s2*, …, sN*) such that for each i, si* is a best response to (s1*, …, si-1*, si+1*, …, sN*), namely, for any possible alternative strategy si: pi(s*) ≥ pi(s1*, s2*, …, si, …, sN*) • Note: we are playing simultaneous games, and so nobody knows a priori the choices of the other agents

  17. Nash Equilibrium • In a NE, no agent can unilaterally improve by deviating from its strategy, given the others’ strategies as fixed • There may be no, one, or many NE, depending on the game • An agent has to deliberate about the strategies of the other agents • If the game is played repeatedly and the players converge to a solution, then it has to be a NE • Dominant Strategy Equilibrium ⇒ Nash Equilibrium (but the converse is not true)

  18. Nash Equilibrium: The Battle of the Sexes (coordination game) • (Stadium, Stadium) is a NE: the strategies are best responses to each other • (Cinema, Cinema) is a NE: the strategies are best responses to each other ⇒ but they are not Dominant Strategy Equilibria… are we really sure they will eventually go out together?
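
A minimal sketch of how the pure NE of such a game can be enumerated by checking all strategy profiles. The Battle of the Sexes payoffs below are assumptions (the deck shows the matrix only as a figure): both players prefer going out together, each with a bonus for their favourite venue.

```python
# Hedged sketch: enumerate the pure-strategy Nash equilibria of a 2-player game.
from itertools import product

S1 = S2 = ["Stadium", "Cinema"]
PAYOFF = {                      # assumed Battle of the Sexes payoffs (I, II)
    ("Stadium", "Stadium"): (2, 1),
    ("Stadium", "Cinema"): (0, 0),
    ("Cinema", "Stadium"): (0, 0),
    ("Cinema", "Cinema"): (1, 2),
}

def pure_nash_equilibria():
    eqs = []
    for s1, s2 in product(S1, S2):
        best1 = all(PAYOFF[(s1, s2)][0] >= PAYOFF[(d, s2)][0] for d in S1)
        best2 = all(PAYOFF[(s1, s2)][1] >= PAYOFF[(s1, d)][1] for d in S2)
        if best1 and best2:     # no unilateral deviation is profitable
            eqs.append((s1, s2))
    return eqs

print(pure_nash_equilibria())   # [('Stadium', 'Stadium'), ('Cinema', 'Cinema')]
```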

  19. A conflictual game: Head or Tail • Player I (row) prefers to do what Player II does, while Player II prefers to do the opposite of what Player I does! ⇒ In any configuration, one of the players prefers to change his strategy, and so on and so forth… thus, there is no NE (in pure strategies)!

  20. Three big computational issues • Finding a NE, once it does exist • Establishing the quality of a NE, as compared to a cooperative system, i.e., a system in which agents can cooperate (recall the Prisoner’s Dilemma) • In a repeated game, establishing whether and in how many steps the system will eventually converge to a NE (recall the Battle of the Sexes)

  21. On the existence of a NE Theorem (Nash, 1951): Any game with a finite set of players and finite set of strategies has a NE of mixed strategies. • Mixed strategies: each player independently selects his strategy by using a probability distribution over his set of possible strategies • Head or Tail game: if each player sets p(Head)=p(Tail)=1/2, then the expected payoff of each player is 0, and this is a NE, since no player can improve on this by choosing a different randomization!
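
A minimal sketch verifying the claim for the Head or Tail game. The payoff matrix below is the usual matching-pennies one, an assumption consistent with the slide (Player I wants to match, Player II wants to mismatch); since any mixed deviation is a convex combination of pure strategies, checking pure deviations suffices.

```python
# Hedged sketch: check that the uniform mixed profile of Head or Tail is a NE.

STRATS = ["Head", "Tail"]
PAYOFF = {                       # assumed matching-pennies payoffs (I, II)
    ("Head", "Head"): (1, -1),
    ("Head", "Tail"): (-1, 1),
    ("Tail", "Head"): (-1, 1),
    ("Tail", "Tail"): (1, -1),
}

def expected_payoff(player, p1, p2):
    """Expected payoff when I plays Head w.p. p1 and II plays Head w.p. p2."""
    probs = ({"Head": p1, "Tail": 1 - p1}, {"Head": p2, "Tail": 1 - p2})
    return sum(probs[0][a] * probs[1][b] * PAYOFF[(a, b)][player]
               for a in STRATS for b in STRATS)

# At p1 = p2 = 1/2 both players expect 0 ...
print(expected_payoff(0, 0.5, 0.5), expected_payoff(1, 0.5, 0.5))   # 0.0 0.0
# ... and no unilateral (pure, hence also mixed) deviation does better:
print(max(expected_payoff(0, q, 0.5) for q in (0.0, 1.0)))          # 0.0
print(max(expected_payoff(1, 0.5, q) for q in (0.0, 1.0)))          # 0.0
```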

  22. On the computability of a NE • But how do we select this probability distribution? It looks like a continuous problem… but actually it’s not! It can be shown that such a distribution can be found by checking all the (exponentially many) possible combinations of the underlying pure strategies (the supports) of each player • And why should we be interested in that? Because “If your laptop cannot find a NE, then the market probably cannot either”
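
A minimal sketch of the support-checking idea on the smallest possible case, a 2×2 game with a fully mixed equilibrium: on a full support, each player must mix so that the other player is indifferent between her pure strategies, which reduces to solving one linear equation per player (degenerate games with a vanishing denominator are ignored here). The matching-pennies payoffs for Head or Tail are assumed as before.

```python
# Hedged sketch: solve the indifference conditions of a 2x2 game with full support.

A = [[1, -1], [-1, 1]]   # player I's payoffs (rows = I's strategies: Head, Tail)
B = [[-1, 1], [1, -1]]   # player II's payoffs

# p = Pr[I plays Head], chosen so that II is indifferent between her two columns:
#   p*B[0][0] + (1-p)*B[1][0] = p*B[0][1] + (1-p)*B[1][1]
p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
# q = Pr[II plays Head], chosen so that I is indifferent between his two rows:
#   q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
q = (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])
print(p, q)              # 0.5 0.5 -- the mixed NE of Head or Tail
```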

  23. Is finding a NE NP-hard? • W.l.o.g., we restrict ourselves to 2-player games: the problem can be solved by a simplex-like technique called the Lemke–Howson algorithm, which however is exponential in the worst case • Reminder: a problem P is NP-hard if one can reduce any NP-complete problem P’ to it: • “yes”-instance of P’ → “yes”-instance of P • “no”-instance of P’ → “no”-instance of P • But each instance of 2-NASH is a “yes”-instance! (since every game has a NE) ⇒ if 2-NASH is NP-hard then NP = coNP (hard to believe!)

  24. The complexity class PPAD • Definition (Papadimitriou, 1994): roughly speaking, PPAD (Polynomial Parity Argument – Directed case) is the class of all problems whose solution space can be set up as the set of all sinks in a suitable directed graph (generated by the input instance), having, though, an exponential number of vertices in the size of the input. • Remark: it could very well be that PPAD = P ≠ NP… but several PPAD-complete problems have resisted poly-time attacks for decades (e.g., finding Brouwer fixed points)

  25. 2-NASH is PPAD-complete! • 3D-BROUWER is PPAD-complete (Papadimitriou, JCSS’94) • 4-NASH is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou, STOC’06) • 3-NASH is PPAD-complete (Daskalakis & Papadimitriou, ECCC’05; Chen & Deng, ECCC’05) • 2-NASH is PPAD-complete!!! (Chen & Deng, FOCS’06)

  26. On the quality of a NE • How inefficient is a NE in comparison to an idealized situation in which the players would strive to collaborate selflessly with the common goal of maximizing the social welfare? • Recall: in the Prisoner’s Dilemma game, the DSE (which is also a NE) means a total of 10 years in jail for the players. However, if neither implicated the other, they would stay a total of only 2 years in jail!

  27. The price of anarchy • Definition (Koutsoupias & Papadimitriou, 1999): given a game G and a social-choice minimization (resp., maximization) function f (e.g., the sum of all players’ payoffs), let S be the set of NE, and let OPT be the outcome of G optimizing f. Then, the Price of Anarchy (PoA) of G w.r.t. f is the worst-case ratio between the value of f at a NE and its optimal value: ρG(f) = max{ f(s)/f(OPT) : s ∈ S } • Example: in the PD game, ρG(f) = -10/-2 = 5

  28. A case study: selfish routing on the Internet • Internet components are made up of heterogeneous nodes and links, and the network architecture is open-based and dynamic • Internet users behave selfishly: they generate traffic, and their only goal is to download/upload data as fast as possible! • But the more a link is used, the slower it gets, and there is no central authority “optimizing” the data flow… • So, why does the Internet eventually work in such a jungle?

  29. Example: Pigou’s game (network congestion game) [Two parallel edges from s to t: the latency of the upper edge depends on the congestion and equals x, the fraction of flow using the edge, while the latency of the lower edge is fixed to 1.] • What is the NE of this game? Trivial: all the flow tends to travel on the upper edge ⇒ the cost of the flow is 1·1 + 0·1 = 1 • What is the PoA of this NE? The optimal solution is the minimum of f(x) = x·x + (1-x)·1 ⇒ f’(x) = 2x-1 ⇒ OPT = 1/2 ⇒ f(OPT) = 1/2·1/2 + (1-1/2)·1 = 0.75 ⇒ ρG(f) = 1/0.75 = 4/3
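
A minimal sketch reproducing the computation on this slide numerically; the network and latencies are exactly those of the Pigou example, and the grid minimisation is only illustrative.

```python
# Hedged sketch: cost of the Nash flow vs. the optimal flow in Pigou's game.

def cost(x):
    """Total latency when a fraction x uses the upper edge (latency x) and 1-x the lower (latency 1)."""
    return x * x + (1 - x) * 1

nash_cost = cost(1.0)                                  # at the NE all flow goes up: cost 1
opt_cost = min(cost(i / 1000) for i in range(1001))    # numerical minimisation over [0, 1]
print(nash_cost, opt_cost, nash_cost / opt_cost)       # 1.0  0.75  1.333... = 4/3
```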

  30. Flows and NE • Assume now we are given a directed graph G = (V,E) and a set of source–sink pairs si, ti ∈ V between which selfish users want to push a certain amount of flow. Then, a flow is at Nash equilibrium (or is a Nash flow) if no agent can improve its latency by changing its path • Theorem (Beckmann et al., 1956): If edge latency functions are continuous and non-decreasing, and users control an infinitesimal amount of flow, then the Nash flow exists and is unique.
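
A minimal sketch of one way to see why the Pigou example above has that Nash flow: under the assumptions of the theorem, a Nash flow minimises the Beckmann potential, i.e., the sum over the edges of the integral of the edge latency from 0 to the edge flow. This characterisation goes beyond what the slide states but is part of the same classical result; the grid search below is only illustrative.

```python
# Hedged sketch: the minimiser of the Beckmann potential is the Nash flow of Pigou's game.

def beckmann_potential(x):
    # upper edge: integral of t dt over [0, x]; lower edge: integral of 1 dt over [0, 1-x]
    return x * x / 2 + (1 - x)

best_x = min((i / 1000 for i in range(1001)), key=beckmann_potential)
print(best_x)   # 1.0 -- all flow on the upper edge, exactly the Nash flow seen above
```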

  31. Flows and Price of Anarchy • Theorem 1: In a network with linear latency functions, the cost of a Nash flow is at most 4/3 times that of the minimum-latency flow. • Theorem 2: In a network with general (continuous, non-decreasing) latency functions, the cost of a Nash flow routing a given amount of traffic is at most the cost of a minimum-latency flow routing twice as much traffic. (Roughgarden & Tardos, JACM’02)

  32. A bad example for non-linear latencies Assume i >> 1. [Two parallel edges from s to t: the upper edge has latency x^i, the lower edge has fixed latency 1; the Nash flow routes everything on the upper edge, while the optimal flow routes 1-ε on the upper edge and ε on the lower one.] A Nash flow (of cost 1) is arbitrarily more expensive than the optimal flow (of cost close to 0)
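
A minimal sketch making the gap concrete on this two-edge network: the Nash cost stays at 1, while the optimal cost (found here by a crude grid search over the split) vanishes as i grows, so the ratio is unbounded.

```python
# Hedged sketch: Nash vs. optimal cost on the two-edge network with latencies x**i and 1.

def cost(x, i):
    return x * x**i + (1 - x) * 1   # a fraction x on the upper edge, 1-x on the lower

for i in (1, 10, 100, 1000):
    nash = cost(1.0, i)                                 # the Nash flow always costs 1
    opt = min(cost(k / 1000, i) for k in range(1001))   # tends to 0 as i grows
    print(i, nash, round(opt, 4))                       # the ratio nash/opt blows up
```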

  33. Convergence towards a NE (in games with pure strategies) • Ok, we know that selfish routing is not so bad at its NE, but are we really sure this point of equilibrium will eventually be reached? • Convergence time: the number of moves made by the players to reach a NE from an arbitrary initial state • Question: Is the convergence time (polynomially) bounded in the number of players?

  34. The potential function method • Roughly speaking, a potential function for a game is a real-valued function, defined on the set of possible outcomes of the game, such that the equilibria of the game are precisely the local optima of the potential function • Potential games: broad class of games admitting a potential function • Theorem: In any finite potential game, best response dynamics always converge to a NE of pure strategies.
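
A minimal sketch of best-response dynamics in a tiny potential game, assumed here for illustration (it is not taken from the deck): three players each choose one of two identical links, and the delay of a link equals its load, i.e., a singleton congestion game. Rosenthal’s potential strictly decreases at every improving move, which is why the dynamics must stop at a pure NE.

```python
# Hedged sketch: best-response dynamics and Rosenthal's potential in a singleton congestion game.

def delay(load):                   # latency of a link carrying `load` players
    return load

def player_cost(state, i):
    link = state[i]
    return delay(sum(1 for s in state if s == link))

def potential(state):              # Rosenthal's potential: sum_r sum_{k=1..load(r)} delay(k)
    total = 0
    for link in set(state):
        load = sum(1 for s in state if s == link)
        total += sum(delay(k) for k in range(1, load + 1))
    return total

state = [0, 0, 0]                  # three players, all starting on link 0
improved = True
while improved:
    improved = False
    for i in range(len(state)):
        for alt in (0, 1):
            trial = state[:i] + [alt] + state[i + 1:]
            if player_cost(trial, i) < player_cost(state, i):
                print(f"player {i} moves to link {alt}: potential "
                      f"{potential(state)} -> {potential(trial)}")
                state, improved = trial, True
print("pure NE reached:", state)   # two players on one link, one on the other
```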

  35. Potential function and congestion games • How many steps are needed to reach a NE? It depends on the combinatorial structure of the players' strategy space • Definition (Matroid Congestion Games): A congestion game G is called an MCG if: • The strategy space of every player is the set of bases of a matroid over the set of congested resources (recall that the size of each such basis equals the rank of the player's matroid); • The rank of the game, r(G), is defined to be the maximum matroid rank over all players.

  36. Convergence in congestion games • Theorem (Ackermann et al., FOCS'06): In an MCG G with n players and m resources, all best-response improvement sequences have length O(n²·m·r(G)). • Example of an MCG: load balancing • Instead, in general, a network congestion game is not an MCG • Moreover, it is possible to show that there exist instances for which the convergence time is exponential (unless finding a local optimum of any problem in the class PLS (Polynomial Local Search) can be done in polynomial time, contrary to common belief). Still, the Internet works quite fine!

  37. And now…

  38. SECOND PART: Algorithmic Mechanism Design (or, the art of convincing a capitalist to behave like a socialist)

  39. Mechanism Design: the goal • Given: • A system comprising self-interested, rational agents • A set of system-wide goals • Mechanism Design: • Does there exist a mechanism that can implement the goals? • Implementation of the goals depends on the individual behavior of the agents

  40. Mechanism Design: a picture [Each agent i holds a private type ti and reports a type ri to the mechanism; the mechanism computes an output and a payment pi for each agent.] Each agent reports strategically to maximize its well-being… …in response to a payment which is a function of the output!

  41. Overview of the results • Algorithms → Mechanisms • Centralized → Decentralized (non-cooperative) • Network design problems

  42. Mechanism Design • Games induced by mechanisms are different from games in standard form: • Players hold independent private values • The payoff matrix is a function of these types ⇒ each player doesn’t really know the other players’ payoffs, but only its own! ⇒ Games of incomplete information ⇒ the Dominant Strategy Equilibrium is used

  43. Mechanism Design Problem: Type of an Agent • N agents, and each agent has some private information called the type, ti ∈ Ti, and performs a strategic action • We restrict ourselves to direct revelation mechanisms, in which the action is reporting a type ri ∈ Ti (with possibly ri ≠ ti) • Example: Auction Game • Each agent knows its cost for doing a job, but not the others’; the type of an agent is its cost • Ti = [0, +∞): the agent’s cost may be any positive amount of money • ti = 80: minimum amount of money agent i is willing to be paid • ri = 85: exact amount of money agent i bids to the system for doing the job (not known to the other agents)

  44. Mechanism Design Problem: Output Specification • F is the set of feasible outputs • Output specification: for a given reported-type configuration r = (r1, r2, …, rN), it specifies a valid outcome x(r) ∈ F which should optimize an objective function f(t) (the so-called social-choice function) • Example: Auction Game • F: the different possible winners of the auction • f(t) = min(t1, t2, …, tN) (the lowest true cost) • x(r): allocate the job to the bidder with the lowest reported cost

  45. Mechanism Design Problem: Valuation and Utility • If x is the outcome, then the valuation that agent i makes of x is given by a real-valued function vi(ti,x) • Auction Game: if agent i wins the auction then its valuation equals its actual cost for doing the job, otherwise it is 0 • If pi is the payment given to the agent, then the utility of the outcome x is: ui(ti,x) = pi - vi(ti,x) • Auction Game: if the agent’s cost for the job is 90, and it gets the contract for 100 (i.e., it is paid 100), then its utility is 10

  46. The Mechanism A mechanism is a pair M = <x = g(r), p(x)> specifying: • An algorithm g(r) which computes the outcome x as a function of the reported types r • A payment scheme p as a function of the output x

  47. Strategy-Proof Mechanisms • If truth telling is the dominant strategy in a mechanism, then the mechanism is called strategy-proof • Agents report their true types instead of strategically manipulating them • Utilitarian problems: a problem is utilitarian if its objective function is such that f(t) = Σi vi(ti,x) • The Auction Game is utilitarian

  48. Vickrey-Clarke-Groves (VCG) Mechanisms • A VCG-mechanism is (the only) strategy-proof mechanism for utilitarian problems: • Algorithm: g(r) = arg maxx∈F Σi vi(ri,x) • Payment function: pi(x) = hi(r-i) - Σj≠i vj(rj,x), where hi(r-i) is an arbitrary function of the types of the other players • What about non-utilitarian problems? We will see…

  49. VCG-Mechanisms are Strategy-Proof • Proof (intuitive sketch): • The payment given to agent i is pi(x) = hi(r-i) - Σj≠i vj(rj,x); the term hi(r-i) does not depend on agent i’s report at all, and the remaining terms depend on it only through the chosen outcome x • So it is best for agent i to report its true value: strategic behavior does not lead to a better outcome. ■

  50. Clarke Mechanisms • This is a special VCG-mechanism (known as the Clarke mechanism) in which hi(r-i) = Σj≠i vj(rj, x(r-i)), where x(r-i) is the outcome the mechanism would compute if agent i did not participate • pi = Σj≠i vj(rj, x(r-i)) - Σj≠i vj(rj,x) • In Clarke mechanisms, the agents’ utilities are always non-negative
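
A minimal sketch of the Clarke mechanism on the deck’s auction game, under the deck’s conventions (the outcome rule allocates the job to the lowest reported cost, valuations are costs, and utility is payment minus cost). With these choices the Clarke payments reduce to paying the winner the lowest bid among the other agents and paying the losers nothing; the concrete bid values are assumptions.

```python
# Hedged sketch: Clarke (VCG) payments for the single-job procurement auction.

def clarke_auction(bids):
    """bids[i] = reported cost of agent i; returns (winner, payments)."""
    n = len(bids)
    winner = min(range(n), key=lambda i: bids[i])            # x(r): lowest reported cost wins
    payments = [0] * n
    # h_i(r_-i) - sum_{j != i} v_j(r_j, x): for the winner this is the best bid among
    # the others minus 0; for every loser the two terms cancel, so the payment is 0.
    payments[winner] = min(b for i, b in enumerate(bids) if i != winner)
    return winner, payments

# Example with the numbers from the earlier slides: agent 0's true cost is 80, and it bids 85.
winner, pay = clarke_auction([85, 100, 120])
print(winner, pay)   # 0 [100, 0, 0] -> agent 0's utility is 100 - 80 = 20
# Bidding the true cost 80 would give the same payment and utility: shading the bid can
# only risk losing the job, never raise the payment -- truth telling is dominant here.
```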
