7. Game Theory and the Tools of Strategic Business Analysis
• Many decisions that an entrepreneur faces are in strategic situations, i.e. the outcome for the entrepreneur depends not only on his own choice, but also on the choices of other agents
• Ex: motivating employees, preventing another firm from entering the market
• These situations can be modeled as strategic games
• game theory allows us to determine equilibria: states where no player wants to change his strategy
• we will distinguish between different types of games, in particular those with perfect and imperfect information and those with complete and incomplete information
7.1 Games of Strategy Defined
• the crucial aspect of a strategic game is that the utility of one person depends not only on her own choices but also on those of others
• A strategic game is defined by
• a set of agents (players),
• their available choices (strategies) and
• rules that tell us
• at which stage which player moves,
• what type of information players have about the choices of other players when they make their own choices and
• the utility outcomes for each player for each possible combination of the players’ strategies
• if chance is involved, it is modeled by the introduction of a chance player, usually called “nature”
A game can be represented in extensive form as a game tree, which contains a detailed description of the rules (Fig 7.1)
• the nodes of the game tree show the sequence in which the players move, the available choices of the player at each node, and the information of this player
• an information set indicates what a player knows when it is his turn. A player always knows which information set he is in, but this knowledge can be more or less precise
• if every information set contains a single node, the player knows the preceding choices of the other players: he has perfect information
• otherwise the player does not know which node he is at, hence he does not know all the preceding choices: he has imperfect information
• the final nodes show the players’ payoffs for the respective strategy combinations
A game can also be presented in normal form
• a strategy is a complete action plan for the game, i.e. it determines the choice for each node the player could end up at
• the normal form shows the available strategies of the players in a table (Table 7.1)
• the table shows the outcomes for both players for all possible strategy combinations
• note: the strategy of the second player can condition on the choice of the first player if he has perfect information
• Ex: matching pennies: imperfect information; in normal form each player has only two strategies that cannot condition on the other’s strategy; zero-sum game (Fig 7.2, Table 7.2)
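The normal form of a small game can be encoded directly as a payoff table. A minimal sketch in Python for matching pennies (the dictionary layout is an illustrative choice, not the book’s notation):

```python
# Matching pennies in normal form (cf. Table 7.2): each player picks
# heads ("H") or tails ("T"); player 1 wins if the pennies match.
# payoffs[(row, col)] = (payoff of player 1, payoff of player 2)
payoffs = {
    ("H", "H"): (1, -1),
    ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1),
    ("T", "T"): (1, -1),
}

# The game is zero-sum: the two payoffs cancel in every cell.
is_zero_sum = all(p1 + p2 == 0 for p1, p2 in payoffs.values())
```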
7.2 Equilibria for Games
• we deal here with noncooperative games, i.e. groups of players cannot communicate and make binding agreements about their actions (cooperative game theory deals with such situations)
• hence the incentives of individual players, not of groups of players, are relevant
• when analyzing games we are looking for equilibria
• an equilibrium is a constellation of strategy choices (one by each player) such that no player has an incentive to change his strategy, assuming that the other players do not change theirs
• in an equilibrium each player thus chooses a best response to the strategies chosen by the other players, i.e. a strategy that yields the maximal (expected) payoff given the other players’ strategies
The fundamental concept of game theory is the Nash equilibrium
• let n be the number of players and Si the set of strategies available to player i
• the strategy choices of all players can be listed in an array s = (s1, s2, ..., sn), and πi(s1, s2, ..., sn) is the payoff of player i given these strategy choices
• then s* = (s1*, s2*, ..., sn*) is a Nash equilibrium if for all i ∈ {1, ..., n} and all si ∈ Si:
πi(s1*, s2*, ..., si*, ..., sn*) ≥ πi(s1*, s2*, ..., si, ..., sn*)
• this means: if all players other than i play their equilibrium strategies, then player i cannot do better than also play his equilibrium strategy si*, i.e. no player can benefit by (unilaterally) deviating from the equilibrium s*; hence no rational player will deviate, and thus s* is an equilibrium
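For a two-player game in normal form, the defining condition can be checked mechanically. A minimal sketch (the coordination game and its payoffs are illustrative, not from the text):

```python
def is_nash(payoffs, s1, s2):
    """Check whether (s1, s2) is a Nash equilibrium of a two-player
    normal-form game. payoffs[(a, b)] = (payoff of player 1, payoff
    of player 2). Nash condition: no player gains by a unilateral
    deviation."""
    rows = {a for a, _ in payoffs}
    cols = {b for _, b in payoffs}
    p1, p2 = payoffs[(s1, s2)]
    no_deviation_1 = all(payoffs[(a, s2)][0] <= p1 for a in rows)
    no_deviation_2 = all(payoffs[(s1, b)][1] <= p2 for b in cols)
    return no_deviation_1 and no_deviation_2

# A simple coordination game (illustrative payoffs): both (A, A) and
# (B, B) satisfy the condition; miscoordinated profiles do not.
game = {
    ("A", "A"): (2, 2), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}
```

Note that the check only verifies a candidate profile; finding equilibria in the first place is the subject of the solution methods below.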
Dominant-strategy equilibria
• the definition of a Nash equilibrium only tells us how to check whether some configuration of strategies is an equilibrium, not how to find one or why players would pick a certain equilibrium
• for some games it is easy to see why a particular equilibrium emerges
• example: prisoner’s dilemma game (see App. B)
• here both players have a dominant strategy, i.e. playing C yields a higher payoff no matter what the other player does
• if both players have a dominant strategy, then both choosing that strategy is an equilibrium, called a dominant-strategy equilibrium: if choosing C is best for me no matter what the other player does, then it is a best response in particular to C, and hence (C, C) is an equilibrium
• if the strategies are strictly dominant, it is the only equilibrium
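Dominance can be tested strategy by strategy. A sketch with a prisoner’s dilemma whose payoffs are illustrative (C = confess, S = stay silent; not the exact numbers of the book’s Appendix B):

```python
def strictly_dominates(payoffs, player, s_star, s):
    """True if strategy s_star yields the given player (0 = row,
    1 = column) a strictly higher payoff than s against every
    opponent strategy."""
    opponent_strats = {k[1 - player] for k in payoffs}
    return all(
        payoffs[(s_star, o) if player == 0 else (o, s_star)][player]
        > payoffs[(s, o) if player == 0 else (o, s)][player]
        for o in opponent_strats
    )

# Illustrative prisoner's dilemma: confessing is strictly dominant
# for both players, so (C, C) is the dominant-strategy equilibrium.
pd = {
    ("C", "C"): (-8, -8),
    ("C", "S"): (0, -10),
    ("S", "C"): (-10, 0),
    ("S", "S"): (-1, -1),
}
```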
Solving Games by Elimination of Dominated Strategies
• a strategy s is dominated by another strategy s* if s yields a lower payoff than s* against all strategies of the other players
• more precisely: s* strictly dominates s if s* is strictly better than s against all strategy combinations of the other players; s* weakly dominates s if s* is at least as good as s against all strategy combinations of the other players and strictly better against at least one of them
• a rational player should not choose a dominated strategy
• hence one can try to find equilibria by eliminating dominated strategies and looking for an equilibrium in the reduced game (Table 7.7)
• it may, however, be risky to rely on another player not playing a weakly dominated strategy
Solving Games by Iterated Elimination of Dominated Strategies
• elimination of dominated strategies can be repeated
• a necessary assumption is that players are rational, that they know that the others are rational, that they know that the others know that they are rational, and so on
• this is called common knowledge of rationality
• ex: (Table 7.8)
• when iterated elimination of dominated strategies leads to a unique outcome, the game is called dominance solvable
• when at some stage the domination is only weak, the outcome may depend on the order of elimination (ex. Table 7.9)
• why is the result always an equilibrium?
• Experiment: guessing game
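The iteration for strict dominance can be sketched directly; the 2x3 game below is an illustrative dominance-solvable example, not one of the book’s tables:

```python
def iterated_elimination(payoffs):
    """Iterated elimination of strictly dominated pure strategies in
    a two-player normal-form game. payoffs[(r, c)] = (payoff of
    player 1, payoff of player 2). Returns the surviving row and
    column strategies."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    changed = True
    while changed:
        changed = False
        for s in list(rows):
            # s is eliminated if some other row t beats it against
            # every surviving column strategy
            if any(all(payoffs[(t, c)][0] > payoffs[(s, c)][0] for c in cols)
                   for t in rows if t != s):
                rows.remove(s)
                changed = True
        for s in list(cols):
            if any(all(payoffs[(r, t)][1] > payoffs[(r, s)][1] for r in rows)
                   for t in cols if t != s):
                cols.remove(s)
                changed = True
    return rows, cols

# Illustrative dominance-solvable game: M strictly dominates R for
# the column player; after removing R, U dominates D; finally M
# dominates L, leaving the unique outcome (U, M).
game = {
    ("U", "L"): (1, 0), ("U", "M"): (1, 2), ("U", "R"): (0, 1),
    ("D", "L"): (0, 3), ("D", "M"): (0, 1), ("D", "R"): (2, 0),
}
```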
Games with Many Equilibria
• ex: telephone game: if no player tries to call or both do, both have payoff 0; if only one player calls, his payoff is 3 and the other’s is 6 (Table 7.10)
• two equilibria: exactly one player calls
• refinement concepts are used to select one equilibrium; no general introduction here
Games with No Equilibria in Pure Strategies
• a pure strategy prescribes one action plan, played with probability 1
• Ex: matching pennies (see above) has no equilibrium in pure strategies: whenever player 2 chooses his best response to player 1’s strategy, player 1 wants to change his strategy
• a mixed strategy is a probability distribution over all pure strategies, i.e. if there are n pure strategies, a mixed strategy is a set of probabilities p1, ..., pn with p1 + ... + pn = 1
• in a mixed-strategy equilibrium both players choose a mixed strategy such that the other player’s strategy is a best response
• this implies the other player must be indifferent between his pure strategies (otherwise his best response would be a pure strategy): all pure strategies yield the same expected payoff
• assume player 1 chooses heads with probability q and tails with probability 1-q
• then the expected payoff of player 2 from choosing heads is 1*q + (-1)*(1-q) = 2q - 1
• his expected payoff from choosing tails is (-1)*q + 1*(1-q) = 1 - 2q
• hence player 2 is indifferent if 2q - 1 = 1 - 2q, i.e. q = ½
• similarly, player 1 is indifferent if player 2 chooses heads with probability p = ½
• hence this combination is a mixed-strategy equilibrium
• note: both players choose their probabilities such that the other is indifferent
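The indifference calculation above can be verified numerically, a small sketch following the text’s payoffs for matching pennies:

```python
def expected_payoff_p2(q, choice):
    """Player 2's expected payoff in matching pennies when player 1
    plays heads with probability q. From the text: heads yields
    1*q + (-1)*(1-q) = 2q - 1, tails yields (-1)*q + 1*(1-q) = 1 - 2q."""
    if choice == "H":
        return 1 * q + (-1) * (1 - q)
    return (-1) * q + 1 * (1 - q)

# At q = 1/2 player 2 is exactly indifferent between heads and tails,
# so any mixture (in particular p = 1/2) is a best response for him.
q = 0.5
```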
7.3 Credible Threats
• extensive and normal form do not necessarily lead to equivalent results
• Ex: “Rotten Kid Game” (Fig 7.7, Table 7.14)
• in normal form the combination “comply, punish” is an equilibrium
• in extensive form it is seen that this equilibrium is not convincing because it relies on a noncredible threat: if the child chooses R, the parent has an incentive to deviate
Subgame Perfect (Credible Threat) Equilibria
• if an equilibrium is based on threats, these should be credible, i.e. whenever the node is reached at which a threat should be carried out, carrying it out must be in the threatening player’s interest
• a subgame consists of one node of a game and all the branches starting at this node
• a subgame perfect equilibrium is a set of strategies such that in each subgame the prescribed actions form an equilibrium
Backward Induction and Subgame Perfection
• a subgame perfect equilibrium can be found by backward induction: start at the end of the game, i.e. choose best responses in the last nodes, then replace these nodes with the resulting payoffs and work backwards
• this eliminates noncredible threats, because it specifies equilibria in all subgames
• problems with backward induction: centipede game
7.4 Rationality and Equity
• game theorists usually stick to the assumption of rationality and selfishness, i.e. they assume that players are only interested in their own payoff and consider the other players’ payoffs only because these determine the others’ decisions
• contradictory evidence: ultimatum game: apparently players care about “fairness” (an issue of larger debate)
• new theories try to take this into account
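The backward-induction procedure can be sketched recursively. The entry game below is a hypothetical example of a noncredible threat (an incumbent threatening to fight entry), with made-up payoffs, not the book’s Rotten Kid figures:

```python
def backward_induction(node):
    """Backward induction on a finite game tree with perfect
    information. A leaf is a payoff tuple; a decision node is a dict
    with the index of the moving 'player' and a 'moves' mapping from
    action labels to subtrees. Returns (payoffs, chosen action)."""
    if isinstance(node, tuple):        # leaf: payoffs are fixed
        return node, None
    player = node["player"]
    # the mover picks the action whose induced outcome maximizes
    # his own payoff component
    action, child = max(node["moves"].items(),
                        key=lambda kv: backward_induction(kv[1])[0][player])
    return backward_induction(child)[0], action

# Hypothetical entry game: the entrant (player 0) moves first; if he
# enters, the incumbent (player 1) fights or accommodates. Fighting
# hurts the incumbent too, so the threat to fight is noncredible:
# backward induction predicts entry followed by accommodation.
entry_game = {"player": 0, "moves": {
    "stay out": (0, 2),
    "enter": {"player": 1, "moves": {
        "fight": (-1, -1),
        "accommodate": (1, 1),
    }},
}}
```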
7.5 Dynamic Games and Decision Making in Games against Nature
• so far we considered games with strategic uncertainty, which results from lack of knowledge about others’ actions
• situations where uncertainty is probabilistic (results from chance events) can be modeled as games against nature, nature being a player who is indifferent over outcomes and decides randomly
• consider situations where several decisions are made over time, i.e. decisions today can affect outcomes tomorrow (backward induction can be applied)
• this can model, e.g., the optimal search for the lowest price: at each stage decide whether to take the current price or to proceed optimally, which is determined by backward induction (i.e. assume that you behave rationally in the future); Bellman’s Principle of Optimality
• different opportunity costs can lead to different search behavior
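Optimal price search by backward induction can be sketched as follows; the model (discrete price distribution, fixed search cost, fixed number of draws) is an illustrative choice, not the book’s exact setup:

```python
def search_values(prices, probs, c, T):
    """Optimal sequential price search by backward induction
    (Bellman's principle). At each of T stages the buyer observes a
    price drawn from `prices` with probabilities `probs`; he either
    buys at that price or pays search cost c to see another draw.
    V[t] = expected total cost when t draws remain; at the last draw
    he must buy."""
    V = [0.0] * (T + 1)
    V[1] = sum(p * pr for p, pr in zip(prices, probs))
    for t in range(2, T + 1):
        continue_cost = c + V[t - 1]   # expected cost of searching on
        # buy iff the observed price beats the cost of continuing
        V[t] = sum(min(p, continue_cost) * pr
                   for p, pr in zip(prices, probs))
    return V

# Prices 1 or 3 with equal chance, search cost 0.1, up to 3 draws:
# more remaining draws lower the expected cost of the purchase.
V = search_values([1, 3], [0.5, 0.5], 0.1, 3)
```

Raising the search cost c raises the threshold price at which the buyer stops, which is one way the text’s point about opportunity costs and search behavior shows up.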
Games of Incomplete Information (Appendix A)
• in games of complete information, the rules of the game and all resulting payoffs are common knowledge
• this is often not realistic (others’ payoffs are often not known)
• in games of incomplete information players do not know the payoffs of others
• this situation is best described as each player facing different possible types of the other players: he knows the payoff of each type but does not know of which type his current opponents are (he only knows the probabilities of facing each type)
• model: nature first chooses the types, and players know their own type but not that of the others
• hence the game becomes a game of imperfect information: players know the outcomes of all branches, but they do not know the move of the “player” nature
• a strategy then conditions on a player’s own type, but not on the types of the other players
• the appropriate equilibrium concept for such a game is the Bayes-Nash equilibrium
• an array of strategies (s1*, ..., sn*) is a Bayes-Nash equilibrium if, given the other players’ strategies (which prescribe a complete action plan for each possible type) and given the probability distribution over the types of the other players, no player can increase his expected payoff by deviating to another strategy
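The key computational step, best-responding to a probability distribution over opponent types, can be sketched with a hypothetical entry game under incomplete information (all payoffs and the type labels are illustrative assumptions):

```python
def best_entry_choice(p):
    """Entrant's best response when the incumbent is 'tough' (fights
    entry) with probability p and 'weak' (accommodates) otherwise.
    Illustrative payoffs: staying out -> 0; entering against a tough
    type -> -1; entering against a weak type -> 1. As in a Bayes-Nash
    equilibrium, the entrant maximizes his expected payoff over the
    type distribution."""
    expected_enter = p * (-1) + (1 - p) * 1   # = 1 - 2p
    return "enter" if expected_enter > 0 else "out"
```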
Repeated Games (Appendix B)
• in real life certain strategic situations appear repeatedly in the same (or a slightly changed) manner, e.g. the pricing game between car makers
• this can be modeled as the repetition of the same game
• in a repeated game the same players repeatedly play the same stage game against each other
Repeated Games with Finite Horizons
• if the stage game is repeated a finite number of times, we can apply backward induction:
• in the last period there is no future to take into account, hence the game is just like a single unrepeated game, and hence the equilibrium of the stage game will be played
• in the second-to-last period the result of the last period is therefore known, hence the future is not affected by today’s play, and hence again the game is like a single game
• by backward induction, the equilibrium is to play the equilibrium of the single-shot game in each period
• in real life and in experiments we often see something different: cooperation in early periods and a breakdown towards the end
• this behavior can be explained by equilibria involving strategic reputation building in games with incomplete information
Repeated Games with Infinite Horizon
• in a supergame the stage game is repeated infinitely often
• a discount factor δ < 1 measures how much the future is valued relative to the present, i.e. a payoff x in period t is valued as much as δ^t·x today
• with an infinite horizon backward induction cannot be applied as with a finite horizon
• for sufficiently high discount factors we obtain new equilibria in which players choose a trigger strategy (cooperate as long as the other does) and hence find it optimal to cooperate forever
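The "sufficiently high discount factor" condition can be made concrete for a grim-trigger strategy in an infinitely repeated prisoner's dilemma; the stage payoffs below are illustrative parameters, not the book’s numbers:

```python
def cooperate_value(R, delta):
    """Discounted value of mutual cooperation forever: R per period."""
    return R / (1 - delta)

def deviate_value(T, P, delta):
    """Discounted value of deviating once (payoff T) and then being
    punished with mutual defection (payoff P) forever after."""
    return T + delta * P / (1 - delta)

def critical_delta(R, T, P):
    """Smallest discount factor sustaining cooperation under a grim
    trigger, for stage payoffs T > R > P (R = mutual cooperation,
    T = one-shot deviation gain, P = mutual punishment). Solving
    R/(1-d) >= T + d*P/(1-d) for d gives d >= (T-R)/(T-P)."""
    return (T - R) / (T - P)
```

With, say, R = 3, T = 5, P = 1, cooperation is sustainable exactly when δ ≥ 1/2: patient players value the cooperative future more than the one-shot gain from deviating.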