ECO 610: Lecture 9 Oligopoly, Rivalry, and Strategic Behavior
Oligopoly, Rivalry, and Strategic Behavior: Outline • Porter’s Five Forces Model: when does internal rivalry get interesting? • Oligopoly: definition and examples • Modeling oligopoly market outcomes • Profit possibilities frontier • Cartel theory, tacit vs. overt collusion, factors facilitating or impeding collusion • Static games of complete information • Dynamic games of complete information
Porter’s Five-Forces Model • Michael Porter developed a model for industry analysis that incorporates many of the concepts we have studied so far. http://www.youtube.com/watch?v=mYF2_FBCvXw • If we want to understand the nature and intensity of competition among firms in a market, we must understand the outside forces acting on firms in that industry. These forces include supplier power, buyer power, the threat of substitutes, and the threat of entry. We must also understand the market structure of the industry that inherently affects internal rivalry. • When there are only a few firms in an industry, and those firms are somewhat insulated from the other four forces, then the internal rivalry aspect of a market gets interesting.
Oligopoly • Oligopoly: a market with a small number of sellers • Characteristics of oligopoly • Homogeneous or differentiated product • Oftentimes significant barriers to entry (perhaps because of economies of scale) • Recognized mutual interdependence, i.e. firms have identifiable rivals • It is this recognized mutual interdependence that sets the analysis of oligopoly apart. We do not have a neat deterministic abstract model that we can apply to oligopoly markets. Instead, the outcome in an oligopoly market depends on how much or how little firms compete vigorously with one another, which can be idiosyncratic to the particular industry being studied.
Modeling Oligopoly • Imagine a market with two firms supplying a homogeneous product to a large number of small, independent buyers. • If these two firms compete vigorously with one another, what do you predict market price and output will be? • If these two firms cooperate totally and behave as one, what do you predict market price and output will be? • What will total profits of the two firms be if they behave competitively? • What will total profits of the two firms be if they collude and behave as a monopolist? • What will price, output, and profits be if they are only partially successful in suppressing competition?
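To make these comparisons concrete, here is a minimal numerical sketch in Python. The demand curve P = 100 − Q and the constant marginal cost of 20 are assumed for illustration only, and the Cournot quantity-setting outcome is used as one example of partially suppressed competition; none of these numbers come from the lecture.

# Hypothetical duopoly: market demand P = 100 - Q, constant marginal cost c = 20.
a, c = 100, 20

# Vigorous (Bertrand-style) price competition drives price down to marginal cost.
Q_comp = a - c                          # 80 units
P_comp, profit_comp = c, 0              # price 20, zero economic profit

# Perfect collusion: the two firms act as one monopolist and set MR = MC.
Q_mono = (a - c) / 2                    # MR = a - 2Q = c  =>  Q = 40
P_mono = a - Q_mono                     # price 60
profit_mono = (P_mono - c) * Q_mono     # 1600 in total, 800 per firm

# Partial suppression of competition (Cournot quantity-setting) lies in between.
q_each = (a - c) / 3                    # ~26.7 per firm
P_cournot = a - 2 * q_each              # ~46.7
profit_each = (P_cournot - c) * q_each  # ~711 per firm

print(P_comp, round(P_cournot, 1), P_mono)   # 20 < 46.7 < 60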
Profit possibilities frontier • [Refer to diagram drawn on board.] • Locate points in the diagram corresponding to • Monopoly by firm A. • Monopoly by firm B. • Shared monopoly by firms A and B (perfectly colluding duopolists). • Aggressive competition between firms A and B. • Imperfect collusion between firms A and B.
Cartel Theory: incentive to collude • Suppose all the alligator farmers in the U.S. form an agricultural cooperative and name it the AAA (American Alligator Association). They hire you as a business consultant to advise them on setting market price and output so as to maximize industry profits. • You are asked to present your recommendations at their annual meeting in Natchitoches. Use the following diagram to explain how to set market output and individual farmer outputs, what the resulting market price of alligators will be, and how much economic profit each farmer will earn. • What incentive do these farmers have to go along with your plan?
Cartel Theory: incentive to cheat • Suppose all members go along with the plan and abide by their production quotas, so that market price rises from PC to PM. • Do you see any problems down the road keeping this cartel functioning as designed? • If you are an alligator farmer and market price is PM, what output would you like to produce, and what would your profits be if you were the only cartel member to cheat on your production quota? • What happens if one member cheats? What happens if several members cheat? • Do you think that the number of alligator farmers will stay the same over time?
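A hedged numerical sketch of the cartel optimum and the cheating incentive. All numbers are assumed for illustration, not taken from the lecture: ten identical farmers, market demand P = 100 − Q, and individual marginal cost MC(q) = 20 + 2q.

# Hypothetical alligator cartel: demand P = 100 - Q, N = 10 identical farmers,
# each with cost 20q + q^2 (so marginal cost 20 + 2q).
a, c0, N = 100, 20, 10

# Cartel optimum: with equal quotas q = Q/N, industry marginal cost is 20 + Q/5.
# Setting MR = MC: 100 - 2Q = 20 + Q/5  =>  Q_M = 400/11.
Q_M = 80 / 2.2                 # ~36.4 total output
P_M = a - Q_M                  # ~63.6, the cartel price
quota = Q_M / N                # ~3.6 units per farmer
profit_quota = P_M * quota - (c0 * quota + quota ** 2)         # ~145 per farmer

# A lone cheater takes P_M as given and expands output until MC(q) = P_M.
q_cheat = (P_M - c0) / 2       # ~21.8
profit_cheat = P_M * q_cheat - (c0 * q_cheat + q_cheat ** 2)   # ~476

print(round(profit_quota, 1), round(profit_cheat, 1))   # cheating roughly triples profit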
Coordinating oligopolistic activity • Why don’t producers just get together with their lawyers and draw up a contract agreeing to collude? • Sherman Antitrust Act (1890): Section 1: "Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal." Section 2: "Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony [. . .]" • Overt vs. Tacit Collusion: what’s the difference? • Legal cartels? NCAA, UAW, Sunkist . . .
Factors facilitating or impeding oligopolistic coordination among producers in an industry • Number and size distribution of sellers • Number and size distribution of buyers • Extent of product differentiation • Own-price elasticity of demand • Similar or dissimilar costs • Availability of information • Frequency of interaction in the market • Barriers to entry
Game Theory: Payoff interdependency??? • Decision-theoretic vs. Game-theoretic Situations • KY-American Water Co. is contemplating raising local water prices to cover costs of upgrading its infrastructure. • Google contemplates changing the price it charges for “clicks”. • Raywood Stelly contemplates price and output for the upcoming harvest season. • Arby’s on S. Limestone contemplates changing the hourly wage it pays to new employees. • Apple contemplates the features it makes standard on its new iPhone. • Honda contemplates offering a $2000 rebate on new Accords to reduce inventories on dealer lots. • UPS contemplates offering weekend delivery at the same rate as weekday delivery. • American Airlines contemplates raising the price of its non-stop flight from Lexington to Charlotte. • In some situations the optimal strategy for a firm to pursue depends only on the market circumstances and environment in which the firm finds itself. The behavior or reactions of other parties are not a factor. There is no Payoff Interdependency. • In other situations a firm’s optimal strategy will depend on the actions or reactions taken by other parties. There is Payoff Interdependency. This payoff interdependency puts us in the realm of Game Theory.
Elements of a game, Types of games • Elements of a game: • Players • Rules: timing of moves, actions available to players on each move, information available to players when they make a move • Outcomes • Payoffs associated with each possible outcome • Types of games: • Static: players move simultaneously. Examples? https://www.youtube.com/watch?v=ui-mzTCmZPE https://www.youtube.com/watch?v=g6tR78d0cmA • Dynamic: players move sequentially. Examples? https://www.youtube.com/watch?v=GL-uWmw4YMA • Information available: we will restrict our analysis to games where all players have complete information about players, rules, outcomes, and payoffs.
Static Games of Complete Information • The normal form of a static game of complete information is given by its payoff matrix, which describes: • the players (row player, column player) • the strategies available to each player (R1-R3, C1-C3) • the payoffs to each player associated with each strategy pair (R1 and C1 => 4, 3)
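For example, the normal form can be stored as a simple table of payoff pairs. Only the (R1, C1) = (4, 3) cell is given above, so the remaining entries in this sketch are hypothetical, chosen to be consistent with the next slide's claim that R1 and C3 are the players' dominant strategies.

# A payoff matrix as a nested dict: payoffs[row][col] = (row payoff, column payoff).
# Only the (R1, C1) = (4, 3) cell comes from the slide; the rest are hypothetical.
payoffs = {
    "R1": {"C1": (4, 3), "C2": (5, 2), "C3": (3, 4)},
    "R2": {"C1": (2, 1), "C2": (4, 1), "C3": (2, 5)},
    "R3": {"C1": (3, 0), "C2": (1, 2), "C3": (1, 3)},
}
rows, cols = list(payoffs), list(payoffs["R1"])

# Row player's best response to each column strategy.
row_best = {c: max(rows, key=lambda r: payoffs[r][c][0]) for c in cols}
# Column player's best response to each row strategy.
col_best = {r: max(cols, key=lambda c: payoffs[r][c][1]) for r in rows}

print(row_best)   # {'C1': 'R1', 'C2': 'R1', 'C3': 'R1'} -> R1 is dominant
print(col_best)   # {'R1': 'C3', 'R2': 'C3', 'R3': 'C3'} -> C3 is dominant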
Solving a game • Given the payoff matrix to a simultaneous-move game, how do we think the game will turn out? What strategies will each player choose and what will be their payoffs? • We begin by asking, if the row player thinks the column player will choose strategy C1, what is her best response? C2? C3? • Then we ask, if the column player thinks the row player will choose strategy R1, what is his best response? R2? R3? • In the previous game, the row player will choose strategy R1 no matter what strategy the column player plays. And the column player will choose strategy C3 no matter what strategy the row player plays.
Solution strategies • Dominant strategy: a strategy that maximizes a player’s payoffs regardless of the strategy chosen by the other player. • Iterative elimination of dominated strategies: a dominated strategy is one such that there is an alternative strategy that is in all cases better to play than this strategy. • Rationalizable strategies: a rational player will not play a strategy that is never a best response to any strategy the other player might choose. • Nash equilibrium: a strategy profile such that each player’s chosen strategy is a best response to the strategy selected by the other player. Neither player will experience ex-post regret.
1. Dominant strategies and prisoner’s dilemma games • Payoff matrix for prisoner’s dilemma game between the Conductor and Tchaikovsky: • What is the outcome of the game?
Prisoner’s dilemma in a business setting • http://www.llbean.com/llb/shop/32937?page=warm-up-jacket-fleece-lined • https://www.landsend.com/products/mens-classic-squall-jacket/id_326823_59?sku_0=::3K4&dysku=4959900 • Duopolists choosing a pricing strategy (see the sketch below): • Does this outcome seem counter-intuitive to you? See “Resolving the Prisoner’s Dilemma” (Chapter 4 in Dixit and Nalebuff)
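Since the pricing matrix itself is not reproduced here, the sketch below uses hypothetical payoffs chosen only to have the prisoner's-dilemma structure: cutting price is each duopolist's dominant strategy, yet (Low, Low) leaves both firms worse off than (High, High).

# Hypothetical duopoly pricing game: entries are (firm 1 profit, firm 2 profit).
pd = {
    ("High", "High"): (10, 10),
    ("High", "Low"):  (2, 12),
    ("Low",  "High"): (12, 2),
    ("Low",  "Low"):  (5, 5),
}

# Firm 1's payoff from each of its prices, against either price firm 2 might set.
for my_price in ("High", "Low"):
    print(my_price, pd[(my_price, "High")][0], pd[(my_price, "Low")][0])
# High: 10 vs 2;  Low: 12 vs 5 -- Low is dominant for both, so the game ends at (5, 5).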
Cooperative vs. non-cooperative games • If players are able to enter into binding commitments to pursue strategies that are not in their narrow self-interest, then they will be able to collude. We call such games cooperative. • If players cannot make binding commitments, they cannot be counted on to honor any agreements that are not incentive compatible. We call this type of game non-cooperative. • http://www.youtube.com/watch?v=zpahL4fu5R8 • https://www.youtube.com/watch?v=S0qjK3TWZE8 • Are contracts to collude legally enforceable in the U.S.? • How can you make your “commitments” credible? (see Ch. 6 in Dixit and Nalebuff, “Credible Commitments”)
2. Iterative elimination of dominated strategies • Now we are ready to explore ways of solving for the outcome of games of strategy. The first thing to do is to look for dominant strategies. If a player has a dominant strategy, she will play it. • The second thing to do, if there is no dominant strategy, is to look for dominated strategies and eliminate them from consideration. A dominated strategy is one such that there is an alternative strategy that is in all cases better to play than this strategy. A rational player would never choose to play a dominated strategy.
Can you predict the outcome of this game? • Does either player have a dominant strategy? Are there any dominated strategies for either player? • Predicted outcome: R1, C1. Note that the level of complexity went up one notch. The column player has to get inside the row player’s head and understand how she will think in order to determine what strategy is best for him.
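The game referred to on this slide is not reproduced, so the sketch below applies iterated elimination to a hypothetical 3x2 matrix built to have the same flavor: neither player has a dominant strategy, but eliminating the row player's dominated strategy, then a column strategy, then another row strategy leaves the predicted outcome (R1, C1).

# Payoffs stored as A[row][col] = (row player's payoff, column player's payoff).
# This 3x2 matrix is hypothetical, constructed so that iterated elimination
# of strictly dominated strategies leaves only (R1, C1).
A = [[(3, 3), (2, 1)],    # R1
     [(1, 0), (0, 2)],    # R2 (strictly dominated by R1)
     [(2, 2), (4, 1)]]    # R3

def eliminate_dominated(A):
    rows, cols = list(range(len(A))), list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # Drop any row strategy strictly dominated by another surviving row.
        for r in rows[:]:
            if any(all(A[r2][c][0] > A[r][c][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Drop any column strategy strictly dominated by another surviving column.
        for c in cols[:]:
            if any(all(A[r][c2][1] > A[r][c][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

print(eliminate_dominated(A))   # ([0], [0]): only R1 and C1 survive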
3. Rationalizable Strategies • How would you solve this game? • Are there any dominant strategies? Are there any dominated strategies? • Another solution concept: a strategy is a best response for player i when player j plays a certain strategy if player i’s strategy choice yields the highest possible payoff among her set of choices. • A rational player will not play a strategy that is never a best response.
Solution to the previous game • Row player: U is the best response to L or M, and D is the best response to R. • Column player: L is the best response to D, and M is the best response to U. • So R is never a best response for the column player. The row player should not expect that strategy to be played if the column player is rational. That being the case, the row player should not play D, but should always play U. And if the row player plays U, then the best response of the column player is to play M. • Note that we have added another layer of complexity. The column player reasons that the row player, recognizing that a rational column player will never choose R, is left with U as a dominant strategy and will play it. The column player’s choice of M is wise only if he can count on the row player having “gotten inside” his head.
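Since the payoff table from the slide is not reproduced, the matrix below is hypothetical, constructed to match the best responses just described (U best against L or M, D best against R; L best against D, M best against U), so that R is never a best response.

# Hypothetical payoffs consistent with the best responses described above.
payoffs = {
    ("U", "L"): (3, 1), ("U", "M"): (2, 3), ("U", "R"): (1, 2),
    ("D", "L"): (1, 4), ("D", "M"): (0, 2), ("D", "R"): (4, 3),
}
row_strats, col_strats = ("U", "D"), ("L", "M", "R")

# Which column strategies are ever a best response to some row strategy?
ever_best = {max(col_strats, key=lambda c: payoffs[(r, c)][1]) for r in row_strats}
print(ever_best)   # contains only L and M: R is never a best response

# With R ruled out, U dominates D for the row player, and the column player's
# best response to U is M, so the predicted outcome is (U, M).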
4. Nash equilibrium • Let’s try a more complicated example:
Solution to the previous game • Does either player have a dominant strategy? Are there any dominated strategies? Non-rationalizable strategies? So how do we think the game will turn out? • An even stronger solution concept: Nash Equilibrium • A Nash equilibrium is a strategy profile such that each player’s chosen strategy is a best response to the strategy selected by the other player. • In other words, if my strategy choice is the best possible response to your strategy, and at the same time your strategy is the best possible response to my strategy, then this strategy pair is a Nash equilibrium. If that is true, then neither of us will experience ex post regret. • In the previous game, the strategy pair R2, C2 is a Nash equilibrium.
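A minimal sketch of finding Nash equilibria by checking every cell for mutual best responses. The lecture's game is not reproduced, so this 3x3 matrix is hypothetical, built to have no dominant or dominated strategies and (R2, C2) as its only pure-strategy Nash equilibrium, mirroring the slide's conclusion.

# Hypothetical 3x3 game: payoffs[(row, col)] = (row payoff, column payoff).
payoffs = {
    ("R1", "C1"): (2, 3), ("R1", "C2"): (0, 2), ("R1", "C3"): (3, 0),
    ("R2", "C1"): (1, 0), ("R2", "C2"): (4, 3), ("R2", "C3"): (1, 1),
    ("R3", "C1"): (3, 1), ("R3", "C2"): (2, 0), ("R3", "C3"): (0, 3),
}
rows, cols = ("R1", "R2", "R3"), ("C1", "C2", "C3")

def nash_equilibria(payoffs):
    eq = []
    for r in rows:
        for c in cols:
            best_row = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
            best_col = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
            if best_row and best_col:   # mutual best responses: no ex-post regret
                eq.append((r, c))
    return eq

print(nash_equilibria(payoffs))   # [('R2', 'C2')]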
Dynamic Games of Complete Information • Now we turn to game theoretic situations where there can be a sequence of moves and where players may move more than once. • We define a dynamic game by its extensive form, which specifies: • The identity and number of players • When each player can make a move • The choices or options available on each move • The information available when making a move • The payoffs over all possible outcomes of the game • The extensive form of a dynamic game can be represented by a game tree, which has: • Decision nodes • Branches: actions available • Terminal nodes: payoffs
Example of a game tree • Player #1 moves first and chooses U or D. • If player #1 plays U, player #2 then chooses u, with payoffs (5, 2), or d, with payoffs (1, 0). • If player #1 plays D, player #2 then chooses u, with payoffs (4, 4), or d, with payoffs (6, 0). • Payoffs are listed in the order player #1, player #2.
Solving the previous game • Player #1 moves first and has two options, UP or DOWN. • If player #1 plays UP, then player #2 moves next. He has two options, up or down. If player #1 plays DOWN, then player #2 moves next. He has two options, up or down. • If player #1 plays U and player #2 then plays u, the payoff to player #1 is 5 and the payoff to player #2 is 2. U and d results in payoffs of 1 and 0 to players #1 and #2. D and u results in payoffs of 4 and 4. D and d results in payoffs of 6 and 0. • How would player #1 want this game to turn out? • How would player #2 want the game to turn out? • How is the game likely to turn out? Why?
Solving the previous game (continued) • If you are player #1, which strategy do you play? Why? • UP, since your likely payoff from UP is 5 while your likely payoff from DOWN is 4. • If you are player #2, how might you induce player #1 to play DOWN so that you can get to a payoff of 4 instead of 2? • Threaten to play down if player #1 plays UP. • Is such a threat credible? • No, since if player #1 plays UP, you maximize your payoff by playing up. • See Dixit and Nalebuff, Ch. 6: “Credible Commitments”
Solving dynamic games • Let us define a subgame to be a smaller game “embedded” in the complete game. In other words, starting from some point in the original game, a subgame includes all subsequent choices that must be made if players actually reach that point in the game. • This will allow us to test whether conjectural sub-strategies within a game are sequentially rational. • In our original game, there are three subgames: the original complete game and the two games that begin at player #2’s decision nodes, namely the node where he chooses between u (payoffs 5, 2) and d (payoffs 1, 0), and the node where he chooses between u (payoffs 4, 4) and d (payoffs 6, 0).
Subgame Perfect Nash Equilibrium • A strategy profile is a Subgame Perfect Nash Equilibrium [SPNE] if the strategies are a Nash equilibrium in every subgame. • How do we determine the subgame perfect Nash equilibrium to a dynamic game? • The approach is simple: we look ahead and reason backward. In other words, we use backward induction. • Backward induction involves identifying the smallest possible subgames and determining the optimal choices for the player involved. Then we replace these subgames with the implied payoffs and solve the next-highest level of subgames. This process is continued until the Nash moves for every possible subgame have been found.
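A minimal backward-induction sketch, applied to the two-player game tree from the earlier slide. The payoffs are the ones given there; the tree representation and the recursive routine are just one way of coding the procedure.

# Leaves are payoff tuples (player 1, player 2); decision nodes are
# ("player", {action: subtree}).  This is the game tree shown earlier.
tree = ("1", {"U": ("2", {"u": (5, 2), "d": (1, 0)}),
              "D": ("2", {"u": (4, 4), "d": (6, 0)})})

def solve(node):
    """Backward induction: return the payoff vector reached from this node."""
    if not isinstance(node[1], dict):
        return node                           # a leaf: just the payoff tuple
    player, branches = node
    idx = int(player) - 1                     # which payoff the mover maximizes
    solved = {a: solve(child) for a, child in branches.items()}
    best = max(solved, key=lambda a: solved[a][idx])
    print(f"Player {player} would choose {best} at this node")
    return solved[best]

print(solve(tree))
# Player 2 would choose u at both of his nodes; player 1, looking ahead,
# chooses U, so the subgame perfect outcome is the payoff pair (5, 2).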
Another example • Now consider a three-player game tree. Player #1 moves first and chooses U or D. • If player #1 plays U, player #2 chooses u, ending the game with payoffs (2, 1, 2), or d, ending it with payoffs (1, 3, 4). • If player #1 plays D, player #2 chooses u or d, and then player #3 chooses U’ or D’. After D and u, U’ gives payoffs (3, 0, 1) and D’ gives (4, 3, 3). After D and d, U’ gives (1, 2, 6) and D’ gives (2, 6, 0). • Payoffs are listed in the order player #1, player #2, player #3.
Another example (continued) • Solution: Player #1 picks D. Player #2 picks u. Player #3 picks D’. And [D, u, D’] is the subgame perfect Nash equilibrium.
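Applying the same backward-induction idea to the three-player tree as reconstructed above gives a quick check of the stated solution; the code is self-contained, and the tree encoding reflects that reconstruction rather than the original diagram.

# Three-player tree: leaves are (player 1, player 2, player 3) payoff tuples.
tree3 = ("1", {
    "U": ("2", {"u": (2, 1, 2), "d": (1, 3, 4)}),
    "D": ("2", {"u": ("3", {"U'": (3, 0, 1), "D'": (4, 3, 3)}),
                "d": ("3", {"U'": (1, 2, 6), "D'": (2, 6, 0)})}),
})

def solve(node):
    # Backward induction: each mover picks the action maximizing her own payoff.
    if not isinstance(node[1], dict):
        return node
    player, branches = node
    idx = int(player) - 1
    solved = {a: solve(child) for a, child in branches.items()}
    best = max(solved, key=lambda a: solved[a][idx])
    print(f"Player {player} chooses {best}")
    return solved[best]

print(solve(tree3))   # equilibrium path D, then u, then D', with payoffs (4, 3, 3)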
Strategic Entry Barriers • Are there actions that an incumbent firm might take to deter entry by outsiders? In certain cases there may be. If the incumbent can make some binding commitment and communicate that to potential entrants prior to their decision to enter, then the incumbent may be able to forestall entry. • Consider the following example from Avinash Dixit, “New Developments in Oligopoly Theory,” American Economic Review, May 1982, pp. 12-17.
Consider a two-stage game between an incumbent monopolist and a prospective entrant. The first stage is the latter's entry decision. If he stays out, the incumbent earns monopoly profits PM. If entry occurs, the incumbent decides whether to fight a price war, with profits PW to each, or to share the market, with profits PD to each duopolist.
At each termination point the corresponding payoffs are shown, the first component being the incumbent's. It is assumed that PM > PD > 0 > PW; that is, duopoly is profitable but not as much as monopoly, while a price war is mutually destructive. In the above example, the strategy "War if entry" cannot be part of a perfect equilibrium. The entrant knows that the incumbent's optimal response to entry is sharing. Since PD > 0, he stands to gain by entering. Therefore this is the outcome in the perfect equilibrium.
Now suppose the incumbent has available a prior irrevocable commitment, such as incurring cost C in readiness to fight a price war. This does not affect his payoff if a war in fact occurs, but lowers it by C otherwise. The new three-stage game tree is shown below.
An incumbent who has made this commitment will find it optimal to fight in the event of entry if PW > PD - C. An entrant, knowing this, will stay out if the incumbent is committed, and enter if he is passive. The incumbent, knowing this in turn, will make the commitment if the ultimate payoff from doing so, PM - C, exceeds that from being passive, PD. Provided there exists a commitment whose cost satisfies PM - PD > C > PD - PW, it allows the incumbent to employ a credible threat and deter entry to his own advantage. Incidentally, the example also shows how a sequential equilibrium has to be solved backwards. The availability of such a commitment is a matter for each specific case. Sunk capacity is the most cited example.
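A hedged numerical check of these conditions. The profit levels and commitment cost below are assumed for illustration only; they are chosen to satisfy PM > PD > 0 > PW.

# Assumed values: monopoly, duopoly, and price-war profits, plus commitment cost C.
PM, PD, PW, C = 10, 3, -2, 6

threat_credible  = PW > PD - C     # once C is sunk, fighting beats sharing: -2 > -3
worth_committing = PM - C > PD     # deterring entry beats passive duopoly:  4 > 3
print(threat_credible, worth_committing)   # True True

# Both hold exactly when PM - PD > C > PD - PW (here 7 > 6 > 5).
print(PM - PD > C > PD - PW)               # True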
There are two essential requirements: the commitment should be made (and made known) prior to the entrant's decision, and it should be irreversible. The incumbent often has a natural advantage of the first move, although an unaware passive incumbent may find himself facing an aggressive committed entrant, when the roles are reversed and the incumbent must contemplate exit. Irreversibility is a matter of technology and institutions. For example, capacity serves the purpose only if it cannot be costlessly liquidated. Capital goods that depreciate rapidly, or ones for which an efficient resale market exists, are not useful instruments for an entry-deterring commitment.