
QR 38 4/3/07, Uncertainty I. Risk II. Agency problems III. Asymmetric information and perceptions




  1. QR 38 4/3/07, Uncertainty I. Risk II. Agency problems III. Asymmetric information and perceptions

  2. Uncertainty means lack of certainty about the outcomes of a game. Many possible sources: • Outcomes inherently risky (effect of weather on a battle) • Actions difficult to observe • Preferences of other player unknown • All are important in IR I. Risk

  3. Imprecise information about payoffs (such as the other player's preferences) is incomplete information. • Imprecise information about past moves is imperfect information. • In practice, the way that we deal with incomplete information is to transform it into a game of imperfect information. Incomplete and imperfect information

  4. Risk: some probability distribution over outcomes • Payoffs are probabilistic. • Risk is inherent in the situation, not due to the strategic actions of others. Risk

  5. Consider OPEC (or coffee or bauxite suppliers): • Price of commodity fluctuates depending on consumer demand, weather, etc. • This creates risk for both the supplier and consumer. • Assume that a coffee grower makes $150,000 in a good year, $50,000 in a bad year Commodity supplier example

  6. Assume a 50-50 distribution over these two possible outcomes • The expected profit is then $100,000 • Consider the consumer (U.S. distributor): a good year for the grower is a bad year for the consumer. • So a “good” year leads to profits of $50,000 for the consumer; a “bad” year to profits of $150,000 Commodity example

  7. The consumer’s expected profit is $100,000. • Can the consumer and grower collaborate to reduce the risk to which they are exposed? • Yes, by pooling their risks. • In general, pooling works as long as risks are not perfectly positively correlated. Commodity example

  8. Here, since the risks are negatively correlated, pooling should help both. • Their combined income is always $200,000. • They could enter into a contract in which the grower gives the consumer $50,000 in good years, and the consumer gives the grower $50,000 in bad years. Commodity example

  9. Then each is guaranteed a payoff of $100,000 • Risk has been eliminated • There is no advantage to this arrangement if the players are risk-neutral, but it increases their utility if they are risk-averse. • Because of the perfect negative correlation, the risk disappears completely. Commodity example
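The risk-pooling arithmetic in slides 5-9 can be checked directly (figures from the slides; the `transfer` contract is the $50,000 side payment described above):

```python
# Payoffs are (grower, consumer) in good and bad coffee years.
good_year = (150_000, 50_000)   # good year for the grower, bad for the consumer
bad_year = (50_000, 150_000)    # and vice versa: perfectly negatively correlated
p_good = 0.5                    # 50-50 distribution over the two years

# Expected profit without pooling: $100,000 each
exp_grower = p_good * good_year[0] + (1 - p_good) * bad_year[0]
exp_consumer = p_good * good_year[1] + (1 - p_good) * bad_year[1]

# Pooling contract: $50,000 flows from whoever had the good year
transfer = 50_000
pooled_good = (good_year[0] - transfer, good_year[1] + transfer)
pooled_bad = (bad_year[0] + transfer, bad_year[1] - transfer)
print(pooled_good, pooled_bad)  # each player gets 100,000 in every state
```

Because the risks are perfectly negatively correlated, the contract leaves both players with the same expected value but zero variance, which is a strict gain only if they are risk-averse.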

  10. Problems with risk-pooling schemes: • Enforceability • Commodity arrangements are often short-lived; a prisoner's dilemma (the ICA and OPEC are exceptions) • Moral hazard • If insured against risk, you may not take precautions you otherwise would (e.g., applying fertilizer) • Adverse selection • Those most likely to need insurance will purchase it (incompetent growers) Risk pooling in IR

  11. Moral hazard problem is linked to a general class of informational problems known as agency issues. • Examples: employer hiring an employee – the employee is the employer’s agent. Choosing leader for an IO. • Using an agent has many advantages, but also problems, such as whether the agent’s actions are observable. II. Agency problems

  12. Consider allowing the IMF to negotiate a bail-out arrangement with Argentina (U.S. the principal, IMF the agent). • Negotiations won’t be entirely public • Even if a good agreement is reached, it may not help Argentina’s economy (risk) • If the economy doesn’t improve, is this the result of the agent not doing a good job or bad luck? U.S. can’t tell. Observability of actions

  13. How to address this problem? • If agent’s actions observable, just pay IMF more for negotiating well. • Otherwise have to make the payoff contingent on the observed outcomes and use incentive schemes. • Solutions have to meet the incentive-compatibility constraint and the participation constraint. Observability of actions

  14. Incentive-compatibility constraint: • How big the payment would have to be to induce good effort. • Have to make the expected payoff big enough to induce good effort. • Because outcomes are risky, this can mean the actual payment must be very large. • Even if the agent does a good job, by bad luck he might get paid nothing. Incentive schemes

  15. Participation constraint: • Have to offer enough that the agent will choose to work with you at all • How big this payment must be depends on the agent’s exit options. • The principal’s lack of information means that he must pay more than under complete information. • This inefficiency falls on the least-informed player, the principal. Incentive schemes
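A minimal numeric sketch of the two constraints, with hypothetical numbers (the success probabilities, effort cost, and outside option below are illustrative, not from the lecture). The principal observes only success or failure, so the wage is paid only on success:

```python
# Hypothetical numbers: wage w is paid only when the outcome is a success,
# because effort itself is unobservable.
p_high, p_low = 0.8, 0.4   # success probability under high vs. low effort
effort_cost = 10           # agent's cost of exerting high effort
outside_option = 20        # agent's payoff from walking away

# Incentive compatibility: p_high*w - effort_cost >= p_low*w
w_ic = effort_cost / (p_high - p_low)           # smallest wage inducing effort

# Participation: p_high*w - effort_cost >= outside_option
w_pc = (outside_option + effort_cost) / p_high  # smallest wage the agent accepts

w = max(w_ic, w_pc)  # the contract must satisfy both constraints
print(w_ic, w_pc, w)
```

Note that the wage far exceeds the effort cost: because payment is contingent on a risky outcome, the agent can work hard and still be paid nothing, so the success-state payment has to be inflated to compensate.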

  16. Agency problems are an example of information asymmetry (above, the IMF knew if it negotiated in good faith, but the U.S. didn’t). • Asymmetric information is a general problem in IR. • Players may know their own preferences, but not others’ • Players know their own past behavior, but not others’ III. Asymmetric information and perceptions

  17. When asymmetric information exists, perceptions or beliefs become central to the play and resolution of the game. • Now the equilibrium has to be described in terms of both strategies and beliefs, because they depend intimately on one another. Asymmetric information

  18. Beliefs are not random: a rational player will update beliefs in response to events. • A player that stubbornly stuck to beliefs that were at odds with observed behavior wouldn’t be acting rationally. • Definition of belief: a probability attached to some quantity, such as others’ payoffs. Beliefs
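Rational updating of beliefs is just Bayes' rule. A sketch with hypothetical numbers (the prior and the two conditional probabilities below are illustrative):

```python
# Prior belief that the other player is the "high-value" type, updated
# after observing an action each type takes with different probability.
prior_high = 0.7             # P(high type) before observing anything
p_act_given_high = 0.9       # P(observed action | high type) -- hypothetical
p_act_given_low = 0.2        # P(observed action | low type)  -- hypothetical

# Total probability of seeing the action, then Bayes' rule
p_act = prior_high * p_act_given_high + (1 - prior_high) * p_act_given_low
posterior_high = prior_high * p_act_given_high / p_act
print(round(posterior_high, 3))  # belief rises above the 0.7 prior
```

A player who refused to move from 0.7 after repeatedly observing behavior far more likely under one type than the other would, in exactly this sense, not be acting rationally.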

  19. Consider the U.S. trying to convince NK to give up its nuclear program by offering aid in exchange. • NK could put either a high or a low value on aid • This is NK’s type: a type that values aid highly, or that puts a low value on it • NK’s type is private information; NK knows, U.S. only has an estimate. U.S.-North Korea example

  20. E.g., U.S. might believe there is a 70% probability that NK puts a high value on aid, and a 30% probability that it puts a low value on aid. • This probability distribution is the U.S. belief about NK’s type. • We usually assume that beliefs are common knowledge: NK knows what probabilities U.S. assigns, U.S. knows NK knows this, etc. U.S.-North Korea example

  21. How to capture asymmetric information in an extensive-form game? • Trade sanctions example. • U.S. is considering the use of trade sanctions to convince Japan to open its market. • U.S. is not sure whether Japan is cooperative or not. • A cooperative Japan prefers open markets to a trade war; an uncooperative Japan does not. • U.S. believes Japan is cooperative with probability p. Showing asymmetric information in an extensive-form game

  22. U.S.-Japan games (two game trees, payoffs listed as US, Japan): • Japan cooperative (probability p): US sanctions → Japan cooperates → open markets (3, 2); Japan refuses → trade war (0, 1); US doesn't sanction → status quo (2, 3). • Japan uncooperative (probability 1-p): US sanctions → Japan cooperates → open markets (3, 1); Japan refuses → trade war (0, 2); US doesn't sanction → status quo (2, 3).

  23. To show the U.S. decision problem, we include an initial move by Nature. • Japan can be of two types • Each type has different payoffs • Nature is not strategic, just rolls the dice • An information set ties together nodes where the player making the decision at that node is uncertain about which node has been chosen. Game with uncertainty

  24. Game with uncertainty (Nature moves first; payoffs listed as US, Japan): • Nature: Japan cooperative with probability p → US sanctions → Japan cooperates → open market (3, 2); Japan refuses → trade war (0, 1); US doesn't sanction → status quo (2, 3). • Nature: Japan uncooperative with probability 1-p → US sanctions → Japan cooperates → open market (3, 1); Japan refuses → trade war (0, 2); US doesn't sanction → status quo (2, 3).

  25. The dashed line indicates the U.S. information set: it doesn’t know precisely which of these nodes it is at when it makes its decision • Could also indicate this by circling the nodes • How does the U.S. decide what to do? • It calculates its expected payoff for each strategy (sanction, don’t sanction) • Then chooses the strategy with the highest expected payoff Game with uncertainty

  26. The U.S. looks for the critical value of p: the value at which it switches to the sanctioning strategy. • US payoff from choosing sanctions: 3p + (1-p)·0 = 3p • US payoff from no sanctions: 2 • So sanction if 3p > 2, i.e., p > 2/3 • 2/3 is the critical value of p. Critical value of p
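The critical-value calculation above can be sketched directly (payoffs from the slides: sanctions yield the U.S. 3 against a cooperative Japan, 0 against an uncooperative one, and the status quo yields 2):

```python
# Expected US payoff from sanctions when Japan is cooperative with prob. p:
# a cooperative Japan opens markets (US gets 3), an uncooperative one
# triggers a trade war (US gets 0).
def expected_sanction_payoff(p):
    return 3 * p + (1 - p) * 0

STATUS_QUO = 2  # US payoff from not sanctioning

for p in (0.5, 2 / 3, 0.8):
    ev = expected_sanction_payoff(p)
    best = "sanction" if ev > STATUS_QUO else "status quo"
    print(f"p={p:.2f}: EV(sanctions)={ev:.2f} -> {best}")
# The switch happens where 3p = 2, i.e., at the critical value p = 2/3.
```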

  27. Bueno de Mesquita (BdM) then extends the game by considering an interest group that punishes the president if he doesn't adopt their preferred strategy. • This just changes the president's payoffs, so that the critical value for sanctioning becomes higher • Work through this calculation for yourself Adding domestic politics
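One hypothetical way to model the interest-group extension (the penalty d and its placement are an assumption, not the lecture's specification): suppose a group that opposes trade conflict docks the president d whenever sanctions end in a trade war. The sanction payoff becomes 3p + (0 - d)(1 - p), and the critical value rises above 2/3:

```python
# Hypothetical penalty d applied to the trade-war outcome (0 - d).
# Setting 3p - d(1 - p) = 2 and solving for p gives (2 + d) / (3 + d).
def critical_p(d):
    return (2 + d) / (3 + d)

for d in (0.0, 0.5, 1.0, 3.0):
    print(f"penalty d={d}: sanction only if p > {critical_p(d):.3f}")
# d = 0 recovers the original critical value 2/3; larger penalties push
# the threshold toward 1, making sanctions harder to justify.
```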

  28. This is a game of one-sided uncertainty: Japan knows U.S. payoffs with certainty. • Two-sided uncertainty would get a little messier, but solving the game would involve the same logic. Analyzing uncertainty

  29. What if the players have the opportunity to try to influence the other’s beliefs? • This is known as signaling. • This opportunity may create incentives to bluff. Signaling

  30. E.g., assume J has the chance to pass a law that makes it easier to impose retaliatory tariffs before US moves. • US, on observing passage of this law, may change its probability estimate about J’s type. • Knowing this, even a cooperative J may have an incentive to bluff. • But US knows this, and so will discount J’s actions. Signaling

  31. Appeasement as an example: should a declining power appease a rising power by making repeated concessions to demands? • The challenger gets to decide in each period whether to make demands • Then the declining power concedes or says no. Appeasement example

  32. There is one-sided uncertainty about the demands of the challenger: the declining power doesn't know whether the demands are limited or not (i.e., how determined the challenger is). • If demands are unlimited, war is inevitable, and the declining power would rather fight sooner than later. • Then no concessions are made Appeasement example

  33. But because of uncertainty, sometimes the declining power will end up making concessions but still fighting. • This is an inefficient outcome. • Incomplete information has a cost, and it falls on the less-informed player (as in the agency example). Appeasement example
