
Illusion of Control in Minority and Parrondo Games

Presentation Transcript


  1. Illusion of Control in Minority and Parrondo Games. Jeffrey Satinover [1], Didier Sornette [2]. [1] Condensed Matter Physics Laboratory, University of Nice, France, and Dept. of Politics, Princeton University, jsatinov@princeton.edu. [2] Chair of Entrepreneurial Risk, Swiss Federal Institute of Technology, Zurich, Switzerland, dsornette@ethz.ch

  2. I. Message • Optimization often yields perverse results… (In economic policy-making: “Law of Unintended Consequences”) • …but not always: When and why? • Attempt to formally characterize conditions that yield perverse outcomes under optimization

  3. II. Overview: THMG • The Time-Horizon MG (THMG): Pro/Con • In general, agents underperform strategies for “reasonable” τ (no impact) • Agent performance declines with d_H • Agent evolution: d_H → 0 • “Counteradaptive” agents perform best

  4. III. Parrondo Games Briefly • The basic effect: two losing games win if alternated • History-dependent games • Attempting to optimize this effect inverts it • Shown in an unusual multi-player setting • Here in the natural single-player setting

  5. IV. Other Briefly • Cycle decomposition of THMG • Cycle predictor for real-world 1D series • Status Minority Game

  6. A. Time-Horizon MG (THMG): Pro/Con
  Pro: • MG: “unreasonable” τ_eq • Many real-world series not stationary • Many real-world trading strategies use short or declining-valued τ (exponential damping) • Certain kinds of tractability due to “reasonable” τ
  Con: • Far from equilibrium • Arguendo: many real-world series effectively at equilibrium (high-freq data?) • Analytic solutions more difficult for finite τ • Very complex finite-size effects, e.g., σ² periodic in τ
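A minimal sketch of the mechanism may help here: relative to the standard MG, the only change in the THMG is that strategies are scored over a rolling window of the last τ steps. The Python sketch below uses illustrative parameters and a simple ±1 payoff with no impact accounting; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (slide 13 quotes {m, S, N} = {2, 2, 31}); tau is the time horizon.
N, m, S, tau, T = 31, 2, 2, 10, 2000

n_hist = 2 ** m                         # number of possible m-bit histories
# Each agent holds S fixed random strategies: a lookup table, history -> action in {+1, -1}.
strategies = rng.choice([-1, 1], size=(N, S, n_hist))
# Rolling record of each strategy's virtual payoffs over the last tau steps only.
virtual = np.zeros((N, S, tau))
history = int(rng.integers(0, n_hist))  # current m-bit history encoded as an integer
wealth = np.zeros(N)

for t in range(T):
    scores = virtual.sum(axis=2)                        # strategy scores over the finite window
    best = scores.argmax(axis=1)                        # each agent plays its best-scoring strategy
    actions = strategies[np.arange(N), best, history]   # chosen actions, +1 or -1
    A = actions.sum()
    minority = -int(np.sign(A)) if A != 0 else int(rng.choice([-1, 1]))
    wealth += actions * minority                        # agents on the minority side gain

    # Virtual payoff of every strategy this step (no impact accounting),
    # pushed into the rolling tau-step window.
    payoff = strategies[:, :, history] * minority
    virtual = np.concatenate([virtual[:, :, 1:], payoff[:, :, None]], axis=2)

    # Append the new minority bit to the public m-bit history.
    history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist

print("mean agent gain per step:", wealth.mean() / T)
```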

  7. THMG Markov Chain (EPJB, B07270)

  8.–11. THMG Markovian [derivation slides; equations not reproduced]

  12. B. Agents underperform strategies for “reasonable” τ (no impact). All N, m, S and …

  13. B. Agents underperform strategies for “reasonable” τ (no impact): {m, S, N} = {2, 2, 31}

  14.–23. B. Agents underperform strategies for “reasonable” τ (no impact) [figure slides]

  24. B. Agents underperform strategies for “reasonable” τ (no impact). Do we underestimate the extent to which real-world financial systems are so difficult simply because they are far from equilibrium? In a THMG composed entirely of impact-accounting agents, with N = 31, S = 2, a near-equilibrium state is attained for 10 < τ < 100. For τ = 1 or 10, strategies outperform their agents, as we have described. For τ ≥ 100, the reverse is true.
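The agent-versus-strategy comparison on this slide can be set up as in the sketch below: per step, the agents' realized gain is measured against the average virtual gain of the strategies they carry, as a function of τ. This is a simplified stand-in (no impact accounting, whereas the slide refers to impact-accounting agents), so the crossover will not land at the quoted τ values; it only shows what is being compared.

```python
import numpy as np

def thmg_gap(tau, N=31, m=2, S=2, T=5000, seed=0):
    """Return (mean agent gain, mean strategy virtual gain) per step for a
    minimal THMG with time horizon tau and no impact accounting."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** m
    strat = rng.choice([-1, 1], size=(N, S, n_hist))
    virtual = np.zeros((N, S, tau))
    history = int(rng.integers(0, n_hist))
    agent_gain = strat_gain = 0.0
    for _ in range(T):
        best = virtual.sum(axis=2).argmax(axis=1)
        actions = strat[np.arange(N), best, history]
        A = actions.sum()
        minority = -int(np.sign(A)) if A != 0 else int(rng.choice([-1, 1]))
        agent_gain += (actions * minority).mean()
        payoff = strat[:, :, history] * minority        # virtual payoff of every strategy
        strat_gain += payoff.mean()
        virtual = np.concatenate([virtual[:, :, 1:], payoff[:, :, None]], axis=2)
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist
    return agent_gain / T, strat_gain / T

for tau in (1, 10, 100):
    a, s = thmg_gap(tau)
    print(f"tau={tau:4d}  agents {a:+.4f}  strategies {s:+.4f}")
```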

  25. C. Agent performance declines with d_H

  26. D. At all α, agent performance declines with d_H

  27. D. Agent Evolution. If agents are allowed to evolve strategies (e.g., adaptive evolution, GA): d_H → 0

  28. Agent performance declines with d_H, but … for the MG proper (equilibrium), for α > α_c: • Agent performance increases with d_H • d_H → 1
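Here d_H is read as the normalized Hamming distance between the two strategies an agent holds, i.e., the fraction of histories on which they prescribe different actions; that reading of the slides' notation is an assumption. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 31, 2                 # illustrative values
n_hist = 2 ** m

# Each agent holds a pair of strategies (S = 2): lookup tables history -> {+1, -1}.
pairs = rng.choice([-1, 1], size=(N, 2, n_hist))

# Assumed definition: d_H = fraction of histories on which the two strategies differ.
d_H = (pairs[:, 0, :] != pairs[:, 1, :]).mean(axis=1)

print(d_H)                   # 0 = identical strategies, 1 = fully anti-correlated
print(d_H.mean())            # random pairs average d_H ≈ 0.5
```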

  29. E. “Counteradaptive” agents perform best

  30. E. “Counteradaptive” agents perform best (they choose their worst strategy) • Carefully designed privileges can yield superior results for a subset of agents • An important question, posed carefully so as to avoid introducing either privileged agents or learning: is the illusion of control so powerful that inverting the optimization rule could yield equally unanticipated and opposite results? • The answer is yes: if the fundamental optimization rule of the MG is symmetrically inverted for a limited subset of agents, who choose their worst-performing strategy instead of their best, those agents systematically outperform both their strategies and the other agents. They can also attain positive gain.
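A minimal sketch of the inverted rule described above, under the same simplified (no-impact) THMG assumptions as the earlier sketches: a small flagged subset of agents plays its worst-scoring strategy while everyone else plays its best. Parameters are illustrative; only the rule inversion is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters; the first N_c agents invert the optimization rule.
N, m, S, tau, T, N_c = 31, 2, 2, 10, 5000, 5
n_hist = 2 ** m
strat = rng.choice([-1, 1], size=(N, S, n_hist))
virtual = np.zeros((N, S, tau))
history = int(rng.integers(0, n_hist))
counter = np.arange(N) < N_c            # flag for "counteradaptive" agents
wealth = np.zeros(N)

for t in range(T):
    scores = virtual.sum(axis=2)
    # Counteradaptive agents pick their worst-scoring strategy, the rest their best.
    pick = np.where(counter, scores.argmin(axis=1), scores.argmax(axis=1))
    actions = strat[np.arange(N), pick, history]
    A = actions.sum()
    minority = -int(np.sign(A)) if A != 0 else int(rng.choice([-1, 1]))
    wealth += actions * minority
    payoff = strat[:, :, history] * minority
    virtual = np.concatenate([virtual[:, :, 1:], payoff[:, :, None]], axis=2)
    history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist

print("counteradaptive mean gain/step:", wealth[counter].mean() / T)
print("adaptive        mean gain/step:", wealth[~counter].mean() / T)
```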

  31.–33. E. “Counteradaptive” agents perform best (they choose their worst strategy) [figure slides]

  34. E. “Counteradaptive” agents

  35. Parrondo Games (Physica A, 386, 1:339-344) • The basic effect: two losing games win if alternated • Capital-dependent → history-dependent • Attempting to optimize this effect inverts it • Shown in an unusual multi-player setting • Here (ref.) in the natural single-player setting • Choosing the worst partially restores the Parrondo effect (PE)
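For concreteness, the basic effect can be reproduced with the standard history-dependent Parrondo parameters: game B wins with probability 9/10, 1/4, 1/4, or 7/10 depending on the last two outcomes, each reduced by a small ε. These values are assumed from the general Parrondo-game literature, not taken from the slides; played alone, A and B both drift down, while random alternation drifts up.

```python
import random

# Game B win probabilities, keyed by the last two outcomes (standard literature values).
P_B = {(-1, -1): 0.9, (-1, 1): 0.25, (1, -1): 0.25, (1, 1): 0.7}

def play(game, last_two, eps=0.003):
    """One round of game A (biased coin) or history-dependent game B; returns +1 or -1."""
    p = (0.5 - eps) if game == "A" else P_B[last_two] - eps
    return 1 if random.random() < p else -1

def run(policy, T=500_000, seed=0):
    """Mean gain per round when `policy(last_two)` decides which game to play."""
    random.seed(seed)
    capital, last_two = 0, (1, 1)
    for _ in range(T):
        r = play(policy(last_two), last_two)
        capital += r
        last_two = (last_two[1], r)
    return capital / T

print("A only    :", run(lambda h: "A"))                   # losing on average
print("B only    :", run(lambda h: "B"))                   # losing on average
print("random A/B:", run(lambda h: random.choice("AB")))   # the two losers combine into a winner
```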

  36.–37. Parrondo Games (Physica A, 386, 1:339-344) [figure slides]

  38. Parrondo Games (Physica A, 386, 1:339-344). Under optimization (“choose best”), the 8 × 8 transition matrix: [not reproduced]. Under “choose worst”: [not reproduced].
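One possible reading of the “choose best” / “choose worst” switching rules is sketched below: the player keeps virtual payoffs of both games over the last τ rounds and plays the better- or worse-scoring one. The scoring scheme and window are illustrative assumptions and not necessarily the construction behind the 8 × 8 transition matrices referred to above.

```python
import random

P_B = {(-1, -1): 0.9, (-1, 1): 0.25, (1, -1): 0.25, (1, 1): 0.7}   # as in the previous sketch

def outcome(game, last_two, eps=0.003):
    p = (0.5 - eps) if game == "A" else P_B[last_two] - eps
    return 1 if random.random() < p else -1

def run(choose, tau=10, T=500_000, seed=0):
    """`choose(score_A, score_B)` maps the games' virtual scores over the
    last tau rounds to the game actually played this round."""
    random.seed(seed)
    capital, last_two, window = 0, (1, 1), []
    for _ in range(T):
        score_A = sum(w[0] for w in window)
        score_B = sum(w[1] for w in window)
        game = choose(score_A, score_B)
        # Draw a hypothetical outcome for each game at the current history;
        # only the chosen game's outcome changes capital and the history.
        r_A, r_B = outcome("A", last_two), outcome("B", last_two)
        r = r_A if game == "A" else r_B
        capital += r
        last_two = (last_two[1], r)
        window = (window + [(r_A, r_B)])[-tau:]
    return capital / T

rules = {
    "choose best ": lambda a, b: "A" if a >= b else "B",   # optimize over recent virtual scores
    "choose worst": lambda a, b: "A" if a < b else "B",    # inverted rule
    "random      ": lambda a, b: random.choice("AB"),      # plain Parrondo alternation
}
for name, rule in rules.items():
    print(name, f"mean gain/round: {run(rule):+.4f}")
```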

  39. IV. Other Briefly • Cycle decomposition of THMG • Cycle predictor for real-world 1D series • Status Minority Game

  40. Status MG: “LMG” → “SMG”; mobile agents; competition for “top”: a simple definition of “social” • Boundary conditions: reflective, random, fixed, but NOT circular • Neighborhood size, heterogeneity • Role for different neighborhood functions
