Illusion of Control in Minority and Parrondo Games
Jeffrey Satinover¹, Didier Sornette²
¹ Condensed Matter Physics Laboratory, University of Nice, France; Dept. of Politics, Princeton University, jsatinov@princeton.edu
² Chair of Entrepreneurial Risk, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, dsornette@ethz.ch
I. Message
• Optimization often yields perverse results… (in economic policy-making: the "Law of Unintended Consequences")
• …but not always: when and why?
• We attempt to formally characterize the conditions under which optimization yields perverse outcomes
II. Overview: THMG
• Time-Horizon MG (THMG): pro/con
• In general, agents underperform their strategies for "reasonable" τ (no impact accounting)
• Agent performance declines with d_H
• Under agent evolution: d_H → 0
• "Counteradaptive" agents perform best
III. Parrondo Games, Briefly
• Basic effect: two losing games win if alternated
• History-dependent games
• Attempting to optimize this effect inverts it
• Previously shown in an unusual multi-player setting
• Here, in the natural single-player setting
IV. Other Topics, Briefly
• Cycle decomposition of the THMG
• Cycle predictor for real-world 1D series
• Status Minority Game
A. Time-Horizon MG (THMG): Pro/Con

Pro:
• The MG proper requires an "unreasonable" equilibration time τ_eq
• Many real-world series are not stationary
• Many real-world trading strategies use a short or declining-valued τ (exponential damping)
• Certain kinds of tractability follow from a "reasonable" τ

Con:
• Far from equilibrium
• Arguendo: many real-world series are effectively at equilibrium (high-frequency data?)
• Analytic solutions are more difficult for finite τ
• Very complex finite-size effects, e.g., σ² periodic in τ
THMG Markov Chain (EPJB, B07270)
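A minimal simulation sketch of the game behind this chain, assuming the standard THMG setup (N agents with memory m and S random strategies each, strategy scores summed over only the last τ plays, minority side wins); parameter names, the tie-breaking noise, and the payoff convention are illustrative choices, not the paper's code:

```python
# Minimal THMG sketch (assumed standard rules, not the authors' code).
import numpy as np

def run_thmg(N=31, m=2, S=2, tau=10, T=2000, seed=0):
    rng = np.random.default_rng(seed)
    P = 2 ** m                                  # number of possible m-bit histories
    # Each strategy maps a history (0..P-1) to an action in {-1, +1}.
    strategies = rng.choice([-1, 1], size=(N, S, P))
    scores = np.zeros((N, S))                   # rolling strategy scores over last tau
    window = []                                 # per-step virtual payoffs in the window
    history = int(rng.integers(P))              # current m-bit history as an integer
    agent_gain = np.zeros(N)
    strat_gain = np.zeros((N, S))
    for _ in range(T):
        # Each agent plays its best-scoring strategy (random tie-breaking).
        best = np.argmax(scores + 1e-9 * rng.random((N, S)), axis=1)
        actions = strategies[np.arange(N), best, history]
        A = actions.sum()                       # aggregate action
        winner = int(-np.sign(A)) if A != 0 else int(rng.choice([-1, 1]))
        agent_gain += actions * winner          # minority side gains +1
        virtual = strategies[:, :, history] * winner
        strat_gain += virtual
        # Time horizon: keep only the last tau virtual payoffs in the scores.
        window.append(virtual)
        scores += virtual
        if len(window) > tau:
            scores -= window.pop(0)
        history = ((history << 1) | (winner > 0)) % P   # append winning bit
    return agent_gain.mean() / T, strat_gain.mean() / T

print(run_thmg())   # (mean agent payoff, mean strategy payoff) per step
```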
B. Agents underperform their strategies for "reasonable" τ (no impact accounting)
[Figure slides: agent vs. strategy performance; results hold for all N, m, S tested; example shown for {m, S, N} = {2, 2, 31}]
B. Agents underperform their strategies for "reasonable" τ (no impact accounting)
Do we underestimate the extent to which real-world financial systems are difficult simply because they are far from equilibrium? In a THMG composed entirely of impact-accounting agents, with N = 31, S = 2, a near-equilibrium state is attained for 10 < τ < 100. For τ = 1 or 10, strategies outperform their agents, as described above. For τ ≥ 100, the reverse is true.
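A quick qualitative probe of this τ-dependence, reusing run_thmg from the sketch above; note that that sketch omits impact accounting, whereas the statement for τ ≥ 100 concerns impact-accounting agents, so only the direction of the comparison is meaningful here:

```python
# Sweep the time horizon tau and compare mean per-step agent payoff with
# mean per-step strategy payoff (values of tau chosen for illustration).
for tau in (1, 10, 100, 1000):
    agents, strats = run_thmg(N=31, m=2, S=2, tau=tau, T=5000)
    print(f"tau={tau:4d}  agents={agents:+.4f}  strategies={strats:+.4f}")
```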
D. Agent Evolution
If agents are allowed to evolve their strategies (e.g., adaptive evolution, a GA), then d_H → 0 (a sketch follows).
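One hedged sketch of such an evolutionary step (not necessarily the authors' GA): each agent periodically replaces its worst-scoring strategy with a lightly mutated copy of its best-scoring one, and repeated substitution drives the Hamming distance d_H between an agent's own strategies toward 0.

```python
# Hedged evolution sketch: clone-and-mutate the best strategy over the worst.
import numpy as np

def evolve_strategies(strategies, scores, mut=0.02, rng=None):
    """strategies: (N, S, 2**m) array of +/-1 actions; scores: (N, S)."""
    rng = rng or np.random.default_rng()
    N = strategies.shape[0]
    best = np.argmax(scores, axis=1)
    worst = np.argmin(scores, axis=1)
    for i in range(N):
        clone = strategies[i, best[i]].copy()
        flips = rng.random(clone.shape) < mut     # rare random flips keep some diversity
        clone[flips] *= -1
        strategies[i, worst[i]] = clone
        scores[i, worst[i]] = scores[i, best[i]]  # one convention: inherit the score
    return strategies, scores
```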
Agent performance declines with d_H, but…
…for the MG proper (at equilibrium), for α > α_c:
• Agent performance increases with d_H
• d_H → 1
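Here d_H is read as the normalized Hamming distance between an agent's own strategies, i.e., the fraction of the 2^m histories on which they prescribe different actions (the slides do not restate the formal definition, so this reading is an assumption):

```python
# d_H between two +/-1 strategy vectors of length 2**m.
import numpy as np

def hamming(strat_a, strat_b):
    return np.mean(strat_a != strat_b)   # fraction of histories with differing actions

rng = np.random.default_rng(0)
a, b = rng.choice([-1, 1], size=(2, 2 ** 2))   # m = 2
print(hamming(a, b))   # ~0.5 on average for independent random strategies
```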
E. "Counteradaptive" agents perform best (they choose their worst strategy)
• Carefully designed privileges can yield superior results for a subset of agents
• An important question, posed carefully so as to avoid introducing either privileged agents or learning: is the illusion of control so powerful that inverting the optimization rule yields equally unanticipated, opposite results?
• The answer is yes: if the fundamental optimization rule of the MG is symmetrically inverted for a limited subset of agents, who choose their worst-performing strategy instead of their best, those agents systematically outperform both their own strategies and the other agents. They can also attain positive gain. (A selection-rule sketch follows.)
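A minimal sketch of the inverted rule as a variant of the strategy-selection step in the run_thmg sketch above; the counter mask marking which agents are counteradaptive is an illustrative device, not the paper's notation:

```python
# Strategy selection with a counteradaptive subset ("choose worst").
import numpy as np

def select_actions(strategies, scores, history, counter, rng):
    """counter: boolean mask over agents; True = play the worst-scoring
    strategy (inverted rule), False = the usual best-scoring one."""
    N, S = scores.shape
    noise = 1e-9 * rng.random((N, S))           # random tie-breaking
    best = np.argmax(scores + noise, axis=1)
    worst = np.argmin(scores + noise, axis=1)
    pick = np.where(counter, worst, best)
    return strategies[np.arange(N), pick, history]
```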
Parrondo Games (Physica A 386(1): 339–344)
• Basic effect: two losing games win if alternated
• Capital-dependent → history-dependent games
• Attempting to optimize this effect inverts it
• Previously shown in an unusual multi-player setting
• Here (ref.), in the natural single-player setting
• "Choose worst" partially restores the Parrondo effect (PE)
(A Monte Carlo sketch follows.)
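A Monte Carlo sketch of the basic history-dependent effect, using the standard Parrondo-Harmer-Abbott parameters (an assumption; the paper's exact game definitions may differ): for ε > 0, games A and B are each losing on their own, yet random alternation wins.

```python
# History-dependent Parrondo games: A has win prob 1/2 - eps; B's win prob
# depends on the last two outcomes (1 = win, 0 = loss).
import numpy as np

EPS = 0.003
P_A = 0.5 - EPS
P_B = {(0, 0): 0.9 - EPS, (0, 1): 0.25 - EPS,
       (1, 0): 0.25 - EPS, (1, 1): 0.7 - EPS}

def play(choose_game, T=500_000, seed=0):
    rng = np.random.default_rng(seed)
    capital, hist = 0, (1, 0)          # arbitrary initial two-outcome history
    for t in range(T):
        game = choose_game(t, rng)
        p = P_A if game == "A" else P_B[hist]
        win = rng.random() < p
        capital += 1 if win else -1
        hist = (hist[1], int(win))
    return capital / T                 # average gain per round

print("A alone:   ", play(lambda t, rng: "A"))                      # losing
print("B alone:   ", play(lambda t, rng: "B"))                      # losing
print("random A/B:", play(lambda t, rng: rng.choice(["A", "B"])))   # winning
```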
Parrondo Games (Physica A 386(1): 339–344)
Under optimization ("choose best"), the dynamics are captured by an 8 × 8 transition matrix; likewise under "choose worst". [Matrix figures not reproduced.]
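The optimization rule itself is not reproduced on the slide; one plausible reading, sketched under that assumption, scores each game by its mean payoff over the rounds in which it was recently played, then plays the higher-scoring game ("choose best") or the lower-scoring one ("choose worst"):

```python
# Hedged switching sketch over the games above (P_A, P_B from the previous
# sketch); the scoring window tau and the rule itself are assumptions.
import numpy as np
from collections import deque

def play_switched(mode="best", tau=20, T=500_000, seed=0):
    rng = np.random.default_rng(seed)
    capital, hist = 0, (1, 0)
    recent = {"A": deque(maxlen=tau), "B": deque(maxlen=tau)}
    for _ in range(T):
        # Score each game by its mean recent payoff (0.0 if not played recently).
        s = {g: (np.mean(recent[g]) if recent[g] else 0.0) for g in "AB"}
        if s["A"] == s["B"]:
            game = rng.choice(["A", "B"])   # break ties at random
        elif mode == "best":
            game = "A" if s["A"] > s["B"] else "B"
        else:
            game = "A" if s["A"] < s["B"] else "B"
        p = P_A if game == "A" else P_B[hist]
        payoff = 1 if rng.random() < p else -1
        capital += payoff
        recent[game].append(payoff)
        hist = (hist[1], int(payoff > 0))
    return capital / T

print("choose best: ", play_switched("best"))
print("choose worst:", play_switched("worst"))
# Compare both against the random-alternation gain from the previous sketch.
```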
IV. Other Topics, Briefly
• Cycle decomposition of the THMG
• Cycle predictor for real-world 1D series (a sketch follows)
• Status Minority Game
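A loose illustration of the "cycle predictor" idea only (the paper's cycle decomposition is not reproduced here): binarize the series, then predict each next symbol as the most frequent past continuation of the current m-bit history.

```python
# Toy history-based predictor for a binarized 1D series.
from collections import Counter, defaultdict

def cycle_predict(series, m=3):
    counts = defaultdict(Counter)      # m-bit history -> counts of next symbols
    hits = total = 0
    for i in range(m, len(series)):
        hist = tuple(series[i - m:i])
        if counts[hist]:               # predict only once a history has been seen
            pred = counts[hist].most_common(1)[0][0]
            hits += (pred == series[i])
            total += 1
        counts[hist][series[i]] += 1
    return hits / total if total else float("nan")

series = [1, 1, 0, 0] * 100            # toy cyclic series (1 = up, 0 = down)
print(cycle_predict(series, m=3))      # 1.0 here: each history determines the next bit
```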
Status MG: "LMG" → "SMG"
Mobile agents; competition for the "top": a simple definition of "social"
• Boundary conditions: reflective, random, fixed, but NOT circular
• Neighborhood size, heterogeneity
• A role for different neighborhood functions
(An illustrative boundary-handling sketch follows.)
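Purely as an illustration of the non-circular boundary handling named above (the SMG's mobility and status rules are not reproduced, and the function below is an assumption-laden toy): k-nearest neighborhoods on a 1D line with reflective edges.

```python
# Neighborhoods on a 1D lattice WITHOUT circular wrap-around: indices past an
# edge reflect back inside instead of wrapping to the other end.
def neighborhood(i, N, k):
    """The 2k nearest sites of i on a line of N sites, reflecting at the edges."""
    out = []
    for d in range(1, k + 1):
        for j in (i - d, i + d):
            if j < 0:
                j = -j                  # reflect off the left edge
            elif j >= N:
                j = 2 * (N - 1) - j     # reflect off the right edge
            out.append(j)
    return out

print(neighborhood(0, 10, 2))   # [1, 1, 2, 2]: edge sites see fewer distinct neighbors
print(neighborhood(5, 10, 2))   # [4, 6, 3, 7]
```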