
Presidential Address: A Program of Web-Based Research on Decision Making


Presentation Transcript


  1. Presidential Address: A Program of Web-Based Research on Decision Making Michael H. Birnbaum SCiP, St. Louis, MO November 18, 2010

  2. Preliminary Remarks • I am grateful for the honor to be president of SCiP and this opportunity to speak. • You are invited to attend the Edwards Bayesian Research Conference in Fullerton in January 2011. Look for details by email. • You, your students, and colleagues are invited to apply to the NSF-funded training sessions in Web-based DRMS research. Look for details by email. Next session follows Edwards Bayesian Meetings.

  3. Preliminary Remarks: A Personal History of Computing in Psychology Michael H. Birnbaum, Decision Research Center, California State University, Fullerton, CA

  4. Lab Research in the 1960s Michael at the Keypunch-- ~ 1965-66 in Parducci Lab in Franz Hall, UCLA

  5. The “monster” in the basement -- Parducci Lab • This machine’s relays and rheostats controlled equipment in the next room. Programmed by a paper tape reader, knobs, and dials, it recorded subjects’ responses on computer cards.

  6. We had to use Mainframe Computers to Analyze data--One Run a Day…

  7. We all Awaited the Personal Computer…Predicted for 2004

  8. Difficulties of Lab Research • Limited “seating” in lab, limited hours • Reconfigure equipment for each study • Lab assistant needed who has skill to trouble-shoot problems with equipment • Experimenter must be present • Experimenter bias effects...

  9. 1970: The Lab Computer -- Programmable in BASIC -- Expensive, but it still lacked real power

  10. 1980s: Then came affordable personal computers

  11. One Computer per Participant • Still limited to facilities in a lab • Networks made it more efficient to assemble data from different participants and to mediate interactions between participants.

  12. 1989-90: The WWW…1995: Browsers & HTML2 • One could now: • …Test large numbers of participants • …Test people at remote locations • …Recruit people with special characteristics • …Run the same study in the lab and via the Web for comparison

  13. My First Web Studies • I wanted to compare highly educated participants with undergraduates. • Critical tests refuted then-popular theories of decision making. • Recruit PhDs who are members of SJDM and SMP and who study DM. • Recruit and test participants via the WWW.

  14. Critical Tests are Theorems of One Model that are Violated by Another Model • This approach has advantages over tests or comparisons of fit. • It is not the same as “axiom testing.” • Use fits of the rival model to predict where to find violations of theorems deduced from the model being tested.

  15. Outline • I will discuss critical properties that test between nonnested theories: CPT and TAX. • Lexicographic Semiorders vs. family of transitive, integrative models (including CPT and TAX). • Integrative Contrast Models (e.g., Regret, Majority Rule) vs. transitive, integrative models.

  16. Cumulative Prospect Theory/ Rank-Dependent Utility (RDU)
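(The CPT/RDU equation itself did not survive in this transcript. The following is a minimal sketch of a rank-dependent computation for gambles on gains, assuming the Tversky-Kahneman (1992) functional forms and parameters -- u(x) = x^0.88 and an inverse-S weighting function with gamma = 0.61; these are illustrative assumptions, not necessarily the forms shown on the slide.)

    # Sketch of CPT/RDU for gains, assuming Tversky-Kahneman (1992) forms.
    def W(p, g=0.61):
        """Inverse-S cumulative weighting function (assumed form)."""
        return p ** g / ((p ** g + (1 - p) ** g) ** (1 / g))

    def cpt_value(branches, beta=0.88, g=0.61):
        """branches: list of (outcome, probability) pairs, outcomes >= 0.
        Each branch gets the decumulative weight W(P(win >= x)) - W(P(win > x))."""
        ranked = sorted(branches, key=lambda b: b[0], reverse=True)  # best first
        total, cum = 0.0, 0.0
        for x, p in ranked:
            total += (W(cum + p, g) - W(cum, g)) * (x ** beta)
            cum += p
        return total

    # Example: CPT value of the gamble ($100, 0.5; $0, 0.5)
    print(cpt_value([(100, 0.5), (0, 0.5)]))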

  17. TAX Model

  18. “Prior” TAX Model Assumptions:

  19. TAX Parameters • For 0 < x < $150, u(x) = x gives a decent approximation. • Risk aversion is produced by d; here d = 1.
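(The TAX equation is likewise not preserved in this transcript. Below is a minimal sketch of the special “prior” TAX model, assuming the form described in Birnbaum’s published papers: with d > 0, each branch with a lower consequence takes a share d·t(p)/(n+1) of the weight of each branch with a higher consequence. Parameters are those listed above -- u(x) = x, t(p) = p^0.7, d = 1; treat the exact functional form as an assumption of this sketch.)

    # Sketch of the special ("prior") TAX model for gains (assumed form).
    def tax_value(branches, gamma=0.7, delta=1.0):
        """branches: list of (outcome, probability) pairs; delta is the slide's d."""
        ranked = sorted(branches, key=lambda b: b[0], reverse=True)  # best first
        n = len(ranked)
        t = [p ** gamma for _, p in ranked]   # t(p) = p**0.7
        u = [float(x) for x, _ in ranked]     # u(x) = x for 0 < x < $150
        total = sum(ti * ui for ti, ui in zip(t, u))
        for i in range(n):                    # i indexes the higher consequence
            for j in range(i + 1, n):         # j indexes the lower consequence
                total += delta * t[i] / (n + 1) * (u[j] - u[i])  # weight transfer
        return total / sum(t)

    # Example: the prior TAX value of a 50-50 gamble to win $100 or $0 is about
    # $33.33, illustrating the risk aversion produced by d = 1.
    print(tax_value([(100, 0.5), (0, 0.5)]))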

  20. TAX and CPT nearly identical for binary (two-branch) gambles • CE (x, p; y) is an inverse-S function of p according to both TAX and CPT, given their typical parameters. • Therefore, there is little point trying to distinguish these models with binary gambles.

  21. Non-nested Models

  22. CPT and TAX nearly identical inside the prob. simplex

  23. Testing CPT vs. TAX: Violations of: • Coalescing • Stochastic Dominance • Lower Cumulative Independence • Upper Cumulative Independence • Upper Tail Independence • Gain-Loss Separability

  24. Testing TAX Model vs. CPT: Violations of: • 4-Distribution Independence (RS’) • 3-Lower Distribution Independence • 3-2 Lower Distribution Independence • 3-Upper Distribution Independence (RS’) • Restricted Branch Independence (RS’)

  25. Stochastic Dominance • A test between CPT and TAX: G = (x, p; y, q; z) vs. F = (x, p – s; y’, s; z). Note that this recipe uses 4 distinct consequences, x > y’ > y > z > 0, outside the probability simplex defined on three consequences. CPT ⇒ choose G; TAX ⇒ choose F. Test whether violations are due to “error.”
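(A numerical sketch of this recipe, using the prior TAX model sketched above. The dollar amounts and probabilities below are illustrative values of the kind used in Birnbaum’s published tests, an assumption of this sketch: x = $96, y’ = $90, y = $14, z = $12, p = 0.90, q = 0.05, s = 0.05. G dominates F, and CPT, which satisfies stochastic dominance for any parameters, must favor G; the prior TAX sketch assigns F the higher value.)

    # Stochastic dominance recipe under the prior TAX sketch (assumed values).
    def tax_value(branches, gamma=0.7, delta=1.0):
        """Prior TAX value, same sketch as above."""
        ranked = sorted(branches, key=lambda b: b[0], reverse=True)
        n = len(ranked)
        t = [p ** gamma for _, p in ranked]
        u = [float(x) for x, _ in ranked]
        total = sum(ti * ui for ti, ui in zip(t, u))
        for i in range(n):
            for j in range(i + 1, n):
                total += delta * t[i] / (n + 1) * (u[j] - u[i])
        return total / sum(t)

    G = [(96, 0.90), (14, 0.05), (12, 0.05)]   # G = (x, p; y, q; z), dominant
    F = [(96, 0.85), (90, 0.05), (12, 0.10)]   # F = (x, p - s; y', s; z), dominated
    print(round(tax_value(G), 1))   # about 45.8
    print(round(tax_value(F), 1))   # about 63.1 -> TAX predicts choosing F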

  26. Error Model Assumptions • Each choice pattern in an experiment has a true probability, p, and each choice has an error rate, e. • The error rate is estimated from inconsistency of response to the same choice by the same person over repetitions.
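(A minimal sketch of these assumptions as a “true and error” model for one choice presented twice: if p is the true probability of the violating preference and e is the error rate on each presentation, with errors independent given the true preference, the predicted proportions of the observable response patterns follow directly.)

    # True-and-error sketch: one choice, two repetitions.
    # B = violate dominance, A = satisfy it; p = true probability of the
    # violating preference, e = error rate per response.
    def true_and_error(p, e):
        return {
            "BB": p * (1 - e) ** 2 + (1 - p) * e ** 2,   # violation both times
            "AA": p * e ** 2 + (1 - p) * (1 - e) ** 2,   # satisfied both times
            "AB_or_BA": 2 * e * (1 - e),                 # one reversal, either order
        }

    # Example with the undergraduate estimates on the next slide (e = 0.19,
    # p = 0.85); the predicted proportions are close to the reported 59% BB
    # and 28% reversals.
    print(true_and_error(p=0.85, e=0.19))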

  27. Violations of Stochastic Dominance • 122 Undergrads: 59% TWO violations (BB); 28% preference reversals (AB or BA). Estimates: e = 0.19, p = 0.85. • 170 Experts: 35% repeated violations; 31% reversals. Estimates: e = 0.20, p = 0.50.

  28. 42 Studies of Stochastic Dominance, n = 12,152 • Large effects of splitting vs. coalescing of branches • Small effects of education, gender, and study of decision science • Very small effects of 15 probability formats and of requests to justify choices • Minuscule effects of event framing (framed vs. unframed)

  29. Summary: Prospect Theories not Descriptive • Violations of Coalescing, Stochastic Dominance, Gain-Loss Separability, and 9 other critical tests. • Summary in my 2008 Psychological Review paper. • JavaScript Web pages contain calculators for model predictions. • Archives contain experimental materials and data.

  30. Lexicographic Semiorders • Intransitive Preference. • The priority heuristic of Brandstaetter, Gigerenzer & Hertwig is a variant of LS, plus some additional features. • In this class of models, people do not integrate information or show interactions, such as the probability × prize interaction found in the family of integrative models (CPT, TAX, GDU, EU, and others).

  31. LPH LS: G = (x, p; y) vs. F = (x’, q; y’) • If (y – y’ > D) choose G • Else if (y’ – y > D) choose F • Else if (p – q > d) choose G • Else if (q – p > d) choose F • Else if (x – x’ > 0) choose G • Else if (x’ – x > 0) choose F • Else choose randomly
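(The decision rule above translates directly into code. In this sketch the thresholds D, on lowest outcomes, and d, on probabilities, are free parameters set to illustrative values, not fitted estimates.)

    # Sketch of the LPH lexicographic semiorder for two-branch gambles
    # G = (x, p; y) and F = (x2, q; y2); D and d are illustrative thresholds.
    import random

    def lph_choice(G, F, D=1.0, d=0.10):
        (x, p, y), (x2, q, y2) = G, F
        if y - y2 > D:  return "G"   # lowest outcomes decide first
        if y2 - y > D:  return "F"
        if p - q > d:   return "G"   # then probabilities
        if q - p > d:   return "F"
        if x > x2:      return "G"   # then highest outcomes
        if x2 > x:      return "F"
        return random.choice(["G", "F"])

    # Example: lowest outcomes tie, probabilities differ by less than d,
    # so the highest outcome decides.
    print(lph_choice((100, 0.50, 0), (95, 0.52, 0)))   # "G"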

  32. Family of LS • In two-branch gambles, G = (x, p; y), there are three attributes: L = lowest outcome (y), P = probability (p), and H = highest outcome (x). • There are 6 orders in which one might consider the attributes: LPH, LHP, PLH, PHL, HPL, HLP. • In addition, there are two threshold parameters (for the first two attributes).

  33. Testing Lexicographic Semiorder Models • Critical properties: Violations of Transitivity, Violations of Priority Dominance, Integrative Independence, Interactive Independence, Allais Paradoxes. [Slide shows a diagram comparing predictions of LS with CPT, TAX, and EU on these properties.]

  34. New Tests of Independence • Dimension Interaction: Decision should be independent of any dimension that has the same value in both alternatives. • Dimension Integration: indecisive differences cannot add up to be decisive. • Priority Dominance: if a difference is decisive, no effect of other dimensions.

  35. Taxonomy of choice models

  36. Dimension Interaction

  37. Family of LS • 6 Orders: LPH, LHP, PLH, PHL, HPL, HLP. • There are 3 ranges for each of two parameters, making 9 combinations of parameter ranges. • There are 6 × 9 = 54 LS models. • But all models predict SS, RR, or ??.

  38. Results: Interaction (n = 153)

  39. Analysis of Interaction • Estimated probabilities: • P(SS) = 0 (prior PH) • P(SR) = 0.75 (prior TAX) • P(RS) = 0 • P(RR) = 0.25 • Priority Heuristic: Predicts SS

  40. Results: Dimension Integration • Data strongly violate the independence property of the LS family • Data are consistent instead with dimension integration: two small, indecisive effects can combine to reverse preferences. • Observed with all pairs of two dimensions.

  41. New Studies of Transitivity • LS models violate transitivity (the property that A > B and B > C implies A > C). • Birnbaum & Gutierrez (2007) tested transitivity using Tversky’s gambles, using typical methods for display of choices. • Text displays and pie charts, with and without numerical probabilities. Similar results with all 3 procedures.

  42. Replication of Tversky (‘69) with Roman Gutierrez • 3 Studies used Tversky’s 5 gambles, formatted with tickets and pie charts. • Exp 1, n = 251, tested via computers.

  43. Three of Tversky’s (1969) Gambles • A = ($5.00, 0.29; $0) • C = ($4.50, 0.38; $0) • E = ($4.00, 0.46; $0) • Priority Heuristic predicts: A preferred to C; C preferred to E; but E preferred to A. Intransitive. • TAX (prior): E > C > A.
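(Applying the LPH sketch from earlier to these three gambles -- all lowest outcomes are $0 -- with an illustrative probability threshold d = 0.10 reproduces the predicted cycle; the thresholds are assumptions of the sketch, not fitted values.)

    # LPH rule applied to Tversky's gambles A, C, E (sketch; illustrative thresholds).
    def lph_choice(G, F, D=1.0, d=0.10):
        (x, p, y), (x2, q, y2) = G, F
        if y - y2 > D:  return "G"
        if y2 - y > D:  return "F"
        if p - q > d:   return "G"
        if q - p > d:   return "F"
        return "G" if x > x2 else "F" if x2 > x else "tie"

    A, C, E = (5.00, 0.29, 0.0), (4.50, 0.38, 0.0), (4.00, 0.46, 0.0)
    print(lph_choice(A, C))   # "G": A over C (probabilities within d; $5.00 > $4.50)
    print(lph_choice(C, E))   # "G": C over E
    print(lph_choice(A, E))   # "F": E over A (0.46 - 0.29 = 0.17 > d) -> intransitive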

  44. Response Combinations

  45. Results-ACE

  46. Summary • The Priority Heuristic model’s predicted violations of transitivity are rare. • Observed Dimension Interaction violates every member of the LS family, including PH. • Observed Dimension Integration violates every LS model, including PH. • Evidence of interaction and integration is compatible with integrative models such as EU, CPT, and TAX. • Birnbaum, Journal of Mathematical Psychology, 2010.

  47. Integrative Contrast Models • Family of Integrative Contrast Models • Special Cases: Regret Theory, Majority Rule (aka Most Probable Winner) • Predicted Intransitivity: Forward and Reverse Cycles • Research with Enrico Diecidue

  48. Integrative, Interactive Contrast Models

  49. Assumptions

  50. Special Cases • Majority Rule (aka Most Probable Winner) • Regret Theory • Other models arise with different functions, f.
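(A minimal sketch of one special case, majority rule or “most probable winner”: choose whichever gamble gives the better outcome on the more probable set of events. The three gambles below, defined on the same three equally likely events, are illustrative assumptions constructed to show the kind of forward cycle these contrast models can predict.)

    # Majority rule / most probable winner (sketch; illustrative gambles).
    def majority_rule(first, second, probs):
        """first, second: outcome lists on the same events; probs: event probabilities."""
        p1 = sum(p for f, s, p in zip(first, second, probs) if f > s)
        p2 = sum(p for f, s, p in zip(first, second, probs) if s > f)
        return "first" if p1 > p2 else "second" if p2 > p1 else "indifferent"

    probs = [1/3, 1/3, 1/3]          # three equally likely events
    A = [300, 100, 200]
    B = [200, 300, 100]
    C = [100, 200, 300]
    print(majority_rule(A, B, probs))   # "first": A over B
    print(majority_rule(B, C, probs))   # "first": B over C
    print(majority_rule(C, A, probs))   # "first": C over A -> intransitive cycle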
