The Psychology of Judgment & Decision Making MIS 696A – Readings in MIS (Nunamaker) 05 November 2003 [Cha / Correll / Diller / Gite / Kim / Liu / Zhong]
SECTION I: Perception, Memory & Context – Hoon Cha & Jeff Correll
Chapter 1: Selective Perception – Hoon Cha
Define First and See? • People selectively perceive what they expect and hope to see
Examples • Any published book has been read many times before it appears in print, including by professional proofreaders. • And yet grammatical and other errors still make it into print. Why? • Because the mind obligingly corrects the errors our eyes pass over; we tend to see what we expect to see.
Lessons Learned • Before conducting your research and interpreting your results: • Ask yourself what expectations you brought into the situation • Consult with others who don’t share your expectations and motives
Are you a sexist person? • People are motivated to reduce or avoid psychological inconsistencies. • Cognitive dissonance • When making inferences about their own attitudes, people are in much the same position as an outside observer, inferring attitudes from behavior. • Self-perception theory
Examples • Smokers find all kinds of reasons to explain away their unhealthy habit. • The alternative is to feel a great deal of dissonance.
Lessons Learned • A change in behavior can produce a change in attitude • During your research, get people to commit to and take ownership of an object; they will then form more positive attitudes toward it • Use systems development as a research methodology
"I knew it all along …" • Memory is reconstructive, not a storage chest in the brain. • Shattered memories • It can be embarrassing when things happen unexpectedly. People tend to view what has already happened as relatively inevitable and obvious. • Hindsight bias
Examples • Just before the election, people tend to be uncertain about who will win; but, after the election, they tend to point to signs that they now say had indicated clearly to them which candidate was going to win. • In other words, they are likely to remember incorrectly that they had known all along who the winning candidate was going to be.
Lessons Learned • During your research, explicitly consider how past events might have turned out differently. • Keep in mind the value of keeping accurate notes and records of past events
Chapter 4: Context Dependence – Jeff Correll
4 Illustrations of Context Effect • Contrast Effect • Primacy Effect • Recency Effect • Halo Effect
Contrast Effect • Examples: • Experiment with 3 bowls of water • Sports announcer standing next to basketball players vs. horse jockeys • Only occurs among similar objects – e.g., a person’s apparent size won’t change when standing next to a large racehorse (Ebbinghaus Illusion)
Primacy Effect • Characteristics appearing early in a list influence impressions more strongly than those appearing later – Asch (1946) • The first entry is most important, but the 2nd and 3rd also show a primacy effect – Anderson (1965) • This effect also occurs in many other situations involving sequential information
Recency Effect • Sometimes the final presentation has more influence than the first • Which is stronger? It depends (Miller and Campbell, 1959) • Hoch (1984) found similar results in human prediction experiments
Halo Effect • People can’t treat an individual as a compound of separate qualities and rate each quality independently of the others • Examples: Army officer ratings, teacher evaluations, “beauty halo”, warm vs. cold, teacher expectations, etc.
Conclusion – Context Dependence • Everything is context-dependent • Persuasion professionals exploit these effects • That includes us as MIS researchers! • Contextual effects do have limits, however
SECTION II: How Questions Affect Answers – How the format of a problem can influence the way people respond to it – Jeff Correll
Chapter 5: Plasticity – Jeff Correll
Are you a ‘gambler’? • The same choice in a different context can lead to very different answers: • A: 100% chance of losing $50 • B: 25% chance of losing $200, 75% chance of losing nothing • Worded in ‘sure loss’ language = risk-taking • Worded in ‘insurance’ language = risk-averse (a quick expected-value check follows below)
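As a quick check (not part of the original slides), the two options carry the same expected loss, so the flip from risk-taking to risk-averse is driven entirely by the wording. A minimal Python sketch:

    # Both options in the "gambler" example have an expected loss of $50,
    # so any preference reversal comes from the framing, not the arithmetic.
    ev_a = 1.00 * -50              # A: sure loss of $50
    ev_b = 0.25 * -200 + 0.75 * 0  # B: 25% chance of losing $200
    print(ev_a, ev_b)              # -> -50.0 -50.0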
Order Effects • The order of questions/alternatives also influences responses • Example: Schuman and Presser’s 1981 survey on freedom of the press • The recency effect is the most common response-order effect • Example: Survey question about divorce
Pseudo-Opinions • People will offer an opinion on a topic about which they have no real opinion (a “pseudo-opinion”) – roughly 25 to 35% of respondents do so • Multiple humorous examples • Common on issues involving foreign and military policy • Pseudo-opinions must be screened out with filter questions
Inconsistency • Discrepancy between two related attitudes (attitude-attitude) or an attitude and a corresponding behavior (attitude-behavior) • Attitude-attitude inconsistency: Attitudes about abstract propositions are often unrelated to attitudes about specific applications of the same proposition! • Attitude-behavior inconsistency: People can hold abstract opinions which have little or nothing to do with their actual behavior!
Inconsistency – Continued • Ultimate example of attitude-behavior inconsistency: Darley and Batson’s 1973 experiment on seminary students • Should we abandon the idea of attitudes altogether (Wicker)? • “Revisionist” attitude researchers say no – attitudes are consistent with behavior, provided certain conditions are met (Ajzen et al., 1977)
Conclusion – Plasticity • Russian Proverb: • “Going through life is not so simple as crossing a field” • Translation to Judgment and Decision-Making: • “Measuring an attitude, opinion, or preference is not so simple as asking a question” • We as MIS researchers must pay close attention to the structure and context of our survey questions!
Chapter 6: Effects of Wording & Framing – Jeff Correll
Question Wording • Small changes in wording can produce big changes in how people answer: • Example: Do your country’s nuclear weapons make you feel “safe”? (40% yes, 50% no, 10% no opinion) vs. “safer”? (50% yes, 36% no, 14% no opinion) • Potential pitfalls in question wording: • “Forced choice” questions (no middle category) • Questions with a middle category • Open vs. closed questions – Schuman and Scott (1987)
Response Scales / Social Desirability / Allow vs. Forbid • Differences in response scales also influence results (e.g., reported TV usage) • In the absence of a firm opinion on an issue, respondents typically cling to “catch phrases” that point them in a socially desirable direction • Are you for or against a freeze in nuclear weapons? (one question equated it with “Russian nuclear superiority”, the other with “world peace”) • Varying the words Allow and Forbid leads to very different responses (Rugg, 1941)
Framing • People respond differently to losses than to gains (Tversky and Kahneman, 1981) • A: Sure gain of $240, or • B: 25% chance to gain $1000, 75% chance to gain $0 • 84% chose A over B (people tend to be risk-averse with gains) • C: Sure loss of $750, or • D: 75% chance to lose $1000, 25% chance to lose $0 • 87% chose D over C (people tend to be risk-seeking with losses)
Framing – Continued • Interesting point: A and D are chosen together 73% of the time, yet the combination of B and C is strictly better – same probabilities, higher payoff in every case (see the sketch below) • The concept has similar application to medical decision making: • “Asian Disease” question (1981) • Lung cancer treatment decision experiment
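A quick worked check of that claim (the dollar amounts come from the gambles on the previous slide; the combination arithmetic is added here only as an illustration):

    # Combine the choices into single two-outcome gambles:
    # A and D: 25% chance of +$240, 75% chance of $240 - $1000 = -$760
    # B and C: 25% chance of $1000 - $750 = +$250, 75% chance of -$750
    ev_ad = 0.25 * 240 + 0.75 * (240 - 1000)     # -> -510.0
    ev_bc = 0.25 * (1000 - 750) + 0.75 * (-750)  # -> -500.0
    print(ev_ad, ev_bc)  # B&C pays $10 more than A&D in every outcome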
Psychological Accounting • Decision makers also frame the outcomes of their choices • Main issue: Is the outcome framed in terms of the direct consequences of an act (“minimal account”), or is it evaluated with respect to a previous balance (“inclusive account”)? • The price to see a play is $10. As you enter the theatre, you realize you’ve lost a $10 bill. Would you still pay $10 for a ticket to the play? (88% said yes) • Same situation, but this time you’ve lost your $10 ticket (which you’ve already paid for and can’t replace). Would you pay $10 for another ticket? (only 46% said yes!)
Conclusion – Question Wording and Framing • Both can significantly affect how people respond • In our studies, we as MIS researchers must consider how respondents’ answers might have changed under any of the previous factors • Furthermore, we should probably qualify interpretations of results until multiple variations in wording/framing have been tested: • If the results of multiple procedures are consistent, there may be some basis for trusting the judgment; otherwise, ‘further analysis is required’ (Slovic, Griffin, and Tversky, 1990)
SECTION III: Models of Decision Making – Chris Diller
Classic Utility Theory • Example: Self-Test Question #30 • The "St. Petersburg Paradox" • Question initially posed by Nicolas Bernoulli (1713) • "Solution" provided by Daniel Bernoulli (1738/1954)
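For readers without the self-test handy, the standard form of the paradox (the exact wording of Question #30 is assumed here): a fair coin is tossed until it first lands heads; if that happens on toss n, the player wins $2^n. The expected payoff diverges, yet most people will pay only a few dollars to play. A small Python sketch:

    # Each toss n contributes (1/2**n) * 2**n = $1 to the expected value,
    # so the truncated sum grows without bound as more tosses are allowed.
    def st_petersburg_ev(max_tosses):
        return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

    print(st_petersburg_ev(10), st_petersburg_ev(100))   # -> 10.0 100.0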
Expected Utility Theory • Developed by von Neumann & Morgenstern (1947) • The marginal value of money DECLINES with the amount won (or already possessed) • Normative … NOT descriptive!
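Daniel Bernoulli's resolution of the St. Petersburg paradox rests on exactly this declining marginal value. A sketch assuming his classic logarithmic utility (the specific choice u(x) = ln(x) is an illustration, not something the deck specifies):

    import math

    # With a concave utility such as u(x) = ln(x), the expected utility of
    # the St. Petersburg game is finite even though its expected dollar
    # value is not; the partial sums converge to 2*ln(2), about 1.386.
    def st_petersburg_eu(max_tosses):
        # utility of the payoff 2**n is ln(2**n) = n * ln(2)
        return sum((0.5 ** n) * n * math.log(2) for n in range(1, max_tosses + 1))

    print(round(st_petersburg_eu(10), 3), round(st_petersburg_eu(1000), 3))
    # -> 1.378 1.386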
Expected Utility Theory • "Rational Decision Making" Assumptions • Ordering = Preferred alternatives or indifference • Dominance = Alternative with better outcome(s) • "Weakly" dominant vs. "Strongly" dominant • Cancellation = Ignore identical factors/consequences • Transitivity = If A > B and B > C … then A > C ! • Continuity = Prefer gamble to sure thing (odds!) • Invariance = Unaffected by way alt's are presented • A Major Paradigm with Many Extensions
The Allais Paradox • Example: Self-Test Question #28 • Maurice Allais (1953) • Showed how the Cancellation Principle is violated • The addition of equivalent consequences CAN lead people to make different (irrational?) choices
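One standard statement of the Allais gambles (assumed here; the exact numbers in Self-Test Question #28 may differ) makes the cancellation violation concrete:

    # Pair 1: most people prefer A (the sure million) over B.
    # Pair 2: most people prefer D over C.
    # Dropping the 89% chance of $1M that A and B share turns them into
    # C and D, so choosing both A and D violates the cancellation principle.
    def expected_value(gamble):
        return sum(p * x for p, x in gamble)

    A = [(1.00, 1_000_000)]
    B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
    C = [(0.11, 1_000_000), (0.89, 0)]
    D = [(0.10, 5_000_000), (0.90, 0)]
    print(round(expected_value(A)), round(expected_value(B)))   # 1,000,000 vs 1,390,000
    print(round(expected_value(C)), round(expected_value(D)))   # 110,000 vs 500,000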
Ellsberg's Paradox • Daniel Ellsberg (1961) • Also showed how the Cancellation Principle is violated • People make different (irrational?) choices in order to avoid uncertain probabilities • Example: Urn with 90 balls (R/B/Y)
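In the standard version of the urn (details assumed here, since the slide gives only the ball count): 30 balls are red, and the remaining 60 are black or yellow in an unknown proportion. Most people bet on red rather than black, yet bet on black-or-yellow rather than red-or-yellow. A quick consistency check:

    Preferring red over black implies the believed P(black) < 1/3.
    Preferring black-or-yellow over red-or-yellow implies
        P(black) + P(yellow) > 1/3 + P(yellow),  i.e.  P(black) > 1/3.
    No single belief satisfies both, and the shared P(yellow) term should
    have cancelled out, which is exactly the cancellation principle.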
Intransitivity • "Money Pump" • Decision makers with intransitive preferences: A < B, B < C, yet A > C • Amos Tversky (1969) • Harvard study: 1/3 of subjects displayed this! • "Committee Problem" example • Choose between three applicants • Leader frames the vote to avoid direct comparisons
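To see why intransitive preferences are exploitable, here is a minimal money-pump sketch (the $1 trading fee and the specific cycle are illustrative assumptions, not from the deck):

    # An agent who prefers B over A, C over B, and A over C will pay a small
    # fee for every "upgrade" and can be walked around the cycle indefinitely.
    prefers = {("B", "A"), ("C", "B"), ("A", "C")}   # (preferred, over)
    holding, cash, fee = "A", 0.0, 1.0
    for offered in ["B", "C", "A"] * 3:              # three laps around the cycle
        if (offered, holding) in prefers:            # the agent happily trades up,
            holding, cash = offered, cash - fee      # paying the fee each time
    print(holding, cash)                             # -> A -9.0 (back where it started, $9 poorer)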
Preference Reversals • Sarah Lichtenstein & Paul Slovic (1971) • Preferences can be "reversed" depending upon how they are elicited • High payoff vs. High probability • Choosing between a PAIR of alternatives involves different psychological processes … than bidding on a particular alternative separately • Exist even for experienced DMs in real life! • Example: Study of Las Vegas bettors & dealers
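A typical stimulus pair from this line of research (the exact amounts are illustrative assumptions, not taken from the deck):

    # P-bet: high probability of a small win; $-bet: low probability of a large win.
    p_bet = (29 / 36, 2.00)
    dollar_bet = (7 / 36, 9.00)
    for name, (p, win) in [("P-bet", p_bet), ("$-bet", dollar_bet)]:
        print(name, round(p * win, 2))   # nearly equal expected values (1.61 vs 1.75)
    # The reversal: most subjects *choose* the P-bet, yet state a higher
    # minimum selling price for the $-bet.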
Conclusions • Violations of EUT are not always irrational! • Approximations simplify difficult decisions • They increase efficiency by reducing cognitive effort • They lead to decisions similar to those of optimal strategies • They assume that the world is NOT designed to take advantage of the approximations being used • A decision strategy that cannot be defended as logical may still be rational if it yields a quick approximation of a normative strategy that maximizes utility.
Satisficing • Herb Simon Blows Up EUT (1956) • EUT's simplifying assumptions make its problems tractable, but are unrealistic: • DMs are assumed to have complete information • DMs are assumed to understand and USE this information • DMs are assumed to compare calculations & maximize utility • Simon says: People "satisfice" rather than optimize • "People often choose a path that satisfies their most important needs, even though the choice may not be ideal or optimal." • Humans' adaptive nature falls short of economic maximization
Prospect Theory • Daniel Kahneman & Amos Tversky (1979) • Prospect Theory differs from EUT in two big ways: • "Utility" is replaced with "Value", defined over gains and losses relative to a reference point rather than over net wealth • The value function for losses differs from the one for gains (it is steeper, reflecting loss aversion)
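A commonly cited parameterization of that value function is sketched below; the alpha/beta/lambda numbers come from Tversky & Kahneman's later (1992) cumulative prospect theory paper and are used here only to illustrate the shape the slide describes:

    # v(x) = x**alpha for gains, -lambda * (-x)**beta for losses:
    # concave over gains, convex over losses, and steeper for losses.
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    print(round(value(100), 1), round(value(-100), 1))   # -> 57.5 -129.5
    # A $100 loss looms larger than a $100 gain feels good (loss aversion).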