
Biases in Human Decision Making

The Need to Assess Probabilities. People need to make decisions constantly, such as during diagnosis and therapy. Thus, people need to assess probabilities to classify objects or predict various values, such as the probability of a disease given a set of symptoms. People employ several types of heuristics to assess probabilities. However, these heuristics often lead to significant biases in a consistent fashion.



    1. Biases in Human Decision Making Yuval Shahar M.D., Ph.D.

    3. Three Major Human Probability-Assessment Heuristics/Biases (Tversky and Kahneman, 1974) Representativeness: the more object X is similar to class Y, the more likely we think X belongs to Y. Availability: the easier it is to consider instances of class Y, the more frequent we think it is. Anchoring: initial estimated values affect the final estimates, even after considerable adjustments.

    4. A Representativeness Example Consider the following description: “Steve is very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve a farmer, a librarian, a physician, an airline pilot, or a salesman?

    5. The Representativeness Heuristic We often judge whether object X belongs to class Y by how representative X is of class Y. For example, people order the potential occupations for Steve by probability and by similarity in exactly the same way. The problem is that similarity judgments ignore many factors that should affect probability, which produces multiple biases.

    6. Representative Bias (1): Insensitivity to Prior Probabilities The base rate of outcomes should be a major factor in estimating their frequency. However, people often ignore it (e.g., there are more farmers than librarians). E.g., the lawyers vs. engineers experiment: reversing the proportions in the group had no effect on estimating the profession, given a description. Giving worthless evidence caused the subjects to ignore the odds and estimate the probability as 0.5. Thus, prior probabilities of diseases are often ignored when the patient seems to fit a rare-disease description.
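
A minimal numeric sketch of base-rate neglect using Bayes' rule; the prior (20 farmers for every librarian) and the description-fit probabilities are illustrative assumptions, not figures from the slide:

```python
# Hedged illustration of base-rate neglect; all numbers are assumptions.
p_librarian = 1 / 21            # assumed prior: 20 farmers per librarian
p_farmer = 20 / 21
p_desc_given_librarian = 0.40   # assumed: description fits 40% of librarians
p_desc_given_farmer = 0.05      # assumed: description fits 5% of farmers

# Posterior probability that Steve is a librarian, given the description
p_desc = (p_desc_given_librarian * p_librarian
          + p_desc_given_farmer * p_farmer)
p_librarian_given_desc = p_desc_given_librarian * p_librarian / p_desc
print(round(p_librarian_given_desc, 2))  # ~0.29: farmer is still more likely
```

Even though the description fits a librarian eight times better, the base rate keeps the posterior in favor of the far more numerous farmers.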

    7. Representative Bias (2): Insensitivity to Sample Size The size of a sample drawn from a population should greatly affect the likelihood of obtaining certain results in it. People, however, ignore sample size and rely only on superficial similarity measures. For example, people ignore the fact that larger samples are less likely to deviate from the population mean than smaller samples.
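
A small simulation sketch of this point, assuming fair coin flips as a stand-in for any 50/50 outcome: large samples stray above 60% far less often than small ones.

```python
import random

# Estimate how often a sample of a given size exceeds 60% "heads".
def frac_runs_over_60_percent(sample_size, runs=10_000):
    over = 0
    for _ in range(runs):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size > 0.60:
            over += 1
    return over / runs

print(frac_runs_over_60_percent(15))   # small sample: happens fairly often
print(frac_runs_over_60_percent(150))  # large sample: happens rarely
```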

    8. Representative Bias (3): Misconception of Chance People expect random sequences to be “representatively random” even locally. E.g., they consider a coin-toss run of HTHTTH to be more likely than HHHTTT or HHHHTH. The Gambler’s Fallacy: after a run of reds in roulette, black is felt to make the overall run more representative (chance as a self-correcting process??). Even experienced research psychologists believe in a “law of small numbers” (small samples are representative of the population they are drawn from).
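
A one-line check of the underlying arithmetic: every specific sequence of six fair tosses has exactly the same probability, 1/64.

```python
# Any specific 6-toss sequence of a fair coin is equally likely.
p = (1 / 2) ** 6
for seq in ("HTHTTH", "HHHTTT", "HHHHTH"):
    print(seq, p)  # 0.015625 for each, i.e., 1/64
```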

    9. Representative Bias (4): Insensitivity to Predictability People predict future performance mainly by similarity of the description to future results. For example, predicting future performance as a teacher based on a single practice lesson: evaluation percentiles (of the quality of the lesson) were identical to predicted percentiles of 5-year future standings as teachers.

    10. Representative Bias (5): The Illusion of Validity A good match between input information and output classification or outcome often leads to unwarranted confidence in the prediction. Example: use of clinical interviews for selection. Internal consistency of the input pattern increases confidence: a series of B’s seems more predictive of a final grade-point average than a set of A’s and C’s. Redundant, correlated data increases confidence.

    11. Representative Bias (6): Misconceptions of Regression People tend to ignore the phenomenon of regression towards the mean. E.g., the correlation between parents’ and children’s heights or IQ, or performance on successive tests. People expect predicted outcomes to be as representative of the input as possible. Failure to understand regression may lead to overestimating the effects of punishment and underestimating the effects of reward on future performance (since a good performance is likely to be followed by a worse one, and vice versa).
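
A hypothetical simulation sketch of regression toward the mean, assuming each test score is a stable ability plus independent noise; the people picked as top performers on the first test score markedly closer to the mean on the second, with no reward or punishment involved.

```python
import random

random.seed(0)
ability = [random.gauss(0, 1) for _ in range(10_000)]     # stable ability
test1 = [a + random.gauss(0, 1) for a in ability]          # ability + luck
test2 = [a + random.gauss(0, 1) for a in ability]          # new, independent luck

# Take the top 5% on test 1 and compare their averages on both tests.
top = sorted(range(len(ability)), key=lambda i: test1[i], reverse=True)[:500]
print(sum(test1[i] for i in top) / 500)  # high average on test 1
print(sum(test2[i] for i in top) / 500)  # markedly lower on test 2: regression
```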

    12. The Availability Heuristic The frequency of a class or event is often assessed by the ease with which instances of it can be brought to mind. The problem is that this mental availability might be affected by factors other than the frequency of the class.

    13. Availability Biases (1): Ease of Retrievability Classes whose instances are more easily retrievable will seem larger. For example, judging whether a list of names had more men or women depends on the relative frequency of famous names in it. Salience affects retrievability: e.g., watching a car accident increases the subjective assessment of the frequency of traffic accidents.

    14. Availability Biases (2): Effectiveness of a Search Set We often form mental “search sets” to estimate how frequent the members of some class are; the effectiveness of the search might not relate directly to the class frequency. Which are more prevalent: words that start with r, or words where r is the 3rd letter? Are abstract words such as love more frequent than concrete words such as door?
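
A rough way to check the letter-r question on a machine, assuming a Unix word list at /usr/share/dict/words; note that Tversky and Kahneman's claim concerns word frequency in ordinary text, so counting dictionary entries only gives a feel for why the two search sets differ in difficulty.

```python
# Assumes a standard Unix word list is available at this path.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

starts_with_r = sum(w.startswith("r") for w in words)
r_third = sum(w[2] == "r" for w in words)
print(starts_with_r, r_third)  # dictionary-entry counts, not text frequencies
```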

    15. Availability Biases (3): Ease of Imaginability Instances often need to be constructed on the fly using some rule; the difficulty of imagining instances is used as an estimate of their frequency. E.g., the number of combinations of 8 out of 10 people versus 2 out of 10 people. Imaginability might cause overestimation of the likelihood of vivid scenarios, and underestimation of the likelihood of difficult-to-imagine ones.
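
The combinations example is easy to verify: choosing 8 people out of 10 is the same as choosing the 2 who are left out, so both counts are 45, even though groups of 2 are much easier to imagine.

```python
import math

# Committees of 8 and committees of 2 from 10 people are equally numerous.
print(math.comb(10, 8), math.comb(10, 2))  # 45 45
```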

    16. Availability Biases (4): Illusory Correlation People tended to overestimate the co-occurrence of diagnoses such as paranoia or suspiciousness with features, such as peculiar eyes, in drawings of persons made by hypothetical mental patients. Subjects might overestimate the correlation because suspicion is more easily associated with the eyes than with other body parts.

    17. The Anchoring and Adjustment Heuristic People often estimate by adjusting an initial value until a final value is reached. Initial values might be due to the problem presentation or due to partial computations. Adjustments are typically insufficient and are biased towards the initial values, the anchor.

    18. Anchoring and Adjustment Biases (1): Insufficient Adjustment Anchoring occurs even when initial estimates (e.g., the percentage of African nations in the UN) were explicitly generated at random by spinning a wheel! Anchoring may occur due to incomplete calculation, as when two groups of high-school students estimated the expression 8x7x6x5x4x3x2x1 (median answer: 2,250) versus the expression 1x2x3x4x5x6x7x8 (median answer: 512). Anchoring occurs even with outrageously extreme anchors (Quattrone et al., 1984). Anchoring occurs even when experts (real-estate agents) estimate real-estate prices (Northcraft and Neale, 1987).
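
The product itself is the same either way, 8! = 40,320; only the anchor set by the first few factors differs, and both median estimates fall far short of the true value.

```python
import math

# Both orderings compute the same product; the first factors act as the anchor.
descending = 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1
ascending = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8
print(descending, ascending, math.factorial(8))  # 40320 40320 40320
```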

    19. Anchoring and Adjustment Biases (2): Evaluation of Conjunctive and Disjunctive Events People tend to overestimate the probability of conjunctive events (e.g., success of a plan that requires success of multiple steps). People underestimate the probability of disjunctive events (e.g., the Birthday Paradox). In both cases there is insufficient adjustment from the probability of an individual event.
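
A short numeric sketch of both effects; the 10-step plan with a 0.9 per-step success probability is an assumed illustration, while the birthday figure follows from the standard calculation.

```python
# Conjunctive: a 10-step plan, each step succeeding with probability 0.9
# (assumed numbers), succeeds far less often than the per-step anchor suggests.
p_plan = 0.9 ** 10
print(round(p_plan, 2))  # ~0.35

# Disjunctive (Birthday Paradox): among 23 people, the chance that at least
# two share a birthday already exceeds one half.
p_all_distinct = 1.0
for k in range(23):
    p_all_distinct *= (365 - k) / 365
print(round(1 - p_all_distinct, 2))  # ~0.51
```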

    20. Anchoring and Adjustment Biases (3): Assessing Subjective Probability Distributions Estimating the 1st and 99th percentiles often leads to too-narrow confidence intervals. Estimates often start from the median (50th percentile) value, and the adjustment is insufficient. The degree of calibration depends on the elicitation procedure: stating values given a percentile leads to extreme estimates; stating a percentile given a value leads to conservativeness.

    21. A Special Type of Bias: Framing Risky prospects can be framed in different ways, as gains or as losses. Changing the description of a prospect should not change decisions, but it does. Prospect Theory (Kahneman and Tversky, 1979) predicts such anomalies, because the negative effect of a loss is larger than the positive effect of an equivalent gain.

    22. Framing Experiment (I) Imagine the US is preparing for the outbreak of an Asian disease, expected to kill 600 people (N = 152 subjects): If program A is adopted, 200 people will be saved (72% preference). If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved (28% preference).

    23. Framing Experiment (II) Imagine the US is preparing for the outbreak of an Asian disease, expected to kill 600 people (N = 155 subjects): If program C is adopted, 400 people will die (22% preference). If program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die (78% preference).
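
In expected-value terms the four programs are identical, which is what makes the preference reversal a pure framing effect; a quick check:

```python
# All four programs have the same expected number of deaths out of 600;
# only the description (saved vs. die, sure thing vs. gamble) changes.
expected_deaths = {
    "A": 600 - 200,                    # 200 saved for sure
    "B": 600 - (1/3 * 600 + 2/3 * 0),  # gamble framed as lives saved
    "C": 400,                          # 400 die for sure
    "D": 1/3 * 0 + 2/3 * 600,          # gamble framed as deaths
}
print(expected_deaths)  # every program: 400 expected deaths
```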

    24. Summary: Heuristics and Biases There are several common heuristics people employ to estimate probabilities: representativeness of a class by an object; availability of instances as a frequency measure; adjustment from an initial anchoring value. All of these heuristics are usually quite effective, but they lead to predictable, systematic errors and biases. Understanding the biases might decrease their effect.
