The Psychology of Avoiding Disaster Readiness Disasters Robin Dillon-Merrill Catherine H. Tinsley The McDonough School of Business Georgetown University
Precursors to Catastrophes
• People are confronted by the same threats year after year:
  • Hurricanes along the Southeastern coast
  • Floods and tornadoes in the Midwest
  • Wildfires in the West
  • Mudslides & earthquakes in California
• When catastrophes occur that were preceded by near-miss events, the question becomes:
  • Were the near-miss events ignored?
Anecdotal Evidence
• Governor Haley Barbour of Mississippi described "hurricane fatigue": he feared that his constituents were not evacuating in response to the Katrina threat because they had successfully weathered earlier storms.
• A former FEMA official described an agency that was responding "business as usual" (i.e., treating Katrina like past hurricanes) [1]
• Individual statements: "I survived Camille; my house is sturdy; I am staying put"
• Organizational decision making: "This is how we have responded to hurricane warnings in the past."
[1] Quotes from The Washington Post, September 11, 2005, pp. A6-A7
Gap in Current Disaster Research
• Research has shown that the level of preparedness is significantly linked to personal experience with disasters (Lindell and Perry, 2000; Wenger, 1980; Dooley et al., 1992).
• But these experiences can lead either to greater awareness and preparedness or to greater complacency and fatalism, and the literature offers no conclusions as to why this variation exists (Tierney, Lindell, and Perry, 2001; Jackson, 1981; Mileti and O'Brien, 1992).
• It is precisely people's interpretations of the outcomes of prior disasters, and of why those outcomes unfolded as they did, that will influence their subsequent perceptions of, and preparations for, future disaster events (Lindell and Perry, 1992).
Opportunities for New Orleans to Have Learned Prior to Katrina
• Hurricane Ivan
  • 2004: category 4-5 (140-155 mph winds)
  • Predicted 25% chance of staying on a direct track to New Orleans (actual landfall in Mobile Bay, Alabama, 2 am Sept. 16)
  • By noon Sept. 15 (when the storm turned), an estimated 600,000 of 1.2 million residents had evacuated New Orleans
  • 2/3 of non-evacuees with the means to evacuate stayed because they felt safe in their homes; others were discouraged by negative experiences with past evacuations
  • 120,000 New Orleans residents did not have cars
  • The Superdome was used to shelter non-evacuees
Opportunities for New Orleans to Have Learned Prior to Katrina
• Hurricane Pam simulation, conducted July 2004
  • 8-day table-top exercise with over 250 officials participating
  • Assumed a slow-moving 120 mph (category 3) storm
  • Assumed more than 1 million evacuated
  • Recognized that the levees would be overtopped
  • Recognized the need to rely on state resources for shelters for 3-5 days
  • Recommendations focused on managing the aftermath of the catastrophe (e.g., search & rescue, debris removal) rather than on minimizing its magnitude (i.e., improving evacuation and sheltering strategies remained open issues)
  • A second exercise planned for summer 2005 did not take place for lack of funding
Precursor’s Influence • Decision Makers attend to near-misses • Near-miss information is incorporated into decision calculus • Near-misses will systematically bias decision making • Towards more risk • Near-misses can be evidence of a system’s vulnerability or of a system’s resilience • Resilience > Vulnerability • Good Fortune is Discounted
What is a Near-Miss?
• An event that has some probability of a negative (even fatal) outcome and some probability of a positive (safe) outcome, where the actual outcome is non-hazardous
• A success that could have been a failure except for good luck
What is a Near-Miss? Definitions (Lx: the load in event x; Cx: the available capacity — an event is hazardous when load exceeds capacity):
• Lx < CMIN → success
• Cx > LMAX → success
• CMIN < Lx < LMAX and Lx > Cx → hit
• CMIN < Lx < LMAX and Lx < Cx → near-miss
Examples: L1 < CMIN: success; C3 > LMAX: success; L3 > C1: hit; L2 < C2: near-miss; L2 < C1: near-miss; L3 < C2: near-miss
[Figure: number line placing L1, L2, L3 and C1, C2, C3 relative to CMIN and LMAX, with regions labeled "success" and "near-miss or hit". A sketch of the classification rule follows below.]
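A minimal sketch of this classification rule in Python, assuming the load/capacity reading above; the numeric thresholds and example values are illustrative, chosen only to satisfy the orderings on the slide, not data from the talk:

```python
# Classify an event from the load L it placed on the system and the
# capacity C the system happened to have. C_MIN and L_MAX bound the
# region where the outcome hinges on luck.
C_MIN, L_MAX = 2.0, 8.0  # illustrative thresholds, not values from the talk

def classify(load: float, capacity: float) -> str:
    if load < C_MIN or capacity > L_MAX:
        return "success"    # safe regardless of luck
    if load > capacity:
        return "hit"        # load exceeded capacity: a failure
    return "near-miss"      # safe outcome, but only because capacity barely sufficed

# Made-up values consistent with the slide's orderings
# (L1 < C_MIN, C3 > L_MAX, L3 > C1, L2 < C1 < L3 < C2):
L1, L2, L3 = 1.0, 3.0, 5.0
C1, C2, C3 = 4.0, 6.0, 9.0

print(classify(L1, C1))  # success   (L1 < C_MIN)
print(classify(L2, C3))  # success   (C3 > L_MAX)
print(classify(L3, C1))  # hit       (L3 > C1)
print(classify(L2, C2))  # near-miss (L2 < C2)
print(classify(L3, C2))  # near-miss (L3 < C2)
```

Note that the same load L3 yields a hit against capacity C1 but a near-miss against C2: the event, not the decision, differs only in luck.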
First Studies
• Simulation of a Mars Rover mission
  • Limited battery life (8 days); 5 travel days to the destination
  • Participants were rewarded an extra $5 for each battery day remaining
  • Weather forecast each day: either mild weather or a 95% chance of a severe storm
  • Severe dust storms can cause catastrophic failure: a 40% chance of catastrophic failure if the rover drives through a severe storm; 100% safe if it stops & deploys wheel guards
• Operational (stop/go) decisions for days 6 through ~13: decide each day to drive, or to stop & deploy wheel guards (see the expected-value sketch below)
• Measures: manipulation check, risk propensity, and engagement
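The stop/go choice can be framed as a simple expected-value gamble. A back-of-the-envelope sketch: the storm probabilities and the $5-per-day bonus come from the slide, but treating a catastrophic failure as forfeiting the entire remaining bonus is our assumption about the payoff structure:

```python
# Expected bonus for one stop/go decision on a day forecast to have a
# 95% chance of a severe storm. Driving through a severe storm destroys
# the rover 40% of the time; stopping to deploy wheel guards is safe but
# burns a battery day worth $5 of bonus.
# Assumption (not from the talk): a destroyed rover forfeits all remaining bonus.
P_STORM = 0.95
P_FAIL_GIVEN_STORM = 0.40
BONUS_PER_DAY = 5.0

def ev_drive(bonus_at_stake: float) -> float:
    p_fail = P_STORM * P_FAIL_GIVEN_STORM  # 0.38 chance of losing everything
    return (1.0 - p_fail) * bonus_at_stake

def ev_stop(bonus_at_stake: float) -> float:
    return bonus_at_stake - BONUS_PER_DAY  # certain, minus one day's bonus

for bonus in (5.0, 10.0, 15.0, 20.0):
    print(f"${bonus:.0f} at stake: drive EV = {ev_drive(bonus):.2f}, "
          f"stop EV = {ev_stop(bonus):.2f}")
```

Under these assumptions, stopping has the higher expected value once more than about $13 is at stake (bonus > 5/0.38), yet the study's thesis is that participants with prior near-misses become more willing to drive anyway.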
Manipulation
• Near-miss condition: of the 5 days before you started operating the rover, 3 had severe storms, and the rover had driven through them successfully
• Control condition: of the 5 days before you started operating the rover, all had mild weather
Results: Experiment 2
[Chart: results for participants who did NOT use the probability information]
Second Studies
• Given that these biases exist, how do they influence how managers are evaluated within an organization?
• Failures and successes are attributed to the quality of decision making
• Is there another variable? Consider luck:
  • Loma Prieta, 1989 (struck during Friday afternoon rush hour)
  • Northridge, 1994 (struck at 4:30 am on a holiday)
• If all outcomes are a function of decision quality and luck, how do we evaluate others' decision processes?
Biases in Decision Making
• Outcome bias (Baron & Hershey, 1988; Allison et al., 1996): the outcome systematically influences people's evaluations of the quality of the decision making
• Hindsight bias (Fischhoff, 1982):
  • Anchor on outcomes
  • Exaggerate what could have been anticipated at the time of the decision
  • Misremember one's own predictions to be consistent with now-known outcomes
• Both suggest we will anchor on outcomes
Hypothesis 1
• H1a: Managers whose decisions result in a miss (organizational success) will have their decision making evaluated in a significantly more favorable light than managers whose decisions result in a hit (organizational failure)
• H1b: Managers whose decisions result in a miss (organizational success) will be judged to be more competent, more intelligent, to have more leadership ability, and to be more promotable than managers whose decisions result in a hit (organizational failure)
What happens with near-misses?
• Recall that a near-miss is both:
  • Evidence of a system's resilience
  • Evidence of a system's vulnerability
• And what if we know the outcome was derived, in part, from good luck?
• Prospect theory: reference points
• Norm theory:
  • Immutable features give you the class of events used to categorize something
  • Mutable features (easily imagined as different) give you contrast events
  • What is the easily imagined, mutable feature of a near-miss? Failure
  • Thus a near-miss is coded as a miss and contrasted with failure
• Suggests near-misses are more likely to be coded as successes than as failures
• Suggests we will discount others' good luck
Hypothesis 2
• H2a: Managers whose decisions result in a near-miss will have their decision making evaluated more favorably than managers whose decisions result in a hit, and less favorably than managers whose decisions result in a miss.
• H2b: Managers whose decisions result in a near-miss will be judged to be more competent, more intelligent, to have more leadership ability, and to be more promotable than managers whose decisions result in a hit, and less so on all four measures than managers whose decisions result in a miss.
Hypothesis 3 • H3: Managers whose decisions result in a near-miss will be judged closer to those whose decisions ended in a miss than to those whose decisions ended in a hit.
Method
• Case study loosely based on development details from past unmanned NASA missions
• Development problems:
  • Challenges interacting across NASA development centers
  • A skipped peer review
  • Mission not delayed over a last-minute, potentially fatal problem (considered highly unlikely)
• Three different outcomes:
  • Success: launch and deployment are successful (no problem shortly after launch)
  • Failure: a problem arises shortly after launch; because of the spacecraft's orientation to the sun, the problem is catastrophic
  • Near-miss: a problem arises shortly after launch; because of the spacecraft's orientation to the sun, it causes no harm, and data collection is successful
Participants • 89 undergraduate students • 98 MBA students • 24 NASA managers
Sample Differences
• In general, NASA managers tended to be a bit easier on Chris (the manager in the case):
  • Rated the decision to launch higher (p < .05): NASA mean = 3.7, MBA mean = 3.4, UG mean = 3.0
  • Were marginally more likely to promote Chris (p = .1): NASA mean = 3.8, MBA mean = 3.3, UG mean = 3.3
  • Were significantly less likely to fire Chris (p < .001): NASA mean = 3.0, MBA mean = 4.2, UG mean = 4.3
• No significant interaction effects between sample and condition
ALL PARTICIPANTS
[Figure: eight panels comparing the Success, Near-miss, and Failure conditions. Trait ratings on a scale from "not at all" to "greatly": competence, intelligence, decision-making ability, leadership ability. Decision evaluations on a scale from "very bad" through "neutral" to "very good": decision to proceed without peer review, decision to launch without redesign, decision to promote, decision to fire. Differences across conditions are significant (p values ranging from < .05 to < .001) on all measures except the decision to fire (p = .11).]
Summary
• Participants rated managers whose decisions resulted in organizational success significantly more favorably than managers whose decisions resulted in failures
• They rated managers whose decisions, BUT FOR LUCK, would have resulted in failures more favorably than those whose decisions resulted in failure
• They did not hold managers accountable for faulty decision making if it produced a good organizational outcome, EVEN WHEN THAT SUCCESS WAS DUE TO LUCK
Implications for organizations
• Near-misses are categorized as misses rather than hits, meaning organizations fail to take advantage of learning opportunities
  • Near-misses generally lack the formal failure investigation board that follows a hit
• The near-miss bias may make organizations more risk-seeking
  • It may explain the normalization of deviance (Vaughan, 1996): without obvious failures, events that once caused concern become accepted as normal occurrences
• If those experiencing near-misses are promoted through organizational ranks, then, given that they make riskier subsequent decisions, organizations will come to embrace more and more risk
What to do about all this?
• Knowledge and recognition that biases exist:
  • Hindsight, outcome, and near-miss biases
  • Decisions do have a luck component
• Developing an effective Lessons Learned system:
  • The effectiveness of a Lessons Learned system depends on the completeness of its data
  • A complete data set requires noticing both failures and successes and being able to distinguish near-misses (see the sketch below)
• How can you increase the chances of acknowledging both successes and failures?
  • Improve group decision making: guard against groupthink, escalation, and the Abilene paradox
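One way to make a lessons-learned data set "complete" in this sense is to force every logged event to carry an explicit outcome tag, so a lucky near-miss cannot be silently filed as a plain success. A hypothetical sketch; the schema, field names, and example record are our illustration, not a system described in the talk:

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    SUCCESS = "success"      # safe, and never actually in danger
    NEAR_MISS = "near-miss"  # safe, but only because of good luck
    HIT = "hit"              # a failure

@dataclass
class LessonRecord:
    event: str
    outcome: Outcome      # required: the logger must commit to a category
    luck_involved: bool   # recorded up front, so luck can't be discounted later
    lessons: list[str] = field(default_factory=list)

log = [
    LessonRecord("Hurricane Ivan evacuation, 2004", Outcome.NEAR_MISS,
                 luck_involved=True,
                 lessons=["120,000 residents lacked cars",
                          "Superdome pressed into service as a shelter"]),
]

# A review board that audits only hits would skip this record entirely;
# filtering on near-misses surfaces the same vulnerabilities before a hit occurs.
for record in (r for r in log if r.outcome is Outcome.NEAR_MISS):
    print(record.event, record.lessons)
```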
Avoiding Groupthink
• Monitor team size (< 10)
• Provide face-saving mechanisms for dissent and for changing one's mind
• Don't be a bystander out of fear of appearing foolish (evaluation apprehension)
• Discuss risks before benefits
• Discuss how things might have failed
• Encourage & track alternative viewpoints
• Get external observers
Avoiding Escalation • All advice for avoiding groupthink, plus: • Set resource limits up front • Recognize sunk costs
Avoiding Abilene Paradox • All advice for avoiding groupthink, plus: • Generate solution alternatives without evaluation (brainstorming) • Conduct a private vote (Delphi) • Create norms for expression of controversial views (rotating devil’s advocate)
Future Work • Determine what factors may help mitigate the near-miss bias • Determine how the accumulated near-miss bias may inhibit organizational learning