



Peter P. Wakker (& Bleichrodt & Pinto & Abdellaoui); May 4, 2004



Presentation Transcript


1. (Speaker notes: Don't forget to make this invisible. Say that at each step I lose more of the audience. Stage 1: those working normatively who do not consider EU to be normative. Stage 2: those who do not want to be paternalistic; here I lose all psychologists. Stage 3: those who prefer non-EU theories as the normative standard. The point is: there is no easy way to do applied work; you always get dirty hands. It is easy to criticize everything stated here, but not easy to give alternatives. Explain a lot in words about medical decision making and EU.)

Using Descriptive Decision Theories such as Prospect Theory to Improve Prescriptive Decision Theories such as Expected Utility; the Dilemma of Omission versus Paternalism

Peter P. Wakker (& Bleichrodt & Pinto & Abdellaoui); May 4, 2004

• A typical example of an application of decision theory in the health domain today, based on expected utility.
• Inconsistencies. Should we correct for them at all? Ethical complications; paternalism. Our model will deliberately deviate from observations, a deadly sin in experimental work such as psychology.
• Corrections for violations of expected utility, based on prospect theory.

  2. U p Up 2 Before going to hypothetical question, so just after the square appeared around the decision tree, talk some about the tree, pros and cons, essentialness of asking for subjective input of patient where piano player doesn't mind losing voice but teacher does, etc. Also tell already here that analysis is going to be based on expected utilty. cure radio- therapy artificial speech recurrency, surgery  + cure surgery artificial speech recurrency  + nor-mal voice  EU Possibly discuss already here that much can be criticized, such as EU etc. But that this is a machinery that works at least, and that brought many “political” steps forward in the health domain, such as consideration of qualitity of life (iso five-year survival rate), and the very fact that patients and their subjective situation can be involved. That for this technique there are computer programs available to implement it, and C/E analyses can be performed with it. normal voice 99% of applications in the field go like this. I in fact bother more about problems in the model than most applied people. Most applied people say: Peter just don’t bother. You will all be criticizing me for not bothering enough.fs .60 1 .60 0.6 0.4 .744 .144 .9 .16 0.4 0 0 .24 0.6 .744 artificial speech .63 .9 .70 0.7 0.3 .711 .081 .9 .09 0.3 0 0 .21 0.7 .711 Hypothetical standard gamble question: For which p equivalence? Patient with larynx-cancer (stage T3). Radio-therapy or surgery? Patient answers: p = 0.9. Expected utility: U() = 0; U(normal voice) = 1; U(artificial speech) = 0.9 1 + 0.1 0 = 0.9. p artifi-cial speech or 1p Answer: r.th!

3. Standard gamble question to measure utility:
artificial speech ~ (p: perfect health; 1-p: death). For which p is the patient indifferent?
EU: U(artificial speech) = p × 1 + (1-p) × 0 = p.
The analysis is based on EU!?!? This is the "classical elicitation assumption."
I agree that EU is normative. But …

4. Now comes a hypothetical example to illustrate inconsistencies and the difficulty of making decisions. It is not important in the example whether or not you consider expected utility to be normative. All critical aspects concern more basic points.

5. (Speaker notes: The current version of this ethical example will be kept separately, not in this file. Make yellow comments invisible: ALT-View-O.)

Treatment decision to be taken for your patient. Your patient is now unconscious. You must decide: treat or not treat.
- Not treat: the patient stays in the (impaired) health state.
- Treat: the treatment succeeds with probability 0.90 (restoring full health) and fails with probability 0.10 (death).
The decision depends on the goodness of the health state relative to the treatment probability.

6. (Speaker note: Give Handout 1, printed at the end of this file.)

Background information on similar patients: before, 10,000 similar cases were observed. For them, a "quality-of-life" probability (qol-probability) was elicited, as follows.

7. Elicitation of the qol-probability. The following were presented to each patient:
- a rich set of health states, containing the one above;
- a rich set of probabilities (all multiples of 0.01).
For each health state, each patient was asked: for which probability p are the following equivalent?
health state ~ (p: full health; 1-p: death)
The answer p is the qol-probability. It is an index of the quality of the health state: high p means high quality.

8. Average, median, and mode of the qol-probability: 0.91, i.e.,
health state ~ (0.91: full health; 0.09: death).
Question 1 to the audience: Would you now treat or not treat your patient?
(Hint: compare the treatment probability = 0.90 to the qol-probability = 0.91.)
(Speaker note: show the hint immediately or not, depending on the audience.)
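
For readers of the transcript, a minimal sketch of the comparison behind the hint, assuming the classical elicitation assumption (the utility of the impaired health state equals its qol-probability) and that treatment yields full health with probability 0.90 and death otherwise, as reconstructed on slide 5.

```python
# EU comparison behind the hint; the outcome labels are a reconstruction.
u_not_treat = 0.91                 # qol-probability of the impaired health state
u_treat = 0.90 * 1.0 + 0.10 * 0.0  # treat: full health (U = 1) vs. death (U = 0)
print(u_treat, u_not_treat)        # 0.9 vs 0.91: EU narrowly favours not treating
```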

9. (Speaker note: Give Handout 2.)

Now suppose something more. There is also a new elicitation of the qol-probability. The following were also presented to each patient:
- a rich set of health states, containing the one above;
- a rich set of probabilities (all multiples of 0.01).
For each probability p, each patient was asked: for which health state are the following equivalent?
health state ~ (p: full health; 1-p: death)
Such measurements are done for all p. In each case, p is called the new qol-probability of the corresponding health state.

10. For the health state of your patient, you would expect a new qol-probability of 0.91 on average. However, the data reveal great inconsistencies: the new qol-probability comes out as p = 0.85, as average, median, and mode, i.e.,
health state ~ (0.85: full health; 0.15: death).
(Speaker note: Repeat that the matching was done here on the health state, i.e., for p = 0.85 given, the matching health state was the one now relevant.)
Question 2 to the audience: What would you do, treat or not treat, for the one patient now considered?

11. (Speaker note: Give Handout 3.)

Now suppose something more. For your one patient, you also observed the (old) qol-probability ("for which probability … are they equivalent?"). It was 0.91, as for most others:
health state ~ (0.91: full health; 0.09: death).
There was no more time for a new qol-probability measurement: unfortunately, the patient became unconscious!
Question 3 to the audience: What would you do now, treat or not treat, for the one patient now considered?

12. My opinion: treat the patient. This goes against the elicited preference. However, the elicitation is biased (see the 10,000 prior cases). Moral of the story: we have to accept the possibility of systematic biases in preference measurement, and should try to deal with them as well as possible.

13. This completes the hypothetical example, started on slide 5, about how to treat your patient. We return to the discussion of the classical elicitation assumption. As we saw before, the common justification of the classical elicitation assumption is that EU is normative (von Neumann-Morgenstern). I agree that EU is normative, but not that this justifies SG analysis (SG = standard gamble = "qol-probability measurement"). SG measurement (as commonly done) is descriptive, and EU is not descriptive. There are inconsistencies, that is, violations. They require correction (paternalism!?).

14. Replies in the literature to the normative/descriptive discrepancies:
(1) Consumer sovereignty ("Humean view of preference"): never deviate from people's preferences. So, no EU analysis here! However, Raiffa (1961), in reply to violations of EU: "We do not have to teach people what comes naturally." We will, therefore, try more.
(2) Interact with the client (constructive view of preference). If possible, this is best. Usually it is not feasible (budget, time, capable interviewers …).
(3) Measure only riskless utility. However, we want to measure risk attitude!
(4) Accept biases and try to make the best of it.

15. That corrections are desirable has been said many times before.
Tversky & Koehler (1994, Psychological Review): "The question of how to improve their quality through the design of effective elicitation methods and corrective procedures poses a major challenge to theorists and practitioners alike."
E. Weber (1994, Psychological Bulletin): "…, and finally help to provide more accurate and consistent estimates of subjective probabilities and utilities in situations where all parties agree on the appropriateness of the expected-utility framework as the normative model of choice."
See also the debiasing literature (Arkes 1991, Psychological Bulletin, etc.).

16. Schkade (Leeds, SPUDM '97), on the constructive interpretation of preference: "Do more with fewer subjects."
Viscusi (1995, Geneva Insurance): "These results suggest that examination of theoretical characteristics of biases in decisions resulting from irrational choices of various kinds should not be restricted to the theoretical explorations alone. We need to obtain a better sense of the magnitudes of the biases that result from flaws in decision making and to identify which biases appear to have the greatest effect in distorting individual decisions. Assessing the incidence of the market failures resulting from irrational choices under uncertainty will also identify the locus of the market failure and assist in targeting government interventions intended to alleviate these inadequacies."

17. Million-dollar question: correct how? Which parts of behavior are taken as "bias," to be corrected for, and which are not? Which theory describes risky choices better? The current state of the art, according to me: prospect theory = rank- and sign-dependent utility (Luce & Fishburn 1991, Tversky & Kahneman 1992).
(Speaker note: Depending on whether the audience is tired of general discussions or not, state the following point.)
Several authors have suggested such a role for prospect theory, but always in the context of reconciling inconsistencies. We go one step further: even if your data are too poor to elicit the inconsistencies that may be present, still correct for the inconsistencies that you know from other observations, such as those collected in prospect theory. As in the ethical example.

18. First deviation from expected utility: probability transformation.
[Figure: the common probability weighting function w+, mapping probabilities p in [0, 1] to weights in [0, 1] (Luce 2000).]
The weighting function for losses, w-, is similar.
Second deviation from expected utility: loss aversion/sign dependence. People consider outcomes as gains and losses with respect to their status quo. They then overweight losses by a factor λ = 2.25.
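
As an illustration of the inverse-S shape in the figure, here is a sketch of one common parametric weighting function. The functional form and the exponent 0.61 are the Tversky & Kahneman (1992) estimates; they are an assumption here, used only to show the shape, not taken from the slide.

```python
# Sketch of an inverse-S probability weighting function (TK-1992 parametric form).
# Small probabilities are overweighted, large probabilities underweighted.

def w_plus(p, gamma=0.61):
    """w+(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"w+({p:.2f}) = {w_plus(p):.3f}")
# e.g. w+(0.01) = 0.055 (overweighted), w+(0.50) = 0.421, w+(0.99) = 0.911 (underweighted)
```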

19. EU: the standard gamble gives U(x) = p. We say: this is wrong!! We have to correct for the above "mistakes."
PT: U(x) = w+(p) / (w+(p) + λ·w-(1-p)).
Quantitative corrections proposed by Bleichrodt, Han, José Luis Pinto, & Peter P. Wakker (2001), "Making Descriptive Use of Prospect Theory to Improve the Prescriptive Use of Expected Utility," Management Science 47, 1498–1514.

20. (Speaker note: Skip this table.)
Standard gamble utilities, corrected through prospect theory, for p = .00, …, .99. Rows give the first decimal of p, columns the second decimal; e.g., if p = .15 then U = 0.123.

 p    .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
.0  0.000  0.025  0.038  0.048  0.057  0.064  0.072  0.078  0.085  0.091
.1  0.097  0.102  0.108  0.113  0.118  0.123  0.128  0.133  0.138  0.143
.2  0.148  0.152  0.157  0.162  0.166  0.171  0.176  0.180  0.185  0.189
.3  0.194  0.199  0.203  0.208  0.213  0.217  0.222  0.227  0.231  0.236
.4  0.241  0.246  0.251  0.256  0.261  0.266  0.271  0.276  0.281  0.286
.5  0.292  0.297  0.303  0.308  0.314  0.320  0.325  0.331  0.337  0.343
.6  0.350  0.356  0.363  0.369  0.376  0.383  0.390  0.397  0.405  0.412
.7  0.420  0.428  0.436  0.445  0.454  0.463  0.472  0.481  0.491  0.502
.8  0.512  0.523  0.535  0.547  0.560  0.573  0.587  0.601  0.617  0.633
.9  0.650  0.669  0.689  0.710  0.734  0.760  0.789  0.822  0.861  0.911
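
A sketch of how such a correction table can be computed from the formula on slide 19, assuming the Tversky & Kahneman (1992) parametric weighting functions (exponent 0.61 for gains, 0.69 for losses) and loss aversion λ = 2.25. These parameter values are an assumption; with them the sketch reproduces the tabulated utilities, e.g. 0.123 at p = .15, 0.292 at p = .50, and 0.650 at p = .90.

```python
# Sketch of the PT correction of standard-gamble utilities, using the
# TK-1992 parametric weighting functions as an assumed specification.

def w(p, c):
    """Inverse-S weighting function w(p) = p^c / (p^c + (1-p)^c)^(1/c)."""
    return p**c / (p**c + (1 - p)**c) ** (1 / c)

def corrected_sg_utility(p, gamma=0.61, delta=0.69, lam=2.25):
    """PT-corrected utility for a standard-gamble answer p.

    With the health state itself as reference point, the gain (perfect health)
    is weighted by w+(p) and the loss (death) by w-(1-p) times the loss-aversion
    factor lam; solving w+(p)*(1-U) = lam*w-(1-p)*U for U gives the value below.
    """
    wp = w(p, gamma)      # decision weight of the gain branch
    wm = w(1 - p, delta)  # decision weight of the loss branch
    return wp / (wp + lam * wm)

for p in (0.15, 0.50, 0.90):
    print(f"p = {p:.2f}: EU utility {p:.3f}, PT-corrected {corrected_sg_utility(p):.3f}")
```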

21. [Figure: Corrected standard gamble utility curve, plotting the corrected utility U (0 to 1) against the standard gamble answer p (0 to 1).]

22. [Figure: Differences between utility measurements at five stimuli (1st, …, 5th), under the corrected (prospect theory) and the classical (EU) analyses: USG − UCE (at 1st = CE(.10), …, at 5th = CE(.90)), USG − UTO (at 1st = x1, …, at 5th = x5), and UCE − UTO (at 1st = x1, …, at 5th = x5); asterisks indicate significance levels.]

23. Abdellaoui, Barrios, & Wakker (2004).
[Figure: Utility functions (for mean values) obtained with the SG(PT), SP, CE1/3, SG(EU), and TO methods, over outcomes from t0 = FF 5,000 to t6 = FF 26,068; utility axis marked at 0, 1/6, 2/6, …, 1, 7/6.]

24. This completes the lecture. Hereafter follow the handouts printed and given to the audience at slides 6, 9, and 11.

25. Handout 1. Treatment decision for your patient.
Not treat: the (impaired) health state. Treat: treatment probability 0.90 of success, 0.10 of failure (death).
Mean etc. from 10,000 similar patients: qol-probability 0.91, i.e., health state ~ (0.91: full health; 0.09: death).
Question 1 to the audience: Would you treat or not treat your patient?

26. Handout 2. Treatment decision for your patient.
Not treat: the (impaired) health state. Treat: treatment probability 0.90 of success, 0.10 of failure (death).
Mean etc. from 10,000 similar patients: qol-probability 0.91, i.e., health state ~ (0.91: full health; 0.09: death).
Mean etc. from 10,000 similar patients: new qol-probability 0.85, i.e., health state ~ (0.85: full health; 0.15: death).
Question 2 to the audience: Would you treat or not treat your patient?

27. Handout 3. Treatment decision for your patient.
Not treat: the (impaired) health state. Treat: treatment probability 0.90 of success, 0.10 of failure (death).
Mean etc. from 10,000 similar patients: qol-probability 0.91, i.e., health state ~ (0.91: full health; 0.09: death).
Mean etc. from 10,000 similar patients: new qol-probability 0.85, i.e., health state ~ (0.85: full health; 0.15: death).
Your own patient: qol-probability 0.91, i.e., health state ~ (0.91: full health; 0.09: death). No new qol-probability measurement could be done with your patient.
Question 3 to the audience: Would you treat or not treat your patient?
