
Omission or Paternalism

Peter P. Wakker (& Bleichrodt & Pinto & Abdellaoui); Seminar at University of Chicago, School of Business, June 23, 2004


Presentation Transcript


1. Omission or Paternalism. Peter P. Wakker (& Bleichrodt & Pinto & Abdellaoui); Seminar at University of Chicago, School of Business, June 23, 2004.
Outline:
• A hypothetical example with inconsistencies in decisions. Should we correct for them at all? Ethical complications; paternalism.
• My proposal in the example will deliberately deviate from the observations (a mortal sin in experimental work such as psychology).
• A typical example of an application of decision theory in the health domain today, based on expected utility.
• Corrections for violations of expected utility, based on prospect theory.
[Speaker notes: Don't forget to make this invisible. Say that at each step I lose more of the audience: (a) the EU stage: those working normatively who do not consider EU to be normative; (b) the paternalism stage: those who do not want to be paternalistic (here I lose all psychologists); (c) the prospect-theory stage: those who prefer nonEU theories other than prospect theory as normative. The point is that there is no easy way to do applied work; you always get dirty hands. It is easy to criticize everything stated here, but not easy to give alternatives. Explain a lot in words about medical decision making and EU. I skipped this in Chicago, June 2004, and people mostly felt that this whole measurement procedure is silly.]

2. Now comes a hypothetical example to illustrate inconsistencies and the difficulty of making decisions. It is not important in the example whether or not you consider expected utility to be normative; all critical aspects here concern more basic points.

3. Treatment decision to be taken for your patient. Your patient is now unconscious, and you must decide: treat or not treat. Not treating leaves the patient in the (impaired) health state; treating succeeds with the treatment probability .90 (full health) and fails with probability .10 (death). The decision depends on the goodness of the health state relative to the treatment probability. [Speaker notes: Make yellow comments invisible (ALT-View-O). Give Handout 0.]

4. Background information on similar patients. Before, 10,000 similar cases were observed. For them, a "quality-of-life" probability (qol-probability) was elicited, as follows. [Speaker note: Give Handout 1.]

5. Elicitation of the qol-probability. The following were presented to each patient: a rich set of health states, containing the above one; and a rich set of probabilities (all multiples of 0.01). For each health state, each patient was asked: for which probability p is the health state for sure equivalent to the gamble giving full health with probability p and death with probability 1–p? The answer is the qol-probability, an index of the quality of the health state. High p means high quality, which favors not treating.
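To make the elicitation protocol concrete, here is a minimal sketch of such a choice-list procedure over the probability grid (my own illustration, not from the talk; the function `prefers_gamble` is a hypothetical stand-in for the patient's answers):

```python
# Sketch (my addition): a choice-list version of the elicitation, running over the grid
# of probabilities in multiples of 0.01. `prefers_gamble(health_state, p)` is a hypothetical
# stand-in for the patient's answer: does the patient prefer the gamble
# (p: full health, 1-p: death) to living in `health_state` for sure?

def elicit_qol_probability(health_state, prefers_gamble):
    """Return the smallest p on the 0.01 grid at which the patient switches to the
    gamble; this switching point is taken as the qol-probability of the health state."""
    for i in range(101):
        p = i / 100
        if prefers_gamble(health_state, p):
            return p
    return 1.0

# Example: a patient who answers like an EU maximizer assigning utility 0.91 to the state.
print(elicit_qol_probability("impaired state", lambda state, p: p >= 0.91))  # 0.91
```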

6. Average, median, and mode of the qol-probability: 0.91 (health state ~ the gamble giving full health with probability 0.91 and death with probability 0.09). Question 1 to the audience: Would you now treat or not treat your patient? (Hint: compare the qol-probability of 0.91 to the treatment probability of 0.90.) [Speaker note: Show the hint immediately or not, depending on the audience.]
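As a minimal sketch of the expected-utility arithmetic behind the hint (my own illustration, not part of the slides): under EU with U(full health) = 1 and U(death) = 0, the elicited qol-probability is the utility of the impaired health state, so the comparison reduces to 0.90 versus 0.91.

```python
# Sketch (my addition): the EU reading of Question 1.
treatment_probability = 0.90   # treat: full health with prob 0.90, death with prob 0.10
qol_probability = 0.91         # elicited utility of the impaired health state

eu_treat = treatment_probability * 1 + (1 - treatment_probability) * 0   # 0.90
eu_not_treat = qol_probability                                           # 0.91

print("treat" if eu_treat > eu_not_treat else "not treat")   # -> not treat
```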

7. Now suppose something more. There is also a new elicitation of the qol-probability. The following were again presented to each patient: a rich set of health states, containing the above one; and a rich set of probabilities (all multiples of 0.01). For each probability p, each patient was asked: for which health state is that health state for sure equivalent to the gamble giving full health with probability p and death with probability 1–p? Such measurements are done for all p. In each case, p is called the new qol-probability of the corresponding health state. [Speaker note: Give Handout 2.]

8. For the health state of your patient, you would expect a new qol-probability of 0.91 on average. However, the data reveal great inconsistencies: the new qol-probability comes out as p = 0.85, as average, median, and mode, over the 10,000 patients (health state ~ full health with probability 0.85, death with probability 0.15). [Speaker note: Repeat that the matching was done here for the health state, i.e., for p = 0.85 given, the matching health state was the one now relevant.] Question 2 to the audience: What would you do, treat or not treat, for the one patient now considered?
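To make the inconsistency concrete (my own sketch, not from the slides): taking each elicited value at face value as the EU utility of the impaired health state gives opposite prescriptions.

```python
# Sketch (my addition): the two elicitations imply conflicting EU prescriptions.
treatment_probability = 0.90

for label, u_health_state in [("old qol-probability", 0.91),
                              ("new qol-probability", 0.85)]:
    decision = "not treat" if u_health_state > treatment_probability else "treat"
    print(f"{label} = {u_health_state}: {decision}")
# old qol-probability = 0.91: not treat
# new qol-probability = 0.85: treat
```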

9. Now suppose something more. For your one patient, you also observed the (old) qol-probability ("for which probability ... equivalent?"). It was 0.91, as for most others of the 10,000. There was no more time for a new qol-probability measurement: unfortunately, the patient became unconscious. So for the 10,000 patients the data are bad (inconsistencies), but not for your one patient; there you have no inconsistency. Question 3 to the audience: What would you do now, treat or not treat, for the one patient now considered? [Speaker note: Handout 3.]

10. My opinion: treat the patient. This goes against the elicited preference. However, the elicitation is biased (see the 10,000 prior cases).

11. Now suppose something less. For your one patient, you observed the (old) qol-probability ("for which probability ... equivalent?"). It was 0.91, as for most others. No new qol-probability measurement was possible; the patient is unconscious. Now there are no data on previous cases. Instead, use your knowledge of decision theory. Question 4 to the audience: What would you do now, treat or not treat, for the one patient now considered? [Speaker notes: Handout 4. "Use knowledge of decision theory": only for specialized audiences.]

12. I would still treat, based on the literature on biases. Moral of the story: we have to accept the possibility of systematic biases in preference measurement, and we should try to deal with them as well as possible. I think: correct for them, based on knowledge of the literature.

  13. U p Up 13 Before going to hypothetical question, so just after the square appeared around the decision tree, talk some about the tree, pros and cons, essentialness of asking for subjective input of patient where piano player doesn't mind losing voice but teacher does, etc. Also tell already here that analysis is going to be based on expected utilty. cure radio- therapy artificial speech recurrency, surgery  + cure surgery artificial speech recurrency  + nor-mal voice  EU Possibly discuss already here that much can be criticized, such as EU etc. But that this is a machinery that works at least, and that brought many “political” steps forward in the health domain, such as consideration of qualitity of life (iso five-year survival rate), and the very fact that patients and their subjective situation can be involved. That for this technique there are computer programs available to implement it, and C/E analyses can be performed with it. normal voice 99% of applications in the field go like this. I in fact bother more about problems in the model than most applied people. Most applied people say: Peter just don’t bother. You will all be criticizing me for not bothering enough.fs .60 1 .60 0.6 0.4 .744 .144 .9 .16 0.4 0 0 .24 0.6 .744 artificial speech .63 .9 .70 0.7 0.3 .711 .081 .9 .09 0.3 0 0 .21 0.7 .711 Hypothetical standard gamble question: For which p equivalence? Patient with larynx-cancer (stage T3). Radio-therapy or surgery? Patient answers: p = 0.9. Expected utility: U() = 0; U(normal voice) = 1; U(artificial speech) = 0.9 1 + 0.1 0 = 0.9. p artifi-cial speech or 1p Answer: r.th!

14. Million-$ question: correct how? Which parts of behavior are taken as "bias," to be corrected for, and which are not? Which theory describes risky choices better? The current state of the art, according to me: prospect theory (Tversky & Kahneman 1992). [Speaker note: Depending on whether the audience is tired of general discussions or not, state the following point.] Several authors have suggested such a role for prospect theory, but always in the context of reconciling observed inconsistencies. We go one step further: if your data are too poor to reveal inconsistencies even when they are present, then nevertheless correct for the inconsistencies that you know from other observations, such as those collected in prospect theory. As in the ethical example.

15. First deviation from expected utility: probability transformation. [Figure: the common probability weighting function w+ for gains, plotted against p on [0, 1] (Luce 2000); the weighting function w– for losses is similar.] Second deviation from expected utility: loss aversion / sign dependence. People consider outcomes as gains and losses with respect to their status quo, and they then overweight losses by a factor λ = 2.25.
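For readers who want a concrete form of that curve: a minimal sketch, assuming the parametric weighting function of Tversky & Kahneman (1992) with their estimated gain parameter (gamma about 0.61); the slide itself only quotes the loss-aversion factor 2.25.

```python
# Sketch (my addition): the Tversky & Kahneman (1992) probability weighting function,
# one common parametric form of the inverse-S curve shown in the figure.
# gamma = 0.61 is their estimate for gains; lambda = 2.25 is the loss-aversion factor
# quoted on the slide.

def w_plus(p, gamma=0.61):
    """Inverse-S probability weighting for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

LOSS_AVERSION = 2.25  # losses weigh 2.25 times as heavily as comparable gains

print(round(w_plus(0.10), 3))  # 0.186: small probabilities are overweighted
print(round(w_plus(0.90), 3))  # 0.712: large probabilities are underweighted
```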

16. [Figure: Corrected Standard Gamble Utility Curve; utility U on the vertical axis and standard-gamble probability p on the horizontal axis, both running from 0 to 1.]
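A sketch of how such a corrected curve could be computed (my own illustration, not necessarily the exact formula behind the slide's figure): the simplest correction undoes only the probability transformation, so a standard-gamble response p is mapped to the utility w+(p); a fuller correction also brings in loss aversion, written here under the assumption that the sure health state is the reference point, so the gamble's better branch is a gain and its worse branch a loss.

```python
# Sketch (my addition): two possible prospect-theory corrections of standard-gamble
# utilities, using Tversky & Kahneman (1992) weighting (gamma = 0.61 for gains,
# delta = 0.69 for losses) and loss aversion lambda = 2.25.
# Illustrative assumptions, not a definitive implementation.

def w(p, c):
    return p**c / (p**c + (1 - p)**c) ** (1 / c)

GAMMA, DELTA, LAMBDA = 0.61, 0.69, 2.25

def corrected_weighting_only(p):
    # Correct only for probability weighting: indifference
    # U(state) = w+(p) * U(best) + (1 - w+(p)) * U(worst) gives U(state) = w+(p).
    return w(p, GAMMA)

def corrected_with_loss_aversion(p):
    # With the sure health state as reference point, indifference means
    # w+(p) * (1 - U) = LAMBDA * w-(1 - p) * U, hence
    # U = w+(p) / (w+(p) + LAMBDA * w-(1 - p)).
    wp, wm = w(p, GAMMA), w(1 - p, DELTA)
    return wp / (wp + LAMBDA * wm)

for p in (0.5, 0.9, 0.91):
    print(p, round(corrected_weighting_only(p), 3), round(corrected_with_loss_aversion(p), 3))
```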

17. If the impaired health state is good (qol-probability big): not treat! If the treatment probability is big: treat! Treatment probability = 0.90 throughout the lecture. Qol-probability: to be measured. [Speaker note: Please remember the enlarged text through the o.]

18. Treatment decision for your patient: the (impaired) health state ("not treat"), or treat with treatment probability .90 versus .10. Mean etc. from 10,000 similar patients: qol-probability 0.91 (health state ~ 0.91 full health / 0.09 death). Question 1 to the audience: Would you treat or not treat your patient?

19. Treatment decision for your patient: the (impaired) health state ("not treat"), or treat with treatment probability .90 versus .10. Mean etc. from 10,000 similar patients: qol-probability 0.91 (health state ~ 0.91 full health / 0.09 death), and new qol-probability 0.85 (health state ~ 0.85 full health / 0.15 death). Question 2 to the audience: Would you treat or not treat your patient?

20. Treatment decision for your patient: the (impaired) health state ("not treat"), or treat with treatment probability .90 versus .10. Mean etc. from 10,000 similar patients: qol-probability 0.91 (health state ~ 0.91 full health / 0.09 death), and new qol-probability 0.85 (health state ~ 0.85 full health / 0.15 death). Your own patient: qol-probability 0.91 (health state ~ 0.91 / 0.09); no new-qol measurement could be done with your patient. Question 3 to the audience: Would you treat or not treat your patient?

21. Treatment decision for your patient: the (impaired) health state ("not treat"), or treat with treatment probability .90 versus .10. Your own patient: qol-probability 0.91 (health state ~ 0.91 full health / 0.09 death); no new-qol measurement could be done with your patient. Question 4 to the audience: Would you treat or not treat your patient?
