
Where is Epidemiology going?


Presentation Transcript


  1. Where is Epidemiology going? Jan P Vandenbroucke, Bern; STROBE meeting, August 2010 (Part I, version 22 Aug)

  2. Four topics
  • The ‘surge’ of Comparative Effectiveness Research
  • New statistical techniques (or old ones that are suddenly popular)
  • New methodologic insights (confounding, selection bias, interaction, mediation, …)
  • The call for registration of observational research

  3. Surge of Comparative Effectiveness Research (1)
  • The impact of Obama on epidemiologic theory:
  • $1 billion for CER
  • New or reinvigorated agencies that want to know which health care actions are worthwhile and which are not (Emanuel, NEJM 2010)
  • Many people storm into non-randomized effectiveness comparisons; a few pause, think and realize they are “attempting the impossible”. Still, they are enthusiastic about the challenge and seek ways out.

  4. Surge (2): classic papers about ‘the impossible’
  • Miettinen, intended and unintended effects (1980): therapeutic (intended) effects require RCTs; adverse (unintended) effects can be studied with data from usual practice
  • Rubin (1978, Ann Statistics): in health care the assignment variables are too many and too subtle, unclear in their definition, and their relationship with other variables is poorly understood; a Bayesian analysis that enters assignment into the models becomes too sensitive to the prespecifications, and randomization solves the problem
  • Rubin DB. Bayesian Inference for Causal Effects: The Role of Randomization. The Annals of Statistics 1978;6(1).

  5. Surge of Comparative Effectiveness Research (3)
  • All admit that the RCT is the ideal, but it will never deliver the goods:
  • Never sufficient head-to-head comparisons
  • Never sufficiently long follow-up
  • Never sufficiently close to real life
  • Preconference courses at the 2010 Int Soc Pharmacoepidemiology and the 2010 American College of Epidemiology meetings, by a group of mostly Harvard-based epidemiologists

  6. Surge of Comparative Effectiveness Research (4)
  • Solutions are sought (Stürmer 2009):
  • Severe restriction on indication & contraindication
  • New-user cohorts, e.g. at least 1 to 3 years medication-free
  • Comparisons with active drugs for a similar indication
  • Propensity scores and/or strong instrumental variables
  • The Agency for Healthcare Research and Quality (AHRQ) & the Int Soc for Pharmacoeconomics and Outcomes Research (ISPOR) have published series of papers as guides to CER (J Clin Epidemiol 2010; Value in Health 2009)
  • Current consensus: it may be possible, but at the expense of generalizability

  7. New techniques, or older ones that suddenly become popular
  • Propensity score
  • Confounder score
  • Instrumental variable analysis

  8. Propensity Score (1)
  • Rosenbaum & Rubin 1983
  • Strong recent increase in popularity
  • Idea: model the ‘propensity’ to be exposed; for two persons with a similar propensity, the choice (assignment) is ‘ignorable’, under the assumption of perfect knowledge of the relevant variables (as in the usual thinking about confounding)
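A minimal formal restatement of this idea, in standard notation from the causal-inference literature rather than from the slides: the propensity score is the probability of exposure given the measured covariates, and if assignment is ignorable given those covariates, it is also ignorable given the scalar score.

```latex
% Propensity score: probability of exposure Z given measured covariates X
e(x) \;=\; \Pr(Z = 1 \mid X = x)

% Under 'perfect knowledge' (no unmeasured confounding), assignment is
% ignorable conditional on the score alone:
(Y_1, Y_0) \;\perp\; Z \mid e(X)
```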

  9. (C1A) Stürmer, preconference course, Int Soc Pharmaco Epi, Aug 2010

  10. Propensity Score (2)
  • Construction of the propensity score (see the sketch below):
  • Regression of the determinants of exposure
  • Every person gets a score
  • The overlapping area between the scores of the exposed and the unexposed is determined
  • The score is then either used for matching, or entered in a multivariable analysis
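A minimal sketch of the construction just described, assuming a pandas DataFrame with a binary `exposed` column and a list of covariate names (all hypothetical); the matching step shown is simple 1:1 nearest-neighbour matching on the score, one of several possible choices.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_score_analysis(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    # 1. Regression of the determinants of exposure: every person gets a score
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["exposed"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    # 2. Keep only the overlapping (common support) region of the scores
    lo = max(df.loc[df.exposed == 1, "ps"].min(), df.loc[df.exposed == 0, "ps"].min())
    hi = min(df.loc[df.exposed == 1, "ps"].max(), df.loc[df.exposed == 0, "ps"].max())
    df = df[(df["ps"] >= lo) & (df["ps"] <= hi)]

    # 3a. Matching on the score: nearest unexposed neighbour for each exposed person
    exposed, unexposed = df[df.exposed == 1], df[df.exposed == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(unexposed[["ps"]])
    _, idx = nn.kneighbors(exposed[["ps"]])
    matched = pd.concat([exposed, unexposed.iloc[idx.ravel()]])

    # 3b. ...or, instead of matching, enter df["ps"] as a covariate or
    #     stratification variable in a conventional multivariable outcome model
    return matched
```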

  11. Prop Score (3): Schneeweiss, 2009

  12. Prop Score (4): Appendix, Hackam, Lancet 2006

  13. PS (5): the long-term debate
  • Is it better than conventional adjustment for confounding?
  • Logically, only variables that determine the outcome can make a difference (shown in simulations and real-life examples; Brookhart AJE 2009)
  • Variables that are only related to exposure increase the standard error and may even introduce confounding; ideally, include variables that are somehow related to the outcome and do not use variables that only predict exposure (Brookhart AJE 2006), as in the toy simulation below
  • A good tool if the outcome is rare relative to the number of variables to stratify for
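A toy simulation sketch of the Brookhart point (entirely hypothetical data, not taken from the cited papers): a variable that predicts only exposure adds nothing to confounding control, but widens the spread of the effect estimate when it is put into the propensity score.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def one_run(include_exposure_only_var: bool) -> float:
    n = 2000
    c = rng.normal(size=n)                               # true confounder: affects exposure and outcome
    z = rng.normal(size=n)                               # predicts exposure only
    x = rng.binomial(1, 1 / (1 + np.exp(-(c + 2 * z))))  # exposure
    y = 1.0 * x + 1.0 * c + rng.normal(size=n)           # outcome; true effect of x is 1.0

    ps_vars = np.column_stack([c, z]) if include_exposure_only_var else c[:, None]
    ps = sm.Logit(x, sm.add_constant(ps_vars)).fit(disp=0).predict()
    # Outcome model adjusted for the propensity score (covariate adjustment)
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x, ps]))).fit()
    return fit.params[1]                                 # estimated effect of exposure

for flag in (False, True):
    estimates = [one_run(flag) for _ in range(200)]
    print(f"exposure-only variable in PS = {flag}: "
          f"mean = {np.mean(estimates):.2f}, SD = {np.std(estimates):.3f}")
```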

  14. PS (6): new arguments “pro”
  • In large database settings there are hundreds of variables, and the outcome is always relatively rare compared with the number of variables
  • Hundreds of variables may capture the complexity of prescribing even if the underlying reasons for prescription cannot be identified… An answer to Rubin?

  15. (C6A) Schneeweiss, preconference course, Int Soc Pharmaco Epi, Aug 2010 [figure: log(relative risk), unadjusted vs. age-sex-race-year adjusted]

  16. PS (7): what should be reported? (My digest)
  • The choice of variables: was care taken to use only variables related to the outcome?
  • The way the score was made (the model)
  • The discrimination achieved by the score: mind, too much discrimination shows that there is too much confounding by indication and a PS analysis cannot be done (look for another comparator, etc.); see the sketch below
  • The trimming of the data (restriction of the score)
  • The use of either matching or multivariable analysis
  • Additional analyses: also enter major confounders such as age, sex and institution next to the propensity score, or in the matched model: “dual robustness”
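One way to operationalize the ‘discrimination’ item is sketched below: the c-statistic of the propensity model, computed here with scikit-learn on the `exposed`/`ps` columns from the earlier sketch; the warning threshold is purely illustrative, not a rule from the slides.

```python
from sklearn.metrics import roc_auc_score

# c-statistic of the propensity model: how well the score separates
# exposed from unexposed (columns as in the earlier propensity-score sketch)
c_stat = roc_auc_score(df["exposed"], df["ps"])
print(f"c-statistic of the propensity model: {c_stat:.2f}")

# Very high discrimination means that exposed and unexposed hardly overlap,
# i.e. there is too much confounding by indication for a PS analysis;
# consider another comparator instead.
if c_stat > 0.9:   # illustrative cut-off only
    print("Warning: little overlap between exposed and unexposed")
```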

  17. Confounder score
  • Cornfield JAMA 1971
  • Miettinen 1976 (disease and exposure risk scores)
  • Equivalent of the propensity score, but this time a score made from confounders (a sketch follows below)
  • [Fine point: if the PS is built only from variables that are also related to the outcome, are the two identical?]
  • Mentioned for completeness; sometimes both are used in a sensitivity analysis
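For completeness, a sketch of the confounder (disease risk) score under the same hypothetical data layout as the earlier propensity-score example, with an added binary `outcome` column; the outcome is modelled on the confounders among the unexposed (one common choice), and the fitted baseline risk is then used as a single summary variable.

```python
from sklearn.linear_model import LogisticRegression

def disease_risk_score(df, confounders):
    # Model the outcome on the confounders among the unexposed only,
    # so the score reflects baseline risk rather than treatment effects
    unexposed = df[df["exposed"] == 0]
    model = LogisticRegression(max_iter=1000)
    model.fit(unexposed[confounders], unexposed["outcome"])
    # Every person gets a predicted baseline risk, used for matching,
    # stratification or adjustment just like a propensity score
    return model.predict_proba(df[confounders])[:, 1]
```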

  18. Instrumental variable analysis (1)
  • Idea: a variable that
  • determines exposure,
  • is unrelated to patient characteristics, and
  • is unrelated to the (perceived) risk of the outcome
  • E.g. postal code in cardiovascular resuscitation
  • Long history in econometrics!
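The three bullets can be restated in the usual notation of the IV literature (my paraphrase, with Z the instrument, X the exposure, Y the outcome, and U the unmeasured patient characteristics):

```latex
% 1. Relevance: the instrument determines exposure
\operatorname{Cov}(Z, X) \neq 0

% 2. Independence: the instrument is unrelated to (unmeasured) patient
%    characteristics
Z \perp U

% 3. Exclusion restriction: the instrument affects the outcome only through
%    the exposure (no direct effect on the risk of the outcome)
Y(x, z) = Y(x) \quad \text{for all } z
```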

  19. Instrumental variable analysis (2)
  • To understand: in essence, any randomization is an instrumental variable:
  • a ‘flip of a coin’ satisfies all three conditions
  • Mind: the ‘flip of a coin’ gives no guarantee that the patient actually receives the treatment!
  • Analysis, e.g. by postal code:
  • as such: one area vs. the other = intention to treat
  • or an IV analysis, the ‘rule of three’; example: in one postal-code area 30% receive the new intervention, in the other 70%; what would happen if all received the new treatment? (= regression of the outcome percentage on the treatment percentage; see the worked form below)
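The ‘rule of three’ in the postal-code example can be written as a single ratio (a Wald-type IV estimator); the 30% and 70% uptake figures are from the slide, and the outcome risks are left as symbols rather than invented numbers.

```latex
% Estimated effect per treated patient: the between-area difference in
% outcome risk, scaled by the between-area difference in treatment uptake
\widehat{\Delta}_{\mathrm{IV}}
  \;=\;
  \frac{\Pr(\text{outcome} \mid \text{area B}) - \Pr(\text{outcome} \mid \text{area A})}
       {0.70 - 0.30}
```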

  20. IV (3): new use in pharmacoepidemiology
  • The previous prescription: patient ‘John Smith’ is in the study; he received Vioxx for joint pain; the outcome of interest is the association with MI (unintended) or GI bleeding (intended)
  • What was the previous NSAID prescription in the same practice? John Smith is enrolled in the study with all his own data, but with the exposure (prescription) of the previous patient. The same happens with all patients: some ‘switch’ NSAIDs, some do not
  • Rationale: the previous prescription is not guided by the perceived risk of John Smith, but gives information about the prescription preference of the physician (Brookhart, Epidemiology 2006)
  • Counterargument: it becomes a comparison by health care practice; if the practices see different types of patients, or use different other treatments, the comparison is confounded (Hernán, Robins 2006)
  • Applicable in large databases (a sketch of the estimator follows below)
  • [Fine point: analogous to the argument that there is no confounding by indication if the risk of the adverse effect is unknown]
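A minimal sketch of the corresponding analysis with a binary ‘previous prescription’ instrument (hypothetical column names); for a binary instrument, this ratio (Wald) estimator is the simplest form of an IV analysis.

```python
import pandas as pd

def wald_iv_estimate(df: pd.DataFrame) -> float:
    """Effect of exposure on outcome using a binary instrument.

    Hypothetical columns:
      instrument : 1 if the previous patient in the practice received the study drug
      exposed    : 1 if this patient actually received the study drug
      outcome    : 1 if the event of interest (MI, GI bleed, ...) occurred
    """
    means = df.groupby("instrument")[["exposed", "outcome"]].mean()
    # Reduced form (difference in outcome risk between instrument levels)
    # divided by the first stage (difference in actual treatment received)
    return ((means.loc[1, "outcome"] - means.loc[0, "outcome"]) /
            (means.loc[1, "exposed"] - means.loc[0, "exposed"]))
```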

  21. IV (4): what should be reported (from the paper by Brookhart et al 2010)
  • Justify the need for and role of the IV in the study
  • Describe the theoretical basis for the choice of the IV
  • Report the strength of the instrument and the results of the first-stage model (= intention to treat)
  • Report the distribution of patient risk factors across levels of the IV and of the exposure (an answer to Hernán and Robins)
  • Explore concomitant treatments (an answer to Hernán & Robins)
  • Evaluate the sensitivity of the IV estimate to modeling assumptions
  • Discuss issues related to the interpretation of the estimator

  22. What is best: IV or PS? (My digest)
  • Different experiences in the literature: Stukel JAMA 2007 finds IV superior, vs. Bosco & Lash, J Clin Epidemiol 2010, “A most stubborn bias”
  • An IV strongly related to exposure intuitively seems best; a weak IV may leave confounding and imprecision. However, strong IVs are rare, and if the assumptions are violated (e.g. when there is strong confounding by indication) even a strong IV may leave confounding (Martens, Epidemiology 2006)
  • Combine? All three (classic confounder adjustment, PS, & IV) presented in one paper as a mutual sensitivity analysis: Schneeweiss NEJM 2008
