Can causal inference methods using routine data ("real world evidence") replace randomised trials? A case study in breast cancer
Dr Jeremy Wyatt DM FRCP, Professor of Digital Healthcare, University of Southampton; Clinical Advisor on New Technologies, Royal College of Physicians
Image: KACPER PEMPEL
Outline
• Routine data & real world evidence (RWE): why it might be useful, and some challenges
• Causality versus association
• Some examples of RWE studies and the biases they present
• Instrumental variable methods as a potential solution to these challenges
• A case study in breast cancer
• Conclusions
Some advantages of big health data & "real world evidence"
• Datasets 100-1000 times larger than for RCTs, so patient subgroups can be examined
• Data captured from routine care, so more representative / pragmatic
• Wider variety of data items, so more questions can be answered, eg. on side effects and effect modifiers
• Uses existing data, so quicker to start up and cheaper to answer questions (but EPIC in Cambridge cost £200M plus 1-2 years of lower Care Quality Commission ratings)
Sherman et al. The FDA view on RWE. N Engl J Med 2016
Hemkens, Ioannidis et al. Routinely collected data: promises & limitations. CMAJ 2016
Diabetes prevalence in UK primary care? It depends on which database you check, and how: estimates range from 2.3% to 4.6%. [Slide from Niels Peek]
Leeds study of accuracy of Hospital Episode Statistics (HES) - work of Rosy Tsopra
Methods:
• Obtain case notes from 105 consecutive hospital discharges from 5 chest medicine wards in Leeds Hospitals Trust
• Consultant & coder work together to develop a gold standard list of coded diagnoses
Preliminary results:
• Median of 11.5 gold standard diagnoses per patient; the official HES coding process captured 7
• Correctness of official codes (positive predictive value): 88%
• Completeness of official codes (true positive rate): 43%
Implication: you will only find about half the cases you are looking for through HES
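For clarity, a toy calculation of how these two accuracy measures are defined. The counts below are hypothetical, chosen only so that the ratios reproduce the slide's percentages:

```python
# Toy calculation of the two accuracy measures; counts are hypothetical,
# chosen only so the ratios match the slide's percentages.
tp = 88    # diagnoses correctly present in the HES codes
fp = 12    # HES codes not confirmed by the gold standard
fn = 117   # gold standard diagnoses missing from HES

correctness = tp / (tp + fp)    # positive predictive value -> 0.88
completeness = tp / (tp + fn)   # true positive rate (sensitivity) -> ~0.43
print(f"correctness={correctness:.2f}, completeness={completeness:.2f}")
```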
Where does patient data come from?
Person with illness → eligible population → patient contacts NHS? → Dr collects data item? → [who?] transfers data item? → [who?] stores data item? → anonymise → departmental / hospital dataset → researcher
Each of these steps is a point where data can be lost or distorted before it reaches the researcher.
Association vs. causation
• Fallacy of post hoc ergo propter hoc
• Risk of random association ("coincidence") - this risk can be estimated using significance testing
• A real association may be just that: both phenomena may be connected via a common cause
Do lemon imports from Mexico cause highway fatalities?
Source: www.cqeacademy.com/cqe-body-of-knowledge/continuous-improvement/quality-control-tools/
Graphical model
A common underlying cause? (improved roads, better vehicles….) drives both the apparent exposure (Mexican lemon imports) and the outcome (reduced highway deaths); no causal arrow between the two is needed.
Association vs. causation: Rochester library study
Study question: is hospital length of stay (LOS) shorter in patients whose doctors used the Rochester NY library?
Method: compared LOS in patients of library-using doctors vs. patients of doctors who did not use it (case-control)
Result: LOS was 1 day less for library-using doctors; the savings would easily pay for the library!
Possible interpretations:
• Library use is the cause of reduced LOS
• Library use is a marker of doctors who keep their patients in hospital for less time
• Library use results from doctors keeping patients in hospital less (reverse causation)
A better question: what is the impact on LOS of providing a sample of doctors with access to the library?
Graphical model
"Better doctors?" may cause both library use and patients spending less time on the ward; alternatively, the arrow may run backwards, from shorter stays to library use (reverse causation).
Concerns about making inferences from routine data
Simpson's Paradox: mortality in diabetes
[Table: overall rates of 64% of 358 in one group vs 97% of 544 in the other; the direction of the comparison reverses once the data are stratified by age.]
Data from Poole Diabetes cohort, cited by Julious et al. BMJ 1994
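To make the reversal concrete, a toy Python illustration with invented numbers (not the Poole data):

```python
# Toy illustration of Simpson's paradox with invented numbers (not the Poole data).
# Within each age band group B survives better, yet group A looks better overall,
# because group A is mostly young and group B is mostly old.
young = {"A": (200, 180), "B": (50, 48)}    # (patients, survivors)
old   = {"A": (50, 20),   "B": (200, 90)}

for g in ("A", "B"):
    n = young[g][0] + old[g][0]
    s = young[g][1] + old[g][1]
    print(g,
          f"young {young[g][1] / young[g][0]:.0%}",
          f"old {old[g][1] / old[g][0]:.0%}",
          f"overall {s / n:.0%}")
# A: young 90%, old 40%, overall 80%
# B: young 96%, old 45%, overall 55%  <- the comparison reverses on aggregation
```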
Confounding by indication
• 40% of cancer patients treated with a new drug survive 5 years, versus 30% of patients treated with the old drug
• The difference persists despite taking account of differences in age, baseline cancer severity, genetic markers…
• Conclusion: the new drug improves 5-year survival by 10 percentage points
• But maybe allocation to the new drug depends on the doctor's intuition about who will survive (a subtle predictive feature not recorded in any database)
• So receipt of the new drug is a marker of better outcome, not the cause
Propensity scoring is a potential solution to this.
Propensity scoring
• Estimate the propensity score (the probability of a patient etc. receiving a treatment, policy, or other intervention) from the measured covariates using logistic regression
• Use that score as an independent variable in later analysis, or when matching cases & controls
• This should reduce the confounding found when estimating a treatment effect by simply comparing outcomes among units that received the treatment versus those that did not
Source: Paul Rosenbaum & Donald Rubin, Biometrika 1983
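A minimal sketch of one common variant, propensity-score (inverse probability) weighting, in Python. The column names (new_drug, age, stage, marker, survived_5y) are hypothetical:

```python
# A minimal sketch of inverse-probability-of-treatment weighting (IPTW).
# Column names (new_drug, age, stage, marker, survived_5y) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw(df: pd.DataFrame, treat: str, covariates: list) -> pd.Series:
    """Estimate p(treatment | covariates) by logistic regression and return
    inverse-probability-of-treatment weights."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat])
    ps = model.predict_proba(df[covariates])[:, 1]          # propensity score
    t = df[treat].to_numpy()
    return pd.Series(np.where(t == 1, 1 / ps, 1 / (1 - ps)), index=df.index)

# Usage on a hypothetical data frame df:
# w = iptw(df, "new_drug", ["age", "stage", "marker"])
# ate = (np.average(df.loc[df.new_drug == 1, "survived_5y"], weights=w[df.new_drug == 1])
#        - np.average(df.loc[df.new_drug == 0, "survived_5y"], weights=w[df.new_drug == 0]))
```

Weighting by the inverse propensity makes the treated and untreated groups resemble each other on the measured covariates; it cannot, of course, balance unmeasured ones such as the doctor's intuition above.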
Estimating the survival benefit of ezetimibe from routine GP data: 2233 all-cause deaths in heart attack survivors
Eg. first incident MI; missing cholesterol levels; medication covariates
Source: Pauriah et al. Ezetimibe use and mortality in survivors of an acute myocardial infarction: a population-based study. Heart 2014
Immortal time bias
• Occurs in cohort studies if the time before exposure in exposed cases is included in the analysed time
• Rationale: the time from cohort entry to receiving the intervention is "immortal time", because the person cannot die during it or they would not have become an exposed case (Suissa S. Pharmacoepidemiol & Drug Safety 2007)
• Result: the effect size is overestimated because cases are credited with time during which the event cannot occur
• Solution: analyse time after actual exposure, not from cohort entry
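A sketch of how person-time should be allocated to avoid the bias; the data structure and logic are illustrative, not taken from any real study:

```python
# Sketch of person-time allocation to avoid immortal time bias.
# All structures and numbers are illustrative, not from any real study.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Patient:
    followup: float                  # years from cohort entry to death/censoring
    exposure_start: Optional[float]  # years from entry to first prescription, or None
    died: bool

def death_rates(patients: List[Patient], naive: bool):
    exp_t = exp_d = unexp_t = unexp_d = 0.0
    for p in patients:
        if p.exposure_start is None:
            unexp_t += p.followup
            unexp_d += p.died
        elif naive:
            # Naive analysis: all follow-up counted as exposed, including the
            # pre-prescription "immortal" time during which death was impossible
            exp_t += p.followup
            exp_d += p.died
        else:
            # Correct analysis: person-time before exposure counts as unexposed
            unexp_t += p.exposure_start
            exp_t += p.followup - p.exposure_start
            exp_d += p.died
    return exp_d / exp_t, unexp_d / unexp_t   # deaths per person-year

# The naive version inflates exposed person-time with guaranteed death-free time,
# so the exposed death rate (and hence the apparent benefit) is exaggerated.
```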
Estimating causality from big health data: some possible solutions
Understand & quantify the biases & apply expertise in relevant analytical methods:
• life course epidemiology, multi-level modelling
• functional data analysis for intermittent monitoring data
• case-crossover design (Farrington)
• mediation and Rubin causal modelling
• instrumental variable analysis, eg. regression discontinuity
The idea of instrumental variables
The instrumental variable determines whether the intervention occurs, but has no direct link with the outcome except via the intervention: instrument → intervention or risk exposure → outcome, with no arrow from instrument to outcome.
Examples of IVs
• Allocation by a random number - an RCT
• Allocation by doctor, time period, day of week, distance from hospital, district…
• Some obscure factor that makes treatment feasible, eg. an HLA-matched sibling allowing bone marrow transplant in children with AML (Davey Smith G, JLL 2006, citing Gray & Wheatley 1990)
• Allocation by a test result or risk score - an instrumental variable design
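The standard estimator for such designs is two-stage least squares. A hand-rolled sketch follows; the variable names are hypothetical and this is illustrative, not a production IV analysis:

```python
# Hand-rolled two-stage least squares (2SLS) sketch; all variable names are
# hypothetical and this is illustrative, not a production IV analysis.
import numpy as np

def two_stage_ls(z: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """Estimate the effect of exposure x on outcome y using instrument z.
    Stage 1: regress x on z; stage 2: regress y on the stage-1 fitted values."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # predicted exposure
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(beta[1])                                # effect of instrumented x

# eg. z = day-of-week allocation, x = treatment actually received, y = outcome.
# Because x_hat varies only through z, confounders of x and y are bypassed,
# provided z really does affect y only via x.
```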
Regression discontinuity design (RDD)
• Some drugs / procedures are allocated according to a threshold in a continuous variable, eg. a test result or predicted risk
• But due to measurement error, people just above & just below an allocation threshold are very similar
• So, given enough people to compare, the impact of the intervention can be estimated, just like in an RCT…
Thistlethwaite & Campbell, 1960
Assumptions of the RDD design
• Therapy is assigned (deterministically or probabilistically) according to a known cut-off point in a continuous assignment variable
• The assignment variable is measured before treatment and is not changed by treatment
• There is continuity in the outcomes at the threshold (no measured or unmeasured confounders), eg. due to random measurement error
• Covariates are similar above & below the threshold
Carrying out RDD studies
• Test feasibility: continuous assignment variable, universal outcome assessment, check whether the treatment assignment rule is fuzzy or sharp
• Check covariate balance and manipulation of treatment status (ie. bunching near the cutoff)
• Visually check for a treatment effect - plot outcome vs. assignment variable
• Fit regression models to estimate the treatment effect using different distances from the cutoff (bandwidths), as in the sketch below
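A minimal sharp-RDD estimator, fitting straight lines either side of the threshold; the names (score, outcome, cutoff, bandwidth) are generic placeholders:

```python
# A minimal sharp-RDD sketch: local linear fits either side of the threshold.
# Names (score, outcome, cutoff, bandwidth) are generic placeholders.
import numpy as np

def rdd_effect(score: np.ndarray, outcome: np.ndarray,
               cutoff: float, bandwidth: float) -> float:
    """Estimate the treatment effect as the jump in outcome at the cutoff,
    using separate straight-line fits within +/- bandwidth of the threshold."""
    below = (score >= cutoff - bandwidth) & (score < cutoff)
    above = (score >= cutoff) & (score <= cutoff + bandwidth)
    fit_lo = np.polyfit(score[below], outcome[below], 1)   # untreated side
    fit_hi = np.polyfit(score[above], outcome[above], 1)   # treated side
    return np.polyval(fit_hi, cutoff) - np.polyval(fit_lo, cutoff)

# Re-run with several bandwidths to check the estimate is stable, and
# histogram `score` near the cutoff to check for bunching (manipulation).
```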
Our attempted RDD study in 45,000 Scottish women with breast cancer
• The NHS Predict score is an accurate, well-calibrated algorithm for predicting p(Response|Chemotherapy)
• NICE: doctors should offer women chemotherapy when p(R|C) > 5%, be reluctant to give it if < 3%, and discuss it with the woman if 3-5%
• However, this is what happens in Scotland: [figure not reproduced]
Gray, Hall, Marti, Brewster, Wyatt, submitted to J Clin Epidemiol. Funded by CSO Scotland
Some further concerns about the RDD design
• Do doctors closely follow the calculated risk when allocating treatment?
• Sample size: a large cohort is needed to give enough data points close to the risk threshold
• Allocation bias: if my GP wants to prescribe statins & my cholesterol is 4.9, she may retest me
• Data recording bias: the GP may record 4.9 as 5 - so I become eligible for a statin!
Scenarios when RDD may be useful
• When routine data are available
• When the treatment has already become established
• When "randomisation is unethical"
• Rare diseases with reluctance to refer to a single centre
• When RCTs recruit unrepresentative samples
Beware: non-randomised study designs are associated with replication failure!
Ioannidis et al. Contradicted and initially stronger effects in highly cited clinical research. JAMA 2005 [original articles with 1000+ citations, 1990-2003]
Conclusions
• It makes sense to use routine health data to improve patient safety, target interventions, evaluate process innovations and create a "Learning Health System"
• But it is often hard to know whether our data are biased or lack key unmeasured variables
• Propensity scoring, IV or RDD methods can sometimes help - but not always
• More research is needed to understand when we can trust the results of PS, IV, RDD and other inferential methods