USE OF EVIDENCE IN DECISION MODELS: An appraisal of health technology assessments in the UK
Nicola Cooper
Centre for Biostatistics & Genetic Epidemiology, Department of Health Sciences, University of Leicester, U.K.
http://www.hs.le.ac.uk/personal/njc21/
Acknowledgements to: Doug Coyle, Keith Abrams, Miranda Mugford & Alex Sutton
OUTLINE
• Background to empirical research
• Methods & findings from study
• Conclusions
BACKGROUND
• Decision models are increasingly developed to inform complex clinical/economic decisions (e.g. NICE technology appraisals)
• Decision models provide:
  • an explicit, quantitative & systematic approach to decision making
  • a comparison of at least 2 alternatives
  • a useful way of synthesising evidence from multiple sources (e.g. effectiveness data from trials, adverse event rates from observational studies, etc.)
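As a minimal illustration of this comparative structure (a sketch, not from the slides; all numbers are hypothetical), the following computes expected cost and QALYs for two strategies in a one-step decision tree and summarises them as an incremental cost-effectiveness ratio (ICER):

```python
# Minimal two-strategy decision tree (illustrative numbers only).
def expected_outcomes(p_success, cost_success, cost_failure,
                      qaly_success, qaly_failure):
    """Expected cost and QALYs over the success/failure branches."""
    cost = p_success * cost_success + (1 - p_success) * cost_failure
    qaly = p_success * qaly_success + (1 - p_success) * qaly_failure
    return cost, qaly

# Hypothetical comparator (standard care) and new intervention.
cost_std, qaly_std = expected_outcomes(0.60, 1000.0, 2500.0, 0.80, 0.45)
cost_new, qaly_new = expected_outcomes(0.75, 1800.0, 3100.0, 0.85, 0.50)

# Incremental cost-effectiveness ratio: extra cost per extra QALY.
icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(f"ICER = {icer:.0f} per QALY gained")
```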
BACKGROUND
• Decision modelling techniques commonly used for:
  i) Extrapolation of primary data beyond the endpoint of a trial
  ii) Indirect comparisons when there are no 'head-to-head' trials (sketched below)
  iii) Investigation of how the cost-effectiveness of clinical strategies/interventions changes with the values of key parameters
  iv) Linking intermediate endpoints to ultimate measures of health gain (e.g. QALYs)
  v) Incorporation of country-specific data relating to disease history and management
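For point ii), one standard approach is an adjusted indirect comparison (Bucher's method): given trials of A vs C and B vs C, the indirect A vs B log odds ratio is the difference of the two direct log odds ratios, and their variances add. A sketch with made-up trial estimates:

```python
import math

# Bucher-style adjusted indirect comparison (hypothetical estimates).
# Direct evidence: A vs C and B vs C trials; target: A vs B.
log_or_ac, se_ac = math.log(0.70), 0.15   # hypothetical A vs C
log_or_bc, se_bc = math.log(0.85), 0.20   # hypothetical B vs C

# Indirect A vs B: difference of log ORs; the variances add.
log_or_ab = log_or_ac - log_or_bc
se_ab = math.sqrt(se_ac**2 + se_bc**2)

lo = math.exp(log_or_ab - 1.96 * se_ab)
hi = math.exp(log_or_ab + 1.96 * se_ab)
print(f"Indirect OR(A vs B) = {math.exp(log_or_ab):.2f} "
      f"(95% CI {lo:.2f} to {hi:.2f})")
```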
BACKGROUND
• Decision models contain many unknown parameters; evidence may include published data, controlled trial data, observational study data, or expert knowledge
• Need to utilise/synthesise the available evidence
• Model parameters can include:
  • clinical effectiveness,
  • costs,
  • disease progression rates, &
  • utilities
• Evidence-based models require systematic methods for identification & synthesis of evidence to estimate model parameters with appropriate levels of uncertainty (sketched below)
• Selecting only the "best" (most relevant) evidence potentially ignores valuable information from other sources
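One common way to attach "appropriate levels of uncertainty" to inputs is to express each parameter as a distribution and sample from it (probabilistic sensitivity analysis). A minimal sketch with hypothetical data: a Beta distribution for a probability and a Gamma distribution for a cost:

```python
import random

random.seed(1)

# Probabilistic sensitivity analysis sketch (hypothetical inputs).
# Probability: Beta(r, n - r), a common choice given r events in n patients.
r, n = 45, 60
# Cost: Gamma with mean 1000, sd 200 (shape = mean^2/var, scale = var/mean).
shape = 1000.0**2 / 200.0**2
scale = 200.0**2 / 1000.0

p_draws, cost_draws = [], []
for _ in range(5000):
    p_draws.append(random.betavariate(r, n - r))
    cost_draws.append(random.gammavariate(shape, scale))

print(f"p: mean {sum(p_draws) / len(p_draws):.3f}")
print(f"cost: mean {sum(cost_draws) / len(cost_draws):.0f}")
```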
ECONOMIC DECISION MODEL DATA SOURCES
[Diagram: evidence from data sources (RCT1, RCT2, RCT3, OBS1, OBS2, routine data, expert opinion) flows through evidence synthesis (opinion pooling, meta-analysis, generalised synthesis, Bayes theorem, used in combination) into the model inputs (clinical effect, adverse events, cost, utility) that feed the decision model]
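The meta-analysis arm of the synthesis step can be illustrated with fixed-effect inverse-variance pooling of study estimates (the numbers below are illustrative, not from any HTA):

```python
import math

# Fixed-effect inverse-variance meta-analysis (hypothetical log odds ratios).
studies = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.25)]  # (logOR, SE)

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

lo = math.exp(pooled - 1.96 * se_pooled)
hi = math.exp(pooled + 1.96 * se_pooled)
print(f"Pooled OR = {math.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```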
MRC FELLOWSHIP
The use of evidence synthesis & uncertainty modelling in economic evidence-based health-related decision models
• Part 1) Review and critique the use of evidence in decision models developed as part of health technology assessments to date
• Part 2) Develop practical solutions for synthesising evidence, with appropriate uncertainty, to inform model inputs: for example, combining evidence reported in different formats (e.g. mean and median; sketched below), from different sources (e.g. RCT, cohort, registry, etc.)
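One way into the mean-and-median example is to approximate a mean from a reported median and range so studies can be pooled on a common scale. The (min + 2·median + max)/4 rule of thumb below (Hozo et al 2005, which assumes only moderate skew) is an assumption of this sketch, not a method named in the slides, and the study summaries are hypothetical:

```python
# Approximating a mean from a reported median and range, so it can be
# pooled with means from other studies. The (min + 2*median + max)/4
# rule of thumb is an assumption, not part of the original slides.
def mean_from_median(minimum, median, maximum):
    return (minimum + 2 * median + maximum) / 4

mean_study1 = 6.2                                # study 1 reports a mean
mean_study2 = mean_from_median(2.0, 5.0, 14.0)   # study 2: median & range

print(f"Study 2 approximate mean = {mean_study2:.2f}")  # -> 6.50
```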
NICE GUIDANCE
NICE Guide to the Methods of Technology Appraisal (2004):
• 'all relevant evidence must be identified'
• 'evidence must be identified, quality assessed and, where appropriate, pooled using explicit criteria and justifiable and reproducible methods'
• 'explicit criteria by which studies are included or excluded'
USE OF EVIDENCE IN HTA DECISION MODELS (Cooper et al, in press)
OBJECTIVE:
• Review the sources & quality of evidence used in the development of economic decision models in health technology assessments in the UK
METHODOLOGY:
• Review included all economic decision models developed as part of the NHS Research & Development Health Technology Assessment (HTA) Programme between 1997 and 2003 inclusive
• Quality of evidence was assessed using a hierarchy of data sources developed for economic analyses (Coyle & Lee 2002) & good practice guidelines (Philips et al 2004)
GOOD PRACTICE CRITERIA FOR DECISION MODELS (Philips et al 2004)
• Statement of perspective (e.g. healthcare, societal, etc.)
• Description of strategies/comparators
• Diagram of model/disease pathways
• Development of model structure and assumptions discussed
• Table of model input parameters presented
• Source of parameters clearly stated
• Model parameters expressed as distributions
• Discussion of model assumptions
• Sensitivity analysis performed
• Key drivers/influential parameters identified
• Evaluation of internal consistency undertaken
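Applying these criteria amounts to a simple checklist per model; a sketch follows (the pass/fail answers are hypothetical, not findings from the review):

```python
# Scoring one model against the good practice criteria above.
# The True/False assessment here is hypothetical.
CRITERIA = [
    "perspective stated", "comparators described", "model diagram",
    "structure & assumptions discussed", "input parameter table",
    "parameter sources stated", "parameters as distributions",
    "assumptions discussed", "sensitivity analysis performed",
    "key drivers identified", "internal consistency evaluated",
]

assessment = dict.fromkeys(CRITERIA, True)
assessment["parameters as distributions"] = False    # hypothetical gaps
assessment["internal consistency evaluated"] = False

met = sum(assessment.values())
print(f"{met}/{len(CRITERIA)} criteria met "
      f"({100 * met / len(CRITERIA):.0f}%)")
```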
HIERARCHY OF DATA SOURCES
• Hierarchy of evidence: a list of potential sources of evidence for each data component of interest:
  • Main clinical effectiveness
  • Baseline clinical data
  • Adverse events and complications
  • Resource use
  • Costs
  • Utilities
• Sources ranked on an increasing scale from 1 to 6, with the most appropriate (best quality) assigned a rank of 1
HIERARCHY OF DATA SOURCES
[Table: ranked sources of evidence for each data component, from the Coyle & Lee (2002) hierarchy]
# Surrogate outcomes = an endpoint measured in lieu of some other so-called true endpoint (including survival at the end of a clinical trial as a predictor of lifetime survival)
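The slides do not reproduce the full hierarchy table, but its use can be sketched: each data component carries its own ranked list of sources, and each model input is graded against it. The rankings in this sketch are hypothetical placeholders, not the published Coyle & Lee (2002) table:

```python
# Sketch of applying a per-component hierarchy of data sources
# (1 = best, 6 = worst). The rankings below are hypothetical
# placeholders, NOT the published Coyle & Lee (2002) table.
HIERARCHY = {
    "clinical effectiveness": {
        "meta-analysis of RCTs": 1, "single RCT": 2,
        "observational study": 4, "expert opinion": 6,
    },
    "costs": {
        "prospective micro-costing": 1, "national unit costs": 2,
        "published cost study": 4, "expert opinion": 6,
    },
}

def grade(component, source):
    """Rank of a source for a component; None if unlisted ('unclear')."""
    return HIERARCHY.get(component, {}).get(source)

print(grade("clinical effectiveness", "single RCT"))  # -> 2
print(grade("costs", "expert opinion"))               # -> 6
```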
FLOW DIAGRAM
• 180 HTAs published 1997-2003
  • 147 out of 180 (82%) considered health economics
    • 48 out of 147 (33%) developed decision models
      • 6 out of 48 (13%) cost analysis models
      • 42 out of 48 (88%) economic evaluation models
        • 26 out of 42 (62%) decision trees#
        • 12 out of 42 (29%) Markov models#
        • 5 out of 42 (12%) individual sampling models#
• 22 of the 42 economic evaluation models were NICE appraisals
# One HTA reported both decision tree & Markov models, one reported both Markov & individual patient models, and one model type was unclear
RESULTS FROM APPLYING HIERARCHIES OF EVIDENCE (n=42 decision models)
[Figure: distribution of evidence ranks (1 to 6, plus 'unclear' and 'N/A') across model inputs, grouped into high, medium & low quality]
CONCLUSIONS
• Evidence on main clinical effect mostly:
  • identified & quality assessed (76%) as part of the companion systematic review for the HTA
  • reported in a fairly transparent & reproducible way
• For all other model inputs (i.e. adverse events, baseline clinical data, resource use, and utilities):
  • search strategies for identifying relevant evidence rarely made explicit
  • sources of specific evidence not always reported
CONCLUSIONS
Concerns about decision models confirmed by this study:
• (1) Use of data from diverse sources (e.g. RCTs, observational studies, expert opinion), which may be subject to varying degrees of bias due to confounding variables, patient selection, or methods of analysis
• (2) Lack of transparency regarding the identification of model input data & the key assumptions underlying model structure and evaluation
• (3) Bias introduced by the researcher with regard to the choice of model structure & the selection of parameter values input into the model
CONCLUSIONS
• Hierarchies of evidence for different data components provide a useful tool for:
  i) assessing the quality of evidence,
  ii) promoting transparency, &
  iii) identifying the weakest aspects of a model to inform future work
• It is acknowledged that highly ranked evidence for certain model parameters may not always be available, but this needs to be made explicit (e.g. was expert opinion used because no other data were available?)
• The value of evidence input into decision models, regardless of its position in the hierarchy, depends on its quality & relevance to the question of interest
• QUANTITY vs. QUALITY (PRECISION vs. BIAS)
UNANSWERED QUESTIONS
• How best to identify the relevant evidence?
• How much evidence is sufficient, and when would there be benefit from identifying additional/supplementary evidence (possibly from lower levels of the hierarchy)?
• How to appropriately assess, and where possible adjust for, the quality of different types of evidence?
  • Instruments exist for assessing quality within study designs, but assessment across different study designs is non-trivial (Downs & Black 1998)
• How to appropriately combine/synthesise evidence from different study types? For example:
  • meta-analyse all data assuming equal weight,
  • use observational data as a prior for RCT data (sketched below), or
  • fit a hierarchical synthesis model
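The "observational data as a prior for RCT data" option can be sketched as a conjugate normal-normal update on the log odds ratio scale, with the observational prior down-weighted (power-prior style) to reflect its lower rank in the hierarchy; all numbers below are illustrative:

```python
import math

# Normal-normal Bayesian update on the log-OR scale (hypothetical numbers):
# pooled observational evidence as a down-weighted prior, RCT as likelihood.
prior_mean, prior_se = math.log(0.75), 0.10   # observational pooled logOR
rct_mean, rct_se = math.log(0.90), 0.18       # RCT logOR

# Down-weight the observational prior (inflate its variance) to reflect
# its lower position in the hierarchy: 0 < w <= 1.
w = 0.5
prior_prec = w / prior_se**2
rct_prec = 1 / rct_se**2

post_prec = prior_prec + rct_prec
post_mean = (prior_prec * prior_mean + rct_prec * rct_mean) / post_prec
post_se = math.sqrt(1 / post_prec)

print(f"Posterior OR = {math.exp(post_mean):.2f} "
      f"(SE of logOR = {post_se:.2f})")
```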
REFERENCES
• Cooper NJ, Coyle D, Abrams KR, Mugford M, Sutton AJ. Use of evidence in decision models: an appraisal of health technology assessments in the UK to date. Journal of Health Services Research and Policy (in press, 2005).
• Coyle D, Lee KM. Evidence-based economic evaluation: how the use of different data sources can impact results. In: Donaldson C, Mugford M, Vale L, eds. Evidence-based Health Economics: From Effectiveness to Efficiency in Systematic Review. London: BMJ Publishing Group, 2002: 55-66.
• Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology and Community Health 1998;52:377-84.
• Philips Z, Ginnelly L, Sculpher M, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technology Assessment 2004;8(36).
• National Institute for Clinical Excellence (NICE). Guide to the Methods of Technology Appraisal. London: NICE, 2004.
Copy of slides available at: http://www.hs.le.ac.uk/personal/njc21/