INCREASING THE TRANSPARENCY OF CEA MODELING ASSUMPTIONS: A SENSITIVITY ANALYSIS BASED ON STRENGTH OF EVIDENCE
RS Braithwaite, MS Roberts, AC Justice
Introduction • Tragicomic anecdote
Introduction • Policy makers/clinicians reluctant to use CEA because assumptions difficult to understand • Using Cost-Effectiveness Analysis to Improve Health Care: Opportunities and Barriers. Neumann PJ 2005 • CMS (26th National meeting of SMDM, 2004) • CEA modelers may base parameter estimates on studies that have limited evidence. • Modelers may not consider all studies with comparable evidence and applicability
Objective • To develop a method to clarify the tradeoff between strength of evidence and precision of CEA results.
Methods • Proof of concept based on hypothetical data and simplified model of HIV natural history. • Question: What is the cost-effectiveness of Directly Observed Therapy (DOT) for HIV patients?
Methods • Basic idea • When data sources have insufficient strength of evidence, we should no longer use them to estimate model parameters. • Instead, we should assume that little is known and specify them using wide probability distributions with the fewest embedded assumptions • Uniform distribution
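The fallback idea above can be sketched in a few lines. This is an illustrative sketch, not the authors' actual code: the parameter, the beta distribution, and all numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model parameter: an annual event rate on [0, 1].
# If the supporting evidence meets the strength-of-evidence criteria,
# draw from a distribution fitted to the (made-up) study estimate;
# otherwise fall back to a wide uniform over the full plausible range,
# which embeds the fewest assumptions about the true value.
def draw_event_rate(evidence_ok: bool, n: int = 10_000):
    if evidence_ok:
        # e.g. a beta distribution matched to a hypothetical study
        # reporting a rate near 0.20 with fair precision
        return rng.beta(20, 80, size=n)
    # minimally informative fallback: uniform over the plausible range
    return rng.uniform(0.0, 1.0, size=n)
```

The key design choice is that the fallback range must be wide enough to be acceptable to all CEA users, so the resulting uncertainty reflects genuine ignorance rather than a weakly supported point estimate.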
Methods • Assess strength of evidence based on USPSTF guidelines, which specify three valuation domains • Study design • Extent to which design differs from controlled experiment • Level 1 = best (RCT) • Level 3 = worst (expert opinion, anecdotal evidence) • Internal validity • Extent to which results represent truth in study population • Good = best (little LTFU, objective assessment) • Poor = worst (large or diverging LTFU, subjective assessment) • External validity • Extent to which results represent truth in target population • High = best (similar pt characteristics, care settings) • Low = worst (dissimilar pt characteristics, care settings)
Methods • Vary evidence criteria in 3 domains from most to least inclusive • Individually and in aggregate • If evidence meets or exceeds criteria, use it to estimate parameter input distribution • If evidence does not meet criteria, do not use it • Use uniform distribution over plausible range sufficiently wide to be acceptable to all CEA users
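The screening step above amounts to a filter over the three domains. A minimal sketch, assuming a simple encoding of the domains (the `Source` class, its field names, and the example sources are all hypothetical, not taken from the study):

```python
from dataclasses import dataclass

# Hypothetical encoding of the three USPSTF-style domains.
# Lower design level is better (1 = RCT, 3 = expert opinion).
@dataclass
class Source:
    name: str
    design_level: int          # 1 (best) .. 3 (worst)
    internal_validity: str     # "good", "fair", "poor"
    external_validity: str     # "high", "moderate", "low"

INTERNAL_RANK = {"poor": 0, "fair": 1, "good": 2}
EXTERNAL_RANK = {"low": 0, "moderate": 1, "high": 2}

def meets_criteria(s, max_design=3, min_internal="poor", min_external="low"):
    """True if a source meets or exceeds every active criterion.

    Defaults are the most inclusive setting (everything passes); tightening
    any argument tightens the corresponding domain individually, and
    tightening all three applies the criteria in aggregate."""
    return (s.design_level <= max_design
            and INTERNAL_RANK[s.internal_validity] >= INTERNAL_RANK[min_internal]
            and EXTERNAL_RANK[s.external_validity] >= EXTERNAL_RANK[min_external])

sources = [
    Source("trial A", 1, "good", "high"),
    Source("cohort B", 2, "good", "low"),
    Source("opinion C", 3, "poor", "low"),
]

# Strictest setting: only sources meeting all three top grades survive;
# everything else gets the wide uniform fallback instead.
eligible = [s for s in sources if meets_criteria(
    s, max_design=1, min_internal="good", min_external="high")]
```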
Methods • For natural history parameters that can only be observed rather than determined experimentally, observational studies were eligible for Level 1 design • Overall mortality rate due to age-, sex-, and race-related causes • When more than one source of evidence met criteria, we used the source with greatest statistical precision • Alternative: pool sources, weighting by inverse of variance • When substituting a uniform distribution, ensure the direction of the aggregate effect is neutral • Maximizes conservatism of approach
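The two selection rules above (most precise single source vs. inverse-variance pooling) can be sketched with made-up numbers; the `(estimate, variance)` pairs below are invented for illustration.

```python
# Hypothetical (estimate, variance) pairs for sources that passed
# the evidence screen for one model parameter.
estimates = [(0.30, 0.010), (0.25, 0.002), (0.40, 0.050)]

# Rule used in the study: keep the single most precise source
# (smallest variance).
best = min(estimates, key=lambda ev: ev[1])

# Alternative mentioned on the slide: fixed-effect-style pooling,
# weighting each estimate by the inverse of its variance.
weights = [1.0 / v for _, v in estimates]
pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
pooled_var = 1.0 / sum(weights)
```

Inverse-variance pooling uses all eligible evidence but assumes the sources estimate the same underlying quantity; picking the single most precise source avoids that assumption at the cost of discarding data.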
Methods • Model: extremely simple 10-parameter probabilistic simulation of DOT in HIV • 17 data sources considered
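A probabilistic simulation of this kind draws every parameter from its distribution on each iteration and summarizes the resulting distribution of cost-effectiveness ratios. The toy sketch below uses four invented parameters with invented ranges, not the study's actual 10-parameter model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # simulation draws

# Toy probabilistic CEA of DOT (Directly Observed Therapy) vs.
# self-administered therapy. All parameters and ranges are made up
# purely to illustrate the mechanics.
dot_cost  = rng.uniform(12_000, 18_000, N)   # annual cost with DOT ($)
std_cost  = rng.uniform(9_000, 12_000, N)    # annual comparator cost ($)
dot_qalys = rng.uniform(0.70, 0.90, N)       # QALYs with DOT
std_qalys = rng.uniform(0.65, 0.85, N)       # QALYs with comparator

# Incremental cost-effectiveness ratio (ICER) per draw; draws where
# DOT yields no QALY gain are treated as dominated (infinite ICER).
delta_c = dot_cost - std_cost
delta_q = dot_qalys - std_qalys
icer = np.where(delta_q > 0, delta_c / delta_q, np.inf)

median_icer = np.median(icer)
```

Widening any input distribution (e.g. replacing a fitted distribution with the uniform fallback) directly widens the spread of `icer`, which is how weaker evidence criteria translate into less precise CEA results.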
Results • Base Case: No evidence criteria • All 17 data sources eligible for parameter estimation • Study Design = Level 1 • 13 out of 17 sources were eligible • Internal Validity = Good • 9 out of 17 sources were eligible • External Validity = High • 5 out of 17 sources were eligible • All three criteria • Only 3 out of 17 sources were eligible
Results – Overall • No evidence criteria $78,000/QALY • Study Design = Level 1 $227,000/QALY • Internal Validity = Good $158,000/QALY • External Validity = High >$6,000,000/QALY • All three criteria >$6,000,000/QALY
Limitations • Incorporates a simple model of HIV that was constructed solely for the purpose of illustrating proof of concept. • Method is likely to need further refinement before it could be used on more complex and realistic simulations. • Method only addresses parameter uncertainty, leaving other determinants of modeling uncertainty unexplored.
Conclusions • Strength of evidence may have a profound impact on both the estimates and the precision of CEAs • When all evidence was permitted, results were similar to a previously published DOT CEA (Goldie03) • $40,000 to $75,000/QALY • Little uncertainty • With stricter evidence criteria, our results differed markedly • >$150,000/QALY • Great uncertainty
Implications • Sensitivity analysis by strength of evidence can be linked to any desired ranking method for strength of evidence, and therefore can be customized to facilitate its use by expert panels and organizations. • The advance of this work does not lie in its specification of a particular hierarchy of strength of evidence • The advance lies in showing how any hierarchy can be implemented within a CEA model.
Implications • Users who think "any data is better than no data" will likely base inferences on model results that incorporate all data sources, regardless of strength of evidence • Users who think "my judgment supersedes all but the best data" would likely base inferences only on model results that reflect the highest grades of evidence • Many models may fail to provide conclusive results when validity criteria are stringent • Nonetheless, in the long run this may help CEA become a more essential decision-making tool