Populating decision analytic models Laura Bojke, Zoë Philips With M Sculpher, K Claxton, S Golder, R Riemsma, N Woolacoot, J Glanville
Methodological research required
• Provide guidance on key theoretical, methodological and practical issues not yet covered in published guidelines.
• Two areas identified in advance as priorities:
  • Literature searching for parameter estimation in decision models
  • Adjusting for bias in treatment effect estimates from observational studies used in decision models
Identification of parameter estimates from published literature
• Ad hoc methods are typically used to search for model data
• Methods to identify non-treatment-effect data are not well established
• Comprehensive searching is extremely resource-intensive and may conflict with producing timely decisions
• There may be consequences of not identifying all relevant data
• Integrating the review and model-building processes
Methods
• Preliminary investigation of the feasibility of carrying out focused search strategies to populate a case-study model
• Model of prophylactic antibiotics for preventing urinary tract infections (UTIs) in children
• Questions relating to specific types of model data (and their distributions) were generated to focus the searches (a parameter sketch follows below)
• Iterative approach
• Communication between the modeller and the information expert
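To make the idea of "specific types of model data (and their distributions)" concrete, here is a minimal sketch in Python of how a case-study model's parameters might be specified probabilistically. The structure, parameter names and all numerical values are invented for illustration; they are not the estimates used in the actual UTI model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo samples for probabilistic analysis

# Hypothetical parameter distributions; all values are placeholders,
# not the estimates used in the case-study model.
p_uti_no_proph = rng.beta(20, 80, n)                  # baseline probability of UTI
rr_prophylaxis = rng.lognormal(np.log(0.6), 0.2, n)   # relative risk of UTI on prophylaxis
cost_uti = rng.gamma(25, 20, n)                       # cost of managing a UTI episode
cost_prophylaxis = 50.0                               # assumed fixed cost of the prophylaxis course
qaly_loss_uti = rng.beta(5, 95, n)                    # QALY loss per UTI episode

p_uti_proph = np.clip(p_uti_no_proph * rr_prophylaxis, 0.0, 1.0)

# Expected cost and QALY loss for each strategy, per child
cost_none = p_uti_no_proph * cost_uti
cost_proph = cost_prophylaxis + p_uti_proph * cost_uti
qaly_none = p_uti_no_proph * qaly_loss_uti
qaly_proph = p_uti_proph * qaly_loss_uti

print("Mean incremental cost of prophylaxis:", (cost_proph - cost_none).mean().round(2))
print("Mean QALYs gained with prophylaxis:", (qaly_none - qaly_proph).mean().round(4))
```

The conventional distribution choices (beta for probabilities and utility losses, gamma for costs, lognormal for relative effects) are one reason why each parameter type generates its own, differently targeted search question.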
Methods (2)
• 42 searchable questions for a relatively simple model
• Searches focused on different databases from those used for effectiveness searches (IPD, QOLID)
• Different types of data are found in different databases
• Searches were more focused, with a risk of missing relevant papers
Results
• Baseline event rates: 5/607 records relevant; all from EMBASE
• Resource use and costs: 3/99 records relevant; the majority identified in HEED
• Quality of life: the first set of searches did not focus on utilities; the second set produced 2/75 relevant records
• Relative treatment effects: 17/126 records relevant; MEDLINE/EMBASE
• Antibiotic resistance: 4/242 records relevant; MEDLINE/EMBASE
Results (2)
• Many of the papers identified had already been found through informal searches
• The model structure did not change given the new evidence from the additional papers
Issues
• Even focused searches require a significant time input
• Quality-of-life information is difficult to identify; in part this reflects the fact that so little information is available
• Difficult to conclude which databases are most relevant for different types of data
• Some overlap between the quality-of-life and the resource use and cost papers identified
• Difficult to generalise the results
• A more complex model may produce an unfeasibly large number of search questions
Issues (2)
• Developing search questions is difficult and not exact; some trial and error is involved
• Need to determine the impact of parameter searches more formally
  • Did systematic searching produce better or more data?
  • Did this change the model results?
• Can value of information (VOI) analysis be used to prioritise searching for model data? (see the sketch below)
• Can a preliminary model structure produce reasonable VOI estimates?
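To illustrate how VOI might be used for that purpose, the sketch below computes the expected value of perfect information (EVPI) by Monte Carlo for a hypothetical two-strategy model; a parameter-level version of the same idea (EVPPI) could, in principle, rank parameters by how much additional searching might be worth. All numbers, including the willingness-to-pay threshold, are illustrative assumptions rather than results from the case study.

```python
import numpy as np

rng = np.random.default_rng(2)
n, wtp = 10_000, 20_000          # samples; illustrative willingness-to-pay per QALY

# Hypothetical probabilistic samples standing in for a preliminary model
p0 = rng.beta(20, 80, n)                                     # baseline UTI risk
p1 = np.clip(p0 * rng.lognormal(np.log(0.6), 0.2, n), 0, 1)  # risk on prophylaxis
cost_uti = rng.gamma(25, 20, n)                              # cost per UTI episode
q_loss = rng.beta(5, 95, n)                                  # QALY loss per UTI episode

# Net monetary benefit for each strategy (columns) at each sample (rows)
nb = np.column_stack([
    -wtp * p0 * q_loss - p0 * cost_uti,            # no prophylaxis
    -wtp * p1 * q_loss - (50.0 + p1 * cost_uti),   # prophylaxis, assumed course cost 50
])

# EVPI = E[max over strategies] - max over strategies of E[net benefit]
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPI per decision: {evpi:.2f}")
```

Multiplying the per-decision EVPI by the size of the population affected by the decision gives an upper bound on what further evidence gathering, including further literature searching, could be worth.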
How to handle bias in parameter estimates incorporated into decision models
• Selection bias may affect estimates of relative treatment effect taken from non-randomised studies
• Scoping exercise of the literature describing the effect of selection bias in terms of the uncertainty associated with an estimate of treatment effect
  • Quantification of uncertainty
  • Methods to deal with selection bias (a bias-adjustment sketch follows below)
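As one concrete (and hypothetical) way of expressing the second point, a bias term with its own distribution can be added to an observational estimate of a log odds ratio, shifting the estimate and inflating its uncertainty before it enters the model. The point estimate, standard error and bias parameters below are invented placeholders, not figures taken from the review.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Observational estimate of a log odds ratio: point estimate and standard error
# (placeholder numbers for illustration only).
log_or_hat, se = np.log(0.70), 0.15
sampled_log_or = rng.normal(log_or_hat, se, n)

# Additive bias term expressing beliefs about selection bias in the study:
# a non-zero mean shifts the estimate, the standard deviation inflates uncertainty.
bias = rng.normal(0.10, 0.20, n)
adjusted_log_or = sampled_log_or + bias

print("Unadjusted OR (2.5th, 50th, 97.5th percentiles):",
      np.exp(np.percentile(sampled_log_or, [2.5, 50, 97.5])).round(2))
print("Bias-adjusted OR (2.5th, 50th, 97.5th percentiles):",
      np.exp(np.percentile(adjusted_log_or, [2.5, 50, 97.5])).round(2))
```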
Methods
• A pragmatic search was used because of the abundance of 'bias' literature
• Even so, 768 records were identified
  • Unfeasible to sift through given the scope of the project
• 14 references were identified from citation checking and contacting experts
• Not all 14 references were found among the 768 records from the formal searches
Results
• 3 studies did not provide actual estimates of bias
• Of the remaining 11 studies, 5 concluded that a non-randomised trial design is associated with bias
• 6 studies found 'similar' estimates of treatment effect from observational studies or non-randomised clinical trials and from RCTs
• 3 studies suggested it may be possible to minimise any differences by ensuring that the subjects included in each type of study are comparable
Conclusions
• RCTs and observational studies do not necessarily produce different results
• Where results do differ significantly from one another, the direction of that difference is neither constant nor predictable
• Results from a good-quality controlled cohort study may more accurately reflect those of the 'true' patient population
• The review failed to uncover any information on the quantification of bias in uncontrolled studies
• The subsequent HTA report applied methods to deal with selection bias
References
• Full HTA report: Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, Woolacoot N, Glanville J. A review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technology Assessment 2004; 8(36).
• Parameter searches paper: Golder S, Ginnelly L, Glanville J. International Journal of Technology Assessment in Healthcare. Forthcoming, 2005.