DISCUSSION
Alex Sutton
Centre for Biostatistics & Genetic Epidemiology, University of Leicester
DEFINING “RELEVANT” EVIDENCE
• Driven by a pre-specified model structure?
• Or should the model be malleable to the available evidence?
• What evidence should be used to define the structure?
• Necessarily a dynamic process?
  • e.g. look for lower grades of evidence if higher grades are not found?
• How much evidence is sufficient?
  • Bias vs. precision trade-off?
• Utilising related evidence
  • e.g. different population groups / time periods
  • e.g. indirect comparisons (see the sketch below)
• Would guidance / rules be helpful?
  • Would a rulebook be helpful / fair?
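To make the indirect-comparison bullet concrete, here is a minimal Python sketch of the Bucher adjusted indirect comparison on the log odds ratio scale. All estimates and standard errors below are invented for illustration; they are not taken from any study discussed at the workshop.

```python
# Bucher adjusted indirect comparison: compare treatments A and B via a
# common comparator C. All numbers are hypothetical.
import math

# Pooled direct estimates (log OR, standard error) for A vs C and B vs C.
d_AC, se_AC = -0.40, 0.15   # hypothetical A-vs-C estimate
d_BC, se_BC = -0.10, 0.20   # hypothetical B-vs-C estimate

# Indirect estimate of A vs B; the variances of the two contrasts add.
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)

lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"Indirect log OR (A vs B): {d_AB:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Note the trade-off raised above: the indirect estimate borrows strength from related evidence, but its standard error is necessarily larger than either direct contrast's.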
IDENTIFYING “RELEVANT” EVIDENCE
• Should we always be systematic?
• Should we always be exhaustive?
  • Or do we need to be more pragmatic for some model parameters?
  • What is the role of the expert?
• Need to be explicit about what has been done
  • Document search strategies
  • Develop standardised search strategies for different parameter types / model structures?
  • Use of Google?!
• Required data may not be the primary focus of a paper
ASSESSING QUALITY OF EVIDENCE
• Are hierarchies of evidence useful / correct?
  • Do we need to validate them empirically?
• Assessing the quality of individual studies
  • Scores / checklists
  • Difficult to apply across study designs?
  • Are we really assessing quality of reporting?
INCORPORATING QUALITY OF EVIDENCE
• Incorporation of quality into the synthesis / model?
• Experience from meta-analysis of RCTs suggests this is a very difficult problem
• Options (weighting is sketched below):
  • Define an inclusion threshold
  • Use quality to adjust the weighting given to studies
  • Adjust point estimates
  • Cross-design synthesis modelling
  • Sensitivity analysis
• Problem: little observed relationship between quality and the magnitude of estimates
• Has anyone ever said the evidence is not good enough?
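To make the “adjust weighting” option concrete, the sketch below multiplies each study's inverse-variance weight by a quality score in (0, 1]. This is only one contested approach, and the slide's own caveat applies: quality shows little empirical relationship with effect magnitude. The effects, variances, and quality scores are all hypothetical.

```python
# Quality-adjusted weighting in a fixed-effect inverse-variance pool.
# Shown to illustrate the mechanics, not as a recommendation.
import numpy as np

effects = np.array([0.30, 0.10, 0.55])     # hypothetical study effects
variances = np.array([0.02, 0.05, 0.04])   # hypothetical within-study variances
quality = np.array([1.0, 0.6, 0.3])        # hypothetical quality scores in (0, 1]

w_plain = 1.0 / variances
w_adj = quality * w_plain                  # down-weight lower-quality studies

for name, w in [("unadjusted", w_plain), ("quality-adjusted", w_adj)]:
    pooled = np.sum(w * effects) / np.sum(w)
    print(f"{name:17s} pooled effect = {pooled:.3f}")
```

Comparing the two pooled values is perhaps the safest use of such scores, i.e. as the sensitivity analysis listed among the options above rather than as a primary analysis.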
HOW TO SYNTHESISE THE DATA
• Use meta-analysis when there are multiple sources? (see the sketch below)
• Complications:
  • Data reported in different formats
    • e.g. mean vs. median survival
  • Checking for consistency
    • Heterogeneity
    • Data logically “adding up”
• Little methodology developed for costs / utilities
• Many extensions of meta-analysis / multi-parameter evidence synthesis are in development
• WinBUGS – “realistically complex models”
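A minimal sketch of what “use meta-analysis” plus “checking for consistency” can look like in practice: a DerSimonian-Laird random-effects pool with Cochran's Q and I² as heterogeneity checks. The study estimates and variances are invented; a real analysis, let alone the “realistically complex models” fitted in WinBUGS, would go well beyond this.

```python
# DerSimonian-Laird random-effects meta-analysis with heterogeneity checks.
# Inputs are hypothetical effect estimates with within-study variances.
import numpy as np

y = np.array([0.12, 0.45, 0.30, -0.05])   # hypothetical study estimates
v = np.array([0.04, 0.02, 0.03, 0.05])    # hypothetical within-study variances
k = len(y)

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)       # fixed-effect pooled estimate

Q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q statistic
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0  # % heterogeneity

# DerSimonian-Laird moment estimate of the between-study variance.
s1, s2 = np.sum(w), np.sum(w**2)
tau2 = max(0.0, (Q - (k - 1)) / (s1 - s2 / s1))

w_re = 1.0 / (v + tau2)                   # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Q={Q:.2f}, I2={I2:.0f}%, tau2={tau2:.3f}, "
      f"pooled={y_re:.3f} (SE {se_re:.3f})")
```

Large Q or I² flags the heterogeneity complication listed above; it says nothing about whether the data logically “add up” across related parameters, which needs model-based consistency checks.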
OTHER ISSUES
• When is it worth trying to obtain individual patient data?
  • When relevant information is not reported in a paper?
    • e.g. side effects / population groups / uncertainty estimates, etc.
  • To increase precision?
  • To explore individual-level characteristics?
• Publication / outcome reporting bias
• Which data are the responsibility of the companies?
• Set threshold levels of evidence – would this be fair?
• What have I missed?
“CASE STUDY”
• Model to evaluate the optimum method for diagnosing DVT (deep vein thrombosis)
• Several (5+) tests available
• Approximately 500 studies
• Diagnostic test studies are notoriously poor
  • Poorly reported
  • Unclear population groups, etc.
INDIVIDUAL STUDIES OF D-DIMER
• Substantial heterogeneity!
• Publication bias? (see the sketch below)
• Reduce to only the best studies?
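One common, if imperfect, check for the publication-bias question is Egger's regression test for funnel-plot asymmetry, sketched below. The effect sizes are invented log diagnostic odds ratios and do not come from the D-dimer review itself.

```python
# Egger's regression test: regress standardized effects on precision;
# an intercept far from zero suggests funnel-plot asymmetry.
# All effect sizes and standard errors below are hypothetical.
import numpy as np

y = np.array([1.8, 1.2, 2.3, 0.9, 1.5, 2.8])    # hypothetical log diagnostic ORs
se = np.array([0.6, 0.3, 0.8, 0.2, 0.4, 0.9])   # hypothetical standard errors

z = y / se                # standardized effects
precision = 1.0 / se
X = np.column_stack([np.ones_like(precision), precision])

# Ordinary least squares fit: z = b0 + b1 * precision
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
b0, b1 = coef
print(f"Egger intercept = {b0:.2f} (values far from 0 suggest asymmetry)")
```

With few studies the test has low power, and in diagnostic data the substantial heterogeneity noted above can itself produce asymmetry, so a non-zero intercept is suggestive rather than conclusive.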
OUTPUTS FROM TODAY
• Put presentations on the website (with speakers’ permission)
• Write up and distribute the discussion to attendees
• Consensus paper following both workshops?
• Identification of future empirical / methodological research required
  • e.g. empirical comparisons of thorough vs. quick approaches
NEXT WORKSHOP
“Appropriate methodology for identifying & combining the evidence for use in decision analytical models”
Monday 26th September 2005
Room G20, Department of Health Sciences, University of Leicester
Suggestions for format / talks?