Lecture 10: Meta-analysis of intervention studies
• Introduction to meta-analysis
• Selection of studies
• Abstraction of information
• Quality scores
• Methods of analysis and presentation
• Sources of bias
Definitions
• Traditional (narrative) review:
  • selective, biased
• Systematic review (overview):
  • synthesis of studies addressing a research question
  • explicit, repeatable methods for study selection, data abstraction, and analysis
• Meta-analysis:
  • quantitative pooling of study results
Protocol preparation
• Research question
• Study “population”:
  • search strategy
  • inclusion/exclusion criteria
Protocol preparation
• Search strategy:
  • computerized databases (Medline, CINAHL, PsycINFO, etc.):
    • test the sensitivity and predictive value of the search strategy
  • hand searches (reference lists, relevant journals, colleagues)
  • “grey” (unpublished) literature:
    • pro: reduces publication bias
    • con: results less reliable
Identifying relevant studies for systematic reviews of RCTs in vision research (Dickersin, in Systematic Reviews, BMJ, 1995)
• Sensitivity and “precision” of Medline searching
• Gold standard:
  • registry of RCTs in vision research
  • extensive computer and hand searches
  • contacts with investigators to clarify design
• Sensitivity:
  • proportion of known RCTs identified by the search
• “Precision”:
  • proportion of publications identified by the search that were RCTs
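To make the two search metrics concrete, here is a minimal sketch that computes them from made-up counts (the counts are illustrative only; the real figures are reported in the Dickersin chapter):

```python
# Hypothetical illustration of search "sensitivity" and "precision".
# All counts below are made up for the example.

known_rcts = 150        # gold-standard registry of RCTs in the field
retrieved = 400         # records returned by the Medline strategy
retrieved_rcts = 120    # retrieved records that are truly RCTs

sensitivity = retrieved_rcts / known_rcts   # share of known RCTs found
precision = retrieved_rcts / retrieved      # share of hits that are RCTs

print(f"Sensitivity: {sensitivity:.0%}")    # 80%
print(f"Precision:   {precision:.0%}")      # 30%
```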
[Three figures omitted. Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995]
Protocol preparation
• Study “population”:
  • inclusion/exclusion criteria:
    • language
    • study design
    • outcome of interest
    • etc.
[Figure omitted: example data abstraction form from a meta-analysis project]
Protocol preparation
• Data collection:
  • standardized abstraction form
  • number of abstractors
  • blinding of abstractors
  • rules for resolving discrepancies (consensus, other; see the agreement sketch below)
  • use of quality scores
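When two abstractors code the same trials, their agreement is usually quantified with a chance-corrected statistic before the resolution rules are applied. A minimal sketch of Cohen's kappa on hypothetical include/exclude decisions:

```python
# Sketch: chance-corrected agreement (Cohen's kappa) between two
# abstractors screening the same trials. Labels are hypothetical.

from collections import Counter

abstractor_a = ["include", "include", "exclude", "include", "exclude", "exclude"]
abstractor_b = ["include", "exclude", "exclude", "include", "exclude", "include"]

n = len(abstractor_a)
observed = sum(a == b for a, b in zip(abstractor_a, abstractor_b)) / n

# Expected agreement by chance, from each rater's marginal frequencies
ca, cb = Counter(abstractor_a), Counter(abstractor_b)
expected = sum(ca[k] * cb[k] for k in ca) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, kappa {kappa:.2f}")  # 0.67, 0.33
```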
Analysis
• Measure of effect (computed in the sketch below):
  • odds ratio, risk/rate ratio
  • risk/rate difference
  • relative risk reduction
• Graphical methods:
  • conventional (individual studies)
  • cumulative
  • exploring heterogeneity
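A minimal sketch of how each effect measure is computed from a single trial's 2x2 table (the counts are hypothetical):

```python
# Effect measures from one trial's 2x2 table (hypothetical counts).

a, b = 15, 85    # treated arm: events, non-events
c, d = 30, 70    # control arm: events, non-events

risk_t = a / (a + b)    # risk in treated arm  (0.15)
risk_c = c / (c + d)    # risk in control arm  (0.30)

risk_ratio = risk_t / risk_c                 # 0.50
risk_difference = risk_t - risk_c            # -0.15
relative_risk_reduction = 1 - risk_ratio     # 0.50
odds_ratio = (a * d) / (b * c)               # 0.41

print(risk_ratio, risk_difference, relative_risk_reduction, odds_ratio)
```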
[Two figures omitted. Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995]
Analyses
• Pooling results:
  • is it appropriate?
  • equivalent to pooling results from multi-centre trials
• Fixed-effect methods (e.g., Mantel-Haenszel):
  • assume that all trials share the same underlying treatment effect
• Random-effects methods (e.g., DerSimonian & Laird):
  • allow for heterogeneity of treatment effects
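A minimal sketch of both approaches on the log odds ratio scale, using hypothetical trial estimates: the fixed-effect estimate weights each trial by its inverse variance, while the DerSimonian & Laird estimate adds an estimated between-trial variance (tau squared) to each trial's variance before weighting:

```python
# Inverse-variance pooling: fixed effect vs DerSimonian-Laird
# random effects. Per-trial estimates below are hypothetical.

import math

log_or = [-0.65, -0.20, -0.45, 0.10]   # per-trial log odds ratios
var = [0.12, 0.08, 0.20, 0.15]         # per-trial variances

# Fixed effect: weight each trial by 1 / variance
w = [1 / v for v in var]
fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)

# DerSimonian-Laird estimate of between-trial variance tau^2
k = len(log_or)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_or))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random effects: add tau^2 to each trial's variance before weighting
w_re = [1 / (v + tau2) for v in var]
random_eff = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)

print(f"Fixed-effect pooled OR:   {math.exp(fixed):.2f}")
print(f"Random-effects pooled OR: {math.exp(random_eff):.2f}")
```

Note that when the trials are heterogeneous, tau squared is large and the random-effects weights become more nearly equal, so small trials count for relatively more than under the fixed-effect model.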
[Figure omitted. Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995]
Quality scores
• Rating scales and checklists to assess the methodological quality of RCTs
• How should they be used?
  • qualitative assessment
  • exclusion of weaker studies
  • weighting of estimates (see the sketch below)
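One way quality scores have been used is to down-weight weaker trials in the pooled estimate. The sketch below multiplies each trial's inverse-variance weight by a hypothetical 0-1 quality score; this is one of several proposed weighting schemes, not a standard method:

```python
# Sketch of quality-weighted pooling: inverse-variance weights
# scaled by a 0-1 quality score. All values are hypothetical.

log_or = [-0.65, -0.20, -0.45, 0.10]
var = [0.12, 0.08, 0.20, 0.15]
quality = [0.9, 0.6, 0.8, 0.4]   # quality scores on a 0-1 scale

w = [q / v for q, v in zip(quality, var)]
pooled = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
print(f"Quality-weighted pooled log OR: {pooled:.2f}")
```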
Does quality of trials affect the estimate of intervention efficacy? (Moher et al, 1998)
• Random sample of 11 meta-analyses comprising 127 RCTs
• Replicated the analyses
• Used quality scales/measures
• Results:
  • masked abstraction yielded higher quality scores than unmasked abstraction
  • low-quality trials showed stronger effects than high-quality trials
  • quality-weighted analysis resulted in lower statistical heterogeneity
Unresolved questions about meta-analysis
• Apples and oranges?
  • between-study differences in study population, design, outcome measures, etc.
• Inclusion of weak studies?
• Publication bias:
  • methods to evaluate its impact (e.g., the funnel plot sketched below)
  • a particular concern with small studies
• Is it better to do good original studies?
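One standard method for evaluating the impact of publication bias is the funnel plot: effect estimates plotted against precision should form a symmetric funnel, and asymmetry among the small (low-precision) studies suggests missing trials. A minimal sketch using simulated, unbiased data:

```python
# Funnel plot sketch: effect size vs precision. With no publication
# bias, points form a symmetric funnel around the true effect.
# Data are simulated, not from any real meta-analysis.

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40
se = rng.uniform(0.05, 0.5, n_trials)   # per-trial standard errors
log_or = rng.normal(-0.3, se)           # effects around a true -0.3

plt.scatter(log_or, 1 / se, alpha=0.7)
plt.axvline(-0.3, linestyle="--", color="grey")
plt.xlabel("log odds ratio")
plt.ylabel("precision (1 / SE)")
plt.title("Funnel plot (simulated, no bias)")
plt.show()
```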
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)
• Selected meta-analyses from Medline and the Cochrane pregnancy and childbirth database with at least 1 “large” study and 2 smaller studies:
  • sample size approach (n = 1000+): 79 meta-analyses
  • statistical power approach (adequate size to detect the treatment effect from the pooled analysis): 61 meta-analyses
• Results:
  • agreement between larger trials and meta-analyses was 82-90% using random effects models
  • greater disagreement using fixed effects models
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)
• Results:
  • agreement between larger trials and meta-analyses was 82-90% using random effects models
  • greater disagreement using fixed effects models
• Conclusion:
  • large and small trial results generally agree
  • each type of trial has advantages and disadvantages:
    • large trials provide more stable estimates of effect
    • small trials may better reflect the heterogeneity of clinical populations
[Figure omitted: risk ratios from large studies vs pooled smaller studies; left panel, sample size approach; right panel, statistical power approach. Source: Cappelleri et al, JAMA 1996; 276: 1332-1338]
Discrepancies between meta-analyses and subsequent large RCTs (LeLorier et al, 1997)
• Compared results of 12 large (n = 1000+) RCTs with the results of 19 prior meta-analyses (M-A) on the same topics
• For a total of 40 primary and secondary outcomes, agreement between the large trials and the M-As was only fair (kappa = 0.35, 95% CI 0.06 to 0.64)
• Positive predictive value of the M-As = 68%
• Negative predictive value of the M-As = 67%
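To make these agreement statistics concrete, the sketch below computes PPV and NPV from a hypothetical 2x2 table of M-A vs large-trial conclusions. The counts are chosen only so the totals roughly match the reported figures; they are not the study's actual data:

```python
# Sketch of PPV/NPV of a meta-analysis judged against a subsequent
# large RCT. Counts are hypothetical, not from LeLorier et al.

tp, fp = 15, 7    # M-A positive: large trial positive / negative
fn, tn = 6, 12    # M-A negative: large trial positive / negative

ppv = tp / (tp + fp)    # positive predictive value of the M-A
npv = tn / (tn + fn)    # negative predictive value of the M-A
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")   # PPV 68%, NPV 67%
```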