
Lecture 10: Meta-analysis of intervention studies


Presentation Transcript


  1. Lecture 10: Meta-analysis of intervention studies • Introduction to meta-analysis • Selection of studies • Abstraction of information • Quality scores • Methods of analysis and presentation • Sources of bias

  2. Definitions • Traditional (narrative) review: • Selective, biased • Systematic review (overview): • Synthesis of studies of a research question • Explicit methods for study selection, data abstraction, and analysis (repeatable) • Meta-analysis: • Quantitative pooling of study results

  3. Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

  4. Protocol preparation • Research question • Study “population”: • search strategy • inclusion/exclusion criteria

  5. Protocol preparation • Search strategy: • computerized databases (Medline, CINAHL, PsycINFO, etc.): • test sensitivity and predictive value of search strategy • hand-searches (reference lists, relevant journals, colleagues) • “grey” (unpublished) literature: • pro: reduces publication bias • con: results less reliable

  6. Identifying relevant studies for systematic reviews of RCTs in vision research (Dickersin, in Systematic Reviews, BMJ, 1995) • Sensitivity and “precision” of Medline searching • Gold standard: • registry of RCTs in vision research • extensive computer and hand searches • contacts with investigators to clarify design • Sensitivity: • proportion of known RCTs identified by the search • “Precision”: • proportion of publications identified by the search that were RCTs
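Both search-performance measures on this slide are simple proportions over the gold-standard registry and the retrieved set, so they are easy to compute directly. Below is a minimal sketch; the function names and all counts are hypothetical illustrations, not Dickersin's data.

```python
# A minimal sketch of the two search-performance measures defined on the
# slide above. All counts are hypothetical, not Dickersin's data.

def search_sensitivity(known_rcts_found: int, known_rcts_total: int) -> float:
    """Proportion of known RCTs (the gold-standard registry) that the search found."""
    return known_rcts_found / known_rcts_total

def search_precision(rcts_retrieved: int, records_retrieved: int) -> float:
    """Proportion of retrieved records that are in fact RCTs."""
    return rcts_retrieved / records_retrieved

# Hypothetical example: the registry holds 200 RCTs; the Medline strategy
# retrieves 500 records, 160 of which are registry RCTs.
print(search_sensitivity(160, 200))  # 0.8
print(search_precision(160, 500))    # 0.32
```

Note the trade-off the slide implies: broadening a search strategy raises sensitivity but typically lowers precision, since more non-RCT records are swept in.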

  7. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  8. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  9. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  10. Protocol preparation • Study “population”: • inclusion/exclusion criteria: • language • study design • outcome of interest • etc. (Source: data abstraction form for meta-analysis project)

  11. Protocol preparation • Data collection: • standardized abstraction form • number of abstractors • blinding of abstractors • rules for resolving discrepancies (consensus, other) • use of quality scores

  12. Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

  13. Analysis • Measure of effect: • odds ratio, risk/rate ratio • risk/rate difference • relative risk reduction • Graphical methods: • conventional (individual studies) • cumulative • exploring heterogeneity
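To make the effect measures on this slide concrete, here is a minimal sketch computing each one from a single trial's 2×2 table; the function and the counts are hypothetical.

```python
# A minimal sketch of the effect measures listed above, computed from a
# single trial's 2x2 table. All counts are hypothetical.

def effect_measures(a: int, b: int, c: int, d: int) -> dict:
    """a/b: events/non-events in the treated arm; c/d: the same in controls."""
    risk_t = a / (a + b)  # event risk, treated
    risk_c = c / (c + d)  # event risk, control
    return {
        "odds_ratio": (a * d) / (b * c),
        "risk_ratio": risk_t / risk_c,
        "risk_difference": risk_t - risk_c,
        "relative_risk_reduction": 1 - risk_t / risk_c,
    }

# Hypothetical trial: 15/100 events on treatment vs 30/100 on control.
print(effect_measures(15, 85, 30, 70))
# {'odds_ratio': 0.41..., 'risk_ratio': 0.5, 'risk_difference': -0.15,
#  'relative_risk_reduction': 0.5}
```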

  14. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  15. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  16. Analyses • Pooling results: • is it appropriate? • equivalent to pooling results from multi-centre trials • fixed effects methods (e.g., Mantel-Haenszel): • assume that all trials share the same underlying treatment effect • random effects methods (e.g., DerSimonian & Laird): • allow for heterogeneity of treatment effects
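The contrast between the two approaches can be sketched in a few lines. For brevity the fixed effects estimate below uses inverse-variance weighting rather than the Mantel-Haenszel weighting the slide names (a related but distinct method), together with the DerSimonian & Laird moment estimate of the between-study variance tau². All data are hypothetical.

```python
# A minimal sketch of fixed vs random effects pooling on the log odds
# ratio scale: inverse-variance fixed effects (not Mantel-Haenszel) plus
# the DerSimonian-Laird estimate of tau^2. Data are hypothetical.
import math

def pool(log_ors, variances):
    w = [1 / v for v in variances]  # fixed effects (inverse-variance) weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

    # Cochran's Q measures heterogeneity around the fixed effects estimate.
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)  # DerSimonian-Laird tau^2

    w_re = [1 / (v + tau2) for v in variances]  # random effects weights
    random_ = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    return math.exp(fixed), math.exp(random_), tau2

# Three hypothetical trials: log odds ratios with within-study variances.
# When tau^2 > 0, the random effects model weights the trials more evenly
# and yields a wider confidence interval than the fixed effects model.
print(pool([-0.7, -0.2, -0.9], [0.04, 0.09, 0.06]))
```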

  17. Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995

  18. Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233

  19. Quality scores • Rating scales and checklists to assess methodological quality of RCTs • How should they be used? • Qualitative assessment • Exclusion of weaker studies • Weighting of estimates
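The third use listed, weighting, can be illustrated with a small sketch: each study's inverse-variance weight is multiplied by a 0-1 quality score, so methodologically weaker studies contribute less to the pooled estimate. This is only an illustration of the idea, not an endorsed standard method, and all numbers are hypothetical.

```python
# A minimal sketch of quality-weighted pooling: each inverse-variance
# weight is scaled by a 0-1 quality score, down-weighting weaker studies.
# Illustration only; effects, variances, and scores are hypothetical.

def quality_weighted_mean(effects, variances, quality_scores):
    w = [q / v for q, v in zip(quality_scores, variances)]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Three hypothetical studies; the middle one (quality 0.3) is down-weighted.
print(quality_weighted_mean([-0.7, -0.2, -0.9],
                            [0.04, 0.09, 0.06],
                            [0.9, 0.3, 0.8]))
```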

  20. Does quality of trials affect estimates of intervention efficacy? (Moher et al, 1998) • Random sample of 11 meta-analyses comprising 127 RCTs • Replicated the analyses • Used quality scales/measures • Results: • masked abstraction produced higher quality scores than unmasked • low-quality trials found stronger effects than high-quality trials • quality-weighted analysis resulted in lower statistical heterogeneity

  21. Source: Moher et al, Lancet 1998, 352: 609-13

  22. Source: Moher et al, Lancet 1998, 352: 609-13

  23. Source: Moher et al, Lancet 1998, 352: 609-13

  24. Unresolved questions about meta-analysis • Apples and oranges? • Between-study differences in study population, design, outcome measures, etc. • Inclusion of weak studies? • Publication bias: • methods to evaluate its impact, particularly with small studies • Is it better to do good original studies?
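One widely used method for evaluating the impact of publication bias is Egger's regression test for funnel-plot asymmetry. The sketch below implements its core idea: regress the standardized effect (effect / SE) on precision (1 / SE); an intercept far from zero suggests that small studies report systematically different effects, the pattern expected when small null studies go unpublished. All data are hypothetical.

```python
# A minimal sketch of Egger's regression test for funnel-plot asymmetry,
# one common method for evaluating publication bias. Data are hypothetical.

def egger_intercept(effects, standard_errors):
    y = [e / s for e, s in zip(effects, standard_errors)]  # standardized effects
    x = [1 / s for s in standard_errors]                   # precisions
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # ordinary least-squares intercept

# Hypothetical studies in which the smaller trials (larger SEs) show
# larger protective effects: the classic asymmetric funnel pattern
# yields an intercept well away from zero.
print(egger_intercept([-1.2, -0.9, -0.5, -0.4], [0.5, 0.4, 0.2, 0.1]))
```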

  25. Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996) • Selected meta-analyses from Medline and the Cochrane pregnancy and childbirth database with at least 1 “large” study and 2 smaller studies: • sample size approach (n = 1000+) - 79 meta-analyses • statistical power approach (adequate size to detect the treatment effect from the pooled analysis) - 61 meta-analyses • Results: • agreement between larger trials and meta-analyses 82-90% using random effects models • greater disagreement using fixed effects models

  26. Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996) • Results: • agreement between larger trials and meta-analyses 82-90% using random effects models • greater disagreement using fixed effects models • Conclusion: • large and small trial results generally agree • each type of trial has advantages and disadvantages: • large trials provide more stable estimates of effect • small trials may better reflect the heterogeneity of clinical populations

  27. Risk ratios from large studies vs pooled smaller studies (Cappelleri et al, 1996) (left: sample size approach; right: statistical power approach) Source: Cappelleri et al, JAMA 1996, 276: 1332-1338

  28. Source: Cappelleri et al, JAMA 1996, 276: 1332-1338

  29. Discrepancies between meta-analyses and subsequent large RCTs (LeLorier et al, 1997) • Compared results of 12 large (n = 1000+) RCTs with results of 19 prior meta-analyses (M-A) on the same topics • For a total of 40 primary and secondary outcomes, agreement between the large trials and M-A was only fair (kappa = 0.35, 95% CI 0.06 to 0.64) • Positive predictive value of M-A = 68% • Negative predictive value of M-A = 67%
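The agreement statistics in this comparison follow from a 2×2 cross-classification of positive/negative conclusions, with the subsequent large RCT treated as the reference. The sketch below computes them; the counts are hypothetical, chosen only to land near the reported figures, not taken from the paper.

```python
# A minimal sketch of the agreement statistics used in the comparison:
# Cohen's kappa plus the predictive values of the meta-analysis result,
# with the subsequent large RCT as the reference. Counts are hypothetical.

def kappa_and_predictive_values(tp, fp, fn, tn):
    """tp: both positive; fp: M-A positive / RCT negative;
    fn: M-A negative / RCT positive; tn: both negative."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    # chance agreement expected from the marginal proportions
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    ppv = tp / (tp + fp)  # P(large RCT positive | M-A positive)
    npv = tn / (tn + fn)  # P(large RCT negative | M-A negative)
    return kappa, ppv, npv

print(kappa_and_predictive_values(15, 7, 6, 12))  # ~ (0.35, 0.68, 0.67)
```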

  30. Source: LeLorier et al, NEJM 1997, 337: 536-42

  31. Source: LeLorier et al, NEJM 1997, 337: 536-42
