Advanced topics in meta-analysis Wim Van den Noortgate Katholieke Universiteit Leuven, Belgium Belgian Campbell Group Wim.VandenNoortgate@kuleuven-kortrijk.be Workshop systematic reviews Leuven June 4-6, 2012
Content • Modelling heterogeneity • Publication bias
Growing popularity of evidence-based thinking: Decisions in practice and policy should be based on scientific research about the effects of these decisions/interventions But: conflicting results (failures to replicate), especially in social sciences!
Explanation for failures to replicate? 1. The role of chance - in measuring variables - in sampling study participants 2. Study results may be systematically biased due to - the way variables are measured - the way the study is set up 3. Studies differ from each other (e.g., in the kind of treatment, the duration of treatment, the dependent variable, the characteristics of the investigated population, …)
Fixed effects model Differences between observed effect sizes due to chance only Population effect sizes all equal
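Under the fixed effects model, the pooled estimate is simply an inverse-variance weighted mean of the observed effect sizes. A minimal Python sketch, using made-up standardized mean differences and sampling variances (all numbers are illustrative, not from any study in the slides):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean under the fixed effects model:
    all studies estimate one common population effect, so each study
    is weighted by 1 / its sampling variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se

# Illustrative standardized mean differences and their sampling variances
d = [0.30, 0.10, 0.45, 0.20]
v = [0.04, 0.02, 0.05, 0.03]
pooled, se = fixed_effect_pool(d, v)
```

More precise studies (smaller variances) pull the pooled estimate toward their own effect size, which is why the result lies closer to 0.10 than a plain average would.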
Measuring heterogeneity: I² = percentage of variability in effect estimates due to heterogeneity rather than chance • Rough guidelines: • 0% to 40%: might not be important • 30% to 60%: may represent moderate heterogeneity • 50% to 90%: may represent substantial heterogeneity • 75% to 100%: considerable heterogeneity • Base the interpretation on both I² and the heterogeneity test!
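I² can be computed from Cochran's Q homogeneity statistic as I² = max(0, (Q − df)/Q) × 100. A sketch with three illustrative effect sizes of equal sampling variance (chosen so that heterogeneity is clearly "considerable" on the guideline above):

```python
def q_and_i2(effects, variances):
    """Cochran's Q homogeneity statistic and the I² heterogeneity
    index, I² = max(0, (Q - df) / Q) * 100."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - pooled) ** 2 for wi, di in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Three illustrative effect sizes with equal sampling variance 0.01
q, i2 = q_and_i2([0.1, 0.5, 0.9], [0.01, 0.01, 0.01])
```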
An example in education (Raudenbush, S. W. (1984). Magnitude of teacher expectancy effects on pupil IQ as a function of the credibility of expectancy induction: A synthesis of findings from 18 experiments. Journal of Educational Psychology, 76, 85-97.)
Mixing apples and oranges? • Not always wise: consider making the set of studies more homogeneous! • But mixing can help to say something about ‘fruit’ • And can help to make detailed conclusions: does the effect depend on the kind of fruit?
Fixed effects model with categorical moderator variable Population effect size possibly depends on study category Differences between observed effect sizes within the same category due to chance only
An example in education (Raudenbush, S. W. (1984). Magnitude of teacher expectancy effects on pupil IQ as a function of the credibility of expectancy induction: A synthesis of findings from 18 experiments. Journal of Educational Psychology, 76, 85-97.)
Total variability in observed ES’s = variability between groups + variability within groups: a weighted ANOVA, QT = QB + QW • QT: homogeneity test, under H0 QT ~ χ²(k−1) • QB: moderator test, under H0 QB ~ χ²(J−1) • QW: test for within-group homogeneity, under H0 QW ~ χ²(k−J) (k = number of effect sizes, J = number of groups)
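The weighted-ANOVA partition above can be sketched directly: compute Q within each moderator category, sum them to get QW, and obtain QB as the remainder of QT. The groups and numbers below are illustrative:

```python
def q_partition(effects, variances, groups):
    """Weighted ANOVA partition QT = QB + QW for a categorical
    moderator: QW sums the homogeneity statistics computed within
    each group; QB is the remainder of the total QT."""
    w = [1.0 / v for v in variances]

    def q_stat(es, ws):
        mean = sum(wi * di for wi, di in zip(ws, es)) / sum(ws)
        return sum(wi * (di - mean) ** 2 for wi, di in zip(ws, es))

    q_total = q_stat(effects, w)
    q_within = sum(
        q_stat([d for d, g in zip(effects, groups) if g == grp],
               [wi for wi, g in zip(w, groups) if g == grp])
        for grp in set(groups))
    return q_total, q_total - q_within, q_within

# Two illustrative groups of two studies each, equal variances
qt, qb, qw = q_partition([0.1, 0.2, 0.6, 0.7],
                         [0.01] * 4, ['a', 'a', 'b', 'b'])
```

Here nearly all heterogeneity sits between the groups (QB) rather than within them (QW), which is the pattern a useful moderator produces.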
A second example (using a sorted caterpillar plot of the observed ES’s around the REM mean ES)
Fixed effects model with continuous moderator variable Population effect size possibly depends on continuous study characteristic e.g., After taking into account this study characteristic, differences between observed effect sizes due to chance only
Conclusions for Raudenbush (1984): the initial effect is moderate (0.41, p < .001), but decreases with increasing prior teacher-pupil contact (-0.16 per week, p < .001)
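A fixed effects meta-regression with one continuous moderator has a closed-form weighted least squares solution. The sketch below uses fabricated effect sizes chosen only to mimic the reported pattern (intercept 0.41, slope -0.16 per week of prior contact); they are not Raudenbush's actual data:

```python
def wls_meta_regression(effects, variances, x):
    """Fixed effects meta-regression d_i = b0 + b1 * x_i + e_i,
    fitted by weighted least squares with weights 1 / v_i
    (closed form for a single continuous moderator)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    dbar = sum(wi * di for wi, di in zip(w, effects)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxd = sum(wi * (xi - xbar) * (di - dbar)
              for wi, xi, di in zip(w, x, effects))
    b1 = sxd / sxx          # slope: change in ES per unit of the moderator
    b0 = dbar - b1 * xbar   # intercept: expected ES at moderator = 0
    return b0, b1

# Fabricated data mimicking the reported pattern:
# x = weeks of prior contact, effects decline linearly
b0, b1 = wls_meta_regression([0.41, 0.25, 0.09, -0.07],
                             [0.01] * 4, [0, 1, 2, 3])
```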
Random effects model Population effect size possibly varies randomly over studies Differences between observed effect sizes are due to - chance - ‘true’ differences
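The slides do not name an estimator for the between-study variance; a standard choice is the DerSimonian-Laird moment estimator. A sketch (same illustrative numbers as the I² example): estimate τ² from Q, then re-pool with weights 1 / (v_i + τ²):

```python
def dersimonian_laird(effects, variances):
    """Random effects pooling with the DerSimonian-Laird moment
    estimator of the between-study variance tau²; studies are then
    re-weighted by 1 / (v_i + tau²)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    pooled_fe = sum(wi * di for wi, di in zip(w, effects)) / sw
    q = sum(wi * (di - pooled_fe) ** 2 for wi, di in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncated at 0
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    return tau2, pooled_re

tau2, pooled_re = dersimonian_laird([0.1, 0.5, 0.9], [0.01] * 3)
```

Because τ² enters every weight equally, the random effects weights are more similar across studies than the fixed effects weights, so large studies dominate less.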
Random effects model with categorical moderator variable Population effect size possibly depends on study category Differences between observed effect sizes within the same category are due to - chance - ‘true’ differences
Random effects model with continuous moderator variable Population effect size possibly depends on continuous study characteristic e.g., After taking into account this study characteristic, differences between observed effect sizes are due to - chance - ‘true’ differences
Random effects model with moderators: • The least restrictive model: allows moderator variables & random variation • Also called a ‘Mixed effects model’
Basic meta-analytic questions • Is there an overall effect? • How large is this effect? • Is the effect the same in all studies? • How large is the variation over studies? • Is this variation related to study characteristics? • Is there variation that remains unexplained? • What is the effect in the specific studies?
An example in education (Raudenbush, S. W. (1984). Magnitude of teacher expectancy effects on pupil IQ as a function of the credibility of expectancy induction: A synthesis of findings from 18 experiments. Journal of Educational Psychology, 76, 85-97.)
Remarks • Models can include multiple moderators • REM assumes randomly sampled studies • REM requires enough studies • Association (over studies) ≠ causation! Be aware of potential confounding moderators (studies are not ‘RCT participants’!)
Note: Sources of dependencies Dependencies between studies • E.g., research group, country, … Multiple effect sizes per study • Several samples • Same sample but, e.g., several indicator variables
How to account for dependence? • Ignoring dependence? NO! • Avoiding dependence • (Randomly choosing one ES for each study) • Averaging ES’s within a study • Performing separate meta-analyses for each kind of treatment or indicator • Modelling dependence • Performing a multivariate meta-analysis, accounting for sampling covariance. • Performing a three level analysis
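The "averaging ES's within a study" option above can be sketched as follows. Note the caveat in the comments: the naive variance formula assumes the ES's within a study are independent, which understates the true variance when they come from the same sample:

```python
def average_within_study(study_ids, effects, variances):
    """Avoid dependence by collapsing multiple effect sizes per study
    into one mean ES per study, so each study contributes a single
    (effect, variance) pair to the meta-analysis."""
    by_study = {}
    for sid, d, v in zip(study_ids, effects, variances):
        by_study.setdefault(sid, []).append((d, v))
    averaged = {}
    for sid, pairs in by_study.items():
        k = len(pairs)
        d_mean = sum(d for d, _ in pairs) / k
        # variance of a mean of k ES's, assuming (naively) that they
        # are independent; positive within-study covariance would
        # make the true variance larger than this
        v_mean = sum(v for _, v in pairs) / k ** 2
        averaged[sid] = (d_mean, v_mean)
    return averaged

# Illustrative: study s1 reports two effect sizes, s2 reports one
averaged = average_within_study(['s1', 's1', 's2'],
                                [0.2, 0.4, 0.6],
                                [0.04, 0.04, 0.09])
```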
Example 1: advanced ovarian cancer: monotherapy alkylating agent vs. combination chemotherapy? International Cancer Research Data Bank (Egger, M., & Davey Smith, G. (1998). Meta-analysis: Bias in location and selection of studies. British Medical Journal, 316, 61-66. http://www.bmj.com/cgi/content/full/316/7124/61).
Example 2: 510 large trials presented at conferences of the American Society of Clinical Oncology (ASCO) Proportion published within 5 years after the conference: • 81% (of 233 trials) for significant results • 68% (of 287 trials) for nonsignificant results (Krzyzanowska, M. K., Pintilie, M., & Tannock, I. F. (2003). Factors associated with failure to publish large randomized trials presented at an oncology meeting. Journal of the American Medical Association, 290, 495-501).
Thorough search for all relevant published and unpublished study results • Articles • Books • Conference papers • Dissertations • (Un)finished research reports • …
Note: sensitivity analysis: how robust are the conclusions? • outliers: detection using graphs (or tests); conduct the analysis with and without outliers • effect size calculation: repeat the analysis with alternative calculations • publication bias: analysis with and without unpublished results • design & quality: compare results from studies with a strong design or good quality with those of all studies • researcher: literature search, effect size calculation, quality coding, …, done by two researchers • …
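One common mechanical form of the "analysis with and without outliers" idea is a leave-one-out analysis: re-pool the mean with each study removed in turn and see how much any single study moves the conclusion. A sketch under the fixed effects model, with illustrative numbers in which the third study is an outlier:

```python
def leave_one_out(effects, variances):
    """Leave-one-out sensitivity analysis: recompute the fixed
    effects pooled mean with each study removed in turn, to check
    whether any single study drives the overall conclusion."""
    pooled = []
    for i in range(len(effects)):
        es = effects[:i] + effects[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        w = [1.0 / v for v in vs]
        pooled.append(sum(wi * di for wi, di in zip(w, es)) / sum(w))
    return pooled

# Illustrative: the third study looks like an outlier
loo = leave_one_out([0.2, 0.2, 1.0], [0.01] * 3)
```

Dropping the outlier shifts the pooled mean from well above 0.2 down to 0.2 exactly, flagging that study for closer inspection.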
Software for MA • Spreadsheets (e.g., MS Excel, …) • Some general statistical software (note: often not possible to fix the sampling variance): SAS Proc Mixed, S-PLUS, R (metafor package), … • Software for meta-analysis (note: often no mixed effects model; often only one moderator!): CMA (http://www.meta-analysis.com/), RevMan, … • Software for multilevel/mixed models: HLM, MLwiN, …
Recommended literature: • Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.) (2009). The handbook of research synthesis and meta-analysis. New York: The Russell Sage Foundation. • Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage. • Van den Noortgate, W., & Onghena, P. (2005). Meta-analysis. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of Statistics in Behavioral Science (Vol. 3, pp. 1206-1217). Chichester, UK: John Wiley & Sons.
Recommended sites • Site of David Wilson http://mason.gmu.edu/~dwilsonb/ma.html • Site of William Shadish faculty.ucmerced.edu/wshadish/