Meta-analysis: pooling study results Simon Thornley
Objectives • Understand the philosophy of meta-analysis and its contribution to epidemiology and science. • Understand the limitations of meta-analysis.
Introduction • Systematic, quantitative integration of the results of several independent studies • Distinct from a narrative review by an “expert” • Synthesis of published information • Usually considered appropriate only for RCTs • Still controversial even in this context • Google search on “meta-analysis”: 8 million hits!
Criticism • “Statistical alchemy” for the 21st century • “The intellectual allure of making mathematical models and aggregating collections of studies has been used as an escape from the more fundamental scientific challenges” (Feinstein)
Purposes of meta-analysis • Overcomes the inefficiency of traditional narrative reviews • Allows researchers to keep abreast of accumulating evidence • Resolution of uncertainty when research disagrees? • Increases statistical power and enhances the precision of effect estimates, especially for small effects • Allows exploratory analyses (subgroups)
Inadequate sample size? (dealing with Type-2 error) • Single trials are often too small to detect moderate effects • (low power means a high chance of a Type-2 error, i.e. a false negative) • Investigators are often over-enthusiastic about the size of treatment effects when planning sample size • Meta-analysis doesn’t deal with other threats to study validity (bias, measurement error); in fact, it may amplify them • e.g. CVD death vs. total mortality
Statistical test result vs. the truth of H0 • H0 true: accept H0 → OK; reject H0 → Type-1 error • H0 false: accept H0 → Type-2 error; reject H0 → OK • Probability of a Type-1 error = alpha (α), usually fixed at, say, 0.05 • Probability of a Type-2 error = beta (β) = 1 − power
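A minimal sketch of the sample-size point above, assuming a two-arm trial with a binary outcome and using the normal approximation for comparing two proportions (the 15% vs 12% event risks and the sample sizes are purely illustrative; scipy is assumed available):

```python
# Minimal sketch: power, and hence Type-2 error, of a two-arm trial with a
# binary outcome, using the normal approximation for two proportions.
# Event risks and sample sizes below are illustrative only.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p_control, p_treated, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions."""
    z_crit = norm.ppf(1 - alpha / 2)
    p_bar = (p_control + p_treated) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)        # SE under H0
    se_alt = sqrt(p_control * (1 - p_control) / n_per_arm
                  + p_treated * (1 - p_treated) / n_per_arm)   # SE under H1
    z = (abs(p_control - p_treated) - z_crit * se_null) / se_alt
    return norm.cdf(z)

# A moderate effect (15% vs 12% event risk) needs roughly 2000 per arm for 80% power:
for n in (200, 1000, 5000):
    pw = power_two_proportions(0.15, 0.12, n)
    print(f"n per arm = {n:>5}: power = {pw:.2f}, Type-2 error = {1 - pw:.2f}")
```

Pooling several such under-powered trials in a meta-analysis is, in effect, a way of buying back the sample size each trial lacked on its own.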
[Figure from the random error lecture: study odds ratios. An average odds ratio of 21? Is this consistent across studies?]
Which studies? • Need a defined question and stated MeSH terms • Reproducible • Exhaustive search • Unpublished and published studies • Variety of databases
Typical summary outcome measures Binary: • Relative risk • Odds ratio • Risk difference • NNT [= 1/RD] • Incidence rate ratios (person-time data) Continuous: • Difference in means • Standardized difference in means • Survival measures
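As a concrete illustration of the binary measures, all of them can be read off the same 2×2 table; a minimal sketch with hypothetical counts:

```python
# Minimal sketch: binary summary measures from a single 2x2 table.
# The counts below are hypothetical, purely for illustration.
a, b = 30, 270   # treated arm: events, non-events
c, d = 50, 250   # control arm: events, non-events

risk_treated = a / (a + b)
risk_control = c / (c + d)

rr = risk_treated / risk_control    # relative risk
odds_ratio = (a / b) / (c / d)      # odds ratio
rd = risk_treated - risk_control    # risk difference
nnt = 1 / abs(rd)                   # number needed to treat = 1/|RD|

print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}, RD = {rd:.3f}, NNT = {nnt:.1f}")
```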
Methods of analysis Fixed effect: • Assumes one underlying true effect for each study; differences between studies are due only to random error • Mantel-Haenszel method: treat each trial as a “stratum” and take a weighted average of effects • O-E (Peto) method: for a binary outcome (e.g. death), Oi = observed number of deaths on treatment in trial i, Ei = expected number of deaths assuming no treatment effect; look at the average of Oi − Ei over all trials Random effect: • Assumes a distribution of true effects; the aim is to estimate the mean of that distribution • Greater heterogeneity → greater variation • Gives greater weight to small studies than the fixed-effect method • More conservative (wider confidence interval around the effect estimate than the fixed-effect method)
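A minimal sketch of the two approaches applied to log odds ratios. It uses inverse-variance weighting for the fixed-effect estimate (a standard alternative to the Mantel-Haenszel and Peto methods named above) and a DerSimonian-Laird estimate of between-study variance for the random-effects model; the study data are invented:

```python
# Minimal sketch: inverse-variance fixed-effect pooling and a
# DerSimonian-Laird random-effects estimate on log odds ratios.
# Study estimates and variances are invented for illustration.
import numpy as np

log_or = np.array([0.10, 0.35, -0.05, 0.25])   # per-study log odds ratios
var = np.array([0.04, 0.09, 0.02, 0.06])       # per-study variances

# Fixed effect: weight each study by 1/variance.
w_fixed = 1 / var
theta_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)

# DerSimonian-Laird between-study variance (tau^2) from Cochran's Q.
q = np.sum(w_fixed * (log_or - theta_fixed) ** 2)
df = len(log_or) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random effects: add tau^2 to each study's variance before weighting,
# which evens out the weights (relatively more weight to small studies).
w_random = 1 / (var + tau2)
theta_random = np.sum(w_random * log_or) / np.sum(w_random)

print(f"Fixed-effect pooled OR   = {np.exp(theta_fixed):.2f}")
print(f"Random-effects pooled OR = {np.exp(theta_random):.2f} (tau^2 = {tau2:.3f})")
```

With little heterogeneity tau² shrinks to zero and the two estimates coincide; as heterogeneity grows, the random-effects interval widens, which is the conservatism noted above.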
When meta-analysis goes bad… • In CVD drug research, CVD-specific outcomes are often favoured over total mortality • Which would you prefer?
Publication bias: other methods • Ioannidis JPA, Trikalinos TA. An exploratory test for an excess of significant findings. Clin Trials 2007;4(3):245-53. • Calculate the expected number of positive studies, given: • Sample size of individual studies • Number of events in controls • Summary effect (assumed true) • Then compare the expected number with the number of positive studies actually observed
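A rough sketch of the idea behind this excess-significance test: estimate each study's power at the assumed true summary effect, sum the powers to get the expected number of significant ("positive") studies, and compare that with the observed count. The per-study standard errors, the assumed true log odds ratio, and the observed count below are invented, and the closing binomial comparison is a crude stand-in for the chi-square/exact test used in the paper:

```python
# Rough sketch of the excess-significance idea (Ioannidis & Trikalinos 2007).
# Study standard errors, the assumed true effect, and the observed count
# of significant studies are invented for illustration.
from scipy.stats import norm, binom

se = [0.20, 0.35, 0.15, 0.40, 0.25]   # per-study standard errors of log OR
true_log_or = 0.18                    # assumed true summary effect (log OR)
alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)

# Power of a two-sided Wald test when the estimate ~ N(true_log_or, se^2).
powers = [1 - norm.cdf(z_crit - true_log_or / s) + norm.cdf(-z_crit - true_log_or / s)
          for s in se]
expected_positive = sum(powers)

observed_positive = 4                 # hypothetical count of significant studies
# Crude binomial comparison using the average power (the paper uses a
# chi-square / exact test): how surprising is the observed count?
p_excess = 1 - binom.cdf(observed_positive - 1, len(se), expected_positive / len(se))

print(f"Expected positive studies: {expected_positive:.1f} of {len(se)}")
print(f"Observed: {observed_positive}; P(at least this many) ~ {p_excess:.3f}")
```

Many more significant studies than the powers predict suggests publication bias or other selective reporting.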
Problems • Combining heterogeneous studies (apples and oranges) • Combining good and bad studies, i.e. study quality (good and bad apples) • Publication bias (tasty apples only) • The “Flat Earth” criticism, reductionism (Braeburns only) • Combining data: individual vs. summary data (stewed apples have a different character to raw) • Application to randomized studies only? • Type-2 error is only one of the problems with epidemiological studies
Meta-analysis in observational studies • Meta-analysis is often applied to observational studies • About as often as to RCTs (Egger et al.) • … with controversy … • Confounding and bias are unlikely to “cancel out” • Publication bias and “research initiation bias” (i.e. studies only done when an association is already suspected) • Different ways of reporting/analysing results (e.g. different outcome measures, confounders, models, exposure levels)
Summary • Meta-analyses are increasingly used • Logical only for RCTs? • Summarise the medical literature • Reduce Type-2 error by increasing sample size • Don’t deal with other types of epidemiological error (confounding, measurement error) • Prone to a unique type of error (publication bias), which can be difficult to detect