C2 Training: May 9 – 10, 2011 Data Analysis and Interpretation: Computing effect sizes
A brief introduction to effect sizes Meta-analysis expresses the results of each study using a quantitative index of effect size (ES). ESs are measures of the strength or magnitude of a relationship of interest. ESs have the advantage of being comparable (i.e., they estimate the same thing) across all of the studies and therefore can be summarized across studies in the meta-analysis. Also, they are relatively independent of sample size.
Effect Size Basics • Effect sizes can be expressed in many different metrics • d, r, odds ratio, risk ratio, etc. • So be sure to be specific about the metric! • Effect sizes can be unstandardized or standardized • Unstandardized = expressed in measurement units • Standardized = expressed in standardized measurement units
Unstandardized Effect Sizes • Examples • 5 point gain in IQ scores • 22% reduction in repeat offending • €600 savings per person • Unstandardized effect sizes are helpful in communicating intervention impacts • But in many systematic reviews they are not usable, since not all studies operationalize the dependent variable in the same way
Standardized Effect Sizes • Some standardized effect sizes are relatively easy to interpret • Correlation coefficient • Risk ratio • Others are not • Standardized mean difference (d) • Odds ratio, logged odds ratio
Types of effect size Most reviews use effect sizes from one of three families of effect sizes: • the d family, including the standardized mean difference, • the r family, including the correlation coefficient, and • the odds ratio (OR) family, including proportions and other measures for categorical data.
Effect size computation • Compute a measure of the “effect” of each study as our outcome • Range of effect sizes: • Differences between two groups on a continuous measure • Relationship between two continuous measures • Differences between two groups on frequency or incidence
Types of effect sizes • Standardized mean difference • Correlation Coefficient • Odds Ratios
Standardized mean difference • Used when we are interested in two-group comparisons using means • Groups could be two experimental groups, or in an observational study, two groups of interest such as boys versus girls.
Notation for study-level statistics n₁ and n₂ are the group sample sizes, Ȳ₁ and Ȳ₂ the group means, and s₁ and s₂ the group standard deviations.
Standardized mean difference ESsm = (Ȳ₁ − Ȳ₂) / s_pooled, where the pooled sample standard deviation is s_pooled = sqrt( [ (n₁ − 1)s₁² + (n₂ − 1)s₂² ] / (n₁ + n₂ − 2) ).
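To make the arithmetic concrete, here is a minimal Python sketch of the standardized mean difference and the pooled standard deviation as defined above; the function names and example numbers are illustrative, not taken from the training materials.

```python
import math

def pooled_sd(s1, s2, n1, n2):
    """Pooled sample standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def smd(mean1, mean2, s1, s2, n1, n2):
    """Standardized mean difference (ESsm): group difference in pooled-SD units."""
    return (mean1 - mean2) / pooled_sd(s1, s2, n1, n2)

# Illustrative values only (not the Henggeler et al. data):
print(round(smd(mean1=20.0, mean2=25.0, s1=15.0, s2=18.0, n1=43, n2=41), 2))
```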
Example • Table 1 from: Henggeler, S. W., Melton, G. B., & Smith, L. A. (1992). Family preservation using multisystemic therapy: An effective alternative to incarcerating serious juvenile offenders. Journal of Consulting and Clinical Psychology, 60(6), 953-961.
Note: Text of paper (p. 954) indicates that MST n = 43, usual services n = 41.
95% Confidence interval for ESsm The 95% confidence interval for the standardized mean difference in weeks of incarceration ranges from -1.06 sds to -0.18 sds. Given that the sd of weeks is 16.6, the juveniles in MST were incarcerated on average between 1.06 × 16.6 ≈ 17.6 and 0.18 × 16.6 ≈ 3.0 fewer weeks than juveniles in the standard treatment. In weeks, the confidence interval is [-17.6, -3.0].
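A sketch of how such an interval can be computed in Python, using the usual large-sample variance approximation for a standardized mean difference (whether this exact variance formula is the one used in the training materials is an assumption). The point estimate d = -0.62 is a hypothetical value chosen only because it reproduces roughly the interval quoted above; the group sizes are those reported in the paper's text.

```python
import math

def smd_variance(d, n1, n2):
    """Common large-sample variance approximation for a standardized mean difference."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def smd_ci(d, n1, n2, z=1.96):
    """Approximate 95% confidence interval for d (normal approximation)."""
    se = math.sqrt(smd_variance(d, n1, n2))
    return d - z * se, d + z * se

d = -0.62                      # hypothetical point estimate, not from the paper
lo, hi = smd_ci(d, n1=43, n2=41)
print(lo, hi)                  # interval on the standardized (sd) scale
print(lo * 16.6, hi * 16.6)    # converted back to weeks using sd = 16.6
```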
Practice computations • Compute effect size for number of arrests • Compute effect size with bias correction • Compute 95% confidence interval for effect size • Interpret the effect size
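For the bias-correction step, one common choice is Hedges' small-sample correction; assuming that is the correction intended here, a sketch looks like this (the d value is again only illustrative):

```python
def hedges_g(d, n1, n2):
    """Apply Hedges' small-sample bias correction to a standardized mean difference.

    Uses the common approximation J = 1 - 3 / (4 * df - 1), with df = n1 + n2 - 2.
    """
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Group sizes from the paper's text (MST n = 43, usual services n = 41):
print(hedges_g(d=-0.62, n1=43, n2=41))
```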
95% Confidence interval for ESsm The 95% confidence interval for the standardized mean difference in number of arrests ranges from -0.87 sds to -0.01 sds. Given that the sd of arrests is 1.44, the juveniles in MST had on average between 0.87 × 1.44 ≈ 1.25 and 0.01 × 1.44 ≈ 0.01 fewer arrests than juveniles in the standard treatment. In arrests, the confidence interval is [-1.25, -0.01].
Computing standardized mean differences The first steps in computing d effect sizes involve assessing what data are available and what’s missing. You will look for: • Sample size and unit information • Means and SDs or SEs for treatment and control groups • ANOVA tables • F or t tests in text, or • Tables of counts
Sample sizes Regardless of exactly what you compute, you will need the sample sizes (to correct for bias and to compute variances). Sample sizes can vary within studies, so check initial reports of n against • the n for each test or outcome, or • the df associated with each test
Standardized Mean Differences • Means, standard deviations, and sample sizes are the most direct method • Without individual group sample sizes (n1 and n2), assume equal group n's • Can also compute standardized mean differences from a t-statistic or a one-way F-statistic
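A sketch of the t-statistic route, assuming the reported t comes from an independent-samples comparison (the values below are illustrative):

```python
import math

def smd_from_t(t, n1, n2):
    """Recover a standardized mean difference from an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# If only the total N is reported, split it into equal group sizes:
print(smd_from_t(t=2.1, n1=42, n2=42))
```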
ESsm from F-tests (one-way) For a two-group comparison, the one-way F statistic equals t², so the magnitude of the effect size is |ESsm| = sqrt( F × (n₁ + n₂) / (n₁ × n₂) ). Note that you have to decide the direction (sign) of the effect given the results.
Standardized mean difference from F-test Note that we choose a negative effect size because the number of arrests is lower for the MST group than for the control group.
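A sketch of the F-statistic route. Because F = t² in a two-group one-way ANOVA, only the magnitude of the effect can be recovered from F; the sign has to be supplied by the analyst from the direction of the reported results, as noted above. The F value below is illustrative, not taken from the paper.

```python
import math

def smd_from_f(f, n1, n2, treatment_group_lower=True):
    """Standardized mean difference from a two-group one-way F statistic.

    F = t**2 in this design, so |d| = sqrt(F * (1/n1 + 1/n2)); the analyst
    chooses the sign (negative here when the treatment group scores lower).
    """
    magnitude = math.sqrt(f * (1 / n1 + 1 / n2))
    return -magnitude if treatment_group_lower else magnitude

# MST had fewer arrests than usual services, so the effect is coded as negative:
print(smd_from_f(f=4.0, n1=43, n2=41, treatment_group_lower=True))
```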