Psychology 290 Special Topics Study Course: Advanced Meta-analysis (February 5, 2014)
Overview • Review of properties of maximum likelihood estimates. • Likelihood equations. • Maximum likelihood and fixed-effect meta-analysis. • Likelihood-ratio tests. • Q-between and maximum likelihood.
Review of properties of maximum likelihood estimates • Maximum likelihood estimates are (asymptotically) minimum-variance estimators. • That means that, in large samples, no well-behaved estimator of the parameter has a more compact sampling distribution than that of the MLE. • That is a good thing.
Review of properties of maximum likelihood estimates • Maximum likelihood estimates are consistent. • Consistent estimators are ones that approach the true value of the parameter as the sample size becomes large. • That’s a good thing.
Review of properties of maximum likelihood estimates • Maximum likelihood estimates are often (but not always) biased. • A biased estimator is one that, on average, misses the true value of the parameter. • Although they are consistent, MLEs are often biased in smaller samples. • That's not a good thing.
Biased estimators • Even though MLEs are sometimes biased, they are still often the best estimators. • A slightly biased estimator with a tight sampling distribution can miss the true value by less, on average, than an unbiased estimator with a wide one. • [Plot omitted: sampling distributions of two estimators; X marks the true value of the parameter being estimated.]
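The point above can be checked with a classic case: for a normal sample, the MLE of the variance divides by n and is biased downward, yet its mean squared error is smaller than that of the unbiased divide-by-(n-1) estimator. A minimal Python sketch (the course demonstrations use R; the sample size, true variance, and replication count here are invented for illustration):

```python
import random

random.seed(1)
true_var = 4.0   # variance of the simulated population (sd = 2)
n = 10           # small sample, where the bias of the MLE matters
reps = 20000

mse_mle = mse_unbiased = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    # MLE divides by n (biased); the usual estimator divides by n - 1.
    mse_mle += (ss / n - true_var) ** 2
    mse_unbiased += (ss / (n - 1) - true_var) ** 2
mse_mle /= reps
mse_unbiased /= reps

# The biased MLE typically shows the smaller mean squared error.
print(mse_mle, mse_unbiased)
```

For n = 10 the theoretical MSEs are 0.19σ⁴ for the MLE and about 0.22σ⁴ for the unbiased estimator, so the biased estimator wins on average despite missing the target systematically.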
Likelihood equations • A basic concept in calculus is the idea of a line that is tangent to a curve. • The slope of such a tangent line is called the derivative of the function that defines the curve.
Likelihood equations (cont.) • When the curve reaches its maximum, the derivative has the value zero. • Note that this would also happen if the function reached a minimum. • However, many likelihood functions (including the ones we will deal with) have only a maximum.
Likelihood equations (cont.) • If we set the derivative of the log-likelihood function to zero and solve for the parameter, we can often find a closed-form expression for the MLE. • The equation "derivative = 0" is called the likelihood equation. • Let's consider the example of fixed-effect meta-analysis.
Maximum likelihood and fixed-effect meta-analysis • In meta-analysis, we have a vector T of effect sizes and a vector v of conditional variances. • In fixed-effect meta-analysis, each T_i is assumed to be normally distributed about a common mean θ with variance v_i.
The likelihood • Because the T_i are independent normals, this leads to the following likelihood:

L(\theta) = \prod_{i=1}^{k} \frac{1}{\sqrt{2\pi v_i}} \exp\!\left( -\frac{(T_i - \theta)^2}{2 v_i} \right)
The log-likelihood • If we take the log, we get:

\ell(\theta) = -\frac{1}{2} \sum_{i=1}^{k} \left[ \ln(2\pi v_i) + \frac{(T_i - \theta)^2}{v_i} \right]
The likelihood equation • The derivative of the log-likelihood is

\frac{d\ell}{d\theta} = \sum_{i=1}^{k} \frac{T_i - \theta}{v_i},

which leads to the following likelihood equation:

\sum_{i=1}^{k} \frac{T_i - \theta}{v_i} = 0.
Solving the likelihood equation • Apply algebra to solve for θ:

\sum_{i=1}^{k} \frac{T_i}{v_i} - \theta \sum_{i=1}^{k} \frac{1}{v_i} = 0
\quad\Rightarrow\quad
\hat{\theta} = \frac{\sum_{i=1}^{k} T_i / v_i}{\sum_{i=1}^{k} 1 / v_i}.
The MLE of θ • We have just shown that the conventional inverse-variance-weighted mean is the maximum likelihood estimate of the population effect. • (Demonstration in R.)
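The R demonstration can be sketched in Python: evaluate the fixed-effect log-likelihood over a grid of candidate values of θ and confirm that the inverse-variance-weighted mean is where it peaks. The effect sizes and conditional variances below are invented for illustration, not the course data:

```python
import math

# Hypothetical effect sizes and conditional variances (illustrative only).
T = [0.10, 0.30, 0.35, 0.20, 0.05]
v = [0.04, 0.02, 0.05, 0.01, 0.03]

def loglik(theta):
    """Fixed-effect log-likelihood: each T_i ~ Normal(theta, v_i)."""
    return sum(-0.5 * (math.log(2 * math.pi * vi) + (Ti - theta) ** 2 / vi)
               for Ti, vi in zip(T, v))

# Closed-form MLE: the inverse-variance-weighted mean.
theta_hat = sum(Ti / vi for Ti, vi in zip(T, v)) / sum(1 / vi for vi in v)

# Grid search over theta in [-0.5, 1.0]; the best grid point should
# sit next to theta_hat.
grid = [i / 1000 for i in range(-500, 1001)]
best = max(grid, key=loglik)
print(theta_hat, best)
```

The grid maximum lands within one grid step of the weighted mean, matching the closed-form solution of the likelihood equation.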
Likelihood-ratio tests • One very handy property of maximum likelihood is that there is an easy way to test whether anything is lost when a model is simplified. • A model is said to be nested within another model if one can produce the nested model by fixing parameters of the more complex model.
Likelihood-ratio tests (cont.) • Most often, a nested model is reached by fixing some parameters to zero. • Under such circumstances, if the constrained parameters really are zero in the population, then twice the difference between the log-likelihoods has (asymptotically) a chi-square distribution. • The degrees of freedom for the likelihood-ratio chi-square equal the number of parameters that were constrained.
An example of likelihood-ratio testing • A study of gender differences in conformity included effect-size estimates from studies with all male authors, and effect-size estimates from studies that had some female authors. • We are interested in testing whether the population effect differs for those two groups of studies. • (Digression in R)
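A Python sketch of the test the R digression performs: fit a full model with a separate mean per author-gender group and a reduced model with one common mean, then compare twice the log-likelihood difference with a chi-square on 1 df. The effect sizes below are invented stand-ins, not the conformity data:

```python
import math

def wmean(T, v):
    """Inverse-variance-weighted mean (the fixed-effect MLE)."""
    return sum(t / vi for t, vi in zip(T, v)) / sum(1 / vi for vi in v)

def loglik(T, v, theta):
    """Fixed-effect log-likelihood: each T_i ~ Normal(theta, v_i)."""
    return sum(-0.5 * (math.log(2 * math.pi * vi) + (t - theta) ** 2 / vi)
               for t, vi in zip(T, v))

# Hypothetical data: group 0 = all-male-author studies,
# group 1 = some-female-author studies (illustrative numbers only).
T0, v0 = [0.35, 0.20, 0.40], [0.05, 0.03, 0.06]
T1, v1 = [0.05, -0.10, 0.10, 0.00], [0.04, 0.05, 0.02, 0.03]

# Full model: a separate mean in each group.
ll_full = loglik(T0, v0, wmean(T0, v0)) + loglik(T1, v1, wmean(T1, v1))
# Reduced (nested) model: one common mean, i.e. group difference fixed to 0.
ll_reduced = loglik(T0 + T1, v0 + v1, wmean(T0 + T1, v0 + v1))

lr = 2 * (ll_full - ll_reduced)  # ~ chi-square, 1 df, under H0
print(lr)  # compare with 3.84, the .05 critical value for 1 df
```

The full model can never fit worse than the model nested inside it, so the statistic is non-negative by construction.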
Q-between and maximum likelihood • A common way to perform meta-analysis is to use weighted regression. • Group membership may be indicated by using a dummy variable (0 indicates first group, 1 indicates second group). • The sum of squares that forms the numerator of the F statistic in the regression output is Q-between. • (Example in R.)
Q-between and maximum likelihood (cont.) • We have just demonstrated empirically that the Q-between statistic is a likelihood-ratio chi-square (in the context of fixed-effect meta-analysis).
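The empirical demonstration can be sketched in Python. Rather than calling a weighted-regression routine, this computes Q-between directly as the weighted squared deviations of the group means from the grand mean, which is algebraically the same quantity as the regression numerator sum of squares under fixed-effect weights. The data are invented for illustration:

```python
import math

def wmean(T, v):
    """Inverse-variance-weighted mean (the fixed-effect MLE)."""
    return sum(t / vi for t, vi in zip(T, v)) / sum(1 / vi for vi in v)

def loglik(T, v, theta):
    """Fixed-effect log-likelihood: each T_i ~ Normal(theta, v_i)."""
    return sum(-0.5 * (math.log(2 * math.pi * vi) + (t - theta) ** 2 / vi)
               for t, vi in zip(T, v))

# Hypothetical two-group data (illustrative only).
T0, v0 = [0.35, 0.20, 0.40], [0.05, 0.03, 0.06]
T1, v1 = [0.05, -0.10, 0.10, 0.00], [0.04, 0.05, 0.02, 0.03]
T, v = T0 + T1, v0 + v1

# Q-between: weighted squared deviations of group means from the grand mean.
grand = wmean(T, v)
q_between = (sum(1 / vi for vi in v0) * (wmean(T0, v0) - grand) ** 2
             + sum(1 / vi for vi in v1) * (wmean(T1, v1) - grand) ** 2)

# Likelihood-ratio chi-square for the same model comparison.
lr = 2 * (loglik(T0, v0, wmean(T0, v0)) + loglik(T1, v1, wmean(T1, v1))
          - loglik(T, v, grand))

print(q_between, lr)  # the two statistics coincide
```

The equality is exact, not approximate: the log(2πv_i) terms cancel in the likelihood-ratio difference, and what remains is the partition of the total weighted sum of squares into within-group and between-group pieces.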
Next time • Random-effects meta-analysis using maximum likelihood.