Extension to ANOVA: From t to F
Review
• Comparisons of samples involving t-tests are restricted to the two-sample domain
• Independent samples tests examine differences between samples in which cases do not influence or contribute to one another
  • e.g., experimental vs. control, gender
• Paired samples tests examine differences among correlated samples
  • e.g., pre-post test, matched samples
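A minimal sketch of the two settings above, assuming simulated data and that scipy is available; the group sizes and effect sizes are made up purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Independent samples: e.g. control vs. experimental group (separate cases)
control = rng.normal(50, 10, size=30)
treatment = rng.normal(55, 10, size=30)
t_ind, p_ind = stats.ttest_ind(control, treatment)

# Paired samples: e.g. pre-test vs. post-test on the same cases
pre = rng.normal(50, 10, size=30)
post = pre + rng.normal(3, 5, size=30)
t_rel, p_rel = stats.ttest_rel(pre, post)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.3f}")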
Extension
• Analysis of Variance (ANOVA), so named even though most statistical analyses involve analyzing variance in some form, allows us to move beyond the two-sample setting
• One-way between groups design
  • Between groups factor(s) with 2 or more levels each
• One-way within groups design
  • Repeated measures factor(s) with 2 or more levels
• Interactions
• Mixed design
Between Groups Design
• For simplicity, consider a three-group setup
• Experimental setup: random assignment to control, treatment 1, and treatment 2 groups
• The null hypothesis is conceptually: H0: μc = μtreat1 = μtreat2
• The alternative is 'not H0'
• The GLM is Yij = μGM + τj + eij
• In other words, a person's score reflects the grand mean, the effect of being in group j (the difference between that group's mean and the grand mean), and 'error' variance, the variance within group j
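A hedged sketch of this three-group setup with simulated data; scipy's f_oneway gives the omnibus F test of H0 that all group means are equal, and the last lines show the GLM decomposition in code. The means and sample sizes are assumptions for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(50, 10, size=25)
treat1 = rng.normal(55, 10, size=25)
treat2 = rng.normal(60, 10, size=25)

# Omnibus one-way ANOVA: H0: mu_c = mu_treat1 = mu_treat2
F, p = stats.f_oneway(control, treat1, treat2)
print(f"F = {F:.2f}, p = {p:.4f}")

# GLM view: each score = grand mean + group effect (tau_j) + residual
grand_mean = np.mean(np.concatenate([control, treat1, treat2]))
tau_treat1 = treat1.mean() - grand_mean  # effect of being in treatment 1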
An Obvious Problem
• The issue with the omnibus ANOVA is that, like the nil hypothesis setting, it tests a hypothesis that no one is really interested in
• A statistically significant ANOVA can only tell us that there is some difference among the groups (at least between the largest and smallest means)
• However, we are typically concerned with specific group comparisons (rendering the omnibus one-way ANOVA fairly useless on its own)
Multiple Comparisons
• Comparisons can be planned in advance (a priori) or conducted in exploratory fashion (post hoc)
• Some techniques are applicable to either setting, some to only one or the other
• None of the procedures you would use today requires a significant omnibus ANOVA to be found first
• With planned comparisons, one has an idea of what to expect or has thought enough about the situation to narrow down the hypotheses to test
  • e.g., control vs. treatments, treatment 1 vs. treatment 2
• Even post hoc tests do not have to be conducted on every single comparison possible
• Planned comparisons are more statistically powerful, and fewer comparisons are more powerful than many
• A key issue with post hoc tests is control of the family-wise type I error rate, i.e., control of making a type I error among any of the pairwise comparisons (along with control for each comparison itself)
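One possible way to run the post hoc comparisons discussed above, assuming statsmodels is available; Tukey's HSD controls the family-wise type I error rate across all pairwise comparisons. Group labels and data are illustrative assumptions.

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(50, 10, 25),   # control
                         rng.normal(55, 10, 25),   # treatment 1
                         rng.normal(60, 10, 25)])  # treatment 2
groups = np.repeat(["control", "treat1", "treat2"], 25)

# All pairwise comparisons with family-wise error control at alpha = .05
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))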
Within Groups Designs
• Much of the same goes for repeated measures designs
• While more statistically powerful (all else being equal), they carry an additional assumption (sphericity)
• In general we are again looking for trends or specific comparisons within the design
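A sketch of a one-way repeated measures ANOVA, assuming long-format data with one row per subject per condition and that statsmodels is available; the sphericity assumption would still need to be checked separately (e.g., Mauchly's test). Variable names and data are assumptions.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
n = 20
df = pd.DataFrame({
    "subject":   np.repeat(np.arange(n), 3),
    "condition": np.tile(["time1", "time2", "time3"], n),
    "score":     rng.normal(50, 10, 3 * n),
})

# One within-subjects factor ("condition") with three levels
print(AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit())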
Interactions
• What we really like with ANOVA is the ability to test interactions
• For example, an academic study: random assignment to control and treatment groups, crossed with home vs. school setting
• Now we can see whether the treatment difference depends on which setting the student is in
  • e.g., no treatment effect at home, treatment > control at school
• Whenever there is more than one factor, we are primarily interested in interactions, since with a significant interaction any main effect depends on the levels of another factor
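A hedged sketch of the treatment-by-setting example: a two-way between groups ANOVA where the C(group):C(setting) term tests the interaction. The variable names, cell sizes, and the built-in "treatment helps only at school" effect are assumptions for illustration, not from any real study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n_per_cell = 25
df = pd.DataFrame({
    "group":   np.repeat(["control", "treatment"], 2 * n_per_cell),
    "setting": np.tile(np.repeat(["home", "school"], n_per_cell), 2),
})
# Build in an interaction: the treatment helps only in the school setting
effect = np.where((df["group"] == "treatment") & (df["setting"] == "school"), 8, 0)
df["score"] = 50 + effect + rng.normal(0, 10, len(df))

model = smf.ols("score ~ C(group) * C(setting)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term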
Mixed Design
• With mixed designs we have at least one between groups factor and at least one within groups factor
  • e.g., control vs. treatment, pre vs. post test
• We can test interactions between the two types of factors and, if more factors are involved, between multiple factors of the same type
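One way to fit the control-vs-treatment by pre-vs-post design sketched above, assuming the third-party pingouin package is available (its mixed_anova handles one between factor and one within factor); the data and column names are assumptions.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(5)
n = 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "group":   np.repeat(["control"] * (n // 2) + ["treatment"] * (n // 2), 2),
    "time":    np.tile(["pre", "post"], n),
    "score":   rng.normal(50, 10, 2 * n),
})

# Between factor: group; within factor: time; reports the interaction too
print(pg.mixed_anova(data=df, dv="score", within="time",
                     between="group", subject="subject"))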
Summary
• Regardless of the complexity of the design, people end up interpreting specific group comparisons or one-way situations, so if you understand t-tests and the one-way ANOVA, you're fine for interpreting papers that use more complex designs
• However, as you can probably tell, designs can get complicated quickly and one can easily get lost in the details
• ANOVA was extremely common when the typical approach to research was experimental, where it was appropriate to use
• As people looked at more complex models, common practice became to force non-experimental designs into ANOVA (e.g., by categorizing continuous variables) because that's what they knew
• Some statisticians consider ANOVA to be inappropriate, or at least uninterpretable, for non-experimental research
• Some even suggest that an experiment is the only place one can find a true 'interaction'
• Unless you are conducting an experiment, you'd probably do well to consider alternative approaches