Learn about systematic and chance variation, F-ratio, testing null hypothesis, types of errors, reporting F-ratio, ANOVA, when to use ANOVA, threats to internal validity, and achieving control in experiments.
BHS 204-01 Methods in Behavioral Sciences I
May 9, 2003
Chapters 6 and 7 (Ray)
Control: The Keystone of the Experimental Method
Sources of Variance
• Systematic variation – differences related to the experimental manipulation.
• Can also include differences related to uncontrolled variables (confounds) or systematic bias (e.g., faulty equipment or procedures).
• Chance variation – nonsystematic differences.
• Cannot be attributed to any particular factor.
• Also called “error”.
F-Ratio
• A comparison of the differences between groups with the differences within groups.
• Between-group variance = treatment effect + chance variance.
• Within-group variance = chance variance.
• If there is a treatment effect, then the between-group variance should be greater than the within-group variance.
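To make the comparison concrete, here is a minimal Python sketch that computes the F-ratio by hand as between-group variance divided by within-group variance. The scores and group sizes are invented for illustration, not taken from the course materials.

```python
# Minimal sketch of the F-ratio: between-group variance over within-group variance.
import numpy as np

groups = [np.array([55.0, 60, 48, 52, 58, 57, 55]),   # hypothetical treatment group
          np.array([21.0, 25, 18, 20, 23, 19, 21])]   # hypothetical control group

k = len(groups)                          # number of groups
n_total = sum(len(g) for g in groups)    # total number of scores
grand_mean = np.concatenate(groups).mean()

# Between-group sum of squares: reflects treatment effect + chance
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: reflects chance (error) only
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)        # numerator df = k - 1
ms_within = ss_within / (n_total - k)    # denominator df = N - k
F = ms_between / ms_within
print(f"F({k - 1}, {n_total - k}) = {F:.2f}")
```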
Testing the Null Hypothesis
• The between-group variance (treatment effect) must exceed the within-group variance (chance variation), so F > 1.0.
• How much greater?
• On the normal curve, a difference of about 2 standard deviations corresponds to p < .05, which is conventionally taken as a meaningful difference.
• The chosen p value (alpha level) is a compromise between the risk of accepting a false finding and the risk of failing to accept a true hypothesis.
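A short sketch of the “how much greater?” question: assuming hypothetical degrees of freedom of (1, 12) and an observed F of 4.75, it compares the observed F with the critical F at alpha = .05 using scipy.

```python
# Sketch: how much greater than 1.0 must F be to reject the null at alpha = .05?
from scipy import stats

alpha = 0.05
dfn, dfd = 1, 12          # hypothetical numerator / denominator degrees of freedom
F_observed = 4.75         # hypothetical observed F-ratio

F_critical = stats.f.ppf(1 - alpha, dfn, dfd)   # cutoff value for p < .05
p_value = stats.f.sf(F_observed, dfn, dfd)      # probability of an F this large under the null

print(f"Critical F = {F_critical:.2f}, observed F = {F_observed:.2f}, p = {p_value:.3f}")
# Reject the null only if the observed F exceeds the critical F (equivalently, p < alpha).
```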
Types of Errors
• Type I error – rejecting the null hypothesis when it is true, and so accepting the alternative when it is false (making a false claim).
• Its probability is the alpha level (the criterion p value); with alpha = .05, there is a 5% chance of a Type I error when the null is true.
• Type II error – accepting the null hypothesis when it is false, and so rejecting the alternative when it is true.
• Its probability is β (beta); the power of a test is 1 − β.
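The meaning of the .05 criterion can be checked by simulation. In the sketch below (group sizes, population parameters, and seed are arbitrary choices), both groups are drawn from the same population, so the null is true and every rejection is a Type I error; the rejection rate should land near .05.

```python
# Sketch: with alpha = .05, about 5% of tests reject a true null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims, n_per_group = 0.05, 5000, 15

false_rejections = 0
for _ in range(n_sims):
    # Both groups come from the same population, so any "effect" is chance variation.
    a = rng.normal(loc=50, scale=10, size=n_per_group)
    b = rng.normal(loc=50, scale=10, size=n_per_group)
    _, p = stats.f_oneway(a, b)
    false_rejections += p < alpha

print(f"Type I error rate: {false_rejections / n_sims:.3f} (alpha = {alpha})")
```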
Reporting the F-Ratio
• ANOVA is used to calculate the F-ratio.
• Example: The experimental group showed significantly greater weight gain (M = 55) compared to the control group (M = 21), F(1, 12) = 4.75, p = .05.
• Give the degrees of freedom for the numerator and the denominator.
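One way to produce a report line in that format is sketched below with scipy's one-way ANOVA; the weight-gain scores are invented for illustration.

```python
# Sketch: run a one-way ANOVA and format the result with its degrees of freedom.
import numpy as np
from scipy import stats

experimental = np.array([52.0, 58, 49, 61, 55, 56, 54])   # hypothetical weight gains
control = np.array([20.0, 24, 18, 22, 21, 19, 23])

F, p = stats.f_oneway(experimental, control)
dfn = 2 - 1                                       # numerator df: groups - 1
dfd = len(experimental) + len(control) - 2        # denominator df: N - groups

print(f"Experimental M = {experimental.mean():.0f}, control M = {control.mean():.0f}, "
      f"F({dfn}, {dfd}) = {F:.2f}, p = {p:.3f}")
```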
When to Use ANOVA
• When there are two or more independent groups.
• When the population is likely to be normally distributed.
• When variance is similar within the groups compared.
• When group sizes (Ns) are close to equal.
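These assumptions can be examined before running the ANOVA. The sketch below uses hypothetical group data with scipy's Shapiro-Wilk test (normality within each group) and Levene's test (similar variances).

```python
# Sketch: screening the ANOVA assumptions listed above on hypothetical data.
import numpy as np
from scipy import stats

groups = [np.array([55.0, 60, 48, 52, 58]),
          np.array([40.0, 44, 39, 42, 45]),
          np.array([21.0, 25, 18, 20, 23])]

for i, g in enumerate(groups, start=1):
    _, p_norm = stats.shapiro(g)                 # test of normality within the group
    print(f"Group {i}: n = {len(g)}, Shapiro-Wilk p = {p_norm:.2f}")

_, p_var = stats.levene(*groups)                 # test of equal variances across groups
print(f"Levene's test for equal variances: p = {p_var:.2f}")
# Large p values here are consistent with normality and similar within-group variances.
```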
Threats to Internal Validity
• It is the experimenter’s job to eliminate as many threats to internal validity as possible.
• Such threats constitute sources of systematic variance that can be confused with an effect, resulting in a Type I error.
• Potential threats to validity must be evaluated in the Discussion section of the research report.
Two Ways of Achieving Control
• Participant assignment and selection:
  • Random sampling.
  • Random assignment to conditions.
• Experimental design:
  • Add a control group.
  • Include a baseline measurement before the treatment (pretest).
  • Treat subjects consistently across all groups.
  • Four-group design tests for effects of the testing.
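A small sketch of the assignment side of control: randomly assigning a hypothetical sample of 20 participants to the four conditions of a four-group design. Participant labels and condition names are made up for illustration.

```python
# Sketch: random assignment of participants to equal-sized conditions.
import numpy as np

rng = np.random.default_rng(42)
participants = [f"P{i:02d}" for i in range(1, 21)]    # 20 hypothetical participants
conditions = ["pretest + treatment", "pretest only", "treatment only", "control"]

shuffled = rng.permutation(participants)              # randomize the order
assignment = {cond: list(shuffled[i::len(conditions)])  # deal participants out in turn
              for i, cond in enumerate(conditions)}

for cond, members in assignment.items():
    print(cond, members)
```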
Figure 7.3 (p. 159): The four conceptual steps in experimentation.