Psych 5500/6500 t Test for Dependent Groups (aka ‘Paired Samples’ Design) Fall, 2008
Experimental Designs t test for dependent groups is used in the following two experimental designs: • Repeated measures (a.k.a. within-subjects) design. • Matched pairs design.
Repeated Measures (Within-subjects) Design Measure each participant twice, once in ‘Condition A’ and once in ‘Condition B’. The scores in the two groups are no longer independent as they come from the same participants. I like to use ‘Condition A’ and ‘Condition B’ rather than ‘Group 1’ and ‘Group 2’ as the latter terms seem to imply (at least to me) that there are different subjects in each group.
Design Each subject’s two scores are dependent, but are independent of the other subjects’ scores.
Matched Pairs The two paired scores don't have to come from the same person; there are other ways the scores within a pair can be associated (dependent). For example, measuring marital satisfaction within married couples (a static group design).
Design The scores within each couple are dependent, but each couple’s scores are independent of the other couples’ scores.
Example: Repeated Measures or Within-Subjects Design You are interested in whether attending a mixed-race day camp affects children’s racial prejudice. Six children attending the day camp were given a test to measure racial prejudice (higher scores = more prejudice) when they first arrived at camp. The same six children were given the same test seven days later when they left the camp.
Getting Rid of the Nonindependence Because we have two scores per person, the scores are not all independent of each other, which means we can’t do a t test. The solution is simple: we will turn those two scores per person into just one score per person, a score that reflects the difference between each person’s score in Condition A and their score in Condition B.
Difference Scores For each subject we now have just one score, their ‘difference’ score. 1) The difference scores measure how much each subject’s score differed between the two conditions. 2) The difference scores are independent of each other, so we can now perform the t test for a single group of scores on the difference scores.
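As a minimal sketch in Python, the transformation is just a per-subject subtraction. The before/after values below are hypothetical (the slides report only summary statistics, not the raw data); they were chosen so that the resulting d scores match the example’s mean of 2.83:

```python
# Hypothetical before/after prejudice scores for six children
# (illustrative only -- the raw data are not shown in these slides).
before = [10, 12, 9, 14, 8, 6]
after = [5, 8, 5, 11, 6, 7]

# One difference score per child: d = before - after.
d = [b - a for b, a in zip(before, after)]
print(d)  # [5, 4, 4, 3, 2, -1]
```

Each child now contributes a single, independent score.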
Difference Scores To simplify, let’s call the difference scores ‘d scores’. The mean of the d scores is a measure of the average difference between the scores in Condition A and the scores in Condition B.
Difference Scores In our sample, the prejudice scores were on average 2.83 higher before the day camp than they were after the day camp.
Difference Scores Mean D is a statistic; it reflects what we found in those six kids. Our hypotheses will concern the larger population these six kids represent (2-tailed): H0: μD = 0 Ha: μD ≠ 0
Same Thing What we are about to do is exactly the same thing as performing a t test for a single group of scores; we have simply relabeled our variable as ‘D’ (to stand for ‘difference scores’) rather than ‘Y’. This is not really a third t test; it is just another context in which we can use the t test for a single group of scores.
Sampling Distribution The sampling distribution consists of all the results we could get for mean D if H0 were true.
df and tc df = N − 1 = 6 − 1 = 5, so tc = ±2.571 (two-tailed, α = .05)
est. standard error The estimated standard error of mean D is sD divided by the square root of N: est. σ(mean D) = sD / √N (Compare to the t test for a single group mean.)
tobt You should be able to guess what this formula is: tobt = (mean D − μD) / est. σ(mean D), with μD = 0 under H0.
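Putting the pieces together, here is a sketch of the whole computation. The d scores are hypothetical, chosen to match the example’s summary statistics (mean D = 2.83, sD = 2.137, N = 6):

```python
import math
import statistics

# Hypothetical difference scores matching the slide's summary statistics.
d = [5, 4, 4, 3, 2, -1]
N = len(d)

mean_d = statistics.mean(d)   # mean D
s_d = statistics.stdev(d)     # estimate of sigma_D (n-1 denominator)
se = s_d / math.sqrt(N)       # estimated standard error of mean D
t_obt = (mean_d - 0) / se     # mu_D = 0 under H0

print(round(mean_d, 2), round(s_d, 3), round(t_obt, 3))  # 2.83 2.137 3.248
```

Since |3.248| exceeds tc = 2.571, we reject H0.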
Difference Scores If we analyze the difference scores to see if the mean of their population differs from zero, we get t(5)=3.248, p=.023. We can conclude that there is a statistically significant difference between the before and after scores (i.e. μD ≠ 0); if we have no serious confounding variables, then we conclude that the day camp affected prejudice scores.
One-Tail Tests If we are testing a theory which predicts that prejudice should be less after the day camp, that implies that the mean of the difference scores should be greater than zero (write Ha to express the prediction). H0: μD ≤ 0 Ha: μD > 0 This is indeed the direction the results fell, so the p value would be p=.023/2=.012 So the results are t(5)=3.248, p=.012
One-Tail Tests If we are testing a theory which predicts that prejudice should be greater after the day camp, that implies that the mean of the difference scores should be less than zero (write Ha to express the prediction). H0: μD ≥ 0 Ha: μD < 0 This is opposite from the direction the results fell, so the p value would be p=1-.023/2=.988 So the results are t(5)=3.248, p=.988
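The one-tail conversions are simple arithmetic on the two-tailed p value (note .023/2 is .0115, which the slide rounds to .012):

```python
# Converting the two-tailed p value to one-tailed p values.
p_two_tailed = 0.023

# Prediction matches the observed direction: halve the p value.
p_same_direction = p_two_tailed / 2       # 0.0115, reported as .012

# Prediction is opposite the observed direction.
p_opposite = 1 - p_two_tailed / 2         # 0.9885, reported as .988
```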
Matched Pairs Design This type of design is analyzed exactly the same way as a repeated measures design, you analyze the difference scores.
Lowering Variance Since the beginning of the semester I’ve been making the point that lowering the variance of the data is a good thing: it leads to more representative data and thus makes it easier to draw conclusions about the population from which the sample was drawn. Lowering variance increases power. I have been promising we would look at a way of accomplishing that other than simply sampling from a more homogeneous population; here it is...
Look again at our original data. If these scores came from an independent groups design (e.g. a random half of the kids measured before the day camp and the other half measured after the day camp), we would be in trouble: look at how much the scores vary within each group. The kids really differed in prejudice levels, and this variance would kill the power of our experiment.
But with a repeated measures design we are just looking at the effect of the independent variable (attending the camp) on each kid (how much they differed before and after rather than at how prejudiced they are). The independent variable had fairly similar effects on the kids (from –1 to 5), and thus the difference scores don’t have nearly as much variability as the prejudice levels of the various kids.
[Side-by-side results: the same data analyzed as a t test for independent groups vs. analyzed as a t test for dependent groups]
Variability and Designs Which t test you use is based upon how you run the study. In deciding how to run the study: • If you think the effect of the independent variable will be rather similar for each subject and that the subjects’ actual scores will vary quite a bit, then use a paired samples design (repeated measures or matched pairs design). • If you think the effect of the independent variable will vary quite a bit and that the subjects’ actual scores will be rather similar, then use an independent groups design (true experiment, quasi-experiment, static group design). A repeated measures design is usually more powerful than an independent groups design.
Effect Size The direct measure of effect size in this t test is simply the mean of the difference scores. This value represents the effect of the independent variable on the participants, and it also happens to equal the mean of the first group minus the mean of the second group (making it the same as the measure of effect size in the t test for independent groups).
From SPSS When doing a ‘Paired Samples t Test’ (what SPSS calls what I call ‘t test for correlated groups’) the analysis will provide the following under the title ‘Paired Samples Test’: Mean = 2.83333 Std Deviation = 2.13698 In our use of symbols these would be represented as mean D = 2.83333 and sD = 2.13698. That is enough to compute Hedges’s g; for Cohen’s d we need the standard deviation of the sample, which can be found by: SD = sD·√((N−1)/N)
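A sketch of the two effect-size computations, assuming the convention from the earlier single-group lecture (Hedges’s g divides by the n−1 estimate sD; Cohen’s d divides by the descriptive standard deviation of the sample, SD):

```python
import math

# Summary values reported by SPSS for this example.
N = 6
mean_d = 2.83333    # mean of the difference scores
s_d = 2.13698       # estimate of sigma_D (n-1 denominator)

# Hedges's g: mean difference over the n-1 estimate.
g = mean_d / s_d

# Cohen's d: mean difference over the sample's descriptive SD.
S_d = s_d * math.sqrt((N - 1) / N)
d_cohen = mean_d / S_d

print(round(g, 2), round(d_cohen, 2))  # 1.33 1.45
```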
Warning.... There is some controversy about the correct calculations for standardized effect size. The shortcuts provided in the earlier lecture on a single group t test don’t work in this context; if we were to use those formulas we would get larger effect sizes.
GPower 3.0 In GPower this t test is called the t test for “Means: Difference between two dependent means (matched pairs)”. If you give it mean D and the standard deviation of D (‘SD’) it will compute Cohen’s d (big deal, as we have seen that is a simple formula). The ‘Total sample size’ is the number of pairs of scores (6 in our example). By the way, the post hoc analysis shows that this example had a power of 0.80! This was due to my having the mean D be rather large compared to the SD.
Carry-Over Effect Carry-Over Effect: A confounding variable that may arise due to measuring the same person more than once, thus can only happen in a repeated-measures design. Practice effect: the general term for when a carry-over effect leads to an increase in performance over subsequent measures. Fatigue effect: the general term for when a carry-over effect leads to a decrease in performance over subsequent measures.
Options for Controlling Carry-Over Effects • If your independent variable is a carry-over effect (e.g. the effect of practice) then you do not need or want to control it. Otherwise.... • If applicable, use different forms of the same test. • Minimize the carry-over effect (e.g. increase the time between first measure and second measure). • Counterbalance the order of conditions.
Counterbalancing the Order of Conditions Half the participants are in Condition A first and in Condition B second. The other half of the participants are in Condition B first and Condition A second.
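A minimal sketch of randomly assigning participants to the two orders (the participant IDs are hypothetical):

```python
import random

# Hypothetical participant IDs.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

random.seed(0)  # fixed seed for a reproducible assignment
shuffled = random.sample(participants, k=len(participants))

# Half get Condition A first, the other half Condition B first.
order_ab = shuffled[: len(shuffled) // 2]
order_ba = shuffled[len(shuffled) // 2:]
```

Counterbalancing doesn’t remove a carry-over effect, but it spreads it evenly across the two conditions so it can’t masquerade as an effect of the independent variable.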