Ch. 8 Differences Among Groups • HOW STATISTICS TEST DIFFERENCES AMONG GROUPS • Do the two levels of treatment differ significantly (p < .05)? • The statistical test determines whether the null hypothesis (H0) can or cannot be rejected. • It also establishes the strength of the association between the independent and dependent variables, or the size of the difference between two groups.
HOW STATISTICS TEST DIFFERENCES AMONG GROUPS (cont.) • Omega squared (ω²): estimates the degree of association (or % of variance accounted for) between the independent and dependent variables. • Effect size (ES): the meaningfulness of the differences. • Four assumptions in using the t and F ratios: • 1. The data are drawn from normally distributed populations. • 2. The data represent random samples from those populations. • 3. The group variances are estimates of the same population variance. • 4. The data contributing to the t (or F) ratio are independent.
Three Types of t Tests • t Test Between a Sample and a Population Mean • Independent t Test • Using the Independent t Test • Checking for homogeneity of variance • Estimating Meaningfulness of Treatments • Omega squared (ω²) • Effect size • Dependent t Test
t-Test Between a Sample Mean and a Population Mean • A test of the null hypothesis that there is no difference between the sample mean (M) and the population mean (μ), or M − μ = 0 • t = (M − μ) / (s / √n) (8.1) • df = n − 1 • EXAMPLE. Known values: population N = 10,000; fitness class n = 32; sample mean M = 81; population mean μ = 76 • Working it out (equation 8.1): t = 5/1.59 = 3.14
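A minimal Python sketch of this one-sample calculation from the summary values above; the sample standard deviation of 9 is an assumption inferred from the reported standard error of about 1.59, not a value given in the slides:

```python
import math

# Summary statistics from the worked example (equation 8.1).
M, mu, n = 81, 76, 32
s = 9.0                        # assumed: implied by the standard error of ~1.59

se = s / math.sqrt(n)          # standard error of the mean
t = (M - mu) / se              # one-sample t statistic
df = n - 1

print(f"t({df}) = {t:.2f}")    # ~3.14, matching the worked example
```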
Using the Independent t Test • t-test formula for two independent samples: t = (M1 − M2) / √(s1²/n1 + s2²/n2) (8.3) (p. 143) • An algebraically equivalent raw-score form (equation 8.4, p. 143) is most easily worked with a calculator. • df = n1 + n2 − 2
Example 8.2. Known values:
• Group 1 (70% VO2max): mean distance M = 3,004 m, s = 114 m, n = 15
• Group 2 (40% VO2max): mean distance M = 2,456 m, s = 103 m, n = 15
Working it out (equation 8.3): t = 548/39.67 = 13.81; t(28) = 13.81, p < .05
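A short Python check of example 8.2's arithmetic from the group summary statistics (only the standard library's math module is needed):

```python
import math

# Group summary statistics from example 8.2.
M1, s1, n1 = 3004, 114, 15    # 70% VO2max group
M2, s2, n2 = 2456, 103, 15    # 40% VO2max group

se_diff = math.sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of the difference
t = (M1 - M2) / se_diff                        # equation 8.3
df = n1 + n2 - 2

print(f"t({df}) = {t:.2f}")    # ~13.81, matching the worked example
```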
Checking for Homogeneity of Variance • Assumption: the variances of the groups are equivalent. • The homogeneity assumption should be checked whenever the group sizes are very different or the group variances are very different.
Estimating Meaningfulness of Treatments • Omega squared (ω²): ω² = (t² − 1) / (t² + n1 + n2 − 1) • Example 8.3. Known values: difference between groups t = 13.81; Group 1 n1 = 15; Group 2 n2 = 15 • Working it out: ω² = 189.72/219.72 = .86
Cont.: Meaningfulness of Treatments • ω² = .86 means that 86% of the total variance in the distance-run scores can be accounted for by the difference in the two groups' levels of training. • The remaining 14% is accounted for by other factors.
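A small Python sketch of the ω² calculation from the t statistic and group sizes in example 8.3:

```python
# Omega squared from the t statistic (example 8.3).
t, n1, n2 = 13.81, 15, 15

omega_sq = (t**2 - 1) / (t**2 + n1 + n2 - 1)
print(f"omega^2 = {omega_sq:.2f}")   # ~0.86: 86% of variance accounted for
```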
Effect Size • The degree to which the treatment influenced the outcome is expressed by effect size (ES): ES = (M1 − M2) / s • Pooled standard deviation: sp = √[((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)] (8.8) • An ES greater than or equal to .8 is considered large • An ES around .5 is moderate • An ES of .2 or smaller is considered small
Example 8.4: Meaningful treatment effect • Known values:
• Group 1: mean distance M1 = 3,004 m, s1 = 114 m, n1 = 15
• Group 2: mean distance M2 = 2,456 m, s2 = 103 m, n2 = 15
Working it out (equations 8.7 & 8.8): sp = √[(14 × 114² + 14 × 103²) / 28] = 108.64; ES = (3,004 − 2,456)/108.64 = 5.0
*An ES of 5.0 is a large value and would typically be judged a meaningful treatment effect
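A Python sketch of the pooled standard deviation and effect size for example 8.4 (standard library only):

```python
import math

# Effect size with a pooled standard deviation (equations 8.7 and 8.8, example 8.4).
M1, s1, n1 = 3004, 114, 15
M2, s2, n2 = 2456, 103, 15

sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
es = (M1 - M2) / sp

print(f"pooled s = {sp:.2f}, ES = {es:.1f}")   # ~108.64 and ~5.0
```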
Dependent t Test • A test of the significance of differences between the means of two sets of scores that are related, such as when the same subjects are measured on two occasions • Two groups of subjects are matched on one or more characteristics and thus are no longer independent • Or one group of subjects is tested twice on the same variable, and the experimenter is interested in the change between the two tests
Formulas for the dependent t test: • t = MD / (sD / √N) (8.9), where MD is the mean of the difference scores and sD their standard deviation • df = N − 1 • A raw-score equivalent that works directly from the difference scores: t = ΣD / √[(NΣD² − (ΣD)²) / (N − 1)] (8.11)
Working out example 8.5 • (Equations 8.11, 8.10): t = 32/6.63 = 4.83; t(9) = 4.83, p < .05. The null hypothesis can be rejected. • Example 8.6. Known values: difference between post- and pretest means MD = 3.2; mean of pretest scores Mpre = 16.5 • Working it out: magnitude of increase = MD / Mpre × 100 = 3.2/16.5 × 100 = 19.4%
Example 8.5 Analysis • The gain is 19.4% of the pretest mean and represents nearly a 20% improvement. • We could also estimate an ES for the pretest-to-posttest change by subtracting the pretest M from the posttest M and dividing by the pretest standard deviation.
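A minimal Python sketch of a dependent (paired) t test using scipy. The pre/post scores below are made-up numbers purely to show the mechanics; they are not the data behind example 8.5, so the resulting t will differ from 4.83:

```python
from scipy import stats

# Hypothetical pre/post scores for 10 subjects (N = 10, so df = 9).
pre  = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
post = [15, 18, 13, 17, 15, 20, 14, 18, 17, 16]

t, p = stats.ttest_rel(post, pre)   # paired comparison of the two occasions
print(f"t(9) = {t:.2f}, p = {p:.4f}")
```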
Interpreting t • One-Tailed Versus Two-Tailed t Tests: • One-Tailed: Test that assumes that the difference between the two means lies in one direction only. • Two-Tailed: Test that assumes that the difference between the two means could favor either group.
The 1st Level (M1 − M2) • A larger difference between the means increases the size of the t ratio, which increases the odds of rejecting the null hypothesis and thus increases power. • The difference can be increased by applying stronger, more concentrated treatments.
The 2nd Level (s1², s2²) • The standard deviations represent the spread of the scores about their means. • If the spread becomes smaller, the variance is also smaller. • If the variance is smaller, the t ratio becomes larger, increasing the odds of rejecting the null hypothesis as well as the power. • Variance is kept small by applying treatments consistently.
The 3rd Level (n1, n2) • This represents the number of subjects in each group. • If n1 and n2 are increased, the t ratio becomes larger, increasing the odds of rejecting the null hypothesis and the power. • In addition, if alpha is set at .10 rather than .05, increased power is attained.
Summary of Levels • Power may be obtained by using strong treatments, administering those treatments consistently, using as many subjects as feasible, or varying alpha. • After the null hypothesis is rejected, the strength (meaningfulness) of the effects must be evaluated.
The t Ratio • t = true variance / error variance • True variance = M1 − M2 • Error variance = √(s1²/n1 + s2²/n2), the standard error of the difference between the means • When a significant t ratio is found, true variance exceeds error variance to a certain degree
Strength of the Relationship (omega squared) • The estimate of the strength of the relationship (ω²) between the independent and dependent variables is represented by the ratio of true variance to total variance: ω² = true variance / total variance
Ch. 8: Effect size calculation • Effect size is also an estimate of the strength or meaningfulness of the group differences or treatments: ES = (M1 − M2) / s
Figure analysis • Comparing figure 8.1b to 8.1a, there is less overlap between the two groups' distributions of scores. • The treatment group mean was 1.0 standard deviation higher than the control group mean.
Figure analysis (cont.) • Only .1587 (about 16%) of the control group's scores fall above z = 1.0, that is, above the treatment group mean. • Interpreting an effect size of 1.0, we can infer that the treatment improved average performance by 34 percentile points. • (i.e., treatment group = 84th percentile, control group = 50th percentile, 84 − 50 = 34)
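A short Python sketch of this percentile interpretation using the standard normal distribution (scipy assumed available):

```python
from scipy import stats

# An effect size of 1.0 puts the treatment mean one control-group
# standard deviation above the control mean, i.e. at z = 1.0.
z = 1.0
proportion_above = 1 - stats.norm.cdf(z)   # ~0.1587 of control scores exceed it
percentile = stats.norm.cdf(z) * 100       # treatment mean sits near the 84th percentile

print(f"{proportion_above:.4f} of control scores exceed the treatment mean")
print(f"treatment mean ~ {percentile:.0f}th percentile of the control distribution "
      f"(84 - 50 = 34 percentile-point gain)")
```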
Problem 3 (p. 152) • Apply the correlation formula (equation 7.1, p. 120). If we treat the dummy-coded group variable as X and the dependent variable as Y and ignore group membership (10 subjects with two variables), the correlation formula used earlier can be applied to the data. • r = 125/144 = .87
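A Python sketch of correlating a dummy-coded group variable with the dependent variable. The 10 scores below are hypothetical, not the problem's data, so r will not equal .87; the point is only the mechanics (numpy and scipy assumed available):

```python
import numpy as np
from scipy import stats

# Hypothetical dummy-coded data for 10 subjects:
# X = group membership coded 0/1, Y = the dependent variable.
X = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
Y = np.array([12, 14, 11, 13, 12, 18, 20, 19, 17, 21])

r, p = stats.pearsonr(X, Y)   # Pearson r applied to the dummy-coded variable
print(f"r = {r:.2f}, p = {p:.4f}")
```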
Ch. 8: Analysis of Variance (ANOVA) • Many research problems or questions involve more than two treatment groups. • Analysis of variance (ANOVA) is used to analyze the data.
Using the ANOVA technique, the total variability in a set of scores is divided into two or more components. • Variability values = sums of squares (SS). • The sum of squares value for each component is divided by its degrees of freedom value to obtain a mean square (MS) value. • The ratio of two mean square values is an F statistic, which is used to test the null hypothesis.
Ch. 8: One-way ANOVA • Typically involves statistical analysis of three or more independent groups. • Can be used with two or more independent groups, so it could be used in place of the t test for two independent groups.
One-way ANOVA (cont…) • The number of groups is represented symbolically by K, and the total number of scores by N. • The sum of squares total is divided into the sum of squares among groups (SSA) and the sum of squares within groups (SSW): SST = SSA + SSW
One-way ANOVA (cont…) • If the group means are not equal, the sum of squares among groups is greater than zero. • SSA is an indication of differences among the groups. • If the scores within any group are not all the same value, SSW will be greater than zero.
Ch. 8: ANOVA (con’t) • The degrees of freedom for total (dfT): dfT = dfA + dfW, where dfT = N − 1, dfA = K − 1, dfW = N − K
Ch. 8: ANOVA (con’t) • Sums of squares values for among and within groups are divided by their degrees of freedom to provide mean square (MS) values and an F statistic is calculated. • MSA = SSA / dfA • MSW = SSW /dfW • F = MSA /MSW
Ch. 8: ANOVA (con’t) • F = MSA /MSW has degrees of freedom equal to (K – 1) and (N – K) * The F table is presented in appendix B.
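A small Python sketch (numpy assumed available) showing how SST partitions into SSA + SSW and how MS and F are formed. The three groups of scores are made up for illustration, not taken from the text:

```python
import numpy as np

# Hypothetical scores for K = 3 groups.
groups = [np.array([4, 5, 6, 5]), np.array([7, 8, 6, 7]), np.array([9, 8, 10, 9])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
N, K = all_scores.size, len(groups)

ss_total  = ((all_scores - grand_mean) ** 2).sum()
ss_among  = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_among, ms_within = ss_among / (K - 1), ss_within / (N - K)
F = ms_among / ms_within

print(f"SST = {ss_total:.1f} = SSA {ss_among:.1f} + SSW {ss_within:.1f}")
print(f"F({K - 1}, {N - K}) = {F:.2f}")
```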
Five-step hypothesis-testing procedure (data in table 11.1) STEP 1: • H0: μA = μB = μC (the three population means are equal) • H1: the three population means are not all equal • STEP 2: alpha = .05
STEP 3: • Find the critical value in the F test table with degrees of freedom (K – 1) and (N – K) at the alpha level selected. • The degrees of freedom are 2 and 21 so the table value is 3.47 for alpha equals .05.
STEP 4: • Analyze the data using a one-way ANOVA. • The statistical test is the F ratio, and it can be seen that the F ratio is 4.45 and the p value of the F ratio is .024.
STEP 5: • Because the p value of .024 is less than the alpha level of .05, the researcher concludes that there is a significant difference among the group means and accepts the alternative hypothesis.
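A minimal Python sketch of steps 4 and 5 with scipy's one-way ANOVA. The data below are hypothetical (three groups of 8, giving the same df of 2 and 21 as step 3), not the scores from table 11.1, so F and p will differ from 4.45 and .024:

```python
from scipy import stats

# Hypothetical scores for three groups of 8 subjects (N = 24).
a = [21, 23, 20, 22, 24, 21, 23, 22]
b = [25, 27, 24, 26, 28, 25, 27, 26]
c = [23, 24, 22, 25, 23, 26, 24, 25]

F, p = stats.f_oneway(a, b, c)       # one-way ANOVA F ratio and p value
alpha = 0.05
print(f"F(2, 21) = {F:.2f}, p = {p:.4f}, reject H0: {p < alpha}")
```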
TWO-WAY ANOVA • In a two-way ANOVA design, each score has a row and a column classifier. • The first example is a random-blocks design with more than one score per cell.
For example, if there are five treatment groups, the number of subjects in each block has to be ten, fifteen, or some other multiple of five. • The statistical hypothesis is always that the treatment groups are equal in mean score.
Factorial Design • In the second example, the rows represent some classification of the subjects (e.g., gender, age, school grade), the columns identify the treatment group, and there are multiple subjects in each cell.
Two-dimensional ANOVA • The analysis for this example design could be referred to as a two-dimensional ANOVA or a 2 X 3 (Gender X Treatment) ANOVA. • Gender has two levels and treatment has three. • If gender is not a dimension, the design is a one-way ANOVA with three treatment groups.
Third statistical hypothesis • No interaction between the row variable and the column variable. • Hypothesis states that any differences in the effectiveness of the treatments are the same for both genders.
The third example is also a factorial design. • It is commonly used because it is a combination of two treatments. • Treatment A = the number of days per week people participated in adult fitness programs (A1 = 1 day, A2 = 3 days) • Treatment B = how many minutes they participated each day (B1 = 15 minutes, B2 = 30 minutes, B3 = 45 minutes) • The A1,B1 group participated 1 day a week, 15 minutes a day, during a 6-month experimental period. • All subjects are tested on a fitness test where a low score reflects high fitness.
1st Hypothesis • The first hypothesis is that there is no interaction between the row and column variables. • If there is no interaction, the differences between any two column means are the same for each row. 2nd Hypothesis • The second hypothesis is that the row means are equal. • If the hypothesis is true, the A1 and A2 row means are equal.
3rd Hypothesis • The third hypothesis is that the column means are equal. • If the hypothesis is true, the B1, B2, and B3 column means are equal. In the two-way ANOVA design with multiple scores per cell, the sum of squares total (SST) is partitioned into a sum of squares columns (SSC), a sum of squares rows (SSR), a sum of squares interaction (SSI), and a sum of squares within (SSW): SST = SSC + SSR + SSI + SSW
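A compact Python sketch of this partition using pandas and statsmodels (both assumed available). The balanced 2 × 3 data set below is invented for illustration only; the anova_lm table reports rows, columns, the interaction, and the residual (within) terms:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical balanced 2 x 3 (row x column) design with 3 scores per cell.
df = pd.DataFrame({
    "row": ["A1"] * 9 + ["A2"] * 9,
    "col": (["B1"] * 3 + ["B2"] * 3 + ["B3"] * 3) * 2,
    "score": [12, 13, 11, 15, 16, 14, 18, 19, 17,
              14, 15, 13, 16, 17, 15, 17, 18, 16],
})

model = ols("score ~ C(row) * C(col)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # SSR, SSC, SSI, and SSW with their F tests
```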
ANOVA Follow-Up Testing (p. 157) • After the data analysis we know that significant differences exist among the group means, but we do not know which groups differ. • The next task is to perform a follow-up test, because with multiple comparisons a type I error is possible. • We need to protect the experimentwise error rate (the type I error rate). • The methods used are called multiple-comparison techniques (or post hoc tests); they include Scheffé, Tukey, Newman-Keuls, Duncan's, and others.
The Scheffé Test controls Type I error for any number of appropriate comparisons • Calculate the t value with formula 8.15 and compare it with the critical value from formula 8.14 (p. 158): critical value = √[(K − 1) × F], where F is the tabled value with df = K − 1 and the df for MSwithin • For example 8.10, the F ratio is found in Table A.6 with 2 (K − 1 = 3 − 1 = 2) and 12 (df for MSwithin) degrees of freedom • See example 8.10 (p. 158-59) and practice calculating the t values between groups 1 & 2, groups 1 & 3, and groups 2 & 3. • Formula 8.15: t = (M1 − M2) / √(2 × MSW / n)
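A Python sketch of one Scheffé-style comparison (scipy assumed available). The group means and MSwithin below are hypothetical values chosen only to illustrate formulas 8.14 and 8.15 with K = 3 groups of n = 5, giving the same 2 and 12 degrees of freedom as example 8.10:

```python
import math
from scipy import stats

K, n, df_within = 3, 5, 12
ms_within = 10.0                     # hypothetical mean square within
M1, M2 = 24.0, 18.0                  # hypothetical means of the two groups compared

t = (M1 - M2) / math.sqrt(2 * ms_within / n)      # formula 8.15
F_crit = stats.f.ppf(0.95, K - 1, df_within)       # tabled F at alpha = .05
scheffe_crit = math.sqrt((K - 1) * F_crit)          # critical value, formula 8.14

print(f"t = {t:.2f}, Scheffe critical value = {scheffe_crit:.2f}, "
      f"significant: {abs(t) > scheffe_crit}")
```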