Chapter 13 Analysis of Variance (ANOVA)
Analysis of Variance (ANOVA) ANOVA can be used to test for differences between three or more means. The hypotheses for an ANOVA are always: H0: μ1 = μ2 = . . . = μk (where k is the number of groups) Ha: Not all of the population means are equal
Analysis of Variance: Assumptions
• For each population, the response variable is normally distributed
• The variance of the response variable, σ², is the same for all of the populations
• The observations must be independent
Analysis of Variance (ANOVA) The ANOVA hypothesis test is based on a comparison of the variation between groups (treatments) and within groups (treatments).
Between vs. Within Variation Case 1: All variation is due to differences between groups
Between vs. Within Variation Case 2: All variation is due to differences within groups
Analysis of Variance (ANOVA) If the variation is primarily due to differences between groups, we conclude that the means differ and reject H0. If the variation is primarily due to differences within the groups, the data give no evidence that the means differ and we fail to reject H0.
Analysis of Variance (ANOVA) The relative sizes of the between-group and within-group variation are measured by comparing two estimates of the population variance σ². The first estimate is based on the variation between the sample means: because each sample mean has variance σ²/n when H0 is true, the quantity n·sx̄² (n times the sample variance of the group means) is used to estimate σ². If the population means are equal, n·sx̄² is an unbiased estimator of σ². If the means are not equal, n·sx̄² overestimates σ².
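To make this concrete, here is a minimal simulation sketch in Python (NumPy assumed available; the group sizes, means, and σ below are made-up values, not from the slides). It averages n·sx̄² over many simulated data sets, first with equal population means and then with unequal ones.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, sigma = 3, 5, 2.0            # hypothetical: 3 groups, 5 observations each, common sigma

def avg_between_estimate(true_means, reps=10_000):
    """Average of n * (sample variance of the group means) over many simulated data sets."""
    estimates = []
    for _ in range(reps):
        samples = [rng.normal(m, sigma, n) for m in true_means]
        xbars = np.array([s.mean() for s in samples])
        estimates.append(n * xbars.var(ddof=1))    # n * s_xbar^2
    return np.mean(estimates)

print(avg_between_estimate([10, 10, 10]))   # near sigma^2 = 4 when the means are equal
print(avg_between_estimate([10, 12, 14]))   # well above 4 when the means differ
```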
ANOVA: Sampling Distribution of x̄ Given H0 is True Sample means are close together because they are drawn from the same sampling distribution when H0 is true.
ANOVA: Sampling Distribution of x̄ Given H0 is False Sample means come from different sampling distributions and are not as close together when H0 is false.
Analysis of Variance (ANOVA) The second way of estimating the population variance is to average the sample variances of the individual groups. This within-group approach provides an unbiased estimate of σ² regardless of whether the null hypothesis is true.
Analysis of Variance (ANOVA) If we take the ratio of the two estimates (between-group over within-group), we have a measure with an expected value of about 1 if the null hypothesis is true. It will tend to be larger than 1 if the null hypothesis is false.
Analysis of Variance (ANOVA) If the null hypothesis is true and the conditions for conducting the ANOVA test are met, then the sampling distribution of this ratio is an F distribution with k - 1 degrees of freedom in the numerator and nT - k degrees of freedom in the denominator.
F Distribution As before, α is the probability of rejecting H0 when it is true (the probability of making a Type I error). Fα is the critical value such that an area equal to α lies in the upper tail. For example, with 10 degrees of freedom in the numerator and 5 degrees of freedom in the denominator, an F value of 4.74 cuts off an area of 0.05 in the upper tail.
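As a quick check of that critical value, a one-line sketch using SciPy (assuming SciPy is available; this library call is not part of the original chapter):

```python
from scipy.stats import f

# Critical value leaving an area of 0.05 in the upper tail,
# with 10 numerator and 5 denominator degrees of freedom.
print(round(f.ppf(0.95, dfn=10, dfd=5), 2))   # prints 4.74
```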
ANOVA Hypothesis Test The steps for conducting an ANOVA hypothesis test are the same as for conducting a hypothesis test of the mean:
• State the hypotheses
• State the rejection rule
• Calculate the test statistic
• State the result of the test and its implications
ANOVA Table Typically when we do the test we organize the calculations in a table with a specific format:
Between-Treatments Estimate of Population Variance The estimate of σ² based on the variation of the sample means is called the mean square due to treatments and is denoted by MSTR: MSTR = SSTR / (k - 1), where SSTR = Σ nj(x̄j - x̿)² and x̿ is the overall sample mean. The numerator, SSTR, is called the sum of squares due to treatments; the denominator, k - 1, is the degrees of freedom associated with SSTR.
Between-Treatments Estimate of Population Variance If there are the same number of observations in each group, so that nj = n, then SSTR = n Σ(x̄j - x̿)² and MSTR = SSTR / (k - 1) = n·sx̄², the between-group estimate described earlier.
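A quick numerical check of this identity, using made-up group means (the numbers below are hypothetical, not from the text):

```python
import numpy as np

xbars = np.array([2.0, 3.0, 4.0])   # hypothetical group means
n, k = 5, len(xbars)                # equal group size n
grand = xbars.mean()

sstr = n * ((xbars - grand) ** 2).sum()
mstr = sstr / (k - 1)
print(mstr, n * xbars.var(ddof=1))  # the two values agree
```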
Within-Treatments Estimate of Population Variance The estimate of σ² based on the variation of the observations within each sample is called the mean square error and is denoted by MSE: MSE = SSE / (nT - k), where SSE = Σ (nj - 1)sj². The numerator, SSE, is called the sum of squares due to error; the denominator, nT - k, is the degrees of freedom associated with SSE.
Within-Treatments Estimate of Population Variance If there are the same number of observations in each group, so that nj = n and nT = nk, then MSE = SSE / (nT - k) = Σ sj² / k, the average of the group sample variances described earlier.
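A tiny numerical check of this equal-sample-size case, again on made-up data (three groups of four values, not from the text):

```python
import numpy as np

groups = [np.array([3.0, 5.0, 4.0, 6.0]),
          np.array([1.0, 2.0, 2.0, 3.0]),
          np.array([7.0, 9.0, 8.0, 8.0])]

k = len(groups)
n_T = sum(len(g) for g in groups)

sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
mse = sse / (n_T - k)                                  # within-treatments estimate
avg_var = np.mean([g.var(ddof=1) for g in groups])     # average of the group variances
print(mse, avg_var)                                    # equal when every group has the same size
```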
ANOVA Table With the entire data set treated as one sample, the total sum of squares is SST = Σj Σi (xij - x̿)². SST divided by its degrees of freedom, nT - 1, is the overall sample variance that would be obtained if we treated the entire set of observations as one data set.
ANOVA Table
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square       | F
Treatments          | SSTR           | k - 1              | MSTR = SSTR/(k-1) | MSTR/MSE
Error               | SSE            | nT - k             | MSE = SSE/(nT-k)  |
Total               | SST            | nT - 1             |                   |
SST is partitioned into SSTR and SSE, and SST's degrees of freedom (d.f.) are partitioned into SSTR's d.f. and SSE's d.f.
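The table entries can be filled in directly from the definitions above. The sketch below is a minimal NumPy version (the function name anova_table is my own, and the code is a generic one-way ANOVA computation, not taken from the text):

```python
import numpy as np

def anova_table(groups):
    """Return the one-way ANOVA table quantities for a list of sample groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_T = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    sstr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between treatments
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)              # within treatments
    sst = sstr + sse                                                    # SST = SSTR + SSE

    mstr = sstr / (k - 1)
    mse = sse / (n_T - k)
    return {"SSTR": sstr, "SSE": sse, "SST": sst,
            "MSTR": mstr, "MSE": mse, "F": mstr / mse}
```

Returning a dictionary keyed by the table's column entries keeps the output aligned with the layout above.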
ANOVA Example Assume we are interested in finding out whether the average number of cars owned differs across three towns. Five people are interviewed in each town. Assume α = .05
Analysis of Variance (ANOVA) H0: μ1 = μ2 = μ3 Ha: Not all of the population means are equal Reject H0 if F > Fα, i.e., F > 3.89, given k - 1 = 3 - 1 = 2 df in the numerator and nT - k = 15 - 3 = 12 df in the denominator
ANOVA Example The sample means for the three towns are x̄1 = 2, x̄2 = 2.2, and x̄3 = 4.8, so the overall mean is x̿ = (2 + 2.2 + 4.8)/3 = 9/3 = 3
Analysis of Variance (ANOVA)
SSTR = 5(2 - 3)² + 5(2.2 - 3)² + 5(4.8 - 3)² = 24.4
SSE = (0 - 2)² + (1 - 2)² + (6 - 2)² + (2 - 2)² + (1 - 2)² + (2 - 2.2)² + (3 - 2.2)² + (4 - 2.2)² + (2 - 2.2)² + (0 - 2.2)² + (12 - 4.8)² + (0 - 4.8)² + (2 - 4.8)² + (6 - 4.8)² + (4 - 4.8)² = 115.6
MSTR = 24.4/2 = 12.2 and MSE = 115.6/12 ≈ 9.63, so F = 12.2/9.63 ≈ 1.27. Since 1.27 < 3.89, we fail to reject H0: the data do not provide evidence that the mean number of cars owned differs across the three towns.
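As a cross-check, the same F statistic can be obtained with scipy.stats.f_oneway. The per-town lists below are read off from the squared-deviation terms shown above (the raw data table itself was not reproduced in this excerpt, so treat them as a reconstruction):

```python
from scipy.stats import f, f_oneway

town1 = [0, 1, 6, 2, 1]     # sample mean 2
town2 = [2, 3, 4, 2, 0]     # sample mean 2.2
town3 = [12, 0, 2, 6, 4]    # sample mean 4.8

F_stat, p_value = f_oneway(town1, town2, town3)
critical = f.ppf(0.95, dfn=2, dfd=12)

print(round(F_stat, 2), round(p_value, 2))   # F ≈ 1.27, p ≈ 0.32
print(round(critical, 2))                    # critical value ≈ 3.89, so do not reject H0
```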
Graded Homework Assume we are interested in finding out whether the average number of bedrooms per home differs across three towns. Data on four houses were collected in each town. Assume α = .05 P. 401, #7 (just do a hypothesis test)