
STA291

STA291. Statistical Methods Lecture 31. Analyzing a Design in One Factor – The One-Way Analysis of Variance. Consider an experiment with a single factor of k levels. Question of Primary Interest: Is there evidence for differences in effectiveness for the treatments?



Presentation Transcript


  1. STA291 Statistical Methods Lecture 31

  2. Analyzing a Design in One Factor – The One-Way Analysis of Variance Consider an experiment with a single factor of k levels. Question of Primary Interest: Is there evidence for differences in effectiveness for the treatments? Let μi be the mean response for treatment group i. Then, to answer the question, we must test the hypothesis: H0: μ1 = μ2 = … = μk (no difference in treatments) vs. HA: at least one μi differs (at least one treatment has a different result).

  3. Analyzing a Design in One Factor – The One-Way Analysis of Variance What criterion might we use to test the hypothesis? The test statistic compares the variance of the means to what we’d expect that variance to be based on the variance of the individual responses.

  4. Analyzing a Design in One Factor – The One-Way Analysis of Variance The F-statistic compares two measures of variation, called mean squares. The numerator measures the variation between the groups (treatments) and is called the Mean Square due to Treatments (MST). The denominator measures the variation within the groups, and is called the Mean Square due to Error (MSE). The F-statistic is their ratio: F = MST / MSE. Every F-distribution has two degrees of freedom, corresponding to the degrees of freedom for the mean square in the numerator and for the mean square (usually the MSE) in the denominator.
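As a sketch of how this ratio is computed in practice, the following Python snippet runs a one-way ANOVA with SciPy's `f_oneway`. The three treatment groups and their values are purely illustrative, not data from the lecture:

```python
from scipy import stats

# Hypothetical data: three treatment groups (values are purely illustrative)
groups = [
    [23.1, 25.4, 24.8, 26.0, 24.2],
    [27.9, 28.3, 26.5, 29.1, 27.4],
    [22.0, 23.5, 21.8, 24.1, 22.9],
]

# f_oneway computes F = MST / MSE and its p-value from the F-distribution
f_stat, p_value = stats.f_oneway(*groups)

k = len(groups)                   # number of treatment levels
N = sum(len(g) for g in groups)   # total number of observations
df_treatment, df_error = k - 1, N - k
print(f"F({df_treatment}, {df_error}) = {f_stat:.2f}, p = {p_value:.4f}")
```

The two degrees of freedom reported alongside F are exactly the numerator (k − 1) and denominator (N − k) degrees of freedom described above.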

  5. Analyzing a Design in One Factor – The One-Way Analysis of Variance To quantify these two classes of variation, we introduce two new measures of variability for one-factor experiments with k levels: • The Mean Square due to Treatments (between-group variation measure): MST = Σ ni(ȳi − ȳ)² / (k − 1), where ȳi is the mean of treatment group i, ni is its size, and ȳ is the grand mean of all observations.

  6. Analyzing a Design in One Factor – The One-Way Analysis of Variance • The Mean Square due to Error (within-group variation measure): MSE = Σ (ni − 1)si² / (N − k), where si² is the sample variance of group i and N is the total number of observations.
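To make the two definitions concrete, here is a sketch that computes MST and MSE directly from their formulas. The data are hypothetical, chosen only to illustrate the arithmetic:

```python
import numpy as np

# Hypothetical data: k = 3 treatment groups (values are purely illustrative)
groups = [np.array([23.1, 25.4, 24.8, 26.0, 24.2]),
          np.array([27.9, 28.3, 26.5, 29.1, 27.4]),
          np.array([22.0, 23.5, 21.8, 24.1, 22.9])]

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Between-group variation: each group mean's squared distance from the
# grand mean, weighted by group size, divided by k - 1
mst = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)

# Within-group variation: pooled squared deviations from each group's
# own mean, divided by N - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

print(f"MST = {mst:.3f}, MSE = {mse:.3f}, F = {mst / mse:.3f}")
```

The ratio printed last is the same F-statistic that `scipy.stats.f_oneway` would report for these groups.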

  7. Analyzing a Design in One Factor – The One-Way Analysis of Variance This analysis is called an Analysis of Variance (ANOVA). The null hypothesis is that the means are all equal. The results are usually presented in a table, called the ANOVA table, like this one:

  Source      df      Sum of Squares   Mean Square   F-ratio    P-value
  Treatment   k − 1   SST              MST           MST/MSE    (from the F-model)
  Error       N − k   SSE              MSE
  Total       N − 1   SSTotal

  8. Analyzing a Design in One Factor – The One-Way Analysis of Variance Example: Tom’s Tom-Toms tries to boost catalog sales by offering one of four incentives with each purchase: • Free drum sticks • Free practice pad • Fifty dollars off any purchase • No incentive (control group)

  9. Analyzing a Design in One Factor – The One-Way Analysis of Variance Here is a summary of the spending for the month after the start of the experiment. A total of 4000 offers were sent, 1000 per treatment.

  10. Analyzing a Design in One Factor – The One-Way Analysis of Variance Use the summary data to construct an ANOVA table. (This table is most often created using technology.) • Since P is so small, we reject the null hypothesis and conclude that the treatment means differ. • The incentives appear to alter the spending patterns.
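The transcript does not reproduce the slide's summary table, but the calculation it describes can be sketched from per-group summary statistics alone. The means and standard deviations below are hypothetical stand-ins, not the slide's actual figures; only the design (4 treatments, 1000 offers each) comes from the example:

```python
import numpy as np
from scipy import stats

# Hypothetical summary statistics -- the slide's actual figures are not in
# the transcript. Design from the example: k = 4 incentives, n = 1000 each.
means = np.array([57.9, 41.0, 52.0, 44.0])      # illustrative mean spending
sds   = np.array([132.0, 109.9, 126.0, 120.0])  # illustrative group SDs
n     = np.array([1000, 1000, 1000, 1000])

k, N = len(means), n.sum()
grand_mean = (n * means).sum() / N

sst = (n * (means - grand_mean) ** 2).sum()     # between-group sum of squares
sse = ((n - 1) * sds ** 2).sum()                # within-group sum of squares
mst, mse = sst / (k - 1), sse / (N - k)
F = mst / mse
p = stats.f.sf(F, k - 1, N - k)                 # upper-tail p-value

print(f"{'Source':<10}{'df':>6}{'SS':>16}{'MS':>12}{'F':>8}")
print(f"{'Treatment':<10}{k - 1:>6}{sst:>16.0f}{mst:>12.0f}{F:>8.2f}")
print(f"{'Error':<10}{N - k:>6}{sse:>16.0f}{mse:>12.0f}")
print(f"P-value = {p:.4g}")
```

With large per-group variances, even modest differences among the means can yield a small p-value here because n = 1000 per treatment makes the group means very precise.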

  11. Assumptions and Conditions for ANOVA Independence Assumption The groups must be independent of each other. No test can verify this assumption. You have to think about how the data were collected and check that the Randomization Condition is satisfied.

  12. Assumptions and Conditions for ANOVA Equal Variance Assumption ANOVA assumes that the true variances of the treatment groups are equal. We can check the corresponding Similar Variance Condition in various ways: • Look at side-by-side boxplots of the groups. Look for differences in spreads. • Examine the boxplots for a relationship between the mean values and the spreads. A common pattern is increasing spread with increasing mean. • Look at the group residuals plotted against the predicted values (group means). See if larger predicted values lead to larger-magnitude residuals.
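As a numerical supplement to the graphical checks above, Levene's test gives one formal check of the Similar Variance Condition. The groups below are hypothetical:

```python
from scipy import stats

# Hypothetical treatment groups (values are purely illustrative)
group_a = [23.1, 25.4, 24.8, 26.0, 24.2]
group_b = [27.9, 28.3, 26.5, 29.1, 27.4]
group_c = [22.0, 23.5, 21.8, 24.1, 22.9]

# Levene's test: H0 is that the group variances are equal, so a small
# p-value is evidence AGAINST the Equal Variance Assumption
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene statistic = {stat:.3f}, p = {p:.3f}")
```

Levene's test is less sensitive to departures from Normality than the classical Bartlett test, which is one reason it is a common default.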

  13. Assumptions and Conditions for ANOVA Normal Population Assumption Like Student’s t-tests, the F-test requires that the underlying errors follow a Normal model. As before when we faced this assumption, we’ll check a corresponding Nearly Normal Condition. • Examine the boxplots for skewness patterns. • Examine a histogram of all the residuals. • Examine a Normal probability plot.
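The residual checks listed above can be sketched numerically. Here the residuals are simulated (the data are synthetic), and the Shapiro–Wilk test stands in for the histogram's visual check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic example: residuals pooled across three groups after
# subtracting each group's own mean
groups = [rng.normal(loc=m, scale=2.0, size=30) for m in (24.0, 27.0, 23.0)]
residuals = np.concatenate([g - g.mean() for g in groups])

# Shapiro-Wilk: H0 is that the residuals follow a Normal model, so a
# small p-value suggests the Nearly Normal Condition is violated
w_stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p:.3f}")

# For the visual check, a Normal probability plot can be drawn with
# stats.probplot(residuals, plot=ax) on a matplotlib axis
```

For strongly bimodal residuals like those in the Tom's Tom-Toms example, both the histogram and this test would flag the violation.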

  14. Assumptions and Conditions for ANOVA Normal Population Assumption For the Tom’s Tom-Toms experiment, the residuals are not Normal. In fact, the distribution exhibits bimodality.

  15. Assumptions and Conditions for ANOVA Normal Population Assumption The bimodality shows up in every treatment! This bimodality came as no surprise to the manager. He responded, “…customers …either order a complete new drum set, or…accessories… or choose not to purchase anything.”

  16. Assumptions and Conditions for ANOVA Normal Population Assumption These data (and the residuals) clearly violate the Nearly Normal Condition. Does that mean that we can’t say anything about the null hypothesis? No. Fortunately, the sample sizes are large, and there are no individual outliers that have undue influence on the means. With sample sizes this large, we can appeal to the Central Limit Theorem and still make inferences about the means. In particular, we are safe in rejecting the null hypothesis.

  17. Assumptions and Conditions for ANOVA Example: A test prep course A tutoring organization says the 20 students it worked with gained an average of 25 points on a given IQ test when they retook the test after the course. Explain why this does not necessarily prove that the special course caused scores to go up. The students were not randomly assigned; those who signed up for the course may be a special group whose scores would have improved anyway. Design an experiment to test their claim. Give the IQ test to a group of volunteers, randomly assign them to take the review course or not, and re-administer the test after a period of time.

  18. Assumptions and Conditions for ANOVA Example: A test prep course A tutoring organization says the 20 students it worked with gained an average of 25 points on a given IQ test when they retook the test after the course. If it is suspected that students with particularly low scores would benefit more from the course, how would you change your design to account for this suspicion? After the initial test, group the volunteers based on their scores. Randomly assign half of each group to take the review course and half not to, then compare the results from the two treatments within each block of this blocked design.

  19. Multiple Comparisons Knowing that the means differ leads to the question of which ones are different and by how much. Methods that test these issues are called methods for multiple comparisons. Question: Why don’t we simply use a t-test for differences between means to test each pair of group means? Answer: Each t-test is subject to a Type I error, and the chances of committing such an error increase as the number of tested pairs increases.
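The answer above can be quantified. Under the simplifying assumption that the pairwise tests are independent, the chance of at least one Type I error across J tests is 1 − (1 − α)^J:

```python
# Familywise Type I error rate across J pairwise tests, assuming (as a
# simplification) that the tests are independent
alpha = 0.05
for k in (3, 4, 5, 10):
    J = k * (k - 1) // 2              # number of pairs among k groups
    familywise = 1 - (1 - alpha) ** J
    print(f"k = {k:>2} groups: J = {J:>2} tests, "
          f"P(at least one Type I error) = {familywise:.3f}")
```

Even with only a handful of treatment groups, the familywise error rate quickly exceeds the nominal 5%.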

  20. Multiple Comparisons The Bonferroni Method Use the t-test methodology to test differences between the means, but use an inflated value for t* that lessens the accumulated chance of committing a Type I error by keeping the overall Type I error rate at or below α. This wider margin of error is called the minimum significant difference, or MSD. • Let J represent the number of pairs of means. • Then, find the confidence interval for each difference using the confidence level 1 − α/J instead of 1 − α. If a confidence interval does not contain 0, then a significant difference is indicated.
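A sketch of the Bonferroni procedure described above. The group labels and data are hypothetical (loosely echoing the drum-store example), and `stats.t.ppf` supplies the inflated t*:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical treatment groups (labels and values are illustrative only)
groups = {
    "sticks":  np.array([23.1, 25.4, 24.8, 26.0, 24.2]),
    "pad":     np.array([27.9, 28.3, 26.5, 29.1, 27.4]),
    "control": np.array([22.0, 23.5, 21.8, 24.1, 22.9]),
}

alpha = 0.05
pairs = list(combinations(groups, 2))
J = len(pairs)                        # number of pairwise comparisons

# Pooled MSE and its degrees of freedom, as in the ANOVA table
k = len(groups)
N = sum(len(g) for g in groups.values())
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (N - k)

# Bonferroni: inflate t* by using confidence level 1 - alpha/J per pair
t_star = stats.t.ppf(1 - alpha / (2 * J), df=N - k)

for a, b in pairs:
    ga, gb = groups[a], groups[b]
    diff = ga.mean() - gb.mean()
    # Minimum significant difference (MSD): half-width of the interval
    msd = t_star * np.sqrt(mse * (1 / len(ga) + 1 / len(gb)))
    verdict = "differ" if abs(diff) > msd else "not significantly different"
    print(f"{a} vs {b}: difference = {diff:.2f}, MSD = {msd:.2f} -> {verdict}")
```

Comparing each observed difference against the MSD is equivalent to checking whether the Bonferroni-adjusted confidence interval for that difference contains 0.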

  21. • Watch out for outliers. • Watch out for changing variances. • Watch for multiple comparisons. • Be sure to fit an interaction term when it exists. • When the interaction effect is significant, don’t interpret the main effects.

  22. Looking back • Understand how to use Analysis of Variance (ANOVA) to analyze designed experiments. • ANOVA tables follow a standard layout; be acquainted with it. • The F-statistic is the test statistic used in ANOVA. F-statistics test hypotheses about the equality of the means of two or more groups. • The Bonferroni Method is for identifying which treatments are different.
