
Chapter 16


Presentation Transcript


  1. Chapter 16 Chi-Squared Tests

  2. A Common Theme… One data type… …Two techniques

  3. Two Techniques… • The first is a goodness-of-fit test applied to data produced by a multinomial experiment, a generalization of a binomial experiment; it is used to describe a single population of data. • The second uses data arranged in a contingency table to determine whether two classifications of a population of nominal data are statistically independent; this test can also be interpreted as a comparison of two or more populations. • In both cases, we use the chi-squared (χ²) distribution.

  4. The Multinomial Experiment… • Unlike a binomial experiment, which has only two possible outcomes (e.g. heads or tails), a multinomial experiment: • Consists of a fixed number, n, of trials. • Each trial can result in one of k outcomes, called cells. • Each probability pi remains constant. • Our usual notion of probabilities holds, namely: p1 + p2 + … + pk = 1. • Each trial is independent of the other trials.
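As a quick illustration of these conditions (not part of the original slides), the sketch below simulates a single multinomial experiment in Python with NumPy; the cell probabilities are the pre-campaign market shares used later in Example 16.1, and the choice of Python and the seed value are assumptions made only for illustration.

```python
# A minimal sketch of a multinomial experiment, assuming NumPy is available.
import numpy as np

n = 200                             # fixed number of trials
p = [0.45, 0.40, 0.15]              # one probability per cell; p1 + p2 + p3 = 1

rng = np.random.default_rng(seed=1)     # seed chosen arbitrarily for reproducibility
observed = rng.multinomial(n, p)        # counts falling into each of the k = 3 cells
print(observed, observed.sum())         # the cell counts always sum to n
```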

  5. Chi-squared Goodness-of-Fit Test… • We test whether there is sufficient evidence to reject a specified set of values for pi. • To illustrate, our null hypothesis is: • H0: p1 = a1, p2 = a2, …, pk = ak • (where a1, a2, …, ak are the values of interest) • Our research hypothesis is: • H1: At least one pi ≠ ai

  6. Chi-squared Goodness-of-Fit Test… • The test is built on a comparison of the observed frequency and the expected frequency of occurrences in each of the cells. • Example 16.1… • We compare market share before and after an advertising campaign to see if there is a difference (i.e. if the advertising was effective in improving market share). • H0: p1 = a1, p2 = a2, …, pk = ak • where ai is the market share before the campaign. If there was no change, we would expect H0 not to be rejected. If there is evidence to reject H0 in favor of H1: At least one pi ≠ ai, what's a logical conclusion?

  7. Example 16.1… IDENTIFY • Market shares before the advertising campaign… • Company A – 45% • Company B – 40% • All Others – 15% • 200 customers were surveyed after the campaign. The results: • Company A – 102 customers preferred their product. • Company B – 82 customers… • All Others – 16 customers. • Before the campaign, we'd expect 45% of 200 customers (i.e. 90 customers) to prefer Company A's product. After the campaign, we observe that 102 customers prefer it. Does this mean the campaign was effective (at a 5% significance level)?

  8. Example 16.1… [Bar chart comparing the observed and expected frequencies for Companies A and B.] Are these changes statistically significant?

  9. Example 16.1… IDENTIFY • Our null hypothesis is: • H0: pCompanyA = .45, pCompanyB = .40, pOthers = .15 • (i.e. the market shares pre-campaign), and our alternative hypothesis is: • H1: At least one pi ≠ ai • In order to complete our hypothesis testing we need a test statistic and a rejection region…

  10. Chi-squared Goodness-of-Fit Test… • Our chi-squared goodness-of-fit test statistic is given by: • χ² = Σ (fi – ei)² / ei, summed over the k cells, where fi is the observed frequency and ei is the expected frequency in cell i. • Note: this statistic is approximately chi-squared with k–1 degrees of freedom provided the sample size is large. • The rejection region is: χ² > χ²α,k–1, the critical value of the chi-squared distribution with k–1 degrees of freedom and tail area α.
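The statistic and rejection region above can be sketched in Python (an assumption; the slides themselves rely on Excel). The helper names gof_statistic and critical_value are hypothetical, introduced only for this illustration.

```python
# Sketch of the chi-squared goodness-of-fit statistic and its critical value,
# assuming NumPy and SciPy are available.
import numpy as np
from scipy.stats import chi2

def gof_statistic(observed, expected):
    """Sum of (f_i - e_i)^2 / e_i over all k cells."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum((observed - expected) ** 2 / expected))

def critical_value(alpha, k):
    """Boundary of the rejection region: chi-squared with k - 1 degrees of freedom."""
    return chi2.ppf(1 - alpha, df=k - 1)
```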

  11. Example 16.1… COMPUTE • In order to calculate our test statistic, we lay out the data in tabular fashion for easier calculation by hand: • Company A: fi = 102, ei = 200(.45) = 90, (fi – ei)²/ei = 1.60 • Company B: fi = 82, ei = 200(.40) = 80, (fi – ei)²/ei = 0.05 • All Others: fi = 16, ei = 200(.15) = 30, (fi – ei)²/ei = 6.53 • Check that the observed and expected frequencies both total n = 200. Summing the last column gives the test statistic χ² = 8.18.
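A short sketch that reproduces the table above, assuming SciPy is available (the slides do this by hand and in Excel); the observed counts and pre-campaign shares are taken from the slides.

```python
# Example 16.1 computed with scipy.stats.chisquare.
import numpy as np
from scipy.stats import chisquare

observed = np.array([102, 82, 16])        # Company A, Company B, All Others
shares   = np.array([0.45, 0.40, 0.15])   # pre-campaign market shares (H0)
expected = 200 * shares                   # 90, 80, 30; both columns total n = 200

stat, p_value = chisquare(observed, f_exp=expected)
print(round(stat, 2))     # 8.18, matching the hand calculation
print(round(p_value, 4))  # p-value of the test
```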

  12. Example 16.1… INTERPRET • Our rejection region is: χ² > χ².05,k–1 = χ².05,2 = 5.99 • Since our test statistic is 8.18, which is greater than the critical value, we reject H0 in favor of H1, that is, • “There is sufficient evidence to infer that the proportions have changed since the advertising campaigns were implemented.”

  13. Example 16.1… COMPUTE • Note: Table 5 in Appendix B does not allow for the direct calculation of the p-value of this test, so we use Excel to compute it:
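If Excel is not at hand, the same p-value can be read off the chi-squared survival function; the snippet below is a sketch assuming SciPy.

```python
# p-value of the Example 16.1 test statistic (8.18 with k - 1 = 2 degrees of freedom).
from scipy.stats import chi2

stat, df = 8.18, 2
p_value = chi2.sf(stat, df)   # P(chi-squared with 2 df exceeds 8.18), roughly 0.017
print(round(p_value, 4))
```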

  14. Example 16.1… p-value • Note: There are a couple of different ways to calculate the p-value of the test: it can be computed manually from our table of observed and expected frequencies, or computed directly from the raw data.

  15. Required Conditions… • In order to use this technique, the sample size must be large enough so that the expected value for each cell is 5 or more (i.e. n × pi ≥ 5). • If an expected frequency is less than five, combine that cell with other cells to satisfy the condition.
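A minimal check of this condition for Example 16.1, assuming NumPy (the check is trivial by hand; the code only mirrors the rule):

```python
# Rule of five: every expected cell count n * p_i must be at least 5.
import numpy as np

n = 200
p = np.array([0.45, 0.40, 0.15])
expected = n * p                 # 90, 80, 30
print(np.all(expected >= 5))     # True: the condition is satisfied
```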

  16. Identifying Factors… • Factors that identify the Chi-Squared Goodness-of-Fit Test. The expected cell frequencies are computed as ei = (n)(pi).

  17. Chi-squared Test of a Contingency Table • The chi-squared test of a contingency table is used to: • determine whether there is enough evidence to infer that two nominal variables are related, and • infer that differences exist among two or more populations of nominal data. • In order to use these techniques, we need to classify the data according to two different criteria.

  18. Example 16.2… IDENTIFY • The demand for an MBA program’s optional courses and majors is quite variable year over year. • The research hypothesis is that the academic background of the students (i.e. their undergrad degrees) affects their choice of major. • A random sample of data on last year’s MBA students was collected and summarized in a contingency table…

  19. Example 16.2… The Data • [Contingency table: last year's MBA students cross-classified by undergraduate degree and MBA major, with observed counts in each cell.]

  20. Example 16.2… • Again, we are interested in determining whether or not the academic background of the students affects their choice of MBA major. Thus our research hypothesis is: • H1: The two variables are dependent • Our null hypothesis then, is: • H0: The two variables are independent.

  21. Example 16.2… • In this case, our test statistic is: • χ² = Σ (fij – eij)² / eij, summed over all k cells • (where k is the number of cells in the contingency table, i.e. rows × columns) • Our rejection region is: χ² > χ²α,(r–1)(c–1) • where the number of degrees of freedom is (r–1)(c–1)
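The rejection-region boundary is easy to look up in code as well; the sketch below assumes SciPy, and the row and column counts are hypothetical placeholders rather than the actual Example 16.2 dimensions.

```python
# Critical value for a contingency-table test with r rows and c columns.
from scipy.stats import chi2

alpha, r, c = 0.05, 4, 3                       # hypothetical table dimensions
df = (r - 1) * (c - 1)
print(df, round(chi2.ppf(1 - alpha, df), 4))   # degrees of freedom and critical value
```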

  22. Example 16.2… COMPUTE • In order to calculate our test statistic, we need to calculate the expected frequencies for each cell… • The expected frequency of the cell in row i and column j is: • eij = (row i total)(column j total) / sample size

  23. Contingency Table Set-up…

  24. Example 16.2… COMPUTE • Compute the expected frequencies… For example, e23 = (31)(47)/152 = 9.59; compare this to the observed frequency f23 = 7.
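The same expected-frequency calculation in code, using only the totals quoted on this slide:

```python
# e_23 = (row 2 total)(column 3 total) / sample size
row2_total, col3_total, n = 31, 47, 152
e23 = row2_total * col3_total / n
print(round(e23, 2))   # 9.59, as on the slide
```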

  25. Example 16.2… • We can now compare observed with expected frequencies… • and calculate our test statistic: • χ² = Σ (fij – eij)² / eij = 14.70

  26. Example 16.2… INTERPRET • We compare χ² = 14.70 with the critical value χ²α,(r–1)(c–1). • Since our test statistic falls into the rejection region, we reject • H0: The two variables are independent. • in favor of • H1: The two variables are dependent. • That is, there is evidence of a relationship between undergrad degree and MBA major.

  27. Example 16.2… COMPUTE • We can also leverage the tools in Excel to process our data: Tools > Data Analysis Plus > Contingency Table. Compare the reported p-value with the significance level.
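A Python alternative to the Excel add-in is scipy.stats.chi2_contingency, which computes the test statistic, p-value, degrees of freedom, and expected frequencies from a table of observed counts. The 2×3 table below is hypothetical and is not the Example 16.2 data.

```python
# Sketch: full contingency-table test from observed counts, assuming SciPy.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[40, 30, 20],     # hypothetical observed counts
                  [25, 35, 50]])

stat, p_value, dof, expected = chi2_contingency(table, correction=False)
print(stat, p_value, dof)   # test statistic, p-value, (r - 1)(c - 1)
print(expected)             # expected frequencies e_ij under independence
```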

  28. Required Condition – Rule of Five… • In a contingency table where one or more cells have expected values of less than 5, we need to combine rows or columns to satisfy the rule of five. • Note: by doing this, the degrees of freedom must be changed as well.

  29. Identifying Factors… • Factors that identify the Chi-squared test of a contingency table:
