RMTD 404 Lecture 4
Chi Squares • In Chapter 1, you learned to differentiate between quantitative (aka measurement or numerical) and qualitative (aka frequency or categorical) variables. Most of this course will focus on statistical procedures that can be applied to quantitative variables. This chapter, however, focuses on qualitative variables. • This chapter describes four concepts relating to the use of the term chi-square (χ²). They are… • A sampling distribution named the chi-square distribution. • A statistical test for comparing marginal proportions of a categorical variable to theory-based proportions (goodness-of-fit chi-square). • A statistical test for evaluating independence of two categorical variables (Pearson's chi-square). • A statistical test for comparing the relative fit of two models to the same data (likelihood ratio chi-square).
Chi Squares • Recall the normal probability density function, which relates an observed score (x) to the height of the curve (the y axis) as a function of the population mean and standard deviation: f(x) = [1 / (σ√(2π))] e^(−(x − μ)² / (2σ²)).
Chi Squares • The chi-square distribution takes the following form (k is the degrees of freedom): f(x; k) = [1 / (2^(k/2) Γ(k/2))] x^(k/2 − 1) e^(−x/2) for x > 0. • Fortunately, you don't really need to use these equations because tables exist that contain computed values of the areas under these curves, and computer programs automatically produce these areas.
Chi-Squares • An interesting feature of the chi-square distribution is how its shape changes as the parameter k increases. In fact, as k approaches infinity, the chi-square distribution becomes normal in shape. [Figure: chi-square density curves for k = 5, 6, 7, and 8.]
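The following is a minimal Python sketch (not from the lecture; it assumes numpy and scipy are available) that quantifies this normal approximation by comparing the chi-square density to a normal density with the same mean (k) and variance (2k) for increasing k.

```python
# Sketch: how closely a normal curve matches the chi-square density as k grows.
import numpy as np
from scipy import stats

for k in (5, 10, 50, 200):
    x = np.linspace(0, k + 4 * np.sqrt(2 * k), 500)
    chi2_pdf = stats.chi2.pdf(x, df=k)
    # Normal density with the same mean (k) and variance (2k) as chi-square(k)
    norm_pdf = stats.norm.pdf(x, loc=k, scale=np.sqrt(2 * k))
    print(f"k = {k:3d}: largest density difference = {np.max(np.abs(chi2_pdf - norm_pdf)):.4f}")
```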
Chi-Square Goodness of Fit • One application of the chi-square distribution comes into play when we want to compare observed relative frequencies (percentages) to theory-based relative frequencies. Recall that in a hypothesis testing framework, we compare observed statistics to null parameters and determine the likelihood of obtaining the observed statistic due to sampling error under the assumption that the null parameter is correct. • We use the chi-square goodness-of-fit test in contexts in which we have a single categorical variable, and we want to determine whether observed classifications are consistent with a theory. In this case, our observed statistics are the proportions associated with each classification, and our null parameters are the expected proportions for each classification.
Chi-Square Goodness of Fit • For example, we might be interested in determining whether a purposive sample that we have drawn is comparable to the US population with respect to ethnicity. That is, we want to determine whether the observed proportions of Asians, African Americans, Hispanics, and Caucasians in our sample reflect the proportions of these groups in the general population. • In this case, the frequencies and proportions are observed, as in the following table. The theory-based null parameters (shown in the bottom row of the table) are obtained from the US Census.
Chi-Square Goodness of Fit • In the case of the chi-square goodness-of-fit test, the chi-square statistic is defined as χ² = Σ (O − E)² / E, where the sum runs over the k cells of the table. • O is the observed frequency, and k equals the number of classifications in the table (i.e., the number of cells). The expected value (designated E) is defined as the null (i.e., theory-based) proportion for that classification (ρ) times the sample size (N); that is, E = ρN. • Also note the meaning of the expected frequencies (E). These values constitute what we would expect the observed frequencies (O) to be if, indeed, our theory were true. That is, the expected number of cases in each group should be consistent with ρ (our theory-based proportions).
Chi-Square Goodness of Fit • Hence, the chi-square statistic tells us how far, overall, the observed cell frequencies are from the theory-based expectations. • The table below shows the computations for the example. The sum of the last row, the chi-square statistic, equals 33.15.
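As an illustration of the arithmetic described above, here is a minimal Python sketch (assuming numpy and scipy; the counts and proportions are hypothetical placeholders, not the slide's table):

```python
# Goodness-of-fit computation: chi-square = sum of (O - E)^2 / E across cells.
import numpy as np
from scipy import stats

observed = np.array([120, 310, 260, 1310])        # hypothetical observed frequencies (O)
null_props = np.array([0.05, 0.13, 0.17, 0.65])   # hypothetical theory-based proportions (rho)
expected = null_props * observed.sum()            # E = rho * N for each cell

chi_sq = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1                            # k - 1 degrees of freedom
p_value = stats.chi2.sf(chi_sq, df)               # area above the observed chi-square
print(chi_sq, df, p_value)
```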
Chi-Square Goodness of Fit • We can compare our obtained chi-square value of 33.15, which has 3 degrees of freedom (degrees of freedom equals k – 1, the number of columns that are free to vary in the table), to the table values of the chi-square statistic—the range of values that occur due solely to random sampling. According to this table (Appendix on page 671), the critical value of the chi-square distribution with 3 degrees of freedom for α = .05 equals 7.82. • Hence, the observed differences between our sample and our expected values are extremely unlikely if the null hypothesis is true—that the vector of observed probabilities equals the vector of theory-based probabilities.
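If software is available, the tabled critical value can be verified directly; a one-line sketch assuming scipy:

```python
from scipy import stats

# Chi-square critical value for alpha = .05 with 3 degrees of freedom
print(stats.chi2.ppf(0.95, df=3))  # about 7.815, which the textbook table rounds to 7.82
```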
Chi-Square Goodness of Fit • For the sake of being thorough, let's summarize how we would utilize the chi-square goodness-of-fit test. • Determine which test statistic is required for your problem and data. The chi-square goodness-of-fit statistic is relevant when you want to compare the observed frequencies or proportions for a single categorical variable to the frequencies predicted by a theory. • State your research hypothesis—that the observed frequencies were not generated by the population described by your theory. • State the alternative hypothesis: that the observed proportions are not equal to the theory-based proportions (i.e., ρobserved ≠ ρtheory—this is a non-directional test). • State the null hypothesis: that the observed proportions are equal to the theory-based proportions (i.e., ρobserved = ρtheory—here, ρ is the population parameter estimated by p, which is not the p-value but the observed proportion in each group).
Chi-Square Goodness of Fit • Summary of how we would utilize the chi-square goodness-of-fit test … continued. • Compute your observed chi-square value. • Determine the critical value for your test based on your degrees of freedom and desired α level OR determine the p-value for the observed chi-square value based on its degrees of freedom. • Compare the observed chi-square value to your critical value OR compare the p-value for the observed chi-square statistic to your chosen α, and make a decision to reject or retain your null hypothesis. • Make a substantive interpretation of your test results.
Chi-Square Test of Association • A second important application involving the chi-square distribution allows us to evaluate whether two categorical variables are related to one another (aka associated). If they are not related, we say that they are independent of one another (i.e., knowledge about one of the variables does not tell us anything about the other variable). • One way of depicting the relationship between two variables involves creating a contingency table (aka crosstab) showing the frequencies for each pairing of levels of the two variables. Consider the table below, a contingency table comparing the SES quartiles of two ethnicity groups. Note that the cell frequencies within a row or column constitute conditional totals, and the conditional totals in a row or column sum to the marginal totals (i.e., row and column totals).
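A contingency table of this kind can be tabulated directly from raw data; a minimal sketch assuming pandas, with hypothetical variable names and values:

```python
# Build a crosstab with marginal (row and column) totals added.
import pandas as pd

data = pd.DataFrame({
    "ethnicity": ["White", "Black", "White", "Black", "White", "Black"],
    "ses_quartile": [1, 1, 2, 3, 4, 2],
})
table = pd.crosstab(data["ethnicity"], data["ses_quartile"], margins=True)
print(table)  # the "All" row and column hold the marginal totals
```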
Chi-Square Test of Association • The distinction between conditional and marginal distributions is an important one because it highlights the manner in which the Pearson chi-square test of association is linked to the hypothesis testing framework. • Specifically, when we believe that there is no relationship between ethnicity and SES, we can predict cell frequencies based on the marginal frequencies. That is, when there is no relationship between the two variables, the conditional distributions of ethnicity across the SES quartiles should be similar enough to one another that we can conclude that any observed differences are due to sampling error. Specifically, when there is no association, all of the conditional distributions should merely be random deviations from the marginal distributions. • Sample 1 gives us p11 and p21. Sample 2 gives us p12 and p22. Sample 3 gives us p13 and p23. Sample 4 gives us p14 and p24. Under the null hypothesis, all of these differ from the marginal distribution, p1+ and p2+, only because of sampling error.
Chi-Square Test of Association • To perform the Pearson chi-square test of association we do the following: • Determine which statistic is required for your problem and data. The chi-square test of association is relevant when you want to compare observed frequencies of two categorical variables to those implied by the table's marginal totals when no relationship exists. • State the research hypothesis: There is a relationship between ethnicity and SES in the population. • State the alternative hypothesis: Ethnicity and SES are associated in the population. • State the null hypothesis: Ethnicity and SES are independent (note that our test determines whether the observed ps are too different to have been generated from the same p due to random sampling variation). • Compute the chi-square statistic.
Chi-Square Test of Association • Determine the critical value for your test based on your degrees of freedom [(R − 1)(C − 1)] and desired α level OR determine the p-value for the observed test statistic. • Compare the observed chi-square value to your critical value OR compare the p-value for the observed chi-square to your chosen α, and make a decision to reject or retain your null hypothesis. • Make a substantive interpretation of your test results.
Chi-Square Test of Association • Again, recall the formula for the chi-square statistic: χ² = Σ (O − E)² / E. • Note that in this case, we sum the values across both rows and columns. In the case of the test of association, we compute our expected values based on the marginal totals (rather than stating them based on substantive theory): E = (Ri × Cj) / N, where Ri is the row total for the cell, Cj is the column total for the cell, and N is the total sample size. • However, just because our expected values come from the observed data does not mean that we are not imposing a substantive theory on their generation. In fact, we are imposing a substantive theory. Computing expected frequencies based on the marginal totals implies that the cell frequencies depend only on the marginal distributions of ethnicity and SES. That is, the proportion of members in each ethnicity group will be the same at each level of SES. Hence, our computation of the expected values imposes a theory-based assumption that there is no relationship between ethnicity and SES.
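A brief sketch of this expected-frequency computation (hypothetical counts, assuming numpy):

```python
# Expected cell frequencies under independence: E_ij = R_i * C_j / N.
import numpy as np

observed = np.array([[30, 25, 20, 25],    # hypothetical 2 x 4 contingency table
                     [20, 25, 30, 25]])
row_totals = observed.sum(axis=1)
col_totals = observed.sum(axis=0)
n = observed.sum()
expected = np.outer(row_totals, col_totals) / n
print(expected)
```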
Chi-Square Test of Association • For our example, we need to determine the critical value for our hypothesis test. To do this, we need to state an α level—we'll use the traditional .05 level. We also need to know the degrees of freedom for the test—recall that the shape of the chi-square distribution changes depending on the value of k. • Degrees of freedom is a concept that will reappear several times in this course. We use the term degrees of freedom to relay the fact that only a portion of the values in our data set are free to vary once we impose the null assumptions. In our case, only some of the cell frequencies are free to vary once we impose the marginal totals on the table. As shown, in our case only three of the cells are free to vary. • In general, the degrees of freedom for a chi-square test of association are defined by df = (R − 1)(C − 1), where R is the number of rows and C is the number of columns.
Chi-Square Test of Association • That means that the degrees of freedom for our example equals 3 [or (2 – 1)(4 – 1)]. From the chi-square table, we see that the critical value for our test equals 7.82.
Chi-Square Test of Association • Here are the expected frequencies for each cell. • And the differences between the observed and expected frequencies (aka residuals). Notice that, relative to the expected frequencies under the null assumption of no association, blacks are overrepresented in the first SES quartile and whites are overrepresented in quartiles 2 through 4.
Chi-Square Test of Association • Here are the squared differences between observed and expected frequencies. • And here are the squared differences divided by the expected frequencies. Each of these is equivalent to a chi-square with one degree of freedom. Based on this, we see that the largest deviations are due to the frequency of blacks in the extreme SES quartiles. The sum of these values is our chi-square statistic (15.07).
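The entire sequence of computations on the last few slides can be reproduced in one call; a sketch assuming scipy, with hypothetical 2 × 4 counts rather than the slide's actual table:

```python
# Pearson chi-square test of association from a contingency table.
import numpy as np
from scipy import stats

observed = np.array([[40, 22, 18, 20],     # hypothetical ethnicity x SES counts
                     [60, 78, 82, 80]])
chi2, p, df, expected = stats.chi2_contingency(observed, correction=False)
residuals = observed - expected            # observed minus expected, as on the slides
print(chi2, df, p)
print(np.round(residuals, 1))
```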
Chi-Square Test of Association • Because our observed chi-square of 15.07 is greater than our critical value (7.82), we reject the null hypothesis and conclude that there is a relationship between ethnicity and SES. Note that the p-value for the observed statistic (which is reported by most statistical software—the critical value typically is not reported) equals .002 (less than α = .05), indicating that the observed pattern of cell frequencies is highly unlikely under the null assumption of independence. • A substantive interpretation of this test might read something like this: • A chi-square test of association indicated that ethnicity and SES are related, χ2(3) = 15.07, p = .002. Examination of the residuals indicates that blacks appear in the low SES category too frequently and in the high SES category too infrequently to assume that the observed frequencies are random departures from a model of independence.
Example – Goodness of Fit • 6.1 The chairperson of a psychology department suspects that some of her faculty members are more popular with students than are others. There are three sections of introductory psychology, taught at 10:00am, 11:00am, and 12:00pm by Professors Anderson, Klatsky, and Kamm. The number of students who enroll for each is • State the null hypothesis, run the appropriate chi-square test, and interpret the results.
Example – Goodness of Fit • The null hypothesis in this study is that, in the population, students enroll in the three sections at random (i.e., in equal proportions). • H0: πAnderson = πKlatsky = πKamm • The critical value is χ²(2) = 5.99 at the .05 level. So?
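A sketch of how this test could be run in Python (assuming scipy; the enrollment counts below are placeholders, not the exercise's data):

```python
# Goodness-of-fit test with equal expected proportions across the three sections.
from scipy import stats

enrollments = [32, 25, 10]              # placeholder counts for Anderson, Klatsky, Kamm
result = stats.chisquare(enrollments)   # expected frequencies are equal by default
print(result.statistic, result.pvalue)  # compare the statistic to the critical value 5.99
```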
Example – Goodness of Fit • 6.2 The data in 6.1 will not really answer the question the chairperson wants answered. What is the problem, and how could the experiment be improved?
Example – Test of Association • 6.8 We know that smoking has all sorts of ill effects on people; among other things, there is evidence that it affects fertility. Weinberg and Gladen (1986) examined the effects of smoking on the ease with which women become pregnant. The researchers asked 586 women who had planned pregnancies how many menstrual cycles it had taken for them to become pregnant after discontinuing contraception. Weinberg and Gladen also sorted the women according to whether they were smokers or nonsmokers. The data follow. • Is there an association between smoking and pregnancy?
Example – Test of Association • The expected values are shown in parentheses. • The critical value is χ²(2) = 5.99 at the .05 level. So?
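A sketch of the test of association for this exercise (assuming scipy; the 2 × 3 counts below are illustrative placeholders rather than the exercise's actual table):

```python
# Test of association between smoking status and cycles to pregnancy (1, 2, 3+).
import numpy as np
from scipy import stats

observed = np.array([[30, 20, 50],       # placeholder counts: smokers
                     [200, 110, 180]])   # placeholder counts: nonsmokers
chi2, p, df, expected = stats.chi2_contingency(observed, correction=False)
print(np.round(expected, 1))             # compare to the expected values in parentheses
print(chi2, df, p)                       # compare the statistic to the critical value 5.99
```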