XII. Justifying the ANOVA-based hypothesis test
XII.A The sources for an ANOVA
XII.B The sums of squares for an ANOVA
XII.C Degrees of freedom of the sums of squares for an ANOVA
XII.D Expected mean squares for an ANOVA
XII.E The distribution of the F statistics for an ANOVA
XII.F Application of theory for ANOVA-based hypothesis test to an example
Introduction
• In chapter VI we gave a procedure for determining an ANOVA table.
• In this chapter we look at the theory that justifies this procedure.
• In particular, we examine the basis of the SSqs, dfs, E[MSq]s and the distribution of the F statistics.
XII.A The sources for an ANOVA
• There are various methods for determining the sources to include in an ANOVA table.
• Perhaps the most popular is to try to identify the situation in a textbook that is closest to yours.
• The big disadvantage is that the match may not be exact, so that the wrong analysis is used.
• Our approach is based on dividing the factors into unrandomized and randomized factors.
Our approach
1. identify the unrandomized and randomized factors;
2. determine the structure formulae that describe the crossing and nesting relations
   a) amongst the unrandomized factors and
   b) amongst the randomized factors (and in some cases between the randomized and some unrandomized factors);
3. expand the structure formulae to obtain two sets of sources;
4. form the lines in the ANOVA table by
   a) listing the unrandomized sources in the table,
   b) entering the randomized sources indented under the appropriate unrandomized sources, and
   c) adding Residuals for unrandomized sources where there are df left over.
• This extracts the key features of the experiment arising from the randomization and captures them in the structure formulae for use in determining the terms in the analysis.
• The ANOVA table produced using this method displays the confounding arising from the randomization, something that traditional tables do not do.
XII.B The SSqs for an ANOVA
• Derivation of the SSqs, given the sources in the ANOVA, is based on the S matrices.
• For a structure formula, one must identify the generalized factors corresponding to the sources derived from that formula.
• For each generalized factor there is an S that specifies the units that are related in having the same level of that generalized factor.
• For example, S_Blocks has a one for every pair of units that occur in the same block.
• Generally we have two sets of relationship (S) or, equivalently, mean-operator matrices (M = (n/f)^-1 S = g^-1 S): one for the unrandomized factors and one for the randomized factors. (Relationship matrices are also called summation matrices.)
• A set of such matrices forms the basis for an algebra called a relationship algebra.
• Associated with a set of Ss is a set of Qs, again one Q for each generalized factor, that are the idempotents of the relationship algebra and provide the quadratic-form matrices for the analysis.
• Under certain conditions, met by all our examples, a set of Qs derived from a structure formula forms a complete set of mutually-orthogonal idempotents (CSMOI; see defn XI.10).
• So we form the SSqs of the projections of the data vector into the subspaces projected onto by the Qs.
Obtaining expressions for Qs and Ms
• Rule VI.6 is used to obtain expressions for the Qs in terms of the Ms.
• The following rule is used to obtain expressions for the Ss (and Ms) as direct products of I and J matrices. (It generalizes Rule XI.1.)
Rule XII.1: The S for a generalized factor, formed as a subset of a set of s equally-replicated factors that uniquely index the units, is the direct product of s matrices, provided the s factors are arranged in standard order. Taking the s factors in the sequence specified by the standard order,
• for each factor included in the generalized factor, include an I matrix in the direct product, and
• a J matrix for each factor that is not.
• The order of a matrix in the direct product is equal to the number of levels of the corresponding factor.
• The M matrix is the same direct product of I and J matrices, except that each J is multiplied by the reciprocal of its order.
• The rule applies to factors from the unrandomized structure.
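To illustrate rule XII.1, the following minimal numpy sketch builds S_Blocks and M_Blocks as Kronecker (direct) products for a made-up layout of b = 2 blocks each containing t = 3 units in standard order, and checks the relation M = g^-1 S from section XII.B; the sizes are purely illustrative.

```python
import numpy as np

b, t = 2, 3                       # hypothetical: 2 blocks, each of 3 units, in standard order
I_b, I_t = np.eye(b), np.eye(t)
J_b, J_t = np.ones((b, b)), np.ones((t, t))

# Rule XII.1: for S_Blocks, the factor Blocks contributes an I and Units contributes a J
S_B = np.kron(I_b, J_t)           # 1 wherever two units lie in the same block
# M_Blocks is the same direct product with each J divided by its order (here t)
M_B = np.kron(I_b, J_t / t)

g = t                             # each block level is replicated g = n/f = t times
print(np.allclose(M_B, S_B / g))  # M = g^-1 S, as in section XII.B
print(S_B)
```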
Ms are symmetric and idempotent
• We prove this useful result, for those Ms that can be expressed as a direct product of I and J matrices, in the next lemma.
Lemma XII.1: Let M be the direct product of I and J matrices, the latter multiplied by the reciprocal of their order, as prescribed in rule XII.1. Then M is symmetric and idempotent.
Proof: Since (A ⊗ B)' = A' ⊗ B' and I and J are both symmetric, M must also be symmetric.
• Since (A ⊗ B)(C ⊗ D) = AC ⊗ BD and, in particular, (A ⊗ B)(A ⊗ B) = A² ⊗ B², M² must be the direct product of I matrices and the squares of the J matrices, each J having been multiplied by the reciprocal of its order.
• But I² = I and (k^-1 J_k)² = k^-1 J_k, since J_k² = k J_k for a J of order k.
• Consequently, M² and M are the same direct product of I and J matrices, the latter multiplied by the reciprocal of their order, so that M² = M.
XII.C Degrees of freedom of the SSqs for an ANOVA
• Definition XII.1: The degrees of freedom of a sum of squares is the rank of the idempotent of its quadratic form. That is, the degrees of freedom of Y'QY is given by rank(Q).
• The following definition and lemma establish that the trace of an idempotent is the same as its rank and hence gives its df.
• Definition XII.2: The trace of a square matrix is the sum of its diagonal elements.
• Lemma XII.2: For B idempotent, trace(B) = rank(B).
• Proof: The proof is based on the following facts:
  • the trace of a matrix is equal to the sum of its eigenvalues;
  • the rank of a matrix is equal to the number of its nonzero eigenvalues; and
  • the eigenvalues of an idempotent can be shown to be either 1 or 0, so that their sum equals the number that are nonzero.
Results on traces of matrices
• Clearly, the dfs can be established by determining the trace of a matrix, so here are some results on traces.
• Lemma XII.3: Let c be a scalar and A, B and C be matrices. Then, whenever the appropriate operations are defined, we have
  • trace(A') = trace(A)
  • trace(cA) = c trace(A)
  • trace(A + B) = trace(A) + trace(B)
  • trace(AB) = trace(BA)
  • trace(ABC) = trace(CAB) = trace(BCA)
  • trace(A ⊗ B) = trace(A) trace(B)
  • trace(A'A) = 0 if and only if A = 0.
Trace of a Q
• Each Q matrix is a linear combination of M matrices, so that trace(Q) is the same linear combination of the traces of the Ms.
• In the next lemma we prove that the trace of an M is equal to the number of levels of its generalized factor, when the corresponding summation matrix is a direct product of I and Js.
Lemma XII.4: Let M_F be the mean-operator matrix for a generalized factor F that has f levels, each replicated n/f = g times, and let S_F be the corresponding summation matrix that is a direct product of I and Js. Then trace(M_F) = f.
Proof:
• Now trace(M_F) = g^-1 trace(S_F).
• But S_F is a direct product of I and Js.
• However, trace(A ⊗ B) = trace(A) trace(B) and the traces of I and J matrices are equal to their orders.
• As the product of the orders of the I and Js for any S must be n, trace(S_F) = n and so trace(M_F) = g^-1 trace(S_F) = n/g = f.
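A quick numerical check of lemmas XII.2 and XII.4, reusing the same made-up layout (b = 2, t = 3) and taking Q_B = M_B - M_G from rule VI.6:

```python
import numpy as np

b, t = 2, 3                                   # the same made-up layout as before
n = b * t
M_G  = np.ones((n, n)) / n                    # grand-mean operator
M_B  = np.kron(np.eye(b), np.ones((t, t)) / t)
M_BU = np.eye(n)

# Lemma XII.4: trace(M_F) equals the number of levels f of the generalized factor
print(np.trace(M_G), np.trace(M_B), np.trace(M_BU))   # 1, b, n

# Q_B = M_B - M_G is idempotent; by lemma XII.2 its rank equals its trace
Q_B = M_B - M_G
print(np.trace(Q_B), np.linalg.matrix_rank(Q_B))      # both equal b - 1
```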
XII.D Expected mean squares for an ANOVA
• As previously stated, the E[MSq]s are just the average or mean values of the MSqs under sampling from a population whose Y variables behave as described by the linear model,
• i.e. the true mean value of a mean square.
• It depends on the model parameters.
• To derive the expected values, we note that the general form of an MSq is a quadratic form divided by its df, Y'QY/ν.
• So we first establish an expression for the expectation of an arbitrary quadratic form.
Expectation of a quadratic form
• Theorem XII.1: Let Y be an n × 1 vector of random variables with E[Y] = ψ and var[Y] = V, where ψ is an n × 1 vector of expected values and V is an n × n matrix.
• Let A be an n × n matrix of real numbers.
• Then E[Y'AY] = trace(AV) + ψ'Aψ.
Proof of theorem XII.1
• Firstly note that, as Y'AY is a scalar, Y'AY = trace(Y'AY).
• Also, recall that traces of matrix products are cyclically commutative.
• Thus, E[Y'AY] = E[trace(Y'AY)] = E[trace(AYY')].
• Now, for an n × n matrix Z, E[trace(Z)] = trace(E[Z]), since the trace is a sum and the expectation of a sum is the sum of the expectations.
• Also, lemma XI.5 (E[Y] results) states that E[AY] = A E[Y] and this can be extended to E[AYY'] = A E[YY'].
• Hence, E[trace(AYY')] = trace(E[AYY']) = trace(A E[YY']).
• Now we require an expression for E[YY'], which we obtain by considering definition I.7 of V = E[(Y − ψ)(Y − ψ)'].
• From this, V = E[YY' − Yψ' − ψY' + ψψ'] = E[YY'] − E[Yψ'] − E[ψY'] + ψψ'.
Proof of theorem XII.1 (continued)
• Now, E[ψ] = ψ, as the elements of ψ are population quantities and are constants with respect to expectation.
• Thus E[Yψ'] = E[Y]ψ' = ψψ' = E[ψY'].
• Hence, V = E[YY'] − E[Yψ'] − E[ψY'] + ψψ' = E[YY'] − ψψ', so that E[YY'] = V + ψψ'.
• This leads to E[Y'AY] = trace(A E[YY']) = trace(A(V + ψψ')) = trace(AV) + trace(Aψψ') = trace(AV) + ψ'Aψ.
Theorem for E[MSq]
• Theorem XII.2: Let Y be an n × 1 vector of random variables with E[Y] = ψ and var[Y] = V, where ψ is an n × 1 vector of expected values and V is an n × n matrix.
• Let Y'QY/ν be the mean square, where Q is an n × n symmetric, idempotent matrix and ν is the degrees of freedom of the sum of squares.
• Then E[Y'QY/ν] = (trace(QV) + ψ'Qψ)/ν.
• Proof: Since ν is a constant, E[Y'QY/ν] = E[Y'QY]/ν and the result follows straightforwardly using theorem XII.1.
• So to derive E[MSq] for a particular source under a specific model, one substitutes
  • the Q matrix for the source and
  • the ψ and V for the model
into the expression E[Y'QY/ν] = (trace(QV) + ψ'Qψ)/ν given by theorem XII.2.
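Theorem XII.1 can be checked by simulation; in the minimal sketch below the particular ψ, V and A are arbitrary made-up values, chosen only to exercise the formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
psi = np.array([1.0, 2.0, 3.0, 4.0])          # hypothetical mean vector
L = rng.normal(size=(n, n))
V = L @ L.T                                    # an arbitrary positive-definite variance matrix
A = np.diag([1.0, 2.0, 3.0, 4.0])              # an arbitrary symmetric A

# simulate Y ~ N(psi, V) and average the quadratic form Y'AY
Y = rng.multivariate_normal(psi, V, size=200_000)
mc = np.mean(np.einsum('ij,jk,ik->i', Y, A, Y))

theory = np.trace(A @ V) + psi @ A @ psi       # trace(AV) + psi'A psi
print(mc, theory)                              # the two should agree closely
```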
XII.E The distribution of the F statistics for an ANOVA
• Each F statistic in an ANOVA is used to assess whether or not a null hypothesis is likely to be true, by determining whether the value of our test statistic is unlikely when H0 is true.
• To do this we need to establish the sampling distribution of the test statistic when the null hypothesis is true.
• The test statistic is of the following general form: F = (Y'Q_1Y/ν_1)/(Y'Q_2Y/ν_2).
• That is, it is the ratio of two MSqs, each of which is a quadratic form divided by its df.
• In the following three theorems we will:
  • give the distribution of a quadratic form;
  • establish the relationship between several quadratic forms;
  • obtain the distribution of the ratio of two independent quadratic forms.
Distribution of a quadratic form
• Theorem XII.3: Let A be an n × n symmetric matrix of rank ν and Y be an n × 1 normally distributed random vector with E[AY] = 0, var[Y] = V and E[Y'AY/ν] = λ.
• Then (1/λ)Y'AY follows a chi-squared distribution with ν degrees of freedom if and only if A is idempotent.
• Proof: not given.
• The chi-square probability distribution function for a random variable U with ν degrees of freedom is f(u) = u^(ν/2 − 1) e^(−u/2) / (2^(ν/2) Γ(ν/2)), for u > 0.
Relationships between idempotents
• Theorem XII.4: Let Y be an n × 1 vector of random variables with E[Y] = Xθ and var[Y] = V.
• Let Y'A_1Y, Y'A_2Y, …, Y'A_hY be a collection of h quadratic forms where, for each i = 1, 2, …, h,
  • A_i is symmetric, of rank ν_i,
  • E[A_iY] = 0 and
  • E[Y'A_iY/ν_i] = λ_i.
• If any two of the following three statements are true (any two implies the third):
  1. all A_i are idempotent,
  2. A = Σ_i A_i is idempotent,
  3. A_iA_j = 0 for i ≠ j,
• then not only does (1/λ_i)Y'A_iY, for each i, follow a chi-squared distribution with ν_i degrees of freedom, as theorem XII.3 establishes, but also
  • Y'A_iY and Y'A_jY are independent for i ≠ j, and
  • Σ_i ν_i = ν, where ν denotes the rank of A = Σ_i A_i.
• Proof: not given.
Distribution of the ratio of two independent quadratic forms
• Theorem XII.5: Let U_1 and U_2 be two random variables distributed as chi-squares with ν_1 and ν_2 degrees of freedom, respectively.
• Then, provided U_1 and U_2 are independent, the random variable W = (U_1/ν_1)/(U_2/ν_2) is distributed as Snedecor's F with ν_1 and ν_2 degrees of freedom.
• Proof: not given.
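Theorem XII.5 can likewise be checked by simulation; the sketch below uses ν_1 = 3 and ν_2 = 46 (the pair of degrees of freedom pictured on the next slide) and compares the simulated upper 5% point of W with the SciPy F quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu1, nu2 = 3, 46                       # the same pair as the F pdf shown on the next slide
U1 = rng.chisquare(nu1, size=100_000)  # independent chi-squares
U2 = rng.chisquare(nu2, size=100_000)
W = (U1 / nu1) / (U2 / nu2)

# compare the simulated upper 5% point with the theoretical F quantile
print(np.quantile(W, 0.95), stats.f.ppf(0.95, nu1, nu2))
```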
F distribution for W: plot of the probability distribution function for F with 3 and 46 degrees of freedom (figure).
XII.F Application of theory for ANOVA-based hypothesis test to an example
a) The sources for an ANOVA
b) The sums of squares for an ANOVA
c) Degrees of freedom of the sums of squares for an ANOVA
d) Expected mean squares for an ANOVA
e) The distribution of the F statistics for an ANOVA
Example XII.1 Randomized Complete Block Design
a) The sources for an ANOVA
• For the RCBD, the unrandomized factors are Blocks and Units and the randomized factor is Treatments.
• The unrandomized structure formula is b Blocks/t Units and the randomized structure formula is t Treatments.
• The unrandomized sources are Blocks and Units[Blocks] and the randomized source is Treatments.
• The sources for the ANOVA table for this experiment are:
  Blocks
  Units[Blocks]
    Treatments
    Residual
Example XII.1 Randomized Complete Block Design (continued)
b) The sums of squares for an ANOVA
• The generalized factors are
  • unrandomized: (G), Blocks and Blocks∧Units, and
  • randomized: (G) and Treatments.
• So the Qs are
  • unrandomized: Q_G, Q_B and Q_BU, and
  • randomized: Q_G and Q_T.
• Since each of these sets is derived from a structure formula consisting of only nesting and crossing operators, each forms a CSMOI.
Example XII.1 Randomized Complete Block Design (continued)
b) The sums of squares for an ANOVA (continued)
• The Hasse diagrams of generalized-factor marginalities, which include expressions for the Qs in terms of the Ms, give:
  • unrandomized: Q_G = M_G, Q_B = M_B − M_G and Q_BU = M_BU − M_B;
  • randomized: Q_G = M_G and Q_T = M_T − M_G.
Example XII.1 Randomized Complete Block Design (continued)
b) The sums of squares for an ANOVA (continued)
• If the data is ordered in standard order for Blocks and Treatments, and if Units are given the same numbering as Treatments (this doesn't affect the analysis), rule XII.1 (S, M as direct products) yields:
  M_BU = I_b ⊗ I_t, M_B = I_b ⊗ t^-1 J_t and M_G = (bt)^-1 (J_b ⊗ J_t);
  and M_T = b^-1 J_b ⊗ I_t.
• The latter requires the inclusion of a factor for replicates, here Blocks; this factor never occurs in its generalized factors.
• So the ANOVA table with SSqs added is:
  Blocks           Y'Q_BY   = Y'(M_B − M_G)Y
  Units[Blocks]    Y'Q_BUY  = Y'(M_BU − M_B)Y
    Treatments     Y'Q_TY   = Y'(M_T − M_G)Y
    Residual       Y'Q_ResY = Y'(M_BU − M_B − M_T + M_G)Y
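To make the construction concrete, here is a minimal numpy sketch (the sizes and the data vector are made up) that builds these M and Q matrices with Kronecker products and checks that the Blocks, Treatments and Residual SSqs add to the total SSq about the grand mean.

```python
import numpy as np

b, t = 3, 4                                   # hypothetical: 3 blocks, 4 treatments
n = b * t
J_b, J_t = np.ones((b, b)), np.ones((t, t))

# M matrices from rule XII.1 (data in standard order for Blocks then Treatments)
M_G  = np.kron(J_b / b, J_t / t)
M_B  = np.kron(np.eye(b), J_t / t)
M_BU = np.eye(n)
M_T  = np.kron(J_b / b, np.eye(t))

# Q matrices for the sources of the RCBD
Q_B   = M_B - M_G
Q_T   = M_T - M_G
Q_Res = M_BU - M_B - M_T + M_G

rng = np.random.default_rng(2)
y = rng.normal(size=n)                        # made-up data vector

ss_B, ss_T, ss_Res = (y @ Q @ y for Q in (Q_B, Q_T, Q_Res))
ss_total = y @ (M_BU - M_G) @ y               # total SSq about the grand mean
print(ss_B, ss_T, ss_Res)
print(np.isclose(ss_B + ss_T + ss_Res, ss_total))   # the SSqs add up
```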
Example XII.1 Randomized Complete Block Design (continued)
c) Degrees of freedom of the SSqs for an ANOVA
• The following theorem establishes that the degrees of freedom of the sums of squares are as given in the analysis of variance table.
• Theorem XII.6: Let the quadratic forms for the SSqs be Y'Q_UY, Y'Q_BY, Y'Q_BUY, Y'Q_TY and Y'Q_ResY,
  where Q_U = M_BU − M_G, Q_B = M_B − M_G, Q_BU = M_BU − M_B, Q_T = M_T − M_G and Q_Res = M_BU − M_B − M_T + M_G,
  with M_BU = I_b ⊗ I_t, M_B = I_b ⊗ t^-1 J_t, M_T = b^-1 J_b ⊗ I_t and M_G = (bt)^-1 (J_b ⊗ J_t).
• Their degrees of freedom are n−1, b−1, b(t−1), t−1 and (b−1)(t−1), respectively,
• where n is the number of observations, b is the number of blocks and t is the number of treatments.
Proof of theorem XII.6
• First we establish that all the Q matrices are symmetric and idempotent, so that we can utilise lemma XII.2 to conclude that the ranks of the Q matrices are equal to their traces.
• Now Q_B, Q_BU and Q_T are symmetric and idempotent as they are in the complete sets of mutually-orthogonal idempotents.
• It remains to demonstrate that Q_U and Q_Res are symmetric and idempotent.
Proof of theorem XII.6 (continued)
• Firstly, from lemma I.1 (transpose properties), (cA + dB)' = cA' + dB', and we have from lemma XII.1 that the Ms are symmetric and idempotent, so that Q_U' = (M_BU − M_G)' = M_BU − M_G = Q_U.
• Now M_BU = I_n, so that Q_U² = (I_n − M_G)(I_n − M_G) = I_n − 2M_G + M_G² = I_n − M_G = Q_U.
• That is, Q_U is symmetric and idempotent.
Proof of theorem XII.6 (continued)
• Secondly, Q_Res' = (M_BU − M_B − M_T + M_G)' = M_BU − M_B − M_T + M_G = Q_Res, as each M is symmetric.
• Further, Q_Res² = (M_BU − M_B − M_T + M_G)(M_BU − M_B − M_T + M_G), so that we require the pairwise products of the Ms.
Proof of theorem XII.6 (continued)
• Now M_BU = I_n, each M is idempotent, and M_BM_T = M_TM_B = M_BM_G = M_GM_B = M_TM_G = M_GM_T = M_G,
• and so, expanding the product, Q_Res² = M_BU − M_B − M_T + M_G = Q_Res.
• That is, Q_Res is symmetric and idempotent.
Proof of theorem XII.6 (continued)
• Consequently, lemma XII.2 (rank = trace) applies to the Qs, so that the ranks of all the Q matrices are equal to their traces.
• We also note that, using lemma XII.4 (trace(M_F) = f), we have that trace(M_G) = 1, trace(M_B) = b, trace(M_T) = t and trace(M_BU) = n = bt.
• Since, from lemma XII.3 (traces), trace(A + B) = trace(A) + trace(B), we have that
  trace(Q_U) = n − 1, trace(Q_B) = b − 1, trace(Q_BU) = n − b = b(t−1), trace(Q_T) = t − 1 and trace(Q_Res) = n − b − t + 1 = (b−1)(t−1),
• which are the degrees of freedom claimed.
ANOVA table with degrees of freedom added:
  Blocks           b−1
  Units[Blocks]    b(t−1)
    Treatments     t−1
    Residual       (b−1)(t−1)
d) Expected mean squares for an ANOVA
• We now prove the E[MSq]s for the case of Blocks random.
• But first a useful lemma.
• Lemma XII.5: Let V = σ²_B S_B + σ²_BU I_n, where S_B = I_b ⊗ J_t.
• Then V = tσ²_B M_B + σ²_BU M_BU.
Proof of lemma XII.5
• First we have S_B = I_b ⊗ J_t = t(I_b ⊗ t^-1 J_t) = tM_B, and I_n = M_BU,
• so that V = σ²_B S_B + σ²_BU I_n = tσ²_B M_B + σ²_BU M_BU.
Theorem XII.7
• Let ψ = E[Y] = X_Tτ and V = var[Y] = σ²_B S_B + σ²_BU I_n.
• Then, the expected mean squares are
  E[SS_B/(b−1)] = σ²_BU + tσ²_B,
  E[SS_T/(t−1)] = σ²_BU + q_T(ψ), and
  E[SS_Res/((b−1)(t−1))] = σ²_BU,
• where q_T(ψ) = ψ'Q_Tψ/(t−1) = b Σ_j (τ_j − τ̄)²/(t−1), τ̄ = Σ_j τ_j/t, τ_j is the jth element of the t-vector τ, b is the number of blocks and t is the number of treatments.
Proof of theorem XII.7
• First note that we can write ψ = E[Y] = X_Tτ and that, since every treatment occurs once in each block, Q_BX_T = 0, so that ψ'Q_Bψ = 0.
• For E[SS_B/(b−1)], we use theorem XII.2 (E[Y'QY/ν] = (trace(QV) + ψ'Qψ)/ν) and that it can be shown that Q_BM_BU = Q_B and Q_BM_B = Q_B, to obtain the following expression:
  E[SS_B/(b−1)] = (trace(Q_BV) + ψ'Q_Bψ)/(b−1) = trace(Q_BV)/(b−1).
Proof of theorem XII.7 (continued)
• Now from theorem XII.6 (RCBD dfs) we have that trace(Q_B) = b−1 and, from lemma XII.5, Q_BV = Q_B(tσ²_B M_B + σ²_BU M_BU) = (tσ²_B + σ²_BU)Q_B, so that trace(Q_BV) = (b−1)(tσ²_B + σ²_BU).
• Hence, the expected mean square is E[SS_B/(b−1)] = σ²_BU + tσ²_B.
• The proof that E[SS_T/(t−1)] = σ²_BU + q_T(ψ) is left as an exercise for you.
Proof of theorem XII.7 (continued)
• Now, for E[SS_Res/((b−1)(t−1))], again we use theorem XII.2 (E[MSq]s) and also that it can be shown that Q_ResM_BU = Q_Res, Q_ResM_B = 0 and Q_ResX_T = 0,
• so that we have E[SS_Res/((b−1)(t−1))] = (trace(Q_ResV) + ψ'Q_Resψ)/((b−1)(t−1)) = trace(Q_ResV)/((b−1)(t−1)).
Proof of theorem XII.7 (continued)
• Now from theorem XII.6 (RCBD dfs) we have that trace(Q_Res) = (b−1)(t−1) and, from lemma XII.5, Q_ResV = Q_Res(tσ²_B M_B + σ²_BU M_BU) = σ²_BU Q_Res, so that trace(Q_ResV) = (b−1)(t−1)σ²_BU.
• Hence E[SS_Res/((b−1)(t−1))] = σ²_BU, as claimed.
ANOVA table with E[MSq]s added:
  Blocks           b−1           σ²_BU + tσ²_B
  Units[Blocks]    b(t−1)
    Treatments     t−1           σ²_BU + q_T(ψ)
    Residual       (b−1)(t−1)    σ²_BU
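These E[MSq]s can be checked by simulating the RCBD model with Blocks random; in the sketch below the numbers of blocks and treatments, the variance components and the treatment effects are all made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
b, t = 6, 4                                    # made-up numbers of blocks and treatments
n = b * t
sigma2_B, sigma2_BU = 2.0, 1.0                 # made-up variance components
tau = np.array([0.0, 0.5, 1.0, 1.5])           # made-up treatment effects

J_b, J_t = np.ones((b, b)), np.ones((t, t))
M_G, M_B = np.kron(J_b / b, J_t / t), np.kron(np.eye(b), J_t / t)
M_T      = np.kron(J_b / b, np.eye(t))
Q_B   = M_B - M_G
Q_T   = M_T - M_G
Q_Res = np.eye(n) - M_B - M_T + M_G

psi = np.tile(tau, b)                          # E[Y] = X_T tau, data in standard order
reps, ms = 10_000, np.zeros(3)
for _ in range(reps):
    y = psi + np.repeat(rng.normal(0, np.sqrt(sigma2_B), b), t) \
            + rng.normal(0, np.sqrt(sigma2_BU), n)
    ms += [y @ Q_B @ y / (b - 1),
           y @ Q_T @ y / (t - 1),
           y @ Q_Res @ y / ((b - 1) * (t - 1))]

q_T = b * np.sum((tau - tau.mean()) ** 2) / (t - 1)
print(ms / reps)                               # simulated E[MSq]s
print(sigma2_BU + t * sigma2_B, sigma2_BU + q_T, sigma2_BU)   # theoretical values
```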
e) The distribution of the F statistics for an ANOVA
• We next derive the sampling distributions of the F statistics for testing treatment and block differences.
• Note that, under the null hypothesis of no treatment effects, the model is E[Y] = X_Gμ and var[Y] = σ²_B S_B + σ²_BU I_n,
• while, under the null hypothesis of no block variation, it is E[Y] = X_Tτ and var[Y] = σ²_BU I_n.
• The following theorems involve establishing the sampling distributions of the F statistics under these models.
Theorem XII.8
• Let MS_T = Y'Q_TY/(t−1) and MS_Res = Y'Q_ResY/((b−1)(t−1)),
• where Y is such that E[Y] = X_Gμ and var[Y] = V = σ²_B S_B + σ²_BU I_n (the model under the null hypothesis of no treatment effects).
• Then, the ratio of these two mean squares, given by F = MS_T/MS_Res, is distributed as Snedecor's F with (t−1) and (b−1)(t−1) degrees of freedom.
Proof of theorem XII.8
• We have to show that:
  • E[Q_TY] = 0 and E[Q_ResY] = 0 under the expectation and variation models above, so that theorem XII.3 (distn of Y'AY) can be invoked to conclude that Y'Q_TY and Y'Q_ResY, suitably scaled, follow chi-square distributions;
  • the quadratic forms Y'Q_TY and Y'Q_ResY are independent, as in theorem XII.4 (several Y'A_iYs);
  • then theorem XII.5 (F distribution) can be invoked to obtain the distribution of the F test statistic.
Proof of theorem XII.8 (continued)
• Firstly, E[Q_TY] = Q_TE[Y] = Q_TX_Gμ = (M_T − M_G)X_Gμ = (X_G − X_G)μ = 0, since M_TX_G = M_GX_G = X_G,
• and similarly E[Q_ResY] = Q_ResX_Gμ = (M_BU − M_B − M_T + M_G)X_Gμ = 0.
Proof of theorem XII.8 (continued)
• Also, using theorem XII.2 (E[MSq]s) and
  • Q_TM_BU = Q_TI_n = Q_T,
  • Q_TM_B = (M_T − M_G)M_B = M_G − M_G = 0 (M_BM_T = M_TM_B = M_BM_G = M_GM_B = M_GM_T = M_TM_G = M_G from the proof of theorem XII.6 {RCBD dfs}),
  • Q_TX_G = 0 (see above in this proof), and
  • trace(Q_T) = t − 1 (from theorem XII.6 {RCBD dfs}),
• we have E[Y'Q_TY/(t−1)] = trace(Q_T(tσ²_B M_B + σ²_BU M_BU))/(t−1) = σ²_BU.
Proof of theorem XII.8 (continued)
• Now, using theorem XII.2 (E[MSq]s), and as
  • Q_ResM_BU = Q_Res and Q_ResM_B = 0 (M_BM_T = M_TM_B = M_BM_G = M_GM_B = M_GM_T = M_TM_G = M_G from the proof of theorem XII.6 {RCBD dfs}),
  • Q_ResX_G = 0 (see above in this proof), and
  • trace(Q_Res) = (b−1)(t−1) (from theorem XII.6 {RCBD dfs}),
• we have E[Y'Q_ResY/((b−1)(t−1))] = trace(Q_Res(tσ²_B M_B + σ²_BU M_BU))/((b−1)(t−1)) = σ²_BU.
• So, by theorem XII.3 with λ_T = λ_Res = σ²_BU, both Y'Q_TY/σ²_BU and Y'Q_ResY/σ²_BU follow chi-square distributions, with (t−1) and (b−1)(t−1) degrees of freedom respectively.
Proof of theorem XII.8 (continued)
• Secondly, to show that Y'Q_TY and Y'Q_ResY are independent quadratic forms, we have to show that Q_T and Q_Res meet two of the three conditions outlined in theorem XII.4 (several Y'A_iYs).
• As outlined in the proof of theorem XII.6 (RCBD dfs), Q_T and Q_Res are idempotent, so that condition 1 is met.
• For condition 3, we require that Q_TQ_Res = 0.
• Now, Q_TQ_Res = (M_T − M_G)(M_BU − M_B − M_T + M_G) = (M_T − M_G − M_T + M_G) − (M_G − M_G − M_G + M_G) = 0.
Proof of theorem XII.8 (continued)
• Thirdly, theorem XII.5 (F distribution) means that, as Y'Q_TY/σ²_BU and Y'Q_ResY/σ²_BU are distributed as independent chi-squares with degrees of freedom (t−1) and (b−1)(t−1), respectively, then
  F = {Y'Q_TY/(t−1)}/{Y'Q_ResY/((b−1)(t−1))} = MS_T/MS_Res
follows an F distribution with (t−1) and (b−1)(t−1) degrees of freedom.
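Theorem XII.8 can also be checked by simulating the RCBD under the null hypothesis of no treatment effects; again the numbers of blocks and treatments and the variance components below are made-up illustrative values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
b, t = 3, 4                                    # made-up numbers of blocks and treatments
n = b * t
sigma2_B, sigma2_BU = 2.0, 1.0                 # made-up variance components

J_b, J_t = np.ones((b, b)), np.ones((t, t))
M_G, M_B = np.kron(J_b / b, J_t / t), np.kron(np.eye(b), J_t / t)
M_T      = np.kron(J_b / b, np.eye(t))
Q_T   = M_T - M_G
Q_Res = np.eye(n) - M_B - M_T + M_G

# simulate the RCBD under H0 of no treatment effects, with Blocks random
F = np.empty(20_000)
for i in range(F.size):
    y = np.repeat(rng.normal(0, np.sqrt(sigma2_B), b), t) \
        + rng.normal(0, np.sqrt(sigma2_BU), n)
    F[i] = (y @ Q_T @ y / (t - 1)) / (y @ Q_Res @ y / ((b - 1) * (t - 1)))

# the simulated upper 5% point should agree with the F(t-1, (b-1)(t-1)) quantile
print(np.quantile(F, 0.95), stats.f.ppf(0.95, t - 1, (b - 1) * (t - 1)))
```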
Theorem XII.9
• Let MS_B = Y'Q_BY/(b−1) and MS_Res = Y'Q_ResY/((b−1)(t−1)),
• where Y is such that E[Y] = X_Tτ and var[Y] = σ²_BU I_n (the model under the null hypothesis of no block variation).
• Then, the ratio of these two mean squares, given by F = MS_B/MS_Res, is distributed as Snedecor's F with (b−1) and (b−1)(t−1) degrees of freedom.
• Proof: parallels that for the treatment differences.
XII.G Exercises
• Ex. 12-1 requires the proofs of some properties of Q and M matrices for the RCBD.
• Ex. 12-2 asks you to derive expressions for Q and M matrices for a CRD, to prove that the Residual operator is symmetric, and to derive its df.
• Ex. 12-3 involves the proof of the E[MSq] for the RCBD.
• Ex. 12-4 asks you to prove that the ratio of the Rows and Residual MSq for a Latin square is distributed as Snedecor's F under the null hypothesis.