Ch11: Comparing 2 Samples 11.1: INTRO: This chapter deals with analyzing continuous measurements. Later, some experimental design ideas will be introduced. Chapter #13 will be devoted to qualitative data analysis.
11.2: Comparing Two Independent Samples In a medical study, one sample X of subjects may be assigned to a control (placebo) treatment and another sample Y to the treatment under study. This section deals with independent samples; later sections deal with dependent and paired samples.
11.2.1: Methods Based on Normal Distributions Assumptions: X_1, ..., X_n are i.i.d. N(μ_X, σ²); Y_1, ..., Y_m are i.i.d. N(μ_Y, σ²) with the same variance σ²; and the two samples are independent of each other.
11.2.1: (cont'd) Test Procedures for Normal Populations: Null Hypothesis: H0: μ_X = μ_Y. Test Statistic: t = (X̄ − Ȳ) / (s_p √(1/n + 1/m)), where s_p² = [(n−1)s_X² + (m−1)s_Y²]/(n+m−2) is the pooled sample variance; under H0, t has a t distribution with n+m−2 degrees of freedom. There are 3 common alternative hypotheses: 2 of which are one-sided (μ_X > μ_Y or μ_X < μ_Y) and one is two-sided (μ_X ≠ μ_Y). Revisit my handouts about CI and HT for references.
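The pooled two-sample t statistic above can be sketched in a few lines of Python. This is an illustration, not the textbook's code; the data values are hypothetical.

```python
import math
from statistics import mean, variance  # variance() uses the (n-1) divisor

def two_sample_t(x, y):
    """Pooled two-sample t statistic for H0: mu_X = mu_Y (equal-variance assumption)."""
    n, m = len(x), len(y)
    # pooled variance s_p^2 = [(n-1)s_X^2 + (m-1)s_Y^2] / (n+m-2)
    sp2 = ((n - 1) * variance(x) + (m - 1) * variance(y)) / (n + m - 2)
    t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / n + 1 / m))
    return t, n + m - 2  # statistic and its degrees of freedom

# Hypothetical data: 5 control and 5 treatment measurements
x = [5.1, 4.9, 5.6, 5.2, 5.0]
y = [4.4, 4.7, 4.5, 4.9, 4.3]
t, df = two_sample_t(x, y)
```

The observed t is then compared with the t(n+m−2) quantiles for whichever of the three alternatives is of interest.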
11.2.2: Power Calculation The power of the 2-sample t-test depends on:
• Δ = μ_X − μ_Y (real difference): the larger |Δ|, the greater the power
• α (level of significance): the larger α, the more powerful the test
• σ (population standard deviation): the smaller σ, the larger the power
• n and m (sample sizes): the larger n and m, the greater the power
11.2.2: Power Calculation (cont'd) Assume that n = m (same sample size) are large enough to test H0: μ_X − μ_Y = 0 at level α, with a test statistic based on X̄ − Ȳ, where α and σ are given (σ known). The rejection region (RR) of such a test is: |X̄ − Ȳ| > z(α/2) σ √(2/n). The power of a test is the probability of rejecting the null hypothesis when it is false. That is, Power(Δ) = Φ(−z(α/2) + Δ/(σ√(2/n))) + Φ(−z(α/2) − Δ/(σ√(2/n))), where Φ is the standard normal c.d.f.
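The power formula above is easy to evaluate numerically. A minimal sketch using only the standard library, assuming the known-σ, equal-n normal approximation stated above:

```python
import math
from statistics import NormalDist  # standard normal cdf and inverse cdf

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of the level-alpha two-sided test of H0: mu_X - mu_Y = 0,
    with n observations per group and known common sigma."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)                 # z(alpha/2)
    shift = delta / (sigma * math.sqrt(2.0 / n))  # standardized true difference
    # sum of the two rejection-tail probabilities from the formula above
    return nd.cdf(-z + shift) + nd.cdf(-z - shift)
```

At Δ = 0 the function returns exactly α (the size of the test), and it increases toward 1 as |Δ|, n, or α grows, matching the bullet list above.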
Application: what n is needed? As the difference Δ moves away from zero, one of the two terms in the power formula becomes negligible with respect to the other. Problem: we want to be able to detect a difference of Δ with probability 0.9 at level α; what n is needed? Solution: neglecting the far tail and solving Φ(−z(α/2) + Δ/(σ√(2/n))) = 0.9 gives −z(α/2) + Δ/(σ√(2/n)) = z(0.10), so n = 2σ²(z(α/2) + z(0.10))²/Δ² per group.
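The sample-size formula just derived can be sketched as follows (a sketch under the same known-σ approximation; the function name is mine, not the textbook's):

```python
import math
from statistics import NormalDist

def required_n(delta, sigma, power=0.9, alpha=0.05):
    """Per-group n so the level-alpha two-sided test detects delta with the given
    power, using n = 2*sigma^2*(z(alpha/2) + z(beta))^2 / delta^2 (far tail neglected)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # z(alpha/2)
    z_b = nd.inv_cdf(power)          # z(beta) with beta = 1 - power
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)
```

For example, detecting Δ = σ with power 0.9 at α = 0.05 requires 2(1.96 + 1.28)² ≈ 21.0, i.e. 22 observations per group after rounding up.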
11.2.3: The Mann-Whitney Test (a nonparametric method) Also known as the Wilcoxon RST (Rank Sum Test). Assume that m + n experimental units are to be assigned (at random) to a treatment group and a control group. In this specific context, n units are randomly chosen and assigned to the control group, and the remaining m to the treatment group. We are interested in testing the null hypothesis that the treatment has NO EFFECT. If the null is true, then any difference in the outcomes under the 2 conditions is due to the randomization (i.e. solely to chance).
The Mann-Whitney Test: (cont'd) The MW test statistic is calculated as follows:
• Group all m + n observations together and rank them in order of increasing size (assuming no ties)
• Calculate R, the sum of the ranks of those observations that came from the ctrl group
• Reject the null if R is too small or too large
Example (m = n = 2; ranks shown in parentheses): R = 3 + 4 = 7 (ctrl) and R = 1 + 2 = 3 (trt)
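The three steps above can be sketched directly. The data values here are hypothetical, chosen only so that the control observations hold ranks 3 and 4 as in the slide's example:

```python
def rank_sum(ctrl, trt):
    """Rank all m + n pooled observations (no ties assumed) and
    return R, the sum of the ranks of the control observations."""
    pooled = sorted(ctrl + trt)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank 1 = smallest
    return sum(rank[v] for v in ctrl)

# Hypothetical data: control outcomes are the two largest of the four
ctrl = [10.2, 11.5]
trt = [8.1, 9.4]
```

Here rank_sum(ctrl, trt) gives R = 3 + 4 = 7, the control rank sum from the example.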
The Mann-Whitney Test: (cont'd) Question: Does this discrepancy provide convincing evidence of a systematic difference between trt & ctrl, or could it be due to chance alone? Answer: under the null hypothesis (trt had no effect), every assignment (total: 4! = 24) of ranks to observations is equally likely. In particular, each of the C(4,2) = 6 possible assignments of ranks to the ctrl group (shown below) is equally likely:
The Mann-Whitney Test: (cont'd) The null distribution of R is that of a discrete r.v.:
r:      3    4    5    6    7
P(R=r): 1/6  1/6  2/6  1/6  1/6
From this table, P(R = 7) = 1/6; that is to say, a discrepancy this large would occur one time out of 6 by chance. Similar computations can be carried out for any sample sizes m and n, and the method can even be extended to testing a shift between the two distributions. Read page 404 (textbook).
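The enumeration behind the table can be reproduced for any m and n by listing all equally likely subsets of ranks, as sketched below (exact fractions are used so the probabilities match the table):

```python
from itertools import combinations
from collections import Counter
from fractions import Fraction

def null_distribution(n_ctrl, n_total):
    """Exact null distribution of the control rank sum R: under H0,
    every subset of n_ctrl ranks out of 1..n_total is equally likely."""
    counts = Counter(sum(c) for c in combinations(range(1, n_total + 1), n_ctrl))
    total = sum(counts.values())  # C(n_total, n_ctrl) equally likely subsets
    return {r: Fraction(counts[r], total) for r in sorted(counts)}

dist = null_distribution(2, 4)  # m = n = 2, as in the slide's example
```

dist reproduces the table: P(R=3) = P(R=4) = P(R=6) = P(R=7) = 1/6 and P(R=5) = 2/6.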
The Mann-Whitney Test: Another Approach Suppose that the X's are sampled from F and the Y's are sampled from G. The Mann-Whitney test can be derived from a different point of view than what was seen earlier. We would like to estimate π = P(X < Y), the probability that an observation from F is smaller than an independent observation from G, which serves as a measure of the treatment effect; here X and Y are independently distributed with distribution functions F and G. An estimate of π can be obtained by comparing all n values of X to all m values of Y and calculating the proportion of the comparisons for which X is less than Y: π̂ = #{(i, j): X_i < Y_j} / (mn).
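The estimate π̂ described above is a one-liner, shown here as a sketch with made-up inputs:

```python
def pi_hat(x, y):
    """Estimate pi = P(X < Y) by the proportion of all m*n
    pairwise comparisons for which X_i < Y_j."""
    return sum(xi < yj for xi in x for yj in y) / (len(x) * len(y))
```

For instance, pi_hat([1, 3], [2, 4]) counts 3 of the 4 pairs with X_i < Y_j, giving π̂ = 0.75. This count is (up to a linear transformation) the Mann-Whitney statistic.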
11.3: Comparing Paired Samples Paired design vs. unpaired design:
11.3: (cont’d) Unpaired Design:
11.3: (cont'd) What if ?
Pros & Cons, Paired vs Independent Samples: Here are 2 competing sampling schemes: Paired Samples: n pairs (2n measurements). Independent Samples: 2n observations (m = n). They both give an estimate of the common form X̄ − Ȳ, with a confidence interval of the form (X̄ − Ȳ) ± t(df, α/2) · SE. But the SE estimates and the df for t are different: df = n − 1 for paired samples vs df = 2n − 2 for independent samples.
Pros & Cons, Paired vs Independent Samples: For the same SE estimate, a loss of DF (degrees of freedom) gives a larger critical value for the t-test (example: with n = 10, the two-sided 5% critical value is 2.262 on 9 df vs 2.101 on 18 df). A loss of DF for the t-test produces:
• C.I.: larger confidence intervals
• H.T.: loss of power to detect real differences in the population means
Such loss of DF for paired samples is compensated by a smaller variance Var(X − Y) = σ_X² + σ_Y² − 2 Cov(X, Y) for paired samples relative to independent samples, whenever the pairing induces positive correlation.
11.3.1: Parametric Methods Based on the Normal Distribution for Paired Data Pairing reduces the problem to a one-sample analysis of the differences D_i = X_i − Y_i: if the D_i are i.i.d. N(μ_D, σ_D²), inference for μ_D = μ_X − μ_Y uses the one-sample t statistic t = D̄ / (s_D/√n) with n − 1 degrees of freedom.
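The paired analysis, i.e. the one-sample t on the differences, can be sketched as follows. The before/after measurements are hypothetical:

```python
import math
from statistics import mean, variance  # variance() uses the (n-1) divisor

def paired_t(x, y):
    """One-sample t on the paired differences D_i = X_i - Y_i; df = n - 1."""
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    t = mean(d) / math.sqrt(variance(d) / n)  # D-bar / (s_D / sqrt(n))
    return t, n - 1

# Hypothetical before/after measurements on the same 4 subjects
x = [3.0, 5.0, 4.0, 6.0]
y = [2.0, 3.0, 4.0, 5.0]
t, df = paired_t(x, y)
```

Note the contrast with the pooled two-sample analysis: only the n differences enter, so the df drop to n − 1, but s_D can be much smaller when the pairs are positively correlated.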
11.3.2: Nonparametric Method for Paired Data: Sign Rank Test (SRT) The Wilcoxon SRT statistic W+ is computed as follows:
• Rank the absolute values |D_i| of the differences (assuming no ties), with rank 1 for the smallest
• To get the signed ranks, just restore the signs of the D_i to the ranks
• Calculate W+, the sum of those ranks that have positive (+) signs
Example: Let the D_i be −2, 4, 3, 2, −1, 5. Signed ranks (the tie at |D| = 2 broken arbitrarily): −1(r1), −2(r2), +2(r3), +3(r4), +4(r5), +5(r6); the 4 positive observations give W+ = 3 + 4 + 5 + 6 = 18.
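The three steps can be sketched as below. To respect the no-ties assumption, the example differences here are hypothetical and tie-free (unlike the slide's example, which has a tie at |D| = 2):

```python
def signed_rank_sum(d):
    """Wilcoxon W+: rank the |d_i| (assumed tie-free), restore the
    signs of the d_i to the ranks, and sum the positive ranks."""
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0] * len(d)
    for r, i in enumerate(order, start=1):
        ranks[i] = r  # rank 1 = smallest absolute difference
    return sum(r for r, di in zip(ranks, d) if di > 0)

# Tie-free hypothetical differences
d = [-2, 4, 3, -1, 5]
```

Here |d| sorted is 1, 2, 3, 4, 5, so the signed ranks are −1(r1), −2(r2), +3(r3), +4(r4), +5(r5) and W+ = 3 + 4 + 5 = 12.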
Wilcoxon SRT (cont'd): Theorem A: Under the null hypothesis that the D_i are independent and symmetrically distributed about zero, E(W+) = n(n+1)/4 and Var(W+) = n(n+1)(2n+1)/24. Proof:
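A sketch of the standard argument: under the null, by symmetry, the sign attached to each rank is an independent fair coin flip, so W+ decomposes into a weighted sum of Bernoulli(1/2) indicators.

```latex
W_+ = \sum_{k=1}^{n} k\, I_k, \qquad
I_k \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(1/2),
\]
\[
E(W_+) = \frac{1}{2}\sum_{k=1}^{n} k = \frac{n(n+1)}{4}, \qquad
\operatorname{Var}(W_+) = \frac{1}{4}\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{24}.
```

Since W+ is a sum of independent bounded terms, it is approximately normal for large n, which is how the test is calibrated when exact tables are impractical.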
11.4: Experimental Design Some basic principles of DOE (Design of Experiments) are introduced here. Experimental design can be viewed as a sequence of linked studies carried out under controlled conditions. Read case studies 11.4.1 through 11.4.8.