
Biostat 200 Lecture 7



Presentation Transcript


  1. Biostat 200Lecture 7

  2. Hypothesis tests so far • T-test of one mean: Null hypothesis µ=µ0 • Test of one proportion: Null hypothesis p=p0 • Paired t-test: Null hypothesis µ1=µ2 • Independent samples t-test: Null hypothesis µ1=µ2 • Assume equal or unequal variances • Proportion test for two independent samples: Null hypothesis p1 = p2

  3. Comparison of several means: analysis of variance • Why is it called analysis of variance? • The test compares the between-group variability (how different the group means are from the overall mean) to the within-group variability • Why can’t we just do t-tests on the pairs of groups? • Multiple comparisons problem • P(do not reject H0 | H0) on one test = 1−α • P(do not reject H0 | H0) on n tests = (1−α)^n • P(reject H0 | H0) on at least one test = 1−(1−α)^n • If α=0.05 and n=4 then this is 0.185 Pagano and Gauvreau, Chapter 12
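The inflation of the type I error rate across multiple tests can be checked numerically. A minimal Python sketch (not part of the original slides; the function name is mine):

```python
def familywise_error(alpha, n):
    """P(reject H0 at least once | H0 true) across n independent tests,
    each run at significance level alpha: 1 - (1 - alpha)^n."""
    return 1 - (1 - alpha) ** n

print(round(familywise_error(0.05, 4), 3))   # 0.185, as on the slide
```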

  4. Comparison of several means: analysis of variance • We calculate the ratio of: • The between group variability • The variability around the overall (or grand) mean • to the overall within group variability Pagano and Gavreau, Chapter 12

  5. Comparison of several means: analysis of variance • We calculate the ratio of: • The between group variability • The variability of the group means around the overall (or grand) mean • to the within group variability • A weighted average of the variances within each group: s_W² = [(n1−1)s1² + (n2−1)s2² + … + (nk−1)sk²] / (n−k) • k = the number of groups being compared • n1, n2, …, nk = the number of observations in each group Pagano and Gauvreau, Chapter 12

  6. Comparison of several means: analysis of variance • The test statistic is F = s_B² / s_W², the between-group mean square divided by the within-group mean square • We compare our F statistic to the F-distribution, with k−1 and n−k degrees of freedom • k = the number of means being compared • n = the total number of observations Pagano and Gauvreau, Chapter 12
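The F statistic can be computed from scratch. A minimal Python sketch on made-up data (three hypothetical groups, not the CD4 data from the lecture):

```python
def oneway_f(groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    n = sum(len(g) for g in groups)          # total observations
    k = len(groups)                          # number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: observations vs. their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)        # df = k - 1
    ms_within = ss_within / (n - k)          # df = n - k
    return ms_between / ms_within

print(oneway_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))   # 3.0
```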

  7. F-distribution Pagano and Gavreau, Chapter 12

  8. F distribution

  9. ANOVA example • CD4 count by drinking category Pagano and Gavreau, Chapter 12

  10. ANOVA example tabstat cd4count, by(last_alc_cat) s(n mean sd min median max) Summary for variables: cd4count by categories of: last_alc_cat (last time took alcohol- never, past, current) last_alc_cat | N mean sd min p50 max ----------------+------------------------------------------------------------ Abstainer | 94 278.3191 235.4192 1 213.5 1038 Past drinker | 47 237.1915 238.4878 0 183 1010 Current drinker | 126 331.1587 266.1824 0 263.5 1368 ----------------+------------------------------------------------------------ Total | 267 296.015 252.6171 0 235 1368 ----------------------------------------------------------------------------- Pagano and Gavreau, Chapter 12

  11. ANOVA example • CD4 count, by alcohol consumption category oneway var groupvar oneway cd4count last_alc_cat Analysis of Variance Source SS df MS F Prob > F ------------------------------------------------------------------------ Between groups 347685.413 2 173842.706 2.76 0.0651 Within groups 16627208.5 264 62981.8505 ------------------------------------------------------------------------ Total 16974893.9 266 63815.3908 Bartlett's test for equal variances: chi2(2) = 1.8505 Prob>chi2 = 0.396 Pagano and Gavreau, Chapter 12

  12. Multiple comparisons • If we reject H0, we might want to know which means differed from each other • But as noted before, if you test all combinations, you increase your chance of rejecting the null • To be conservative, we reduce the level of α; that is, we will reject only at a p-value smaller than the original α • The Bonferroni method divides α by the number of possible pairs of tests

  13. Multiple comparisons • Use a t-test, but use the within group variance that weights over all the groups (not just the 2 being examined) • The test statistic for each pair of means i and j is t = (x̄i − x̄j) / √(s_W²(1/ni + 1/nj)) • and the degrees of freedom is n−k, where n is the total number of observations and k is the total number of groups (another difference from the t-test for 2 means) • Reject if the p-value is < α* • There are lots of other methods of dealing with the multiple comparisons issue
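The Bonferroni-adjusted level α* can be sketched in a few lines of Python (illustrative only; the function name is mine):

```python
def bonferroni_alpha(alpha, k):
    """Adjusted significance level: alpha divided by the number of
    possible pairwise comparisons among k groups, k*(k-1)/2."""
    n_pairs = k * (k - 1) // 2
    return alpha / n_pairs

# For the 3 drinking categories there are 3 pairwise comparisons
print(round(bonferroni_alpha(0.05, 3), 4))   # 0.0167
```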

  14. Multiple comparisons . oneway cd4count last_alc_cat, bonferroni Analysis of Variance Source SS df MS F Prob > F ------------------------------------------------------------------------ Between groups 347685.413 2 173842.706 2.76 0.0651 Within groups 16627208.5 264 62981.8505 ------------------------------------------------------------------------ Total 16974893.9 266 63815.3908 Bartlett's test for equal variances: chi2(2) = 1.8505 Prob>chi2 = 0.396 Comparison of cd4count by last time took alcohol- never, past, current (Bonferroni) Row Mean-| Col Mean | Abstaine Past dri ---------+---------------------- Past dri | -41.1277 | 1.000 | Current | 52.8396 93.9672 | 0.371 0.088 (In each cell, the top number is the difference between the 2 means; the bottom number is the Bonferroni-adjusted p-value for the difference)

  15. Nonparametric tests • The hypothesis tests that use the z-statistic (i.e., when σ, the population standard deviation, is known) assume that the underlying distribution of the parameter we are estimating (sample mean, sample proportion) is approximately normal. This will be true under the CLT if n is large enough. • However, we usually do not know σ, or whether the data originally came from a normal distribution. If the data are very skewed, we need to be wary of this assumption. • Nonparametric techniques make fewer assumptions about the underlying distributions – they only assume the populations being compared have the same basic shape but do not assume a particular underlying distribution • The 3 step procedure is the same: hypothesis, test, reject or fail to reject • We will discuss nonparametric tests that might be used instead of the “parametric” tests we previously discussed Pagano and Gauvreau, Chapter 13

  16. Test assumptions • The hypothesis tests that use the z-statistic (i.e., when σ, the population standard deviation, is known) assume that the underlying distribution of the parameter we are estimating (sample mean, sample proportion) is approximately normal. This will be true under the CLT if n is large enough. • However, we usually do not know σ, so we use s2 and compare our test statistic to the t-distribution. In theory the underlying distribution of the data must be normal, but in practice, if n is fairly large and there are no extreme outliers, the t-test is valid. • If the data are not normally distributed, the t-test is not the most powerful test to use. • E.g., outliers will inflate the sample variance, decreasing the test statistic and the chances of rejecting the null. • Independence of your observations is more critical than normality. • The 3 step procedure for nonparametric testing is the same: hypothesis, test, reject or fail to reject Pagano and Gauvreau, Chapter 13

  17. Nonparametric tests for paired observations • The Sign test • For paired or matched observations (analogous to the paired t-test) • H0 : median1 = median2 • Most useful if the sample size is small or the distribution of differences is very skewed • The differences between the pairs are given a sign: + if a positive difference – if a negative difference nothing if the difference=0 • Count the number of +s , denoted by D Pagano and Gavreau, Chapter 13

  18. Nonparametric tests for paired observations • Under H0, ½ the differences will be +s and ½ will be −s • This is equivalent to saying that each difference is a Bernoulli random variable, that is, each is + or − with probability p=.5 • Then the total number of +s (D) is a binomial random variable with p=0.5 and with n trials • So the p-value for the hypothesis test is the probability of observing D + differences if the true distribution is binomial with parameters n and p=0.5 • You could use the binomialtail function for a one-sided hypothesis • di binomialtail(n,D,.5) Pagano and Gauvreau, Chapter 13
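The exact one-sided sign-test p-value is just a binomial tail probability. A minimal Python sketch mirroring Stata's binomialtail(n, D, .5), using the n=10, 7-negatives counts from the signtest output later in the lecture:

```python
from math import comb

def binomial_tail(n, d, p=0.5):
    """P(X >= d) for X ~ Binomial(n, p): the one-sided sign-test p-value."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(d, n + 1))

print(round(binomial_tail(10, 7), 4))   # 0.1719, matching the signtest output
```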

  19. Nonparametric tests for paired observations • Under H0, ½ the differences will be +s and ½ will be −s • This is equivalent to saying that each difference is a Bernoulli random variable, that is, each is + or − with probability p=.5 • Then the total number of +s (D) is a binomial random variable with p=0.5 and with n trials • Binomial mean = np; binomial SD = √(np(1−p)) • So mean = .5n and SD = √(.25n) • Then, using the normal approximation, D ~ N(.5n, .25n), and z = (D − .5n)/√(.25n) ~ N(0,1) Pagano and Gauvreau, Chapter 13
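The normal approximation above can be sketched in Python (same illustrative n=10, D=7 counts as the sign-test example):

```python
from math import sqrt

def sign_test_z(d, n):
    """z statistic for the sign test under H0: D ~ N(0.5*n, 0.25*n)."""
    return (d - 0.5 * n) / sqrt(0.25 * n)

print(round(sign_test_z(7, 10), 2))   # 1.26
```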

  20. Nonparametric tests for paired observations Pagano and Gavreau, Chapter 13

  21. signtest var1=var2 . signtest wt1=wt2 Sign test sign | observed expected -------------+------------------------ positive | 3 5 negative | 7 5 zero | 0 0 -------------+------------------------ all | 10 10 One-sided tests: Ho: median of wt1 - wt2 = 0 vs. Ha: median of wt1 - wt2 > 0 Pr(#positive >= 3) = Binomial(n = 10, x >= 3, p = 0.5) = 0.9453 Ho: median of wt1 - wt2 = 0 vs. Ha: median of wt1 - wt2 < 0 Pr(#negative >= 7) = Binomial(n = 10, x >= 7, p = 0.5) = 0.1719 Two-sided test: Ho: median of wt1 - wt2 = 0 vs. Ha: median of wt1 - wt2 != 0 Pr(#positive >= 7 or #negative >= 7) = min(1, 2*Binomial(n = 10, x >= 7, p = 0.5)) = 0.3438 Uses the larger of the number of positive or negative signed pairs Pagano and Gavreau, Chapter 13

  22. Nonparametric tests for paired observations • The sign test does not account for the magnitude of the difference • The Wilcoxon Signed-Rank Test does • Samples from paired populations – analogous to the paired t-test • The absolute values of the differences are ranked, and the sums of the ranks compared for the positive and negative groups • Ties are given an average rank • The smaller sum of the ranks follows a normal distribution with mean n(n+1)/4 and standard deviation √(n(n+1)(2n+1)/24) under the null assumption of no difference in medians • For n<12, use the exact distribution, Table A.6 Pagano and Gauvreau, Chapter 13
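The null mean and variance of the rank sum can be checked against the Stata signrank output that follows (n = 10 non-zero pairs). A minimal sketch:

```python
def signed_rank_null(n):
    """Null mean and variance of the Wilcoxon signed-rank sum for n pairs:
    mean = n(n+1)/4, variance = n(n+1)(2n+1)/24."""
    mean = n * (n + 1) / 4
    variance = n * (n + 1) * (2 * n + 1) / 24
    return mean, variance

# (27.5, 96.25): the 'expected' and 'unadjusted variance' in the output
print(signed_rank_null(10))
```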

  23. signrank var1 = var2 . signrank wt1=wt2 Wilcoxon signed-rank test sign | obs sum ranks expected -------------+--------------------------------- positive | 3 17.5 27.5 negative | 7 37.5 27.5 zero | 0 0 0 -------------+--------------------------------- all | 10 55 55 unadjusted variance 96.25 adjustment for ties -0.38 adjustment for zeros 0.00 ---------- adjusted variance 95.88 Ho: wt1 = wt2 z = -1.021 Prob > |z| = 0.3071

  24. Nonparametric tests for independent samples • The Wilcoxon Rank Sum Test • Also called the Mann-Whitney U test (slightly different calculation) • Null hypothesis is that the distributions of the two groups are the same • Samples from independent populations – analogous to the t-test • The entire sample is ranked, and the sums of the ranks compared for the two groups • An algorithm to deal with ties • The test statistic is compared to the normal distribution • If the sample sizes are small (<10), exact distributions are needed – Table A.7 • Can be extended to multiple groups (Kruskal-Wallis test) Pagano and Gavreau, Chapter 13

  25. ranksum var, by(byvar) . ranksum bmi, by(sex) Two-sample Wilcoxon rank-sum (Mann-Whitney) test sex | obs rank sum expected -------------+--------------------------------- male | 292 63531.5 77964 female | 241 78779.5 64347 -------------+--------------------------------- combined | 533 142311 142311 unadjusted variance 3131554.00 adjustment for ties -104.98 ---------- adjusted variance 3131449.02 Ho: bmi(sex==male) = bmi(sex==female) z = -8.156 Prob > |z| = 0.0000
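Under H0 the rank sum for a group of size n1 has mean n1(n1+n2+1)/2 and variance n1·n2·(n1+n2+1)/12. A minimal sketch, checked against the ranksum output above (n1=292 males, n2=241 females):

```python
def rank_sum_null(n1, n2):
    """Null mean and variance of the Wilcoxon rank sum for group 1."""
    mean = n1 * (n1 + n2 + 1) / 2
    variance = n1 * n2 * (n1 + n2 + 1) / 12
    return mean, variance

# (77964.0, 3131554.0): the 'expected' and 'unadjusted variance' above
print(rank_sum_null(292, 241))
```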

  26. Nonparametric tests for independent samples kwallis var , by(byvar) . kwallis bmi, by(racegrp) Kruskal-Wallis equality-of-populations rank test +-----------------------------------+ | racegrp | Obs | Rank Sum | |------------------+-----+----------| | White, Caucasian | 312 | 85827.00 | | Asian/PI | 158 | 40872.00 | | Other | 65 | 16681.00 | +-----------------------------------+ chi-squared = 1.581 with 2 d.f. probability = 0.4536 chi-squared with ties = 1.581 with 2 d.f. probability = 0.4536 Pagano and Gavreau, Chapter 13
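A minimal Python sketch of the Kruskal-Wallis statistic on toy data (no ties; illustrative values, not the BMI data):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H: rank all observations together, then compare the
    per-group rank sums. H = 12/(N(N+1)) * sum(Rj^2/nj) - 3(N+1).
    Assumes no tied values (no average-rank correction)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}
    n = len(pooled)
    sum_term = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * sum_term - 3 * (n + 1)

print(round(kruskal_wallis_h([[1, 2], [3, 4]]), 2))   # 2.4
```

Compare the resulting H to a chi-squared distribution with k−1 degrees of freedom, as in the Stata output above.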

  27. Parametric vs. nonparametric (distribution free) tests • Nonparametric tests: • No normality requirement • Do require that the underlying distributions being compared have the same basic shape • Ranks are less sensitive to outliers • Can be used for ordinal data • If the underlying distributions are approximately normal, then parametric tests are more powerful

  28. Statistical hypothesis tests

  29. For next time • Read Pagano and Gauvreau • Chapters 12-13 (review) • Chapter 15
