Nonparametrics.Zip (a compressed version of nonparametrics)
Tom Hettmansperger, Department of Statistics, Penn State University

References:
• Higgins (2004), Introduction to Modern Nonparametric Statistics
• Hollander and Wolfe (1999), Nonparametric Statistical Methods
• Arnold Notes
• Johnson, Morrell, and Schick (1992), Two-Sample Nonparametric Estimation and Confidence Intervals Under Truncation, Biometrics, 48, 1043-1056.
• Beers, Flynn, and Gebhardt (1990), Measures of Location and Scale for Velocities in Clusters of Galaxies: A Robust Approach, Astron. J., 100, 32-46.
• Website: http://www.stat.wmich.edu/slab/RGLM/
Robustness and a little philosophy
• Robustness: insensitivity to assumptions
• Structural robustness of statistical procedures
• Statistical robustness of statistical procedures
Structural Robustness (developed in the 1970s)
Influence: How does a statistical procedure respond to a single outlying observation as it moves farther from the center of the data? Want: methods with bounded influence.
Breakdown point: What proportion of the data must be contaminated in order to move the procedure beyond any bound? Want: methods with a positive breakdown point.
Beers et al. is an excellent reference.
Statistical Robustness (developed in the 1960s)
Hypothesis tests:
• Level robustness: the significance level is not sensitive to model assumptions.
• Power robustness: the statistical power of a test to detect important alternative hypotheses is not sensitive to model assumptions.
Estimators:
• Variance robustness: the variance (precision) of an estimator is not sensitive to model assumptions.
"Not sensitive to model assumptions" means that the property remains good throughout a neighborhood of the assumed model.
Examples
1. The sample mean is not structurally robust and is not variance robust.
2. The sample median is structurally robust and is variance robust.
3. The t-test is level robust (asymptotically) but is neither structurally robust nor power robust.
4. The sign test is structurally and statistically robust. Caveat: it is not very powerful at the normal model.
5. Trimmed means are structurally and variance robust.
6. The sample variance is not robust, neither structurally nor in variance.
7. The interquartile range is structurally robust.
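To make items 1, 2, and 5 concrete, here is a minimal sketch (an addition, not from the original slides) comparing the sample mean, median, and 10% trimmed mean on a small made-up sample before and after one observation is replaced by a wild outlier; all data values are purely illustrative.

import numpy as np
from scipy.stats import trim_mean

clean = np.array([8.2, 8.9, 9.1, 9.7, 10.0, 10.3, 10.6, 10.8, 11.1, 11.4])  # hypothetical sample
contaminated = clean.copy()
contaminated[-1] = 111.4                                                     # one gross outlier

for label, x in [("clean", clean), ("contaminated", contaminated)]:
    print(label,
          " mean =", round(np.mean(x), 2),                 # shifts badly with the outlier
          " median =", round(np.median(x), 2),             # essentially unchanged
          " 10% trimmed mean =", round(trim_mean(x, 0.10), 2))  # trims one value from each end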
Recall that the sample mean minimizes the sum of squares Σ (x_i – θ)². Replace the quadratic by a function ρ(x) that does not increase like a quadratic, and minimize Σ ρ(x_i – θ) instead. The result is an M-estimator, which is structurally robust and variance robust. See Beers et al.
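As an illustration (not from the slides), a minimal sketch of an M-estimator of location using the Huber ρ function, minimized numerically; the tuning constant c = 1.345 and the sample values are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize_scalar

def huber_rho(u, c=1.345):
    # quadratic near zero, linear in the tails, so a single outlier has bounded influence
    return np.where(np.abs(u) <= c, 0.5 * u**2, c * np.abs(u) - 0.5 * c**2)

def m_estimate(x, c=1.345):
    scale = 1.5 * np.median(np.abs(x - np.median(x)))        # robust scale: 1.5 * MAD
    objective = lambda theta: np.sum(huber_rho((x - theta) / scale, c))
    return minimize_scalar(objective, bounds=(x.min(), x.max()), method="bounded").x

x = np.array([9.7, 10.1, 10.4, 9.9, 10.2, 25.0])             # hypothetical data with one outlier
print("mean:", round(x.mean(), 3), " M-estimate:", round(m_estimate(x), 3))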
The nonparametric tests described here are often called distribution free because their significance levels do not depend on the underlying model assumptions. Hence, they are level robust. They are also power robust and structurally robust. The estimators associated with the tests are structurally and variance robust. Opinion: the importance of nonparametric methods resides in their robustness, not in the distribution-free property (level robustness) of nonparametric tests.
Robust Data Summaries
• Graphical Displays: CI-Boxplots (notched boxplots), histograms, dotplots, kernel density estimates
• Inference: Confidence Intervals and Hypothesis Tests for location, spread, and shape
Absolute Magnitude, Planetary Nebulae, Milky Way
Abs Mag (n = 81): -5.140 -6.700 -6.970 -7.190 -7.273 -7.365 -7.509 -7.633 -7.741 -8.000 -8.079 -8.359 -8.478 -8.558 -8.662 -8.730 -8.759 -8.825 …
Null hypothesis: the population distribution F(x) is normal.
The Kolmogorov-Smirnov statistic: D = max over x of |F_n(x) – F_0(x)|, the largest distance between the empirical cdf F_n and the hypothesized normal cdf F_0.
The Anderson-Darling statistic: A² = n ∫ [F_n(x) – F_0(x)]² / {F_0(x)[1 – F_0(x)]} dF_0(x), which gives more weight to discrepancies in the tails.
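A minimal sketch (not part of the original slides) of how these two normality tests can be run with scipy.stats; the sample here is randomly generated, not the planetary nebula data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=-8.0, scale=1.5, size=81)          # hypothetical sample of 81 magnitudes

# Kolmogorov-Smirnov against a normal with estimated mean and sd
# (estimating the parameters makes the nominal p-value conservative; Lilliefors tables correct this)
ks = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
print("KS statistic:", round(ks.statistic, 3), " p-value:", round(ks.pvalue, 3))

# Anderson-Darling, with built-in critical values for the normal case
ad = stats.anderson(x, dist='norm')
print("AD statistic:", round(ad.statistic, 3), " 5% critical value:", ad.critical_values[2])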
Anatomy of a 95% CI-Boxplot
• Box formed by the quartiles and the median.
• IQR (interquartile range) = Q3 – Q1.
• Whiskers extend from the ends of the box to the farthest points within 1.5 × IQR.
• For a normal benchmark distribution, IQR = 1.348 × stdev and 1.5 × IQR = 2 × stdev. Outliers beyond the whiskers are then more than 2.7 stdevs from the median; for a normal distribution this should happen about 0.7% of the time.
• Pseudo stdev = 0.75 × IQR.
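A small sketch (an addition) of the fence calculation behind the boxplot, using the first sixteen Abs Mag values listed earlier: observations beyond Q1 – 1.5 × IQR or Q3 + 1.5 × IQR are flagged as outliers.

import numpy as np

x = np.array([-5.140, -6.700, -6.970, -7.190, -7.273, -7.365, -7.509, -7.633,
              -7.741, -8.000, -8.079, -8.359, -8.478, -8.558, -8.662, -8.730])

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = x[(x < lower_fence) | (x > upper_fence)]     # with this subset, -5.140 falls above the upper fence
print("IQR:", round(iqr, 3), " fences:", round(lower_fence, 3), round(upper_fence, 3))
print("flagged outliers:", outliers)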
Estimation of scale or variation. Recall that the sample standard deviation is not robust. From the boxplot we can find the interquartile range. Define a pseudo standard deviation by 0.75 × IQR. This is a consistent estimate of the population standard deviation in a normal population, and it is robust. The MAD is even more robust. Define the MAD by med|x – med(x)|, and define another pseudo standard deviation by 1.5 × MAD. Again, this is calibrated to a normal population.
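A minimal sketch (added, with a hypothetical contaminated sample) computing the two pseudo standard deviations alongside the ordinary standard deviation.

import numpy as np

x = np.array([9.6, 9.9, 10.0, 10.2, 10.4, 10.7, 11.0, 58.0])    # hypothetical data, one outlier

sd = np.std(x, ddof=1)                                           # classical, not robust
iqr = np.subtract(*np.percentile(x, [75, 25]))
pseudo_sd_iqr = 0.75 * iqr                                       # calibrated to a normal population
mad = np.median(np.abs(x - np.median(x)))
pseudo_sd_mad = 1.5 * mad                                        # also calibrated to a normal population

print("stdev:", round(sd, 2), " 0.75*IQR:", round(pseudo_sd_iqr, 2), " 1.5*MAD:", round(pseudo_sd_mad, 2))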
Suppose we enter 5.14 rather than -5.14 in the data set. The table shows the effect on the non-robust stdev and on the robust IQR and MAD.
Additional Remarks: The median is a robust measure of location. It is not affected by outliers. It is efficient when the population has heavier tails than a normal population. The sign test is also robust and insensitive to outliers. It is efficient when the tails are heavier than those of a normal population. Similarly for the confidence interval. In addition, the test and the confidence interval are distribution free and do not depend on the shape of the underlying population to determine critical values or confidence coefficients. They are only 64% efficient relative to the mean and t-test when the population is normal. If the population is symmetric then the Wilcoxon Signed Rank statistic can be used, and it is robust against outliers and 95% efficient relative to the t-test.
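To illustrate these one-sample tests (an addition, using a made-up sample and a hypothesized median of 10), here is a sketch with scipy.stats; binomtest requires SciPy 1.7 or later.

import numpy as np
from scipy.stats import binomtest, wilcoxon

x = np.array([9.3, 10.6, 10.9, 11.2, 11.5, 11.8, 12.1, 9.8, 12.6, 10.4])   # hypothetical sample
theta0 = 10.0                                                                # hypothesized median

# Sign test: count observations above theta0 and compare to Binomial(n, 1/2)
above = int(np.sum(x > theta0))
print("sign test p-value:", round(binomtest(above, n=len(x), p=0.5).pvalue, 3))

# Wilcoxon signed rank test (assumes a symmetric population under the null)
print("signed rank p-value:", round(wilcoxon(x - theta0).pvalue, 3))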
Two-Sample Comparisons
• 85% CI-boxplots
• Mann-Whitney-Wilcoxon rank sum statistic
• Estimate of the difference in locations
• Test of the difference in locations
• Confidence interval for the difference in locations
• Levene's rank statistic for differences in scale or variance
Why 85% confidence intervals? They give a simple test of the null hypothesis of equal locations. Rule: reject the null hypothesis if the two 85% confidence intervals do not overlap. The significance level is close to 5% provided the ratio of sample sizes is less than 3.
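A quick simulation sketch (not from the slides) of the overlap rule: two independent normal samples with equal means, 85% t-intervals for each mean, and the rejection rate over many replications should land close to 5%. The sample sizes and number of replications are arbitrary choices for the illustration.

import numpy as np
from scipy import stats

def ci85(x):
    # 85% t-interval for the mean of one sample
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    half = stats.t.ppf(0.925, df=len(x) - 1) * se
    return m - half, m + half

rng = np.random.default_rng(7)
reps, rejections = 20000, 0
for _ in range(reps):
    lo1, hi1 = ci85(rng.normal(size=30))
    lo2, hi2 = ci85(rng.normal(size=30))
    if hi1 < lo2 or hi2 < lo1:          # intervals do not overlap -> reject equal locations
        rejections += 1
print("estimated level:", rejections / reps)   # should be close to 0.05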
Mann-Whitney-Wilcoxon statistic: the sign statistic applied to all pairwise differences between the two samples. Unlike the sign test (64% efficiency for a normal population), the MWW test has 95.5% efficiency for a normal population, and it is robust against outliers in either sample.
Mann-Whitney Test and CI: App Mag, Abs Mag
                  N   Median
App Mag (M-31)  360   14.540
Abs Mag (MW)     81  -10.557
Point estimate for d is 24.900
95.0 percent CI for d is (24.530, 25.256)
W = 94140.0
Test of d = 0 vs d not equal 0 is significant at 0.0000
What is W?
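W is the sum of the ranks of the first sample in the combined ranking. A sketch (an addition, with small made-up samples rather than the M-31 data) of how W, the Mann-Whitney test, and the Hodges-Lehmann point estimate of d (the median of all pairwise differences) can be computed:

import numpy as np
from scipy.stats import mannwhitneyu, rankdata

rng = np.random.default_rng(3)
y = rng.normal(loc=14.5, scale=1.2, size=40)    # hypothetical stand-in for the App Mag sample
x = rng.normal(loc=-10.5, scale=1.8, size=20)   # hypothetical stand-in for the Abs Mag sample

# W: rank sum of the first sample in the combined ranking
ranks = rankdata(np.concatenate([y, x]))
W = ranks[:len(y)].sum()

# Mann-Whitney test (U for the first sample satisfies U = W - n1*(n1+1)/2)
res = mannwhitneyu(y, x, alternative='two-sided')
print("W =", W, " U =", res.statistic, " p-value =", res.pvalue)

# Hodges-Lehmann estimate of the location difference d
d_hat = np.median(y[:, None] - x[None, :])
print("point estimate of d:", round(d_hat, 3))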
What about spread or scale differences between the two populations? Below we shift the MW observations to the right by 24.9 to line them up with M-31.
Variable   StDev   IQR     PseudoStdev
MW         1.804   2.420   1.815
M-31       1.195   1.489   1.117
Levene's Rank Test
Compute |Y – med(Y)| and |X – med(X)|, called absolute deviations, and apply MWW to the absolute deviations (i.e., rank the absolute deviations). The test rejects equal spreads in the two populations when the difference in average ranks of the absolute deviations is too large.
Idea: after we have centered the data, if the null hypothesis of no difference in spreads is true, then all permutations of the combined data are roughly equally likely (the permutation principle). So randomly select a large set of permutations, say B of them; for each, assign the first n to the Y sample and the remaining m to the X sample and compute MWW on the absolute deviations. The approximate p-value is the number of permutation MWW values that exceed the original MWW, divided by B.
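A sketch of this permutation version of Levene's rank test (an addition; the samples and B = 2000 are arbitrary illustrative choices).

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(11)
y = rng.normal(scale=1.8, size=30)      # hypothetical sample with larger spread
x = rng.normal(scale=1.0, size=25)      # hypothetical sample with smaller spread

# absolute deviations from each sample's own median
ay = np.abs(y - np.median(y))
ax = np.abs(x - np.median(x))

observed = mannwhitneyu(ay, ax, alternative='two-sided').statistic
combined = np.concatenate([ay, ax])
n, B, count = len(ay), 2000, 0
for _ in range(B):
    perm = rng.permutation(combined)
    stat = mannwhitneyu(perm[:n], perm[n:], alternative='two-sided').statistic
    if stat >= observed:                # count permutations at least as large as the original, as described above
        count += 1
print("approximate p-value:", count / B)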
The difference in mean ranks of the absolute deviations is 51.9793, so we easily reject the null hypothesis of no difference in spreads and conclude that the two populations have significantly different spreads.
One-Sample Methods, Two-Sample Methods, k-Sample Methods
Variable      Mean    StDev   Median   .75IQR   Skew    Kurtosis
Messier 31    22.685  0.969   23.028   1.069    -0.67   -0.67
Messier 81    24.298  0.274   24.371   0.336    -0.49   -0.68
NGC 3379      26.139  0.267   26.230   0.317    -0.64   -0.48
NGC 4494      26.654  0.225   26.659   0.252    -0.36   -0.55
NGC 4382      26.905  0.201   26.974   0.208    -1.06    1.08
All one-sample and two-sample methods can be applied one at a time or two at a time: plots, summaries, inferences. We begin k-sample methods by asking whether the location differences between the NGC nebulae are statistically significant. We will briefly discuss issues of truncation.
Kruskal-Wallis Test on NGC
sub        N   Median   Ave Rank       Z
1         45    26.23       29.6   -9.39
2        101    26.66      104.5    0.36
3         59    26.97      156.4    8.19
Overall  205               103.0
KW = 116.70   DF = 2   P = 0.000
This test can be followed by multiple comparisons. For example, if we assign a family error rate of .09, then we would conduct 3 MWW tests, each at level .03 (Bonferroni).
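A sketch (an addition, with simulated stand-in samples rather than the NGC data) of the Kruskal-Wallis test followed by Bonferroni-adjusted pairwise MWW comparisons at level .03.

import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(5)
groups = {
    "NGC 3379": rng.normal(26.2, 0.27, size=45),    # hypothetical stand-ins for the three samples
    "NGC 4494": rng.normal(26.7, 0.23, size=101),
    "NGC 4382": rng.normal(27.0, 0.20, size=59),
}

kw = kruskal(*groups.values())
print("KW =", round(kw.statistic, 2), " p =", kw.pvalue)

# three pairwise MWW tests, each at level .09 / 3 = .03 (Bonferroni)
for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    p = mannwhitneyu(g1, g2, alternative='two-sided').pvalue
    print(name1, "vs", name2, " p =", round(p, 4), " reject at .03:", p < 0.03)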
What to do about truncation
• See a statistician.
• Read the Johnson, Morrell, and Schick reference, and then see a statistician.
Here is the problem. Suppose we want to estimate the difference in locations between two populations, F(x) and G(y) = F(y – d). But with right truncation at a, the observations actually come from the truncated distributions F(x)/F(a) and G(y)/G(a), defined for values at most a. Suppose d > 0, so we want to shift the X-sample to the right toward the truncation point. As we shift the Xs, some will pass the truncation point and will be eliminated from the data set. This changes the sample sizes and requires an adjustment when computing the corresponding MWW to see if it is equal to its expectation. See the reference for details.
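The following sketch is my own illustration of the idea described above, not the Johnson, Morrell, and Schick procedure itself: it searches over candidate shifts d, and for each d the X-sample is shifted, shifted values beyond the truncation point a are dropped, and the MWW statistic is compared with its null expectation under the resulting sample sizes; the estimate is the d that brings the two closest. All data and the truncation point are made up.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(9)
a = 27.0                                         # hypothetical right-truncation point
y = rng.normal(26.5, 0.4, size=60); y = y[y <= a]
x = rng.normal(26.0, 0.4, size=60); x = x[x <= a]    # true shift is about 0.5

def discrepancy(d):
    xs = x + d
    xs = xs[xs <= a]                             # shifted Xs past the truncation point are lost
    if len(xs) == 0:
        return np.inf
    u = mannwhitneyu(y, xs, alternative='two-sided').statistic
    return abs(u - len(y) * len(xs) / 2)         # distance of U from its null expectation

grid = np.linspace(0.0, 1.5, 151)
d_hat = grid[np.argmin([discrepancy(d) for d in grid])]
print("estimated shift d:", round(d_hat, 3))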