Data Analysis and the Shackles of Statistical Tradition
Larry Weldon, Statistics and Actuarial Science, SFU
Why is change needed? • Computer revolution • Calculation revolution (1960+) • Communication revolution (1980+) • Data storage expansion (2000+) • Inexpensive statistical software • Open source (e.g. R) and low-cost commercial tools (e.g. Excel, …)
Some Authoritative Opinions “The question … is whether the 21st century statistics discipline should be equated so strongly to the traditional core topics as they are now.” Jon Kettenring, 1997, ASA President “A very limited view of statistics is that it is practiced by statisticians. … The wide view has far greater promise of a widespread influence of the intellectual content of the field of data science.” W. S. Cleveland, 2001
To come … • Examples of anachronisms in traditional parametric inference • Use of parametric models for simulation • Limitations of traditional statistical theory • Suggestions for a broader toolkit
Major Implications? • Less need for parametric fits & inference • More use of simulation, resampling, and graphics • More use of graphics for communicating results to non-specialists • Re-examination of the traditional approach
Ex 1: A time series. Polynomial model? ARMA model?
Ex 1: A time series. Nonparametric smooth, e.g. loess (see the R sketch below).
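A minimal R sketch (not from the original slides; the series and tuning values are illustrative assumptions) contrasting a global polynomial fit with a local loess smooth:

```r
# Illustrative data: smooth signal plus noise (assumed, not the talk's example)
set.seed(1)
x <- 1:200
y <- sin(x / 20) + rnorm(200, sd = 0.3)

poly_fit  <- predict(lm(y ~ poly(x, 4)))        # global 4th-degree polynomial
loess_fit <- predict(loess(y ~ x, span = 0.3))  # local nonparametric smooth

plot(x, y, pch = 16, col = "grey", main = "Parametric fit vs loess smooth")
lines(x, poly_fit,  col = "red",  lwd = 2)
lines(x, loess_fit, col = "blue", lwd = 2)
legend("topright", c("polynomial", "loess"), col = c("red", "blue"), lwd = 2)
```

The loess curve tracks local structure without committing to a global parametric form, which is the slide's point.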
Ex 2. Unbiasedness Criterion • Being exactly right, on average! • Better to be close more often? • E.g. estimation of σ²: MMSE estimator?
MMSE Estimator? • Does MSE really tell us what we want to know about our estimator of variance? • What is the distribution of the signed error of the variance estimate?
Typical Error or Whole Dist’n? • MSE measures typical error. • The distribution of the error is more informative & easy to report. • Whole distributions often do not need a parametric summary! Use a graph (see the sketch below).
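A small R simulation (the normal model, sample size, and replication count are assumptions for illustration) showing the whole distribution of the signed error for two variance estimators, rather than just their MSEs:

```r
# True sigma^2 = 1; compare divisor n-1 (unbiased) with divisor n+1
# (smaller MSE for normal data). Settings are illustrative assumptions.
set.seed(2)
n <- 10; reps <- 10000
errs <- replicate(reps, {
  x  <- rnorm(n)
  ss <- sum((x - mean(x))^2)
  c(unbiased = ss / (n - 1) - 1, mmse = ss / (n + 1) - 1)
})
plot(density(errs["unbiased", ]), main = "Signed error of variance estimates")
lines(density(errs["mmse", ]), lty = 2)
legend("topright", c("divisor n-1", "divisor n+1"), lty = 1:2)
rowMeans(errs^2)   # the two MSEs, for comparison with the full picture
```

The density plot makes the skewness of the errors visible, something a single MSE number hides.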
Ex 3. Does Variance measure Variation? • E.g. variance of yield, in bushels squared?
Analysis of Variance: SST = SSR + SSE. How does it compare with an analysis of SD? Is R-squared a ratio of useful units? Is “64% of variance” as useful as “80% of SD”?
Anova Table

            Df  Sum Sq Mean Sq F value   Pr(>F)
block        5  343.29   68.66  4.4467 0.015939 *
N            1  189.28  189.28 12.2587 0.004372 **
P            1    8.40    8.40  0.5441 0.474904
K            1   95.20   95.20  6.1657 0.028795 *
N:P          1   21.28   21.28  1.3783 0.263165
N:K          1   33.14   33.14  2.1460 0.168648
P:K          1    0.48    0.48  0.0312 0.862752
Residuals   12  185.29   15.44

Enough?
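These numbers appear to match the classic npk factorial example shipped with R; assuming that source, the table and an SD-scale alternative can be produced as follows:

```r
# Assumption: the slide's table is summary(aov(...)) on R's built-in npk data.
data(npk)
fit <- aov(yield ~ block + N * P * K, data = npk)
summary(fit)                        # should reproduce the table above

# The same decomposition reported on the SD scale (bushels, not bushels^2):
ss <- summary(fit)[[1]][["Sum Sq"]]
sqrt(ss)                            # square roots of the sums of squares
r2 <- sum(ss[-length(ss)]) / sum(ss)
c(R_squared = r2, R = sqrt(r2))     # cf. the slide: 64% of variance vs 80% of SD
```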
Analysis of Variance? • Data analysts need to know that squared units are weird! • Arithmetic simplicity does not justify descriptive complexity
Ex 4: Are P-values useful? • Irrelevant except in marginal cases • Ambiguous in marginal cases • Fixed error rate: not useful • arbitrary for decision making • arbitrary for scientific exploration • A measure of credibility of H0 (needed?)
P-value and Power • Do we need a fixed alpha to compute power? • How do we decide on sample size without a fixed alpha? • Anticipate precision relative to the feature of interest (see the sketch below)
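A minimal sketch of precision-based planning (the anticipated SD and the target half-width are made-up planning inputs): choose n so the interval half-width matches the feature of interest, with no power calculation at a fixed alpha.

```r
# Anticipated SD and desired half-width are illustrative assumptions.
sigma <- 10                               # anticipated SD of the response
d     <- 2                                # target half-width for the mean
n     <- ceiling((qnorm(0.975) * sigma / d)^2)
n                                         # sample size delivering that precision
```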
Ex. 5 Role of Simple Parametric Models? For simulation of complex systems, e.g. • Stock market • Weather • Environmental degradation • Aging phenomena (survival) • Queues • Traffic • Etc. (see the queue sketch below) Go to R
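As one concrete instance, a short R sketch of an M/M/1 queue built from simple parametric pieces (exponential arrivals and services; the rates are illustrative assumptions):

```r
set.seed(3)
lambda <- 0.8; mu <- 1.0; n <- 5000       # assumed arrival and service rates
arrive <- cumsum(rexp(n, lambda))         # arrival times
serve  <- rexp(n, mu)                     # service durations
start  <- finish <- numeric(n)
for (i in 1:n) {
  start[i]  <- max(arrive[i], if (i > 1) finish[i - 1] else 0)
  finish[i] <- start[i] + serve[i]
}
wait <- start - arrive                    # time spent waiting in line
hist(wait, breaks = 50, main = "Simulated M/M/1 waiting times")
mean(wait)                                # theory: lambda/(mu*(mu-lambda)) = 4
```

Simple parametric models feed the simulation; the complex behaviour (the waiting-time distribution) comes out of it.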
Common Sense? • How does it fit with stat culture? • Stat as the tool of the Inference Police: • Never assume something is simple • Never jump to conclusions • Never assume naive thinking will help • Are students afraid to use their own “common sense”? • Important Role: Stat as a Discovery Tool
Enlightened Common Sense? • Know the dangers • Use informed judgment • Do not expect “objective” analysis! • Information extraction from data is a subjective process
Classical Inference? • Tests of hypothesis? • Confidence intervals? • Parametric inference? • Difficult to explain to non-statisticians • Unsuccessful in portraying what statisticians can do • Maybe we rely too much on these tools
What is more useful? • Graphs • For data analysis • For data summary • For communicating results, especially with nonparametric smoothing • Simulation • Resampling, bootstrapping • Building demos of complex phenomena • Testing if apparent effects are real (see the sketch below)
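For "testing if apparent effects are real", a permutation test is one minimal resampling example (the two groups and their sizes are made-up data for illustration):

```r
set.seed(4)
a <- rnorm(30, mean = 0.0)                # made-up group A
b <- rnorm(30, mean = 0.5)                # made-up group B
obs    <- mean(b) - mean(a)               # the apparent effect
pooled <- c(a, b)
perm <- replicate(10000, {
  s <- sample(pooled)                     # shuffle the group labels
  mean(s[31:60]) - mean(s[1:30])
})
hist(perm, breaks = 50, main = "Effect sizes under 'no real effect'")
abline(v = obs, lwd = 2)
mean(abs(perm) >= abs(obs))               # how often chance alone matches it
```

No parametric model is assumed; the reference distribution is built by resampling the data themselves.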
Conclusion Software has drastically expanded • What analysts can do • How analysts can do it • Which analysts can do it • The way results are reported Statisticians have to expand their toolkit and communicate with the masses!
Comments? Thank you for listening.
Some Questions • Do data analysts really learn useful info from parametric inference (often)? • Are graphs respectable vehicles to demonstrate results (without parametric inference)? • Are simulation & resampling more useful tools than classical inference? • What really is “basic stats”?
Final Quote • “All of this leads me to suggest that there is a very realistic possibility that statistics will cease to exist. It may flow out through its primordial roots back into substantive areas where it will be developed, in a piece-meal fashion as in its past, by an army of statistical users rather than statistical scientists. It is incumbent on all of us to resist this process of dissolution, to resist defining our subject out of existence. We can begin by not defining our subject too narrowly.” Jim Zidek 1986