Outline
• Motivations
• Smoothing
• Degrees of Freedom
Motivation: Why nonparametric?
Simple linear regression: E(Y|X) = α + βX
• assumes that the mean of Y is a linear function of X
• (+) easy to compute, describe, and interpret
• (-) limited range of application
Note that the hat matrix S in least-squares regression is
• symmetric and idempotent
• constant preserving, i.e. S1 = 1
• tr(S) = # of linearly independent predictors in the model = # of parameters in the model
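As a quick numerical check of these properties, here is a minimal numpy sketch; the design matrix below is simulated purely for illustration and is not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + 2 predictors
S = X @ np.linalg.inv(X.T @ X) @ X.T                             # hat matrix

print(np.allclose(S, S.T))             # symmetric
print(np.allclose(S @ S, S))           # idempotent
print(np.allclose(S @ np.ones(n), 1))  # constant preserving: S1 = 1
print(np.trace(S))                     # = number of parameters (3 here)
```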
If the dependence of E(Y) on X is far from linear,
• one can extend straight-line regression by adding terms like X² to the model,
• but it is difficult to guess the most appropriate functional form just by looking at the data.
Example: Diabetes data
Diabetes data (Sockett et al., 1987): a study of the factors affecting patterns of insulin-dependent diabetes mellitus in children.
• Response: logarithm of C-peptide concentration at diagnosis
• Predictors: age and base deficit
What is a smoother?
A tool for summarizing the trend of a response Y as a function of one or more predictor measurements X1, X2, …, Xp.
Idea of smoothers
The simplest smoothers occur in the case of a categorical predictor.
Example: sex (male, female)
Example: color (red, blue, green)
To smooth Y, simply average the values of Y in each category.
How about a non-categorical predictor?
• We usually lack replicates at each predictor value,
• so we mimic category averaging through "local averaging", i.e. we average the Y values in neighborhoods around each target value.
Two main uses of smoothers
• Description: to enhance the visual appearance of the scatterplot of Y vs. X.
• Estimation: to estimate the dependence of the mean of Y on the predictor.
Two main decisions to be made in scatterplot smoothing
• How to average the response values in each neighborhood? (which brand of smoother?)
• How big to make the neighborhoods? (what value of the smoothing parameter?)
Scatterplot Smoothing
Notation:
• y = (y1, y2, …, yn)ᵀ
• x = (x1, x2, …, xn)ᵀ with x1 < x2 < … < xn
• Def: s(x0) = S(y | x = x0), the smooth evaluated at the target value x0
Some scatterplot smoothers:
1. Bin smoothers
• Choose cut points c0 < c1 < … < cK and let Rk be the indices of the data points falling in each region [ck−1, ck).
• Define s(x0) as the average of the yi with i in the region containing x0.
• (-): the estimate is not smooth (it jumps at each cut point).
2. Running-mean smoothers (moving average)
• Choose a symmetric nearest neighborhood around each target point and define the running mean as the average of the yj in that neighborhood; a sketch is given below.
• (+): simple
• (-): doesn't work well in practice (wiggly), and is severely biased near the endpoints
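A minimal sketch of a running-mean smoother, assuming a fixed number k of neighbors on each side; the value of k is a hypothetical choice, not one from the slides.

```python
import numpy as np

def running_mean(x, y, k=2):
    """Average the y-values of the (at most) 2k+1 nearest-by-index points around each x_i."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n = len(y)
    s = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)   # neighborhood is truncated at the ends
        s[i] = y[lo:hi].mean()
    return x, s
```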
3. Running-line smoothers
Def: s(x0) = α̂(x0) + β̂(x0) x0, where α̂(x0) and β̂(x0) are the LSE intercept and slope for the data points in the neighborhood of x0.
• (-): jagged => use weighted LSE instead
4. Kernel smoothers
Def: s(x0) = Σi c0 d((x0 − xi)/λ) yi, where d(t) is a smooth even function decreasing in |t|, λ is the bandwidth, and c0 is chosen so that the weights sum to 1.
Example: Gaussian kernel
Example: Epanechnikov kernel
Example: the minimum-variance kernel
A sketch using the Gaussian kernel is given below.
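A minimal sketch of a kernel smooth at a single target point, using the Gaussian kernel mentioned above; the bandwidth value is a hypothetical choice.

```python
import numpy as np

def kernel_smooth(x, y, x0, bandwidth=0.5):
    """Weighted average of y with weights proportional to d((x0 - x_i)/bandwidth)."""
    t = (x0 - x) / bandwidth
    w = np.exp(-0.5 * t**2)   # Gaussian kernel d(t), up to a constant
    w /= w.sum()              # c0 chosen so that the weights sum to 1
    return np.sum(w * y)
```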
5. Running-median smoothers
Def: s(xi) = median of the yj in a neighborhood of xi.
• makes the smoother resistant to outliers in the data
• a nonlinear smoother
6. Regression splines
• The range of X is separated into regions by a sequence of knots.
• A piecewise polynomial (e.g. a piecewise cubic polynomial) is fitted in each region, joined smoothly at the knots.
• ps. more knots => more flexibility
6a. Piecewise-cubic spline
(1) s is a cubic polynomial in each subinterval
(2) s has two continuous derivatives
(3) s has a third derivative that is a step function with jumps at the knots
(Continued) Its parametric expression is
s(x) = β0 + β1 x + β2 x² + β3 x³ + Σk θk (x − ξk)₊³,
where a₊ denotes the positive part of a and ξ1, …, ξK are the knots.
• It can be rewritten as a linear combination of K + 4 basis functions.
• de Boor (1978): the equivalent B-spline basis functions are numerically more stable.
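A sketch of a cubic regression spline fitted by ordinary least squares in the truncated power basis above; the helper names are hypothetical and the knot locations are whatever the user supplies.

```python
import numpy as np

def truncated_power_basis(x, knots):
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]   # (x - knot)_+^3 terms
    return np.column_stack(cols)                              # n x (K + 4) design matrix

def fit_regression_spline(x, y, knots):
    B = truncated_power_basis(x, knots)
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)              # ordinary least squares
    return lambda x_new: truncated_power_basis(x_new, knots) @ beta
```

In practice the B-spline basis (de Boor, 1978) is preferred for numerical stability; the truncated power basis is used here only because it matches the parametric expression above.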
6b. Natural spline
Def: a regression spline (see 6a) that is additionally constrained to be linear in the boundary regions, i.e. beyond the extreme knots.
7. Cubic smoothing splines
Find the f that minimizes the penalized residual sum of squares
Σi (yi − f(xi))² + λ ∫ (f''(t))² dt
• first term: closeness to the data
• second term: penalizes curvature in the function
• λ: (1) large values produce a smoother curve; (2) small values produce a wiggly curve
8. Locally-weighted running-line smoothers (loess), Cleveland (1979)
• Define N(x0) = the k nearest neighbors of x0.
• Fit a straight line to the points in N(x0) by weighted LSE, using the tri-cube weight function; a sketch is given below.
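A sketch of a loess-style fit at a single target point, using the k nearest neighbors and the tri-cube weight function; the span k is a hypothetical choice.

```python
import numpy as np

def loess_point(x, y, x0, k=7):
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                      # N(x0): k nearest neighbors of x0
    u = d[idx] / d[idx].max()                    # scale distances to [0, 1]
    w = (1 - u**3) ** 3                          # tri-cube weight function
    X = np.column_stack([np.ones(k), x[idx]])    # local straight-line design
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])   # weighted LSE
    return beta[0] + beta[1] * x0
```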
Smoothers for multiple predictors
• Multiple-predictor smoothers, e.g. multidimensional kernels (see figure): (-) difficult to interpret and compute.
• Additive models
• Semi-parametric models
"Curse of dimensionality"
Neighborhoods with a fixed number of points become less local as the dimension increases (Bellman, 1961).
• For p = 1 and span 0.1, a neighborhood containing 10% of the data has side length 0.1.
• For p = 10, a cubical neighborhood containing 10% of the data needs side length 0.1^(1/10) ≈ 0.8.
Additive model
Additive: Y = f1(X1) + … + fp(Xp) + e
• Selection and estimation are usually based on smoothing and backfitting; algorithms include BRUTO, ACE, Projector, etc. (Hastie & Tibshirani, 1990).
• Backfitting (see HT 90): see the sketch below.
• BRUTO algorithm (see HT 90): a forward model-selection procedure using a modified GCV, defined later, to choose the significant variables and their smoothing parameters.
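A minimal sketch of backfitting for an additive model, using a simple running-mean smoother for every component; any scatterplot smoother from the previous section could be plugged in, and the number of iterations and the span are hypothetical choices.

```python
import numpy as np

def smooth(x, r, k=3):
    """Running-mean smooth of the partial residuals r against one predictor x."""
    order = np.argsort(x)
    out = np.empty_like(r, dtype=float)
    for rank, i in enumerate(order):
        nb = order[max(0, rank - k): rank + k + 1]
        out[i] = r[nb].mean()
    return out

def backfit(X, y, n_iter=20):
    """Backfitting: cycle through the predictors, smoothing partial residuals."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            r = y - alpha - f[:, np.arange(p) != j].sum(axis=1)  # partial residuals
            fj = smooth(X[:, j], r)
            f[:, j] = fj - fj.mean()                             # center each component
    return alpha, f
```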
(Smoothing in detail)
Assume Y = f(X) + ε, where E(ε) = 0 and Var(ε) = σ², with ε independent of X.
The bias-variance trade-off
Example: the running mean with k neighbors on each side.
Expanding f in a Taylor series, assuming the data are equally spaced with spacing Δ = xi+1 − xi, and ignoring the remainder R, one finds
• Var(s(xi)) = σ² / (2k + 1), which decreases as the neighborhood grows,
• bias E[s(xi)] − f(xi) proportional to f''(xi) k(k + 1) Δ², which grows with the neighborhood size,
so the smoothing parameter k trades variance against bias.
Automatic selection of smoothing parameters
(1) Average mean-squared error: MSE(λ) = (1/n) Σi E[ (f̂λ(xi) − f(xi))² ]
(2) Average predictive squared error: PSE(λ) = (1/n) Σi E[ (Yi* − f̂λ(xi))² ], where Yi* is a new observation at xi.
Some estimates of PSE: 1. CV
Cross-validation: CV(λ) = (1/n) Σi (yi − f̂λ^(−i)(xi))², where f̂λ^(−i)(xi) indicates the fit at xi computed by leaving out the ith data point.
Fact: E[CV(λ)] ≈ PSE(λ), since yi is not used in computing f̂λ^(−i)(xi), so yi − f̂λ^(−i)(xi) behaves like the prediction error for a new observation at xi.
2. Average squared residual (ASR)
ASR(λ) = (1/n) Σi (yi − f̂λ(xi))² is not a good estimate of PSE, because each yi is used in its own fit, so ASR is biased downward.
Linear smoothers
Def 1: S(a y1 + b y2 | x) = a S(y1 | x) + b S(y2 | x) for any constants a, b.
Def 2: ŷ = Sy, where S is called the smoother matrix (free of y).
e.g. running mean, running line, smoothing spline, kernel, loess, and regression spline
Cross-validation (CV) for linear smoothers
For a constant-preserving linear smoother, leaving out point i corresponds to renormalizing the weights Sij => Sij / (1 − Sii), which gives
yi − f̂λ^(−i)(xi) = (yi − f̂λ(xi)) / (1 − Sii)
=> CV(λ) = (1/n) Σi [ (yi − f̂λ(xi)) / (1 − Sii) ]²,
so CV can be computed from a single fit; a sketch is given below.
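A sketch of this leave-one-out shortcut, assuming the smoother matrix S is available.

```python
import numpy as np

def cv_linear_smoother(S, y):
    """CV = (1/n) * sum_i [ (y_i - yhat_i) / (1 - S_ii) ]^2, from a single fit."""
    yhat = S @ y
    loo_resid = (y - yhat) / (1.0 - np.diag(S))   # leave-one-out residuals
    return np.mean(loo_resid ** 2)
```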
Degrees of freedom of a smoother
Why do we need df? The same data set and the computational power of modern computers are used routinely in the formulation, selection, estimation, diagnosis, and prediction of a statistical model, so we need a way to account for the amount of fitting each procedure does.
EDF (Ye, 1998)
Idea: A modeling / forecasting procedure is said to be stable if small changes in Y produce small changes in the fitted values ŷ.
More precisely (EDF): for a small perturbation δ of Y, we would like f̂(Y + δ) ≈ f̂(Y) + Hδ, where H is an n × n matrix.
=> hii can be viewed as the slope of the straight line relating the ith fitted value to a small change in the ith observation, and EDF = Σi hii.
Data Perturbation Procedure
• For an integer m > 1 (the Monte Carlo sample size), generate δ1, …, δm i.i.d. N(0, t² In), where t > 0 and In is the n × n identity matrix.
• Use the "perturbed" data Y + δj to refit the model and obtain fitted values f̂(Y + δj).
• For i = 1, 2, …, n, the slope of the LS line fitted to the pairs (f̂i(Y + δj), δij), j = 1, …, m, gives an estimate of hii.
A sketch of this procedure is given below.
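A sketch of the perturbation procedure; here `fit` stands for any modeling procedure mapping a response vector to fitted values, and the values of m and t are hypothetical tuning choices.

```python
import numpy as np

def edf_perturbation(fit, y, m=200, t=0.1, seed=0):
    """Monte Carlo estimate of the h_ii and of EDF = sum_i h_ii (Ye, 1998)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    delta = rng.normal(scale=t, size=(m, n))        # delta_1, ..., delta_m ~ N(0, t^2 I_n)
    fits = np.array([fit(y + d) for d in delta])    # refit on each perturbed response
    h = np.empty(n)
    for i in range(n):
        d = delta[:, i]
        # slope of the LS line of the ith fitted value on the ith perturbation
        h[i] = np.cov(fits[:, i], d)[0, 1] / np.var(d, ddof=1)
    return h.sum()                                  # estimated effective degrees of freedom
```

For a linear smoother, fit = lambda y: S @ y, the estimate should be close to tr(S), the usual degrees of freedom.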
An application
Table 1: MSE and SD of five models fitted to the lynx data.
About SD: fit the same class of models to the first 100 observations, keeping the remaining observations for out-of-sample prediction. SD = the standard deviation of the multi-step-ahead prediction errors.