This lecture continues the discussion on regression analysis, covering topics such as analysis of variance, standard errors and confidence intervals, prediction intervals, and examination of residuals. Supplementary readings recommended.
LECTURE 4: REGRESSION (CONTINUED)
Analysis of Variance; Standard Errors & Confidence Intervals; Prediction Intervals; Examination of Residuals
Supplementary Readings: Wilks, chapters 6 and 9; Bevington, P.R., and Robinson, D.K., Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, 1992.
Recall from last time: define the residuals, e_i = y_i − ŷ_i, the differences between the observed and fitted values. What should we require of them?
Recall from last time: we should require the residuals to be GAUSSIAN (independent, identically distributed, with zero mean).
Recall from last time… Analysis of Variance (“ANOVA”)? [Figure: example with n = 5 Gaussian data points]
Analysis of Variance (“ANOVA”): that the residuals sum to zero is guaranteed by the linear regression procedure. Why “n−2” degrees of freedom for the residuals? Because two parameters (the intercept a and the slope b) have already been estimated from the data.
Analysis of Variance (“ANOVA”). Define the sums of squares: SST = Σ(y_i − ȳ)² (total), SSR = Σ(ŷ_i − ȳ)² (regression), and SSE = Σ(y_i − ŷ_i)² (residual), so that SST = SSR + SSE. The resulting F statistic, MSR/MSE, has 1 and n−2 degrees of freedom.
Analysis of Variance (“ANOVA”) for Simple Linear Regression

Source       df     SS     MS             F-test
Total        n−1    SST
Regression   1      SSR    MSR = SSR/1    MSR/MSE
Residual     n−2    SSE    MSE = s_e²

The F statistic has 1 and n−2 degrees of freedom. We’ll discuss ANOVA further in the next lecture (“multivariate regression”).
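The ANOVA decomposition above can be sketched numerically. This is an illustrative example on synthetic data (not from the lecture), showing that SST = SSR + SSE holds exactly for a least-squares fit:

```python
import numpy as np

# Synthetic data for a simple linear regression y = a + b*x (assumed example)
rng = np.random.default_rng(0)
n = 30
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Least-squares fit; polyfit returns [slope, intercept] for degree 1
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# Sums of squares: total (n-1 df), regression (1 df), residual (n-2 df)
SST = np.sum((y - y.mean()) ** 2)
SSR = np.sum((y_hat - y.mean()) ** 2)
SSE = np.sum((y - y_hat) ** 2)

MSR = SSR / 1.0
MSE = SSE / (n - 2)   # MSE = s_e^2
F = MSR / MSE         # F statistic with (1, n-2) degrees of freedom

print(SST, SSR + SSE)  # the decomposition SST = SSR + SSE holds
```

The decomposition holds exactly (up to floating-point error) because least squares makes the residuals orthogonal to the fitted values.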
‘Goodness of Fit’: For simple linear regression, the coefficient of determination is R² = SSR/SST = 1 − SSE/SST, which equals the squared linear correlation r².
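A minimal sketch (synthetic data, assumed example) verifying that for simple linear regression R² = SSR/SST coincides with the squared correlation r²:

```python
import numpy as np

# Synthetic (x, y) pairs with a linear relationship plus noise
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)

b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# Coefficient of determination: regression SS over total SS
R2 = np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
# Squared linear correlation coefficient
r = np.corrcoef(x, y)[0, 1]

print(R2, r ** 2)  # the two agree for simple linear regression
```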
‘Goodness of Fit’: Outside the “support” of the regression (beyond the range of x values used in the fit), the fitted relationship is, in general, unreliable.
‘Goodness of Fit’: Reliability vs. Bias
Analysis of Variance (“ANOVA”): Under Gaussian assumptions, the linear-regression estimates of the parameters a and b are unbiased estimates of the means of Gaussian distributions, where the standard errors in the regression parameters are s_b = s_e / √Σ(x_i − x̄)² and s_a = s_e √(1/n + x̄² / Σ(x_i − x̄)²).
Confidence Intervals: The estimated regression slope b is likely to lie within some range of the true slope; the (1 − α) confidence interval is b ± t_{α/2, n−2} s_b.
Confidence Intervals: This naturally defines a t test for the presence of a trend: t = b / s_b, with n − 2 degrees of freedom.
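The t test for a trend can be sketched as follows (synthetic data, assumed example); the hand-computed slope standard error and t statistic are cross-checked against SciPy's built-in regression:

```python
import numpy as np
from scipy import stats

# Synthetic series with a true trend of 0.3 per time step
rng = np.random.default_rng(2)
n = 40
x = np.arange(n, dtype=float)
y = 0.3 * x + rng.normal(scale=2.0, size=n)

b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
s_e = np.sqrt(np.sum(resid ** 2) / (n - 2))        # sqrt(MSE)
s_b = s_e / np.sqrt(np.sum((x - x.mean()) ** 2))   # standard error of slope

t = b / s_b                                        # t statistic, n-2 df
p = 2.0 * stats.t.sf(abs(t), df=n - 2)             # two-sided p-value

# Cross-check against scipy.stats.linregress
res = stats.linregress(x, y)
print(t, res.slope / res.stderr)                   # identical t statistics
```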
Prediction Intervals: The MSE of a predicted value (the ‘prediction error’), s_y², is larger than the nominal MSE, s_e², and increases as the predictand value departs from the mean: s_y² = s_e² [1 + 1/n + (x₀ − x̄)² / Σ(x_i − x̄)²]. Note that s_y approaches s_e as the ‘training’ sample becomes large.
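A small sketch of the prediction standard error (notation as above; the training samples are assumed, illustrative values), showing that s_y exceeds s_e, grows away from the mean, and approaches s_e for a large training sample:

```python
import numpy as np

def pred_se(x, s_e, x0):
    """Prediction standard error at a new point x0:
    s_y = s_e * sqrt(1 + 1/n + (x0 - xbar)^2 / sum((x - xbar)^2))."""
    n = len(x)
    ssx = np.sum((x - np.mean(x)) ** 2)
    return s_e * np.sqrt(1.0 + 1.0 / n + (x0 - np.mean(x)) ** 2 / ssx)

x_small = np.linspace(0, 10, 10)       # small training sample
x_large = np.linspace(0, 10, 10000)    # large training sample
s_e = 1.0

near = pred_se(x_small, s_e, 5.0)      # near the mean of x
far = pred_se(x_small, s_e, 20.0)      # far outside the data

print(near, far)                        # far > near > s_e
print(pred_se(x_large, s_e, 5.0))       # -> s_e for a large training sample
```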
Linear Correlation: The correlation ‘r’ suffers from sampling error in both the regression slope and the estimates of variance…
Examining Residuals: Heteroscedasticity. A trend in the residual variance violates the assumption of identically distributed Gaussian residuals…
Examining Residuals: Heteroscedasticity. Often a simple transformation of the original data (e.g., taking logarithms) will yield more nearly Gaussian residuals…
Examining Residuals: Leverage points can still be a problem!
Examining Residuals: Autocorrelation. The Durbin–Watson statistic, d = Σ(e_i − e_{i−1})² / Σe_i², tests for serial correlation in the residuals (d ≈ 2 for uncorrelated residuals; d < 2 indicates positive serial correlation).
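A sketch of the Durbin–Watson statistic on synthetic residuals (the white-noise and AR(1) series are assumed, illustrative examples):

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic: d = sum((e_i - e_{i-1})^2) / sum(e_i^2).
    d ~ 2 for uncorrelated residuals; roughly d ~ 2(1 - rho) for AR(1)."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)
n = 5000
white = rng.normal(size=n)      # uncorrelated residuals

# AR(1) residuals with lag-1 autocorrelation rho = 0.6
ar1 = np.empty(n)
ar1[0] = white[0]
for i in range(1, n):
    ar1[i] = 0.6 * ar1[i - 1] + white[i]

print(durbin_watson(white))     # close to 2
print(durbin_watson(ar1))       # well below 2 (~ 2 * (1 - 0.6) = 0.8)
```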
Examining Residuals: Autocorrelation. Suppose we have the simple (‘first-order autoregressive’) model e_i = ρ e_{i−1} + w_i, where w_i is Gaussian white noise. Then we can still use all of the results based on Gaussian statistics, but with the modified (effective) sample size n′ = n(1 − ρ)/(1 + ρ).
Examining Residuals: Autocorrelation. For the same first-order autoregressive model, the modified sample size is different for tests of variance: n′ = n(1 − ρ²)/(1 + ρ²).
Examining Residuals: Autocorrelation. The modified sample size is different again for correlations between two series: n′ = n(1 − ρ₁ρ₂)/(1 + ρ₁ρ₂), where ρ₁ and ρ₂ are the lag-1 autocorrelations of the two series.
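The effective-sample-size adjustment for tests of the mean can be sketched numerically (the values of ρ are illustrative; the variance and correlation variants above use their own formulas):

```python
def effective_n(n, rho):
    """Effective sample size for tests of the mean under an AR(1) model
    with lag-1 autocorrelation rho: n' = n * (1 - rho) / (1 + rho)."""
    return n * (1.0 - rho) / (1.0 + rho)

n = 100
for rho in (0.0, 0.3, 0.6, 0.9):
    print(rho, effective_n(n, rho))
# rho = 0.0 -> n' = 100 (no adjustment)
# rho = 0.9 -> n' ~ 5.3 (strong autocorrelation wipes out most of the sample)
```

Even moderate serial correlation sharply reduces the number of effectively independent samples, which widens confidence intervals accordingly.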
Examining Residuals: For the first-order autoregressive model, we can remove the serial correlation through pre-whitening: e′_i = e_i − ρ e_{i−1}.
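A sketch of pre-whitening on synthetic AR(1) residuals (ρ is assumed known here; in practice it would be estimated from the residuals):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
rho = 0.7

# Build AR(1)-correlated residuals: e_i = rho * e_{i-1} + w_i
e = np.empty(n)
e[0] = rng.normal()
for i in range(1, n):
    e[i] = rho * e[i - 1] + rng.normal()

def lag1_autocorr(v):
    """Sample lag-1 autocorrelation of a series."""
    v = v - v.mean()
    return np.sum(v[1:] * v[:-1]) / np.sum(v ** 2)

# Pre-whitening: e'_i = e_i - rho * e_{i-1}
e_pw = e[1:] - rho * e[:-1]

print(lag1_autocorr(e))      # ~ 0.7 before pre-whitening
print(lag1_autocorr(e_pw))   # ~ 0 after pre-whitening
```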