chapter twelve
Multicollinearity: What Happens if Explanatory Variables are Correlated?
Multicollinearity
• Perfect multicollinearity is very rare
  • One explanatory variable is an exact linear combination of one or more other variables (see the sketch below)
• Near or very high multicollinearity
  • Several explanatory variables are approximately linearly related
  • Several explanatory variables may be highly correlated
  • Can occur frequently in applications
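A minimal numeric illustration of the distinction, with hypothetical data: when one regressor is an exact linear combination of the others, X'X is singular and OLS cannot be computed; when the relationship is only approximate, X'X is invertible but badly conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)

# Perfect collinearity: x3 is an exact linear combination of x1 and x2
x3_exact = 3 * x1 - 2 * x2
X_exact = np.column_stack([np.ones(50), x1, x2, x3_exact])
print(np.linalg.matrix_rank(X_exact.T @ X_exact))  # 3, not 4: X'X is singular, OLS breaks down

# Near multicollinearity: x3 is almost, but not exactly, a linear combination
x3_near = 3 * x1 - 2 * x2 + 0.001 * rng.normal(size=50)
X_near = np.column_stack([np.ones(50), x1, x2, x3_near])
print(np.linalg.cond(X_near.T @ X_near))  # very large condition number: X'X is nearly singular
```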
Theoretical Consequences
• In the absence of perfect collinearity, OLS estimators are still BLUE, BUT:
  • Without repeated samples, the estimates from a single sample may not be close to the true values
  • Minimum variance does not mean that the variance is small
  • Variables that are not correlated in the population may be correlated in a sample
  • Individual effects of explanatory variables cannot be isolated when the variables are highly correlated in the sample
Practical Consequences
• Variances and standard errors of OLS estimates are inflated: the precision of the estimates is reduced
  • Wider confidence intervals
  • Insignificant t-ratios
• High R² value but few significant t-ratios
• OLS estimators and standard errors are very sensitive to small changes in the data: unstable
• Wrong signs for regression coefficients (caution: not necessarily due to collinearity)
• Difficulty in assessing the individual contribution of explanatory variables to the explained sum of squares or R²
Detection of Multicollinearity
• A feature of the sample, not the population
• There is no single test for multicollinearity
• A question of degree, not of kind
• Indicators
  • High R², but few significant t-ratios
  • High pairwise correlations among explanatory variables
  • Regress each explanatory variable on all the other explanatory variables and examine the R² and F-test (see the sketch below)
• No indicator is always reliable
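A minimal sketch of the auxiliary-regression check, assuming the explanatory variables sit in a pandas DataFrame X (the variable names and data below are hypothetical); it also reports the implied variance inflation factor 1/(1 - R²).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def auxiliary_r2(X: pd.DataFrame) -> pd.DataFrame:
    """Regress each explanatory variable on all the others; report R2, the F-test p-value, and the VIF."""
    rows = []
    for col in X.columns:
        res = sm.OLS(X[col], sm.add_constant(X.drop(columns=col))).fit()
        rows.append({"variable": col,
                     "aux_R2": res.rsquared,
                     "aux_F_pvalue": res.f_pvalue,
                     "VIF": 1.0 / (1.0 - res.rsquared)})
    return pd.DataFrame(rows)

# Hypothetical example: X3 is nearly a linear combination of X1 and X2
rng = np.random.default_rng(0)
X = pd.DataFrame({"X1": rng.normal(size=100), "X2": rng.normal(size=100)})
X["X3"] = 2 * X["X1"] - X["X2"] + 0.01 * rng.normal(size=100)
print(auxiliary_r2(X))
```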
Is Multicollinearity Always Bad?
• If the objective is to obtain reliable estimates of the model parameters, multicollinearity may be a problem
  • Large standard errors make the estimates imprecise
• If the objective is to predict or forecast the future mean value of the dependent variable, multicollinearity may not be bad
  • Provided the collinear relationships in the sample are expected to continue in the future
Remedial Measures
• Drop variables from the model
• Acquire additional data or a new sample
• Rethink the model
  • Choose another functional form
  • Transform the variables (real vs. nominal; per capita vs. aggregate)
  • An important variable may be omitted: get more data
• Use prior information on parameter values
• Example: Sham litigation trend regression
  • Trend3 vs. Trend6
chapter thirteen
Heteroscedasticity: What Happens if the Error Variance is Nonconstant?
Figure 13-1 (a) Homoscedasticity; (b) heteroscedasticity.
Heteroscedasticity
• E(ui²) = σi²; note the subscript i on σ²: unequal variance
• Usually found in cross-sectional data rather than in time-series data
  • Members of a cross-section may vary in size (families, firms, industries; cities, counties, states)
  • There may be a scale effect
  • In a time series, variables tend to be of the same order of magnitude because the data are for the same entity over time
Consequences of Heteroscedasticity
• OLS estimators are
  • Still linear
  • Still unbiased
  • NOT minimum variance
• Variances of the OLS estimators are biased
  • The bias may be positive (overestimate) or negative (underestimate)
• Hypothesis tests using the t and F distributions are unreliable
Detection of Heteroscedasticity
• Regress Y on the X's and plot the residuals against each X variable or against the estimated value of Y (Y-hat). See Fig. 13-6.
• Park Test (see the sketch below)
  • Regress Y on the X's and obtain the residuals ei
  • Estimate ln(ei²) = B1 + B2 ln Xi + vi for each X (or for Y-hat)
  • If B2 is statistically significant (t-test), there is heteroscedasticity
  • If B2 is not significant, homoscedasticity
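A minimal sketch of the Park test, assuming a pandas Series y, a DataFrame X of regressors, and a strictly positive regressor to test against (all names hypothetical); it regresses the log of the squared OLS residuals on ln X and returns the slope with its t-statistic and p-value.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def park_test(y: pd.Series, X: pd.DataFrame, regressor: str):
    """Park test: regress ln(e_i^2) on ln(X_i) and inspect the slope's t-statistic."""
    ols = sm.OLS(y, sm.add_constant(X)).fit()
    ln_e2 = np.log(ols.resid ** 2)                                    # log of squared OLS residuals
    aux = sm.OLS(ln_e2, sm.add_constant(np.log(X[regressor]))).fit()  # requires X[regressor] > 0
    return aux.params.iloc[1], aux.tvalues.iloc[1], aux.pvalues.iloc[1]

# Hypothetical usage: a significant slope (small p-value) points to heteroscedasticity
# slope, t_stat, p_value = park_test(df["expenditure"], df[["income", "size"]], regressor="income")
```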
Figure 13-6 Hypothetical patterns of e².
Detection (cont'd)
• Glejser Test
  • Regress Y on the X's and obtain the residuals ei
  • Estimate |ei| = B1 + B2Xi + vi (or use √Xi or 1/Xi)
  • A significant B2 indicates heteroscedasticity
• White's Test (see the sketch below)
  • Estimate Y = B1 + B2X2 + B3X3 + ui and obtain the residuals
  • Estimate ei² = A1 + A2X2 + A3X3 + A4X2² + A5X3² + A6X2X3
  • n·R² from this auxiliary regression ~ χ² with (k - 1) d.f.
  • A significant χ² indicates heteroscedasticity
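A minimal sketch of White's test using the het_white helper in statsmodels; the data below are hypothetical, built so the error variance grows with X2.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Hypothetical data: error variance grows with X2, so the errors are heteroscedastic
rng = np.random.default_rng(0)
n = 200
X2 = rng.uniform(1.0, 10.0, n)
X3 = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * X2 + 3.0 * X3 + rng.normal(0.0, X2, n)

X = sm.add_constant(np.column_stack([X2, X3]))
res = sm.OLS(y, X).fit()

# het_white runs the auxiliary regression of e^2 on X2, X3, X2^2, X3^2, and X2*X3
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
print(lm_stat, lm_pvalue)  # n*R^2 statistic and its chi-square p-value; a small p-value signals heteroscedasticity
```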
Remedies: Transformations
• If the true σi² is known: weighted least squares
  • Yi/σi = B1(1/σi) + B2(Xi/σi) + ui/σi is homoscedastic
• If E(ui²) = σ²Xi (Fig. 13-8)
  • Yi/√Xi = B1(1/√Xi) + B2(Xi/√Xi) + ui/√Xi
• If E(ui²) = σ²Xi² (Fig. 13-9)
  • Yi/Xi = B1(1/Xi) + B2(Xi/Xi) + ui/Xi
• Respecify the model (log transformation, etc.)
• White's correction for standard errors (see the sketch below)
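A minimal sketch of two of these remedies, assuming E(ui²) = σ²Xi and hypothetical data: weighted least squares with weights 1/Xi (shown both via statsmodels WLS and via the √Xi transformation above), and White's heteroscedasticity-consistent standard errors on the untransformed OLS fit.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data with error variance proportional to X: E(u_i^2) = sigma^2 * X_i
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, np.sqrt(x), n)
X = sm.add_constant(x)

# Weighted least squares: weights proportional to 1/var(u_i) = 1/X_i
wls = sm.WLS(y, X, weights=1.0 / x).fit()

# Same idea by hand: divide Y, the intercept, and X by sqrt(X_i), then run OLS
ols_transformed = sm.OLS(y / np.sqrt(x),
                         np.column_stack([1.0 / np.sqrt(x), np.sqrt(x)])).fit()

# White's correction: keep the OLS point estimates, use heteroscedasticity-consistent std. errors
ols_white = sm.OLS(y, X).fit(cov_type="HC1")
print(wls.params, ols_transformed.params, ols_white.bse)
```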
Figure 13-8 Error variance proportional to X.
Figure 13-9 Error variance proportional to X².
chapter fourteen
Autocorrelation: What Happens if Error Terms are Correlated?
Figure 14-1 Patterns of autocorrelation.
Figure 14-2 (a) Positive autocorrelation; (b) negative autocorrelation.
Consequences of Autocorrelation
• E(ui uj) ≠ 0: correlation of observations ordered in time (time series) or space (cross-section)
• OLS is linear and unbiased, but not efficient
• Estimated variances of the OLS estimators are biased; often the standard errors are underestimated and the t-values inflated
• F and t tests are unreliable
• R² is unreliable
Detection
• Graphical Method
  • Plot et against time (time sequence plot; Fig. 14-3)
  • Plot et against et-1 (Fig. 14-4)
• Durbin-Watson d Test (see the sketch below)
  • d = Σ(et - et-1)² / Σet²
  • d ≈ 2(1 - ρ), where ρ = Σ(et et-1) / Σet² is the coefficient of autocorrelation, -1 < ρ < 1
  • 0 < d < 4
• Example: Sham Litigation, Trend 3
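A minimal sketch computing d from the OLS residuals on hypothetical AR(1) data, both by the formula above and with the durbin_watson helper in statsmodels.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Hypothetical time series whose errors follow an AR(1) scheme with rho = 0.7
rng = np.random.default_rng(0)
n = 100
x = np.arange(n, dtype=float)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + u

e = sm.OLS(y, sm.add_constant(x)).fit().resid

d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)        # d = sum (e_t - e_{t-1})^2 / sum e_t^2
rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)   # sample autocorrelation of the residuals
print(d, durbin_watson(e), 2 * (1 - rho_hat))       # d well below 2 signals positive autocorrelation
```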
Figure 14-3 Residuals from the regression (14.4).
Figure 14-4 Residuals et against et-1 from the regression (14.4).
Durbin-Watson Decision Rules
• Compute d from the residuals of the OLS regression
• Find the critical dL and dU from the D-W tables
• 0 < d < dL: positive autocorrelation
• dL < d < dU: zone of indecision
• dU < d < 4 - dU: no autocorrelation
• 4 - dU < d < 4 - dL: zone of indecision
• 4 - dL < d < 4: negative autocorrelation
• See Fig. 14-5
Figure 14-5 The Durbin-Watson d statistic.
Remedial Measures
• For Yt = B1 + B2Xt + ut, suppose the error follows the AR(1) scheme:
  • ut = ρut-1 + vt, -1 < ρ < 1
• Transform the regression to remove the autocorrelation (see the sketch below)
  • (Yt - ρYt-1) = B1(1 - ρ) + B2(Xt - ρXt-1) + vt
  • Or Yt* = B1* + B2Xt* + vt
• B1* and B2 are called generalized least squares (GLS) estimators
• The transformation generalizes to AR(2), AR(3), etc.
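A minimal sketch of the quasi-difference transformation for a given ρ (function and variable names are hypothetical); OLS on the transformed series gives the GLS estimates, and B1 is recovered from the transformed intercept B1* = B1(1 - ρ). Estimating ρ itself is sketched after the next slide.

```python
import numpy as np
import statsmodels.api as sm

def gls_ar1(y: np.ndarray, x: np.ndarray, rho: float):
    """Quasi-difference Y and X by rho, run OLS on the transformed data, recover B1 and B2."""
    y_star = y[1:] - rho * y[:-1]        # Y_t - rho*Y_{t-1}
    x_star = x[1:] - rho * x[:-1]        # X_t - rho*X_{t-1}
    res = sm.OLS(y_star, sm.add_constant(x_star)).fit()
    b1_star, b2 = res.params[0], res.params[1]
    return b1_star / (1.0 - rho), b2     # B1 = B1* / (1 - rho), since B1* = B1(1 - rho)

# Hypothetical usage, with rho estimated as on the next slide:
# b1, b2 = gls_ar1(y, x, rho=0.7)
```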
Estimating ρ
• From the Durbin-Watson d
  • ρ ≈ 1 - (d/2)
• From the OLS residuals
  • Estimate et = ρet-1 + vt for "large" samples
  • ρ = Σ(et et-1) / Σet²
• If ρ = 1, use the First Difference Method (see the sketch below)
  • Yt - Yt-1 = B2(Xt - Xt-1) + vt
  • ΔYt = B2ΔXt + vt, where Δ is the first difference operator
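A minimal sketch of both estimates of ρ from an array of OLS residuals e, plus the first-difference regression for the ρ = 1 case (function names and usage are hypothetical).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def estimate_rho(e: np.ndarray):
    """Two estimates of rho from the OLS residuals e."""
    rho_from_d = 1.0 - durbin_watson(e) / 2.0                  # rho ≈ 1 - d/2
    rho_from_e = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)       # slope of e_t on e_{t-1}
    return rho_from_d, rho_from_e

def first_difference_fit(y: np.ndarray, x: np.ndarray):
    """If rho = 1: regress delta-Y on delta-X with no intercept; the slope estimates B2."""
    return sm.OLS(np.diff(y), np.diff(x)).fit()

# Hypothetical usage, with e the residuals from the original OLS regression:
# rho_d, rho_e = estimate_rho(e)
# b2 = first_difference_fit(y, x).params[0]
```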