
Applied Econometrics



Presentation Transcript


    1. Applied Econometrics William Greene Department of Economics Stern School of Business

    2. Applied Econometrics 7. Estimating the Variance of the Least Squares Estimator

    3. Context The true variance of b|X is σ²(X′X)⁻¹. We consider how to use the sample data to estimate this matrix. The ultimate objectives are to form interval estimates for regression slopes and to test hypotheses about them. Both require estimates of the variability of the distribution. We then examine a factor which affects how "large" this variance is: multicollinearity.

    4. Estimating σ² Using the residuals instead of the disturbances: the natural estimator is e′e/n as a sample surrogate for ε′ε/n. We observe each disturbance only imperfectly, εi = ei + (b − β)′xi, which produces a downward bias in e′e/n. We obtain the result E[e′e|X] = (n−K)σ².
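The downward bias and the result E[e′e|X] = (n−K)σ² can be checked with a small Monte Carlo sketch; the data and all parameter values below are simulated and purely illustrative.

```python
# Monte Carlo sketch of the downward bias of e'e/n: across replications the
# average of e'e should be close to (n-K)*sigma^2, not n*sigma^2.
# Simulated data; n, K, sigma2, reps are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma2, reps = 50, 5, 2.0, 2000

X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # fixed X
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H                      # residual maker: e = M @ eps

sums = np.empty(reps)
for r in range(reps):
    eps = rng.normal(scale=np.sqrt(sigma2), size=n)
    e = M @ eps                        # residuals from y = X beta + eps
    sums[r] = e @ e

avg = sums.mean()                      # should be near (n-K)*sigma2 = 90
```

Dividing by n − K rather than n removes exactly this bias, which is the degrees of freedom correction discussed on the following slides.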

    5. Expectation of e’e

    6. Method 1:

    7. Estimating σ² The unbiased estimator is s² = e′e/(n−K); dividing by (n−K) rather than n is the "degrees of freedom correction." Since e = Mε, the unbiased estimator can also be written s² = e′e/(n−K) = ε′Mε/(n−K).

    8. Method 2: Some Matrix Algebra

    9. Decomposing M
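The matrix algebra on these slides (the derivations themselves are images not reproduced in this transcript) turns on properties of the residual maker M = I − X(X′X)⁻¹X′: it is idempotent, its trace is n − K, and its characteristic roots are K zeros and n − K ones. A numerical sketch with simulated data:

```python
# Properties of the residual maker M = I - X(X'X)^{-1}X', checked numerically.
# Simulated data; n and K are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n, K = 40, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])

M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
idempotent = np.allclose(M @ M, M)     # M M = M
trace_M = np.trace(M)                  # equals n - K = 36
roots = np.linalg.eigvalsh(M)          # K zeros and n - K ones
```

Because M is idempotent, ε′Mε is a weighted sum of squares with n − K unit weights, which is why E[e′e|X] = σ² tr(M) = (n−K)σ².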

    10. Example: Characteristic Roots of a Correlation Matrix

    11. Gasoline Data

    12. X’X and its Roots

    13. Var[b|X] Estimating the Covariance Matrix for b|X The true covariance matrix is σ²(X′X)⁻¹. The natural estimator is s²(X′X)⁻¹. "Standard errors" of the individual coefficients are the square roots of the diagonal elements.
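As a sketch (simulated data, not the gasoline data from the slides), the estimated covariance matrix and the standard errors can be computed directly:

```python
# Estimated covariance matrix s^2 (X'X)^{-1} and coefficient standard errors.
# Simulated data; n, K, and the true coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                    # least squares coefficients
e = y - X @ b                            # residuals
s2 = e @ e / (n - K)                     # unbiased estimator of sigma^2

est_cov = s2 * XtX_inv                   # estimated Var[b|X]
std_errors = np.sqrt(np.diag(est_cov))   # standard errors of the coefficients
```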

    14. Regression Results

    16. Bootstrapping Some assumptions about the sampling mechanism underlie it. Method: 1. Estimate using the full sample: --> b. 2. Repeat R times: draw n observations from the n, with replacement, and estimate β with b(r). 3. Estimate the variance with V = (1/R) Σr [b(r) − b][b(r) − b]′.
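The three steps above can be sketched as follows (simulated data; R and n are illustrative choices, not taken from the slides):

```python
# Pairs (case) bootstrap for the OLS coefficient covariance matrix,
# following the three steps on the slide.  Simulated data for illustration.
import numpy as np

rng = np.random.default_rng(2)
n, R = 100, 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

b = ols(X, y)                       # step 1: full-sample estimate
boots = np.empty((R, X.shape[1]))
for r in range(R):                  # step 2: R draws of n rows, with replacement
    idx = rng.integers(0, n, size=n)
    boots[r] = ols(X[idx], y[idx])

dev = boots - b                     # step 3: V = (1/R) sum [b(r)-b][b(r)-b]'
V = dev.T @ dev / R
```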

    17. Bootstrap Application
    --> matr;bboot=init(3,21,0.)$       Store results here
    --> name;x=one,y,pg$                Define X
    --> regr;lhs=g;rhs=x$               Compute b
    --> calc;i=0$                       Counter
    --> Proc                            Define procedure
    --> regr;lhs=g;rhs=x$ …             Regression
    --> matr;{i=i+1};bboot(*,i)=b$ ...  Store b(r)
    --> Endproc                         Ends procedure
    --> exec;n=20;bootstrap=b$          20 bootstrap reps
    --> matr;list;bboot' $              Display results

    18. Results of Bootstrap Procedure

    19. Bootstrap Replications

    20. OLS vs. Least Absolute Deviations

    21. Multicollinearity Not "short rank," which is a deficiency in the model, but a characteristic of the data set which affects the covariance matrix. Regardless, b is unbiased. Consider one of the unbiased coefficient estimators, bk: E[bk] = βk and Var[b] = σ²(X′X)⁻¹, so the variance of bk is the kth diagonal element of σ²(X′X)⁻¹. We can isolate this element with the result in your text. Let [X1, x2] = [the other x's, xk] (a convenient notation for the results in the text), and let M1 be the residual maker for X1. The general result is that the diagonal element we seek is [x2′M1x2]⁻¹, which is the reciprocal of the sum of squared residuals in the regression of x2 on X1. The remainder of the derivation of the result we seek is in your text. The estimated variance of bk is given on the next slide.

    22. Variance of a Least Squares Coefficient Estimator Estimated Var[bk] = s² / [(1 − Rk²) Σi (xik − x̄k)²], where Rk² is the R² in the regression of xk on the other variables in the model.
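The partitioned-regression result, that the kth diagonal element of (X′X)⁻¹ equals [xk′M1xk]⁻¹, can be checked numerically (simulated data; k is chosen arbitrarily):

```python
# Numerical check: the kth diagonal of s^2 (X'X)^{-1} equals
# s^2 / (x_k' M1 x_k), where M1 is the residual maker for the other
# regressors.  Simulated data for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = rng.normal(size=n)
k = 2                                      # coefficient of interest

XtX_inv = np.linalg.inv(X.T @ X)
e = y - X @ (XtX_inv @ X.T @ y)
s2 = e @ e / (n - X.shape[1])

X1 = np.delete(X, k, axis=1)               # the other regressors (incl. constant)
x2 = X[:, k]
M1x2 = x2 - X1 @ np.linalg.solve(X1.T @ X1, X1.T @ x2)  # residuals of x2 on X1

var_bk_direct = s2 * XtX_inv[k, k]         # kth diagonal of s^2 (X'X)^{-1}
var_bk_partition = s2 / (x2 @ M1x2)        # s^2 / (x2' M1 x2)
```

The closer the fit of x2 on X1, the smaller x2′M1x2 becomes and the larger this variance, which is the multicollinearity point developed on the following slides.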

    23. Gasoline Market

    24. Gasoline Market

    25. Multicollinearity Clearly, the greater the fit in the regression of x2 on X1, the greater is the variance. In the limit, a perfect fit produces an infinite variance. There is no "cure" for collinearity; estimating something else (principal components, for example) is not helpful. There are "measures" of multicollinearity, such as the condition number of X. Best approach: be cognizant of it, and understand its implications for estimation. Which is better: include a variable that causes collinearity, or drop the variable and suffer from a biased estimator? Mean squared error would be the basis for comparison. Some generalities.
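One of the "measures" mentioned above, the condition number of X, can be sketched as follows. The data are simulated, and scaling the columns to unit length before computing the roots is a common convention, assumed here rather than taken from the slides:

```python
# Condition number of X: the square root of the ratio of the largest to the
# smallest characteristic root of X'X, with columns scaled to unit length.
# Simulated, deliberately near-collinear data.
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)       # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

Xs = X / np.sqrt((X ** 2).sum(axis=0))    # scale columns to unit length
roots = np.linalg.eigvalsh(Xs.T @ Xs)     # characteristic roots
condition_number = np.sqrt(roots.max() / roots.min())
```

Values of the condition number above roughly 20 to 30 are commonly read as signaling severe collinearity; the near-duplicate column here drives it far beyond that.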

    26. Specification and Functional Form: 1. Nonlinearity

    27. Log Income Equation

    28. Specification and Functional Form: 2. Interaction Effect

    29. Interaction Effect
