Economics 105: Statistics RAP is due via email by 5:15 on the last day of exams. Please save it as a PDF file first, and email the Excel file separately.
I know! We can save the model, but not until Eco 205. Holy endogeneity, Batman!

Violations of GM Assumptions
Assumption → Violation
• Model is linear in parameters, the betas → wrong functional form
• Zero conditional mean of the errors → omitted relevant variable (or included irrelevant variable); errors in variables; sample selection bias; simultaneity bias
• Homoskedastic errors → heteroskedastic errors
• No serial correlation of errors → there exists serial correlation in errors
• (4) i.i.d. sample of data (5)
Nature of Serial Correlation: Violation of (3)
• The error in period t is a function of the error in the prior period alone: first-order autocorrelation, denoted AR(1) for "autoregressive" process: ε_t = ρ ε_{t−1} + u_t
• The usual assumptions apply to the new error term u_t
• ρ > 0 is positive serial correlation
• ρ < 0 is negative serial correlation
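The AR(1) error process above is easy to simulate, which makes the "errors remember their past" idea concrete. A minimal sketch (numpy only; the function name, ρ = 0.8, the sample size, and the seed are illustrative choices, not from the slides):

```python
import numpy as np

def simulate_ar1(n, rho, sigma_u=1.0, seed=0):
    """Simulate eps_t = rho * eps_{t-1} + u_t with u_t ~ N(0, sigma_u^2)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n)
    eps = np.zeros(n)
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + u[t]
    return eps

eps = simulate_ar1(5000, rho=0.8)
# The lag-1 sample autocorrelation should be close to rho = 0.8
r1 = np.corrcoef(eps[:-1], eps[1:])[0, 1]
print(round(r1, 2))
```

With ρ > 0 each error tends to have the same sign as its predecessor, which is exactly the pattern the residual plots later in these slides are meant to reveal.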
Nature of Serial Correlation (continued)
• The error in period t can be a function of errors in more than one prior period
• Second-order serial correlation, AR(2): ε_t = ρ₁ ε_{t−1} + ρ₂ ε_{t−2} + u_t
• Higher orders are generated analogously
• Seasonally based serial correlation, e.g., with quarterly data: ε_t = ρ ε_{t−4} + u_t
Causes of Serial Correlation
The error term in the regression captures measurement error and omitted variables that are (hopefully) uncorrelated with the included explanatory variables. Frequently, factors omitted from the model are correlated over time:
1. Persistence of shocks
• Effects of random shocks (e.g., earthquake, war, labor strike) often carry over through more than one time period
2. Inertia
• Time series for GNP, (un)employment, output, prices, interest rates, etc. follow cycles, so successive observations are related
Causes of Serial Correlation (continued)
3. Lags
• Past actions have a strong effect on current ones: consumption last period predicts consumption this period
4. Misspecified model, incorrect functional form
5. Spatial serial correlation
• In cross-sectional data on regions, a random shock in one region can cause the outcome of interest to change in adjacent regions ("keeping up with the Joneses")
Consequences for OLS Estimates
• Using the OLS estimator when the errors are autocorrelated still yields unbiased coefficient estimates
• However, the standard errors are estimated incorrectly
• Whether the standard errors are overstated or understated depends on the nature of the autocorrelation
• For positive AR(1), the standard errors are too small!
• Any hypothesis tests conducted could yield erroneous results
• For positive AR(1), we may conclude estimated coefficients ARE significantly different from 0 when we shouldn't!
• OLS is no longer BLUE
• A pattern exists in the errors, suggesting that an estimator that exploits this pattern would be more efficient
Graphical Detection of Serial Correlation
When there is no obvious pattern, the errors seem random. Sometimes, however, the errors follow a pattern: they are correlated across observations, creating a situation in which the observations are not independent of one another.
Detection of Serial Correlation Here the residuals do not seem random, but rather appear to follow a pattern.
Detection: The Durbin-Watson Test
• Provides a way to test H0: ρ = 0
• It is a test for the presence of first-order serial correlation
• The alternative hypothesis can be
• ρ ≠ 0
• ρ > 0: positive serial correlation (the most likely alternative in economics)
• ρ < 0: negative serial correlation
• The DW test statistic is d = Σ (e_t − e_{t−1})² / Σ e_t², with the numerator summed over t = 2, …, n and the denominator over t = 1, …, n, where the e_t are the OLS residuals; approximately, d ≈ 2(1 − ρ̂)
Detection: The Durbin-Watson Test
• To test for positive serial correlation with the Durbin-Watson statistic: under the null we expect d to be near 2; the smaller d is, the more likely the alternative hypothesis
• The sampling distribution of d depends on the values of the explanatory variables. Since every problem has a different set of explanatory variables, Durbin and Watson derived upper and lower limits for the critical value of the test.
Detection: The Durbin-Watson Test
• Durbin and Watson derived lower and upper limits d_L and d_U that bracket the exact critical value d*: d_L ≤ d* ≤ d_U
• They developed the following decision rule for H1: ρ > 0:
• If d < d_L, reject H0
• If d > d_U, do not reject H0
• If d_L ≤ d ≤ d_U, the test is inconclusive
Detection: The Durbin-Watson Test
• To test for negative serial correlation (H1: ρ < 0), the decision rule uses 4 − d:
• If d > 4 − d_L, reject H0
• If d < 4 − d_U, do not reject H0
• Otherwise, the test is inconclusive
• A two-tailed test can be used if there is no strong prior belief about whether there is positive or negative serial correlation: reject H0 if d < d_L or d > 4 − d_L; do not reject if d_U < d < 4 − d_U; otherwise the test is inconclusive
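The d statistic and the one-sided decision rule are simple to compute directly from the OLS residuals. A minimal sketch (the function names and the toy residual series are mine; d_L = 1.55 and d_U = 1.62 are the n = 60 critical values quoted later in these slides):

```python
import numpy as np

def durbin_watson(e):
    """d = sum_{t=2}^n (e_t - e_{t-1})^2 / sum_{t=1}^n e_t^2."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def dw_decision(d, d_lower, d_upper):
    """Decision rule for H0: rho = 0 vs H1: rho > 0."""
    if d < d_lower:
        return "reject H0: positive serial correlation"
    if d > d_upper:
        return "do not reject H0"
    return "inconclusive"

# Persistent residuals push d toward 0; alternating ones push it toward 4
print(durbin_watson([1.0, 1.0, 1.0, 1.0]))    # 0.0
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0
print(dw_decision(0.192, 1.55, 1.62))
```

White-noise residuals give d near 2, matching the "under the null we expect d near 2" rule of thumb above.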
Serial Correlation Table of critical values for Durbin-Watson statistic (table E11, page 833 in BLK textbook) http://hadm.sph.sc.edu/courses/J716/Dw.html
Serial Correlation Example What is the effect of the price of oil on the number of wells drilled in the U.S.?
Serial Correlation Example Analyze residual plots … but be careful …
Serial Correlation Example Remember what serial correlation is … • This plot only "works" if the observation number is in the same order as the unit of time
Serial Correlation Example Same graph when plotted versus "year" • Graphical evidence of serial correlation
Serial Correlation Example
Calculate the DW test statistic and compare it to the critical value at the chosen significance level.
• d_lower and d_upper for 1 X-variable and n = 62 are not in the table
• d_lower for 1 X-variable and n = 60 is 1.55; d_upper = 1.62
• Since d = 0.192 < 1.55, reject H0: ρ = 0 in favor of H1: ρ > 0 at α = 5%
Do's and Don'ts
• Do interpret coefficients carefully, keeping in mind the units of X and of Y
• Do discuss separately, and not conflate, statistical significance and economic magnitude, i.e., the size of the estimated effect (of X on Y)
• Do not say one variable is "more significant" or "more important" than another because it has a smaller p-value
• p-values are measures of evidence (against H0)
• p-values do not give us information about the magnitude of the effect (i.e., the "effect size")
Do's and Don'ts
• Do not say one variable is "more significant" or "more important" than another because its estimated coefficient is twice as big as the other's
• Remember the ceteris paribus interpretation
• Don't compare the magnitudes of coefficients unless the variables are measured in the same units
• Do not assume that two estimated coefficients are different from one another just because one is statistically significant and the other isn't
• Gelman & Stern (2006), "The Difference Between 'Significant' and 'Not Significant' is not Itself Statistically Significant," The American Statistician, vol. 60, no. 4