Lecture #9 Studenmund (2006) Chapter 9 Autocorrelation Serial Correlation • Objectives • The nature of autocorrelation • The consequences of autocorrelation • Testing the existence of autocorrelation • Correcting autocorrelation
Time Series Data
A time series process of economic variables, e.g., GDP, M1, interest rate, exchange rate, imports, exports, inflation rate, etc.
Realization: an observed time series data set generated from a time series process.
Remark: Age is not a realization of a time series process, nor is a time trend a time series process.
Decomposition of time series
A series Xt can be decomposed into a trend, a cyclical or seasonal component, and a random component:
Xt = trend + seasonal + random
Static Models
Ct = β0 + β1Ydt + εt
The subscript "t" indicates time. The regression describes a contemporaneous relationship, i.e., how current consumption (C) is affected by current disposable income (Yd).
Example: static Phillips curve model
inflatt = β0 + β1unemployt + εt
inflat: inflation rate; unemploy: unemployment rate
Finite Distributed Lag Models
Forward distributed lag effect (with order q): an economic action at time t has effects at time t, t+1, t+2, …, t+q.
Equivalently, viewed from the present, current consumption depends on current and past income:
Ct = α0 + β0Ydt + β1Ydt-1 + … + βqYdt-q + εt
Backward distributed lag effect: an economic action at time t reflects effects from times t-1, t-2, t-3, …, t-q:
Yt = α0 + β0Zt + β1Zt-1 + β2Zt-2 + … + βqZt-q + εt
Initial state: zt = zt-1 = zt-2 = c
Ct = α0 + β0Ydt + β1Ydt-1 + β2Ydt-2 + εt
Long-run propensity (LRP) = (β0 + β1 + β2): the permanent unit change in C for a one-unit permanent (long-run) change in Yd.
Distributed lag model in general:
Ct = α0 + β0Ydt + β1Ydt-1 + … + βqYdt-q + other factors + εt
LRP (or long-run multiplier) = β0 + β1 + … + βq
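The LRP calculation above can be sketched in code. This is an illustrative example with simulated data (the sample size, true lag weights, and noise level are all assumed for demonstration), estimating the distributed-lag coefficients by OLS and summing them:

```python
import numpy as np

# Sketch: estimate C_t = a0 + b0*Yd_t + b1*Yd_{t-1} + b2*Yd_{t-2} + e_t by OLS
# and sum the lag coefficients to obtain the long-run propensity (LRP).
# All data here are simulated; the true lag weights are assumed to be 0.5, 0.3, 0.1.
rng = np.random.default_rng(0)
T = 200
yd = np.cumsum(rng.normal(1.0, 0.5, T)) + 100        # simulated disposable income
true_betas = np.array([0.5, 0.3, 0.1])               # assumed lag weights
c = 10 + (true_betas[0] * yd[2:] + true_betas[1] * yd[1:-1]
          + true_betas[2] * yd[:-2]) + rng.normal(0, 1, T - 2)

# Design matrix: intercept, Yd_t, Yd_{t-1}, Yd_{t-2}
X = np.column_stack([np.ones(T - 2), yd[2:], yd[1:-1], yd[:-2]])
beta_hat, *_ = np.linalg.lstsq(X, c, rcond=None)

lrp = beta_hat[1:].sum()   # LRP = b0 + b1 + b2, should be close to 0.9
print(round(lrp, 2))
```

Note that even when the lagged regressors are highly collinear (so individual lag coefficients are imprecise), their sum, the LRP, is usually estimated quite precisely.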
Time Trends
Linear time trend: Yt = β0 + β1t + εt (constant absolute change)
Exponential time trend: ln(Yt) = β0 + β1t + εt (constant growth rate)
Quadratic time trend: Yt = β0 + β1t + β2t² + εt (accelerating change)
For advanced time series analysis and modeling, you are welcome to take ECON 3670.
Definition: first-order autocorrelation, AR(1)
Yt = β0 + β1X1t + εt, t = 1, …, T
If Cov(εt, εs) = E(εtεs) ≠ 0 for t ≠ s, and
εt = ρεt-1 + ut, where -1 < ρ < 1 (ρ: "rho") and ut ~ iid(0, σu²) (white noise),
this scheme is called first-order autocorrelation and is denoted AR(1).
Autoregressive: the regression of εt can be explained by itself lagged one period.
ρ (rho) is the first-order autocorrelation coefficient, or "coefficient of autocovariance".
Example of serial correlation:
Consumptiont = β0 + β1Incomet + errort, where the error term represents other factors that affect consumption.
Year  Consumption  Income  Error
1990  230          320     u1990
…     …            …       …
2002  558          714     u2002
2003  699          822     u2003
2004  881          907     u2004
2005  925          1003    u2005
2006  984          1174    u2006
2007  1072         1246    u2007
For example, the current year's tax payment may be determined by the previous year's: TaxPay2007 = ρ·TaxPay2006 + u2007. In general, εt = ρεt-1 + ut with ut ~ iid(0, σu²).
Higher-order autocorrelation
If εt = ρ1εt-1 + ut, it is AR(1), first-order autoregressive.
If εt = ρ1εt-1 + ρ2εt-2 + ut, it is AR(2), second-order autoregressive.
If εt = ρ1εt-1 + ρ2εt-2 + ρ3εt-3 + ut, it is AR(3), third-order autoregressive.
……
If εt = ρ1εt-1 + ρ2εt-2 + … + ρnεt-n + ut, it is AR(n), nth-order autoregressive.
For AR(1): -1 < ρ < 1
Cov(εt, εt-1) > 0 => 0 < ρ < 1: positive AR(1)
Cov(εt, εt-1) < 0 => -1 < ρ < 0: negative AR(1)
[Figures: plots of residuals against time]
Positive autocorrelation: the current error term tends to have the same sign as the previous one; the residuals may also trace a cyclical pattern.
Negative autocorrelation: the current error term tends to have the opposite sign from the previous one.
No autocorrelation: the current error term appears random relative to the previous one.
The meaning of ρ: the error term εt at time t is a linear combination of the current and past disturbances.
0 < ρ < 1 or -1 < ρ < 0: the further a disturbance lies in the past, the smaller its weight in determining εt.
ρ = 1: the past is of equal importance to the current.
ρ > 1: the past is more important than the current.
The consequences of serial correlation:
1. The estimated coefficients are still unbiased: E(β̂k) = βk.
2. The variance of β̂k is no longer the smallest.
3. The standard error of the estimated coefficient, Se(β̂k), becomes large.
Therefore, when AR(1) exists in the regression, the OLS estimator is no longer "BLUE".
Two-variable regression model: Yt = β0 + β1X1t + εt
The OLS estimator of β1: β̂1 = Σxtyt / Σxt²
If E(εtεt-1) = 0, then Var(β̂1) = σ² / Σxt²
Example: if E(εtεt-1) ≠ 0 and εt = ρεt-1 + ut, -1 < ρ < 1, then
Var(β̂1)AR1 = σ²/Σxt² + 2ρσ²·Σxtxt+1/(Σxt²)² + 2ρ²σ²·Σxtxt+2/(Σxt²)² + …
If ρ ≠ 0 (autocorrelation), then Var(β̂1)AR1 > Var(β̂1);
if ρ = 0 (zero autocorrelation), then Var(β̂1)AR1 = Var(β̂1).
Under AR(1), the usual OLS variance is not the smallest.
Autoregressive scheme: εt = ρεt-1 + ut. Substituting repeatedly:
εt-1 = ρεt-2 + ut-1 => εt = ρ[ρεt-2 + ut-1] + ut = ρ²εt-2 + ρut-1 + ut
εt-2 = ρεt-3 + ut-2 => εt = ρ²[ρεt-3 + ut-2] + ρut-1 + ut = ρ³εt-3 + ρ²ut-2 + ρut-1 + ut
With σε² = σu²/(1 - ρ²):
E(εtεt-1) = ρσu²/(1 - ρ²)
E(εtεt-2) = ρ²σu²/(1 - ρ²)
E(εtεt-3) = ρ³σu²/(1 - ρ²)
…………
E(εtεt-k) = ρᵏσu²/(1 - ρ²)
This means the more periods in the past, the less the effect on the current period: ρᵏ becomes smaller and smaller.
How to detect autocorrelation? Use the Durbin-Watson statistic, DW* (or d*).
At the 5% level of significance, with k = 1 and n = 24: dL = 1.27, dU = 1.45
(k is the number of independent variables, excluding the intercept).
DW* = 0.9107, so DW* < dL.
Durbin-Watson autocorrelation test
H0: no autocorrelation, ρ = 0
H1: autocorrelation exists, ρ ≠ 0 (or ρ > 0, positive autocorrelation)
From the DW statistic table (at the 5% level of significance, k' = 1, n = 24): dL = 1.27, dU = 1.45
From the OLS regression result: d (DW*) = 0.9107.
Since DW* = 0.9107 < dL = 1.27, it falls in the "reject H0" region.
Durbin-Watson test
OLS: Yt = β0 + β1X2t + … + βkXkt + εt; obtain ε̂t and the DW statistic (d).
Assuming an AR(1) process: εt = ρεt-1 + ut, -1 < ρ < 1
I. H0: ρ ≤ 0 (no positive autocorrelation)
   H1: ρ > 0 (positive autocorrelation)
Compare d* with the critical values dL and dU:
if d* < dL => reject H0
if d* > dU => do not reject H0
if dL ≤ d* ≤ dU => the test is inconclusive
Durbin-Watson test (cont.)
DW = Σt=2..T (ε̂t - ε̂t-1)² / Σt=1..T ε̂t² ≈ 2(1 - ρ̂)
=> ρ̂ ≈ 1 - d/2
Since -1 ≤ ρ̂ ≤ 1, this implies 0 ≤ d ≤ 4.
On the 0-to-4 scale (with dL = 1.27, dU = 1.45): dL, dU, 2, 4 - dU = 2.55, 4 - dL = 2.73, 4.
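The DW formula above is easy to compute by hand. A minimal sketch, using residuals simulated with positive autocorrelation (ρ = 0.6 is an illustrative choice):

```python
import numpy as np

# Sketch: Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2),
# and the implied estimate rho_hat ≈ 1 - d/2.
def durbin_watson(resid):
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(1)
e = np.zeros(1000)
for t in range(1, 1000):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # positively autocorrelated residuals

d = durbin_watson(e)
rho_hat = 1 - d / 2                         # rho_hat ≈ 1 - d/2
print(round(d, 2), round(rho_hat, 2))       # d well below 2; rho_hat near 0.6
```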
Durbin-Watson test (cont.)
II. H0: ρ ≥ 0 (no negative autocorrelation)
    H1: ρ < 0 (negative autocorrelation)
When d is greater than 2, we use (4 - d):
if (4 - d) < dL, i.e., 4 - dL < d < 4 => reject H0
if dL ≤ (4 - d) ≤ dU, i.e., 4 - dU ≤ d ≤ 4 - dL => inconclusive
if (4 - d) > dU, i.e., 2 < d < 4 - dU => do not reject H0
Durbin-Watson test (cont.)
III. H0: ρ = 0 (no autocorrelation)
     H1: ρ ≠ 0 (two-tailed test for autocorrelation, either positive or negative AR(1))
If d < dL or d > 4 - dL => reject H0
If dL ≤ d ≤ dU or 4 - dU ≤ d ≤ 4 - dL => inconclusive
If dU < d < 4 - dU => do not reject H0
For example:
UMt = 23.1 - 0.078 CAPt - 0.146 CAPt-1 + 0.043 Tt
      (15.6)  (2.0)       (3.7)          (10.3)
R̄² = 0.78, F = 78.9, ρ̂ = 0.677, SSR = 29.3, DW = 0.23, n = 68
(i) k = 3 (number of independent variables, excluding the intercept)
(ii) n = 68; significance levels α = 0.05 and α = 0.01
(iii) At α = 0.05: dL = 1.525, dU = 1.703; at α = 0.01: dL = 1.372, dU = 1.546
Since the observed DW = 0.23 < dL at both levels, reject H0: positive autocorrelation exists.
[Figure: DW decision regions on the 0-to-4 scale]
Left tail (H0: ρ = 0 vs H1: ρ > 0, positive autocorrelation): reject H0 for d < dL; inconclusive for dL ≤ d ≤ dU.
Middle: do not reject H0 for dU < d < 4 - dU.
Right tail (H0: ρ = 0 vs H1: ρ < 0, negative autocorrelation): inconclusive for 4 - dU ≤ d ≤ 4 - dL; reject H0 for d > 4 - dL.
Critical values (1% & 5%): dL = 1.372/1.525, dU = 1.546/1.703, 4 - dU = 2.454/2.297, 4 - dL = 2.628/2.475.
Here d = 0.23 falls in the left rejection region.
The assumptions underlying the d (DW) statistic:
1. An intercept term must be included.
2. The X's are nonstochastic.
3. It only tests AR(1): εt = ρεt-1 + ut, where ut ~ iid(0, σu²).
4. The model does not include the lagged dependent variable; an autoregressive model such as
   Yt = β0 + β1Xt1 + β2Xt2 + … + βkXtk + γYt-1 + εt is excluded.
5. There are no missing observations, e.g.:
Year  Y     X
1970  100   15
1980  235   20
1981  N.A.  N.A.  (missing)
1982  N.A.  N.A.  (missing)
1993  253   37
1994  281   41
1995  …     …
Lagrange Multiplier (LM) test, also called Durbin's m test, or the Breusch-Godfrey (BG) test of higher-order autocorrelation
Test procedure:
(1) Run OLS and obtain the residuals ε̂t.
(2) Run ε̂t against all the regressors in the model plus the additional regressors ε̂t-1, ε̂t-2, ε̂t-3, …, ε̂t-p:
    ε̂t = α0 + α1Xt + ρ1ε̂t-1 + ρ2ε̂t-2 + … + ρpε̂t-p + ut
    Obtain the R² value from this regression.
(3) Compute the BG statistic: (n - p)R².
(4) Compare the BG statistic with χ²p (p is the order of autocorrelation tested).
(5) If BG > χ²p, reject H0: there is higher-order autocorrelation.
    If BG < χ²p, do not reject H0: there is no higher-order autocorrelation.
Remedy 1: First-difference transformation
Yt = β0 + β1Xt + εt
Yt-1 = β0 + β1Xt-1 + εt-1    (assume ρ = 1)
==> Yt - Yt-1 = (β0 - β0) + β1(Xt - Xt-1) + (εt - εt-1)
==> ΔYt = β1ΔXt + Δεt    (no intercept)
Remedy 2: Add a trend (T)
Yt = β0 + β1Xt + β2T + εt
Yt-1 = β0 + β1Xt-1 + β2(T - 1) + εt-1
==> (Yt - Yt-1) = (β0 - β0) + β1(Xt - Xt-1) + β2[T - (T - 1)] + (εt - εt-1)
==> ΔYt = β2* + β1ΔXt + ε't
If β̂2* > 0 (i.e., β2 > 0), there is an upward trend in Y.
Remedy 3: Cochrane-Orcutt two-step procedure (CORC), a Generalized Least Squares (GLS) method
(1) Run OLS on Yt = β0 + β1Xt + εt and obtain the residuals ε̂t.
(2) Run OLS on ε̂t = ρε̂t-1 + ut, where ut ~ (0, σu²), and obtain ρ̂.
(3) Use ρ̂ to transform the variables. Subtracting
    ρ̂Yt-1 = ρ̂β0 + ρ̂β1Xt-1 + ρ̂εt-1
    from Yt = β0 + β1Xt + εt gives
    (Yt - ρ̂Yt-1) = β0(1 - ρ̂) + β1(Xt - ρ̂Xt-1) + (εt - ρ̂εt-1)
    Define Yt* = Yt - ρ̂Yt-1 and Xt* = Xt - ρ̂Xt-1.
(4) Run OLS on Yt* = β0* + β1*Xt* + ut.
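The two-step procedure above can be sketched directly in NumPy. The data are simulated (true ρ = 0.5 and slope β1 = 2 are assumed values for illustration):

```python
import numpy as np

# Sketch of the Cochrane-Orcutt two-step procedure on simulated data.
rng = np.random.default_rng(3)
T = 2000
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.5 * e[t - 1] + rng.normal()   # AR(1) errors, true rho = 0.5
y = 1.0 + 2.0 * x + e                      # true beta1 = 2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# (1) OLS on the original model; keep the residuals.
X = np.column_stack([np.ones(T), x])
resid = y - X @ ols(X, y)

# (2) Regress the residuals on their own lag (no intercept) to estimate rho.
rho_hat = ols(resid[:-1, None], resid[1:])[0]

# (3) Quasi-difference the data: Y* = Y_t - rho_hat*Y_{t-1}, X* likewise.
y_star = y[1:] - rho_hat * y[:-1]
x_star = x[1:] - rho_hat * x[:-1]

# (4) OLS on the transformed model; the slope estimates beta1.
b_star = ols(np.column_stack([np.ones(T - 1), x_star]), y_star)
print(round(rho_hat, 2), round(b_star[1], 2))
```

Note that the first observation is lost in step (3); the Prais-Winsten transformation discussed later avoids this loss.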
Remedy 4: Cochrane-Orcutt iterative procedure
(5) If the DW test shows that the autocorrelation still exists, iterate the procedure from step (4): obtain the residuals ε̂t*.
(6) Run OLS on ε̂t* = ρε̂t-1* + ut' and obtain ρ̂(2) ≈ 1 - DW2/2, the second-round estimate of ρ.
(7) Use ρ̂(2) to transform the variables:
    Yt** = Yt - ρ̂(2)Yt-1
    Xt** = Xt - ρ̂(2)Xt-1
    (from Yt = β0 + β1Xt + εt and Yt-1 = β0 + β1Xt-1 + εt-1)
Cochrane-Orcutt iterative procedure (cont.)
(8) Run OLS on Yt** = β0** + β1**Xt** + εt**, which is
    (Yt - ρ̂(2)Yt-1) = β0(1 - ρ̂(2)) + β1(Xt - ρ̂(2)Xt-1) + (εt - ρ̂(2)εt-1)
(9) Check the DW3 statistic; if autocorrelation still exists, go into a third round, and so on, until successive estimates of ρ differ by very little (|ρ̂(j) - ρ̂(j-1)| < 0.01).
Example: Studenmund (2006), Exercise 14 and Table 9.1, pp. 342-344
(1) A low DW statistic. Obtain the residuals (after you run a regression, the residuals are stored automatically in the residual series icon).
(2) Give a new name to the residual series, run a regression of the current residual on the lagged residual, and obtain the estimated ρ̂ ("rho").
(3) Transform to Y* and X*. New series are created, but the first observation of each is lost.
(4) Run the transformed regression and obtain the estimated result, which is improved.
(5)~(9) This is the EViews command to run the Cochrane-Orcutt iterative procedure in EViews.
The result of the iterative procedure: this is the estimated ρ̂; each variable is transformed, and the DW statistic is improved.
Remedy 5: Prais-Winsten transformation, a Generalized Least Squares (GLS) method
Yt = β0 + β1Xt + εt, t = 1, …, T    (1)
Assume AR(1): εt = ρεt-1 + ut, -1 < ρ < 1
ρYt-1 = ρβ0 + ρβ1Xt-1 + ρεt-1    (2)
(1) - (2) => (Yt - ρYt-1) = β0(1 - ρ) + β1(Xt - ρXt-1) + (εt - ρεt-1)
GLS => Yt* = β0* + β1*Xt* + ut
The transformation gives
Y2* = Y2 - ρ̂Y1;  X2* = X2 - ρ̂X1
Y3* = Y3 - ρ̂Y2;  X3* = X3 - ρ̂X2
……
Yt* = Yt - ρ̂Yt-1;  Xt* = Xt - ρ̂Xt-1
To avoid the loss of the first observation, Y1* and X1* should be transformed (restoring the first observation) as:
Y1* = √(1 - ρ̂²)·Y1
X1* = √(1 - ρ̂²)·X1
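The steps above can be sketched as follows. The series and ρ̂ = 0.6 are illustrative numbers, not from the text:

```python
import numpy as np

# Sketch of the Prais-Winsten transformation: quasi-difference observations
# 2..T and rescale the first observation by sqrt(1 - rho^2) instead of
# dropping it. rho_hat and the data are assumed values for illustration.
rho_hat = 0.6
y = np.array([10.0, 12.0, 13.0, 15.0, 16.0])
x = np.array([1.0, 2.0, 2.5, 3.0, 3.5])

y_star = np.empty_like(y)
x_star = np.empty_like(x)
y_star[0] = np.sqrt(1 - rho_hat**2) * y[0]   # first observation restored
x_star[0] = np.sqrt(1 - rho_hat**2) * x[0]
y_star[1:] = y[1:] - rho_hat * y[:-1]        # Y*_t = Y_t - rho*Y_{t-1}
x_star[1:] = x[1:] - rho_hat * x[:-1]

print(y_star.round(2))   # first element is 0.8 * 10 = 8.0
```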
Remedy 6: Durbin's two-step method
Since Yt = β0 + β1Xt + εt and (Yt - ρYt-1) = β0(1 - ρ) + β1(Xt - ρXt-1) + ut, we can write
Yt = β0(1 - ρ) + β1Xt - ρβ1Xt-1 + ρYt-1 + ut
I. Run OLS on this specification,
   Yt = β0* + β1*Xt + β2*Xt-1 + β3*Yt-1 + ut
   and obtain β̂3* as an estimate of ρ (rho).
II. Transform the variables:
    Yt* = Yt - β̂3*Yt-1 (i.e., Yt - ρ̂Yt-1) and Xt* = Xt - β̂3*Xt-1 (i.e., Xt - ρ̂Xt-1)
III. Run OLS on the model Yt* = β0 + β1Xt* + ε't.
Including the lagged term of Y in the regression, obtain the estimated ρ̂ ("rho") as the coefficient on Yt-1.
Lagged Dependent Variable and Autocorrelation
Limitation of the Durbin-Watson test: in
Yt = β0 + β1X1t + β2X2t + … + βkXkt + γ1Yt-1 + εt
the DW statistic will often be close to 2; it does not converge to 2(1 - ρ̂), so DW is not reliable.
Durbin h test:
Compute h* = ρ̂ √( n / (1 - n·Var(γ̂1)) )
where γ̂1 is the estimated coefficient on the lagged dependent variable.
Compare h* with Zc, where Z ~ N(0, 1) (the standard normal distribution).
If |h*| > Zc => reject H0: ρ = 0 (no autocorrelation).
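The h statistic above is straightforward to compute. A minimal sketch; the inputs (ρ̂ = 0.5, n = 100, Var(γ̂1) = 0.004) are illustrative numbers, not taken from the text's example:

```python
import numpy as np

# Sketch of Durbin's h statistic: h = rho_hat * sqrt(n / (1 - n*Var(gamma_hat))),
# where Var(gamma_hat) is the estimated variance of the coefficient on the
# lagged dependent variable.
def durbin_h(rho_hat, n, var_lagged_coef):
    inner = 1 - n * var_lagged_coef
    if inner <= 0:
        # h is undefined when n*Var(gamma_hat) >= 1
        raise ValueError("h is undefined when n*Var(gamma_hat) >= 1")
    return rho_hat * np.sqrt(n / inner)

h = durbin_h(rho_hat=0.5, n=100, var_lagged_coef=0.004)
print(round(h, 2))   # 0.5 * sqrt(100/0.6) ≈ 6.45, > 1.96, so reject H0
```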
Durbin h test (cont.): here h* = ρ̂ √( n / (1 - n·Var(γ̂1)) ) = 4.458 > Zc.
Therefore reject H0: ρ = 0 (no autocorrelation).