Algebra
For linear combinations U = ∑ a(t) Y(t) and V = ∑ b(t) Y(t) of a stationary series {Y(t)} with mean c_Y:
E{U} = c_Y ∑ a(t)
cov{U, V} = ∑_s ∑_t a(s) b(t) c_YY(s − t)
U is Gaussian if {Y(t)} is Gaussian
Some useful stochastic models
Purely random / white noise: {Y(t)} i.i.d. (often the mean is assumed 0)
c_YY(u) = cov{Y(t + u), Y(t)} = σ_Y² if u = 0
        = 0 if u ≠ 0
ρ_YY(u) = 1 if u = 0
        = 0 if u ≠ 0
A building block
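A quick numerical sketch in Python/NumPy (the seed, sample size, and σ_Y = 2 are illustrative choices, not from the slides): simulate i.i.d. noise and check that the sample autocovariance is ≈ σ_Y² at lag 0 and ≈ 0 elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma_Y = 10_000, 2.0
Y = rng.normal(0.0, sigma_Y, size=T)        # i.i.d. Gaussian white noise, mean 0

def sample_acov(y, u):
    """Sample autocovariance c_YY(u) with the 1/T convention."""
    yc = y - y.mean()
    return np.sum(yc[u:] * yc[:len(yc) - u]) / len(yc)

print(sample_acov(Y, 0))                     # ~ sigma_Y^2 = 4
print(sample_acov(Y, 1), sample_acov(Y, 5))  # both ~ 0
```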
Random walk
Y(t) = Y(t − 1) + Z(t), Y(0) = 0, so Y(t) = ∑_{i=1}^{t} Z(i)
E{Y(t)} = t μ_Z
var{Y(t)} = t σ_Z²
Not stationary, but the first difference ∆Y(t) = Y(t) − Y(t − 1) = Z(t) is
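The same facts can be checked by simulation (a sketch; μ_Z = 0.5, σ_Z = 1, and 1000 replicates are arbitrary): the variance of Y(T) across replicated walks grows like T σ_Z², and differencing recovers the white-noise steps.

```python
import numpy as np

rng = np.random.default_rng(1)
T, mu_Z, sigma_Z = 5_000, 0.5, 1.0

# 1000 independent random walks: Y(t) = cumulative sum of the steps Z(i)
Z = rng.normal(mu_Z, sigma_Z, size=(1000, T))
Y = np.cumsum(Z, axis=1)

print(Y[:, -1].mean(), T * mu_Z)                  # E{Y(T)} = T mu_Z
print(Y[:, -1].var(), T * sigma_Z**2)             # var{Y(T)} = T sigma_Z^2
print(np.allclose(np.diff(Y, axis=1), Z[:, 1:]))  # differencing recovers Z(t)
```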
Moving average, MA(q)
Y(t) = β(0)Z(t) + β(1)Z(t − 1) + … + β(q)Z(t − q)
If E{Z(t)} = 0, then E{Y(t)} = 0
c_YY(u) = 0 for u > q
        = σ_Z² ∑_{t=0}^{q−u} β(t) β(t + u) for u = 0, 1, …, q
and c_YY(−u) = c_YY(u): stationary
MA(1): ρ_YY(u) = 1, u = 0
               = β(1)/(1 + β(1)²), u = ±1
               = 0 otherwise
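A numerical check of the MA(1) a.c.f. (a sketch; β(0) = 1 and β(1) = 0.6 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
T, beta1 = 100_000, 0.6
Z = rng.normal(size=T + 1)
Y = Z[1:] + beta1 * Z[:-1]       # MA(1): Y(t) = Z(t) + beta(1) Z(t-1)

def acf(y, u):
    yc = y - y.mean()
    return np.sum(yc[u:] * yc[:len(yc) - u]) / np.sum(yc * yc)

print(acf(Y, 1), beta1 / (1 + beta1**2))  # both ~ 0.441
print(acf(Y, 2))                          # ~ 0: a.c.f. cuts off beyond lag q = 1
```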
Backward shift operator
Remember the translation operator T^u Y(t) = Y(t + u); likewise B^j Y(t) = Y(t − j)
Linear process: Y(t) = ∑_i ψ_i Z(t − i)
Need a convergence condition, e.g. ∑ |ψ_i| < ∞ or ∑ ψ_i² < ∞
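A linear process with geometrically decaying coefficients (so ∑ |ψ_i| < ∞ holds) can be sketched with a convolution; the choice ψ_i = 0.8^i and the truncation at 50 terms are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20_000
Z = rng.normal(size=T)

psi = 0.8 ** np.arange(50)               # psi_i = 0.8^i, truncated; sum |psi_i| < inf
Y = np.convolve(Z, psi, mode="valid")    # Y(t) = sum_i psi_i Z(t - i)
print(Y.var(), np.sum(psi**2))           # var{Y} = sigma_Z^2 sum psi_i^2
```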
Autoregressive process, AR(p)
Y(t) = φ(1)Y(t − 1) + … + φ(p)Y(t − p) + Z(t)
First-order, AR(1): Y(t) = φY(t − 1) + Z(t) (**), the Markov case
Repeated substitution gives Y(t) = ∑_{i=0}^{∞} φ^i Z(t − i), a linear process, and the representation is invertible
Need |φ| < 1 for convergence in probability / stationarity
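Sketch of an AR(1) simulation (φ = 0.7 and σ_Z = 1 are arbitrary), checking the stationary variance σ_Z²/(1 − φ²):

```python
import numpy as np

rng = np.random.default_rng(4)
T, phi = 20_000, 0.7                  # |phi| < 1, so the process is stationary

Y = np.zeros(T)
for t in range(1, T):                 # AR(1) recursion: Y(t) = phi Y(t-1) + Z(t)
    Y[t] = phi * Y[t - 1] + rng.normal()

print(Y.var(), 1.0 / (1.0 - phi**2))  # var{Y} = sigma_Z^2 / (1 - phi^2) ~ 1.96
```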
a.c.f. of AR(1), from (**) on the previous slide: ρ_YY(u) = φ^|u|
p.a.c.f., using the normal or linear definitions:
corr{Y(t), Y(t − m) | Y(t − 1), …, Y(t − m + 1)} = 0 for m > p when Y is AR(p)
Proof: via multiple regression
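The multiple-regression view can be checked directly (a sketch; the p.a.c.f. at lag m is read off as the coefficient of Y(t − m) when Y(t) is regressed on its last m values):

```python
import numpy as np

rng = np.random.default_rng(5)
T, phi = 20_000, 0.7
Y = np.zeros(T)
for t in range(1, T):                      # AR(1), so p = 1
    Y[t] = phi * Y[t - 1] + rng.normal()

def pacf(y, m):
    """Coefficient of Y(t-m) in the regression of Y(t) on Y(t-1),...,Y(t-m)."""
    X = np.column_stack([y[m - j : len(y) - j] for j in range(1, m + 1)])
    coefs, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return coefs[-1]

print(pacf(Y, 1))              # ~ phi = 0.7
print(pacf(Y, 2), pacf(Y, 3))  # ~ 0 for m > p = 1
```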
In the general case the p.a.c.f. at lag m is the partial correlation of Y(t) and Y(t − m), removing the linear effect of the intervening values Y(t − 1), …, Y(t − m + 1). Useful for prediction.
Yule–Walker equations for AR(p):
ρ_YY(u) = φ(1)ρ_YY(u − 1) + … + φ(p)ρ_YY(u − p), u > 0
obtained by correlating each side of the AR(p) equation with Y(t − u). Sometimes used for estimation.
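Sketch of Yule–Walker estimation for an AR(2) (the true coefficients (0.5, 0.3) are arbitrary), solving the p × p system in the sample correlations:

```python
import numpy as np
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(6)
T, phi1, phi2 = 50_000, 0.5, 0.3           # true AR(2) coefficients
Y = np.zeros(T)
for t in range(2, T):
    Y[t] = phi1 * Y[t - 1] + phi2 * Y[t - 2] + rng.normal()

def acf(y, u):
    yc = y - y.mean()
    return np.sum(yc[u:] * yc[:len(yc) - u]) / np.sum(yc * yc)

p = 2
rho = np.array([acf(Y, u) for u in range(p + 1)])
R = toeplitz(rho[:p])              # matrix of rho(|i - j|), i, j = 0..p-1
print(solve(R, rho[1:]))           # ~ [0.5, 0.3]
```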
ARMA(p, q): φ(B)Y_t = θ(B)Z_t
ARIMA(p, d, q): difference d times, then fit an ARMA(p, q), where
∇X_t = X_t − X_{t−1}
∇²X_t = X_t − 2X_{t−1} + X_{t−2}
arima.mle() fits by maximum likelihood assuming Gaussian noise
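arima.mle() is the S-PLUS routine; as a hedged modern analogue, the sketch below uses the ARIMA class from Python's statsmodels (the simulated ARIMA(1,1,0) series and φ = 0.7 are illustrative choices):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
T, phi = 2_000, 0.7

W = np.zeros(T)                        # stationary AR(1) part
for t in range(1, T):
    W[t] = phi * W[t - 1] + rng.normal()
X = np.cumsum(W)                       # integrate once: an ARIMA(1, 1, 0) series

fit = ARIMA(X, order=(1, 1, 0)).fit()  # Gaussian maximum likelihood
print(fit.params)                      # AR coefficient ~ 0.7
```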
ARMAX: φ(B)Y_t = β(B)X_t + θ(B)Z_t, fitted with arima.mle(…, xreg, …)
State space: s_t = F_t(s_{t−1}, z_t), Y_t = H_t(s_t, Z_t); could include X
Next: i.i.d. → mixing stationary process
Mixing has a variety of definitions; e.g. in the normal case, ∑ |c_YY(u)| < ∞; see e.g. Cryer and Chan (2008)
CLT: m_Y = c_Y^(T) = Ȳ = ∑_{t=1}^{T} Y(t)/T is approximately normal with
E{m_Y} = c_Y
var{m_Y} = T⁻² ∑_{s=1}^{T} ∑_{t=1}^{T} c_YY(s − t) ≈ T⁻¹ ∑_u c_YY(u)
         = σ_YY/T if white noise
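Numerical sketch of var{Ȳ} ≈ T⁻¹ ∑_u c_YY(u) for an AR(1) (φ = 0.6 is arbitrary; for AR(1) with σ_Z = 1 the sum of all autocovariances is 1/(1 − φ)²):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(8)
T, phi, reps = 500, 0.6, 20_000

Z = rng.normal(size=(reps, T))
Y = lfilter([1.0], [1.0, -phi], Z, axis=1)  # AR(1) recursion applied to each row

print(Y.mean(axis=1).var())                 # Monte Carlo var{Ybar}
print(1.0 / ((1.0 - phi)**2 * T))           # (1/T) sum_u c_YY(u) = 0.0125
```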
OLS. Y(t) = α + βt + N(t)
b = β + ∑ (t − t̄) N(t) / ∑ (t − t̄)² = β + ∑ u(t) N(t), where u(t) = (t − t̄)/∑_s (s − t̄)²
E{b} = β
var{b} = ∑_s ∑_t u(s) u(t) c_NN(s − t)
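Sketch: evaluate var{b} under AR(1) noise and compare with the value the i.i.d. formula would give (φ = 0.6 and T = 200 are arbitrary):

```python
import numpy as np

T, phi = 200, 0.6
t = np.arange(T, dtype=float)
u = (t - t.mean()) / np.sum((t - t.mean())**2)     # the weights u(t)

# AR(1) noise: c_NN(h) = sigma_Z^2 phi^|h| / (1 - phi^2), here with sigma_Z = 1
c_NN = phi ** np.abs(np.subtract.outer(t, t)) / (1.0 - phi**2)

print(u @ c_NN @ u)               # var{b} = sum_s sum_t u(s) u(t) c_NN(s - t)
print(c_NN[0, 0] * np.sum(u**2))  # i.i.d.-noise value: correlation inflates var{b}
```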
Cumulants. cum(Y₁, Y₂, …, Y_k) extends the mean, variance, and covariance:
cum(Y) = E{Y}
cum(Y, Y) = Var{Y}
cum(X, Y) = Cov(X, Y)
DRB (1975)
Proof of the ordinary CLT. S_T = Y(1) + … + Y(T)
cum_k(S_T) = T κ_k, by additivity and independence
cum_k(S_T/√T) = T^{−k/2} cum_k(S_T) = O(T·T^{−k/2}) → 0 for k > 2 as T → ∞
Normal cumulants of order > 2 are 0, and the normal is determined by its moments, so
(S_T − Tμ)/√T tends in distribution to N(0, σ²)
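The cumulant scaling can be seen empirically (a sketch using scipy's k-statistics, which are unbiased cumulant estimates; the centred Exp(1) summands, with κ₂ = 1 and κ₃ = 2, are an arbitrary skewed choice):

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(9)
reps = 200_000
for T in (25, 400):
    # S_T = sum of T i.i.d. Exp(1) draws ~ Gamma(T, 1); centre and rescale
    S = (rng.gamma(T, size=reps) - T) / np.sqrt(T)
    # cum_k(S_T/sqrt(T)) = T^(1 - k/2) kappa_k: fixed at k = 2, -> 0 for k > 2
    print(T, kstat(S, 2), kstat(S, 3))   # ~ (1, 2/sqrt(T))
```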
Stationary series: cumulant functions
cum{Y(t + u₁), …, Y(t + u_{k−1}), Y(t)} = c_k(t + u₁, …, t + u_{k−1}, t) = c_k(u₁, …, u_{k−1}), k = 2, 3, 4, …
Cumulant mixing: ∑_u |c_k(u₁, …, u_{k−1})| < ∞, where u = (u₁, …, u_{k−1})