State Space Models
Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

y_t = A_t x_t + v_t (The observation equation)
x_t = B_t x_{t-1} + u_t (The state equation)

The time series {y_t : t ∈ T} is said to have a state-space representation.
Note: {u_t : t ∈ T} and {v_t : t ∈ T} denote two vector-valued time series satisfying:
• E(u_t) = E(v_t) = 0.
• E(u_t u_s′) = E(v_t v_s′) = 0 if t ≠ s.
• E(u_t u_t′) = Σ_u and E(v_t v_t′) = Σ_v.
• E(u_t v_s′) = E(v_t u_s′) = 0 for all t and s.
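The sketch below simulates this model, just to make the two equations concrete. It is an illustration, not part of the original notes: the name sim_state_space is hypothetical, Gaussian noise is an assumption (the definition above only fixes means and covariances), and A_t, B_t are taken time-invariant for simplicity.

```python
import numpy as np

def sim_state_space(A, B, Su, Sv, x0, T, rng):
    """Simulate x_t = B x_{t-1} + u_t and y_t = A x_t + v_t."""
    p, q = B.shape[0], A.shape[0]
    x = np.empty((T, p)); y = np.empty((T, q))
    x_prev = x0
    for t in range(T):
        u = rng.multivariate_normal(np.zeros(p), Su)  # state noise, cov Σ_u
        v = rng.multivariate_normal(np.zeros(q), Sv)  # observation noise, cov Σ_v
        x_prev = B @ x_prev + u                       # the state equation
        x[t] = x_prev
        y[t] = A @ x_prev + v                         # the observation equation
    return x, y
```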
Example: One might be tracking an object with several radar stations. The process {x_t : t ∈ T} gives the position of the object at time t. The process {y_t : t ∈ T} denotes the observations at time t made by the several radar stations. As in the Hidden Markov Model, we will be interested in determining the position of the object, {x_t : t ∈ T}, from the observations, {y_t : t ∈ T}, made by the several radar stations.
Example: Many of the models we have considered to date can be thought of as State-Space models. Autoregressive model of order p:

x_t = β_1 x_{t-1} + β_2 x_{t-2} + … + β_p x_{t-p} + u_t
Define the state vector X_t = (x_t, x_{t-1}, …, x_{t-p+1})′ and U_t = (u_t, 0, …, 0)′. Then

y_t = A X_t with A = (1, 0, …, 0) (Observation equation, with v_t = 0)

and

X_t = B X_{t-1} + U_t (State equation)

where B is the p × p companion matrix

[ β_1  β_2  …  β_{p-1}  β_p ]
[  1    0   …    0       0  ]
[  0    1   …    0       0  ]
[  ⋮    ⋮        ⋮       ⋮  ]
[  0    0   …    1       0  ]
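In code, the companion matrix B and the row vector A can be built directly from the AR coefficients; the helper name ar_companion and the sample coefficients below are illustrative only:

```python
import numpy as np

def ar_companion(phi):
    """Companion (state-transition) matrix B for an AR(p) model."""
    p = len(phi)
    B = np.zeros((p, p))
    B[0, :] = phi                # first row carries β_1, ..., β_p
    B[1:, :-1] = np.eye(p - 1)   # sub-diagonal shifts the lagged values down
    return B

phi = [0.6, -0.3]                # illustrative AR(2) coefficients
B = ar_companion(phi)
A = np.eye(1, len(phi))          # A = (1, 0, ..., 0): y_t picks out x_t
```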
Hidden Markov Model: Assume that there are m states. Also assume that the observations Y_t are discrete and take on n possible values. Suppose that the m states are denoted by the unit vectors:

e_1 = (1, 0, …, 0)′, e_2 = (0, 1, …, 0)′, …, e_m = (0, 0, …, 1)′.

Suppose that the n possible observations taken at each state are likewise denoted by the unit vectors

f_1 = (1, 0, …, 0)′, f_2 = (0, 1, …, 0)′, …, f_n = (0, 0, …, 1)′ in R^n.
Let P = (p_ij) denote the m × m matrix of transition probabilities,

p_ij = P(x_t = e_j | x_{t-1} = e_i),

and note that, because x_{t-1} is always one of the unit vectors,

E(x_t | x_{t-1}) = P′ x_{t-1}.

Let B = P′ and u_t = x_t − B x_{t-1}, so that

x_t = B x_{t-1} + u_t (The State Equation)

with E(u_t | x_{t-1}) = 0.
Also, since x_t is always a unit vector, x_t x_t′ = diag(x_t), hence

E(x_t x_t′ | x_{t-1}) = diag(E(x_t | x_{t-1})) = diag(B x_{t-1}),

where diag(v) = the diagonal matrix with the components of the vector v along the diagonal. Since u_t = x_t − B x_{t-1}, then

E(u_t u_t′ | x_{t-1}) = E(x_t x_t′ | x_{t-1}) − (B x_{t-1})(B x_{t-1})′,

and thus

E(u_t u_t′ | x_{t-1}) = diag(B x_{t-1}) − (B x_{t-1})(B x_{t-1})′.
We have defined the states and observations as unit vectors. Hence, if Q = (q_jk) denotes the m × n matrix of observation probabilities,

q_jk = P(y_t = f_k | x_t = e_j),

then

E(y_t | x_t) = Q′ x_t.

Let A = Q′ and v_t = y_t − A x_t. Then

y_t = A x_t + v_t (The Observation Equation)

with E(v_t | x_t) = 0 and E(v_t v_t′ | x_t) = diag(A x_t) − (A x_t)(A x_t)′.
Hence, with these definitions, the state sequence of a Hidden Markov Model satisfies:

x_t = B x_{t-1} + u_t (The State Equation)

with B = P′ and E(u_t | x_{t-1}) = 0. The observation sequence satisfies:

y_t = A x_t + v_t (The Observation Equation)

with A = Q′ and E(v_t | x_t) = 0.
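A small numeric sketch of this construction (the matrices P and Q below are illustrative values, not from the notes):

```python
import numpy as np

# P[i, j] = P(x_t = e_j | x_{t-1} = e_i): transition probabilities
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Q[j, k] = P(y_t = f_k | x_t = e_j): observation probabilities
Q = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

B = P.T   # state equation matrix:       E(x_t | x_{t-1}) = P' x_{t-1}
A = Q.T   # observation equation matrix: E(y_t | x_t)     = Q' x_t

e1 = np.array([1.0, 0.0])  # unit-vector coding of state 1
print(B @ e1)              # transition probabilities out of state 1: [0.9 0.1]
```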
We are now interested in determining the state vector x_t in terms of some or all of the observation vectors y_1, y_2, y_3, …, y_T. We will consider finding the "best" linear predictor. (We can include a constant term if, in addition, one of the observations (y_0 say) is the vector of 1's.) We will consider estimation of x_t in terms of
• y_1, y_2, y_3, …, y_{t-1} (the prediction problem)
• y_1, y_2, y_3, …, y_t (the filtering problem)
• y_1, y_2, y_3, …, y_T (t < T, the smoothing problem)
For any vector x define:

x̂^s = (x̂^s(1), x̂^s(2), …)′

where x̂^s(i) is the best linear predictor of x(i), the i-th component of x, based on y_0, y_1, y_2, …, y_s. The best linear predictor of x(i) is the linear function of y_0, y_1, y_2, …, y_s that minimizes the mean squared error

E[(x(i) − x̂^s(i))²].
Remark: The best predictor is the unique vector of the form:

x̂^s = C_0 y_0 + C_1 y_1 + C_2 y_2 + … + C_s y_s

where C_0, C_1, C_2, …, C_s are selected so that the prediction errors are uncorrelated with the observations:

E[(x − x̂^s) y_t′] = 0 for t = 0, 1, 2, …, s.
Remark: Let u and v be two random vectors. Then û = Cv is the optimal linear predictor of u based on v if

E[(u − Cv) v′] = 0, that is, if C = E(uv′)[E(vv′)]^{-1}.
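A quick numerical check of this remark (an illustrative sketch; blp_matrix is a hypothetical helper that estimates C by replacing E(uv′) and E(vv′) with sample moments over simulated draws):

```python
import numpy as np

def blp_matrix(u, v):
    """Estimate C = E(u v') E(v v')^{-1} from rows of draws of u and v."""
    Suv = u.T @ v / len(u)   # sample version of E(u v')
    Svv = v.T @ v / len(v)   # sample version of E(v v')
    return Suv @ np.linalg.inv(Svv)

rng = np.random.default_rng(0)
v = rng.standard_normal((10_000, 2))
u = v @ np.array([[1.0], [0.5]]) + 0.1 * rng.standard_normal((10_000, 1))

C = blp_matrix(u, v)                   # close to the true [1.0, 0.5]
print((u - v @ C.T).T @ v / len(v))    # near zero: E[(u - Cv) v'] = 0
```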
Kalman Filtering: Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

y_t = A_t x_t + v_t
x_t = B x_{t-1} + u_t

Let

x̂_t^s = the best linear predictor of x_t based on y_0, y_1, …, y_s

and

Σ_t^s = E[(x_t − x̂_t^s)(x_t − x̂_t^s)′].
Then x̂_t^t and Σ_t^t can be computed recursively from x̂_{t-1}^{t-1} and Σ_{t-1}^{t-1}, where the recursions are given below. One also assumes that the initial vector x_0 has mean μ and covariance matrix Σ, and that x_0 is uncorrelated with {u_t : t ∈ T} and {v_t : t ∈ T}.
Summary: The Kalman equations

1. ŷ_t^{t-1} = A_t x̂_t^{t-1}
2. Σ_t^{t-1} = B Σ_{t-1}^{t-1} B′ + Σ_u
3. Σ_t^t = (I − K_t A_t) Σ_t^{t-1}
4. x̂_t^{t-1} = B x̂_{t-1}^{t-1}
5. x̂_t^t = x̂_t^{t-1} + K_t (y_t − A_t x̂_t^{t-1}), where K_t = Σ_t^{t-1} A_t′ [A_t Σ_t^{t-1} A_t′ + Σ_v]^{-1}

with x̂_0^0 = μ and Σ_0^0 = Σ.
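The five equations translate directly into a forward loop. The sketch below is a minimal transcription (assuming time-invariant A and B, as in the proof that follows); the numbers in the comments refer to the equations above:

```python
import numpy as np

def kalman_filter(y, A, B, Su, Sv, mu0, Sig0):
    """Forward Kalman recursion: filtered means and covariances."""
    T, p = len(y), len(mu0)
    x_filt = np.empty((T, p)); S_filt = np.empty((T, p, p))
    x_f, S_f = mu0, Sig0                     # x̂_0^0 = μ, Σ_0^0 = Σ
    for t in range(T):
        x_p = B @ x_f                        # (4) x̂_t^{t-1}
        S_p = B @ S_f @ B.T + Su             # (2) Σ_t^{t-1}
        K = S_p @ A.T @ np.linalg.inv(A @ S_p @ A.T + Sv)  # gain K_t from (5)
        x_f = x_p + K @ (y[t] - A @ x_p)     # (5) x̂_t^t
        S_f = S_p - K @ A @ S_p              # (3) Σ_t^t
        x_filt[t], S_filt[t] = x_f, S_f
    return x_filt, S_filt
```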
Proof: Now x_t = B x_{t-1} + u_t, and u_t is uncorrelated with y_0, y_1, …, y_{t-1}; hence

x̂_t^{t-1} = B x̂_{t-1}^{t-1},

proving (4). Note that the filtered estimator x̂_t^t is obtained by updating x̂_t^{t-1} with the new information in y_t.
Let d_t = x_t − x̂_t^{t-1}. Let e_t = y_t − A_t x̂_t^{t-1} = A_t d_t + v_t (the innovation). Given y_0, y_1, y_2, …, y_{t-1}, the best linear predictor of d_t using e_t is:

d̂_t = E(d_t e_t′)[E(e_t e_t′)]^{-1} e_t.
Hence

x̂_t^t = x̂_t^{t-1} + K_t (y_t − A_t x̂_t^{t-1})   (5)

where K_t = E(d_t e_t′)[E(e_t e_t′)]^{-1} and e_t = y_t − A_t x̂_t^{t-1}. Now

E(d_t e_t′) = E[d_t (A_t d_t + v_t)′] = Σ_t^{t-1} A_t′ and E(e_t e_t′) = A_t Σ_t^{t-1} A_t′ + Σ_v,

so that K_t = Σ_t^{t-1} A_t′ [A_t Σ_t^{t-1} A_t′ + Σ_v]^{-1}.
Also

x_t − x̂_t^{t-1} = B(x_{t-1} − x̂_{t-1}^{t-1}) + u_t,

and u_t is uncorrelated with x_{t-1} − x̂_{t-1}^{t-1}; hence

Σ_t^{t-1} = B Σ_{t-1}^{t-1} B′ + Σ_u.   (2)
Thus the forward recursion so far reads: compute x̂_t^{t-1} from (4), update it to x̂_t^t by (5), where K_t is given above, and propagate the covariance by (2). Also

x_t − x̂_t^t = d_t − K_t e_t,

and, using E(d_t e_t′) = K_t E(e_t e_t′),

Σ_t^t = E[(d_t − K_t e_t)(d_t − K_t e_t)′] = Σ_t^{t-1} − K_t A_t Σ_t^{t-1}.

Hence

Σ_t^t = (I − K_t A_t) Σ_t^{t-1}.   (3)

The proof of (1) will be left as an exercise.
Example: Suppose we have an AR(2) time series

x_t = β_1 x_{t-1} + β_2 x_{t-2} + u_t.

What we observe is the time series

y_t = x_t + v_t,

where {u_t : t ∈ T} and {v_t : t ∈ T} are white noise time series with standard deviations σ_u and σ_v.
This model can be expressed as a state-space model by defining:

X_t = (x_t, x_{t-1})′, U_t = (u_t, 0)′, A = (1, 0) and B =
[ β_1  β_2 ]
[  1    0  ]

Then

y_t = A X_t + v_t (the observation equation).

The equation x_t = β_1 x_{t-1} + β_2 x_{t-2} + u_t can be written

X_t = B X_{t-1} + U_t (the state equation).

Note:

Σ_u = E(U_t U_t′) =
[ σ_u²  0 ]
[  0    0 ]

and Σ_v = E(v_t²) = σ_v².
The Kalman equations now become, with A = (1, 0) and B, Σ_u, Σ_v as above:

1. ŷ_t^{t-1} = A x̂_t^{t-1} = the first component of x̂_t^{t-1}
2. Σ_t^{t-1} = B Σ_{t-1}^{t-1} B′ + Σ_u
3. Σ_t^t = (I − K_t A) Σ_t^{t-1}
4. x̂_t^{t-1} = B x̂_{t-1}^{t-1}
5. x̂_t^t = x̂_t^{t-1} + K_t (y_t − A x̂_t^{t-1}), where K_t = Σ_t^{t-1} A′ [A Σ_t^{t-1} A′ + σ_v²]^{-1}

Let x̂_0^0 = μ and Σ_0^0 = Σ be the starting values.
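Putting the example together: the sketch below simulates the AR(2)-plus-noise model and runs it through the kalman_filter sketch given after the summary. The parameter values β_1 = 0.5, β_2 = 0.3, σ_u = 1, σ_v = 2 and the zero-mean initial conditions are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
b1, b2, su, sv = 0.5, 0.3, 1.0, 2.0    # illustrative parameter values

B = np.array([[b1, b2], [1.0, 0.0]])   # companion matrix of the AR(2)
A = np.array([[1.0, 0.0]])             # y_t observes the first component
Su = np.array([[su**2, 0.0], [0.0, 0.0]])
Sv = np.array([[sv**2]])

# simulate x_t = b1 x_{t-1} + b2 x_{t-2} + u_t and y_t = x_t + v_t
T = 200
x = np.zeros(T + 2)
for t in range(2, T + 2):
    x[t] = b1 * x[t-1] + b2 * x[t-2] + su * rng.standard_normal()
y = (x[2:] + sv * rng.standard_normal(T)).reshape(T, 1)

x_filt, S_filt = kalman_filter(y, A, B, Su, Sv,
                               mu0=np.zeros(2), Sig0=np.eye(2))
```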
Kalman Filtering (smoothing): Now consider finding the smoothed estimators

x̂_t^T and Σ_t^T = E[(x_t − x̂_t^T)(x_t − x̂_t^T)′].

These can be found by successive backward recursions for t = T, T − 1, …, 2, 1, starting from x̂_T^T and Σ_T^T, which are produced by the forward (filtering) recursion.
The backward recursions:

1. J_{t-1} = Σ_{t-1}^{t-1} B′ [Σ_t^{t-1}]^{-1}
2. x̂_{t-1}^T = x̂_{t-1}^{t-1} + J_{t-1} (x̂_t^T − x̂_t^{t-1})
3. Σ_{t-1}^T = Σ_{t-1}^{t-1} + J_{t-1} (Σ_t^T − Σ_t^{t-1}) J_{t-1}′

In the example: x̂_{t-1}^{t-1}, Σ_{t-1}^{t-1}, x̂_t^{t-1} and Σ_t^{t-1} have already been calculated in the forward recursion.
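A sketch of the backward pass (illustrative; rather than storing x̂_t^{t-1} and Σ_t^{t-1} during the forward pass, it recomputes them from B and Σ_u, which is equivalent by equations (4) and (2)):

```python
import numpy as np

def kalman_smooth(x_filt, S_filt, B, Su):
    """Backward recursion: smoothed means x̂_t^T and covariances Σ_t^T."""
    T = len(x_filt)
    x_sm, S_sm = x_filt.copy(), S_filt.copy()   # at t = T the values agree
    for t in range(T - 2, -1, -1):
        x_p = B @ x_filt[t]                     # x̂_{t+1}^t, as in (4)
        S_p = B @ S_filt[t] @ B.T + Su          # Σ_{t+1}^t, as in (2)
        J = S_filt[t] @ B.T @ np.linalg.inv(S_p)            # smoother gain, recursion 1.
        x_sm[t] = x_filt[t] + J @ (x_sm[t+1] - x_p)         # recursion 2.
        S_sm[t] = S_filt[t] + J @ (S_sm[t+1] - S_p) @ J.T   # recursion 3.
    return x_sm, S_sm

# e.g. continuing the AR(2) example:
# x_sm, S_sm = kalman_smooth(x_filt, S_filt, B, Su)
```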