Short Introduction to Particle Filtering by Arthur Pece [follows my Introduction to Kalman filtering]
Unscented Kalman filter • If the dynamics or observation models are nonlinear, linearization is an option (extended Kalman filter) • A better option is to take samples in state space and propagate all samples through the nonlinear equations • In the unscented Kalman filter, regular samples are taken d standard deviations apart, in all directions of state space
Unscented transformation • The n unscented samples are propagated through the dynamical/observation equations to obtain n new samples • The mean and covariance of these new samples are taken as the mean and covariance of the new state/observation
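As a concrete illustration of these two slides, here is a minimal NumPy sketch, assuming a Gaussian state with mean m and covariance P, samples placed d standard deviations from the mean along the axes of the covariance, and an arbitrary nonlinear function f; the function names are mine, not the presenter's.

```python
import numpy as np

def unscented_samples(m, P, d=1.0):
    """Place 2n samples d standard deviations from the mean m,
    in both directions along each axis of the covariance P."""
    L = np.linalg.cholesky(P)           # matrix square root of the covariance
    offsets = d * np.hstack([L, -L])    # one column per sample
    return m[:, None] + offsets         # shape (n_dims, 2 * n_dims)

def unscented_transform(samples, f, extra_cov=None):
    """Propagate the samples through f; their mean and covariance are
    taken as the mean and covariance of the transformed variable."""
    propagated = np.array([f(s) for s in samples.T]).T
    mean = propagated.mean(axis=1)
    diff = propagated - mean[:, None]
    cov = diff @ diff.T / propagated.shape[1]
    if extra_cov is not None:           # e.g. additive process/observation noise
        cov = cov + extra_cov
    return mean, cov, propagated

# Example: a mildly nonlinear mapping of a 2-D state
m = np.array([1.0, 0.5])
P = np.diag([0.1, 0.2])
f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])
mean, cov, _ = unscented_transform(unscented_samples(m, P), f)
```

Every sample gets equal weight 1/n here, matching the plain averages on these slides; full UKF formulations usually also include a weighted central sigma point, which is left out for brevity.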
Unscented Kalman equations • Prediction: x_i(t) = D x_i(t-1) + C u(t-1); x̄(t) = Σ_i x_i(t) / n; A(t) = Σ_i (x_i - x̄)(x_i - x̄)^T / n + N • Update: y_i = F x_i; ȳ = Σ_i y_i / n; innovation: v = y - ȳ (y the actual observation); innovation covariance: W = Σ_i (y_i - ȳ)(y_i - ȳ)^T / n + R; Kalman gain, posterior mean, posterior covariance: as in the standard Kalman filter
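The same equations transcribed into a runnable sketch, keeping the slide's symbols (D, C, N, F, R) and assuming the columns of `samples` are the unscented samples x_i drawn around the previous posterior; the closing gain/posterior step is my rendering of the standard Kalman update that the slide defers to.

```python
import numpy as np

def ukf_step(samples, u, y, D, C, N, F, R):
    """One prediction/update cycle following the slide's equations.
    samples: columns are the unscented samples x_i of the previous state."""
    n = samples.shape[1]

    # Prediction: x_i(t) = D x_i(t-1) + C u(t-1)
    xs = D @ samples + (C @ u)[:, None]
    x_mean = xs.mean(axis=1)
    dx = xs - x_mean[:, None]
    A = dx @ dx.T / n + N                  # predicted state covariance

    # Update: y_i = F x_i, predicted observation mean, innovation and its covariance
    ys = F @ xs
    y_mean = ys.mean(axis=1)
    dy = ys - y_mean[:, None]
    W = dy @ dy.T / n + R                  # innovation covariance
    v = y - y_mean                         # innovation (y is the actual observation)

    # Kalman gain, posterior mean, posterior covariance: as in the standard Kalman filter
    K = A @ F.T @ np.linalg.inv(W)
    post_mean = x_mean + K @ v
    post_cov = A - K @ W @ K.T
    return post_mean, post_cov
```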
Unscented Kalman vs. particle • The unscented Kalman filter still uses a Gaussian approximation, which is not always satisfactory • In particle filtering, no parametric (e.g. Gaussian) approximation is imposed at any stage: densities are instead represented by a finite set of samples
Particle filtering • The posterior pdf is approximated by a finite set of particles • Each sample (particle) consists of a state, a weight, and possibly other information (a covariance for instance) • Each particle is propagated through the dynamical equations with noise (from a random number generator) • Each particle has its own predicted observation
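A minimal sketch of this particle representation, assuming a simple linear-Gaussian model (constant-velocity dynamics D, position-only observation F) with illustrative noise levels; the variable names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles, state_dim = 100, 2
D = np.array([[1.0, 1.0], [0.0, 1.0]])   # dynamics (constant-velocity example)
F = np.array([[1.0, 0.0]])               # observation: only the position is seen
process_std = 0.1

# Each particle: a state vector plus a weight (other per-particle data could be added)
states = rng.normal(size=(n_particles, state_dim))
weights = np.full(n_particles, 1.0 / n_particles)

# Propagate every particle through the dynamical equations, adding generator noise
states = states @ D.T + rng.normal(scale=process_std, size=states.shape)

# Each particle has its own predicted observation
predicted_obs = states @ F.T              # shape (n_particles, obs_dim)
```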
ConDensation filter • 3 steps: prediction/sampling, observation/weighting, re-sampling • Sampling: for each particle, use the dynamical equations to predict the current state from the previous state, then add noise • Weighting: set the weight of each particle proportional to the likelihood of its state and normalize the weights to unit sum ---> The weighted density of particles is equal to the product of prior density (from the sampling density) and likelihood (from the weights)
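Continuing the same illustrative setup, the weighting step could look like the sketch below: each weight is set proportional to a Gaussian likelihood of the actual observation given the particle's predicted observation, then normalized to unit sum (the Gaussian form is my assumption; ConDensation only requires some likelihood model).

```python
import numpy as np

def condensation_weights(predicted_obs, y, obs_std):
    """Weight each particle by the likelihood of the observation y given its
    predicted observation, then normalize the weights to unit sum."""
    sq_err = np.sum((predicted_obs - y) ** 2, axis=1)
    weights = np.exp(-0.5 * sq_err / obs_std ** 2)   # Gaussian likelihood, up to a constant
    return weights / weights.sum()

# e.g. weights = condensation_weights(predicted_obs, y=np.array([0.3]), obs_std=0.5)
```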
ConDensation resampling • It can be proven that, by iterating the sampling and weighting steps alone, the weights almost surely degenerate until one particle has unit weight and all other particles have zero weight • The solution is re-sampling: draw new particles from the set of old particles, choosing each old particle with probability equal to its weight
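A minimal resampling sketch under the same assumptions: new particles are drawn with replacement from the old set, each old particle chosen with probability equal to its weight, after which all weights are reset to 1/n.

```python
import numpy as np

def resample(states, weights, rng=None):
    """Draw a new particle set from the old one, choosing each old particle
    with probability equal to its weight, then reset all weights to 1/n."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    idx = rng.choice(n, size=n, replace=True, p=weights)
    return states[idx], np.full(n, 1.0 / n)
```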
General particle filter • 3 steps: sampling, weighting, re-sampling • Sampling: for each particle, sample the new state from a proposal density conditional on the previous state and the observation; the main difference between particle filters lies in the choice of proposal density • Weighting: set the particle weight proportional to the ratio of the target (posterior) density to the proposal density, and normalize the weights • Re-sampling: as before
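A sketch of one step of this general scheme, using a deliberately simple scalar model so that the transition prior p(x_t | x_t-1), the likelihood p(y | x_t), and a Gaussian proposal q(x_t | x_t-1, y) can all be evaluated in closed form; the particular proposal (a Gaussian centred between the previous state and the observation) and all noise levels are my illustrative choices.

```python
import numpy as np

def gauss_pdf(x, mean, std):
    """Univariate Gaussian density (scalar state, for simplicity)."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def general_pf_step(states, weights, y, rng=None):
    """One sampling + weighting step with an explicit proposal density.
    Scalar random-walk dynamics and direct observation of the state are assumed."""
    if rng is None:
        rng = np.random.default_rng()
    proc_std, obs_std, prop_std = 0.3, 0.5, 0.25

    # Sampling: draw the new state from a proposal conditioned on the
    # previous state AND the observation (here: a Gaussian centred between them)
    prop_mean = 0.5 * states + 0.5 * y
    new_states = rng.normal(prop_mean, prop_std)

    # Weighting: (unnormalized) target over proposal, times the previous weight
    # (the previous weights are uniform right after a resampling step)
    target = gauss_pdf(y, new_states, obs_std) * gauss_pdf(new_states, states, proc_std)
    proposal = gauss_pdf(new_states, prop_mean, prop_std)
    new_weights = weights * target / proposal
    return new_states, new_weights / new_weights.sum()
```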
Kalman particle filter • 3 steps: sampling, weighting, re-sampling • Main difference from ConDensation: - in ConDensation, the proposal density is the prior pdf - in the KPF, the proposal density is the posterior pdf obtained from a Kalman model
Kalman particle • A Kalman particle behaves as a Kalman filter in the sampling stage, except that random noise is added at the end • The relationship between a Kalman filter and a KPF is the same as that between a Gaussian and a mixture of Gaussians
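A minimal sketch of that per-particle sampling stage, assuming the linear-Gaussian matrices D, N, F, R from the earlier slides and that the particle carries only a state (so its predictive covariance is taken to be just the process noise N; a particle that also stores its own covariance, as mentioned earlier, would use that instead). Names are mine, not the presenter's.

```python
import numpy as np

def kpf_sample(state, y, D, N, F, R, rng=None):
    """Sample one particle's new state: run a Kalman prediction/update for
    this particle, then draw from the resulting posterior Gaussian."""
    if rng is None:
        rng = np.random.default_rng()

    # Kalman prediction (the particle's own state acts as the previous mean;
    # its predictive covariance is assumed to be just the process noise N)
    x_pred = D @ state
    P_pred = N

    # Kalman update
    W = F @ P_pred @ F.T + R                  # innovation covariance
    K = P_pred @ F.T @ np.linalg.inv(W)       # Kalman gain
    post_mean = x_pred + K @ (y - F @ x_pred)
    post_cov = P_pred - K @ W @ K.T

    # "Random noise added at the end": sample from the per-particle posterior
    return rng.multivariate_normal(post_mean, post_cov)
```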
Summary • Two approaches to tracking: Kalman filter and particle filter • The unscented Kalman filter is an extension of the Kalman filter • The Kalman particle filter uses the Kalman filter as a sub-routine of a particle filter