Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks


Presentation Transcript


  1. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks - Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell; UAI 2000 -

  2. Outline • Introduction • Problem Formulation • Importance Sampling and Rao-Blackwellisation • Rao-Blackwellised Particle Filter • Example • Conclusion

  3. Introduction • The classical state estimation algorithms, the Kalman filter and the HMM filter, are applicable only to linear-Gaussian models and to small discrete state spaces; when the state space is large, the computational cost becomes too expensive. • Sequential Monte Carlo methods (particle filtering) were introduced (Handschin and Mayne, 1969) to handle more general, large state-space models.

  4. Particle Filtering (PF) = "condensation" = "sequential Monte Carlo" = "survival of the fittest" • PF can handle any type of probability distribution, nonlinearity, and non-stationarity. • PFs are powerful sampling-based inference/learning algorithms for DBNs.

  5. Drawback of PF • Inefficient in high-dimensional spaces (the variance becomes very large) • Solution • Rao-Blackwellisation, that is, sample a subset of the variables and integrate the remainder out exactly. The resulting estimates can be shown to have lower variance. • Rao-Blackwell theorem

  6. Problem Formulation • Model: general state-space model/DBN with hidden variables z_t and observed variables y_t • Objective: estimate the posterior p(z_{0:t} | y_{1:t}), or the filtering density p(z_t | y_{1:t}) • To solve this problem, one needs approximation schemes because of the intractable integrals involved.

  7. Additional assumption in this paper: • Divide the hidden variables into two groups, z_t = (r_t, x_t) • The conditional posterior distribution p(x_{0:t} | r_{0:t}, y_{1:t}) is analytically tractable • We then only need to focus on estimating p(r_{0:t} | y_{1:t}), which lies in a space of reduced dimension.

  8. 3. Importance Sampling and Rao-Blackwellisation • Monte Carlo integration: draw N i.i.d. samples z^(i) ~ p(z_{0:t} | y_{1:t}) and approximate the expectation of a function f by I_hat(f) = (1/N) Σ_i f(z^(i)), as in the sketch below.
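
A minimal numerical sketch of plain Monte Carlo integration. The target density and test function here are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Plain Monte Carlo integration on a toy problem (assumed for illustration:
# target p = N(0, 1), test function f(z) = z^2, so the true value of I is 1).
rng = np.random.default_rng(0)

def f(z):
    return z ** 2

N = 100_000
z = rng.standard_normal(N)   # z^(i) ~ p(z), i = 1..N
I_hat = f(z).mean()          # I_hat = (1/N) * sum_i f(z^(i))
print(I_hat)                 # close to 1.0
```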

  9. But it is usually impossible to sample efficiently from the "target" posterior distribution p(z_{0:t} | y_{1:t}). • Importance Sampling (an alternative): sample from an importance function q(z_{0:t} | y_{1:t}) instead, and correct with the weight function w(z_{0:t}) = p(z_{0:t} | y_{1:t}) / q(z_{0:t} | y_{1:t}).

  10. Normalized importance weights: w̃^(i) = w(z^(i)) / Σ_j w(z^(j)) • Point-mass approximation: the weighted samples approximate the posterior by a sum of point masses, so I_hat(f) = Σ_i w̃^(i) f(z^(i)), as sketched below.
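
A minimal sketch of importance sampling with normalized weights. The target and proposal densities are toy assumptions (we pretend the target cannot be sampled directly):

```python
import numpy as np
from scipy.stats import norm

# Importance sampling on a toy problem (assumed for illustration: target
# p(z) = N(0, 1) treated as unsampleable; proposal q(z) = N(1, 2)).
rng = np.random.default_rng(0)

N = 100_000
z = rng.normal(loc=1.0, scale=2.0, size=N)         # z^(i) ~ q(z)
w = norm.pdf(z, 0.0, 1.0) / norm.pdf(z, 1.0, 2.0)  # w(z) = p(z) / q(z)
w_norm = w / w.sum()                               # normalized weights

# Point-mass approximation: E_p[f(z)] ~= sum_i w_norm^(i) * f(z^(i))
I_hat = np.sum(w_norm * z ** 2)
print(I_hat)   # close to 1.0
```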

  11. In the case z_{0:t} = (r_{0:t}, x_{0:t}), where p(x_{0:t} | r_{0:t}, y_{1:t}) is analytically tractable, we can marginalize out x_{0:t} analytically and only sample r_{0:t}, with weight function w(r_{0:t}) = p(r_{0:t} | y_{1:t}) / q(r_{0:t} | y_{1:t}).

  12. Example

  13. We can then estimate I(f) with reduced variance: by the Rao-Blackwell theorem, replacing f(r, x) with its exact conditional expectation E[f(r, x) | r] never increases, and typically decreases, the variance of the estimator, as the toy comparison below illustrates.
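
A toy numerical illustration of the variance reduction. The model (a Bernoulli regime r with a conditionally Gaussian x) is an assumption chosen so that E[x | r] is available in closed form:

```python
import numpy as np

# Toy Rao-Blackwellisation demo (assumed model, not from the paper):
# z = (r, x) with r ~ Bernoulli(0.5) and x | r ~ N(mu[r], 1). Estimate E[x].
rng = np.random.default_rng(0)
mu = np.array([-1.0, 3.0])          # E[x | r] is known analytically
N, runs = 200, 2_000

plain, rb = [], []
for _ in range(runs):
    r = rng.integers(0, 2, size=N)
    x = rng.normal(mu[r], 1.0)      # plain MC: sample x as well
    plain.append(x.mean())
    rb.append(mu[r].mean())         # Rao-Blackwellised: use E[x | r] exactly

print(np.var(plain), np.var(rb))    # the RB estimator's variance is smaller
```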

  14. 4. Rao-Blackwellised Particle Filters

  15. 4.1 Implementation Issues • Sequential Importance Sampling • Restrict the importance function to the recursive form q(r_{0:t} | y_{1:t}) = q(r_{0:t-1} | y_{1:t-1}) q(r_t | r_{0:t-1}, y_{1:t}) • We then obtain recursive formulas for the weights, and the "incremental weight" is given by w_t ∝ w_{t-1} · p(y_t | r_{0:t}, y_{1:t-1}) p(r_t | r_{0:t-1}) / q(r_t | r_{0:t-1}, y_{1:t}); a one-step sketch follows below.
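
A one-step sketch of this recursion for a hypothetical conditionally linear-Gaussian model. The dynamics, parameter values, and the name rbpf_step are illustrative assumptions, not the paper's example; sampling r_t from the prior makes the incremental weight reduce to the predictive likelihood, which the per-particle Kalman filter supplies in closed form:

```python
import numpy as np
from scipy.stats import norm

# Assumed toy model:
#   r_t ~ Markov chain with transition matrix A   (discrete part, sampled)
#   x_t = x_{t-1} + b[r_t] + v_t, v_t ~ N(0, Q)   (marginalized by a Kalman filter)
#   y_t = x_t + w_t,              w_t ~ N(0, R)
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.2, 0.8]])
b, Q, R = np.array([0.0, 2.0]), 0.1, 0.5

def rbpf_step(r, m, P, w, y):
    """One RBPF step. Particles: regimes r, Kalman means m, variances P, weights w."""
    # Sample the discrete state from the prior p(r_t | r_{t-1}); the prior
    # then cancels in the incremental weight, leaving the predictive likelihood.
    r = np.array([rng.choice(2, p=A[ri]) for ri in r])
    # Kalman prediction for each particle, conditional on its regime.
    m_pred, P_pred = m + b[r], P + Q
    # Incremental weight: p(y_t | r_{0:t}, y_{1:t-1}) = N(y_t; m_pred, P_pred + R).
    S = P_pred + R
    w = w * norm.pdf(y, m_pred, np.sqrt(S))
    # Kalman update of the marginalized continuous state.
    K = P_pred / S
    m = m_pred + K * (y - m_pred)
    P = (1.0 - K) * P_pred
    return r, m, P, w / w.sum()

N = 500
r, m, P = np.zeros(N, dtype=int), np.zeros(N), np.ones(N)
w = np.full(N, 1.0 / N)
r, m, P, w = rbpf_step(r, m, P, w, y=1.3)
print(np.sum(w * m))   # posterior mean of x_1
```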

  16. Choice of importance distribution • The simplest choice is to sample from the prior p(r_t | r_{0:t-1}), but this can be inefficient, since it ignores the most recent evidence y_t. • The "optimal" importance distribution is p(r_t | r_{0:t-1}, y_{1:t}), which minimizes the variance of the importance weights.

  17. But it is often too expensive. Several deterministic approximations to the optimal distribution have been proposed; see for example (de Freitas 1999, Doucet 1998). • Selection step • Resampling: eliminate samples with low importance weights and multiply samples with high importance weights (e.g. residual sampling, stratified sampling, multinomial sampling); a sketch of one such scheme follows.
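
A minimal sketch of systematic resampling, a common low-variance scheme closely related to the stratified sampling mentioned above (the residual and multinomial variants are drop-in alternatives). After resampling, all weights are reset to 1/N:

```python
import numpy as np

def systematic_resample(w, rng):
    """Return indices of the particles that survive, given normalized weights w."""
    N = len(w)
    positions = (rng.random() + np.arange(N)) / N    # one uniform draw, N strata
    return np.searchsorted(np.cumsum(w), positions)  # invert the weight CDF

rng = np.random.default_rng(0)
w = np.array([0.05, 0.05, 0.8, 0.1])   # low-weight particles tend to vanish
idx = systematic_resample(w, rng)
print(idx)   # e.g. [2 2 2 3]: particle 2 is multiplied, 0 and 1 eliminated
```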

  18. Example: On-Line Regression and Model Selection with Neural Networks • Goal: track on-line the number of basis functions of the network together with its parameters, given the data y_{1:t}. • It is possible to simulate the number of basis functions and to compute the coefficients analytically using Kalman filters. • This is because the output of the neural network is linear in the coefficients.

  19. Conclusions and Extensions • Successful applications • Conditionally linear-Gaussian state-space models • Conditionally finite state-space HMMs • Possible extensions • Dynamic models for counting observations • Dynamic models with a time-varying unknown covariance matrix for the dynamic noise • Classes of exponential-family state-space models, etc.
