This paper introduces Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks, a numerical approximation scheme for estimating the posterior distribution of the hidden variables of a DBN. The algorithm combines importance sampling with analytical marginalization to reduce the size of the space that must be sampled, improving computational efficiency. The method is demonstrated on a robot localization and map learning problem.
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks
Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell
Introduction
• Sampling in high-dimensional spaces is expensive
• Many models have tractable substructure that can be analytically marginalized out, conditional on certain other nodes being imputed
• The analytical step uses the Kalman filter, the HMM filter, or the junction tree algorithm for general DBNs
• Rao-Blackwellisation: marginalize out some of the variables to reduce the size of the space over which we need to sample
Problem Formulation
• General state space model/DBN with hidden variables z_t and observed variables y_t
• z_t is a Markov process with initial distribution p(z_0)
• Transition equation: p(z_t | z_{t-1})
• Observations y_t are conditionally independent given z_t, with marginal distribution p(y_t | z_t)
• Estimate the posterior p(z_{0:t} | y_{1:t})
• Recursion: p(z_{0:t} | y_{1:t}) = p(z_{0:t-1} | y_{1:t-1}) p(y_t | z_t) p(z_t | z_{t-1}) / p(y_t | y_{1:t-1})
• This recursion cannot be computed analytically in general, hence a numerical approximation scheme
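For a small discrete model the filtering recursion above can in fact be computed exactly; a minimal sketch on a toy 2-state HMM (all numbers illustrative, not from the paper) shows the predict-update structure that particle methods approximate when the sums or integrals are intractable:

```python
# Toy 2-state HMM: the filtering recursion computed exactly.
trans = [[0.9, 0.1], [0.2, 0.8]]   # p(z_t | z_{t-1})
emit  = [[0.7, 0.3], [0.4, 0.6]]   # p(y_t | z_t)
prior = [0.5, 0.5]                 # p(z_0)

def filter_step(belief, y):
    """p(z_t | y_{1:t}) ∝ p(y_t | z_t) · Σ_{z_{t-1}} p(z_t | z_{t-1}) p(z_{t-1} | y_{1:t-1})."""
    pred = [sum(trans[i][j] * belief[i] for i in range(2)) for j in range(2)]  # predict
    post = [emit[j][y] * pred[j] for j in range(2)]                            # update
    norm = sum(post)
    return [p / norm for p in post]

belief = prior
for y in [0, 0, 1]:
    belief = filter_step(belief, y)
```

In a DBN with continuous or high-dimensional hidden state, the sum inside `filter_step` has no closed form, which is what motivates the sampling schemes on the following slides.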
Cont’d
• Divide the hidden variables z_t into two groups, r_t and x_t
• The conditional posterior distribution p(x_{0:t} | y_{1:t}, r_{0:t}) is analytically tractable
• Focus on estimating p(r_{0:t} | y_{1:t}), which lives in a space of reduced dimension
• Decomposition of the posterior from the chain rule: p(r_{0:t}, x_{0:t} | y_{1:t}) = p(x_{0:t} | y_{1:t}, r_{0:t}) p(r_{0:t} | y_{1:t})
• The marginal distribution p(r_{0:t} | y_{1:t}) is obtained by integrating out x_{0:t}
Importance sampling and Rao-Blackwellisation
• Sample N i.i.d. random samples (particles) r_{0:t}^(i), i = 1, ..., N, according to p(r_{0:t} | y_{1:t})
• Empirical estimate: p̂(dr_{0:t} | y_{1:t}) = (1/N) Σ_i δ_{r_{0:t}^(i)}(dr_{0:t})
• Expected value of any function f of the hidden variables: Ê(f) = (1/N) Σ_i f(r_{0:t}^(i))
Cont’d
• Strong law of large numbers: Ê(f) converges almost surely to E(f) as N → ∞
• Central limit theorem: √N (Ê(f) − E(f)) converges in distribution to N(0, σ_f²)
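A minimal runnable sketch of the perfect-sampling estimate above (the Gaussian target and sample size are illustrative, not from the paper): the empirical average of i.i.d. draws converges to the true expectation, as the strong law of large numbers guarantees.

```python
import random

def empirical_estimate(f, sampler, n):
    """Monte Carlo estimate of E[f(z)] from n i.i.d. samples (perfect sampling)."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# Target: z ~ N(2, 1), so E[z] = 2. By the strong law of large numbers the
# empirical estimate converges to 2 almost surely as n grows.
est = empirical_estimate(lambda z: z, lambda: random.gauss(2.0, 1.0), 100_000)
```

By the central limit theorem the error of `est` shrinks at rate 1/√n, which is why the variance reduction from Rao-Blackwellisation (below) translates directly into needing fewer samples.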
Importance Sampling
• It is usually impossible to sample efficiently from the target distribution p(r_{0:t} | y_{1:t})
• Instead, sample from an importance distribution q, which must be:
• Easy to sample from
• Supported wherever the target is: p > 0 implies q > 0
Cont’d
• In the case where x_{0:t} can be marginalized out analytically, one can propose an alternative estimate for E(f)
• Alternative (Rao-Blackwellised) importance sampling estimate of E(f): weight each sample r_{0:t}^(i) by w^(i) ∝ p(r_{0:t}^(i) | y_{1:t}) / q(r_{0:t}^(i) | y_{1:t}) and compute E[f | y_{1:t}, r_{0:t}^(i)] exactly
• Its variance is lower (Rao-Blackwell theorem), so reaching a given precision requires a reduced number N of samples compared with sampling (r_{0:t}, x_{0:t}) jointly
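A minimal self-normalized importance sampling sketch (the target N(2, 1), proposal N(0, 2), and sample size are all illustrative assumptions, not from the paper): samples are drawn from an easy proposal q and reweighted by p/q, yielding a consistent estimate of the target expectation.

```python
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(f, n):
    # Proposal q = N(0, 2): easy to sample, and q > 0 wherever p > 0.
    zs = [random.gauss(0.0, 2.0) for _ in range(n)]
    # Unnormalized importance weights w(z) = p(z) / q(z), target p = N(2, 1).
    ws = [normal_pdf(z, 2.0, 1.0) / normal_pdf(z, 0.0, 2.0) for z in zs]
    # Self-normalized estimate of E_p[f(z)].
    total = sum(ws)
    return sum(w * f(z) for w, z in zip(ws, zs)) / total

random.seed(1)
est = importance_estimate(lambda z: z, 200_000)
```

Note the proposal's heavier tails keep the weights bounded here; a proposal with lighter tails than the target would make the weight variance blow up.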
Rao-Blackwellised particle filters
• Sequential importance sampling step
• For i = 1, ..., N, sample r_t^(i) ~ q(r_t | r_{0:t-1}^(i), y_{1:t}) and set r_{0:t}^(i) = (r_{0:t-1}^(i), r_t^(i))
• For i = 1, ..., N, evaluate the importance weights up to a normalizing constant: w_t^(i) ∝ p(y_t | y_{1:t-1}, r_{0:t}^(i)) p(r_t^(i) | r_{t-1}^(i)) / q(r_t^(i) | r_{0:t-1}^(i), y_{1:t})
• For i = 1, ..., N, normalize the importance weights: w̃_t^(i) = w_t^(i) / Σ_j w_t^(j)
• Selection step
• Multiply/suppress samples with high/low importance weights to obtain N random samples approximately distributed according to p(r_{0:t} | y_{1:t})
• MCMC step
• Apply a Markov transition kernel with invariant distribution p(r_{0:t} | y_{1:t}) to obtain N new samples
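The steps above can be sketched on a toy conditionally linear-Gaussian model (all model parameters are illustrative, not from the paper): a binary regime r_t is sampled with particles, while the continuous state x_t is marginalized exactly by a per-particle 1-D Kalman filter whose predictive likelihood serves as the importance weight. The proposal is the prior transition kernel and the optional MCMC step is omitted.

```python
import math, random

def kalman_step(mean, var, y, drift, q, r_obs):
    """One 1-D Kalman step: predict with the given drift, update on y.
    Returns (posterior mean, posterior var, predictive likelihood of y)."""
    pm, pv = mean + drift, var + q                    # predict
    s = pv + r_obs                                    # innovation variance
    lik = math.exp(-0.5 * (y - pm) ** 2 / s) / math.sqrt(2 * math.pi * s)
    k = pv / s                                        # Kalman gain
    return pm + k * (y - pm), (1 - k) * pv, lik

def rbpf(ys, n=500, drifts=(-1.0, 1.0), stay=0.9, q=0.1, r_obs=0.5):
    parts = [{"r": random.randint(0, 1), "mean": 0.0, "var": 1.0} for _ in range(n)]
    means = []
    for y in ys:
        ws = []
        for p in parts:
            # SIS step: sample r_t from the prior transition kernel.
            if random.random() > stay:
                p["r"] = 1 - p["r"]
            # Exact step: the Kalman filter marginalizes x_t given r_{1:t};
            # the predictive likelihood is the (unnormalized) weight.
            p["mean"], p["var"], lik = kalman_step(p["mean"], p["var"], y,
                                                   drifts[p["r"]], q, r_obs)
            ws.append(lik)
        tot = sum(ws) or 1.0
        ws = [w / tot for w in ws]
        means.append(sum(w * p["mean"] for w, p in zip(ws, parts)))
        # Selection step: multinomial resampling by normalized weights.
        parts = [dict(c) for c in random.choices(parts, weights=ws, k=n)]
    return means

random.seed(2)
# Simulate data; the regime is held fixed at r=1 (drift +1) for simplicity.
true_x, ys = 0.0, []
for _ in range(30):
    true_x += 1.0 + random.gauss(0.0, math.sqrt(0.1))
    ys.append(true_x + random.gauss(0.0, math.sqrt(0.5)))
est = rbpf(ys)[-1]
```

Each particle carries only the discrete regime plus two Kalman statistics, so the sampled space is just {0, 1} per step rather than the joint regime-and-state space.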
Robot localization and map building
• Problem of concurrent localization and map learning
• Location of the robot: L_t
• Colour of each grid cell: static map variables M(i)
• Observation y_t: noisy measurement of the colour of the current cell
• Basic idea of the algorithm:
• Sample the location L_t with the particle filter
• Marginalize out the map variables M(i) analytically, since the cells are conditionally independent given the trajectory L_{1:t}
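A minimal sketch of this idea on a 1-D grid (grid size, motion model, and noise levels are all illustrative assumptions, not the paper's experiment): each particle samples a location trajectory, and conditional on it the colour of each cell is a separate binary variable updated exactly by Bayes' rule, with the per-step evidence serving as the particle weight.

```python
import random

N_CELLS, P_MOVE, P_OBS = 12, 0.8, 0.9   # grid size, move success prob, sensor accuracy

def update_cell(prior_black, obs):
    """Exact Bayes update of one cell's colour belief given a noisy observation.
    Returns (posterior P(black), evidence p(obs))."""
    lik1 = P_OBS if obs == 1 else 1 - P_OBS       # p(obs | cell is black)
    lik0 = 1 - P_OBS if obs == 1 else P_OBS       # p(obs | cell is white)
    evidence = lik1 * prior_black + lik0 * (1 - prior_black)
    return lik1 * prior_black / evidence, evidence

def rbpf_slam(obs_seq, n=300):
    parts = [{"loc": 0, "map": [0.5] * N_CELLS} for _ in range(n)]
    for obs in obs_seq:
        ws = []
        for p in parts:
            # PF step: sample the location (attempted move right may fail).
            if random.random() < P_MOVE:
                p["loc"] = min(p["loc"] + 1, N_CELLS - 1)
            # Exact step: marginalize the visited cell's colour; the evidence
            # term is this particle's importance weight.
            p["map"][p["loc"]], w = update_cell(p["map"][p["loc"]], obs)
            ws.append(w)
        tot = sum(ws)
        ws = [w / tot for w in ws]
        # Selection step: resample particles (deep-copying each map).
        parts = [{"loc": c["loc"], "map": list(c["map"])}
                 for c in random.choices(parts, weights=ws, k=n)]
    return parts

random.seed(3)
true_map = [i % 2 for i in range(N_CELLS)]        # alternating cell colours
loc, obs_seq = 0, []
for _ in range(20):
    if random.random() < P_MOVE:
        loc = min(loc + 1, N_CELLS - 1)
    colour = true_map[loc]
    obs_seq.append(colour if random.random() < P_OBS else 1 - colour)
parts = rbpf_slam(obs_seq)
```

Because the map cells are conditionally independent given the trajectory, each particle stores only one number per cell instead of sampling the exponentially large joint map space.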