Kalman/Particle Filters Tutorial Haris Baltzakis, November 2004
Problem Statement
• Examples
  • A mobile robot moving within its environment
  • A vision-based system tracking cars on a highway
• Common characteristics
  • A state that changes dynamically
  • The state cannot be observed directly
  • Uncertainty due to noise in: the state, the way the state changes, and the observations
A Dynamic System
• Most commonly available:
  • Initial state
  • Observations
  • System (motion) model
  • Measurement (observation) model
Filters
• Goal: compute the hidden state from the observations
• "Filter": terminology from signal processing; can be considered a data-processing algorithm
• Classification: discrete-time vs. continuous-time
• Desired properties: sensor fusion, robustness to noise
• Wanted: each filter to be optimal in some sense
Example: Navigating Robot
• Input: odometry
• Motion model: according to odometry or INS
• Observation model: according to sensor measurements
• Localization -> inference task
• Mapping -> learning task
Bayesian Estimation
• Bayesian estimation: attempt to construct the posterior distribution of the state given all measurements
• Inference task (localization): compute the probability that the system is at state z at time t given all observations up to time t
• Note: the state depends only on the previous state (first-order Markov assumption)
Recursive Bayes Filter
• Bayes filter: two steps per time step, a prediction step and an update step
• Advantages over batch processing: online computation, faster, less memory, easy adaptation
• Example: two states, A and B
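The two-state example can be sketched in a few lines. This is a minimal illustration, not from the original slides: the transition matrix and observation likelihoods below are assumed values chosen only to make the prediction/update cycle concrete.

```python
import numpy as np

# Hypothetical two-state example (states A and B); all numbers are assumed.
T = np.array([[0.9, 0.1],   # row: P(A->A), P(A->B)
              [0.2, 0.8]])  # row: P(B->A), P(B->B)
# Observation likelihoods: rows are states A, B; columns are observations 0, 1
L = np.array([[0.7, 0.3],
              [0.1, 0.9]])

def bayes_filter_step(belief, observation):
    """One prediction + update cycle of the recursive Bayes filter."""
    predicted = T.T @ belief                 # prediction: sum over previous states
    updated = L[:, observation] * predicted  # update: weight by observation likelihood
    return updated / updated.sum()           # normalize

belief = np.array([0.5, 0.5])  # uniform prior over {A, B}
for z in [1, 1, 0]:            # a short assumed observation sequence
    belief = bayes_filter_step(belief, z)
```

Note that only the current belief vector is kept in memory, which is exactly the online-computation advantage mentioned above.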
Recursive Bayes Filter Implementations
How is the prior distribution represented? How is the posterior distribution calculated?
• Continuous representation: Gaussian distributions -> Kalman filters (Kalman, 1960)
• Discrete representation: HMM, solved numerically
  • Grid: (dynamic) grid-based approaches (e.g. Markov localization, Burgard 1998)
  • Samples: particle filters (e.g. Monte Carlo localization, Fox 1999)
Example: State Representations for Robot Localization
• Kalman tracking
• Grid-based approaches (Markov localization)
• Particle filters (Monte Carlo localization)
Example: Localization – Grid Based
• Initialize grid (uniformly or according to prior knowledge)
• At each time step, for each grid cell:
  • Use the observation model to compute the likelihood of the current observation for that cell
  • Use the motion model and the previous cell probabilities to compute the predicted probability of the cell
  • Normalize the resulting distribution
Kalman Filters – Equations
Process dynamics (motion model): x_t = A x_{t-1} + w_{t-1}
Measurements (observation model): y_t = C x_t + v_t
Where:
• A: state transition matrix (n x n)
• C: measurement matrix (m x n)
• w: process noise (∈ R^n), w ~ N(0, Q)
• v: measurement noise (∈ R^m), v ~ N(0, R)
Kalman Filters – Update
• Predict: x̂_t^- = A x̂_{t-1}, P_t^- = A P_{t-1} A^T + Q
• Compute Innovation: e_t = y_t - C x̂_t^-
• Compute Gain: K_t = P_t^- C^T (C P_t^- C^T + R)^{-1}
• Update: x̂_t = x̂_t^- + K_t e_t, P_t = (I - K_t C) P_t^-
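The predict/innovation/gain/update cycle maps directly to code. The following is a minimal sketch of the standard Kalman recursion; the scalar random-walk example at the bottom (and its noise levels Q, R) is an assumption for illustration, not a model from the slides.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the standard (linear) Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Innovation and gain
    innovation = y - C @ x_pred
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    # Update
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Assumed toy example: scalar random walk, direct measurement (A = C = 1)
A = C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
for y in [1.2, 0.9, 1.1]:                 # assumed measurement sequence
    x, P = kalman_step(x, P, np.array([y]), A, C, Q, R)
```

After each measurement the covariance P shrinks, reflecting growing confidence in the estimate.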
Kalman Filter – Example
[Figure slides: two measurement cycles stepping through Predict, Compute Innovation, Compute Gain, Update; the plots themselves are not recoverable from the text]
Non-Linear Case
• The Kalman filter assumes that the system and measurement processes are linear
• Extended Kalman Filter (EKF): linearize the models around the current estimate
Example: Localization – EKF
• Initialize state: Gaussian distribution centered according to prior knowledge, with large variance
• At each time step:
  • Use the previous state and the motion model to predict the new state (the mean of the Gaussian changes, the variance grows)
  • Compare the observations with what you expected to see from the predicted state; compute the Kalman innovation/gain
  • Use the Kalman gain to update the prediction
Extended Kalman Filter
• Initial prediction
• Project state estimates forward (prediction step)
• Predict measurements
• Compute Kalman innovation
• Compute Kalman gain
• Update
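The EKF cycle above differs from the linear filter only in that the nonlinear models f and h are evaluated directly, while their Jacobians replace A and C in the covariance equations. The sketch below assumes the models and Jacobians are passed in as callables; the squared-range sensor in the toy example is an invented illustration, not a model from the tutorial.

```python
import numpy as np

def ekf_step(x, P, y, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: nonlinear motion model f and measurement model h,
    with Jacobians F_jac and H_jac evaluated at the linearization point."""
    # Prediction: propagate the state through the nonlinear motion model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Predict the measurement and compute the innovation
    innovation = y - h(x_pred)
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Assumed toy 1D example: static state, nonlinear sensor h(x) = x^2
f = lambda x: x
F_jac = lambda x: np.eye(1)
h = lambda x: x ** 2
H_jac = lambda x: np.array([[2.0 * x[0]]])
x, P = np.array([1.5]), np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
for y in [4.0, 4.0]:                      # measurements consistent with x = 2
    x, P = ekf_step(x, P, np.array([y]), f, F_jac, h, H_jac, Q, R)
```

Note that the linearization point matters: the Jacobian of h is taken at the predicted state, which is why the estimate converges toward x = 2 rather than the other root.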
EKF – Example: motion model for mobile robot
• Synchro-drive robot
• Model range, drift, and turn errors
Particle Filters
• Often the models are non-linear and the noise is non-Gaussian
• Use particles to represent the distribution
• "Survival of the fittest"
• Motion model -> proposal distribution
• Observation model -> weight
Particle Filters – SIS-R Algorithm
• Initialize particles randomly (uniformly or according to prior knowledge)
• At each time step:
  • Sequential importance sampling, for each particle:
    • Use the motion model to predict the new pose (sample from the transition prior)
    • Use the observation model to assign a weight to each particle (posterior/proposal)
  • Selection (re-sampling):
    • Create a new set of equally weighted particles by sampling the distribution of the weighted particles produced in the previous step
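The SIS-R cycle can be sketched for a toy 1D tracking problem. All model details here are assumptions made for illustration: the state drifts by +1 per step, the sensor measures position directly with Gaussian noise, and the noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, z, motion_noise=0.1, meas_noise=0.5):
    """One SIS-R cycle for an assumed 1D model (drift +1, direct position sensor)."""
    # 1. Predict: sample each particle from the transition prior (motion model)
    particles = particles + 1.0 + rng.normal(0, motion_noise, len(particles))
    # 2. Weight: observation likelihood under a Gaussian sensor model
    weights = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # 3. Select: resample an equally weighted set from the weighted particles
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(-5, 5, 1000)      # random (uniform) initialization
for z in [1.0, 2.0, 3.0]:                 # assumed measurement sequence
    particles = particle_filter_step(particles, z)
```

After resampling, every particle carries equal weight, so the returned set represents the posterior by its density alone, which is the "survival of the fittest" behavior noted above.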
Particle Filters – Example 1 Use motion model to predict new pose (move each particle by sampling from the transition prior)
Particle Filters – Example 1 Use measurement model to compute weights (weight: observation probability)
Particle Filters – Example 1 Resample
Particle Filters – Example 2 Initialize particles uniformly
Continuous State Approaches
• Perform very accurately if the inputs are precise (in the linear case, performance is optimal with respect to any criterion)
• Computationally efficient
• Require that the initial state is known
• Inability to recover from catastrophic failures
• Inability to track multiple hypotheses about the state (Gaussians have only one mode)
Discrete State Approaches
• Ability (to some degree) to operate even when the initial pose is unknown (start from a uniform distribution)
• Ability to deal with noisy measurements
• Ability to represent ambiguities (multi-modal distributions)
• Computation time scales heavily with the number of possible states (dimensionality of the grid, number of samples, size of the map)
• Accuracy is limited by the size of the grid cells / the number of particles and the sampling method
• The required number of particles is unknown
Thanks for your attention!