Probabilistic Techniques for Mobile Robot Navigation
Wolfram Burgard, University of Freiburg, Department of Computer Science, Autonomous Intelligent Systems
http://www.informatik.uni-freiburg.de/~burgard/
Special Thanks to Frank Dellaert, Dieter Fox, Giorgio Grisetti, Dirk Haehnel, Cyrill Stachniss, Sebastian Thrun, …
“Humanoids”: Overcoming the Uncanny Valley [Courtesy of Hiroshi Ishiguro]
Humanoid Robots [Courtesy of Sven Behnke]
DARPA Grand Challenge [Courtesy of Sebastian Thrun]
Robot Projects: Interactive Tour-Guides (Rhino, Albert, Minerva)
Robot Projects: Acting in the Three-Dimensional World (Herbert, Zora, Groundhog)
Nature of the Data: Range Data and Odometry Data
Probabilistic Techniques in Robotics • Perception = state estimation • Action = utility maximization Key Question • How to scale to higher-dimensional spaces
Axioms of Probability Theory Pr(A) denotes the probability that proposition A is true. • 0 ≤ Pr(A) ≤ 1 • Pr(True) = 1 and Pr(False) = 0 • Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
Normalization P(x|z) = η P(z|x) P(x), with η = [Σx P(z|x) P(x)]⁻¹. Algorithm: for all x, compute aux(x) = P(z|x) P(x); set η = 1 / Σx aux(x); for all x, return P(x|z) = η aux(x).
Simple Example of State Estimation • Suppose a robot obtains measurement z • What is P(open|z)?
Causal vs. Diagnostic Reasoning • P(open|z) is diagnostic. • P(z|open) is causal. • Causal knowledge is often easier to obtain (e.g., by counting frequencies!). • Bayes rule allows us to use causal knowledge: P(open|z) = P(z|open) P(open) / P(z)
Example • P(z|open) = 0.6, P(z|¬open) = 0.3 • P(open) = P(¬open) = 0.5 • z raises the probability that the door is open.
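The door example can be worked through numerically; a minimal sketch, computing the normalizer P(z) by total probability over the two door states:

```python
# Bayes rule for the door example:
# P(open | z) = P(z | open) P(open) / P(z),
# with P(z) = P(z | open) P(open) + P(z | ¬open) P(¬open).

def bayes(p_z_given_open, p_z_given_not_open, p_open):
    p_not_open = 1.0 - p_open
    p_z = p_z_given_open * p_open + p_z_given_not_open * p_not_open  # normalizer
    return p_z_given_open * p_open / p_z

p = bayes(0.6, 0.3, 0.5)
print(p)  # 2/3: z raises the probability that the door is open
```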
Combining Evidence • Suppose our robot obtains another observation z2. • How can we integrate this new information? • More generally, how can we estimate P(x | z1, ..., zn)?
Recursive Bayesian Updating Markov assumption: zn is independent of z1, ..., zn-1 if we know x. Then P(x | z1, ..., zn) = η P(zn | x) P(x | z1, ..., zn-1).
Example: Second Measurement • P(z2|open) = 0.5, P(z2|¬open) = 0.6 • P(open|z1) = 2/3 • z2 lowers the probability that the door is open.
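The recursive update can be checked with a few lines of arithmetic; a sketch using the numbers from this slide:

```python
# Recursive Bayesian update with the second measurement z2:
# P(open | z2, z1) ∝ P(z2 | open) P(open | z1).
p_open_z1 = 2.0 / 3.0                   # belief after the first measurement
num = 0.5 * p_open_z1                   # P(z2 | open)  * P(open | z1)
den = num + 0.6 * (1.0 - p_open_z1)     # + P(z2 | ¬open) * P(¬open | z1)
p_open_z1z2 = num / den
print(p_open_z1z2)  # 0.625: z2 lowers the belief from 2/3
```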
Actions • Often the world is dynamic, because • actions are carried out by the robot, • actions are carried out by other agents, • or simply the passage of time changes the world. • How can we incorporate such actions?
Typical Actions • The robot turns its wheels to move • The robot uses its manipulator to grasp an object • Plants grow over time… • Actions are never carried out with absolute certainty. • In contrast to measurements, actions generally increase the uncertainty.
Modeling Actions • To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf P(x|u,x’) • This term specifies the probability that executing u changes the state from x’ to x.
State Transitions P(x|u,x’) for u = “close door”: if the door is open, the action “close door” succeeds in 90% of all cases; if it is already closed, it stays closed.
Integrating the Outcome of Actions Continuous case: P(x|u) = ∫ P(x|u,x’) P(x’) dx’ Discrete case: P(x|u) = Σx’ P(x|u,x’) P(x’)
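The discrete prediction step can be sketched for the “close door” action. The transition probabilities follow the slide (success in 90% of cases when open; a closed door stays closed); the prior belief 5/8 vs. 3/8 is an assumed example value:

```python
# Discrete integration of an action outcome: Bel'(x) = Σ_x' P(x | u, x') Bel(x').
# Transition model for u = "close door" (from the slide):
#   P(closed | u, open) = 0.9, P(open | u, open) = 0.1,
#   P(closed | u, closed) = 1.0, P(open | u, closed) = 0.0.
bel = {"open": 5.0 / 8.0, "closed": 3.0 / 8.0}   # assumed example prior
trans = {("open", "open"): 0.1, ("closed", "open"): 0.9,
         ("open", "closed"): 0.0, ("closed", "closed"): 1.0}
bel_new = {x: sum(trans[(x, xp)] * bel[xp] for xp in bel) for x in bel}
print(bel_new)  # {'open': 0.0625, 'closed': 0.9375}
```

Note that the action only moves probability mass toward “closed”; it never reopens the door.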
Bayes Filters: Framework • Given: • Stream of observations z and action data u • Sensor model P(z|x). • Action model P(x|u,x’). • Prior probability of the system state P(x). • Wanted: • Estimate of the state X of a dynamical system. • The posterior of the state is also called the belief: Bel(xt) = P(xt | u1, z1, ..., ut, zt)
Markov Assumption The state xt is a complete summary of the past: zt is independent of earlier states and measurements given xt, and xt depends only on xt-1 and ut. Underlying assumptions: • Static world • Independent noise • Perfect model, no approximation errors
Bayes Filters (z = observation, u = action, x = state)
Bel(xt) = P(xt | u1, z1, ..., ut, zt)
= η P(zt | xt, u1, z1, ..., ut) P(xt | u1, z1, ..., ut)   (Bayes)
= η P(zt | xt) P(xt | u1, z1, ..., ut)   (Markov)
= η P(zt | xt) ∫ P(xt | ut, xt-1) Bel(xt-1) dxt-1   (total probability, Markov)
Bayes Filter Algorithm • Algorithm Bayes_filter(Bel(x), d): • η = 0 • If d is a perceptual data item z then • For all x do: Bel’(x) = P(z|x) Bel(x); η = η + Bel’(x) • For all x do: Bel’(x) = η⁻¹ Bel’(x) • Else if d is an action data item u then • For all x do: Bel’(x) = Σx’ P(x|u,x’) Bel(x’) • Return Bel’(x)
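The algorithm above can be sketched as a small discrete Bayes filter. The dictionary-based models (`sensor[z][x]` for P(z|x), `action[u][(x, x')]` for P(x|u,x')) are an assumed representation, not part of the original slides:

```python
# Minimal discrete Bayes filter: one correction ("z") or prediction ("u") step.
def bayes_filter(bel, d, kind, model):
    """bel: dict state -> probability; d: the observation z or action u;
    kind: "z" or "u"; model: P(z|x) as model[z][x], or P(x|u,x') as
    model[u][(x, xp)]."""
    if kind == "z":
        # correction: Bel'(x) = η P(z|x) Bel(x)
        bel_new = {x: model[d][x] * bel[x] for x in bel}
        eta = sum(bel_new.values())
        return {x: p / eta for x, p in bel_new.items()}
    else:
        # prediction: Bel'(x) = Σ_x' P(x|u,x') Bel(x')
        return {x: sum(model[d][(x, xp)] * bel[xp] for xp in bel)
                for x in bel}
```

For instance, starting from a uniform belief and applying the door measurement P(z|open) = 0.6, P(z|¬open) = 0.3 reproduces the posterior 2/3 from the earlier example.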
Bayes Filters are Familiar! • Kalman filters • Discrete filters • Particle filters • Hidden Markov models • Dynamic Bayesian networks • Partially Observable Markov Decision Processes (POMDPs)
Summary • Bayes rule allows us to compute probabilities that are hard to assess otherwise. • Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. • Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
Dimensions of Mobile Robot Navigation: localization, mapping, motion control, and their overlaps: SLAM (localization + mapping), active localization (localization + motion control), exploration (mapping + motion control), integrated approaches (all three)
Localization with Bayes Filters [Diagram: the motion model p(x|u,x’) and the sensor model p(z|x) applied to observations from laser data.]
What is the Right Representation? • Kalman filters • Multi-hypothesis tracking • Grid-based representations • Topological approaches • Particle filters
Monte-Carlo Localization • Set of N samples {<x1,w1>, ..., <xN,wN>} containing a state x and an importance weight w • Initialize the sample set according to prior knowledge • For each motion u do: • Sampling: generate from each sample a new pose according to the motion model • For each observation z do: • Importance sampling: weigh each sample with the likelihood • Re-sampling: draw N new samples from the sample set according to the importance weights
Particle Filters • Represent the density by random samples • Estimation of non-Gaussian, nonlinear processes • Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter • Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa, 96] • Computer vision: [Isard and Blake, 96, 98] • Dynamic Bayesian networks: [Kanazawa et al., 95]
Particle Filter Algorithm 1. Draw xt-1 from Bel(xt-1) 2. Draw xt from p(xt | xt-1, ut) 3. Importance factor for xt: wt = p(zt | xt) 4. Re-sample according to the importance weights
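The four steps above can be sketched on a toy 1-D state. The additive-Gaussian motion model and the Gaussian measurement likelihood (with the noise parameters shown) are assumptions for illustration, not the models from the lecture:

```python
# One particle-filter step on a 1-D state (toy models, assumed for the sketch).
import math
import random

def pf_step(particles, u, z, motion_noise=0.1, sensor_sigma=0.5):
    # Steps 1-2: sample new poses from the motion model p(x_t | x_{t-1}, u).
    proposed = [x + u + random.gauss(0.0, motion_noise) for x in particles]
    # Step 3: importance factor w = p(z | x), here a Gaussian likelihood.
    w = [math.exp(-0.5 * ((z - x) / sensor_sigma) ** 2) for x in proposed]
    total = sum(w)
    w = [wi / total for wi in w]
    # Step 4: re-sample N particles with probability proportional to w.
    return random.choices(proposed, weights=w, k=len(proposed))
```

Starting 200 particles at x = 0, commanding a motion u = 1 and observing z = 1 concentrates the sample set near x = 1.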
Typical Measurement Errors of Range Measurements • Beams reflected by obstacles • Beams reflected by persons / caused by crosstalk • Random measurements • Maximum-range measurements
Proximity Measurement • Measurement can be caused by … • a known obstacle. • cross-talk. • an unexpected obstacle (people, furniture, …). • missing all obstacles (total reflection, glass, …). • Noise is due to uncertainty … • in measuring distance to known obstacle. • in position of known obstacles. • in position of additional obstacles. • whether obstacle is missed.
Beam-based Proximity Model [Plots over the range 0 to zmax: measurement noise, peaked at the expected distance zexp, and unexpected obstacles, with mass below zexp.]
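The two components shown, together with the error types from the previous slide, are commonly combined into a mixture likelihood. The following sketch assumes the usual four-component form (Gaussian hit, exponential for unexpected obstacles, uniform random term, point mass at zmax); all mixing weights and noise parameters here are illustrative choices, not values from the lecture:

```python
# Beam-based mixture likelihood p(z | x, m), sketched with assumed parameters.
import math

def beam_likelihood(z, z_exp, z_max,
                    w_hit=0.7, w_short=0.1, w_rand=0.1, w_max=0.1,
                    sigma=0.2, lam=1.0):
    # Measurement noise: Gaussian centered on the expected range z_exp.
    p_hit = (math.exp(-0.5 * ((z - z_exp) / sigma) ** 2)
             / (sigma * math.sqrt(2.0 * math.pi)))
    # Unexpected obstacles: exponential, cut off beyond z_exp.
    p_short = lam * math.exp(-lam * z) if z <= z_exp else 0.0
    # Random measurements: uniform over [0, z_max].
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    # Maximum-range readings: point mass at z_max.
    p_max = 1.0 if z >= z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_rand * p_rand + w_max * p_max
```

As expected, a reading near the predicted range is far more likely than one well beyond it, while a maximum-range reading keeps nonzero likelihood through the w_max term.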