Flow Separation for Fast and Robust Stereo Odometry [ICRA 2009] Ph.D. Student, Chang-Ryeol Lee June 26, 2013
Contents • Introduction • What is Visual Odometry (VO)? • Why VO? • Terminology • Brief history of VO • Preliminary • One-point RANSAC • Proposed method • Experimental results
Introduction: what is Visual Odometry (VO)? VO is the process of incrementally estimating the pose of a vehicle by examining the changes that motion induces on the images of its onboard cameras. • Input: image sequence (or video stream) from one or more cameras attached to a moving vehicle • Output: camera trajectory (3D structure is a plus)
Introduction: why VO? • Contrary to wheel odometry, VO is not affected by wheel slip on uneven terrain or other adverse conditions. • More accurate trajectory estimates than wheel odometry (relative position error 0.1%–2%) • VO can be used as a complement to • wheel odometry • GPS • inertial measurement units (IMUs) • laser odometry • In GPS-denied environments, such as underwater and aerial settings, VO is of utmost importance
Introduction: terminology • SFM vs. VO • VO is a particular case of SFM • VO focuses on estimating the 3D motion of the camera sequentially (as a new frame arrives) and in real time. • Bundle adjustment can be used (but it is optional) to refine the local estimate of the trajectory • Sometimes SFM is used as a synonym of VO
Introduction: history of VO • 1980: First known real-time stereo VO implementation on a robot, in Moravec's PhD thesis (NASA/JPL) for Mars rovers, using a sliding camera. Moravec also invented a predecessor of the Harris detector, known as the Moravec detector. • 1980 to 2000: VO research was dominated by NASA/JPL in preparation for the 2004 Mars mission (see papers from Matthies, Olson, etc., from JPL) • 1996: The term VO was coined by Srinivasan to define motion orientation in honey bees. • 2004: VO used on a robot on another planet: Mars rovers Spirit and Opportunity
Introduction: history of VO • 2004: VO was revived in the academic environment by Nister's «Visual Odometry» paper. The term VO became popular. • 2004: Approach based on loopy belief propagation • 2004: Approach using an omnidirectional camera • 2006–2007: Focus on large-scale issues in outdoor environments • 2007: Landmark handling for improving accuracy
Introduction: problem • RANSAC is (usually) used for robust model estimation • Nearly degenerate case in RANSAC • Only a small number of matches are correct for fundamental-matrix computation. • Matches on a dominant plane result in a homography.
Introduction: problem • Causes of the nearly degenerate case • Bad lighting conditions • Ground surfaces with low texture • Motion blur • Result: different inlier sets across RANSAC runs
Preliminary: three-point VO • Procedure (see the sketch after this list) 1. Generate 3D points by triangulation in the first stereo pair. 2. Track the features into the next frame. 3. Estimate the pose of the next frame with the P3P algorithm inside RANSAC.
Preliminary: three-point VO 4. Triangulate all new feature matches. 5. Repeat from Step 2.
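The loop above maps naturally onto OpenCV. Below is a minimal sketch of Steps 2–3 for one frame, assuming grayscale images `frame_prev`/`frame_next`, points `pts3d` triangulated from the previous stereo pair with pixel locations `pts2d_prev`, and a calibrated intrinsic matrix `K` with no lens distortion; `cv2.solvePnPRansac` runs a P3P-style minimal solver inside RANSAC.

```python
import cv2
import numpy as np

def three_point_vo_step(frame_prev, frame_next, pts3d, pts2d_prev, K):
    """One iteration of the three-point VO loop (Steps 2-3).

    pts3d: Nx3 points from the previous stereo pair.
    pts2d_prev: Nx1x2 float32 pixel locations of those points.
    """
    # Step 2: track features into the next frame (KLT optical flow).
    pts2d_next, status, _ = cv2.calcOpticalFlowPyrLK(
        frame_prev, frame_next, pts2d_prev, None)
    ok = status.ravel() == 1
    obj = pts3d[ok].astype(np.float64)
    img = pts2d_next[ok].reshape(-1, 2).astype(np.float64)

    # Step 3: pose of the next frame from 3D-2D matches inside RANSAC,
    # using OpenCV's minimal P3P solver for each sample.
    found, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, flags=cv2.SOLVEPNP_P3P,
        reprojectionError=1.0, iterationsCount=200)
    if not found:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation
    return R, tvec, inliers
```

Steps 1 and 4 (triangulation of new matches) would use `cv2.triangulatePoints` on the rectified stereo pair.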
Proposed method • Key idea • Small changes in the camera translation do not influence points that are far away. ⇒ Separate the feature points and estimate the model in two steps • Contributions • More robust than 3-point VO (handles the nearly degenerate case) • Faster than 3-point VO (efficiency)
Proposed method • Procedure • Perform sparse stereo and putative matching. • Separate features based on disparity. • Recover rotation with two-point RANSAC. • Recover translation with one-point RANSAC.
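Read as a pipeline, the four steps compose as in the hypothetical skeleton below; every helper name is a placeholder that the following slides (and sketches) make concrete.

```python
def flow_separation_vo_step(stereo_prev, stereo_next, motion_prior):
    """One frame of the proposed pipeline (all helpers are stubs)."""
    # 1. Sparse stereo in each frame, then putative temporal matching
    #    guided by the predicted motion.
    matches = putative_matches(stereo_prev, stereo_next, motion_prior)
    # 2. Separate features by disparity: low disparity = far points.
    far, close = separate_by_disparity(matches, motion_prior)
    # 3. Rotation from the far points with two-point RANSAC.
    R, _ = rotation_two_point_ransac(far)
    # 4. Translation from the close points with one-point RANSAC,
    #    given the rotation from step 3.
    t, _ = translation_one_point_ransac(close, R)
    return R, t
```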
Proposed method • Sparse stereo and putative matching (see the sketch below) • Calibrated and rectified images • Sparse stereo 1. Feature extraction 2. Matching along the scan line • Putative matching 1. Prediction of the vehicle motion from odometry, the previous motion, or a stationary assumption 2. Template matching
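For the sparse-stereo step, rectification reduces stereo matching to a search along the same image row. A minimal sketch, assuming `img_left`/`img_right` are a rectified grayscale pair (the paper's actual detector and matcher may differ):

```python
import cv2

# Detect features in both images and match descriptors, then enforce
# the scan-line constraint: a valid stereo match lies on (nearly) the
# same row and has positive disparity.
orb = cv2.ORB_create(1000)
kpL, desL = orb.detectAndCompute(img_left, None)
kpR, desR = orb.detectAndCompute(img_right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
stereo_matches = []
for m in matcher.match(desL, desR):
    (uL, vL) = kpL[m.queryIdx].pt
    (uR, vR) = kpR[m.trainIdx].pt
    disparity = uL - uR
    if abs(vL - vR) <= 1.0 and disparity > 0:
        stereo_matches.append((m, disparity))
```

Putative temporal matching would then compare a template around each feature against a small window at its motion-predicted location in the next frame.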
Proposed method • Separate features based on disparity • The threshold is based on the vehicle speed. • Disparity of a point at depth z: d = f·b / z, where b is the baseline and f is the focal length • A translation of magnitude t̂ (the predicted vehicle motion) shifts a point with disparity d by up to (t̂ / b)·d pixels • Keeping this shift below the maximum allowed pixel error ε (0.1–0.5) gives the threshold θ = ε·b / t̂ • Larger translation → lower threshold → more close feature points • Smaller translation → higher threshold → more far feature points
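A sketch of the separation step using the threshold reconstructed above; `disparities`, the stereo `baseline`, the predicted translation magnitude `t_pred`, and `eps` are all assumed inputs.

```python
import numpy as np

def separate_by_disparity(disparities, baseline, t_pred, eps=0.3):
    """Split putative matches into far/close sets.

    A translation of magnitude t_pred shifts a point with disparity d
    by at most (t_pred / baseline) * d pixels, so requiring the shift
    to stay below eps pixels yields the threshold d < eps * b / t_pred.
    eps is the maximum allowed pixel error (typically 0.1-0.5).
    """
    d = np.asarray(disparities)
    theta = eps * baseline / max(t_pred, 1e-6)  # avoid divide-by-zero
    far = d < theta   # treated as points at infinity (rotation step)
    close = ~far      # used for the translation step
    return far, close
```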
Proposed method • Separate features based on disparity • Handling the case where only far or only close feature points exist • Only far feature points → the translation is zero or small → use a minimum number of the closest putative matches • Only close feature points → this case does not occur, since the camera translation is assumed to be small between frames
Proposed method • Rotation: two-point RANSAC • Far feature points are not influenced by the camera translation. • We regard the points used for rotation estimation as points at infinity. • Points at infinity have zero disparity (identical coordinates in the left and right images) → rotation estimation is based on the directions of points at infinity → monocular approach (use only the left or the right images)
Proposed method • Rotation: two-point RANSAC • Each measurement contributes 2 constraints • Cost function: reprojection error • Unknown: rotation (3 DOF) • With n points there are 2n constraints, so at least n = 2 points are required (see the sketch below)
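A minimal numpy sketch of the rotation step, treating each far match as a pair of unit bearing vectors (`bear1`, `bear2`, both Nx3) related by pure rotation. Orthogonal Procrustes (Kabsch/SVD) on the two sampled pairs stands in as the minimal solver, and a simple angular residual replaces the paper's reprojection error.

```python
import numpy as np

def kabsch(a, b):
    """Rotation R minimizing sum ||R a_i - b_i||^2 (rows are vectors)."""
    U, _, Vt = np.linalg.svd(b.T @ a)
    # Diagonal correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def rotation_two_point_ransac(bear1, bear2, iters=100, thresh=0.01):
    """Two-point RANSAC for the 3-DOF rotation between two frames."""
    rng = np.random.default_rng(0)
    best_R, best_inliers = np.eye(3), np.zeros(len(bear1), bool)
    for _ in range(iters):
        # Minimal sample: 2 bearing pairs (2 constraints each > 3 DOF).
        i, j = rng.choice(len(bear1), size=2, replace=False)
        R = kabsch(bear1[[i, j]], bear2[[i, j]])
        # Inliers: rotated bearings agree with the observed directions.
        err = np.linalg.norm(bear2 - bear1 @ R.T, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_R, best_inliers = R, inliers
    return best_R, best_inliers
```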
Proposed method • Translation: one-point RANSAC • Intuitively, the difference between the 3D points obtained from a single match in the two frames is the camera translation → stereo approach (use the stereo images)
Proposed method • Translation: one-point RANSAC • Each measurement contributes 3 constraints • Obtained by minimizing the re-projection error • Unknown: translation (3 DOF) • With n points there are 3n constraints, so at least n = 1 point is required (see the sketch below)
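A matching sketch of the translation step, assuming `X1` and `X2` are Nx3 arrays of points triangulated from the stereo pairs of the two frames (in each camera's coordinates) and `R` comes from the rotation step; a 3D residual replaces the paper's reprojection error for brevity.

```python
import numpy as np

def translation_one_point_ransac(X1, X2, R, iters=50, thresh=0.05):
    """One-point RANSAC for translation, with rotation R fixed.

    Under the model X2 = R X1 + t, a single stereo match fully
    determines the 3-DOF translation, so each hypothesis needs
    only one sampled match.
    """
    rng = np.random.default_rng(0)
    X1r = X1 @ R.T  # rotate frame-1 points into frame-2 orientation
    best_t, best_inliers = np.zeros(3), np.zeros(len(X1), bool)
    for _ in range(iters):
        i = rng.integers(len(X1))
        t = X2[i] - X1r[i]              # hypothesis from one match
        inliers = np.linalg.norm(X2 - (X1r + t), axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # Refine over all inliers (least squares reduces to the mean).
    best_t = np.mean(X2[best_inliers] - X1r[best_inliers], axis=0)
    return best_t, best_inliers
```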
Experimental results • Robustness
Experimental results • Speed and accuracy
References [1] D. Scaramuzza and F. Fraundorfer, “Visual Odometry [Tutorial],” IEEE Robotics & Automation Magazine, vol. 18, no. 4, December 2011. [2] M. Kaess, K. Ni, and F. Dellaert, “Flow Separation for Fast and Robust Stereo Odometry,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009. [3] D. Nister, O. Naroditsky, and J. Bergen, “Visual Odometry,” Computer Vision and Pattern Recognition (CVPR), 2004.