Overview: combining geometric and image-based models for mobile robot navigation, covering acquisition, rendering, and applications in mapping, localization, and tracking, with a comparison to traditional methods and the use of predictive display in navigation scenarios.
Image-Based Models with Applications in Robot Navigation
Dana Cobzas
Supervisor: Hong Zhang
3D Modeling in Computer Graphics
Pipeline: real scene → acquisition (range sensors, modelers) → geometric model + texture → rendering → new view [Pollefeys & van Gool]
• Graphics model: detailed 3D geometric model of a scene
• Goal: rendering new views
Mapping in Mobile Robotics
Pipeline: environment → sensors → map building → map → localization/tracking → robot navigation
• Navigation map: representation of the navigation space
• Goal: tracking/localizing the robot
Same objective: how to model existing scenes?
• Traditional geometry-based approaches: geometric model + surface model + light model
  - Modeling complex real scenes is slow
  - Achieving photorealism is difficult
  - Rendering cost grows with scene complexity
  + Easy to combine with traditional graphics
• Alternative approach, image-based modeling: non-geometric model built from images
  - Acquiring real scenes is difficult
  - Difficult to integrate with traditional graphics
  + Achieving photorealism is easier when starting from real photographs
  + Rendering cost is independent of scene complexity
In this work we combine the advantages of both for mobile robot localization and predictive display.
This thesis investigates the applicability of image-based modeling and rendering (IBMR) techniques in mobile robotics. Questions addressed:
• Is it possible to use an image-based model as a navigation map for mobile robotics?
• Do such models provide the desired accuracy for the specific applications, localization and tracking?
• What advantages do they offer compared to traditional geometry-based models?
Approach
• Solution: a reconstructed geometric model combined with image information
• Two models:
  • Model 1 (calibrated): panorama with depth
  • Model 2 (uncalibrated): geometric model with dynamic texture
• Applications in localization/tracking and predictive display
Model 1: Overview
• Standard panorama: no parallax, reprojection only from the original viewpoint
• Solution: add depth/disparity information
  • Using two panoramic images for stereo
  • Depth from standard planar-image stereo
  • Depth from a laser range-finder
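To make the parallax point concrete, here is a minimal sketch (not from the thesis; the focal length `f`, vertical centre `cy`, and the range-along-ray depth convention are assumptions) of lifting a cylindrical panorama with per-pixel depth into 3D points and reprojecting them into a virtual pinhole camera:

```python
import numpy as np

def panorama_to_points(depth, f, cy):
    """Back-project a cylindrical panorama with per-pixel depth into 3D points.

    depth : (H, W) range along each viewing ray
    f     : cylinder focal length in pixels (W ~ 2*pi*f for a full 360 panorama)
    cy    : vertical image centre in pixels
    Returns an (H, W, 3) array of points in the panorama's own frame.
    """
    H, W = depth.shape
    cols, rows = np.meshgrid(np.arange(W), np.arange(H))
    theta = (cols - W / 2.0) / f                    # azimuth of each column
    # Unit ray direction for a cylindrical projection, scaled by the depth.
    dirs = np.stack([np.sin(theta),
                     (rows - cy) / f,
                     np.cos(theta)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return depth[..., None] * dirs

def reproject(points, R, t, K):
    """Project the 3D points into a virtual pinhole camera with pose (R, t) and intrinsics K."""
    P = points.reshape(-1, 3) @ R.T + t             # panorama frame -> camera frame
    uvw = P @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv.reshape(points.shape[0], points.shape[1], 2)
```

With depth attached to each pixel, the panorama can be re-rendered from nearby viewpoints rather than only from its original centre, which is exactly what the standard panorama cannot do.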
Depth from stereo
• Trinocular vision system (Point Grey Research)
• Output: cylindrical image-based panoramic model + depth map
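The planar-stereo step can be illustrated with a toy SSD block matcher followed by triangulation (Z = f·B/d). This is only a sketch under assumed rectified grayscale inputs; a trinocular head adds a third view, sub-pixel interpolation, and validation, and all names below are illustrative:

```python
import numpy as np

def ssd_disparity(left, right, max_disp=64, win=5):
    """Brute-force SSD block matching between rectified grayscale images.

    Returns an integer disparity map (zero where no match was computed).
    """
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=np.float32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)   # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulate: Z = f * B / d (valid only where disparity > 0)."""
    return np.where(disp > 0, focal_px * baseline_m / disp, 0.0)
```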
Depth from laser range-finder
• Sensors: CCD camera, laser range-finder, pan unit
• Output: 180-degree panoramic mosaic with corresponding range data (spherical representation)
• Data from different sensors requires data registration
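Registering the two sensors amounts to expressing each laser return as a 3D point and projecting it into the image through the laser-to-camera rigid transform and the camera intrinsics. A sketch, assuming the calibration (R, t, K) is known and using illustrative names and angle conventions:

```python
import numpy as np

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert laser readings (range, pan angle, tilt angle) to 3D points in the laser frame."""
    x = r * np.cos(elevation) * np.sin(azimuth)
    y = -r * np.sin(elevation)
    z = r * np.cos(elevation) * np.cos(azimuth)
    return np.stack([x, y, z], axis=-1)

def project_range_into_image(points_laser, R, t, K):
    """Map laser points into image pixels given the laser->camera rigid
    transform (R, t) and the camera intrinsics K.  This registration step
    is what lets each image pixel be assigned a range value."""
    pts_cam = points_laser @ R.T + t
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, pts_cam[:, 2]      # pixel coordinates and the corresponding depths
```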
Model 1: Applications
Absolute localization:
• Input: image + depth
• Features: planar patches, vertical lines
Incremental localization:
• Input: intensity image
• Assumes: approximate pose
• Features: vertical lines
Predictive display
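As an illustration of the localization idea only (not the thesis algorithm), a planar robot pose can be refined from bearings to known vertical-line landmarks by Gauss-Newton least squares; the bearing model, the landmark map, and all names below are assumptions:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def localize_from_bearings(landmarks, bearings, pose0, iters=10):
    """Estimate a planar pose (x, y, theta) from bearings to known vertical lines.

    landmarks : (N, 2) map positions of the vertical-line landmarks
    bearings  : (N,) observed bearings in the robot frame (radians)
    pose0     : initial guess (x, y, theta), e.g. from odometry
    """
    x, y, th = pose0
    for _ in range(iters):
        dx = landmarks[:, 0] - x
        dy = landmarks[:, 1] - y
        r2 = dx ** 2 + dy ** 2
        pred = np.arctan2(dy, dx) - th                 # predicted bearings
        res = wrap(bearings - pred)                    # innovation
        # Jacobian of the predicted bearing w.r.t. (x, y, theta)
        J = np.stack([dy / r2, -dx / r2, -np.ones_like(dx)], axis=1)
        delta = np.linalg.lstsq(J, res, rcond=None)[0]
        x, y, th = x + delta[0], y + delta[1], th + delta[2]
    return np.array([x, y, th])
```

The approximate pose assumed by incremental localization plays the role of `pose0` here: the image features only need to correct a nearby initial guess.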
Model 2: Overview
Pipeline: input images → tracking → model (geometric model + dynamic texture) → rendering → applications
Geometric structure
Structure-from-motion algorithm: tracked features → camera poses + scene structure
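One standard way to realize this step is the Tomasi-Kanade affine factorization, shown below as a sketch; the thesis's uncalibrated structure-from-motion algorithm may differ, and the code assumes every feature is tracked in every frame:

```python
import numpy as np

def affine_sfm(tracks):
    """Tomasi-Kanade style affine factorization.

    tracks : (F, P, 2) image coordinates of P features tracked over F frames.
    Returns (motion, structure): motion is (2F, 3) stacked affine camera rows,
    structure is (3, P) points, both recovered up to an affine ambiguity.
    """
    F, P, _ = tracks.shape
    # Build the 2F x P measurement matrix: u-rows on top, v-rows below.
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
    W = W - W.mean(axis=1, keepdims=True)      # centre each row (removes translation)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Rank-3 factorization: W ~ motion @ structure.
    motion = U[:, :3] * np.sqrt(s[:3])
    structure = np.sqrt(s[:3])[:, None] * Vt[:3]
    return motion, structure
```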
Dynamic texture
Input images I_1 ... I_t are warped onto the re-projected geometry; the texture is represented by a mean texture plus a variability basis.
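The variability basis can be computed by PCA over the input images once they have been warped into a common texture frame of the re-projected geometry; a sketch with illustrative names, assuming the warping has already been done:

```python
import numpy as np

def texture_basis(warped_textures, k=5):
    """Compute a mean texture and a k-mode variability basis by PCA.

    warped_textures : (T, H, W) stack of input views warped to the model's
                      texture coordinates (at most T-1 useful modes exist).
    Returns (mean, basis) with basis of shape (k, H, W).
    """
    T, H, W = warped_textures.shape
    X = warped_textures.reshape(T, -1).astype(np.float64)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                              # principal texture variation modes
    return mean.reshape(H, W), basis.reshape(-1, H, W)

def render_texture(mean, basis, coeffs):
    """Synthesize a view-dependent texture as mean + sum_i coeffs[i] * basis[i]."""
    return mean + np.tensordot(coeffs, basis, axes=1)
```

At render time the coefficients are chosen per viewpoint, which is what makes the texture "dynamic" rather than a single static image pasted onto the geometry.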
3D SSD Tracking
• Goal: determine camera motion (rotation + translation) from image differences
• Assumes: sparse geometric model of the scene
• Features: planar patches
Diagram: the 3D model relates the initial, past, current, and differential motions through the corresponding past, current, and differential warps.
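A generic sketch of the SSD/Gauss-Newton update behind such tracking (the thesis formulation uses model-induced warps and analytic Jacobians; here the warp is abstracted as a user-supplied function and the Jacobian is taken numerically, so every name is illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ssd_track_step(template, image, coords, warp, p, eps=1e-3, iters=10):
    """Refine motion parameters p for one frame by Gauss-Newton on the SSD error.

    template : (N,) intensities of the reference patch
    image    : (H, W) current grayscale frame
    coords   : (N, 2) patch pixel coordinates (x, y) in the reference view
    warp     : warp(coords, p) -> (N, 2) image coordinates; in 3D SSD tracking
               this would be the projection of the planar patch under pose p
    p        : (k,) motion parameters carried over from the previous frame
    """
    def sample(q):
        # map_coordinates expects (row, col) order
        return map_coordinates(image, [q[:, 1], q[:, 0]], order=1)

    p = np.asarray(p, dtype=np.float64).copy()
    for _ in range(iters):
        r = template - sample(warp(coords, p))        # image difference (SSD residual)
        # Numerical Jacobian of the sampled intensities w.r.t. each parameter.
        J = np.zeros((coords.shape[0], p.size))
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            J[:, i] = (sample(warp(coords, p + dp)) - sample(warp(coords, p - dp))) / (2 * eps)
        p += np.linalg.lstsq(J, r, rcond=None)[0]     # Gauss-Newton update
    return p
```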
Tracking and predictive display
• Goal: track the robot's 3D pose along a trajectory
• Input: geometric model (acquired from images) and initial pose
• Features: planar patches
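For a planar patch, re-rendering the model at a predicted pose reduces to warping its texture by the plane-induced homography H = K (R - t n^T / d) K^{-1}. A sketch using OpenCV, where (R, t) is the predicted camera displacement and (n, d) the patch plane in the reference frame, all assumed given; names are illustrative:

```python
import numpy as np
import cv2

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n.X = d between the reference view and
    a view displaced by (R, t):  H = K (R - t n^T / d) K^{-1}."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def predict_patch_view(patch_texture, K, R, t, n, d, out_size):
    """Warp a patch's texture to the predicted pose for display to the operator."""
    H = plane_homography(K, R, t, n, d)
    return cv2.warpPerspective(patch_texture, H, out_size)   # out_size = (width, height)
```

Showing the operator this predicted view, rather than waiting for delayed imagery from the robot, is the point of predictive display.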
Thesis contributions
• Contrast calibrated and uncalibrated methods for capturing scene geometry and appearance from images:
  • panoramic model with depth data (calibrated)
  • geometric model with dynamic texture (uncalibrated)
• Demonstrate the use of the models as navigation maps with applications in mobile robotics:
  • absolute localization
  • incremental localization
  • model-based tracking
  • predictive display
Thesis questions
• Is it possible to use an image-based model as a navigation map for mobile robotics?
  • A combination of geometric and image-based models can be used as a navigation map.
• What advantages do they offer compared to traditional geometry-based models?
  • The image information is used to solve the data-association problem.
  • Model renderings are used to predict the robot's location for a remote user.
• Do they provide the desired accuracy for the specific applications, localization and tracking?
  • The geometric model (reconstructed from images) is used by the localization/tracking algorithms; their accuracy depends on the accuracy of the reconstructed model.
  • The model accuracy can also be improved during navigation, since different levels of accuracy are needed depending on the location (large space vs. narrow space); this is future work.
Comparison with current approaches
Mobile robotics map:
  + Image information for data association
  + Complete model that can be rendered, closer to human perception
  - Concurrent localization and mapping (SLAM, Durrant-Whyte)
  - Invariant features (light, occlusion) (SIFT, Lowe)
  - Uncertainty in feature location (localization algorithms)
Graphics model (dynamic texture: hybrid image + geometric model):
  + Easy acquisition with a non-calibrated camera (vs. ray sets, geometric models)
  + Photorealism (vs. geometric models)
  + Traditional rendering using the geometric model (vs. ray sets)
  - Automatic feature detection for tracking, larger scenes
  - Denser geometric model (relief texture)
  - Light invariance (vs. geometric models, photogrammetry)
Future work
Mobile robotics map:
• Improve the map during navigation
• Different ‘map resolutions’ depending on the robot pose
• Incorporate uncertainty in the robot pose and features
• Light- and occlusion-invariant features
• Predictive display: control the robot’s motion by ‘pointing’ or ‘dragging’ in image space
Graphics model (dynamic texture):
• Automatic feature detection for tracking
• Light-invariant model
• Compose multiple models into a scene based on intuitive geometric constraints
• Detailed geometry (range information from images)