Prof. Christopher Rasmussen cer@cis.udel.edu Lab web page: vision.cis.udel.edu November 10, 2004
Research in the DV lab • Tracking, segmentation • Model-building, mapping, and learning • Cue combination and selection • Auto-calibration of sensors • Current projects: • Road following, architectural modeling
Road Following: Background • Edge-based methods: Fit curves to lane lines or road borders • [Taylor et al., 1996; Southall & Taylor, 2001; Apostoloff & Zelinsky, 2003] • Region-based methods: Segment image based on a discriminating characteristic such as color or texture • [Crisman & Thorpe, 1991; Zhang & Nagel, 1994; Rasmussen, 2002; Apostoloff & Zelinsky, 2003] from Apostoloff & Zelinsky, 2003
Problematic Scenes for Standard Approaches Grand Challenge sample terrain Antarctic “ice highway” No good contrast or edges, but the organizing feature is the vanishing point, which indicates the road direction
Results: Curve Tracking Integrate vanishing point directions to get points along curves parallel to (but not necessarily on) the road
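The slides do not spell out how the vanishing point is found; the sketch below shows one standard texture-orientation voting scheme, not the lab's actual implementation. It assumes numpy/OpenCV, a grayscale image `gray`, and illustrative parameters (the Gabor sigma/lambda values, the 75th-percentile voter threshold, and the subsampling step are all assumptions).

    import numpy as np
    import cv2

    def dominant_orientations(gray, n_angles=8, ksize=17):
        """Per-pixel dominant texture orientation from a small Gabor filter bank."""
        thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
        responses = []
        for th in thetas:
            kernel = cv2.getGaborKernel((ksize, ksize), 3.0, th, 8.0, 0.5)
            responses.append(np.abs(cv2.filter2D(gray.astype(np.float32), -1, kernel)))
        stack = np.stack(responses)                  # (n_angles, H, W)
        return thetas[stack.argmax(axis=0)], stack.max(axis=0)

    def vanishing_point_vote(gray, step=8):
        """Strongly textured pixels vote for candidate vanishing points along their orientation."""
        H, W = gray.shape
        theta, strength = dominant_orientations(gray)
        votes = np.zeros((H, W), np.float32)
        ys, xs = np.nonzero(strength > np.percentile(strength, 75))
        for y, x in zip(ys[::step], xs[::step]):     # subsample voters for speed
            # Gabor carrier is normal to the stripes, so offset by 90 degrees
            # (convention-dependent) to march along the texture direction
            t = theta[y, x] + np.pi / 2
            dx, dy = np.cos(t), np.sin(t)
            if dy > 0:                               # make the ray point up the image
                dx, dy = -dx, -dy
            px, py = float(x), float(y)
            while 0 <= px < W and 0 <= py < H:
                votes[int(py), int(px)] += strength[y, x]
                px, py = px + dx, py + dy
        vy, vx = np.unravel_index(votes.argmax(), votes.shape)
        return vx, vy                                # most-voted pixel ~ vanishing point

Each strongly textured pixel casts a vote along its dominant orientation; the cell collecting the most votes approximates where the road's parallel structure converges, which is the cue the curve-tracking result above integrates.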
Panoramic camera v2.0a (~1.5 inches)
Correspondence-based Mosaicing • A minimum of 4 corresponding points in two images is sufficient to define the transformation (a homography) warping one into the other • Can be done manually or automatically (sketched below)
Correspondence-based Mosaicing Translation only
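A hedged sketch of the correspondence step, assuming an OpenCV build with SIFT available and two overlapping images `img1`, `img2`. With at least 4 good matches, `findHomography` recovers the warp (RANSAC handles mismatches); the fixed canvas size and paste-over blending are simplifications for illustration.

    import numpy as np
    import cv2

    def mosaic_pair(img1, img2):
        """Warp img2 into img1's frame using a homography estimated from automatic matches."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d2, d1)
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # 4 correspondences determine the homography exactly; extra matches + RANSAC add robustness
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = img1.shape[:2]
        mosaic = cv2.warpPerspective(img2, H, (2 * w, h))  # warp img2 onto a wider canvas
        mosaic[:h, :w] = img1                              # paste the reference image on top
        return mosaic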
Road Shape Estimation (3 cameras) • Road edge tracking • Estimate quadratic curvature via Kalman filter with Sobel edge measurements
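The slide does not give the filter equations; here is a minimal sketch, assuming the road edge in the image is modeled as x = a*y^2 + b*y + c and each Sobel detection supplies the column x of the boundary at some row y. The noise values and the random-walk process model are illustrative assumptions.

    import numpy as np

    class RoadCurveKF:
        """Kalman filter over quadratic road-edge coefficients (a, b, c) in x = a*y^2 + b*y + c."""

        def __init__(self):
            self.x = np.zeros(3)                  # state: [a, b, c]
            self.P = np.eye(3) * 1e3              # large initial uncertainty
            self.Q = np.diag([1e-8, 1e-5, 1e-1])  # process noise: coefficients drift slowly
            self.R = 4.0                          # measurement noise (pixels^2), assumed

        def predict(self):
            self.P = self.P + self.Q              # random-walk model: state itself unchanged

        def update(self, y, x_meas):
            """Fuse one measurement: a Sobel edge found at column x_meas on image row y."""
            H = np.array([y * y, y, 1.0])         # measurement model is linear in the state
            S = H @ self.P @ H + self.R
            K = self.P @ H / S
            self.x = self.x + K * (x_meas - H @ self.x)
            self.P = self.P - np.outer(K, H @ self.P)

Per frame one would call predict() once, then update(y, x) for each image row where a Sobel edge maximum is found near the predicted curve.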
Motion-based Mosaicing • It’s possible to make mosaics from cameras with non-overlapping fields of view provided we have sequences from them (Irani et al., 2001) • Overlapping pixels are wasted pixels • We’re working on approaches for n > 2 cameras
Motivation: DARPA Grand Challenge • Organized by DARPA (the U. S. Defense Advanced Research Projects Agency) • A robot road race through the desert from Barstow, CA to Las Vegas, NV on March 13, 2004 • Prize for the winning team: $1 million (nobody won) • Running again next October with $2 million prize
Problem: How to Use Roads as Cues? Bob’s track relative to course corridors (no road following) We’re working on integrating camera views from the vehicle with aerial photos
Merging Structure into Local Map • Integrate raw depth measurements from several successive frames using vehicle inertial estimates • Combine with camera information • We’re working on calibration techniques courtesy of A. Zelinsky
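A minimal sketch of the depth-integration step, assuming a planar laser scanner and a per-frame vehicle pose (x, y, heading) from the inertial estimates; any sensor-to-vehicle calibration offset is ignored here, which is exactly the part the calibration work would address.

    import numpy as np

    def scan_to_world(ranges, angles, pose):
        """Transform one planar laser scan into the world frame.

        ranges, angles : per-beam range (m) and bearing (rad) in the sensor frame
        pose           : (x, y, heading) of the vehicle from the inertial/GPS estimates
        """
        x, y, th = pose
        px = ranges * np.cos(angles)                  # points in the vehicle frame
        py = ranges * np.sin(angles)
        wx = x + np.cos(th) * px - np.sin(th) * py    # rotate by heading, then translate
        wy = y + np.sin(th) * px + np.cos(th) * py
        return np.stack([wx, wy], axis=1)

    def accumulate_map(scans, poses):
        """Stack several successive registered scans into one local point map."""
        return np.vstack([scan_to_world(r, a, p) for (r, a), p in zip(scans, poses)])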
Laser-Camera Registration Range image (180 x 32) 90° horiz. x 15° vert. Video frame (360 x 240) Registered laser, camera
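Once laser and camera are registered, overlaying range on video reduces to a pinhole projection. A hedged sketch, assuming a known laser-to-camera rotation R, translation t, and intrinsic matrix K (the quantities such a registration would supply):

    import numpy as np

    def project_laser_to_image(points_laser, R, t, K):
        """Project 3-D laser points into the video frame with a pinhole camera model.

        points_laser : (N, 3) points in the laser frame
        R, t         : laser-to-camera rotation (3x3) and translation (3,)
        K            : 3x3 camera intrinsic matrix
        """
        pc = points_laser @ R.T + t            # express the points in the camera frame
        in_front = pc[:, 2] > 0                # keep only points in front of the camera
        uv = (K @ pc[in_front].T).T
        uv = uv[:, :2] / uv[:, 2:3]            # perspective divide -> pixel coordinates
        return uv, in_front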
3-D Building Models from Images courtesy of F. van den Heuvel Show VRML model
Robot Platform for Mapping Project: wireless ethernet, GPS antenna, analog video capture card, PTZ camera, onboard computer. Not shown: electronic compass, tilt sensor
View Planning • Where to take the photos from? • Hard constraints: Need overlapping fields of view for stereo correspondences • Soft constraints: Balance accuracy of the estimated 3-D model and quality of appearance (texture maps) against acquisition and computation time • Based on camera field of view, height of building, placement of occluding objects like trees and other buildings
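As a toy illustration of how these constraints might be scored (not the project's actual planner), the sketch below rejects viewpoints that violate the hard field-of-view constraint and trades texture resolution against the number of photos needed; all parameters (fields of view, overlap, the weight alpha) are assumptions.

    import numpy as np

    def viewpoint_score(dist, building_height, facade_width, occluded,
                        vfov_deg=40.0, hfov_deg=50.0, overlap=0.6, alpha=0.1):
        """Toy score for a candidate camera stand-off distance in front of a facade.

        Hard constraint: the facade height must fit in the vertical field of view.
        Soft trade-off: closer views give sharper texture but need more overlapping
        photos (more acquisition and computation time), weighted by alpha.
        """
        if occluded:
            return -np.inf                               # blocked by trees or other buildings
        needed_vfov = np.degrees(2 * np.arctan2(building_height / 2, dist))
        if needed_vfov > vfov_deg:
            return -np.inf                               # facade does not fit in one frame
        footprint = 2 * dist * np.tan(np.radians(hfov_deg) / 2) * (1 - overlap)
        n_photos = np.ceil(facade_width / footprint)     # photos needed to span the facade
        return 1.0 / dist - alpha * n_photos             # resolution proxy minus acquisition cost

    # pick the best stand-off distance among candidate positions along a survey line
    best = max(range(5, 40), key=lambda d: viewpoint_score(d, 12.0, 30.0, occluded=False))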
Path Planning • How to get a robot from point A to point B? • Criteria: Distance, difficulty, uncertainty
Path Planning A GPS-referenced CAD map of campus buildings is available; aerial photos contain information about paths and vegetation as well as buildings
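One standard way to fold those criteria into a planner (a sketch, not necessarily the lab's approach) is grid search over a cost map, where each cell's difficulty could come from the aerial-photo vegetation and path labels, and uncertainty could simply inflate the difficulty.

    import heapq
    import numpy as np

    def plan_path(difficulty, start, goal, w=5.0):
        """Dijkstra on a grid; step cost = unit distance + w * terrain difficulty.

        difficulty : 2-D array in [0, 1], e.g. derived from aerial-photo labels
        start/goal : (row, col) tuples; the goal is assumed reachable
        """
        H, W = difficulty.shape
        dist = np.full((H, W), np.inf)
        prev = {}
        dist[start] = 0.0
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W:
                    nd = d + 1.0 + w * difficulty[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [goal], goal                # walk back from the goal to recover the path
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

A* with a straight-line heuristic is the usual refinement once the campus-scale grid gets large.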
Obstacle Avoidance How to detect trash cans, people, walls, bushes, trees, etc., and smoothly combine detours around them with the global path planned from the map and executed with GPS?
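A minimal sketch of one way to smoothly combine detours with the global plan: potential-field style blending of the GPS/map heading with repulsion from detected obstacles. The detector itself is assumed to exist, and the gains are illustrative.

    import numpy as np

    def blended_heading(goal_heading, obstacles, k_rep=2.0, influence=5.0):
        """Blend the global (GPS/map) heading with repulsion from nearby obstacles.

        goal_heading : desired heading (rad) from the planned global path
        obstacles    : list of (range_m, bearing_rad) detections (people, cans, bushes...)
        """
        # attractive unit vector toward the global path direction
        v = np.array([np.cos(goal_heading), np.sin(goal_heading)])
        for rng, brg in obstacles:
            if rng < influence:
                # push directly away from the obstacle, more strongly when closer
                away = -np.array([np.cos(brg), np.sin(brg)])
                v += k_rep * (1.0 / rng - 1.0 / influence) * away
        return np.arctan2(v[1], v[0])            # steering heading after the detour blend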
Segmentation of Road Images Using Different Cues: Texture, Color, Laser, and combined cues (C+T+L)
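The combination rule behind the C+T+L result is not given here; as an illustration, a naive-Bayes (log-odds) fusion of per-pixel road likelihoods from the three cues might look like the sketch below. The independence assumption and the zero threshold are assumptions, not the lab's actual method.

    import numpy as np

    def _logit(p):
        eps = 1e-6
        p = np.clip(p, eps, 1 - eps)
        return np.log(p / (1 - p))

    def combine_cues(p_color, p_texture, p_laser, prior=0.5):
        """Naive-Bayes (log-odds) fusion of per-pixel road likelihoods from three cues.

        Each argument is an array of P(road | cue) per pixel, each computed under a
        uniform prior; the cue-independence assumption is an illustrative choice.
        """
        score = _logit(p_color) + _logit(p_texture) + _logit(p_laser) + _logit(prior)
        return score > 0.0                        # True where the fused evidence says "road"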