Real-Time Vision on a Mobile Robot Platform Mohan Sridharan Joint work with Peter Stone The University of Texas at Austin smohan@ece.utexas.edu
Motivation • Computer vision is challenging. • “State-of-the-art” approaches are often not applicable to real systems. • Computational and/or memory constraints. • Focus: efficient algorithms that work in real time on mobile robots.
Overview • Complete vision system developed on a mobile robot. • Challenges to address: • Color segmentation. • Object recognition. • Line detection. • Illumination invariance. • On-board processing: computational and memory constraints.
Test Platform – Sony ERS-7 • 20 degrees of freedom. • Primary sensor: CMOS camera. • IR, touch sensors, accelerometers. • Wireless LAN. • Soccer on a 4.5 m x 3 m field – play humans by 2050!
The Aibo Vision System – I/O • Input: image pixels in the YCbCr color space. • Frame rate: 30 fps. • Resolution: 208 x 160. • Output: distances and angles to objects. • Constraints: • On-board processing: 576 MHz. • Rapidly varying camera positions.
Vision System – Phase 1: Segmentation. • Color segmentation: • Hand-label discrete colors. • Intermediate color maps. • Nearest-neighbor (NNr) weighted average – master color cube. • 128x128x128 color map – 2 MB.
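One plausible reading of the offline NNr step, sketched below: each hand-labeled sample casts distance-weighted votes into nearby cells of the master cube, and each cell keeps the winning label. The radius, weighting, and names here are illustrative assumptions, not the paper's values:

```python
import numpy as np
from collections import defaultdict

def fill_color_cube(samples, size=128, radius=4):
    """Fill a size^3 color cube from sparse hand-labeled samples (offline).

    samples: list of ((y, cb, cr), label) with coordinates in [0, size).
    Each sample casts a 1/(1 + distance) vote into every cell within
    `radius`; a cell takes the label with the highest total vote.
    """
    votes = defaultdict(lambda: defaultdict(float))
    for (y, cb, cr), label in samples:
        for dy in range(-radius, radius + 1):
            for db in range(-radius, radius + 1):
                for dr in range(-radius, radius + 1):
                    cy, ccb, ccr = y + dy, cb + db, cr + dr
                    if 0 <= cy < size and 0 <= ccb < size and 0 <= ccr < size:
                        dist = (dy * dy + db * db + dr * dr) ** 0.5
                        votes[(cy, ccb, ccr)][label] += 1.0 / (1.0 + dist)
    cube = np.zeros((size, size, size), dtype=np.uint8)  # 128^3 bytes = 2 MB
    for cell, label_votes in votes.items():
        cube[cell] = max(label_votes, key=label_votes.get)
    return cube
```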
Vision System – Phase 1: Segmentation. • Use a perceptually motivated color space – LAB. • Offline training in LAB – generate an equivalent YCbCr cube. • Reduce the problem to a table lookup. • Robust performance with shadows, highlights. • Segmentation accuracy: YCbCr – 82%, LAB – 91%.
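At run time, segmentation reduces to one lookup per pixel. A minimal sketch, assuming a precomputed 128x128x128 map indexed by the top seven bits of each YCbCr channel (the array and function names are hypothetical):

```python
import numpy as np

# Hypothetical precomputed map: one label byte per cell, 128^3 bytes = 2 MB.
color_map = np.zeros((128, 128, 128), dtype=np.uint8)

def segment(image_ycbcr):
    """Label every pixel of an (H, W, 3) uint8 YCbCr image by table lookup.

    Dropping the low bit of each channel (value >> 1) maps 0-255 onto the
    128 cells per axis, so the whole frame is one fancy-indexing operation.
    """
    y  = image_ycbcr[:, :, 0] >> 1
    cb = image_ycbcr[:, :, 1] >> 1
    cr = image_ycbcr[:, :, 2] >> 1
    return color_map[y, cb, cr]

# Example: one 208 x 160 frame, as delivered by the Aibo camera.
frame = np.random.randint(0, 256, (160, 208, 3), dtype=np.uint8)
labels = segment(frame)  # (160, 208) array of discrete color labels
```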
Some Problems… • Sensitive to illumination. • Frequent re-training needed. • Robot needs to detect and adapt to change. • Off-board color labeling – time-consuming. • Autonomous color learning possible…
Vision System – Phase 2: Blobs. • Run-Length encoding. • Starting point, length in pixels. • Region Merging. • Combine run-lengths of same color. • Maintain properties: pixels, runs. • Bounding boxes. • Abstract representation – four corners. • Maintains properties for further analysis.
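A sketch of the run-length step on one row of labels; the names and label codes are illustrative:

```python
def run_length_encode_row(labels_row):
    """Encode one row of color labels as (start_column, length, label) runs."""
    runs = []
    start = 0
    for col in range(1, len(labels_row) + 1):
        # Close the current run at a label change or at the end of the row.
        if col == len(labels_row) or labels_row[col] != labels_row[start]:
            runs.append((start, col - start, labels_row[start]))
            start = col
    return runs

# Toy row: two orange (3) runs separated by green (1).
row = [3, 3, 3, 1, 1, 3, 3]
print(run_length_encode_row(row))  # [(0, 3, 3), (3, 2, 1), (5, 2, 3)]
```

Region merging then unions vertically adjacent runs of the same color (e.g., with a union-find structure), accumulating pixel counts, run counts, and bounding-box corners along the way.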
Vision System – Phase 2: Objects. • Object Recognition. • Heuristics on size, shape and color. • Previously stored bounding box properties. • Domain knowledge. • Remove spurious blobs. • Distances and angles: known geometry.
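One illustrative filter of the size/shape/color kind described above; the blob fields and thresholds below are hypothetical, not values from the paper:

```python
def plausible_ball(blob):
    """Cheap size/shape checks to reject spurious candidate ball blobs.

    `blob` carries the bounding-box corners and pixel count maintained
    during region merging. All thresholds here are illustrative.
    """
    width  = blob["x_max"] - blob["x_min"] + 1
    height = blob["y_max"] - blob["y_min"] + 1
    if blob["pixels"] < 20:                      # too small: likely noise
        return False
    aspect = width / height
    if not 0.5 <= aspect <= 2.0:                 # ball blobs are roughly square
        return False
    density = blob["pixels"] / (width * height)  # filled fraction of the box
    return density > 0.5                         # a disc fills ~pi/4 of its box

blob = {"x_min": 10, "x_max": 29, "y_min": 40, "y_max": 57, "pixels": 280}
print(plausible_ball(blob))  # True: roughly square and dense enough
```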
Vision System – Phase 3: Lines. • Popular approaches – Hough transform, convolution kernels – are computationally expensive. • Domain knowledge. • Scan lines – green-white transitions mark candidate edge pixels.
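A sketch of the transition test along one scan column; the label codes are assumed:

```python
GREEN, WHITE = 1, 2  # assumed label codes from the segmentation step

def edge_pixels_in_column(labels_column):
    """Return row indices where a green pixel is followed by a white one.

    On a green field, such transitions mark candidate field-line pixels,
    so only a sparse set of scan columns needs examining per frame.
    """
    return [row for row in range(len(labels_column) - 1)
            if labels_column[row] == GREEN and labels_column[row + 1] == WHITE]

column = [1, 1, 2, 2, 1, 1, 2]
print(edge_pixels_in_column(column))  # [1, 5]
```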
Vision System – Phase 3: Lines. • Incremental least-squares fit for lines. • Efficient and easy to implement. • Reasonably robust to noise. • Lines provide orientation information. • Line intersections can be used as markers. • Inputs to localization. • Ambiguity removed through prior position knowledge.
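A minimal incremental fit, assuming the usual running-sums formulation so each new edge pixel costs O(1); the class and method names are illustrative:

```python
class IncrementalLineFit:
    """Fit y = m*x + b from streaming points using running sums only."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        # Only five accumulators are stored, never the points themselves.
        self.n   += 1
        self.sx  += x
        self.sy  += y
        self.sxx += x * x
        self.sxy += x * y

    def fit(self):
        denom = self.n * self.sxx - self.sx ** 2
        if denom == 0:           # fewer than two distinct x values
            return None
        m = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - m * self.sx) / self.n
        return m, b

line = IncrementalLineFit()
for x, y in [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]:  # points on y = 2x + 1
    line.add(x, y)
print(line.fit())  # (2.0, 1.0)
```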
Some Problems… • System needs to be re-calibrated after: • Illumination changes. • Natural light variations: day/night. • Re-calibration is very time-consuming. • More than an hour spent each time… • Cannot achieve the overall goal – play humans. • That is not happening anytime soon, but still…
Illumination Sensitivity – Samples. • Segmentation trained under one illumination vs. the same scene under a different illumination (sample images).
Illumination Invariance – Approach. • Three discrete illuminations – bright, intermediate, dark. • Training: • Performed offline. • Color map for each illumination. • Normalized RGB (rgb; only r and g used) sample distributions for each illumination.
Illumination Invariance – Training. • Illumination: bright – color map and rg distributions (sample images).
Illumination Invariance – Testing. • Testing: KL divergence as a distance measure. • Robust to artifacts. • Performed on-board the robot, about once a second. • Parameter estimation described in the paper. • Works for conditions not trained for… • Paper has numerical results.
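A sketch of the test-time check, assuming one normalized-rg histogram per trained illumination; the bin count, smoothing, and names are hypothetical:

```python
import numpy as np

def rg_histogram(image_rgb, bins=64):
    """Normalized-rg histogram of an (H, W, 3) image.

    r = R / (R + G + B), g = G / (R + G + B): two dimensions instead of
    three, and largely invariant to uniform brightness changes.
    """
    rgb = image_rgb.reshape(-1, 3).astype(np.float64)
    total = rgb.sum(axis=1)
    total[total == 0] = 1.0                  # guard against all-black pixels
    r, g = rgb[:, 0] / total, rgb[:, 1] / total
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    hist += 1e-6                             # smooth zero bins for the KL term
    return hist / hist.sum()

def kl_divergence(p, q):
    """D_KL(p || q) between two histograms of the same shape."""
    return float(np.sum(p * np.log(p / q)))

# Offline: one histogram per illumination. Online: pick the closest one.
trained = {"bright": rg_histogram(np.random.randint(156, 256, (160, 208, 3))),
           "dark":   rg_histogram(np.random.randint(0, 100, (160, 208, 3)))}
test = rg_histogram(np.random.randint(0, 100, (160, 208, 3)))
closest = min(trained, key=lambda name: kl_divergence(test, trained[name]))
print(closest)  # "dark" for this synthetic test image
```

Whichever trained illumination minimizes the divergence determines which color map the robot switches to.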
Some Related Work… • CMU vision system: basic implementation. • James Bruce et al., IROS 2000. • German Team vision system: scan lines. • Röfer et al., RoboCup 2003. • Mean shift: color segmentation. • Comaniciu and Meer, PAMI 2002.
Conclusions • A complete real-time vision system with on-board processing. • Implemented new and modified versions of vision algorithms. • Good performance on challenging problems: segmentation, object recognition, and illumination invariance.
Future Work… • Autonomous color learning. • AAAI-05 paper available online. • Working in more general environments, outside the lab. • Automatic detection of and adaptation to illumination changes. • Still a long way to go to play humans.
Autonomous Color Learning – Video • More videos online • www.cs.utexas.edu/~AustinVilla/
THAT’S ALL FOLKS www.cs.utexas.edu/~AustinVilla/
Question – 1: So, what is new?? • Robust color space for segmentation. • Domain-specific object recognition + line detection. • Towards illumination invariance. • Complete vision system – closed loop. • Admittedly, we cannot compare directly with other teams, but overall performance at competitions has been good…
Vision – 1: Why LAB?? • Robust color space for segmentation. • Perceptually motivated. • Tackles minor changes – shadows, highlights. • Used in robot rescue…
Vision – 2: Edge pixels + Least Squares?? • Conventional approaches are time-consuming. • Scan lines are faster: • Reduce the number of colors that need bounding boxes. • Least squares is easier to implement – and fast. • Admittedly, we have not compared against any other method…
Vision – 3: Normalized RGB ?? • YCbCr separates luminance, but did not work well in practice on the Aibo. • Normalized RGB (rgb): • Reduces the number of dimensions – less storage. • More robust to minor variations. • Admittedly, we compared only against YCbCr – LAB works, but needs more storage and computation…