Visual Tracking on an Autonomous Self-contained Humanoid Robot Mauro Rodrigues, Filipe Silva, Vítor Santos University of Aveiro CLAWAR 2008 Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines 08 – 10 September 2008, Coimbra, Portugal
Outline • Overview • Objectives • Self-Contained Platform • Vision System • Experimental Results • Conclusions
Overview: Humanoid Platform • Humanoid robot developed at the University of Aveiro • Ambition is participation in RoboCup • Platform comprises 22 DOFs • Head mounted on a pan-tilt unit (PTU) • Up to 70 cm tall with a mass of 6.5 kg
Overview: Distributed Control Architecture • Master/multi-slave configuration on a CAN bus • Central Processing Unit: • Image processing and visual tracking • External computer interaction for monitoring, debugging or tele-operation • Master • CPU/slave communication interface • Slaves • Interface with actuators and sensors
Objectives • Central Processing Unit Integration • Computational autonomy • Development environment • Vision System Development • Visual Tracking Approach • Detection and tracking of a moving target (ball)
Self-Contained Platform • CPU: standard PCI-104 • AMD Geode LX-800 @ 500 MHz • 512 MB RAM • 1 GB SSD • Video signal capture • PCMCIA FireWire board • Dual PCMCIA PC/104 module • UniBrain Fire-i camera @ 30 fps (640×480) • Development environment • Linux-based • OpenCV
Vision System • Processing pipeline: Acquisition → Pre-processing → Segmentation (H, S and V components) → Mask → Object location
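The segmentation and object-location steps above can be sketched in a few lines. This is a minimal illustration, not the deck's actual implementation: the function name, the threshold ranges, and the use of a plain centroid as the object locator are all assumptions for the example.

```python
import numpy as np

def segment_and_locate(hsv, h_range, s_range, v_range):
    """Threshold an HSV image on all three channels and return the
    binary mask plus the target centroid (row, col), or None when
    nothing passes the thresholds.  Illustrative sketch only."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
    ys, xs = np.nonzero(mask)          # pixel coordinates of the blob
    if xs.size == 0:
        return mask, None
    return mask, (ys.mean(), xs.mean())  # centroid as object location
```

With OpenCV (as on the robot), the thresholding step would typically be a single `cv2.inRange` call on an HSV-converted frame.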
Vision System • Dynamic Region of Interest (ROI) • Reduced noise impact • Faster computation [Figure: segmentation without ROI vs. with ROI]
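A dynamic ROI can be as simple as a window centred on the previous detection and clamped to the image bounds, so that segmentation only runs inside it. The function name and the fixed square half-width are assumptions for this sketch; the deck does not specify how the ROI is sized.

```python
def dynamic_roi(center, half, shape):
    """Clamp a square ROI of half-width `half` pixels around the last
    detection `center` = (row, col) to the image bounds `shape`.
    Returns slice bounds (y0, y1, x0, x1).  Illustrative sketch."""
    cy, cx = int(round(center[0])), int(round(center[1]))
    rows, cols = shape[:2]
    y0, y1 = max(0, cy - half), min(rows, cy + half)
    x0, x1 = max(0, cx - half), min(cols, cx + half)
    return y0, y1, x0, x1
```

Processing only `frame[y0:y1, x0:x1]` both excludes background noise outside the window and cuts the per-frame pixel count, which is where the reported speed-up comes from.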
Vision System: Visual Tracking Approach • Keep the target close to the image centre • Image-based algorithm • Fixed-gains proportional law: Δq = K e • Δq: joint increment vector • K: constant gain matrix • e: error vector defined by the ball's offset • Variable-gains nonlinear law: Δq = K(e) e, with gains that vary with the error
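Both laws above reduce to one update per frame: scale the image-space error by a gain to get the joint increment. The sketch below shows the fixed-gain case exactly as stated; the particular nonlinear form used for the variable gain (a smooth saturating function of the error norm) is an assumption, since the deck's slide does not preserve the original expression.

```python
import numpy as np

def joint_increment(error, k_fixed=None, k_min=0.0, k_max=1.0, alpha=0.01):
    """Proportional visual-servoing update.

    With k_fixed given: fixed-gains law   dq = K * e.
    Otherwise: variable gain growing smoothly with |e|, so small
    offsets get gentle corrections and large offsets stronger ones
    (illustrative nonlinear form, not the deck's actual law)."""
    e = np.asarray(error, dtype=float)
    if k_fixed is not None:
        return k_fixed * e
    gain = k_min + (k_max - k_min) * (1.0 - np.exp(-alpha * np.linalg.norm(e)))
    return gain * e
```

The variable-gain shape matches the reported behaviour: near the image centre (small error, the robot's frontal area) the effective gain stays low, while large offsets are corrected more aggressively.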
Experimental Results: Self-Contained Platform • Acquisition • libdc1394-based library • Acquisition @ 320×240 with down-sampling: ~24 ms • Processing • Without dynamic ROI: 15 ms • With dynamic ROI: 11 ms • Total ≈ 40 ms → 25 Hz
Experimental Results: Visual Tracking • Ball alignment in ~1 s • Steady-state error of ~7 pixels
Experimental Results: Visual Tracking • Pan tracking with fixed gains • Error increases in the robot's frontal area
Experimental Results: Visual Tracking • Pan tracking with variable gains • Frontal-area error reduced [Figure: fixed gains vs. variable gains]
Experimental Results: Visual Tracking • Tilt tracking with variable gains • Error similar to that of pan tracking • The trunk increases the error
Conclusions • The implemented architecture separates high-level vision processing from low-level actuator control • The dynamic Region of Interest provides greater noise immunity and faster computation • Low location and alignment error with a stationary target, with fast convergence • The tracking error reveals the need for more sophisticated control • Autonomous, self-contained humanoid platform • 25 Hz average processing rate, sufficient to deal with fast stimuli and other quickly changing visual input
Future Work • Validate ball detection through shape detection • Recognition of other elements, such as those found in the RoboCup competition • Explore alternative visual servoing techniques • Study the influence of the robot's motion on the visual information and on the tracking system's performance