Research Topics For more information, check http://vigir.missouri.edu
Assembly 'on-the-fly' Vision-Guided Automation without stopping the Assembly Lines
Appearance-Based Object Modelling Object Recognition and Pose Estimation using Appearance Models
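A minimal eigenspace sketch of the appearance-model idea above: training views of each object (with known pose) are projected onto a PCA basis, and a query image is recognised by finding its nearest stored coefficient vector. The basis size, the flattened-image input format, and the nearest-neighbour matching are illustrative assumptions, not the lab's implementation.

```python
# Hedged sketch: eigenspace (PCA) appearance model for recognition and pose lookup.
import numpy as np

def build_appearance_model(train_images, k=20):
    """train_images: (N, d) flattened, intensity-normalized training views."""
    mean = train_images.mean(axis=0)
    # Principal directions of the training views span the appearance sub-space.
    _, _, vt = np.linalg.svd(train_images - mean, full_matrices=False)
    basis = vt[:k]                                   # (k, d)
    coeffs = (train_images - mean) @ basis.T         # manifold of stored views
    return mean, basis, coeffs

def recognize(query, mean, basis, coeffs, labels):
    """labels[i] holds (object_id, pose) of training view i (assumed bookkeeping)."""
    q = (query.ravel() - mean) @ basis.T             # project query into the sub-space
    nearest = np.argmin(np.linalg.norm(coeffs - q, axis=1))
    return labels[nearest]
```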
3D Modelling 3D Models from Structured-Light Scanners
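As a hedged illustration of how a structured-light scanner recovers geometry, the sketch below intersects the camera ray through a pixel with the light plane of a decoded projector stripe; the calibration inputs (inverse intrinsics, plane parameters) are placeholders rather than values from the lab's scanners.

```python
# Hedged sketch: ray-plane triangulation for one decoded structured-light stripe.
import numpy as np

def triangulate(pixel, K_inv, plane_n, plane_d):
    """pixel: (u, v); K_inv: inverse camera intrinsics; light plane: n . X + d = 0."""
    ray = K_inv @ np.array([pixel[0], pixel[1], 1.0])  # viewing ray in camera frame
    t = -plane_d / (plane_n @ ray)                      # ray-plane intersection depth
    return t * ray                                      # 3D point in camera coordinates
```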
Multi-view Stereopsis from Virtual Cameras (a)-(c): quality of the 3D model as the number of virtual cameras increases; (d) and (f): real objects; (e) and (g): the corresponding 3D models, with error < 1 mm
Plant Phenotyping 3D models of real plants used for analysis of their phenotype
Real-Time Tracking Tracking and Pose Estimation in Real-Time using Appearance-based and Geometric Models
Multi-Target Identification and Geo-Location in Airborne Video Motion Detection, Tracking, and Geo-Location of Moving Targets from airborne video (right: MU UAV)
Multi-Target Identification and Geo-Location in Airborne Video
1. Extract optical flow (OF) at feature points
2. Analyze the histogram of OF
3. Identify the background (BG) flow
4. Subtract the BG flow from the entire OF field
5. Target detection
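A minimal sketch of the five steps above, assuming sparse optical-flow vectors at tracked feature points are already available (step 1); the histogram size and the residual threshold are illustrative assumptions.

```python
# Hedged sketch: background-flow subtraction for moving-target detection in airborne video.
import numpy as np

def detect_moving_targets(points, flows, bins=16, residual_thresh=2.0):
    """points: (N, 2) feature locations; flows: (N, 2) optical-flow vectors (step 1 assumed done)."""
    # 2. Analyze a histogram of flow directions.
    angles = np.arctan2(flows[:, 1], flows[:, 0])
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))

    # 3. Identify the background flow: camera-induced motion dominates the histogram.
    dominant = np.argmax(hist)
    in_bin = (angles >= edges[dominant]) & (angles < edges[dominant + 1])
    bg_flow = flows[in_bin].mean(axis=0)

    # 4. Subtract the background flow from the entire flow field.
    residual = flows - bg_flow

    # 5. Points with large residual motion are candidate moving targets.
    moving = np.linalg.norm(residual, axis=1) > residual_thresh
    return points[moving], residual[moving]
```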
Altitude Estimation and Target Geo-Location from Monocular Vision [Figure: inertial frame with axes X, Y, Z and altitude h] Figure 6: Feature points available on the ground. Figure 7: Altitude estimation using stereo camera positions.
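A hedged sketch of the stereo-from-motion idea in Figure 7: two camera positions along the flight path act as a stereo pair, so the altitude above a ground feature follows from the baseline and the observed disparity. The downward-looking geometry and the input values are simplifying assumptions.

```python
# Hedged sketch: altitude from two camera positions treated as a stereo pair.
def altitude_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo depth h = f * B / d, assuming a camera looking straight down."""
    return focal_px * baseline_m / disparity_px
```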
Human-Robot Interfaces Using Augmented and Virtual Reality Environments (“Holodeck”) New human-robot interfaces for teaching and tele-operating robots to perform tedious and hazardous tasks, e.g. assembly, rescue missions, maintenance, deep-sea/space exploration, etc.
The Holodeck View of the Holodeck at the University of Missouri-Columbia (ViGIR Lab) (Shown: four of the surrounding cameras and the “virtual mirror”)
Human Motion Capture for Action Recognition • Step 1: Robust Silhouette Extraction Using Adaptive Local PCA (figure: image space vs. eigen sub-space)
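A minimal sketch of PCA-based silhouette extraction in the spirit of Step 1: the background appearance of each local image block is modelled by an eigen sub-space, and blocks with a large reconstruction error are labelled foreground (silhouette). The block handling, number of eigenvectors, and threshold are illustrative assumptions.

```python
# Hedged sketch: local PCA background model; high reconstruction error => silhouette pixel block.
import numpy as np

def fit_block_subspace(bg_blocks, k=5):
    """bg_blocks: (T, d) flattened background samples of one local image block."""
    mean = bg_blocks.mean(axis=0)
    # Eigen sub-space of the background appearance for this block.
    _, _, vt = np.linalg.svd(bg_blocks - mean, full_matrices=False)
    return mean, vt[:k]                      # mean and top-k principal directions

def is_foreground(block, mean, basis, thresh=25.0):
    """A block poorly explained by the background sub-space belongs to the silhouette."""
    x = block.ravel().astype(float) - mean
    recon = basis.T @ (basis @ x)            # project onto, then back from, the sub-space
    return np.linalg.norm(x - recon) > thresh
```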
Human Motion Capture for Action Recognition • Step 2: 3D Human Motion Capture from 2D Images (figure: 2D image observation and the 3D human model, 27 DOF)
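A toy sketch of model-based motion capture in the spirit of Step 2: search for the 27-DOF pose vector whose projected model joints best match the observed 2D joints. The pinhole projection, the forward-kinematics callable, and the optimiser choice are assumptions for illustration; any function mapping the 27 pose parameters to 3D joint positions could be plugged in.

```python
# Hedged sketch: fit a 27-DOF pose by minimizing 2D reprojection error of model joints.
import numpy as np
from scipy.optimize import least_squares

def project(points3d, f=800.0):
    """Simple pinhole projection with an assumed focal length (pixels)."""
    return f * points3d[:, :2] / points3d[:, 2:3]

def fit_pose(observed_2d, forward_kinematics, init_pose):
    """observed_2d: (J, 2) joint detections; forward_kinematics: pose (27,) -> joints (J, 3)."""
    def residual(pose):
        return (project(forward_kinematics(pose)) - observed_2d).ravel()
    return least_squares(residual, init_pose).x   # best-fitting 27-DOF pose vector
```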
Compact 3D Representation using Octrees and Motion Vectors Human motion is captured, analysed and partitioned into cubes, or nodes of an Octree.
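A minimal octree sketch of the partitioning described above: captured 3D points are inserted into cubes (nodes) that subdivide until a minimum size, with a motion vector stored at each leaf. The resolution and the stored payload are illustrative assumptions.

```python
# Hedged sketch: octree that partitions captured motion data into cubes with per-leaf motion vectors.
class OctreeNode:
    def __init__(self, center, half_size):
        self.center, self.half_size = center, half_size
        self.children = None          # becomes a list of 8 children once subdivided
        self.motion = None            # motion vector stored at a leaf

    def insert(self, point, motion, min_size=0.05):
        if self.half_size <= min_size:        # reached leaf resolution: store the motion vector
            self.motion = motion
            return
        if self.children is None:
            self._subdivide()
        self.children[self._octant(point)].insert(point, motion, min_size)

    def _octant(self, p):
        # Bit-encode which side of the center the point lies on along x, y, z.
        cx, cy, cz = self.center
        return (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)

    def _subdivide(self):
        h = self.half_size / 2.0
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + (1 if i & 1 else -1) * h,
                        cy + (1 if i & 2 else -1) * h,
                        cz + (1 if i & 4 else -1) * h), h)
            for i in range(8)
        ]
```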
Mobile Robot Navigation Homography-based Ground Plane Detection; Fast Path Planning using GPUs for the calculation of Harmonic Fields
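A CPU sketch of harmonic-field path planning (the slide's version computes the field on a GPU): Laplace's equation is relaxed on an occupancy grid with the goal clamped low and obstacles clamped high, and the robot follows the steepest descent of the resulting field. The grid handling, iteration count, and boundary treatment are simplifying assumptions.

```python
# Hedged sketch: harmonic potential field by Jacobi relaxation, then steepest-descent path following.
import numpy as np

def harmonic_field(occupancy, goal, iters=5000):
    """occupancy: 2D bool array (True = obstacle); goal: (row, col)."""
    u = np.ones_like(occupancy, dtype=float)
    u[goal] = 0.0
    for _ in range(iters):                        # Jacobi relaxation of Laplace's equation
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(occupancy, 1.0, avg)         # re-impose obstacle boundary values
        u[goal] = 0.0                             # re-impose goal boundary value
    return u

def descend(u, start, steps=500):
    """Follow the steepest descent of the field from start toward the goal."""
    path, pos = [start], start
    for _ in range(steps):
        r, c = pos
        window = u[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        dr, dc = np.unravel_index(np.argmin(window), window.shape)
        pos = (max(r - 1, 0) + dr, max(c - 1, 0) + dc)
        if pos == path[-1]:                       # local minimum reached (ideally the goal)
            break
        path.append(pos)
    return path
```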
Virtual Machines Specialized in Image Processing Cellular Neural Network Virtual Machine Using Graphics Processors (GPUs) for Applications in Image Processing
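A minimal CPU sketch of one Cellular Neural Network update step; the GPU virtual machine on the slide maps the same per-cell dynamics to graphics hardware. The Euler step and the template-based convolution follow the standard CNN state equation, but the specific templates A, B and bias z depend on the image-processing task and are not given here.

```python
# Hedged sketch: one Euler step of the standard Cellular Neural Network state equation.
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """x: cell states; u: input image; A, B: feedback/control templates; z: bias."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))            # piecewise-linear output function
    dx = -x + convolve2d(y, A, mode='same') \
             + convolve2d(u, B, mode='same') + z          # neighbourhood coupling via templates
    return x + dt * dx                                    # forward-Euler state update
```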