Proximity Computations between Noisy Point Clouds using Robust Classification • Jia Pan¹, Sachin Chitta², Dinesh Manocha¹ • ¹UNC Chapel Hill, ²Willow Garage • http://gamma.cs.unc.edu/POINTC/
Main Result • Proximity computation and collision detection algorithm between noisy point cloud data • Computes collision probability instead of simple yes-no result • Important for safety and feasibility of robotics applications
Proximity and Collision Computations • Geometric reasoning with noisy point cloud data vs. mesh based representations • Integral part of motion planning and grasping algorithms • Contact computations for dynamic simulation
Motion Planning: Assumptions • Motion planning has a long history (30+ years) • Typical assumptions • Exact environment (mesh world) • Exact control • Exact localization • No joint or torque limits • No quality control • Output: • Collision-free path
Environment Uncertainty • Uncertainty can be large due to sensor error, poor sampling, physical representations, etc. • Need to model uncertainty in order to improve the safety and robustness of motion planning • Relatively little prior work • Previous methods only consider 2D cases with specific assumptions on uncertainty
Sensors • Robot uses sensors to compute a representation of the physical world • But sensors are not perfect or may not be very accurate….
Robot Sensors: Data Collection Cameras: may have low resolution
Robot Sensors: Data Collection • Laser Scanners: may have limited range
Recent Trend: Depth Cameras • PrimeSense • CamCube • SwissRanger 4000 • Kinect • PR2's sensors
Kinect http://graphics.stanford.edu/~mdfisher/Kinect.html
Kinect Reconstruction http://www.cs.washington.edu/ai/Mobile_Robotics/projects/rgbd-3d-mapping/ http://groups.csail.mit.edu/rrg/index.php?n=Main.VisualOdometryForGPS-DeniedFlight
Uncertainty From RGB-D Sensors • Sensors may have low resolution → low-resolution point clouds • Kinect has relatively high resolution, but it may still not be enough for far-away objects • Sensors may be influenced by noise, especially in outdoor environments → noise in point clouds • Sensors may have limited ranges (near range and far range) → unknown areas
Handling Noisy Point Clouds • Planning, navigation and grasping • Scene reasoning • Noisy data • Real-time processing
Related Work • Collision Detection for meshes • Fast and robust • Not designed for (noisy) point-clouds • Motion planning with environment uncertainty • 2D polygons • Vehicle planning
Errors in Point Clouds Discretization (sampling) error
Errors in Point Clouds Position error
Point Cloud Collision Detection • Underlying objects are in-collision → are their noisy point-cloud samples reported as in-collision?
Point Cloud Collision Detection • Underlying objects are collision-free → are their noisy point-cloud samples reported as collision-free?
Handling Point Cloud Collision: Two Methods • Point cloud → mesh reconstruction → mesh collision • Point cloud → direct point cloud collision
Mesh Reconstruction => Collision • Point cloud → reconstruction → mesh → collision • Reconstruction is more difficult than collision detection • Solve an easier problem by conquering a more difficult one?
Mesh Reconstruction => Collision • Reconstruction is not robust; it is sensitive to noise and high-order features • Errors in the reconstructed mesh can be amplified by subsequent collision checking • Reconstruction is slow (a few seconds) • The final result is a YES/NO answer, which is sensitive to noise
Use Ideas from Classification • Two ways to classify two sets (machine learning) • Generative model • First estimate the joint distribution (more difficult!) • Then compute the classifier • Discriminative function • Directly compute the classifier • If we only need to classify, a discriminative function is usually the better choice
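To make the contrast concrete, here is a minimal sketch of the discriminative route: fit a separating surface directly from labeled samples of the two point clouds, without estimating their joint distribution first. This is an illustration, not the paper's implementation; the scikit-learn dependency, the linear kernel, and the +1/-1 labeling are assumptions.

```python
# Minimal sketch of a discriminative separator between two point clouds.
import numpy as np
from sklearn.svm import SVC  # assumed dependency, for illustration only

def fit_separator(cloud_a, cloud_b):
    """Fit a soft-margin separating surface between two point clouds."""
    X = np.vstack([cloud_a, cloud_b])
    y = np.hstack([np.ones(len(cloud_a)), -np.ones(len(cloud_b))])
    clf = SVC(kernel="linear", C=10.0)   # linear surface; C is an arbitrary choice
    clf.fit(X, y)
    return clf

# Usage: clf.decision_function(points) gives each point's signed margin,
# i.e. which side of the learned surface it falls on.
```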
Our Solution • Return to the basic definition of collision-free • Two objects are collision-free if they are separable by a continuous surface, and in-collision when no such surface exists
Classification-based Collision Detection • Find a separating surface that separates the two point clouds as well as possible
Collision Detection based on Robust Classification • We compute the optimal separating surface (i.e., the one that minimizes the separation error) using an SVM-like algorithm • Use supervised machine learning methods for geometric classification • Different from standard SVM: each training point carries noise, which corresponds to robust classification in machine learning
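The paper's exact robust solver is not reproduced here; as a rough sketch, one standard chance-constrained surrogate for isotropic Gaussian position noise with per-point standard deviation sigma_i tightens each point's margin by kappa * sigma_i * ||w|| and minimizes the resulting hinge loss by subgradient descent. The loss form, kappa, and the optimizer are assumptions.

```python
import numpy as np

def robust_linear_separator(X, y, sigma, kappa=1.0, C=1.0, lr=1e-3, iters=2000):
    """Illustrative robust linear separator: hinge loss with a per-point
    margin tightened by kappa * sigma[i] * ||w||, a common surrogate for
    isotropic Gaussian noise of std sigma[i]. Not the paper's solver.
    X: (n, d) points, y: (n,) labels in {+1, -1}, sigma: (n,) noise stds."""
    n, d = X.shape
    w = np.random.randn(d) * 1e-2
    b = 0.0
    for _ in range(iters):
        wn = np.linalg.norm(w) + 1e-12
        margins = y * (X @ w + b) - kappa * sigma * wn
        viol = margins < 1.0                       # points violating the robust margin
        grad_w = w + C * (kappa * sigma[viol].sum() * w / wn
                          - (y[viol, None] * X[viol]).sum(axis=0))
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```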
Robust Classification • Standard SVM vs. robust classification (aware of per-point noise)
Per-point Collision Probability • Collision probability: the probability that a point lies on the wrong side of the separating surface • Robust classification yields a collision probability for each individual point sample
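For a linear separating surface and isotropic Gaussian noise, this probability has a simple closed form: it is the Gaussian tail beyond the point's signed distance to the surface. The helper below is illustrative; the noise model and the linear surface are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.special import erf

def per_point_collision_probability(X, y, sigma, w, b):
    """Probability that each noisy point ends up on the wrong side of the
    plane w.x + b = 0, assuming isotropic Gaussian noise with std sigma[i].
    d_i = y_i * (w.x_i + b) / ||w|| is the signed distance toward the
    'correct' side; the point crosses over when the noise exceeds d_i."""
    d = y * (X @ w + b) / (np.linalg.norm(w) + 1e-12)
    z = d / np.maximum(np.asarray(sigma), 1e-12)
    p_correct = 0.5 * (1.0 + erf(z / np.sqrt(2.0)))   # Gaussian CDF at z
    return 1.0 - p_correct
```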
Probabilistic Collision between Two Objects • For each object • Cluster the points and keep only one point per cluster: compute collision probabilities for (approximately) independent points • Combine them into an overall object collision probability
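The aggregation step might look like the following sketch. The voxel-based clustering and the max-per-cell rule are stand-ins for the paper's clustering scheme, and the product formula assumes the kept points are independent.

```python
import numpy as np

def object_collision_probability(points, p_point, voxel=0.05):
    """Approximate overall collision probability for one object.
    Clustering rule (an assumption): keep the highest-probability point in
    each voxel cell so the kept points are roughly independent, then
    combine with P = 1 - prod_k (1 - p_k)."""
    cells = np.floor(np.asarray(points) / voxel).astype(int)
    best = {}
    for cell, p in zip(map(tuple, cells), p_point):
        best[cell] = max(best.get(cell, 0.0), p)
    kept = np.array(list(best.values()))
    return 1.0 - np.prod(1.0 - kept)
```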
Three Collision Cases • Deep collision: collision probability near 1 (easy) • In-contact: collision probability near 0.5 (difficult: small noise can flip the yes/no answer) • Large distance: collision probability near 0 (easy)
Results: Small Noise • Plots of collision probability for deep collision, in-contact (difficult), and large distance configurations • Very few configurations fall in the difficult region
Results: Large Noise • Plots of collision probability for deep collision, in-contact (difficult), and large distance configurations • More configurations fall in the difficult region!
PR2 Robot Sensor Results • Plots of collision probability for deep collision, in-contact (difficult), and large distance configurations • Configurations at the same distance to an obstacle can have widely spread collision probabilities
Range of Collision Probability • The wide range of collision probabilities gives a more complete description of the collision state than a distance or a yes/no answer • Important for grasping or planning in constrained environments
Kinect Data Office data from Peter Henry, Dieter Fox @ RSE-lab UW
BVH Acceleration • Bounding volume hierarchies (BVHs) are widely used to accelerate mesh collision detection • Basic idea: decompose objects into many cells and cull collision tests between cells that are far apart
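The culling idea can be sketched with a simple bounding-sphere hierarchy over each point cloud. The sphere bounding volume, the median split, and the leaf size are illustrative choices, not necessarily the authors' ones.

```python
import numpy as np

class SphereNode:
    """Bounding-sphere hierarchy over a point cloud (illustrative)."""
    def __init__(self, points, leaf_size=64):
        self.center = points.mean(axis=0)
        self.radius = np.linalg.norm(points - self.center, axis=1).max()
        self.points = points if len(points) <= leaf_size else None
        self.children = []
        if self.points is None:
            axis = points.var(axis=0).argmax()            # split along widest axis
            order = points[:, axis].argsort()
            half = len(points) // 2
            self.children = [SphereNode(points[order[:half]], leaf_size),
                             SphereNode(points[order[half:]], leaf_size)]

def candidate_leaf_pairs(a, b, out):
    """Collect leaf pairs whose spheres overlap; cull everything else.
    In the probabilistic setting the radii could be inflated by a noise
    bound so that culled pairs have near-zero collision probability."""
    if np.linalg.norm(a.center - b.center) > a.radius + b.radius:
        return                                            # culled: spheres disjoint
    if a.points is not None and b.points is not None:
        out.append((a.points, b.points))                  # run the robust classifier here
    elif a.points is None:
        for child in a.children:
            candidate_leaf_pairs(child, b, out)
    else:
        for child in b.children:
            candidate_leaf_pairs(a, child, out)
```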
Application to Motion Planning • Use overall collision probability for the objects as a cost in motion planning algorithms to compute the trajectory with minimum collision probability
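One way such a cost could plug into a sampling-based or trajectory-optimization planner is sketched below; the independence assumption across configurations and the `collision_probability` callback name are ours, not the paper's.

```python
def path_collision_cost(configs, collision_probability):
    """Cost of a candidate path: probability that at least one configuration
    along it is in collision, assuming independence between configurations.
    `collision_probability(q)` stands for the point-cloud proximity query
    evaluated at configuration q (a hypothetical callback)."""
    p_free = 1.0
    for q in configs:
        p_free *= 1.0 - collision_probability(q)
    return 1.0 - p_free
```

A planner can then rank candidate trajectories by this cost and return the one with minimum overall collision probability rather than relying on a binary validity check.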
Other Applications • The per-point collision probability is even more useful than a single overall probability • Provides finer control for handling environment uncertainty • Can use workspace information to guide the planning procedure in order to avoid collision
Conclusions • A robust proximity computation and collision detection algorithm for noisy point cloud data • Problem reduced to robust classification • Initial results on point-cloud data from PR2 sensors
Future Work • Currently we use point clouds directly, which can only model space as occupied or free • Due to sensor range limits, part of the space is unknown • We will apply our algorithm to data structures that can encode 'unknown' space, such as the octomap in ROS • Also more useful for sensors with dense resolution, like the Kinect
Future Work • The current implementation handles only static models • Extend to dynamic environments • New objects added or deleted • Handle deformable objects and update sensor uncertainty • SVM has incremental variants that can handle dynamic data
Acknowledgments • National Science Foundation • Army Research Office • Willow Garage