Fast Object Segmentation Pipeline for Point Clouds Using Robot Operating System
Anjani Josyula, Bhaskar Anand, P Rajalakshmi
Introduction
• LiDAR point cloud: a 3-D representation of a scene created by emitting laser pulses and receiving their reflections
• Useful for obstacle detection and avoidance in Intelligent Transportation Systems
• Challenge: point cloud data is large, so the processing time of object segmentation algorithms becomes long
Fig. 1: Commercially available Ouster and Velodyne LiDARs
Motivation
• Current approach to reduce run time: decrease the point cloud size by downsampling (discards a portion of the data)
• Disadvantage: not suitable for safety-critical applications, as a crucial detail of an object, or an entire small object, may be lost
• Object segmentation has two stages: 1. Ground removal 2. Clustering
• Run time can be reduced considerably by making the steps and sub-steps of segmentation run in parallel
Previous Work
Techniques to reduce segmentation run time fall into two groups:
Via implementation (reduce run time through how the algorithm is implemented)
• Use the CUDA framework and NVIDIA GPUs to reduce run time
• Brute-force parallelization of clustering, registration, etc. using many threads
• The proposed pipeline is also of this type, but is independent of GPU use
Via algorithm (reduce run time through the algorithm itself)
• Scan-line based methods
• Graph-based methods, e.g. KNN
• Methods using spatial and echo characteristics
Down Sampling & Slicing Nodelets Down sampling was done using a voxel grid filter and leaf size was varied between 0.21 & 0.03 Slicing is the division of the point cloud into parts based on distance from center (lidar position) The number of slices was determined by the height of the sensor (h), its vertical field of view (α0), vertical resolution (dα) and range(λN) The maximum angle αN = arctan(λN/h) The number of slices N = [(αN – α0)/dα] where [] denotes the floor function [1] The inner bounding radii for each slice is given by λi = h*tan(α0 + i*dα) where 0 ≤ i ≤ N [1] Fig. 2: The red circles show few innermost slices
Inter Quartile Range Nodelet
• The Inter Quartile Range (IQR) step reduces the number of points that must be tested to see if they belong to the ground
• In the IQR method, the heights of all the points in each slice are arranged in ascending order
• The median of the lower half is q25, while the median of the upper half is q75
• iqr = q75 − q25, Qmin = q25 − 0.5·iqr, Qmax = q75 + 0.5·iqr
• Any point whose height lies between Qmin and Qmax is considered to be an inlier
• RANSAC is applied to the inliers, while the outliers go directly to the clustering stage
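Below is a minimal sketch of the per-slice IQR test, assuming NumPy; np.percentile stands in for the lower/upper-half medians described above, and the name iqr_split and the factor k = 0.5 are illustrative assumptions rather than the nodelet's actual interface.

```python
import numpy as np

def iqr_split(slice_points, k=0.5):
    """Split one slice's points into candidate-ground inliers and outliers
    using the inter-quartile range of point heights (z)."""
    z = slice_points[:, 2]
    q25, q75 = np.percentile(z, [25, 75])   # approximates the lower/upper-half medians
    iqr = q75 - q25
    q_min, q_max = q25 - k * iqr, q75 + k * iqr
    mask = (z >= q_min) & (z <= q_max)
    inliers = slice_points[mask]    # sent to the RANSAC ground-fitting stage
    outliers = slice_points[~mask]  # passed directly to clustering
    return inliers, outliers
```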
RANSAC Nodelet
• This nodelet fits a plane to the IQR inliers of each slice
• RANSAC attempts to fit a model, in this case a plane, to these points
• Points are classified as inliers provided they lie within some error threshold of this model
• The plane equation used is ax + by + cz = d
• The points RANSAC classifies as inliers are removed, as they are ground points, and the outliers are passed to the next stage, the clustering nodelet
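A minimal RANSAC plane-fit sketch in NumPy follows. In the actual ROS/PCL nodelet this step would typically be handled by a library routine; the parameters shown here (n_iter, dist_thresh) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def ransac_plane(points, n_iter=100, dist_thresh=0.2, rng=np.random.default_rng(0)):
    """Fit a plane ax + by + cz = d to the IQR inliers with RANSAC and return a
    boolean mask of the points lying within dist_thresh of the best plane."""
    best_mask, best_count = None, -1
    for _ in range(n_iter):
        # Sample 3 points and derive the plane normal from them.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = normal @ sample[0]
        dist = np.abs(points @ normal - d)
        mask = dist < dist_thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    return best_mask  # True = ground-plane inlier (removed), False = kept for clustering
```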
Clustering & Super Clustering Nodelets The Euclidean clustering algorithm is used for clustering non-ground points For each point in a slice, points are searched for throughout a sphere of radius dthreshcentered at that point We get a number of clusters within each slice The clusters of all slices are then recombined based on the distances between their centroids If this distance is less than a certain threshold, they are recombined to form a supercluster
Results & Analysis This segmentation pipeline was used on the KITTI dataset collected using a 64 channel Velodyne lidar [2] It was also run on data collected at the Indian Institute of Technology, Hyderabad using a 16 channel Ouster lidar The KITTI dataset point clouds had over 100,000 points each while the Ouster dataset point clouds had over 65,000 points each The system used had an Intel Core i5 8th Gen processor with 2.8 GHz speed and 8GBRAM. The percentage reduction in the run time of the algorithm when run using the pipeline and sequentially was studied This percentage reduction was studied with changing down sampling leaf size (Table 1 and 2) As the leaf size reduced, i.e as the point cloud grew larger, the percentage reduction increased, meaning the pipeline is more useful for larger point clouds
Results and Analysis Table 1: Run time reduction on KITTI dataset
Results and Analysis Table 2: Run time reduction on data captured at IITH using Ouster LiDAR (OS-1)
Conclusion & Future Work This segmentation pipeline can be used to reduce segmentation run time for large point clouds. It can also be used in scenarios where the point cloud has highly uneven spatial density and down sampling is dangerous due to potential data loss It is very useful for data from LiDAR sensors with greater number of channels. Future work involves rectification of over and under segmentation in the pipeline
References
[1] Asvadi, Alireza, et al. "3D Lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes." Robotics and Autonomous Systems 83 (2016): 299-311.
[2] Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? The KITTI vision benchmark suite." 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012.
Wireless Networks Lab, Indian Institute of Technology, Hyderabad
• The lab is headed by Dr. P. Rajalakshmi, Department of Electrical Engineering, IIT Hyderabad
• The projects in progress are:
• e-Health monitoring
• Mobile Sensor Network Technologies
• Compressive Sensing for Wireless Sensor Networks
• IoT for smarter Healthcare, funded by the Department of Information and Technology
• Security for IoT
• Agriculture Monitoring using Deep Learning
• Smart Home (Power Monitoring, Control of appliances through EEG Signals)
Acknowledgement This work was supported by the project M2Smart: Smart Cities for Emerging Countries based on Sensing, Network and Big Data Analysis of Multimodal Regional Transport System, JST/JICA SATREPS, Japan.