A non-technical overview of object detection on 3D lidar points: the basics of lidar data, the limitations of cameras and the need for lidar, the data sets used, the training data transformation, the Python implementation, and how accuracy was measured. The talk covers the fully convolutional network structure, accuracy metrics, and techniques such as DBSCAN clustering, along with Team Korea's result in the Didi-Udacity Challenge and afterthoughts on bird's eye view projections and the need for faster processing.
(Non-Technical) Overview of Deep Learning Object Detection on 3D Lidar Points • Han Bin Lee, Seoul Robotics
Prelude • How it all got started • Udacity • 2017 Didi-Udacity Challenge
Object Detection Overview • Limitations of the camera • Need for lidar
Basic Structure of Lidar Data • 4 by n matrix • x, y, z, reflectance (R) + timestamp
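As one concrete illustration of the "4 by n" structure, the sketch below loads a KITTI-style Velodyne scan (stored as a flat float32 binary) into an (n, 4) NumPy array of x, y, z, and reflectance. The helper name and file path are hypothetical, not from the talk.

```python
import numpy as np

def load_velodyne_scan(path):
    """Load a KITTI-style Velodyne scan stored as a flat float32 binary.

    Each point carries 4 values: x, y, z (metres, sensor frame) and
    reflectance R. Timestamps, when available, are kept separately.
    """
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)  # shape (n, 4): columns x, y, z, R

# Hypothetical usage:
# points = load_velodyne_scan("0000000000.bin")
# x, y, z, reflectance = points.T
```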
Big Data Used • KITTI data set • Udacity data set • All labeled and annotated
Training Data Transformation • 2D panoramic transformation (see the sketch below) • Any point inside an annotated box was labeled 'car'
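A minimal sketch of one way to do the 2D panoramic transformation: each point's azimuth and elevation are binned into a fixed-size image whose pixels store range. The image size and vertical field of view below are assumptions (they depend on the sensor), not necessarily the settings the team used.

```python
import numpy as np

def panoramic_projection(points, H=64, W=512, v_fov=(-24.9, 2.0)):
    """Project an (n, 4) lidar scan onto a 2D panoramic range image.

    Rows correspond to elevation, columns to azimuth; each pixel stores
    the range of the point that landed there. The vertical FOV here
    matches a 64-beam sensor and is only an assumption.
    """
    x, y, z, r = points.T
    dist = np.sqrt(x**2 + y**2 + z**2) + 1e-6

    yaw = np.arctan2(y, x)       # azimuth angle, in [-pi, pi]
    pitch = np.arcsin(z / dist)  # elevation angle

    u = ((yaw + np.pi) / (2 * np.pi) * W).astype(np.int32)
    v_min, v_max = np.radians(v_fov)
    v = ((v_max - pitch) / (v_max - v_min) * H).astype(np.int32)

    u = np.clip(u, 0, W - 1)
    v = np.clip(v, 0, H - 1)

    image = np.zeros((H, W), dtype=np.float32)
    image[v, u] = dist  # later points overwrite earlier ones in the same cell
    return image
```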
Fully Convolutional Net • Fully Convolutional Network (FCN) applied to the panoramic image (a sketch follows below)
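For readers unfamiliar with the idea, here is a minimal fully convolutional net in PyTorch: every layer is a convolution, so a panoramic range image maps to a same-sized grid of per-pixel car/background logits. The depth and channel counts are illustrative only, not the team's actual architecture.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """A minimal fully convolutional net: every layer is convolutional,
    so an input image maps to a same-sized per-pixel prediction.
    Channel counts and depth are illustrative assumptions."""

    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (B, num_classes, H, W) logits

# Example: a batch of one 64x512 panoramic range image
logits = TinyFCN()(torch.zeros(1, 1, 64, 512))
```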
Measurement of Accuracy • Intersection over Union (IoU) value (sketch below)
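A small sketch of the intersection-over-union value, computed here on boolean masks (per-pixel or per-point predictions against ground truth); the same metric can equally be computed on bounding boxes.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union for two boolean NumPy masks, e.g.
    per-pixel 'car' predictions vs. ground truth annotations."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 0.0
```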
Other Techniques Used • DBSCAN clustering (sketch below)
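One plausible use of DBSCAN in such a pipeline, sketched below, is to group the points the network labeled 'car' into separate object instances; the eps and min_samples values are guesses, not the settings used in the challenge.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_car_points(points, car_mask, eps=0.7, min_samples=10):
    """Group predicted 'car' points into object instances with DBSCAN.

    points:   (n, 4) lidar scan
    car_mask: boolean array of length n marking points classified as 'car'
    eps, min_samples: assumed values, not the challenge settings
    """
    car_xyz = points[car_mask, :3]
    if len(car_xyz) == 0:
        return np.array([]), car_xyz
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(car_xyz)
    return labels, car_xyz  # label -1 marks noise points
```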
Team Korea’s Result • 10th out of 2000
Afterthoughts • Data, data, data, data • Limits of the panoramic view: perhaps a bird's eye view was better (sketch below) • Data, data, data, data • Need to be fast: ~20 Hz
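Since the afterthought suggests a bird's eye view might have worked better than the panoramic projection, here is a minimal sketch of one way to rasterize a scan into a top-down height map; the grid extent and 0.2 m resolution are illustrative assumptions.

```python
import numpy as np

def birds_eye_view(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0), res=0.2):
    """Rasterize an (n, 4) lidar scan into a top-down height map.
    Grid extent and 0.2 m resolution are illustrative choices."""
    x, y, z, _ = points.T
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]

    W = int((x_range[1] - x_range[0]) / res)
    H = int((y_range[1] - y_range[0]) / res)
    col = ((x - x_range[0]) / res).astype(np.int32)
    row = ((y - y_range[0]) / res).astype(np.int32)

    bev = np.full((H, W), -np.inf, dtype=np.float32)
    np.maximum.at(bev, (row, col), z)  # keep the highest point per cell
    bev[bev == -np.inf] = 0.0
    return bev
```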