Automatic Car Detection Using Statistic-based Boosting Cascade Weilong Yang, Wei Song, Zhigang Qiao, Michael Fang
System Overview • Haar-like feature generation • Integral image • AdaBoost (feature selection) • Limitations • Statistic-based boosting (StatBoost) • Cascade structure • Performance evaluation
Features: • Training image set: • 500 negative samples (without cars) • 500 positive samples (with cars) • Training image size: 40×100 pixels, resized with imresize(X, 0.35) (resize ratio 0.35) to 14×35 • Feature templates: • Rectangle (Haar-like) features; 9 template types in total • The integral image was used to accelerate the computation of the rectangle features.
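As a concrete sketch of this preprocessing step (assumption: Python with scikit-image stands in for the MATLAB imresize call above; the function name and file handling are illustrative):

```python
# Preprocessing sketch: Python/scikit-image standing in for MATLAB's imresize.
from skimage import color, io, transform

def load_training_window(path):
    """Load a 40x100 training image, convert it to grayscale, and shrink it
    by the 0.35 ratio (40*0.35 = 14, 100*0.35 = 35) to the 14x35 window
    used later for feature generation."""
    img = io.imread(path)
    if img.ndim == 3:                      # RGB input: convert to grayscale
        img = color.rgb2gray(img)
    return transform.resize(img, (14, 35), anti_aliasing=True)
```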
Integral Image The sum of the pixels within rectangle D can be computed from four integral-image values at its corners: 4 + 1 − (2 + 3).
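A minimal sketch of the integral image and the four-corner rectangle sum, assuming NumPy arrays; the zero padding is an implementation convenience rather than something specified on the slide.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x]; the extra zero row/column removes
    border special cases in rect_sum."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Four-corner rule from the slide: sum(D) = 4 + 1 - (2 + 3)."""
    y0, x0 = top, left
    y1, x1 = top + height, left + width
    return ii[y1, x1] + ii[y0, x0] - ii[y0, x1] - ii[y1, x0]
```

With the integral image in hand, any rectangle sum costs four array lookups, independent of the rectangle's size.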
Features in 14×35 windows Total: 177,660 features. The feature templates can be placed at any position and at any possible size within the 14×35 window. Each combination of template type, position, and size defines one feature, and all such combinations together form the complete feature set of the 14×35 window.
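To make the enumeration concrete, the sketch below lists every placement of one horizontal two-rectangle template inside the 14×35 window and evaluates it with rect_sum from the previous sketch; the other eight template types are enumerated the same way, and the exact total of 177,660 depends on that particular template set (function names are illustrative).

```python
def enumerate_two_rect_features(win_h=14, win_w=35):
    """All (top, left, height, width) placements of a horizontal two-rectangle
    template (left half vs. right half), at every position and every scale
    that fits inside the window."""
    feats = []
    for h in range(1, win_h + 1):                 # any height
        for w in range(2, win_w + 1, 2):          # even widths only
            for top in range(win_h - h + 1):
                for left in range(win_w - w + 1):
                    feats.append((top, left, h, w))
    return feats

def two_rect_value(ii, top, left, h, w):
    """Feature value: sum of the right half minus sum of the left half."""
    half = w // 2
    return (rect_sum(ii, top, left + half, h, half)
            - rect_sum(ii, top, left, h, half))
```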
Key Concepts of AdaBoost • The objective is to learn a sequence of best weak classifiers and the best combining weights • Each weak classifier is constructed based on one feature (1D).
AdaBoost (illustrated) Weak classifier 1 → weights of misclassified samples increased → weak classifier 2 → weak classifier 3 → Final classifier: a linear combination of the weak classifiers.
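The sketch below shows the standard discrete AdaBoost loop with one-feature decision stumps, roughly as in Viola and Jones [1]; variable names are illustrative. The exhaustive search over features, thresholds, and polarities inside the loop is exactly the per-round cost that makes training so slow, as the next slides discuss.

```python
import numpy as np

def adaboost(F, y, T):
    """Discrete AdaBoost with one-feature decision stumps.
    F: (n_samples, n_features) matrix of feature values,
    y: labels in {-1, +1}, T: number of boosting rounds."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # sample weights, uniform at first
    strong = []                                  # (feature, threshold, polarity, alpha)
    for _ in range(T):
        best = None
        for j in range(F.shape[1]):              # exhaustive search over features,
            for thr in np.unique(F[:, j]):       # thresholds and polarities:
                for pol in (+1, -1):             # this loop is the bottleneck
                    pred = pol * np.sign(F[:, j] - thr)
                    pred[pred == 0] = pol
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # combining weight of this round
        pred = pol * np.sign(F[:, j] - thr)
        pred[pred == 0] = pol
        w *= np.exp(-alpha * y * pred)           # misclassified samples gain weight
        w /= w.sum()
        strong.append((j, thr, pol, alpha))
    return strong

def predict(strong, F):
    """Final classifier: sign of the weighted vote of the weak classifiers."""
    score = np.zeros(F.shape[0])
    for j, thr, pol, alpha in strong:
        pred = pol * np.sign(F[:, j] - thr)
        pred[pred == 0] = pol
        score += alpha * pred
    return np.sign(score)
```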
Computational Issue It takes WEEKS to train!
The Bottleneck Training of the weak classifiers (Figure courtesy of Minh-Tri Pham, 2007)
Fast-StatBoost (1) • Train the weak classifiers using statistics • Assume the feature values of each class follow a normal distribution (Figure: two Gaussians over the feature-value axis, one for non-cars and one for cars, with the optimal threshold between them; courtesy of Minh-Tri Pham, 2007)
Fast-StatBoost (2) Training each weak classifier from these class statistics takes constant time (Figure courtesy of Minh-Tri Pham, 2007)
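Below is a sketch of a statistics-based weak classifier under the normal-distribution assumption: fit a weighted Gaussian to each class's feature values and place the threshold where the two weighted densities intersect. It follows the idea on these slides, not necessarily the exact procedure of Pham and Cham [2].

```python
import numpy as np

def stat_stump(f, y, w):
    """Weak classifier for one feature, trained from weighted class statistics
    (a sketch of the idea, not the exact Pham & Cham procedure).
    f: feature values, y: labels in {-1, +1}, w: sample weights summing to 1."""
    stats = {}
    for cls in (+1, -1):
        m = y == cls
        wc = w[m] / w[m].sum()
        mu = np.sum(wc * f[m])                       # weighted mean
        var = np.sum(wc * (f[m] - mu) ** 2) + 1e-12  # weighted variance
        stats[cls] = (w[m].sum(), mu, var)           # (prior, mean, variance)

    (p1, m1, v1), (p0, m0, v0) = stats[+1], stats[-1]
    # Threshold where the weighted densities are equal:
    #   p1 * N(x; m1, v1) = p0 * N(x; m0, v0)
    # which reduces to the quadratic a*x^2 + b*x + c = 0 below.
    a = 1.0 / v0 - 1.0 / v1
    b = 2.0 * (m1 / v1 - m0 / v0)
    c = m0 ** 2 / v0 - m1 ** 2 / v1 + np.log((p1 ** 2 * v0) / (p0 ** 2 * v1))
    if abs(a) < 1e-12:                               # equal variances: linear case
        roots = np.array([-c / b])
    else:
        roots = np.roots([a, b, c])
    roots = roots[np.isreal(roots)].real
    if roots.size == 0:                              # degenerate fallback
        roots = np.array([(m0 + m1) / 2.0])
    # prefer the crossing that lies between the two class means
    between = roots[(roots >= min(m0, m1)) & (roots <= max(m0, m1))]
    thr = between[0] if between.size else roots[0]
    polarity = 1 if m1 > m0 else -1                  # which side is classed as "car"
    return thr, polarity
```

Because the weighted means and variances are simple sums over the samples, they replace the per-feature threshold scan of the exhaustive stump search with a closed-form computation, which is presumably where the constant-time claim on this slide comes from.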
The Cascade • Motivation • Increase detection performance • Reduce computation time • Key insight • Use simpler classifiers to reject the majority of sub-windows • Use more complex classifiers to achieve low false positive rates on the remaining sub-windows
The Cascade (illustrated) • Overall form: a degenerate decision tree • Reflects the fact that, within any single image, an overwhelming majority of sub-windows are negative [1] (Diagram: all sub-windows enter stage 1; each stage passes (T) its survivors to the next stage and eventually to further processing, while rejected (F) sub-windows are discarded immediately.)
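A minimal sketch of how such a cascade is evaluated on one sub-window (function and variable names are illustrative, not from the slides):

```python
def cascade_classify(window_ii, stages):
    """Degenerate decision tree: a sub-window must pass every stage, and any
    stage may reject it immediately.
    stages: list of (strong_score, stage_threshold) pairs, where
    strong_score(window_ii) returns the weighted sum of that stage's weak
    classifiers evaluated on the window's integral image."""
    for strong_score, stage_threshold in stages:
        if strong_score(window_ii) < stage_threshold:
            return False      # rejected early: no further stages are evaluated
    return True               # survived all stages: report a detection
```

Because most sub-windows fail the first, cheapest stage, the average cost per window stays close to the cost of that stage alone.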
A Cascade of Classifiers • Overall false positive rate: the product of the per-layer false positive rates • Overall detection rate: the product of the per-layer detection rates
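As a worked example of these products (Viola and Jones [1]); the numbers are purely illustrative, not this detector's measured rates:

```latex
% Overall rates of a K-layer cascade; f_i and d_i are the per-layer
% false positive and detection rates.
F = \prod_{i=1}^{K} f_i, \qquad D = \prod_{i=1}^{K} d_i
% Illustrative numbers only: K = 3 layers with f_i = 0.4, d_i = 0.99 give
% F = 0.4^3 = 0.064, \qquad D = 0.99^3 \approx 0.97
```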
Our Cascaded Car Detector • Three-layer cascade • The layers have 4, 11, and 73 weak classifiers, respectively
Training Set (1) Number of car images: 500 (Courtesy of the UIUC Image Database for Car Detection)
Training Set (2) Number of non-car images: 500 (Courtesy of the UIUC Image Database for Car Detection)
First Three Features 1st Feature 2nd Feature 3rd Feature
The ROC curve Test results on 170 Images:
References • [1] P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features," in Proc. CVPR, 2001. • [2] M.-T. Pham and T.-J. Cham, "Fast Training and Selection of Haar Features Using Statistics in Boosting-Based Face Detection," in Proc. ICCV, 2007.