CVPR 2009, Miami, Florida. Object Detection Using a Max-Margin Hough Transform. Subhransu Maji and Jitendra Malik, University of California at Berkeley, Berkeley, CA 94720
Overview • Overview of probabilistic Hough transform • Learning framework • Experiments • Summary
Our Approach: Hough Transform • Popular for detecting parameterized shapes • Hough'59, Duda&Hart'72, Ballard'81, … • Local parts vote for object pose • Complexity: #parts × #votes • Can be significantly lower than brute-force search over pose (e.g., sliding-window detectors)
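The classical voting scheme above can be sketched for a simple parameterized shape, a circle of known radius, where each edge point casts one vote per candidate pose. This is an illustrative toy, not the authors' code; all names and numbers are invented:

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=64):
    """Each edge point votes for every circle center that could have
    produced it at the given radius; peaks in the accumulator are
    detected circles. Cost is #points * #votes per point."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (x, y) in edge_points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return acc

# Synthetic test: points sampled on a circle of radius 10 centered at (30, 40)
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = [(30 + 10 * np.cos(a), 40 + 10 * np.sin(a)) for a in angles]
acc = hough_circle_centers(pts, radius=10, shape=(80, 80))
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak)  # expect a sharp peak near (row=40, col=30)
```

Compare this with scanning every (center, radius) hypothesis against every edge point: voting touches only the poses each part is consistent with, which is the complexity advantage the slide refers to.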
Generalized to Object Detection • Use Hough-space voting to find objects • Lowe'99, Leibe et al.'04,'08, Opelt&Pinz'08 • Implicit Shape Model (Leibe et al.'04,'08) Learning • Learn appearance codebook: cluster descriptors over interest points on training images • Learn spatial occurrence distributions: match codebook to training images and record matching positions on the object (centroid is given)
Detection Pipeline • Interest points (e.g., SIFT, Geometric Blur, local patches) • Matched codebook entries (KD-tree lookup) • Probabilistic voting • B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model, 2004
Probabilistic Hough Transform • C – codebook; f – features; l – locations • Detection score: S(x) = Σ_i Σ_k p(x | C_k, l_i) · p(C_k | f_i) • p(x | C_k, l_i): position posterior for codeword match C_k at location l_i • p(C_k | f_i): codeword likelihood for feature f_i
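The ISM-style score on this slide (position posterior times codeword likelihood, summed over features and codewords) can be written as a toy voting loop. The codebook, offsets, and probabilities below are invented for illustration:

```python
import numpy as np

# Toy ISM-style voting: each matched codeword casts votes for the object
# center at its stored training offsets, weighted by the soft match
# probability p(C_k | f_i). All data here are made up.
offsets = {  # codeword -> list of (dx, dy, p(x | C_k)) from training
    0: [(5, 0, 0.5), (4, 0, 0.5)],
    1: [(-5, 0, 1.0)],
}

def vote(features, offsets, shape):
    """features: list of (x, y, {codeword: match_prob}).
    Accumulates S(x) = sum_i sum_k p(x | C_k, l_i) * p(C_k | f_i)."""
    acc = np.zeros(shape)
    for (x, y, matches) in features:
        for k, p_match in matches.items():
            for (dx, dy, p_off) in offsets[k]:
                cx, cy = x + dx, y + dy
                if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                    acc[cy, cx] += p_match * p_off
    return acc

feats = [(10, 20, {0: 0.8, 1: 0.2}),   # part to the left of the center
         (20, 20, {1: 0.9})]           # part to the right of the center
acc = vote(feats, offsets, shape=(40, 40))
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cx, cy)  # both parts agree on a center near (15, 20)
```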
Learning Feature Weights • Given : • Appearance Codebook, C • Posterior distribution of object center for each codeword P(x|…) • To Do : • Learn codebook weights such that the Hough transform detector works well (i.e. better detection rates) • Contributions : • Show that these weights can be learned optimally using a max-margin framework. • Demonstrate that this leads to improved accuracy on various datasets
Learning Feature Weights: First Try • Naïve Bayes weights • Encourage relatively rare parts • However, rare parts may not be good predictors of the object location • Need to jointly consider both the codeword priors and the distribution of predicted object centers
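A toy illustration of why naive-Bayes-style log-odds weights favor rare parts: a codeword that almost never fires on background receives a huge weight regardless of how well it localizes the object. The counts and part names below are invented:

```python
import math

# Hypothetical codeword firing counts on object vs. background images.
counts_pos = {"wheel": 90, "corner": 40, "rare_blob": 5}
counts_neg = {"wheel": 10, "corner": 40, "rare_blob": 0}

def log_odds(k, eps=1e-6):
    """Naive-Bayes-style weight: log of p(codeword | object) over
    p(codeword | background), smoothed by eps to avoid log(0)."""
    p = counts_pos[k] / sum(counts_pos.values())
    q = counts_neg[k] / sum(counts_neg.values())
    return math.log((p + eps) / (q + eps))

w = {k: log_odds(k) for k in counts_pos}
print(sorted(w, key=w.get, reverse=True))  # rare_blob dominates
```

The rare codeword ends up with by far the largest weight even though nothing in this score measures whether its votes land near the true center, which is exactly the failure mode the slide points out.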
Learning Feature Weights: Second Try • Location-invariance assumption • Overall score is then linear given the matched codebook entries: S(x) = Σ_k w_k · a_k(x), where the activation a_k(x) accumulates the position-posterior × codeword-likelihood mass of codeword k at x, and the w_k are the feature weights
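Because the score is a dot product between codeword weights and activations, reweighting codewords changes the detector without touching the voting machinery. A minimal sketch with made-up numbers:

```python
import numpy as np

# With location invariance, the score at a hypothesized center x is
# S(x) = w . a(x), where a_k(x) sums the (match prob * offset prob)
# mass that codeword k contributes to x. Toy numbers throughout.
K = 3  # codebook size

def activations(votes, K):
    """votes: list of (codeword, p_match * p_offset) landing on center x."""
    a = np.zeros(K)
    for k, mass in votes:
        a[k] += mass
    return a

votes_at_x = [(0, 0.4), (2, 0.9), (0, 0.1)]
a = activations(votes_at_x, K)          # a = [0.5, 0.0, 0.9]
w_uniform = np.ones(K)                  # plain Hough transform
w_learned = np.array([0.2, 1.0, 1.5])   # e.g. learned codeword weights
print(w_uniform @ a, w_learned @ a)     # same votes, different scores
```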
Max-Margin Training Training: • Construct dictionary and record codeword distributions on training examples (standard ISM model, Leibe et al.'04) • Compute activation vectors "a" on positive and negative training examples • Learn non-negative codebook weights by max-margin training on the activations, with class labels in {+1, −1} (our contribution)
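One way to realize the max-margin step is a soft-margin SVM on the activation vectors with the weights projected onto the non-negative orthant. The projected-subgradient trainer below is an illustrative stand-in, not necessarily the optimizer used in the paper; the data are a toy:

```python
import numpy as np

def train_nonneg_svm(A, y, lam=0.01, lr=0.05, epochs=500, seed=0):
    """Max-margin sketch: minimize lam/2 ||w||^2 + mean hinge loss on
    y * (w . a + b), projecting w onto w >= 0 after every step (Hough
    voting weights must be non-negative). A: (n, K) activations."""
    rng = np.random.default_rng(seed)
    n, K = A.shape
    w, b = np.zeros(K), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (A[i] @ w + b)
            gw, gb = lam * w, 0.0          # regularizer subgradient
            if margin < 1:                 # hinge is active
                gw = gw - y[i] * A[i]
                gb = -y[i]
            w = np.maximum(w - lr * gw, 0.0)  # project onto w >= 0
            b -= lr * gb
    return w, b

# Toy data: codeword 0 fires on positives, codeword 1 on negatives.
A = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.8]])
y = np.array([1, 1, -1, -1])
w, b = train_nonneg_svm(A, y)
pred = np.sign(A @ w + b)
```

Note the effect of the constraint: the "negative evidence" codeword cannot receive a negative weight, so the trainer drives it to zero and compensates with the bias, which is the behavior a non-negative voting scheme requires.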
Experiment Datasets • ETHZ Shape Dataset (Ferrari et al., ECCV 2006): 255 images over 5 classes (Apple logo, Bottle, Giraffe, Mug, Swan) • UIUC Single Scale Cars Dataset (Agarwal & Roth, ECCV 2002): 1050 training, 170 test images • INRIA Horse Dataset (Jurie & Ferrari): 170 positive + 170 negative images (50 + 50 for training)
Experimental Results • Hough transform details • Interest points : Geometric Blur descriptors at sparse sample of edges (Berg&Malik’01) • Codebook constructed using k-means • Voting over position and aspect ratio • Search over scales • Correct detections (PASCAL criterion)
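The PASCAL criterion mentioned above counts a detection as correct when the predicted and ground-truth boxes overlap sufficiently; in its standard form, intersection-over-union must be at least 0.5. A self-contained check:

```python
def pascal_overlap(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    Under the PASCAL criterion a detection is correct when IoU >= 0.5."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A half-overlapping box fails the criterion: IoU = 50 / 150 = 1/3.
print(pascal_overlap((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333...
```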
Learned Weights (ETHZ shape) • Naïve Bayes: influenced by clutter (rare structures) • Max-Margin: concentrates on important parts • Colormap: blue (low) to dark red (high)
Learned Weights (UIUC cars) • Naïve Bayes vs. Max-Margin: important parts highlighted • Colormap: blue (low) to dark red (high)
Learned Weights (INRIA horses) • Naïve Bayes vs. Max-Margin: important parts highlighted • Colormap: blue (low) to dark red (high)
Detection Results (ETHZ dataset) Recall @ 1.0 False Positives Per Window
Detection Results (INRIA Horses) Our Work
Detection Results (UIUC Cars) Our Work
Hough Voting + Verification Classifier • ETHZ Shape Dataset: recall @ 0.3 false positives per image • Implicit sampling over aspect ratio gives better-fitting bounding boxes • IKSVM was run on top 30 windows + local search • Baselines: KAS (Ferrari et al., PAMI'08), TPS-RPM (Ferrari et al., CVPR'07)
Hough Voting + Verification Classifier Our Work IKSVM was run on top 30 windows + local search
Hough Voting + Verification Classifier • UIUC Single Scale Car Dataset: 1.7% improvement • IKSVM was run on top 10 windows + local search
Summary • Hough-transform-based detectors offer good detection performance and speed. • To get better performance one may learn: • Discriminative dictionaries (two talks ago, Gall et al.'09) • Weights on codewords (our work) • Our approach directly optimizes detection performance using a max-margin formulation • Any weak predictor of the object center can be used in this framework • E.g., regions (one talk ago, Gu et al., CVPR'09)
Thank You Acknowledgements Work partially supported by: ARO MURI W911NF-06-1-0076 and ONR MURI N00014-06-1-0734 Computer Vision Group @ UC Berkeley Questions?
Backup Slide : Toy Example Rare but poor localization Rare and good localization