Kullback-Leibler Boosting
Ce Liu, Heung-Yeung Shum
Microsoft Research Asia
A General Two-Layer Classifier
[Figure: the input is mapped by projection functions to an intermediate representation; a discriminating function with combination coefficients and an identification function then produce the output.]
Issues under the Two-Layer Framework
• How to choose the type of projection function?
• How to choose the type of discriminating function?
• How to learn the parameters from samples?
[Figure: the framework instantiated with familiar choices such as RBF, polynomial, and sigmoid functions.]
Our Proposal: Kullback-Leibler Boosting (KLBoosting)
• How to choose the type of projection function?
  – Kullback-Leibler linear features
• How to choose the type of discriminating function?
  – Histogram divergences
• How to learn the parameters from samples?
  – Sample re-weighting (boosting)
Intuitions
• Linear projections are robust and easy to compute
• The histograms of the two classes along a projection are evidence for classification
• The linear feature along which the two class histograms differ most should be selected
• If the weight distribution of the sample set changes, the histograms change as well
• Increase the weights of misclassified samples, and decrease the weights of correctly classified samples
KLBoosting (1)
At the k-th iteration:
• Kullback-Leibler feature: select the linear feature that maximizes the KL divergence between the weighted class histograms
• Discriminating function: compare the class histograms along the selected feature
• Reweighting: increase the weights of misclassified samples, decrease the weights of correctly classified ones
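The equation images on this slide did not survive extraction, so here is a minimal NumPy sketch of one iteration as described above. The helper names (`symmetric_kl`, `class_histograms`, `select_kl_feature`, `reweight`), the histogram binning, and the exponential reweighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-10):
    # Symmetric KL divergence between two normalized histograms.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

def class_histograms(X_pos, X_neg, w_pos, w_neg, feature, bins=300):
    # Weighted histograms of both classes projected onto one linear feature.
    u_pos, u_neg = X_pos @ feature, X_neg @ feature
    lo = min(u_pos.min(), u_neg.min())
    hi = max(u_pos.max(), u_neg.max())
    h_pos, edges = np.histogram(u_pos, bins=bins, range=(lo, hi), weights=w_pos)
    h_neg, _ = np.histogram(u_neg, bins=bins, range=(lo, hi), weights=w_neg)
    return h_pos / h_pos.sum(), h_neg / h_neg.sum(), edges

def select_kl_feature(X_pos, X_neg, w_pos, w_neg, candidates):
    # KL feature: the candidate whose two class histograms differ most.
    scores = [symmetric_kl(*class_histograms(X_pos, X_neg, w_pos, w_neg, f)[:2])
              for f in candidates]
    return candidates[int(np.argmax(scores))]

def reweight(w, y, margin):
    # Boosting-style update: misclassified samples (y * margin < 0) gain weight,
    # correctly classified samples lose weight; renormalize afterwards.
    w = w * np.exp(-y * margin)
    return w / w.sum()
```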
KLBoosting (2)
• Two types of parameters to learn:
  – KL features (the linear projections)
  – Combination coefficients
• Learning a KL feature in low dimensions: MCMC
• Learning the combination coefficients to minimize training error
  – Optimization: brute-force search
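A sketch of the two parameter-learning steps, reusing `class_histograms` from the sketch above. The half-log-likelihood-ratio form of the discriminating function and the coefficient grid are assumptions for illustration; the slide only states that the coefficients are found by brute-force search to minimize training error.

```python
import numpy as np

def discriminating_function(h_pos, h_neg, edges, eps=1e-10):
    # One plausible histogram divergence: a per-bin half log-likelihood ratio.
    table = 0.5 * np.log((h_pos + eps) / (h_neg + eps))
    def h(u):
        # Look up the bin of projection value(s) u, clamped to the table range.
        idx = np.clip(np.searchsorted(edges, u) - 1, 0, len(table) - 1)
        return table[idx]
    return h

def brute_force_coefficient(F_prev, h_k, y, grid=np.linspace(0.0, 2.0, 201)):
    # Brute-force search: pick the alpha that minimizes the training error of
    # sign(F_prev + alpha * h_k) against labels y in {-1, +1}.
    errors = [np.mean(np.sign(F_prev + a * h_k) != y) for a in grid]
    return float(grid[int(np.argmin(errors))])
```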
Flowchart
1. Input: initialize sample weights
2. Learn a KL feature
3. Learn the combining coefficients
4. Update the weights
5. If the recognition error is small enough, output the classifier; otherwise return to step 2
[Figure: flowchart with samples labeled +1/−1 being reweighted at each iteration.]
A Simple Example
[Figure: the learned KL features, the class histograms along them, and the resulting decision manifold.]
Kullback-Leibler Analysis (KLA)
• Finding a KL feature directly in image space is a challenging task
• Sequential 1D optimization (see the sketch after the figure below):
  – Construct a feature bank
  – Build a set of the most promising features
  – Sequentially perform 1D optimization along the promising features
• Conjecture: the global optimum of an objective function can be reached by searching along as many linear features as needed
Intuition of Sequential 1D Optimization
[Figures: the feature bank, the MCMC feature used as the starting point, the promising feature set, and the result of sequential 1D optimization.]
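A sketch of sequential 1D optimization under assumed details (step grid, unit-norm constraint, sweep count): starting from the MCMC feature, line-search the objective along each promising direction and keep any step that improves it.

```python
import numpy as np

def sequential_1d_optimization(objective, init, promising,
                               steps=np.linspace(-1.0, 1.0, 41), sweeps=3):
    # Greedy 1D searches: perturb the current feature along each promising
    # direction, keeping any step that increases the objective.
    f = init / np.linalg.norm(init)
    best = objective(f)
    for _ in range(sweeps):
        for d in promising:
            for s in steps:
                cand = f + s * d
                norm = np.linalg.norm(cand)
                if norm < 1e-12:
                    continue  # degenerate step, skip
                cand = cand / norm
                val = objective(cand)
                if val > best:
                    f, best = cand, val
    return f
```

Here `objective` would be the KL divergence between the class histograms along the candidate feature, e.g. `lambda f: symmetric_kl(*class_histograms(X_pos, X_neg, w_pos, w_neg, f)[:2])` from the earlier sketch.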
Optimization in Image Space
• An image is a random field, not a pure random variable
• The local statistics can be captured by wavelets:
  – 111 × 400 small-scale wavelets for the whole 20×20 patch
  – 80 × 100 large-scale wavelets for the inner 10×10 patch
  – 52,400 wavelets in total compose the feature bank
  – The 2,800 most promising wavelets are selected
[Figure: the feature bank, built from Gaussian-family and Haar wavelets.]
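The wavelet families and counts above are the paper's setup; as a rough illustration, here is how a bank of simple two-rectangle Haar-like wavelets over a 20×20 patch could be enumerated as flat linear features. The sizes and stride are placeholder values, and the real bank also includes Gaussian-family wavelets and large-scale wavelets on the inner 10×10 region.

```python
import numpy as np

def haar_two_rect(patch=20, top=0, left=0, h=4, w=4):
    # A left/right two-rectangle Haar-like wavelet, flattened to act as a
    # linear feature on a vectorized patch.
    f = np.zeros((patch, patch))
    f[top:top + h, left:left + w // 2] = +1.0
    f[top:top + h, left + w // 2:left + w] = -1.0
    return f.ravel()

def build_feature_bank(patch=20, sizes=((4, 4), (6, 6), (8, 8)), stride=2):
    # Slide each wavelet size over the patch to enumerate the bank.
    bank = []
    for h, w in sizes:
        for top in range(0, patch - h + 1, stride):
            for left in range(0, patch - w + 1, stride):
                bank.append(haar_two_rect(patch, top, left, h, w))
    return np.array(bank)
```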
Data-Driven KLA
• At each position of the 20×20 lattice, compute the histograms of the 111 wavelets and the KL divergences between face and non-face images
• Large-scale wavelets are used to capture the global statistics on the 10×10 inner lattice
• Compose the KL feature by sequential 1D optimization
[Figure: face and non-face patterns; the feature bank (111 wavelets) and the promising feature set (2,800 features in total).]
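A sketch of how the promising feature set could be selected, reusing `symmetric_kl` and `class_histograms` from the earlier KLBoosting sketch: rank every bank wavelet by the divergence between its face and non-face histograms and keep the top entries (2,800 in the paper's setup). The function name and signature are illustrative.

```python
import numpy as np

def promising_feature_set(X_face, X_nonface, w_face, w_nonface, bank, k=2800):
    # Score every bank wavelet by how much the face / non-face histograms
    # along it differ, then keep the k highest-scoring features.
    scores = [symmetric_kl(*class_histograms(X_face, X_nonface,
                                             w_face, w_nonface, f)[:2])
              for f in bank]
    order = np.argsort(scores)[::-1]
    return [bank[i] for i in order[:k]]
```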
Comparison with Other Features
[Figure: the best Haar wavelet (KL = 2.944), the MCMC feature (KL = 3.246), and the KL feature (KL = 10.967).]
Application: Face Detection
• Experimental setup:
  – 20×20 patch to represent a face
  – 17,520 frontal faces
  – 1,339,856,947 non-face patches from 2,484 images
  – 300 bins in the histogram representation
• A cascade of KLBoosting classifiers:
  – Each classifier keeps the false negative rate below 0.01% and the false alarm rate below 35%
  – 22 classifiers in total form the cascade (450 features)
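A minimal sketch of how such a cascade evaluates a candidate patch. Here `stages` is a hypothetical list of (score function, threshold) pairs, one per KLBoosting classifier, with each threshold tuned so the stage meets the false negative and false alarm targets above; `sliding_windows` in the usage comment is likewise hypothetical.

```python
def cascade_detect(x, stages):
    # stages: list of (score_fn, threshold) pairs, one per cascade classifier.
    # A patch must pass every stage to be reported as a face; most non-face
    # patches are rejected cheaply by the early stages.
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return False  # rejected: non-face
    return True           # accepted by all stages

# Hypothetical usage: scan all 20x20 windows of an image.
# detections = [(r, c) for (r, c), patch in sliding_windows(image, size=20)
#               if cascade_detect(patch.ravel(), stages)]
```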
KL Features of the Face Detector
[Figure: face and non-face patterns; the first 10 KL features capture global semantics, while other KL features act as frequency filters and local features.]
Summary
• KLBoosting is an optimal classifier:
  – Projection function: linear projection
  – Discriminating function: histogram divergence
  – Coefficients: optimized by minimizing training error
• KLA: a data-driven approach to pursue KL features
• Application: face detection
Thank you!
Harry Shum
Microsoft Research Asia
hshum@microsoft.com