Viola/Jones: features
"Rectangle filters": differences between sums of pixels in adjacent rectangles.
Weak classifier: yt(x) = +1 if ht(x) > θt, −1 otherwise
Strong classifier: Y(x) = ∑t αt yt(x)
Detection: face if Y(x) > 0, non-face otherwise
Select 200 unique features by AdaBoost.
Robust Real-Time Face Detection, Viola and Jones, IJCV 2004
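A minimal sketch of this decision rule in Python (the weak learners, thresholds θt, and weights αt are assumed to be given, e.g. by the AdaBoost selection described below; this is not the authors' code):

```python
def classify(window, weak_learners):
    """Viola/Jones-style decision rule: Y(x) = sum_t alpha_t * y_t(x).

    weak_learners: list of (h, theta, alpha), where h(window) returns a
    rectangle-filter response and y_t(x) = +1 if h_t(x) > theta_t else -1.
    """
    Y = sum(alpha * (1 if h(window) > theta else -1)
            for h, theta, alpha in weak_learners)
    return "face" if Y > 0 else "non-face"
```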
Integral Image (a.k.a. summed area table) • Define the integral image ii(x, y) = ∑x'≤x, y'≤y i(x', y') • Any rectangular sum can then be computed in constant time from four integral-image values: sum = ii(D) − ii(B) − ii(C) + ii(A), where A, B, C, D are the rectangle's corners • Rectangle features can be computed as differences between such rectangle sums (see the sketch below)
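A minimal sketch of these operations, assuming a numpy image and a zero border on the integral image so the four-corner formula needs no bounds checks (not code from the paper):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of all pixels above and to the left of (x, y), inclusive.
    A zero row/column is prepended so rect_sum needs no boundary checks."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle with top-left corner (r, c):
    four lookups, constant time regardless of rectangle size."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def two_rect_feature(ii, r, c, h, w):
    """Example rectangle filter: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```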
Feature selection (AdaBoost)
Given training data {xn, tn}, find {αt} and {yt(x)} by minimizing a total error function.
The ideal error function, error(z) = 0 if z > 0 and 1 otherwise, is hard to optimize; instead use error(z) = exp(−z), which makes the optimization convex.
Define fm(x) = ∑l=1..m αl yl(x) and E(fm) = ∑n exp(−tn fm(xn)).
Basic idea (greedy): first find f1(x) by minimizing E(f1); then, given fm−1(x), find fm(x) by searching for the best αm and ym(x).
Feature selection (AdaBoost)
E(fm) = ∑n exp(−tn fm(xn)) = ∑n wn(m) exp(−tn αm ym(xn)), where wn(m) = exp(−tn fm−1(xn)).
wn(m) is high if fm−1(x) is incorrect on xn and low otherwise.
Next we want to find αm and ym(x) to minimize this weighted error function.
Feature selection (AdaBoost)
Recall tn ∈ {−1, +1} and ym(x) ∈ {−1, +1}, so tn ym(xn) = +1 for correctly classified points and −1 for misclassified ones.
Splitting the sum: E = exp(−αm) ∑n correct wn(m) + exp(αm) ∑n misclassified wn(m).
Feature selection (AdaBoost)
Find ym(x) to minimize the weighted misclassification count ∑n wn(m) I(ym(xn) ≠ tn).
Calculate the weighted error rate for ym(x): εm = ∑n wn(m) I(ym(xn) ≠ tn) / ∑n wn(m).
Find αm to minimize E: αm = ½ ln((1 − εm)/εm).
Feature selection (AdaBoost)
Update weights: wn(m+1) = exp(−tn fm(xn)) = wn(m) exp(−tn αm ym(xn)).
Note: up to a factor common to all n, only the weights of incorrectly classified data change, so only those need updating.
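Putting the previous steps together, here is a sketch of the greedy selection loop; decision stumps over a precomputed feature-response matrix stand in for the rectangle-filter weak learners, and the αm and weight-update formulas follow the derivation above (not the paper's implementation):

```python
import numpy as np

def adaboost(feature_vals, t, n_rounds):
    """Greedy AdaBoost over thresholded features (decision stumps).

    feature_vals: (n_features, n_samples) array of feature responses h(x)
    t:            (n_samples,) labels in {-1, +1}
    Returns a list of (feature_idx, threshold, polarity, alpha).
    """
    n_feat, n_samp = feature_vals.shape
    w = np.full(n_samp, 1.0 / n_samp)            # initial weights w_n^(1)
    classifiers = []
    for _ in range(n_rounds):
        best = None
        # 1. Find the weak classifier y_m(x) with the lowest weighted error.
        for j in range(n_feat):
            for theta in np.unique(feature_vals[j]):
                for polarity in (+1, -1):
                    pred = np.where(polarity * (feature_vals[j] - theta) > 0, 1, -1)
                    err = np.sum(w * (pred != t)) / np.sum(w)
                    if best is None or err < best[0]:
                        best = (err, j, theta, polarity, pred)
        eps, j, theta, polarity, pred = best
        # 2. Weight of the selected weak classifier: alpha = 0.5 ln((1 - eps)/eps).
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
        classifiers.append((j, theta, polarity, alpha))
        # 3. Re-weight: w_n <- w_n * exp(-t_n * alpha * y_m(x_n)), then normalize,
        #    so only misclassified samples gain relative weight.
        w *= np.exp(-alpha * t * pred)
        w /= w.sum()
    return classifiers
```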
Viola/Jones: handling scale
Scan the detector window over all image locations, from the smallest scale up to larger scales: about 50,000 locations/scales.
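One way such a scan might be enumerated (the base window size, scale step, and shift are illustrative assumptions, not values from the slides):

```python
def windows(img_w, img_h, base=24, scale_step=1.25, shift=1.5):
    """Yield (x, y, size) detection windows over locations and scales.

    base: size of the smallest (training) window, assumed 24x24 here;
    the pixel shift grows with the window size so larger scales are coarser.
    """
    size = base
    while size <= min(img_w, img_h):
        step = max(1, int(round(shift * size / base)))
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                yield x, y, size
        size = int(round(size * scale_step))
```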
Cascaded Classifier
• First classifier (1 feature): 100% detection, 50% false positives.
• Second classifier (5 features): 100% detection, 40% false positives (20% cumulative), trained using data from the previous stage.
• Third classifier (20 features): 100% detection, 10% false positive rate (2% cumulative).
• Put cheaper classifiers up front.
[Cascade diagram: IMAGE SUB-WINDOW → 1-feature stage → 5-feature stage → 20-feature stage → FACE; each stage rejects NON-FACE windows; cumulative false positive rates 50%, 20%, 2%.]
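A sketch of how a cascade of this shape might be evaluated on a single sub-window; the per-stage representation and thresholds here are assumptions, not the paper's code:

```python
def cascade_detect(window, stages):
    """Evaluate a cascade on one sub-window.

    stages: list of (classifiers, stage_threshold), where classifiers is a
    list of (weak, alpha) and weak(window) returns the weak decision in {-1, +1}.
    The window counts as a face only if it passes every stage; the cheap early
    stages reject most non-face windows before the expensive ones run.
    """
    for classifiers, stage_threshold in stages:
        score = sum(alpha * weak(window) for weak, alpha in classifiers)
        if score < stage_threshold:
            return False          # rejected: NON-FACE
    return True                   # passed all stages: FACE
```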
Viola/Jones results: run-time of 15 fps (384×288-pixel images on a 700 MHz Pentium III)
Application
Smart cameras: autofocus, red-eye removal, auto color correction
Application Lexus LS600 Driver Monitor System
Pedestrian Detection: Chamfer matching
[Pipeline: input image → edge detection → distance transform; the template is matched against the distance transform to find the best match.]
Gavrila & Philomin, ICCV 1999. Slides from K. Grauman and B. Leibe.
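A sketch of the chamfer score under these definitions, using SciPy's Euclidean distance transform (the hierarchical template search of the next slide is omitted):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_map, template_points):
    """Chamfer distance of a template against an image edge map.

    edge_map:        2-D boolean array, True at detected edges.
    template_points: (N, 2) array of (row, col) template edge coordinates,
                     already shifted to the candidate location.
    Lower scores mean a better match.
    """
    # Distance transform: each pixel stores the distance to the nearest edge.
    dt = distance_transform_edt(~edge_map)
    rows, cols = template_points[:, 0], template_points[:, 1]
    return dt[rows, cols].mean()
```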
Pedestrian Detection: Chamfer matching
Hierarchy of templates
Gavrila & Philomin, ICCV 1999. Slides from K. Grauman and B. Leibe.
Pedestrian Detection: HOG Feature Slides from Andrew Zisserman
Pedestrian Detection: HOG Feature
HOG: Histogram of Oriented Gradients
Dalal & Triggs, CVPR 2005. Slides from Andrew Zisserman.
Pedestrian Detection: HOG Feature
Map each grid cell in the input window to a gradient-orientation histogram weighted by gradient magnitude (see the sketch below).
Code: http://pascal.inrialpes.fr/soft/olt
Dalal & Triggs, CVPR 2005. Slides from K. Grauman and B. Leibe.
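A sketch of the per-cell histogram described above; the cell size and the 9 unsigned-orientation bins are the usual Dalal & Triggs defaults, assumed here rather than taken from the slides:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Gradient-orientation histogram for one cell, weighted by gradient magnitude.

    cell: 2-D grayscale patch (e.g. 8x8 pixels); orientations are unsigned,
    binned over 0-180 degrees.
    """
    gx = np.gradient(cell.astype(float), axis=1)
    gy = np.gradient(cell.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())           # magnitude-weighted vote
    return hist
```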
Algorithm Slides from Andrew Zisserman
Model training using SVM
• Given labeled training descriptors {(xn, tn)}, tn ∈ {−1, +1}
• Find the weight vector w and bias b
• To minimize the (soft-margin) linear-SVM objective ½‖w‖² + C ∑n max(0, 1 − tn(w·xn + b))
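A sketch of this training step using scikit-learn's linear SVM on HOG descriptors; the data here are random placeholders and the C value and descriptor length are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

# X: (n_samples, n_features) HOG descriptors, t: labels in {-1, +1}
# (random placeholder data; 3780 is the usual HOG length for a 64x128 window)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3780))
t = rng.choice([-1, 1], size=200)

clf = LinearSVC(C=0.01)               # linear SVM minimizing the hinge-loss objective
clf.fit(X, t)
w, b = clf.coef_.ravel(), clf.intercept_[0]
scores = X @ w + b                    # detection score: positive => pedestrian
```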
Learned model Slides from Deva Ramanan
Meaning of negative weights
w·x > −b  ⇔  (w+ − w−)·x > −b  ⇔  w+·x − w−·x > −b
The positive-weight part votes for the pedestrian; the negative-weight part votes against it.
A complete model should compete pedestrian/pillar/doorway models.
Slides from Deva Ramanan
Faces and Pedestrians • Relatively easy detection targets, but there are still confusing cases Slide credit: Lana Lazebnik
In general • classify every pixel