This paper discusses the design and performance of robust classifiers for computer vision tasks, focusing on the limitations of margin-based losses and the need for penalizing large positive margins. The authors propose a new robust loss function, called the Tangent loss, which is both margin-enforcing and Bayes-consistent. Experimental results demonstrate the effectiveness of the proposed loss function and the TangentBoost algorithm for robust classification in various challenging datasets.
On the design of robust classifiers for computer vision
Hamed Masnadi-Shirazi, Vijay Mahadevan, Nuno Vasconcelos
Statistical Visual Computing Lab, University of California at San Diego
Computer Vision and Classification
• Classification algorithms (SVMs, Boosting, etc.) minimize the expected value of a loss φ(v) of the margin v = yf(x), which is
   • margin enforcing
   • Bayes consistent.
• Such losses assign
   • a large penalty to points with negative margin
   • a small penalty to points with small positive margin
   • a near-zero penalty to points with large positive margin.
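To make the three penalty regimes concrete, here is a minimal Python sketch (not from the paper) that evaluates three standard margin losses at a negative, a small positive, and a large positive margin:

# Common margin-enforcing losses as functions of the margin v = y*f(x),
# illustrating the large / small / near-zero penalty regimes above.
import numpy as np

def exponential(v):   # AdaBoost loss
    return np.exp(-v)

def logistic(v):      # LogitBoost / logistic regression loss
    return np.log1p(np.exp(-v))

def hinge(v):         # SVM loss
    return np.maximum(0.0, 1.0 - v)

for v in [-2.0, 0.5, 3.0]:   # negative, small positive, large positive
    print(f"v={v:+.1f}  exp={exponential(v):7.3f}  "
          f"logit={logistic(v):6.3f}  hinge={hinge(v):5.3f}")

Note how all three losses are essentially flat for large positive margins: once a point is "correct enough", these losses no longer influence the solution, which is exactly the behavior the paper revisits.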
Computer Vision Datasets
• Large margin losses do not overcome the unique challenges posed by computer vision problems.
• One major difficulty is the prevalence of noise, outliers, and class ambiguity.
• Example: patch-based image classification is inherently outlier-ridden; an image labeled with class “street” contains patches from many other classes.
Robust Classifiers
• Limitation: unbounded penalty for large negative margins.
• Improvements:
   1. linearly growing Logit loss (LogitBoost) [Friedman et al. 2000]
   2. bounded Savage loss (SavageBoost) [Masnadi-Shirazi & Vasconcelos 2008]
• But negative margins are not the whole problem.
Penalizing Large Positive Margins
• Linearly separable problem: uniform in the vertical direction; Gaussian, equal σ², μ = ±3 in the horizontal direction.
• Bayes decision rule (BDR): the line x = 0.
• Impact of a single outlier at (−2, 0):
   • the decision boundaries of all existing losses move to x ≈ −2.3 (a rough numerical sketch follows below).
• Tangent loss:
   • penalizes both large positive and large negative margins
   • discourages solutions that are “too correct”.
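The following sketch qualitatively reproduces the mechanism; the sample size, σ = 1, the vertical range, the small ridge term, and direct numerical minimization of the empirical exponential risk are all assumptions of the sketch, not the paper's protocol. The point is the direction of the effect: one mislabeled point pulls the fitted boundary away from x = 0.

# Sketch of the outlier experiment: two Gaussian classes (mu = -3, +3),
# a uniform vertical coordinate, and one mislabeled positive point at
# (-2, 0). A linear predictor f(x) = w.x + b is fit by minimizing the
# empirical exponential risk (plus a tiny ridge term to keep it finite).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.vstack([
    np.column_stack([rng.normal(-3, 1, n), rng.uniform(-2, 2, n)]),
    np.column_stack([rng.normal(+3, 1, n), rng.uniform(-2, 2, n)]),
])
y = np.array([-1] * n + [+1] * n)

def exp_risk(params, X, y):
    w, b = params[:2], params[2]
    margins = y * (X @ w + b)
    # clip to avoid overflow; the ridge keeps the minimizer finite
    return np.mean(np.exp(np.clip(-margins, -30, 30))) + 1e-3 * (w @ w)

def boundary_x(X, y):
    res = minimize(exp_risk, x0=[1.0, 0.0, 0.0], args=(X, y),
                   method='Nelder-Mead')
    w, b = res.x[:2], res.x[2]
    return -b / w[0]  # where the line w.x + b = 0 crosses x2 = 0

print('without outlier: x =', round(boundary_x(X, y), 2))  # close to 0
X_out, y_out = np.vstack([X, [[-2.0, 0.0]]]), np.append(y, +1)
print('with outlier:    x =', round(boundary_x(X_out, y_out), 2))  # pulled left

The exact shift depends on the sample and the optimizer; what is robust is that the exponentially growing penalty on the outlier's negative margin always pulls the boundary toward the outlier.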
Risk Minimization
• Define:
   • feature vector x
   • class label y ∈ {−1, 1}
   • classifier h(x) = sgn[f(x)]
   • predictor f(x)
   • loss function φ(yf(x))
• Minimizing the classification risk E_{x,y}[φ(yf(x))] is equivalent to minimizing the conditional risk E_{y|x}[φ(yf(x))] for all x.
Bayes Consistent Loss Functions
• Expectation: the conditional risk of a loss φ is C_φ(η, f) = η φ(f) + (1 − η) φ(−f), where η = P(y = 1|x).
• Minimize: find the optimal predictor f*_φ(η) = argmin_f C_φ(η, f).
• Plug back: the minimum conditional risk is C*_φ(η) = C_φ(η, f*_φ(η)).
• Bayes consistent? The loss is Bayes consistent when sgn[f*_φ(η)] = sgn[2η − 1], i.e., the minimizer implements the Bayes decision rule.
• Convexity? Not required: non-convex losses can also be Bayes consistent.
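As a concrete instance of this recipe (a sketch, not the authors' code), the snippet below minimizes the conditional risk of the exponential loss numerically and recovers the known closed forms f*(η) = ½ log(η/(1 − η)) and C*(η) = 2√(η(1 − η)):

# Verify the "minimize, plug back" recipe for the exponential loss
# phi(v) = exp(-v): the numerical minimizer of the conditional risk
# C(eta, f) = eta*exp(-f) + (1 - eta)*exp(f) matches the closed forms.
import numpy as np
from scipy.optimize import minimize_scalar

for eta in [0.2, 0.5, 0.9]:
    risk = lambda f: eta * np.exp(-f) + (1 - eta) * np.exp(f)
    res = minimize_scalar(risk, bounds=(-10, 10), method='bounded')
    f_star = 0.5 * np.log(eta / (1 - eta))   # optimal predictor (link)
    C_star = 2 * np.sqrt(eta * (1 - eta))    # minimum conditional risk
    print(f"eta={eta}: f*={res.x:+.3f} (exact {f_star:+.3f}), "
          f"C*={res.fun:.3f} (exact {C_star:.3f})")

Since sgn[f*(η)] = sgn[2η − 1] here, the exponential loss is Bayes consistent; the same check applies to any candidate loss.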
Probability Elicitation and Loss Design
• Connection between risk minimization and probability elicitation [Masnadi-Shirazi & Vasconcelos NIPS08]: a new path for classifier design.
   1. choose a minimum conditional risk C*_φ(η) and a link f*_φ(η) such that C*_φ(η) is strictly concave and f*_φ(η) is invertible
   2. plug them into the elicitation construction to obtain the loss φ(v)
• then φ is guaranteed to be Bayes consistent!
• Principled derivation/design of novel Bayes consistent loss functions.
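The sympy sketch below illustrates the recipe; the specific reward form I(η, 1) = J(η) + (1 − η)J′(η) with J = −C* is my reading of the NIPS08 construction, not a verbatim quote. With the least-squares minimum risk C*(η) = 4η(1 − η) and link f*(η) = 2η − 1 it recovers the least-squares loss (v − 1)²:

# Elicitation-based loss design: pick C*(eta) and a link f*(eta),
# then read off the loss phi(v) = -I([f*]^{-1}(v), 1).
import sympy as sp

eta, v = sp.symbols('eta v', real=True)

C_star = 4 * eta * (1 - eta)          # minimum conditional risk
J = -C_star                           # maximal expected reward, J = -C*
I1 = J + (1 - eta) * sp.diff(J, eta)  # reward for class y = +1

inv_link = (v + 1) / 2                # inverse of f*(eta) = 2*eta - 1
print(sp.factor(-I1.subs(eta, inv_link)))   # -> (v - 1)**2, least squares

# Swapping in the tangent inverse link (next slides) yields the Tangent loss:
inv_tan = sp.atan(v) + sp.Rational(1, 2)
print(sp.factor(-I1.subs(eta, inv_tan)))    # -> (2*atan(v) - 1)**2

The design choice is that robustness is controlled through the link and the minimum risk rather than by picking φ directly, so Bayes consistency comes for free.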
Robust Loss Properties
• The previous discussion suggests that a robust loss should have the following properties:
   1. bounded penalty for large negative margins: φ(v) tends to a finite constant as v → −∞
   2. smaller bounded penalty for large positive margins: φ(v) tends to a smaller finite constant as v → +∞
   3. margin enforcing: the minimum of φ is attained at a margin v* > 0.
Robust Loss Requirements
• It can be shown that, under Bayes consistency, the three properties are satisfied if and only if the link f*_φ(η) and minimum risk C*_φ(η) obey a set of boundedness conditions.
• We seek to design a loss function with the three properties through selection of an f*_φ(η) and a C*_φ(η) that comply with these requirements.
Tangent Loss
• Existing links f*_φ do not comply: introduce the Tangent link.
   • Tangent link: f*(η) = tan(η − 1/2)
   • Least squares minimum risk: C*(η) = 4η(1 − η)
   • Resulting Tangent loss: φ(v) = (2 atan(v) − 1)², as checked in the sketch below.
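A quick numeric sanity check (a sketch under the formulas reconstructed above, not the authors' code) that the Tangent loss satisfies the three robustness properties and induces the tangent link:

# Tangent loss phi(v) = (2*arctan(v) - 1)^2: bounded on both sides,
# smaller bound for positive margins, minimum at a positive margin.
import numpy as np
from scipy.optimize import minimize_scalar

def tangent_loss(v):
    return (2.0 * np.arctan(v) - 1.0) ** 2

print(tangent_loss(-1e9))   # ~ (pi + 1)^2 = 17.15  (bounded, v -> -inf)
print(tangent_loss(+1e9))   # ~ (pi - 1)^2 =  4.59  (smaller bound, v -> +inf)

v = np.linspace(-5, 5, 100001)
print(v[np.argmin(tangent_loss(v))])  # ~ tan(1/2) = 0.546 > 0 (margin enforcing)

# Bayes consistency: the conditional-risk minimizer matches the tangent link.
eta = 0.8
f_star = minimize_scalar(
    lambda f: eta * tangent_loss(f) + (1 - eta) * tangent_loss(-f),
    bounds=(-2, 2), method='bounded').x
print(f_star, np.tan(eta - 0.5))      # both ~ 0.309: f*(eta) = tan(eta - 1/2)

Because the loss grows again past v* = tan(1/2), solutions that are "too correct" are actively discouraged, which is the behavior motivated by the outlier example earlier.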
Experiments: Tracking
• Two noisy video clips from [Mahadevan and Vasconcelos CVPR-09].
• Method: the Discriminant Saliency Tracker (DST) of [Mahadevan and Vasconcelos CVPR-09], which maps frames to a feature space where the target is salient relative to the background; TangentBoost is used to combine the saliency maps in a discriminant manner.
Conclusion
• We argue that being “too correct” should be penalized for robust classification.
• We derive a set of requirements that a robust, Bayes consistent loss function must satisfy.
• We derive the Tangent loss, which is both robust and Bayes consistent.
• We demonstrate that the TangentBoost algorithm achieves state-of-the-art results on a variety of challenging datasets.