
Classifiers


Presentation Transcript


  1. Classifiers • Given a feature representation for images, how do we learn a model for distinguishing features from different classes? (Figure: a decision boundary separating zebra from non-zebra examples)

  2. Classifiers • Given a feature representation for images, how do we learn a model for distinguishing features from different classes? • Today: • Nearest neighbor classifiers • Linear classifiers: support vector machines • Later: • Boosting • Decision trees and forests • Deep neural networks (hopefully)

  3. Review: Nearest Neighbor Classifier

  4. Review: Nearest Neighbor Classifier • Assign the label of the nearest training data point to each test data point (figure from Duda et al.)

  5. Review: K-Nearest Neighbors • For a new point, find the k closest points from the training data • Labels of the k points “vote” to classify (the figure illustrates k = 5; a minimal code sketch follows below)
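The voting rule above is simple enough to sketch in a few lines. The following is a minimal illustration (not from the slides), assuming training features are the rows of a NumPy array and labels are an array of the same length:

import numpy as np
from collections import Counter

def knn_classify(x, train_feats, train_labels, k=5):
    # distances from the query point x to every training point (Euclidean here)
    dists = np.linalg.norm(train_feats - x, axis=1)
    # indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # the labels of the k neighbors "vote"; the majority label wins
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

Any of the distance or similarity functions on the next slide could replace the Euclidean distance here.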

  6. Distance functions for bags of features • Euclidean distance • L1 distance • χ2 distance • Histogram intersection (similarity) • Hellinger kernel (similarity) • (The formulas, which were images in the original slides, are reconstructed below)
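Standard definitions for these distances and similarities, written for two bag-of-features histograms h1 and h2 with N bins (a reconstruction; the χ2 distance is sometimes defined with an extra factor of 1/2):

\[
\begin{aligned}
D_{\mathrm{Euclid}}(h_1,h_2) &= \Big(\sum_{i=1}^{N} \big(h_1(i)-h_2(i)\big)^2\Big)^{1/2}\\
D_{L_1}(h_1,h_2) &= \sum_{i=1}^{N} \big|h_1(i)-h_2(i)\big|\\
D_{\chi^2}(h_1,h_2) &= \sum_{i=1}^{N} \frac{\big(h_1(i)-h_2(i)\big)^2}{h_1(i)+h_2(i)}\\
K_{\cap}(h_1,h_2) &= \sum_{i=1}^{N} \min\big(h_1(i),h_2(i)\big)\\
K_{\mathrm{Hell}}(h_1,h_2) &= \sum_{i=1}^{N} \sqrt{h_1(i)\,h_2(i)}
\end{aligned}
\]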

  7. Review: Linear classifiers

  8. Review: Linear classifiers • Find a linear function (hyperplane) to separate positive and negative examples • Which hyperplane is best?

  9. Support vector machines • Find hyperplane that maximizes the margin between the positive and negative examples C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

  10. Support vector machines • Find hyperplane that maximizes the margin between the positive and negative examples • For support vectors, the margin constraint holds with equality • The distance between a support vector and the hyperplane is 1 / ||w||, so the margin is 2 / ||w|| (derivation reconstructed below) (Figure: support vectors and the margin) C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
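The equations this slide refers to were images in the original; the standard derivation they summarize, for the canonical hyperplane w·x + b = 0, is:

\[
y_i(\mathbf{w}\cdot\mathbf{x}_i + b) = 1 \ \text{ for support vectors}, \qquad
d(\mathbf{x}) = \frac{|\mathbf{w}\cdot\mathbf{x} + b|}{\|\mathbf{w}\|}, \qquad
\text{margin} = \frac{2}{\|\mathbf{w}\|}.
\]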

  11. Finding the maximum margin hyperplane • Maximize the margin 2 / ||w|| • Correctly classify all training data • Together these give a quadratic optimization problem (reconstructed below) C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
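A reconstruction of that quadratic program (the standard hard-margin primal):

\[
\min_{\mathbf{w},\,b}\ \frac{1}{2}\|\mathbf{w}\|^2
\quad\text{subject to}\quad
y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 \ \text{ for all } i.
\]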

  12. Finding the maximum margin hyperplane • Solution: w is a weighted combination of the training points, with learned weights that are nonzero only for the support vectors C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

  13. Finding the maximum margin hyperplane • Solution: b = yi – w·xi for any support vector • Classification function (decision boundary): reconstructed below • Notice that it relies on an inner product between the test point x and the support vectors xi • Solving the optimization problem also involves computing the inner products xi · xj between all pairs of training points C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
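The solution and decision function described on the last two slides were images; the standard forms, with learned weights αi nonzero only for support vectors, are:

\[
\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i, \qquad
b = y_i - \mathbf{w}\cdot\mathbf{x}_i \ \text{ (any support vector)}, \qquad
f(\mathbf{x}) = \operatorname{sign}\Big(\sum_i \alpha_i y_i\,\mathbf{x}_i\cdot\mathbf{x} + b\Big).
\]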

  14. What if the data is not linearly separable? • Separable and non-separable formulations (reconstructed below) • C: tradeoff constant, ξi: slack variable (positive) • Whenever the margin is ≥ 1, ξi = 0 • Whenever the margin is < 1, ξi > 0 (see the full formulation below)
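A reconstruction of the soft-margin (non-separable) formulation the slide describes:

\[
\min_{\mathbf{w},\,b,\,\xi}\ \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_i \xi_i
\quad\text{subject to}\quad
y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0,
\]

so that ξi = 0 when the margin constraint is satisfied and ξi = 1 – yi(w·xi + b) otherwise.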

  15. What if the data is not linearly separable? • The objective balances two terms: maximize the margin and minimize classification mistakes

  16. What if the data is not linearly separable? • Demo: http://cs.stanford.edu/people/karpathy/svmjs/demo (Figure: decision values +1, 0, –1 and the margin)

  17. Nonlinear SVMs • Datasets that are linearly separable work out great • But what if the dataset is just too hard? • We can map it to a higher-dimensional space (Figures: 1-D points on the x axis, and the same points mapped to the (x, x²) plane) Slide credit: Andrew Moore

  18. Nonlinear SVMs • General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x) Slide credit: Andrew Moore

  19. Nonlinear SVMs • The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that K(x,y) = φ(x) · φ(y) • (to be valid, the kernel function must satisfy Mercer’s condition) • This gives a nonlinear decision boundary in the original feature space (reconstructed below) C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
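With the kernel substituted for the explicit inner products, the decision function from slide 13 becomes (a reconstruction):

\[
f(\mathbf{x}) = \operatorname{sign}\Big(\sum_i \alpha_i y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Big).
\]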

  20. Nonlinear kernel: Example • Consider the mapping (the figure suggests x → (x, x²); the induced kernel is worked out below)

  21. Polynomial kernel (formula reconstructed below)
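The example mapping and the polynomial kernel on these two slides were images; a standard reconstruction is the 1-D mapping suggested by the figure together with the general degree-d polynomial kernel:

\[
\varphi(x) = (x,\ x^2), \qquad
\varphi(x)\cdot\varphi(y) = xy + x^2 y^2, \qquad
K(\mathbf{x},\mathbf{y}) = (c + \mathbf{x}\cdot\mathbf{y})^{d}.
\]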

  22. Gaussian kernel • Also known as the radial basis function (RBF) kernel (formula reconstructed below) • The corresponding mapping φ(x) is infinite-dimensional!
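A reconstruction of the Gaussian (RBF) kernel formula (the exact placement of σ varies across texts; another common form uses a γ parameter in place of 1/(2σ²)):

\[
K(\mathbf{x},\mathbf{y}) = \exp\!\Big(-\frac{\|\mathbf{x}-\mathbf{y}\|^2}{2\sigma^2}\Big).
\]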

  23. Gaussian kernel (Figure: decision boundary of a Gaussian-kernel SVM with the support vectors, “SV’s”, marked)

  24. Gaussian kernel • Also known as the radial basis function (RBF) kernel • The corresponding mapping φ(x) is infinite-dimensional! • What is the role of the parameter σ? • What if σ is close to zero? • What if σ is very large?

  25. Kernels for bags of features • Histogram intersection kernel • Hellinger kernel • Generalized Gaussian kernel • D can be L1, Euclidean, χ2 distance, etc. (formulas reconstructed below) J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study, IJCV 2007
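The kernel formulas on this slide were images; reconstructions consistent with the definitions earlier in the deck are, for histograms h1 and h2:

\[
K_{\cap}(h_1,h_2) = \sum_i \min\big(h_1(i),h_2(i)\big), \qquad
K_{\mathrm{Hell}}(h_1,h_2) = \sum_i \sqrt{h_1(i)\,h_2(i)}, \qquad
K_{\mathrm{GG}}(h_1,h_2) = \exp\!\Big(-\tfrac{1}{A}\, D(h_1,h_2)\Big),
\]

where D is the chosen distance and A is a normalization constant (commonly the mean distance over the training set).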

  26. Summary: SVMs for image classification • Pick an image representation (in our case, bag of features) • Pick a kernel function for that representation • Compute the matrix of kernel values between every pair of training examples • Feed the kernel matrix into your favorite SVM solver to obtain support vectors and weights • At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function
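A minimal sketch of that pipeline in Python using scikit-learn’s precomputed-kernel interface. The χ² distance, the generalized Gaussian kernel, and the variables X_train, X_test, y_train are placeholders for whichever representation and kernel you actually pick:

import numpy as np
from sklearn.svm import SVC

def chi2_dist(P, Q, eps=1e-10):
    # pairwise chi-square distances between the rows of P and the rows of Q
    return np.array([[np.sum((p - q) ** 2 / (p + q + eps)) for q in Q] for p in P])

# X_train, X_test: bag-of-features histograms (one row per image); y_train: labels
D_train = chi2_dist(X_train, X_train)
A = D_train.mean()                                  # normalization constant for the kernel
K_train = np.exp(-D_train / A)                      # kernel values between every pair of training examples
clf = SVC(kernel='precomputed').fit(K_train, y_train)

K_test = np.exp(-chi2_dist(X_test, X_train) / A)    # kernel values: each test example vs. each training example
pred = clf.predict(K_test)

At test time the SVM only uses the columns corresponding to its support vectors, matching the description on the slide.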

  27. What about multi-class SVMs? • Unfortunately, there is no “definitive” multi-class SVM formulation • In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs • One vs. others • Training: learn an SVM for each class vs. the others • Testing: apply each SVM to the test example and assign it the class of the SVM that returns the highest decision value • One vs. one • Training: learn an SVM for each pair of classes • Testing: each learned SVM “votes” for a class to assign to the test example
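For illustration only: both strategies exist as wrappers in scikit-learn (OneVsRestClassifier and OneVsOneClassifier are real scikit-learn classes; the data variables are placeholders as before):

from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# one vs. others: one SVM per class against the rest; the highest decision value wins
ovr = OneVsRestClassifier(SVC(kernel='linear')).fit(X_train, y_train)

# one vs. one: one SVM per pair of classes; the pairwise decisions "vote" for a class
ovo = OneVsOneClassifier(SVC(kernel='linear')).fit(X_train, y_train)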

  28. SVMs: Pros and cons • Pros • Many publicly available SVM packages: http://www.kernel-machines.org/software • Kernel-based framework is very powerful and flexible • SVMs work very well in practice, even with very small training sample sizes • Cons • No “direct” multi-class SVM; must combine two-class SVMs • Computation and memory: during training, must compute the matrix of kernel values for every pair of examples • Learning can take a very long time for large-scale problems

  29. SVMs for large-scale datasets • Efficient linear solvers • LIBLINEAR, PEGASOS • Explicit approximate embeddings: define an explicit mapping φ(x) such that φ(x) · φ(y) approximates K(x,y), and train a linear SVM on top of that embedding • Random Fourier features for the Gaussian kernel (Rahimi and Recht, 2007) • Embeddings for additive kernels, e.g., histogram intersection (Maji et al., 2013; Vedaldi and Zisserman, 2012)
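A sketch of the explicit-embedding idea using scikit-learn’s random Fourier features (RBFSampler) feeding a linear SVM; gamma, n_components, and the data variables are illustrative choices, not values from the slide:

from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# RBFSampler provides an explicit feature map whose inner products approximate the Gaussian kernel
model = make_pipeline(RBFSampler(gamma=1.0, n_components=500, random_state=0), LinearSVC())
model.fit(X_train, y_train)
pred = model.predict(X_test)

For additive kernels such as histogram intersection or χ², scikit-learn’s AdditiveChi2Sampler plays the analogous role.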

  30. Summary: Classifiers • Nearest-neighbor and k-nearest-neighbor classifiers • Support vector machines • Linear classifiers • Margin maximization • Non-separable case • The kernel trick • Multi-class SVMs • Large-scale SVMs • Of course, there are many other classifiers out there • Neural networks, boosting, decision trees/forests, …
