
Recognition: A machine learning approach



Presentation Transcript


  1. Recognition: A machine learning approach Slides adapted from Fei-Fei Li, Rob Fergus, Antonio Torralba, Kristen Grauman, and Derek Hoiem

  2. The machine learning framework • Apply a prediction function to a feature representation of the image to get the desired output: f( [image] ) = “apple”, f( [image] ) = “tomato”, f( [image] ) = “cow”

  3. The machine learning framework y = f(x) • Training: given a training set of labeled examples {(x1, y1), …, (xN, yN)}, estimate the prediction function f by minimizing the prediction error on the training set • Testing: apply f to a never-before-seen test example x and output the predicted value y = f(x) (y: output; f: prediction function; x: image feature)
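The train/test framework above can be sketched in a few lines. This is an illustrative toy only, assuming a 1-D feature and a least-squares linear predictor; the function names are made up for the example.

```python
import numpy as np

def train(X, y):
    """Estimate f by minimizing squared prediction error on the training set."""
    # Append a bias column and solve the least-squares problem.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Apply the learned f to never-before-seen test examples."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

# Toy training set: y = 2x + 1 exactly.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([1.0, 3.0, 5.0, 7.0])

w = train(X_train, y_train)
y_pred = predict(w, np.array([[4.0]]))  # a test example unseen during training
```

The same two-phase structure (fit f on labeled data, then apply f to new inputs) holds no matter how complex the model is.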

  4. Steps • Training: Training Images → Image Features (+ Training Labels) → Learned model • Testing: Test Image → Image Features → Learned model → Prediction Slide credit: D. Hoiem

  5. Features • Raw pixels • Histograms • GIST descriptors • …
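Any of the listed features maps an image to a fixed-length vector. As one sketch, a normalized intensity histogram (assuming an 8-bit grayscale image stored as a NumPy array; the function name is illustrative):

```python
import numpy as np

def histogram_feature(image, n_bins=16):
    """Map an image to a fixed-length, normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, 256))
    # Normalize so the feature is comparable across image sizes.
    return hist / hist.sum()

# Toy 32x32 "image" with random 8-bit pixel values.
image = np.random.default_rng(0).integers(0, 256, size=(32, 32))
feat = histogram_feature(image)
```

Unlike raw pixels, the histogram discards spatial layout; richer descriptors such as GIST keep coarse spatial structure as well.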

  6. Classifiers: Nearest neighbor f(x) = label of the training example nearest to x • All we need is a distance function for our inputs • No training required! [Figure: a test example among training examples from classes 1 and 2]
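The nearest-neighbor rule really is this short. A minimal sketch, assuming Euclidean distance (the slide leaves the distance function open) and 2-D toy points:

```python
import numpy as np

def nearest_neighbor(x, X_train, y_train):
    """Return the label of the training example nearest to x (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

X_train = np.array([[0.0, 0.0], [0.2, 0.1],   # class 1 cluster
                    [5.0, 5.0], [5.1, 4.9]])  # class 2 cluster
y_train = np.array([1, 1, 2, 2])

label = nearest_neighbor(np.array([4.8, 5.2]), X_train, y_train)
```

Note there is no training step at all: the "model" is the training set itself, plus the distance function.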

  7. Classifiers: Linear • Find a linear function to separate the classes: f(x) = sgn(w · x + b)
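The decision rule f(x) = sgn(w · x + b) can be sketched directly. Here the weights are hand-picked to separate two toy clusters, not learned; in practice w and b would come from a training algorithm such as a perceptron or SVM.

```python
import numpy as np

def linear_classify(x, w, b):
    """Linear decision rule: f(x) = sgn(w . x + b)."""
    return np.sign(np.dot(w, x) + b)

w = np.array([1.0, 1.0])
b = -5.0  # decision boundary: x1 + x2 = 5

print(linear_classify(np.array([1.0, 1.0]), w, b))  # -1.0 (below the boundary)
print(linear_classify(np.array([4.0, 4.0]), w, b))  # 1.0 (above the boundary)
```

The sign of w · x + b tells you which side of the hyperplane x falls on, which is the entire classification decision.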

  8. Recognition task and supervision • Images in the training set must be annotated with the “correct answer” that the model is expected to produce (e.g., “contains a motorbike”)

  9. Spectrum of supervision: fully supervised → “weakly” supervised → unsupervised • The definition depends on the task

  10. Generalization • How well does a learned model generalize from the data it was trained on to a new test set? Training set (labels known) Test set (labels unknown)

  11. Generalization • Components of generalization error • Bias: how much does the average model over all training sets differ from the true model? • Error due to inaccurate assumptions/simplifications made by the model • Variance: how much do models estimated from different training sets differ from each other? • Underfitting: model is too “simple” to represent all the relevant class characteristics • High bias and low variance • High training error and high test error • Overfitting: model is too “complex” and fits irrelevant characteristics (noise) in the data • Low bias and high variance • Low training error and high test error
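Underfitting and overfitting are easy to see numerically. A sketch, assuming a noisy 1-D regression task and polynomial models of increasing complexity (all details of this setup are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy samples of sin(2*pi*x): the "true model" plus observation noise.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(degree, x, y):
    """Fit a degree-`degree` polynomial to the training set; return MSE on (x, y)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

for d in (1, 3, 9):  # too simple, about right, too complex
    print(d, mse(d, x_train, y_train), mse(d, x_test, y_test))
```

The degree-1 model underfits (high training and test error); the degree-9 model drives training error to essentially zero by fitting the noise, which is the overfitting signature from the slide.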

  12. Bias-variance tradeoff [Figure: training and test error vs. model complexity — underfitting (high bias, low variance) at low complexity, overfitting (low bias, high variance) at high complexity; test error is U-shaped while training error keeps falling] Slide credit: D. Hoiem

  13. Bias-variance tradeoff [Figure: test error vs. model complexity for few vs. many training examples — with more training data, higher-complexity (low bias, high variance) models become viable] Slide credit: D. Hoiem

  14. Effect of Training Size [Figure: for a fixed prediction model, training and testing error vs. number of training examples — the gap between the curves is the generalization error, and it shrinks as training size grows] Slide credit: D. Hoiem

  15. Datasets • Circa 2001: 5 categories, 100s of images per category • Circa 2004: 101 categories • Today: up to thousands of categories, millions of images

  16. Caltech 101 & 256 http://www.vision.caltech.edu/Image_Datasets/Caltech101/ http://www.vision.caltech.edu/Image_Datasets/Caltech256/ Fei-Fei, Fergus, Perona, 2004 (Caltech-101); Griffin, Holub, Perona, 2007 (Caltech-256)

  17. Caltech-101: Intraclass variability

  18. The PASCAL Visual Object Classes Challenge (2005-present) http://pascallin.ecs.soton.ac.uk/challenges/VOC/ Challenge classes: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

  19. The PASCAL Visual Object Classes Challenge (2005-present) http://pascallin.ecs.soton.ac.uk/challenges/VOC/ • Main competitions • Classification: For each of the twenty classes, predict the presence/absence of an example of that class in the test image • Detection: Predict the bounding box and label of each object from the twenty target classes in the test image

  20. The PASCAL Visual Object Classes Challenge (2005-present) http://pascallin.ecs.soton.ac.uk/challenges/VOC/ • “Taster” challenges • Segmentation: Generating pixel-wise segmentations giving the class of the object visible at each pixel, or "background" otherwise • Person layout: Predicting the bounding box and label of each part of a person (head, hands, feet)

  21. The PASCAL Visual Object Classes Challenge (2005-present) http://pascallin.ecs.soton.ac.uk/challenges/VOC/ • “Taster” challenges • Action classification

  22. LabelMe http://labelme.csail.mit.edu/ Russell, Torralba, Murphy, Freeman, 2008

  23. 80 Million Tiny Images http://people.csail.mit.edu/torralba/tinyimages/

  24. ImageNet http://www.image-net.org/
