Adding Unlabeled Samples to Categories by Learned Attributes Jonghyun Choi, Mohammad Rastegari, Ali Farhadi, Larry S. Davis PPT modified by Elliot Crowley
Constructing a Training Set is Expensive • Human labeling is expensive • Hard to find training samples for some categories • Heavy-tailed distribution of training set sizes [Figure: number of training samples per category in the SUN 09 dataset, Salakhutdinov, Torralba, Tenenbaum (CVPR'11)] • Existing solutions • Semi-supervised learning: assumes unlabeled data are drawn from the same distribution as the labeled data • Active learning: requires humans in the loop • This work: adding unlabeled data without modeling a distribution or keeping humans in the loop
Long Tail Problem • Training sets can have lots of objects in one pose • But not in another.
Expanding the Training Set by Attributes • Given a small number of labeled training samples • Expand the visual boundary of a category to build a better visual classifier [Diagram: starting from the initial training set, samples from a large unlabeled image pool (from the web) are added back; finding by low-level features retrieves images similar to specific sample images, while finding by attributes (animal-like shape, whitish color, dotted) retrieves more varied examples]
Why Attributes? • Low-level features are • Specific to the shape, color, and pose of the given seeds • Expand the visual boundary only into the known region • Add examples similar to the current set • Attributes are • A more general mid-level description than low-level features (e.g., dotted) • Able to find visually unseen examples while maintaining the traits of the current set
Why Attributes Again? • Not strictly necessary • The training set should have at least one example of each pose • The remainder are found by the exemplar method (the strong point of this paper)
How to Add Samples by Attributes? • Given candidate attributes, pre-learned from auxiliary data • Learn a discriminative subset of attributes • Find samples that are confidently classified by combinations of those attributes
Formulation • A joint optimization for • Learning a classifier in the visual feature space (w_c^v) • Learning a classifier in the attribute space (w_c^a), with a selection criterion • Finding the samples to add (I) • Non-convex: a mixed integer program (NP-complete), mixing continuous variables (the classifiers) and discrete variables (the sample selection), coupled by a mutual exclusion constraint • Solution: block coordinate descent (see the sketch below)
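To make the alternation concrete, here is a minimal Python sketch of the block coordinate descent idea, assuming scikit-learn's LinearSVC and hypothetical variable and helper names (X_vis, A_attr, lam, the greedy mutual-exclusion rule); it illustrates alternating between continuous classifier updates and discrete sample selection, not the authors' exact objective.

```python
# Minimal sketch (not the authors' exact algorithm): alternate between
# (1) fitting the continuous classifier blocks with the selection fixed and
# (2) making the discrete selection with the classifiers fixed.
import numpy as np
from sklearn.svm import LinearSVC

def expand_training_set(X_vis, A_attr, y, X_vis_u, A_attr_u,
                        categories, lam=5, n_rounds=3):
    """X_vis / A_attr: labeled visual features / attribute codes.
       X_vis_u / A_attr_u: the same for the unlabeled pool."""
    selected = {c: [] for c in categories}           # indices added per category
    for _ in range(n_rounds):
        # --- continuous block: one-vs-rest classifiers per category ---
        clf_v, clf_a = {}, {}
        for c in categories:
            idx = selected[c]
            Xv = np.vstack([X_vis, X_vis_u[idx]]) if idx else X_vis
            Aa = np.vstack([A_attr, A_attr_u[idx]]) if idx else A_attr
            yy = np.concatenate([y == c, np.ones(len(idx), bool)])
            clf_v[c] = LinearSVC().fit(Xv, yy)       # visual-feature classifier
            clf_a[c] = LinearSVC().fit(Aa, yy)       # attribute-space classifier
        # --- discrete block: pick top-lambda confident unlabeled samples ---
        # Selection here uses attribute-space confidence; the visual classifier
        # is the one that is ultimately improved and evaluated.
        scores = {c: clf_a[c].decision_function(A_attr_u) for c in categories}
        taken = set(i for idx in selected.values() for i in idx)
        for c in categories:
            order = np.argsort(-scores[c])
            picks = [i for i in order if i not in taken][:lam]
            selected[c].extend(picks)
            taken.update(picks)                      # mutual exclusion
    return selected
```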
Formulation Detail [Equation annotations: a max-margin classifier on the visual feature space, the attribute mapper, and a max-margin classifier on the attribute space with a top-λ selector, coupled by a mutual exclusion constraint; a schematic rendering follows below]
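To visualize how these components could fit together, here is a schematic LaTeX sketch of the joint objective; the notation (φ for the attribute mapper, z for the selection indicator, λ for the selection budget) is assumed for illustration and is not copied from the paper.

```latex
% Schematic only: \phi, z and \lambda are assumed notation.
\[
\begin{aligned}
\min_{\,w^v_c,\ w^a_c,\ z\,}\quad
 & \sum_{c}\Big[\,\ell_{\mathrm{hinge}}\big(w^v_c;\ X_L \cup X_U(z_c)\big)
   \;+\; \ell_{\mathrm{hinge}}\big(w^a_c;\ \phi(X_L) \cup \phi(X_U(z_c))\big)\Big] \\
\text{s.t.}\quad
 & \sum_{i} z_{ci} \le \lambda \ \ \text{(top-$\lambda$ selector)},\qquad
   \sum_{c} z_{ci} \le 1 \ \ \text{(mutual exclusion)},\qquad
   z_{ci}\in\{0,1\}.
\end{aligned}
\]
```

Here φ projects images into the attribute space and z_ci = 1 means unlabeled sample i is assigned to category c; fixing z makes the classifier subproblems convex, and fixing the classifiers reduces the selection to a simple ranking, which is what block coordinate descent exploits.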
How to Build the Attribute Mapper • Candidate attribute set • Labeling attributes by hand is expensive • Automatic attribute space generation by [1] • Learned offline with any labeled set available on the web • Attributes essentially reduce the feature space into binary decision boundaries. [1] Mohammad Rastegari, Ali Farhadi, David Forsyth, "Attribute Discovery via Predictable Discriminative Binary Codes", ECCV 2012
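As a rough illustration of what such a mapper produces, here is a hedged sketch in which each attribute is a linear hyperplane in low-level feature space and the attribute code is the sign of its response; the projection matrix W and offsets b are assumed to have been learned offline on auxiliary data (e.g., in the style of [1]).

```python
import numpy as np

# Sketch of an attribute mapper: each of the K attributes is a hyperplane
# (w_k, b_k) learned offline on auxiliary labeled data; an image's attribute
# code records which side of each hyperplane it falls on.
def attribute_mapper(X, W, b):
    """X: (n, d) low-level features -> (n, K) binary attribute codes."""
    return (X @ W + b > 0).astype(np.uint8)

# Usage: project both the labeled seeds and the unlabeled pool into the
# same attribute space before selecting samples.
# A_labeled   = attribute_mapper(X_labeled, W, b)
# A_unlabeled = attribute_mapper(X_unlabeled, W, b)
```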
Overview Diagram [Diagram: auxiliary data is used to build the attribute space; the initial labeled samples and the unlabeled samples are both projected into it; useful attributes are found; confident examples are chosen to add]
Two Types of Attributes • Categorical: common traits of a category (e.g., animal-like shape, dotted) • Exemplar: specific traits of a single sample [Diagram: starting from the initial labeled training examples, some unlabeled images are selected by categorical attributes and others by exemplar attributes]
Exemplar Attributes • Exemplar-SVM [1] • Captures example-specific information • But requires lots of negative samples to build • Our approach • Compare the retrieval sets obtained by a leave-one-out classifier (trained on all training samples except one) and the full-set classifier; the samples retrieved only by the full-set classifier, and scoring high, reflect the held-out sample's specific traits • Use these exemplar attributes to modify the top-λ selector term in the formulation (a sketch follows below) [1] Tomasz Malisiewicz, Abhinav Gupta, Alexei A. Efros, "Ensemble of Exemplar-SVMs for Object Detection and Beyond", ICCV 2011
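Here is a hedged Python sketch of that leave-one-out comparison: the unlabeled samples that the full-set classifier retrieves but the leave-one-out classifier does not are attributed to the held-out example. The variable names and the top-k retrieval rule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def exemplar_retrieval_difference(A_pos, A_neg, A_unlabeled, k=50):
    """A_pos: attribute codes of the category's labeled samples.
       A_neg: attribute codes of other categories' samples.
       Returns, per positive sample, the unlabeled indices it is 'responsible' for."""
    X_full = np.vstack([A_pos, A_neg])
    y_full = np.concatenate([np.ones(len(A_pos)), np.zeros(len(A_neg))])
    full = LinearSVC().fit(X_full, y_full)
    full_top = set(np.argsort(-full.decision_function(A_unlabeled))[:k])

    responsible = {}
    for i in range(len(A_pos)):
        keep = [j for j in range(len(A_pos)) if j != i]   # leave sample i out
        X_loo = np.vstack([A_pos[keep], A_neg])
        y_loo = np.concatenate([np.ones(len(keep)), np.zeros(len(A_neg))])
        loo = LinearSVC().fit(X_loo, y_loo)
        loo_top = set(np.argsort(-loo.decision_function(A_unlabeled))[:k])
        responsible[i] = full_top - loo_top   # retrieved only thanks to sample i
    return responsible
```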
Experimental Results • Dataset • A subset of ImageNet (ILSVRC 2010) • Target set: 11 categories (initial training: 10 per category, testing: 500 per category) • Both distinctive and fine-grained categories • 6 vegetables (mashed potato, orange, lemon, green onion, acorn, coffee bean), 5 dog breeds (Golden Retriever, Yorkshire Terrier, Greyhound, Dalmatian, Miniature Poodle) • Unlabeled set: randomly chosen samples from all 1,000 ILSVRC categories • Auxiliary set: categories disjoint from the target set • Download available soon at http://umiacs.umd.edu/~jhchoi/addingbyattr
Comparison with Other Approaches • Red: using low-level features only • Blue: our approach • Categorical attributes only (Cat.) • Exemplar + categorical attributes (E+C) [Plot: performance decreases when expanding with low-level features only, but increases with the attribute-based approaches]
Purity of Added Samples • Purity = (# added samples of the same category) / (total # added samples) • Not very pure, but still improves mAP!
Saturation of Performance • The increase in mAP from the method quickly saturates as the initial training set grows. • Most useful for small training sets.
Exemplar Attributes • Comparing our exemplar attribute learning against exemplar-SVMs trained with a large negative set
Summary • A new way of expanding a training set by attributes • A joint optimization formulation • Solved by block coordinate descent • Uses both categorical and exemplar attributes to find examples • Future work • Constraining the expansion path • Attributes may mislead as more samples are added, due to low purity