Max-Margin Latent Variable Models. M. Pawan Kumar. Kevin Miller, Rafi Witten, Tim Tang, Danny Goodman, Haithem Turki, Dan Preston, Dan Selsam, Andrej Karpathy. Ben Packer. Daphne Koller.
Computer Vision Data. Available training data by annotation type (log scale): ~2,000 images with segmentation annotations; ~12,000 with bounding boxes; >14 M with image-level labels ("Chair", "Car"); >6 B with noisy labels. Learn with missing information (latent variables).
Outline • Two Types of Problems • Latent SVM (Background) • Self-Paced Learning • Max-Margin Min-Entropy Models • Discussion
Annotation Mismatch. Learn to classify an image. Image x, latent variable h, annotation a = "Deer". Mismatch between desired and available annotations. Exact value of latent variable is not "important".
Annotation Mismatch. Learn to classify a DNA sequence. Sequence x, latent variables h, annotation a ∈ {+1, -1}. Mismatch between desired and possible annotations. Exact value of latent variable is not "important".
Output Mismatch. Learn to segment an image. Image x, output y.
Output Mismatch. Learn to segment an image. (x, a) and (a, h): "Bird".
Output Mismatch. Learn to segment an image. (x, a) and (a, h): "Cow". Mismatch between desired output and available annotations. Exact value of latent variable is important.
Output Mismatch. Learn to classify actions. Image x, image-level annotation "jumping", per-person latent labels ha, hb ∈ {+1, -1} (ha = +1 or ha = -1 depending on which person is jumping). Mismatch between desired output and available annotations. Exact value of latent variable is important.
Outline • Two Types of Problems • Latent SVM (Background) • Self-Paced Learning • Max-Margin Min-Entropy Models • Discussion
Latent SVM. Andrews et al., 2001; Smola et al., 2005; Felzenszwalb et al., 2008; Yu and Joachims, 2009. Image x, latent variable h, joint features Φ(x, a, h), parameters w, annotation a = "Deer". Prediction: (a(w), h(w)) = argmax_{a,h} wᵀΦ(x, a, h).
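As a concrete illustration of the prediction rule, here is a minimal sketch that enumerates a small discrete space of annotations and latent variables; `phi`, `labels`, and `latent_vals` are illustrative placeholders, not objects from the talk.

```python
import numpy as np

def latent_svm_predict(w, x, labels, latent_vals, phi):
    """Joint prediction (a(w), h(w)) = argmax_{a,h} w^T phi(x, a, h),
    computed by exhaustive enumeration over a small discrete space."""
    best_score, best_pair = -np.inf, None
    for a in labels:
        for h in latent_vals:
            score = w @ phi(x, a, h)
            if score > best_score:
                best_score, best_pair = score, (a, h)
    return best_pair
```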
Parameter Learning. Score of the best completion of the ground-truth annotation > score of all other outputs.
Parameter Learning. max_h wᵀΦ(xi, ai, h) > wᵀΦ(xi, a, h)
Parameter Learning (Annotation Mismatch). min ||w||² + C Σi ξi, s.t. max_h wᵀΦ(xi, ai, h) ≥ wᵀΦ(xi, a, h) + Δ(ai, a) − ξi, for all a.
Optimization. Repeat until convergence:
1. Impute latent variables: hi* = argmax_h wᵀΦ(xi, ai, h)
2. Update w by solving a convex problem:
   min ||w||² + C Σi ξi
   s.t. wᵀΦ(xi, ai, hi*) − wᵀΦ(xi, a, h) ≥ Δ(ai, a) − ξi
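A rough sketch of this alternation, assuming small discrete spaces so the argmax steps can be done by enumeration; the convex update of step 2 is approximated here by subgradient steps on the hinge loss rather than the exact QP, and `phi` and `delta` are placeholder callbacks.

```python
import numpy as np

def cccp_latent_svm(data, w, labels, latent_vals, phi, delta,
                    C=1.0, lr=1e-3, outer_iters=20, inner_iters=100):
    """Alternate between imputing h_i* and updating w. The exact
    algorithm solves a convex problem in step 2; a subgradient step
    on the hinge loss stands in for it in this sketch."""
    for _ in range(outer_iters):
        # Step 1: impute latent variables for the ground-truth annotation
        imputed = [max(latent_vals, key=lambda h: w @ phi(x, a_true, h))
                   for (x, a_true) in data]
        # Step 2: update w on the resulting fully-supervised problem
        for _ in range(inner_iters):
            grad = w.copy()  # gradient of the ||w||^2 regularizer
            for (x, a_true), h_star in zip(data, imputed):
                # most violated (a, h) under margin rescaling
                a_hat, h_hat = max(
                    ((a, h) for a in labels for h in latent_vals),
                    key=lambda ah: w @ phi(x, *ah) + delta(a_true, ah[0]))
                if delta(a_true, a_hat) + w @ phi(x, a_hat, h_hat) > w @ phi(x, a_true, h_star):
                    grad += C * (phi(x, a_hat, h_hat) - phi(x, a_true, h_star))
            w -= lr * grad
    return w
```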
Outline • Two Types of Problems • Latent SVM (Background) • Self-Paced Learning • Max-Margin Min-Entropy Models • Discussion
Self-Paced Learning. Kumar, Packer and Koller, NIPS 2010. "Math is for losers!!" 1 + 1 = 2; 1/3 + 1/6 = 1/2; e^{iπ} + 1 = 0. FAILURE … BAD LOCAL MINIMUM
Self-Paced Learning. Kumar, Packer and Koller, NIPS 2010. "Euler was a genius!!" 1 + 1 = 2; 1/3 + 1/6 = 1/2; e^{iπ} + 1 = 0. SUCCESS … GOOD LOCAL MINIMUM
Optimization. Repeat until convergence:
1. Impute latent variables: hi* = argmax_h wᵀΦ(xi, ai, h)
2. Update w and v, with vi ∈ {0, 1}, by solving:
   min ||w||² + C Σi ξi vi − λ Σi vi
   s.t. wᵀΦ(xi, ai, hi*) − wᵀΦ(xi, a, h) ≥ Δ(ai, a) − ξi
3. Anneal λ ← λμ
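A minimal sketch of the self-paced loop under the formulation above: for fixed w, the optimal vi equals 1 exactly when C·ξi < λ, so "easy" (low-loss) examples are selected, the model is refit on them, and λ is annealed (μ > 1 assumed here) so that harder examples enter later. The `loss` and `refit` callbacks are placeholders.

```python
def self_paced_learning(data, w_init, loss, refit, lam=0.1, mu=1.3,
                        C=1.0, outer_iters=20):
    """Alternate between selecting 'easy' examples (weighted loss below
    the reward lam) and refitting the model on the selected subset;
    anneal lam so more examples are admitted over time."""
    w = w_init
    for _ in range(outer_iters):
        losses = [loss(w, x, a) for (x, a) in data]
        v = [1 if C * l < lam else 0 for l in losses]   # v_i in {0, 1}
        selected = [ex for ex, vi in zip(data, v) if vi]
        if selected:
            w = refit(w, selected)   # e.g. one CCCP pass on the easy set
        lam *= mu                    # lambda <- lambda * mu
    return w
```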
Image Classification Mammals Dataset 271 images, 6 classes 90/10 train/test split 5 folds
Image Classification. Kumar, Packer and Koller, NIPS 2010. [Plots: results for CCCP vs. SPL.] HOG-based model, Dalal and Triggs, 2005.
Image Classification PASCAL VOC 2007 Dataset ~ 5000 images Car vs. Not-Car 50/50 train/test split 5 folds
Image Classification. Witten, Miller, Kumar, Packer and Koller, In Preparation. [Plot: objective value; HOG + Dense SIFT + Dense Color SIFT; SPL+.] Different features choose different "easy" samples.
Image Classification. Witten, Miller, Kumar, Packer and Koller, In Preparation. [Plot: mean average precision; HOG + Dense SIFT + Dense Color SIFT; SPL+.] Different features choose different "easy" samples.
Motif Finding UniProbe Dataset ~ 40,000 sequences Binding vs. Not-Binding 50/50 train/test split 5 folds
Motif Finding. Kumar, Packer and Koller, NIPS 2010. [Plots: results for CCCP vs. SPL.] Motif + Markov background model, Yu and Joachims, 2009.
Semantic Segmentation. VOC Segmentation 2009: train 1,274 images, validation 225 images, test 750 images. Stanford Background: train 572 images, validation 53 images, test 90 images.
Semantic Segmentation. Additional data: VOC Detection 2009 bounding-box data, train 1,564 images; ImageNet image-level data, train 1,000 images.
Semantic Segmentation. Kumar, Turki, Preston and Koller, ICCV 2011. [Plots: results for SPL vs. CCCP vs. SUP.] SUP: supervised learning (segmentation data only). Region-based model, Gould, Fulton and Koller, 2009.
Action Classification. PASCAL VOC 2011 bounding-box data: train 3,000 instances, test 3,000 instances. Noisy data: train 10,000 images.
Action Classification. Packer, Kumar, Tang and Koller, In Preparation. [Plot: results for SPL vs. CCCP vs. SUP.] Poselet-based model, Maji, Bourdev and Malik, 2011.
Self-Paced Multiple Kernel Learning. Kumar, Packer and Koller, In Preparation. Integers: 1 + 1 = 2; rational numbers: 1/3 + 1/6 = 1/2; imaginary numbers: e^{iπ} + 1 = 0. USE A FIXED MODEL
Self-Paced Multiple Kernel Learning. Kumar, Packer and Koller, In Preparation. Integers: 1 + 1 = 2; rational numbers: 1/3 + 1/6 = 1/2; imaginary numbers: e^{iπ} + 1 = 0. ADAPT THE MODEL COMPLEXITY
Optimization. Repeat until convergence:
1. Impute latent variables: hi* = argmax_h wᵀΦ(xi, ai, h)
2. Update w, v and kernel weights c, with vi ∈ {0, 1}, by solving:
   min ||w||² + C Σi ξi vi − λ Σi vi
   s.t. wᵀΦ(xi, ai, hi*) − wᵀΦ(xi, a, h) ≥ Δ(ai, a) − ξi
   where Kij = Φ(xi, ai, hi)ᵀΦ(xj, aj, hj) and K = Σk ck Kk
3. Anneal λ ← λμ
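A small sketch of just the kernel-combination step, assuming each base kernel comes from its own joint feature map evaluated at the currently imputed (xi, ai, hi*); how the weights c are updated is not shown here.

```python
import numpy as np

def combined_kernel(examples, feature_maps, c):
    """Build K = sum_k c_k K_k, where
    (K_k)_ij = phi_k(x_i, a_i, h_i*)^T phi_k(x_j, a_j, h_j*)."""
    K = None
    for phi_k, c_k in zip(feature_maps, c):
        F = np.stack([phi_k(x, a, h) for (x, a, h) in examples])  # n x d_k
        K_k = F @ F.T
        K = c_k * K_k if K is None else K + c_k * K_k
    return K
```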
Image Classification Mammals Dataset 271 images, 6 classes 90/10 train/test split 5 folds
Image Classification. Kumar, Packer and Koller, In Preparation. [Plots: results for FIXED vs. SPMKL.] HOG-based model, Dalal and Triggs, 2005.
Motif Finding UniProbe Dataset ~ 40,000 sequences Binding vs. Not-Binding 50/50 train/test split 5 folds
Motif Finding. Kumar, Packer and Koller, NIPS 2010. [Plots: results for FIXED vs. SPMKL.] Motif + Markov background model, Yu and Joachims, 2009.
Outline • Two Types of Problems • Latent SVM (Background) • Self-Paced Learning • Max-Margin Min-Entropy Models • Discussion
MAP Inference. Pr(a, h | x) = exp(wᵀΦ(x, a, h)) / Z(x). MAP: min_{a,h} − log Pr(a, h | x). Value of latent variable? [Plots: Pr(a1, h | x) and Pr(a2, h | x) as functions of h.]
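A minimal sketch of the distribution and of MAP inference over (a, h), again by enumeration over small discrete spaces with a placeholder feature function `phi`.

```python
import numpy as np

def joint_distribution(w, x, labels, latent_vals, phi):
    """Pr(a, h | x) = exp(w^T phi(x, a, h)) / Z(x), returned as a dict."""
    scores = {(a, h): w @ phi(x, a, h) for a in labels for h in latent_vals}
    m = max(scores.values())                       # for numerical stability
    Z = sum(np.exp(s - m) for s in scores.values())
    return {ah: np.exp(s - m) / Z for ah, s in scores.items()}

def map_inference(w, x, labels, latent_vals, phi):
    """min_{a,h} -log Pr(a, h | x), i.e. the jointly most probable pair."""
    P = joint_distribution(w, x, labels, latent_vals, phi)
    return max(P, key=P.get)
```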
Min-Entropy Inference. min_a { − log Pr(a | x) + Hα(Pr(h | a, x)) }, where Hα is the Rényi entropy. With the generalized distribution Q(a; x, w) = set of all {Pr(a, h | x)} over h, this is min_a Hα(Q(a; x, w)).
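A sketch of min-entropy inference, assuming the Rényi entropy of the generalized (unnormalized) distribution Q(a; x, w) is taken in Rényi's form for incomplete distributions, Hα(Q) = (1/(1−α)) log(Σh Pr(a, h | x)^α / Σh Pr(a, h | x)), which decomposes as −log Pr(a | x) + Hα(Pr(h | a, x)). It reuses `joint_distribution` from the previous sketch.

```python
import numpy as np

def renyi_entropy_Q(P_joint, a, latent_vals, alpha):
    """H_alpha of the generalized distribution Q(a) = {Pr(a, h | x)}_h
    (alpha != 1); decomposes as -log Pr(a | x) + H_alpha(Pr(h | a, x))."""
    q = np.array([P_joint[(a, h)] for h in latent_vals])
    if np.isinf(alpha):                       # alpha -> infinity limit
        return -np.log(q.max())
    return np.log((q ** alpha).sum() / q.sum()) / (1.0 - alpha)

def min_entropy_predict(w, x, labels, latent_vals, phi, alpha):
    """a(w) = argmin_a H_alpha(Q(a; x, w))."""
    P = joint_distribution(w, x, labels, latent_vals, phi)
    return min(labels, key=lambda a: renyi_entropy_Q(P, a, latent_vals, alpha))
```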
Max-Margin Min-Entropy (M3E) Models. Miller, Kumar, Packer, Goodman and Koller, AISTATS 2012. min ||w||² + C Σi ξi, s.t. Hα(Q(a; xi, w)) − Hα(Q(ai; xi, w)) ≥ Δ(ai, a) − ξi, ξi ≥ 0. Like latent SVM, minimizes Δ(ai, ai(w)). In fact, when α = ∞ …
Max-Margin Min-Entropy Models. Miller, Kumar, Packer, Goodman and Koller, AISTATS 2012. In fact, when α = ∞, the constraint becomes max_h wᵀΦ(xi, ai, h) − max_h wᵀΦ(xi, a, h) ≥ Δ(ai, a) − ξi, ξi ≥ 0, which is exactly latent SVM. Like latent SVM, it minimizes Δ(ai, ai(w)).
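A short derivation sketch of the α = ∞ case (my reconstruction): H∞ of a distribution is minus the log of its largest element, and log Z(xi) cancels between the two entropy terms.

```latex
\begin{align*}
H_\infty\bigl(Q(a; x_i, w)\bigr)
  &= -\log \max_h \Pr(a, h \mid x_i)
   = -\max_h w^\top \Phi(x_i, a, h) + \log Z(x_i), \\
H_\infty\bigl(Q(a; x_i, w)\bigr) - H_\infty\bigl(Q(a_i; x_i, w)\bigr)
  &= \max_h w^\top \Phi(x_i, a_i, h) - \max_h w^\top \Phi(x_i, a, h),
\end{align*}
```

so the M3E margin constraint at α = ∞ is exactly the latent SVM constraint above.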