Contextual Classification with Functional Max-Margin Markov Networks
Problem: Geometry Estimation (Hoiem et al.) and 3-D Point Cloud Classification. Labels: Wall, Sky, Vegetation, Vertical, Support, Ground, Tree trunk. Our classifications. Room for improvement.
Approach: Improving CRF Learning. "Boosting" (h) vs. gradient descent (w): + Better learns models with high-order interactions + Efficiently handles large data and feature sets + Enables non-linear clique potentials. Friedman et al. 2001, Ratliff et al. 2007
Conditional Random Fields (Lafferty et al. 2001): labels y_i over observations x; a pairwise model, classified via MAP inference.
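The pairwise model referenced here is typically written in the standard CRF form from Lafferty et al. 2001 (the notation below is a reconstruction, not copied from the slide):

```latex
P(y \mid x) \;=\; \frac{1}{Z(x)}
  \exp\!\Big( \sum_{i} \phi_n(x_i, y_i)
            \;+\; \sum_{(i,j)} \phi_e(x_{ij}, y_i, y_j) \Big),
\qquad
y^{*} \;=\; \arg\max_{y} P(y \mid x)
```

Here φ_n and φ_e are the node and edge potentials, Z(x) is the partition function, and y* is the MAP labeling used for classification.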
Parametric Linear Model: weights applied to local features that describe the label.
Associative/Potts Potentials: the potential is zero whenever the labels disagree.
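A minimal sketch of such an associative potential, assuming per-edge features `x_ij` and a learned weight vector `w` (the function and argument names are illustrative, not the paper's):

```python
import numpy as np

def potts_potential(y_i, y_j, w, x_ij):
    """Associative (Potts) edge potential: a feature-dependent score when
    the two labels agree, and exactly zero when they disagree."""
    if y_i == y_j:
        # agreement is scored in proportion to the edge features
        return float(np.dot(w, x_ij))
    return 0.0  # disagreeing labels contribute nothing
```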
Overall Score: the score for assigning a labeling y to all nodes.
Learning Intuition: increase the score of φ(ground truth) and decrease the score of φ(inferred). Iterate: classify with the current CRF model; if misclassified, update (the same update applies to edges).
Max-Margin Structured Prediction (Taskar et al. 2003): minimize over w the best score from all labelings (plus margin M) minus the score with the ground-truth labeling; the objective is convex.
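The objective sketched on this slide is commonly written as below; the notation (joint feature map F, margin term M, ground-truth labeling ŷ) is a standard reconstruction, not copied from the slide:

```latex
\min_{w} \;\;
\underbrace{\max_{y}\big( w^{\top} F(x, y) + M(y, \hat{y}) \big)}_{\text{best score from all labelings } (+M)}
\;-\;
\underbrace{w^{\top} F(x, \hat{y})}_{\text{score with ground-truth labeling}}
\;+\; \frac{\lambda}{2}\,\|w\|^{2}
```

The first term is a maximum of linear functions of w, so the whole objective is convex.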
Descent Direction (Objective): compares the labels from MAP inference against the ground-truth labels.
Update Rule (with λ = 0 and unit step size): w_{t+1} += features of the ground truth minus features of the inferred labeling.
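With λ = 0 and a unit step size, the update adds the ground-truth feature counts and subtracts those of the inferred labeling; in standard notation (an assumption: F is the joint feature map, ŷ the ground truth, y* the MAP labeling):

```latex
w_{t+1} \;=\; w_{t} \;+\; F(x, \hat{y}) \;-\; F(x, y^{*})
```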
Verify Learning Intuition: the update w_{t+1} += (ground-truth features minus inferred features) increases the score of φ(ground truth) and decreases the score of φ(inferred). Iterate; apply the update whenever a clique is misclassified.
Alternative Update • Create training set D from the misclassified nodes & edges: ground-truth features with target +1, inferred features with target -1.
Alternative Update • Create training set D • Train regressor h_t(∙)
Alternative Update • Create training set D • Train regressor h_t • Augment model
Functional M3N Summary • Given features and labels • For T iterations • Classification with current model • Create training set from misclassified cliques • Train regressor/classifier h_t • Augment model
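The summary above can be sketched as a small Python loop. Everything here is a simplifying assumption rather than the paper's exact construction: label-conditional features via a one-hot encoding, a decision-tree regressor as the weak learner h_t, and unary-only potentials (no edges or high-order cliques):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # stand-in weak regressor h_t

def feat(x, label, n_labels):
    """Label-conditional features: node features joined with a one-hot
    label encoding (a simplifying assumption for this sketch)."""
    onehot = np.zeros(n_labels)
    onehot[label] = 1.0
    return np.concatenate([x, onehot])

def functional_m3n(X, y, n_labels=2, T=10, alpha=1.0):
    """Sketch of the functional M3N loop for a unary-only model."""
    model = []  # the potential is phi(.) = sum_t alpha_t * h_t(.)

    def score(x, label):
        f = feat(x, label, n_labels).reshape(1, -1)
        return sum(a * h.predict(f)[0] for a, h in model)

    def classify(X):
        return np.array([max(range(n_labels), key=lambda l: score(x, l))
                         for x in X])

    for _ in range(T):
        pred = classify(X)                    # classify with current model
        wrong = np.flatnonzero(pred != y)     # misclassified nodes
        if wrong.size == 0:
            break
        # training set: ground-truth features -> +1, inferred features -> -1
        F = np.vstack([feat(X[i], y[i], n_labels) for i in wrong] +
                      [feat(X[i], pred[i], n_labels) for i in wrong])
        targets = np.concatenate([np.ones(wrong.size), -np.ones(wrong.size)])
        h = DecisionTreeRegressor(max_depth=3).fit(F, targets)
        model.append((alpha, h))              # augment the model
    return classify

# usage: a toy 1-D problem (illustrative only)
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0, 0, 1, 1])
predict = functional_m3n(X, y)
```

In the full method, `classify` would be graph-cut MAP inference over node, edge, and clique potentials rather than a per-node argmax.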
Illustration • Create training set D (targets +1, +1, -1, -1)
Illustration • Train regressor h1(∙); model ϕ(∙) = α1 h1(∙)
Illustration • Classification with current CRF model ϕ(∙) = α1 h1(∙)
Illustration • Create training set D (targets +1, -1)
Illustration • Train regressor h2(∙); model ϕ(∙) = α1 h1(∙) + α2 h2(∙)
Illustration • ϕ(∙) = α1 h1(∙) + α2 h2(∙) • Stop
Boosted CRF Related Work • Gradient Tree Boosting for CRFs (Dietterich et al. 2004) • Boosted Random Fields (Torralba et al. 2004) • Virtual Evidence Boosting for CRFs (Liao et al. 2007) • Benefits of Max-Margin objective: do not need marginal probabilities; (robust) high-order interactions (Kohli et al. 2007, 2008)
Using Higher Order Information Colored by elevation
Region-Based Model • Inference: graph-cut procedure • Pn Potts model (Kohli et al. 2007)
How To Train The Model • Learning
How To Train The Model • Learning (ignores features from clique c)
How To Train The Model • Robust Pn Potts (Kohli et al. 2008) with truncation parameter β • Learning
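A hedged sketch of a Robust Pn Potts-style clique score in the spirit of Kohli et al. 2008: the dominant label's reward decays linearly with the fraction of disagreeing nodes and hits zero once more than a β fraction disagree. The function name and the unit maximum reward are illustrative assumptions:

```python
import numpy as np

def robust_potts_score(clique_labels, n_labels, beta=0.3):
    """Clique score that tolerates partial disagreement: full reward (1.0)
    when all nodes agree, decaying linearly to zero as the disagreeing
    fraction approaches the truncation parameter beta."""
    counts = np.bincount(clique_labels, minlength=n_labels)
    k = counts.argmax()                            # dominant label in clique
    disagree = 1.0 - counts[k] / len(clique_labels)
    return max(0.0, 1.0 - disagree / beta)
```

Unlike the plain Potts clique potential, which gives zero reward unless every node in the clique agrees, this truncated form still rewards mostly-homogeneous cliques.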
Experimental Analysis 3-D Point Cloud Classification Geometry Surface Estimation
Random Field Description • Nodes: 3-D points • Edges: 5-nearest neighbors • Cliques: two K-means segmentations • Features: local shape, orientation (angle θ of the normal to the vertical [0,0,1]), elevation
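"Local shape" features for 3-D point clouds are commonly derived from the eigenvalues of a neighborhood's covariance matrix; the exact definitions below are an assumption illustrating one common variant, not read off the slides:

```python
import numpy as np

def local_shape_features(neighborhood):
    """Spectral shape saliencies from a point's neighbors.
    With eigenvalues sorted l0 >= l1 >= l2:
      scatter-ness ~ l2, linear-ness ~ l0 - l1, surface-ness ~ l1 - l2."""
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)                        # 3x3 covariance of the points
    l0, l1, l2 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return l2, l0 - l1, l1 - l2
```

For a point on a flat wall the surface-ness term dominates; for scattered foliage the scatter term does.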
Qualitative Comparisons Parametric Functional (this work)
Quantitative Results (1.2 M pts) • Macro AP: Parametric 64.3%, Functional 71.5% • Per-class changes: +3%, +24%, +4%, +5%, -1%
Experimental Analysis 3-D Point Cloud Classification Geometry Surface Estimation
Random Field Description (More Robust) • Nodes: superpixels (Hoiem et al. 2007) • Edges: (none) • Cliques: 15 segmentations (Hoiem et al. 2007) • Features (Hoiem et al. 2007): perspective, color, texture, etc.; a 1,000-dimensional space
Quantitative Comparisons • Hoiem et al. 2007 • Parametric (Potts) • Functional (Potts) • Functional (Robust Potts)
Qualitative Comparisons Parametric (Potts) Functional (Potts)
Qualitative Comparisons Parametric (Potts) Functional (Robust Potts) Functional (Potts)
Qualitative Comparisons • Hoiem et al. 2007 • Parametric (Potts) • Functional (Robust Potts) • Functional (Potts)
Conclusion • Effective max-margin learning of high-order CRFs, especially for high-dimensional feature spaces • Robust Potts interactions • Easy to implement • Future work: non-linear potentials (decision tree/random forest); new inference procedures (Komodakis and Paragios 2009, Ishikawa 2009, Gould et al. 2009, Rother et al. 2009)
Thank you • Acknowledgements • U. S. Army Research Laboratory • Siebel Scholars Foundation • S. K. Divalla, N. Ratliff, B. Becker • Questions?