Predictive Learning from Data LECTURE SET 9-1: Margin-Based Methods and Support Vector Machines Electrical and Computer Engineering
OUTLINE Motivation and background Margin-based loss SVM for classification SVM examples Support Vector regression SVM and regularization Summary
Introduction of Learning Methods
1. Predictive learning algorithm proposed using 'reasonable' heuristic arguments (reasonable ~ statistical or biological)
2. Empirical validation + improvement
3. Statistical explanation (why it really works)
Examples: CART, MARS, neural networks, AdaBoost
In contrast, SVM was originally introduced as a purely theoretical method.
Interpretation of Learning Methods
• Predictive learning algorithms can be interpreted under a probabilistic framework, and vice versa → growing conceptual confusion
• Often the results of data-analytic modeling are presented without a clear problem setting and assumptions
• This is common in machine learning and data mining
Example: classification problem
Predictive approach: minimize fitting error
• Decision rule: $D(\mathbf{x}) = \operatorname{Ind}(f(\mathbf{x}, \mathbf{w}) > 0)$
• Loss function (0/1 indicator): $L(y, f(\mathbf{x}, \mathbf{w})) = \operatorname{Ind}(y \ne D(\mathbf{x}))$
• Fitting error: $R_{\text{emp}}(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n} \operatorname{Ind}(y_i \ne D(\mathbf{x}_i))$
• Approximating the indicator via a sigmoid → logistic regression (see the sketch below)
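A minimal sketch of this predictive approach in Python/scikit-learn, using a synthetic data set; the sigmoid approximation of the indicator is handled internally by `LogisticRegression`, and all names and values below are illustrative, not from the lecture:

```python
# Sketch: approximate the 0/1 indicator loss with a sigmoid -> logistic regression
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = LogisticRegression().fit(X, y)

# Decision rule: predict class 1 when the linear score f(x, w) is positive
scores = model.decision_function(X)
y_hat = (scores > 0).astype(int)

# Fitting (empirical) error under the 0/1 indicator loss
fitting_error = np.mean(y_hat != y)
print(f"fitting error: {fitting_error:.3f}")
```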
SVM: Brief History
• 1963 Margin (Vapnik & Lerner)
• 1964 Margin (Vapnik & Chervonenkis)
• 1964 RBF kernels (Aizerman)
• 1965 Optimization formulation (Mangasarian)
• 1971 Kernels (Kimeldorf and Wahba)
• 1992–1994 SVMs (Vapnik et al.)
• 1996–present Rapid growth, numerous applications
• 1996–present Extensions to other problems
Growing Popularity of SVMs
• GOOGLE search on SVM: 5.5 million results
• GOOGLE search on Kernel Methods: 4.6 million results
• GOOGLE search on CART: 5 million results
• GOOGLE search on Multilayer Perceptrons: 0.9 million results
• BUT plenty of conceptual misunderstanding
MOTIVATION for SVM
Recall 'conventional' methods:
- model complexity ~ dimensionality (# features)
- nonlinear methods → multiple minima
- hard to control complexity
A 'good' learning method should have:
(a) a tractable optimization formulation
(b) tractable complexity control (1-2 parameters)
(c) flexible nonlinear parameterization
Properties (a), (b) hold for linear methods → the SVM solution approach
SVM APPROACH Linear approximation in Z-space using a special adaptive loss function → complexity independent of dimensionality
OUTLINE Margin-based loss - Example: binary classification - VC-theoretical motivation - Philosophical motivation SVM for classification SVM examples Support Vector regression SVM and regularization Summary
Example: binary classification
Given: linearly separable data
How to construct a linear decision boundary?
Linear Discriminant Analysis (figure: LDA solution and its separation margin)
Largest-margin solution
• Many solutions explain the data well (zero error)
• All solutions ~ the same linear parameterization
• Larger margin ~ more confidence (falsifiability)
VC Generalization Bound and SRM
• Classification: with probability $1-\eta$, the bound $R(\omega) \le R_{\text{emp}}(\omega) + \Phi(h, n, \eta)$ holds for all approximating functions, where $\Phi$ is a confidence interval that grows with the VC-dimension $h$ and shrinks with the sample size $n$
• Two general strategies for implementing SRM:
  1. Keep the confidence interval $\Phi$ fixed and minimize the empirical risk $R_{\text{emp}}$ (most statistical and neural network methods)
  2. Keep the empirical risk fixed (small) and minimize the confidence interval → larger margin ~ smaller VC-dimension
• Form equivalence classes on the set of possible models; for each class, select the largest-margin hyperplane.
Complexity of Δ-margin hyperplanes
• If data samples belong to a sphere of radius $R$, then the set of Δ-margin hyperplanes has VC-dimension bounded by $h \le \min\left(\lceil R^2/\Delta^2 \rceil, d\right) + 1$
• So for large-margin hyperplanes, the VC-dimension is controlled independently of the dimensionality d.
Motivation: philosophical
• Classical view: a good model explains the data + has low complexity
  Occam's razor (complexity ~ # parameters)
• VC theory: a good model explains the data + has low VC-dimension
  ~ VC-falsifiability: a good model explains the data + has large falsifiability
• The idea: encode falsifiability into the empirical loss function
Adaptive loss functions
• Both goals (explanation + falsifiability) can be encoded into an empirical loss function where
  - a (large) portion of the data has zero loss
  - the rest of the data has non-zero loss, i.e. it falsifies the model
• The trade-off (between the two goals) is adaptively controlled → adaptive loss function
• Examples of such loss functions for different learning problems are shown next
Margin-based loss for classification: margin size is adapted to training data
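As a concrete illustration, the sketch below evaluates the standard hinge loss, a common margin-based loss for classification; the slides do not name hinge loss explicitly, so take it as one representative choice (Python/NumPy assumed):

```python
import numpy as np

def hinge_loss(y, f):
    """Margin-based (hinge) loss for labels y in {-1, +1} and real-valued scores f.
    Samples with margin y*f >= 1 incur zero loss; the rest falsify the model."""
    return np.maximum(0.0, 1.0 - y * f)

y = np.array([+1, +1, -1, -1])
f = np.array([2.0, 0.3, -1.5, 0.4])   # scores f(x, w) from some linear model
print(hinge_loss(y, f))               # [0.  0.7 0.  1.4]
```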
Margin-based complexity control
• A large degree of falsifiability is achieved by
  - a large margin (classification)
  - a small epsilon (regression)
• Margin-based methods control complexity independently of the problem dimensionality
• The same idea can be used for other learning settings
Single Class Learning The boundary is specified by a hypersphere with center a and radius r. An optimal model minimizes the volume of the sphere and the total distance of the data points outside the sphere.
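A minimal sketch of single-class learning with scikit-learn's `OneClassSVM`; note this is a closely related boundary formulation (with a ν parameter), not necessarily the exact hypersphere model described above, and the data and parameter values are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # "normal" data clustered around the origin

model = OneClassSVM(kernel="rbf", nu=0.05).fit(X)   # nu ~ fraction allowed outside the boundary
labels = model.predict(X)                # +1 inside the boundary, -1 outside
print("fraction flagged as outliers:", np.mean(labels == -1))
```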
Margin-based loss: summary
• Classification: falsifiability controlled by the margin
• Regression: falsifiability controlled by ε (the insensitive-zone width)
• Single-class learning: falsifiability controlled by the radius r
NOTE: the same interpretation/motivation of margin-based loss applies across these different types of learning problems.
OUTLINE Margin-based loss SVM for classification - Linear SVM classifier - Inner product kernels - Nonlinear SVM classifier SVM examples Support Vector regression SVM and regularization Summary
Optimal Separating Hyperplane (figure: the hyperplane D(x) = 0 with margin borders D(x) = +1 and D(x) = -1; the distance between the hyperplane and a sample, and the margin, are indicated; shaded points are SVs)
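A small numeric sketch of the distance and margin computations for a canonical hyperplane $D(\mathbf{x}) = \mathbf{w}\cdot\mathbf{x} + b$, with $|D(\mathbf{x})| = 1$ at the closest samples; the vectors below are made-up values for illustration:

```python
import numpy as np

w = np.array([2.0, 1.0])   # hyperplane normal vector (illustrative values)
b = -1.0
x = np.array([1.0, 2.0])   # a training sample

distance = abs(w @ x + b) / np.linalg.norm(w)   # distance from x to the hyperplane
margin = 2.0 / np.linalg.norm(w)                # margin of the canonical hyperplane
print(distance, margin)
```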
Optimization Formulation
Given training data $(\mathbf{x}_i, y_i)$, $i = 1, \dots, n$, with $y_i \in \{-1, +1\}$
Find the parameters $(\mathbf{w}, b)$ of the linear hyperplane that minimize $\tfrac{1}{2}\lVert\mathbf{w}\rVert^2$
under the constraints $y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1$
• Quadratic optimization with linear constraints → tractable for moderate dimensions d
• For large dimensions use the dual formulation:
  - scales with n rather than d
  - uses only dot products
From Optimization Theory:
• For a given convex minimization problem with convex inequality constraints there exists an equivalent dual maximization formulation in terms of nonnegative Lagrange multipliers
• Karush-Kuhn-Tucker (KKT) conditions: Lagrange coefficients are nonzero only for samples that satisfy the original constraint with equality ~ SVs have positive Lagrange coefficients
Convex Hull Interpretation of Dual Find the convex hull of each class. The points closest to the optimal hyperplane are the support vectors.
Dual Optimization Formulation
Given training data $(\mathbf{x}_i, y_i)$, $i = 1, \dots, n$
Find the parameters $\alpha_i$ of an optimal hyperplane as the solution to the maximization problem
$L(\alpha) = \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2}\sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)$
under the constraints $\sum_{i=1}^{n} \alpha_i y_i = 0$, $\alpha_i \ge 0$
Note: data samples with nonzero $\alpha_i$ are SVs
The formulation requires only inner products $(\mathbf{x}_i \cdot \mathbf{x}_j)$
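A sketch of solving this problem in practice with scikit-learn's `SVC`, which handles the dual optimization internally; the data set and parameter values are illustrative only:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters as a stand-in for linearly separable data
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1e3).fit(X, y)   # large C ~ (nearly) hard margin

print("support vector indices:", clf.support_)
print("dual coefficients (y_i * alpha_i):", clf.dual_coef_)
print("w =", clf.coef_, "b =", clf.intercept_)
```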
SVM for non-separable data (figure: samples x_1, x_2, x_3 lying inside or beyond the margin, with slack variables $\xi_1, \xi_2, \xi_3$ measured relative to the margin borders $D(\mathbf{x}) = \pm 1$ and the boundary $D(\mathbf{x}) = 0$)
Minimize $\tfrac{1}{2}\lVert\mathbf{w}\rVert^2 + C\sum_{i=1}^{n}\xi_i$
under the constraints $y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i$, $\xi_i \ge 0$
SVM Dual Formulation (soft margin)
Given training data $(\mathbf{x}_i, y_i)$, $i = 1, \dots, n$
Find the parameters $\alpha_i$ of an optimal hyperplane as the solution to the maximization problem
$L(\alpha) = \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2}\sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)$
under the constraints $\sum_{i=1}^{n} \alpha_i y_i = 0$, $0 \le \alpha_i \le C$
Note: data samples with nonzero $\alpha_i$ are SVs
The formulation requires only inner products
Nonlinear Decision Boundary
• A fixed (linear) parameterization is too rigid
• A nonlinear (curved) decision boundary may yield a larger margin (falsifiability) and lower error
Nonlinear Mapping via Kernels
Nonlinear f(x,w) + margin-based loss = SVM
• Nonlinear mapping to a feature space, i.e. $\mathbf{z} = \mathbf{g}(\mathbf{x})$
• Linear in z-space = nonlinear in x-space
• BUT the dot product can be computed analytically via a kernel, $\mathbf{z}_i \cdot \mathbf{z}_j = K(\mathbf{x}_i, \mathbf{x}_j)$ ~ kernel trick
SVM Formulation (with kernels)
Replacing the inner product $(\mathbf{x}_i \cdot \mathbf{x}_j)$ with a kernel $K(\mathbf{x}_i, \mathbf{x}_j)$ leads to:
Find the parameters $\alpha_i$ of an optimal hyperplane as the solution to the maximization problem
$L(\alpha) = \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2}\sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j)$
under the constraints $\sum_{i=1}^{n} \alpha_i y_i = 0$, $0 \le \alpha_i \le C$
Given: the training data, an inner product kernel $K$, and the regularization parameter C
Examples of Kernels
A kernel $K(\mathbf{x}, \mathbf{x}')$ is a symmetric function satisfying general (Mercer's) conditions
Examples of kernels for different mappings x → z:
• Polynomial of degree q: $K(\mathbf{x}, \mathbf{x}') = (\mathbf{x} \cdot \mathbf{x}' + 1)^q$
• RBF kernel: $K(\mathbf{x}, \mathbf{x}') = \exp\left(-\lVert\mathbf{x} - \mathbf{x}'\rVert^2 / (2\sigma^2)\right)$
• Neural network: $K(\mathbf{x}, \mathbf{x}') = \tanh\left(v\,(\mathbf{x} \cdot \mathbf{x}') + a\right)$ for given parameters v, a → automatic selection of the number of hidden units (SVs)
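The kernels above can be coded directly as functions of two input vectors; a minimal Python sketch (the RBF width parameterization via `gamma` is one common convention, not necessarily the one used in the lecture):

```python
import numpy as np

def poly_kernel(x, z, q=3):
    """Polynomial kernel of degree q: K(x, z) = (x.z + 1)^q"""
    return (np.dot(x, z) + 1.0) ** q

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel: K(x, z) = exp(-gamma * ||x - z||^2)"""
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])
print(poly_kernel(x, z), rbf_kernel(x, z))
```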
More on Kernels
• The kernel (Gram) matrix contains all the information (data + kernel):
  H(1,1) H(1,2) ... H(1,n)
  H(2,1) H(2,2) ... H(2,n)
  ...
  H(n,1) H(n,2) ... H(n,n)
• The kernel defines a distance in some feature space (aka the kernel-induced feature space)
• Kernels can incorporate a priori knowledge
• Kernels can be defined over complex structures (trees, sequences, sets, etc.)
Kernel Terminology
• The term 'kernel' is used in 3 contexts:
  - nonparametric density estimation
  - the equivalent kernel representation of the linear least-squares solution
  - SVM kernels
• SVMs are often called kernel methods
• The kernel trick can be used with any classical linear method to yield a nonlinear method; for example, ridge regression + kernel → LS-SVM (see the sketch below)
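For instance, a minimal sketch of the ridge regression + kernel combination using scikit-learn's `KernelRidge` (closely related to LS-SVM; the data and parameter values are illustrative):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=100)   # noisy sine as toy regression data

# Ridge regression + RBF kernel = a nonlinear (kernelized) regressor
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X, y)
print("training MSE:", np.mean((model.predict(X) - y) ** 2))
```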
Support Vectors
• SVs ~ training samples with non-zero loss
• SVs are the samples that falsify the model
• The model depends only on the SVs → SVs ~ a robust characterization of the data
  WSJ, Feb 27, 2004: 'About 40% of us (Americans) will vote for a Democrat, even if the candidate is Genghis Khan. About 40% will vote for a Republican, even if the candidate is Attila the Hun. This means that the election is left in the hands of one-fifth of the voters.'
• SVM generalization ~ data compression
New insights provided by SVM
• Why can linear classifiers generalize?
  (1) the margin is large (relative to R)
  (2) the fraction of SVs is small
  (3) the ratio d/n is small
• SVM offers an effective way to control complexity (via margin + kernel selection), i.e. implementing (1) or (2) or both
• Requires common-sense parameter tuning
OUTLINE Introduction and motivation Margin-based loss SVM for classification SVM examples Support Vector regression SVM and Regularization Summary
RBF SVM for Ripley’s Data Set • No. of Training samples = 250 • No. of Test samples = 1,000 • Model selection via 10-fold cross-validation • Test error 9.8%
Details of Model Selection • RBF kernel • Two tuning parameters: C and gamma (RBF parameter) • Optimal values selected via 10-fold cross-validation • Note log scale for parameter values
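A sketch of this model-selection procedure with scikit-learn's `GridSearchCV`; the data set (`make_moons` as a stand-in for Ripley's data) and the parameter grid are illustrative, not the ones used for the reported 9.8% result:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=250, noise=0.3, random_state=0)   # stand-in for Ripley's training data

# Search C and gamma on a log scale via 10-fold cross-validation
param_grid = {"C": np.logspace(-2, 4, 7), "gamma": np.logspace(-3, 2, 6)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10).fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", search.best_score_)
```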
IRIS Data Set • Modified Iris data set: - two input variables, petal length and petal width - two classes: iris versicolor (+) vs. not versicolor (-) • Two SVM models: polynomial kernel and RBF kernel
Handwritten digit recognition (figure: 28 x 28 pixel images of digit "5" and digit "8")
• Binary classification task: digit "5" vs. digit "8"
• No. of training samples = 1000 (500 per class)
• No. of validation samples = 1000 (used for model selection)
• No. of test samples = 1866
• Dimensionality of input space = 784 (28 x 28)
• RBF SVM yields good generalization (similar to humans)
Univariate histogram of projections (figure: histogram over projection values, with the margin borders at -1 and +1 and the decision boundary at 0 marked)
Project the training data onto the normal vector w of the trained SVM
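A sketch of producing such a histogram of projections with scikit-learn and matplotlib; the synthetic data and parameter values are illustrative:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# For a linear SVC, decision_function returns w.x + b, i.e. the projection in margin units
proj = clf.decision_function(X)

plt.hist(proj[y == 0], bins=30, alpha=0.5, label="class 0")
plt.hist(proj[y == 1], bins=30, alpha=0.5, label="class 1")
for v in (-1, 0, +1):       # margin borders and decision boundary
    plt.axvline(v)
plt.legend()
plt.xlabel("projection onto w")
plt.show()
```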
TYPICAL HISTOGRAMS OF PROJECTIONS
(a) Projections of the training data: training error = 0
(b) Projections of the validation data: validation error = 1.7% (used to select the SVM parameter values)
(c) Projections of the test data: test error = 1.23%
Many challenging applications
Mimic human recognition capabilities:
- high-dimensional data
- content-based
- context-dependent
Example: read the sentence
'Sceitnitss osbevred: it is nt inptrant how lteters are msspled isnide the word. It is ipmoratnt that the fisrt and lsat letetrs do not chngae, tehn the txet is itneprted corrcetly'
→ SVM is suitable for sparse, high-dimensional representations
Example SVM Applications • Handwritten digit recognition • Face detection in unrestricted images • Text/document classification • Image classification and retrieval • …….
Handwritten Digit Recognition (mid-90s)
• Data set: postal (zip-code) images, segmented and cropped; ~7K training samples and 2K test samples
• Data encoding: 16x16 pixel image → 256-dim. vector
• Original motivation: compare SVM with a custom MLP network (LeNet) designed for this application
• Multi-class problem: one-vs-all approach → 10 SVM classifiers (one per digit); see the sketch below
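A sketch of the one-vs-all scheme using scikit-learn's `OneVsRestClassifier`, with its small 8x8 digits data set as a stand-in for the postal images (parameter values are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # 8x8 images, 10 classes (stand-in data)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One binary SVM per digit; the most confident classifier wins
clf = OneVsRestClassifier(SVC(kernel="rbf", gamma=0.001, C=10.0)).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```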