Outline • Support Vector Machines • Linear SVM • Maximal Margin • Non-linear Case • Soft Margin • Kernel Tricks • Summary
Just in Case • w is a vector orthogonal to the hyperplane <w,x> + b = 0 • <w,x> is the length of the projection of x onto the direction of w, scaled by ||w||
Linear Classification • Binary classification problem • The data above the red line belongs to class ‘x’ • The data below the red line belongs to class ‘o’ • [figure: 2-D scatter of ‘x’ and ‘o’ points separated by a red line]
Separating hyperplane • Samples are assumed to be linearly separable • Which of the two hyperplanes would you choose as the classifier? The one with the larger margin, because it can be trusted more on unknown data
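A minimal sketch of fitting such a maximum-margin linear classifier, assuming scikit-learn and a hypothetical 2-D toy set (the data, names, and the large-C setting are illustrative choices, not from the slides):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: class +1 clustered around (2, 2), class -1 around (-2, -2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1e6)        # a very large C approximates a hard margin
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]   # learned hyperplane <w, x> + b = 0
print("w =", w, "b =", b)
# The decision rule is the sign of <w, x> + b.
print(np.all(np.sign(X @ w + b) == clf.predict(X)))
```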
Goal of SVM: Find Maximum Margin • Definition of margin • The minimum distance between a separating hyperplane and the training points of either class
Goal of SVM: Find Maximum Margin • Goal • Find a separating hyperplane with maximum margin
SVM – Support Vector Machines • [figure: small margin vs. large margin; the training points lying on the margin are the support vectors]
More • Choosing the maximal margin minimizes the risk of overfitting • Classification is less sensitive to the exact location of the training points • The generalization error of the hyperplane can be bounded by an expression depending on 1/margin² • Related to injecting noise into the inputs in neural network learning • Robustness
Calculate margin • A separating hyperplane: <w,x> + b = 0 • w and b are not uniquely determined (any common rescaling gives the same hyperplane) • Under the normalisation min_i |<w,x_i> + b| = 1 over the training points, they are uniquely determined
Calculate margin • The distance between a point x and the hyperplane <w,x> + b = 0 is given by |<w,x> + b| / ||w|| • Thus, under the normalisation above, the margin is given by 1/||w||
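A small numerical illustration of these formulas, with a hypothetical w and b chosen for easy arithmetic:

```python
import numpy as np

w = np.array([3.0, 4.0])   # hypothetical hyperplane <w, x> + b = 0, with ||w|| = 5
b = -2.0
x = np.array([1.0, 1.0])

distance = abs(w @ x + b) / np.linalg.norm(w)   # |<w, x> + b| / ||w|| = |3 + 4 - 2| / 5 = 1.0
margin = 1.0 / np.linalg.norm(w)                # 1 / ||w|| = 0.2 under the normalisation above
print(distance, margin)
```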
Optimization of margin • Maximization of the margin 1/||w|| is equivalent to minimization of ||w||
Optimization of margin • A separating hyperplane with maximal margin = a separating hyperplane with minimum ||w|| • Therefore, we want to minimize J(w) = (1/2)||w||² subject to y_i(<w,x_i> + b) ≥ 1, i = 1, …, N • Don't forget that we want to know both w and b
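A sketch of this primal problem as a quadratic program, assuming the cvxpy package is available and using a hypothetical, linearly separable toy set:

```python
import cvxpy as cp
import numpy as np

# Hypothetical linearly separable data.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()
# minimize (1/2)||w||^2  subject to  y_i(<w, x_i> + b) >= 1
constraints = [cp.multiply(y, X @ w + b) >= 1]
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints)
problem.solve()

print("w =", w.value, "b =", b.value, "margin =", 1 / np.linalg.norm(w.value))
```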
Lagrange Multiplier • An optimization problem under constraints can be solved by the method of Lagrange multipliers • The Lagrangian is obtained as follows: • For equality constraints h_j(x) = 0: L(x, λ) = f(x) + Σ_j λ_j h_j(x) • For inequality constraints g_j(x) ≥ 0: L(x, λ) = f(x) − Σ_j λ_j g_j(x), with λ_j ≥ 0
Lagrange Multiplier • In our case: minimize (1/2)||w||² subject to the inequality constraints y_i(<w,x_i> + b) − 1 ≥ 0 • L(w, b, λ) = (1/2)||w||² − Σ_i λ_i [y_i(<w,x_i> + b) − 1], with λ_i ≥ 0
Convex Optimization • An optimization problem is said to be convex iff the target (or cost) function as well as the constraints are convex • The optimization problem for SVM is convex • The solution to a convex problem, if it exists, is unique; that is, there is no local optimum • For a convex optimization problem, the KKT (Karush-Kuhn-Tucker) conditions are necessary and sufficient for the solution
KKT (Karush-Kuhn-Tucker) condition • KKT conditions: • 1. The gradient of the Lagrangian with respect to the original variables is 0 • 2. The original constraints are satisfied • 3. The multipliers for the inequality constraints are non-negative • 4. (Complementary KKT) the product of each multiplier and its constraint equals 0 • For convex optimization problems, 1–4 are necessary and sufficient for the solution
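A hypothetical NumPy helper (not part of the slides) that checks the four conditions numerically for a candidate solution (w, b) and multipliers lam of the margin problem:

```python
import numpy as np

def check_kkt(w, b, lam, X, y, tol=1e-6):
    # 1. gradient of the Lagrangian w.r.t. the original variables (w, b) is zero
    grad_w_zero = np.allclose(w, (lam * y) @ X, atol=tol)   # w = sum_i lam_i y_i x_i
    grad_b_zero = abs(lam @ y) < tol                        # sum_i lam_i y_i = 0
    # 2. the original constraints y_i(<w, x_i> + b) >= 1 are satisfied
    primal_ok = np.all(y * (X @ w + b) >= 1 - tol)
    # 3. multipliers of the inequality constraints are non-negative
    dual_ok = np.all(lam >= -tol)
    # 4. complementary slackness: lam_i * (y_i(<w, x_i> + b) - 1) = 0 for every i
    comp_ok = np.allclose(lam * (y * (X @ w + b) - 1), 0, atol=tol)
    return grad_w_zero and grad_b_zero and primal_ok and dual_ok and comp_ok
```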
KKT condition for the optimization of margin • Recall the Lagrangian L(w, b, λ) = (1/2)||w||² − Σ_i λ_i [y_i(<w,x_i> + b) − 1] (3.66) • KKT conditions: • ∂L/∂w = 0 (3.62) • ∂L/∂b = 0 (3.63) • λ_i ≥ 0, i = 1, …, N (3.64) • λ_i [y_i(<w,x_i> + b) − 1] = 0, i = 1, …, N (3.65)
KKT condition for the optimization of margin • Combining (3.66) with (3.62) and (3.63) gives • w = Σ_i λ_i y_i x_i (3.67) • Σ_i λ_i y_i = 0 (3.68)
Remarks: support vectors • The optimal solution w is a linear combination of the feature vectors associated with λ_i ≠ 0 • The support vectors are the x_i associated with λ_i ≠ 0; by (3.65) they satisfy y_i(<w,x_i> + b) = 1, i.e. they lie on the margin
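A sketch of this remark with scikit-learn, assuming clf and X are the fitted linear SVC and data from the earlier sketch: dual_coef_ stores λ_i·y_i for the support vectors only, so w can be rebuilt from the support vectors alone.

```python
import numpy as np

# dual_coef_ has shape (1, n_SV) and holds lambda_i * y_i for each support vector.
w_rebuilt = clf.dual_coef_[0] @ clf.support_vectors_   # sum_i lambda_i y_i x_i
print(np.allclose(w_rebuilt, clf.coef_[0]))            # True: non-support vectors contribute nothing
```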
Remarks: support vectors • The resulting hyperplane classifier is insensitive to the number and position of the non-support vectors
Remark: computation of w0 (the bias term, denoted b on earlier slides) • w0 can be obtained implicitly from any one condition satisfying strict complementarity (i.e. λ_i ≠ 0), via y_i(<w,x_i> + w0) = 1 • In practice, w0 is computed as an average over all conditions of this type, for numerical stability
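A sketch of this averaging with the scikit-learn model from the earlier sketch (clf, X, y carried over as assumptions): every support vector satisfies y_i(<w, x_i> + b) = 1, so b = y_i − <w, x_i>, averaged over all support vectors for numerical stability.

```python
import numpy as np

w = clf.coef_[0]
sv = clf.support_                   # indices of the support vectors
b_avg = np.mean(y[sv] - X[sv] @ w)  # average of y_i - <w, x_i> over the support vectors
print(b_avg, clf.intercept_[0])     # the two values should agree closely
```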
Remark: the optimal hyperplane is unique • The optimal hyperplane classifier of a support vector machine is unique, and this is guaranteed by two conditions: • The cost function is strictly convex • The inequality constraints consist of linear functions • (Recall: an optimization problem is convex iff the target (or cost) function as well as the constraints are convex; the SVM optimization problem is convex, and the solution to a convex problem, if it exists, is unique, i.e. there is no local optimum)
Computation of the optimal Lagrange multipliers • This belongs to the convex programming family of problems • It can be solved by considering the so-called Lagrangian duality and can be stated equivalently by its Wolfe dual representation: • maximize L(w, b, λ) (3.71) • subject to w = Σ_i λ_i y_i x_i (3.72) • Σ_i λ_i y_i = 0 (3.73) • λ ≥ 0 (3.74)
Computation of the optimal Lagrange multipliers • Substituting (3.72) and (3.73) into the Lagrangian gives the dual in terms of λ alone: maximize Σ_i λ_i − (1/2) Σ_i Σ_j λ_i λ_j y_i y_j <x_i, x_j>, subject to Σ_i λ_i y_i = 0 and λ_i ≥ 0 (3.75), (3.76) • Once the optimal Lagrange multipliers have been computed, the optimal hyperplane is obtained from w = Σ_i λ_i y_i x_i and from any complementary-slackness condition with λ_i ≠ 0
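A minimal sketch of solving this dual numerically with SciPy's SLSQP solver on a hypothetical toy set (the variable names are illustrative, and a dedicated QP solver would normally be preferred):

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
H = (y[:, None] * X) @ (y[:, None] * X).T      # H_ij = y_i y_j <x_i, x_j>

def neg_dual(lam):                             # minimize the negative of the dual objective
    return 0.5 * lam @ H @ lam - lam.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)), method="SLSQP",
               bounds=[(0, None)] * len(y),                               # lam_i >= 0
               constraints=[{"type": "eq", "fun": lambda lam: lam @ y}])  # sum_i lam_i y_i = 0
lam = res.x
w = (lam * y) @ X                              # w = sum_i lam_i y_i x_i
sv = lam > 1e-6
b = np.mean(y[sv] - X[sv] @ w)                 # b from the complementary-slackness conditions
print("lam =", lam, "w =", w, "b =", b)
```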
Remarks • The cost function does not depend explicitly on the dimensionality of the input space; it depends on the data only through the inner products <x_i, x_j> • This allows for efficient generalizations to the case of nonlinearly separable classes (the kernel trick)
Today’s Lecture • Support Vector Machines • Linear SVM • Maximal Margin • Non-linear Case • Soft Margin • Kernel Tricks • Summary • Other Classification Method • Combining Classifiers
SVM for Non-separable Classes • In the non-separable case, each training feature vector belongs to one of the following three categories: • Vectors falling outside the band that are correctly classified: y_i(<w,x_i> + b) ≥ 1 • Vectors falling inside the band that are correctly classified: 0 ≤ y_i(<w,x_i> + b) < 1 • Vectors that are misclassified: y_i(<w,x_i> + b) < 0
Two Approaches • Allow soft margins • Allowing soft margins means that if a training point falls on the wrong side of its margin, a cost is applied to that point • Increase dimensionality • By increasing the dimensionality of the data, the likelihood of the data becoming linearly separable increases dramatically
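A sketch of the soft-margin approach with scikit-learn on hypothetical overlapping data: a finite C tolerates margin violations at a cost, while a very large C approximates the hard-margin behaviour.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1.5, (50, 2)), rng.normal(-1, 1.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)             # the two classes overlap: not linearly separable

soft = SVC(kernel="linear", C=1.0).fit(X, y)   # soft margin: some violations are accepted
hard = SVC(kernel="linear", C=1e6).fit(X, y)   # near-hard margin: violations are heavily penalised
print(len(soft.support_), len(hard.support_))  # number of support vectors in each case
```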
Today’s Lecture • Support Vector Machines • Linear SVM • Maximal Margin • Non-linear Case • Soft Margin • Kernel Tricks • Summary • Other Classification Method • Combining Classifiers
SVM for Non-separable Classes • All three cases can be treated under a single type of constraint by introducing slack variables ξ_i ≥ 0: • y_i(<w,x_i> + b) ≥ 1 − ξ_i • First category: ξ_i = 0; second: 0 < ξ_i ≤ 1; third: ξ_i > 1
SVM for Non-separable Classes • The goal is to • make the margin as large as possible • keep the number of points with ξ_i > 0 as small as possible • J(w, b, ξ) = (1/2)||w||² + C Σ_i I(ξ_i > 0) (3.79) • (3.79) is intractable because the indicator function I(·) is discontinuous
SVM for Non-separable Classes • As is common in such cases, we choose to optimize a closely related cost function: • J(w, b, ξ) = (1/2)||w||² + C Σ_i ξ_i
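A sketch of this cost in NumPy: at the optimum the slack variables equal the hinge loss ξ_i = max(0, 1 − y_i(<w, x_i> + b)), so the objective can be evaluated directly from (w, b) (the function name is a hypothetical helper):

```python
import numpy as np

def soft_margin_cost(w, b, X, y, C):
    slack = np.maximum(0.0, 1.0 - y * (X @ w + b))  # xi_i = max(0, 1 - y_i(<w, x_i> + b))
    return 0.5 * w @ w + C * slack.sum()            # (1/2)||w||^2 + C * sum_i xi_i
```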
SVM for Non-separable Classes • The corresponding Lagrangian is • L(w, b, ξ, λ, μ) = (1/2)||w||² + C Σ_i ξ_i − Σ_i μ_i ξ_i − Σ_i λ_i [y_i(<w,x_i> + b) − 1 + ξ_i]
SVM for Non-separable Classes • The corresponding KKT conditions: • ∂L/∂w = 0 ⇒ w = Σ_i λ_i y_i x_i (3.85) • ∂L/∂b = 0 ⇒ Σ_i λ_i y_i = 0 (3.86) • ∂L/∂ξ_i = 0 ⇒ C − μ_i − λ_i = 0 (3.87) • λ_i [y_i(<w,x_i> + b) − 1 + ξ_i] = 0 (3.88) • μ_i ξ_i = 0 (3.89) • μ_i ≥ 0, λ_i ≥ 0 (3.90)
SVM for Non-separable Classes • The associated Wolfe dual representation now becomes: • maximize L(w, b, ξ, λ, μ) • subject to w = Σ_i λ_i y_i x_i, Σ_i λ_i y_i = 0, C − μ_i − λ_i = 0, λ_i ≥ 0, μ_i ≥ 0
SVM for Non-separable Classes • Equivalent to: • maximize Σ_i λ_i − (1/2) Σ_i Σ_j λ_i λ_j y_i y_j <x_i, x_j> • subject to 0 ≤ λ_i ≤ C and Σ_i λ_i y_i = 0
Remarks: differences from the linearly separable case • The Lagrange multipliers λ_i need to be bounded above by C • The slack variables ξ_i and their associated Lagrange multipliers μ_i do not enter the dual problem explicitly • They are reflected indirectly through C
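A sketch of this remark, assuming lam holds the optimal multipliers of the soft-margin dual (e.g. the SLSQP sketch above rerun with bounds (0, C)) and C is the chosen constant: multipliers strictly between 0 and C correspond to points exactly on the margin, while λ_i = C marks the points that may violate it.

```python
import numpy as np

C, tol = 1.0, 1e-6
margin_svs = np.where((lam > tol) & (lam < C - tol))[0]  # 0 < lam_i < C: on the margin, xi_i = 0
bounded_svs = np.where(lam >= C - tol)[0]                # lam_i = C: may lie inside the margin or be misclassified
print(margin_svs, bounded_svs)
```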
Today’s Lecture • Support Vector Machines • Linear SVM • Maximal Margin • Non-linear Case • Soft Margin • Kernel Tricks • Summary • Other Classification Method • Combining Classifiers
General SVM • This classification problem clearly does not have a good linear classifier. Can we do better? • A non-linear boundary, as shown, will do fine.
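A sketch of such a non-linear boundary with scikit-learn: an RBF-kernel SVM on hypothetical concentric-circle data that no linear classifier can separate well.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("linear training accuracy:", linear.score(X, y))  # typically near chance level
print("RBF training accuracy:", rbf.score(X, y))        # typically close to 1.0
```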