Kernel Methods for Classification: From Theory to Practice
14 Sept 2009
Iris Adä, Michael Berthold, Ulrik Brandes, Martin Mader, Uwe Nagel
Goals of the Tutorial
By lunch time on Tuesday, you will
• have learned about linear classifiers and SVMs
• have improved a kernel-based classifier
• know what Finnish looks like
• have a hunch what a kernel is
• have had a chance at winning a trophy
Outline – Monday (13:15 – 23:30)
• The Theory:
  • Motivation: Learning Classifiers from Data
  • Linear Classifiers
  • Delta Learning Rule
  • Kernel Methods & Support Vector Machines
  • Dual Representation
  • Maximal Margin
  • Kernels
• The Environment:
  • KNIME: A Short Intro
• Practical Stuff:
  • How to develop nodes in KNIME
  • Install on your laptop(s)
• You work, we rest:
  • Invent a new (and better) kernel
  • Dinner
  • (Invent an even better kernel…)
Outline – Tuesday (9:00 – 12:00)
• ~9:00–11:00: refine your kernel
• 11:00: score the test data set
• 11:13: winning kernel(s) presented
• 12:00: lunch and award “ceremony”
Learning Models
Model assumptions:
• no major influence of non-observed inputs
[Diagram: observed inputs and other inputs enter the System; the observed inputs and observed outputs are recorded as Data]
Predicting Outcomes
Assumptions:
• static system
[Diagram: new inputs are fed into the learned Model, which produces predicted outputs]
Learning Classifiers from Data
• Training data consists of inputs with labels, e.g.
  • credit card transactions (fraud: no/yes)
  • handwritten letters (“A”, …, “Z”)
  • drug candidate classification (toxic, non-toxic)
  • …
• Multi-class classification problems can be reduced to binary yes/no classifications
• Many, many algorithms around. Why?
  • The choice of algorithm influences generalization capability
  • There is no best algorithm for all classification problems
Linear Discriminant
• Simple linear, binary classifier:
  • class A if f(x) is positive
  • class B if f(x) is negative
• e.g. f(x) = ⟨w, x⟩ + b is the decision function.
Linear Discriminant
• Linear discriminants represent hyperplanes in feature space.
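A minimal sketch of such a linear decision function (the weight vector, offset, and example points are illustrative, not from the tutorial):

```python
import numpy as np

def linear_decision(x, w, b):
    """Linear discriminant: class A (+1) if w·x + b > 0, class B (-1) otherwise."""
    return 1 if np.dot(w, x) + b > 0 else -1

# Example: a hyperplane in a 2-dimensional feature space
w = np.array([1.0, -2.0])   # normal vector of the hyperplane
b = 0.5                     # offset
print(linear_decision(np.array([3.0, 1.0]), w, b))   # +1 (class A)
print(linear_decision(np.array([0.0, 2.0]), w, b))   # -1 (class B)
```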
Primal Perceptron
• Rosenblatt (1959) introduced a simple learning algorithm for linear discriminants, the “perceptron”.
Rosenblatt Algorithm
• The algorithm is
  • on-line (pattern-by-pattern approach)
  • mistake driven (updates only in case of wrong classification)
• The algorithm is guaranteed to converge if a hyperplane exists that classifies all training data correctly (the data is linearly separable)
• Learning rule (on a misclassified example (xᵢ, yᵢ)): w ← w + η·yᵢ·xᵢ and b ← b + η·yᵢ
• One observation: the weight vector (if initialized properly) is simply a weighted sum of input vectors (b is even more trivial)
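A minimal sketch of the primal perceptron update described above (learning rate, initialization, and the epoch limit are assumptions for illustration):

```python
import numpy as np

def primal_perceptron(X, y, eta=1.0, epochs=100):
    """Mistake-driven, on-line perceptron. X: (n, d) inputs, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (np.dot(w, X[i]) + b) <= 0:   # wrong (or undecided) classification
                w += eta * y[i] * X[i]              # update only on mistakes
                b += eta * y[i]
                mistakes += 1
        if mistakes == 0:                           # all training data classified correctly
            break
    return w, b
```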
Dual Representation
• The weight vector is a weighted sum of input vectors: w = Σᵢ αᵢ·yᵢ·xᵢ
  • “difficult” training patterns have larger alpha
  • “easier” ones have smaller or zero alpha
Dual Representation
• Dual representation of the linear discriminant function: f(x) = Σᵢ αᵢ·yᵢ·⟨xᵢ, x⟩ + b
Dual Representation
• Dual representation of the learning algorithm: on a mistake on example (xⱼ, yⱼ), i.e. if yⱼ·f(xⱼ) ≤ 0, increase αⱼ (and adjust b accordingly)
Dual Representation
• Learning rule
  • Harder-to-learn examples have larger alpha (higher information content)
  • The information about the training examples enters the algorithm only through the inner products ⟨xᵢ, xⱼ⟩ (which we could pre-compute!)
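A sketch of the same algorithm in its dual form, where each training example carries a coefficient alpha and the inputs appear only through their pairwise inner products (the step size of 1 and the epoch limit are illustrative assumptions):

```python
import numpy as np

def dual_perceptron(X, y, epochs=100):
    """Dual perceptron: learn one alpha per training example using only inner products."""
    n = X.shape[0]
    G = X @ X.T                      # all pairwise inner products, pre-computed once
    alpha, b = np.zeros(n), 0.0
    for _ in range(epochs):
        mistakes = 0
        for j in range(n):
            f_j = np.sum(alpha * y * G[:, j]) + b
            if y[j] * f_j <= 0:      # mistake on example j
                alpha[j] += 1.0      # "harder" examples accumulate larger alpha
                b += y[j]
                mistakes += 1
        if mistakes == 0:
            break
    return alpha, b
```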
Dual Representation in Other Spaces
• All we need for training: computation of the inner products of all training examples
• If we train in a different space: computation of the inner products in the projected space
Kernel Functions
• A kernel allows us (via K) to compute the inner product of two points x and y in the projected space without ever entering that space: K(x, y) = ⟨Φ(x), Φ(y)⟩
…in Kernel Land…
• The discriminant function in our projected space: f(x) = Σᵢ αᵢ·yᵢ·⟨Φ(xᵢ), Φ(x)⟩ + b
• And, using a kernel: f(x) = Σᵢ αᵢ·yᵢ·K(xᵢ, x) + b
The Gram Matrix
• All data necessary for
  • the decision function
  • the training of the coefficients
  can be pre-computed in a Kernel or Gram matrix: Gᵢⱼ = K(xᵢ, xⱼ)
• (If G is symmetric and positive semi-definite for every choice of inputs, then K(·, ·) is a kernel.)
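A short sketch of pre-computing a Gram matrix and numerically checking the symmetry and positive semi-definiteness condition on one data set (the tolerance is an illustrative assumption):

```python
import numpy as np

def gram_matrix(X, kernel):
    """Pre-compute G[i, j] = K(x_i, x_j) for all pairs of training examples."""
    n = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def looks_like_kernel(G, tol=1e-10):
    """Necessary check on one data set: G symmetric with non-negative eigenvalues."""
    symmetric = np.allclose(G, G.T)
    psd = np.all(np.linalg.eigvalsh(G) >= -tol)
    return symmetric and psd
```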
Kernels
• A simple kernel is K(x, y) = ⟨x, y⟩² for x, y ∈ ℝ²
• The corresponding projected space is given by Φ(x) = (x₁², √2·x₁·x₂, x₂²)
• Why? ⟨Φ(x), Φ(y)⟩ = x₁²y₁² + 2·x₁x₂y₁y₂ + x₂²y₂² = (x₁y₁ + x₂y₂)² = ⟨x, y⟩²
Kernels
• A few (slightly less) simple kernels are K(x, y) = ⟨x, y⟩^d for higher degrees d
• The corresponding projected spaces contain all monomials of degree d in the input attributes, so their dimension grows combinatorially
• …computing the inner products explicitly in the projected space becomes pretty expensive rather quickly…
Kernels
• Gaussian kernel: K(x, y) = exp(−‖x − y‖² / (2σ²))
• Polynomial kernel of degree d: K(x, y) = (⟨x, y⟩ + c)^d
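Minimal sketches of these two kernels (the parameter defaults are placeholders to be tuned):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))"""
    diff = x - y
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def polynomial_kernel(x, y, d=2, c=1.0):
    """K(x, y) = (<x, y> + c)^d"""
    return (np.dot(x, y) + c) ** d
```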
Why?
• Great: we can also apply Rosenblatt’s algorithm to other spaces implicitly.
• So what?
Transformations…
Polynomial Kernel
Gaussian Kernel
Kernels
• Note that we do not need to know the projection Φ; it is sufficient to prove that K(·) is a kernel.
• A few notes:
  • Kernels are modular and closed: we can compose new kernels from existing ones (see the sketch below)
  • Kernels can be defined over non-numerical objects
    • text: e.g. string-matching kernels
    • images, trees, graphs, …
• Note also: a good kernel is crucial
  • If the Gram matrix is diagonal, classification is easy but useless (every example is similar only to itself)
  • No free kernel: with too many irrelevant attributes, the Gram matrix again becomes (nearly) diagonal
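Because kernels are closed under, for example, addition and multiplication, new kernels can be composed from existing ones. A small illustrative sketch (it reuses the kernel functions sketched earlier, which is an assumption of this example):

```python
def sum_kernel(k1, k2):
    """K(x, y) = k1(x, y) + k2(x, y) is again a kernel."""
    return lambda x, y: k1(x, y) + k2(x, y)

def product_kernel(k1, k2):
    """K(x, y) = k1(x, y) * k2(x, y) is again a kernel."""
    return lambda x, y: k1(x, y) * k2(x, y)

# Example composition, assuming gaussian_kernel and polynomial_kernel from the earlier sketch
combined = sum_kernel(polynomial_kernel,
                      lambda x, y: gaussian_kernel(x, y, sigma=0.5))
```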
Finding Linear Discriminants
• Finding the hyperplane (in any space) still leaves lots of room for variation – does it?
• We can define “margins” of individual training examples: γᵢ = yᵢ·(⟨w, xᵢ⟩ + b) (appropriately normalized, i.e. divided by ‖w‖, this is a “geometric” margin)
• The margin of a hyperplane (with respect to a training set) is the smallest margin over all training examples
• The maximal margin of a training set is the maximum of this margin over all hyperplanes
(Maximum) Margin of a Hyperplane
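A small sketch of the geometric margin of a hyperplane with respect to a training set, following the definitions above (function and variable names are illustrative):

```python
import numpy as np

def geometric_margin(X, y, w, b):
    """Smallest signed distance of any training example to the hyperplane w·x + b = 0."""
    margins = y * (X @ w + b) / np.linalg.norm(w)
    return margins.min()   # the maximum-margin hyperplane maximizes this value over (w, b)
```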
Support Vector Machines
• Dual representation
  • The classifier is a weighted sum over inner products of the training patterns (or only the support vectors) with the new pattern.
  • Training works analogously.
• Kernel-induced feature space
  • Transformation into a higher-dimensional space (where we will hopefully be able to find a linear separating hyperplane).
  • Representation of the solution through few support vectors (alpha > 0).
• Maximum margin classifier
  • Reduction of capacity (bias) via maximization of the margin (and not via reduction of the degrees of freedom).
  • Efficient parameter estimation: see the IDA book.
Soft and Hard Margin Classifiers
• What can we do if no linear separating hyperplane exists?
• Instead of insisting on a hard margin, allow minor violations
• Introduce (positive) slack variables ξᵢ ≥ 0 (patterns with slack are allowed to lie inside the margin)
• Misclassifications are allowed when the slack grows large enough that the functional margin becomes negative.
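For reference, the standard soft-margin formulation behind this idea (the trade-off constant C between margin width and slack is chosen by the user):

```latex
\min_{w,\,b,\,\xi}\ \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad\text{s.t.}\quad
y_i\,(\langle w, x_i\rangle + b) \ge 1 - \xi_i,\qquad \xi_i \ge 0.
```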
Kernel Methods: Summary
• Main idea of kernel methods:
  • Embed the data into a suitable vector space
  • Find a linear classifier (or another linear pattern of interest) in the new space
  • Needed: a mapping (implicit or explicit)
• Key assumptions:
  • Information about relative position is often all that is needed by learning methods
  • The inner products between points in the projected space can be computed in the original space using special functions (kernels)
Support Vector Machines
• Powerful classifier
• Computation of the optimal classifier is possible
• The choice of kernel is critical
KNIME
• Coffee break.
• And then:
  • KNIME, the Konstanz Information Miner
  • SVMs (and other classifiers) in KNIME