CMSC 471 Machine Learning: k-Nearest Neighbor and Support Vector Machines (skim 20.4, 20.6-20.7)
Revised End-of-Semester Schedule • Wed 11/21 Machine Learning IV • Mon 11/26 Philosophy of AI (You must read the three articles!) • Wed 11/28 Special Topics • Mon 12/3 Special Topics • Wed 12/5 Review / Tournament dry run #2 (HW6 due) • Mon 12/10 Tournament • Wed 12/19 FINAL EXAM (1:00pm - 3:00pm) (Project and final report due) NO LATE SUBMISSIONS ALLOWED! • Special Topics • Robotics • AI in Games • Natural language processing • Multi-agent systems
k-Nearest Neighbor Instance-Based Learning Some material adapted from slides by Andrew Moore, CMU. Visit http://www.autonlab.org/tutorials/ for Andrew’s repository of Data Mining tutorials.
1-Nearest Neighbor • One of the simplest of all machine learning classifiers • Simple idea: label a new point the same as the closest known point Label it red.
1-Nearest Neighbor • A type of instance-based learning • Also known as “memory-based” learning • Forms a Voronoi tessellation of the instance space
Distance Metrics • Different metrics can change the decision surface • Standard Euclidean distance metric: • Two-dimensional: Dist(a,b) = sqrt((a1 – b1)² + (a2 – b2)²) • Multivariate: Dist(a,b) = sqrt(∑i (ai – bi)²) • Figure comparison (two metrics give two different decision surfaces): Dist(a,b) = sqrt((a1 – b1)² + (a2 – b2)²) vs. Dist(a,b) = sqrt((a1 – b1)² + (3a2 – 3b2)²) Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
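A minimal Python sketch of these two metrics (not from the slides; the function names, toy points, and the axis-weight vector are ours):

```python
import numpy as np

def euclidean(a, b):
    # Dist(a, b) = sqrt(sum_i (a_i - b_i)^2)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

def weighted_euclidean(a, b, weights):
    # Rescaling a coordinate (e.g., weight 3 on the second axis, as on the slide)
    # changes which points count as "near" and hence the decision surface.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.sqrt(np.sum((w * (a - b)) ** 2))

print(euclidean([0, 0], [3, 4]))                   # 5.0
print(weighted_euclidean([0, 0], [3, 4], [1, 3]))  # sqrt(9 + 144) ≈ 12.37
```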
Four Aspects of an Instance-Based Learner: • A distance metric • How many nearby neighbors to look at? • A weighting function (optional) • How to fit with the local points? Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
1-NN’s Four Aspects as an Instance-Based Learner: • A distance metric • Euclidean • How many nearby neighbors to look at? • One • A weighting function (optional) • Unused • How to fit with the local points? • Just predict the same output as the nearest neighbor. Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
Zen Gardens Mystery of renowned zen garden revealed [CNN Article] Thursday, September 26, 2002 Posted: 10:11 AM EDT (1411 GMT) LONDON (Reuters) -- For centuries visitors to the renowned Ryoanji Temple garden in Kyoto, Japan have been entranced and mystified by the simple arrangement of rocks. The five sparse clusters on a rectangle of raked gravel are said to be pleasing to the eyes of the hundreds of thousands of tourists who visit the garden each year. Scientists in Japan said on Wednesday they now believe they have discovered its mysterious appeal. "We have uncovered the implicit structure of the Ryoanji garden's visual ground and have shown that it includes an abstract, minimalist depiction of natural scenery," said Gert Van Tonder of Kyoto University. The researchers discovered that the empty space of the garden evokes a hidden image of a branching tree that is sensed by the unconscious mind. "We believe that the unconscious perception of this pattern contributes to the enigmatic appeal of the garden," Van Tonder added. He and his colleagues believe that whoever created the garden during the Muromachi era between 1333-1573 knew exactly what they were doing and placed the rocks around the tree image. By using a concept called medial-axis transformation, the scientists showed that the hidden branched tree converges on the main area from which the garden is viewed. The trunk leads to the prime viewing site in the ancient temple that once overlooked the garden. It is thought that abstract art may have a similar impact. "There is a growing realisation that scientific analysis can reveal unexpected structural features hidden in controversial abstract paintings," Van Tonder said Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
k-Nearest Neighbor • Generalizes 1-NN to smooth away noise in the labels • A new point is now assigned the most frequent label of its k nearest neighbors • Label it red when k = 3; label it blue when k = 7
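A minimal Python sketch of k-NN classification as described above (the helper name and the toy red/blue data are illustrative, not from the slides):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Return the most frequent label among the k training points nearest to x."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy data: the same query point can get different labels for different k,
# just as in the red/blue example above.
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3], [4, 4]])
y = np.array(['red', 'red', 'red', 'blue', 'blue', 'blue', 'blue'])
print(knn_predict(X, y, np.array([1, 1]), k=3))  # 'red'
print(knn_predict(X, y, np.array([1, 1]), k=7))  # 'blue'
```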
k-Nearest Neighbor (k = 9) • Commentary on the figures (k = 9 fits compared with 1-NN and with linear regression): • “Appalling behavior! Loses all the detail that 1-nearest neighbor would give. The tails are horrible!” • “A magnificent job of noise smoothing. Three cheers for 9-nearest-neighbor. But the lack of gradients and the jerkiness isn’t good.” • “Fits much less of the noise, captures trends. But still, frankly, pathetic compared with linear regression.” Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
Support Vector Machines and Kernels Doing Really Well with Linear Decision Surfaces Adapted from slides by Tim Oates Cognition, Robotics, and Learning (CORAL) Lab University of Maryland Baltimore County
Outline • Prediction • Why might predictions be wrong? • Support vector machines • Doing really well with linear models • Kernels • Making the non-linear linear
Supervised ML = Prediction • Given training instances (x,y) • Learn a model f • Such that f(x) = y • Use f to predict y for new x • Many variations on this basic theme
Why might predictions be wrong? • True Non-Determinism • Flip a biased coin • p(heads) = θ • Estimate θ • If θ > 0.5 predict heads, else tails • Lots of ML research on problems like this • Learn a model • Do the best you can in expectation
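A small Python sketch of this setup (the bias value 0.7 and variable names are made up for illustration): even with a perfect estimate of θ, some error remains because the process itself is random.

```python
import random

random.seed(0)
true_theta = 0.7                           # the coin's unknown bias p(heads)
flips = [random.random() < true_theta for _ in range(1000)]

theta_hat = sum(flips) / len(flips)        # estimate theta from training flips
prediction = 'heads' if theta_hat > 0.5 else 'tails'
print(round(theta_hat, 3), prediction)     # roughly 0.7, 'heads'

# Even this best-possible predictor is wrong about 30% of the time:
# the error comes from true non-determinism, not from a bad model.
```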
Why might predictions be wrong? • Partial Observability • Something needed to predict y is missing from observation x • N-bit parity problem • x contains N-1 bits (hard PO) • x contains N bits but learner ignores some of them (soft PO)
Why might predictions be wrong? • True non-determinism • Partial observability • hard, soft • Representational bias • Algorithmic bias • Bounded resources
Representational Bias • Having the right features (x) is crucial • (Figures: the same X/O training data plotted under two different feature representations)
Support Vector Machines Doing Really Well with Linear Decision Surfaces
Strengths of SVMs • Good generalization in theory • Good generalization in practice • Work well with few training instances • Find globally best model • Efficient algorithms • Amenable to the kernel trick
Linear Separators • Training instances • x ∈ ℝⁿ • y ∈ {-1, 1} • w ∈ ℝⁿ • b ∈ ℝ • Hyperplane • <w, x> + b = 0 • w1x1 + w2x2 + … + wnxn + b = 0 • Decision function • f(x) = sign(<w, x> + b) • Math Review • Inner (dot) product: • <a, b> = a · b = ∑i ai bi • = a1b1 + a2b2 + … + anbn
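A minimal Python sketch of the decision function f(x) = sign(<w, x> + b) (the example weight vector and points are ours, not from the slides):

```python
import numpy as np

def f(w, b, x):
    # Decision function f(x) = sign(<w, x> + b); the hyperplane itself
    # is the set of points where <w, x> + b = 0.
    return int(np.sign(np.dot(w, x) + b))

w, b = np.array([1.0, -1.0]), 0.0       # hyperplane x1 - x2 = 0
print(f(w, b, np.array([2.0, 1.0])))    # +1 (x1 > x2)
print(f(w, b, np.array([1.0, 2.0])))    # -1 (x1 < x2)
```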
Intuitions (series of figures: several different candidate linear separators drawn through the same X/O training data)
A “Good” Separator (figure)
Noise in the Observations (figure)
Ruling Out Some Separators (figure)
Lots of Noise (figure)
Maximizing the Margin (figure)
“Fat” Separators (figure)
Support Vectors (figure)
The Math • Training instances • x ∈ ℝⁿ • y ∈ {-1, 1} • Decision function • f(x) = sign(<w, x> + b) • w ∈ ℝⁿ • b ∈ ℝ • Find w and b that • Perfectly classify training instances • Assuming linear separability • Maximize margin
The Math • For perfect classification, we want • yi (<w, xi> + b) ≥ 0 for all i • Why? Because the sign of <w, xi> + b must agree with the label yi. • To maximize the margin, we want • w that minimizes |w|² (with the constraints rescaled to yi (<w, xi> + b) ≥ 1, the margin width is 2/|w|)
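A small Python helper (ours, not from the slides) that computes the geometric margin of a given separator, which is the quantity the |w|² objective implicitly maximizes:

```python
import numpy as np

def geometric_margin(w, b, X, y):
    # Distance from the hyperplane to the closest (correctly classified) point.
    # Under the canonical scaling y_i(<w, x_i> + b) >= 1 this equals 1/|w|,
    # which is why maximizing the margin means minimizing |w|^2.
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)

X = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(geometric_margin(np.array([1.0, 1.0]), 0.0, X, y))   # sqrt(2) ≈ 1.414
```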
Dual Optimization Problem • Maximize over α • W(α) = ∑i αi − 1/2 ∑i,j αi αj yi yj <xi, xj> • Subject to • αi ≥ 0 • ∑i αi yi = 0 • Decision function • f(x) = sign(∑i αi yi <x, xi> + b)
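A minimal Python sketch of the dual decision function (the α values below are made up for illustration; in practice they would come from a QP solver maximizing W(α)):

```python
import numpy as np

def dual_decision(alphas, X_train, y_train, b, x):
    # f(x) = sign(sum_i alpha_i * y_i * <x, x_i> + b).
    # Only support vectors (alpha_i > 0) contribute to the sum.
    scores = alphas * y_train * (X_train @ x)
    return int(np.sign(scores.sum() + b))

# Made-up alpha values; a real solver would maximize W(alpha) above.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alphas = np.array([0.25, 0.0, 0.25, 0.0])                       # two support vectors
print(dual_decision(alphas, X, y, 0.0, np.array([3.0, 0.5])))   # +1
```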
Strengths of SVMs • Good generalization in theory • Good generalization in practice • Work well with few training instances • Find globally best model • Efficient algorithms • Amenable to the kernel trick …
What if the Surface is Non-Linear? (figure: X points surrounded by O points, which no single line can separate) Image from http://www.atrandomresearch.com/iclass/
Kernel Methods Making the Non-Linear Linear
When Linear Separators Fail (figures: X/O data laid out as X X O O O O X X along x1 is not linearly separable in the original (x1, x2) space; replotting it against x1 and x1² makes it linearly separable)
Mapping into a New Feature Space • Rather than run SVM on xi, run it on Φ(xi) • Find non-linear separator in input space • What if Φ(xi) is really big? • Use kernels to compute it implicitly! • Φ: x → X = Φ(x) • Φ(x1,x2) = (x1, x2, x1², x2², x1x2) Image from http://web.engr.oregonstate.edu/~afern/classes/cs534/
Kernels • Find kernel K such that • K(x1,x2) = <Φ(x1), Φ(x2)> • Computing K(x1,x2) should be efficient, much more so than computing Φ(x1) and Φ(x2) • Use K(x1,x2) in the SVM algorithm rather than <x1,x2> • Remarkably, this is possible
The Polynomial Kernel • K(x1,x2) = <x1, x2>² • x1 = (x11, x12) • x2 = (x21, x22) • <x1, x2> = (x11x21 + x12x22) • <x1, x2>² = (x11²x21² + x12²x22² + 2x11x12x21x22) • Φ(x1) = (x11², x12², √2 x11x12) • Φ(x2) = (x21², x22², √2 x21x22) • K(x1,x2) = <Φ(x1), Φ(x2)>
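A quick numeric check of this identity in Python (the two vectors are arbitrary examples):

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel on 2-D inputs.
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 4.0])

k_implicit = np.dot(x1, x2) ** 2        # kernel computed in input space
k_explicit = np.dot(phi(x1), phi(x2))   # same value via the explicit feature map
print(k_implicit, k_explicit)           # 121.0 121.0
```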
The Polynomial Kernel • Φ(x) contains all monomials of degree d • Useful in visual pattern recognition • Number of monomials • 16×16 pixel image • ~10^10 monomials of degree 5 • Never explicitly compute Φ(x)! • Variation: K(x1,x2) = (<x1, x2> + 1)²
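A quick check of the ~10^10 figure, under the assumption that degree-5 monomials over the 256 pixel values are counted as multisets of size 5:

```python
from math import comb

n_inputs = 16 * 16      # 256 pixel values
degree = 5
# Monomials of degree 5 in 256 variables = multisets of size 5 drawn from 256.
n_monomials = comb(n_inputs + degree - 1, degree)
print(n_monomials)      # 9,525,431,552 -- on the order of 10^10
```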
A Few Good Kernels • Dot product kernel • K(x1,x2) = <x1, x2> • Polynomial kernel • K(x1,x2) = <x1, x2>^d (monomials of degree d) • K(x1,x2) = (<x1, x2> + 1)^d (all monomials of degree 1, 2, …, d) • Gaussian kernel • K(x1,x2) = exp(−|x1 − x2|² / 2σ²) • Radial basis functions • Sigmoid kernel • K(x1,x2) = tanh(<x1, x2> + θ) • Neural networks • Establishing “kernel-hood” from first principles is non-trivial
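Minimal Python versions of these kernels (parameter names d, c, sigma, theta follow the formulas above; the default values are ours):

```python
import numpy as np

def dot_kernel(x1, x2):
    return np.dot(x1, x2)

def poly_kernel(x1, x2, d=2, c=0.0):
    # c = 0: monomials of degree d; c = 1: all monomials of degree 1..d.
    return (np.dot(x1, x2) + c) ** d

def gaussian_kernel(x1, x2, sigma=1.0):
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x1, x2, theta=0.0):
    return np.tanh(np.dot(x1, x2) + theta)
```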
The Kernel Trick “Given an algorithm which is formulated in terms of a positive definite kernel K1, one can construct an alternative algorithm by replacing K1 with another positive definite kernel K2” • SVMs can use the kernel trick
Using a Different Kernel in the Dual Optimization Problem • For example, use the polynomial kernel with d = 4 (including lower-order terms). • The inner products <xi, xj> in the dual are themselves kernels, so by the kernel trick we just replace them with (<xi, xj> + 1)^4: • Maximize over α • W(α) = ∑i αi − 1/2 ∑i,j αi αj yi yj (<xi, xj> + 1)^4 • Subject to • αi ≥ 0 • ∑i αi yi = 0 • Decision function • f(x) = sign(∑i αi yi (<x, xi> + 1)^4 + b)
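A hedged sketch using scikit-learn's SVC, which performs this substitution internally; with gamma=1, kernel='poly', degree=4, coef0=1 its kernel is exactly (<xi, xj> + 1)^4 (the XOR-style toy data and the C value are our choices):

```python
import numpy as np
from sklearn.svm import SVC

# Toy XOR-style data: not linearly separable in the input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

# With gamma=1, degree=4, coef0=1 the polynomial kernel is (<xi, xj> + 1)^4.
clf = SVC(kernel='poly', degree=4, coef0=1, gamma=1, C=10.0)
clf.fit(X, y)
print(clf.predict(X))   # expected: [-1  1  1 -1] -- separable after the kernel trick
```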
Conclusion • SVMs find the maximum-margin linear separator • The kernel trick makes SVMs non-linear learning algorithms