
Nonparametric Density Estimation: k-Nearest-Neighbor (kNN)

Learn about the k-nearest-neighbor (kNN) method in nonparametric density estimation, the distance metrics it uses, its computational complexity, and methods to improve its efficiency. Explore techniques such as partial distance calculation, prestructuring of samples, and editing of prototypes to enhance kNN performance.


Presentation Transcript


  1. ECE471-571 – Pattern Recognition, Lecture 10 – Nonparametric Density Estimation: k-Nearest-Neighbor (kNN). Hairong Qi, Gonzalez Family Professor, Electrical Engineering and Computer Science, University of Tennessee, Knoxville. http://www.eecs.utk.edu/faculty/qi Email: hqi@utk.edu

  2. Pattern Classification (course roadmap)
  • Statistical Approach
    • Supervised: Bayesian decision rule (MPP, LR, discriminant functions); parameter estimation (ML, BL); non-parametric learning (kNN); LDF (Perceptron); NN (BP); Support Vector Machine; Deep Learning (DL)
    • Unsupervised: basic concept of distance; agglomerative method; k-means; winner-takes-all; Kohonen maps; mean-shift
  • Non-Statistical Approach: decision tree; syntactic approach
  • Dimensionality Reduction: FLD, PCA
  • Performance Evaluation: ROC curve (TP, TN, FN, FP); cross validation
  • Stochastic Methods: local optimization (GD); global optimization (SA, GA)
  • Classifier Fusion: majority voting; NB, BKS

  3. Review – Bayes Decision Rule • Maximum posterior probability: if P(ω1|x) > P(ω2|x), then x belongs to class 1; otherwise, class 2 • Likelihood ratio • Discriminant function: Case 1: minimum Euclidean distance (linear machine), Σi = σ²I; Case 2: minimum Mahalanobis distance (linear machine), Σi = Σ; Case 3: quadratic classifier, Σi arbitrary • Parameter estimation: Gaussian, two-modal Gaussian • Dimensionality reduction • Performance evaluation and ROC curve
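As a quick reference, the decision rules named above take the standard forms below (these are the usual textbook expressions, not reproduced from the slide image):

```latex
% Maximum posterior probability (MPP): decide \omega_1 if
P(\omega_1 \mid x) > P(\omega_2 \mid x)

% Equivalent likelihood-ratio test: decide \omega_1 if
\frac{p(x \mid \omega_1)}{p(x \mid \omega_2)} > \frac{P(\omega_2)}{P(\omega_1)}
```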

  4. Motivation • Estimate the density functions without the assumption that the pdf has a particular form

  5. *Problems with Parzen Windows • The window function is discontinuous (one remedy: use a Gaussian window) • The choice of the window width h • Still another issue: the volume is fixed
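For reference, the Parzen-window estimate these issues refer to has the standard form below, with window function φ, window width hn, and cell volume Vn (a textbook expression, not copied from the slide):

```latex
p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n}\,
         \varphi\!\left(\frac{x - x_i}{h_n}\right),
\qquad V_n = h_n^{\,d}
```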

  6. kNN (k-Nearest Neighbor) • To estimate p(x) from n samples, we can center a cell at x and let it grow until it contains kn samples, where kn can be some function of n • Normally, we let kn = √n (the resulting estimate is written out below)
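Written out, this is the standard kNN density estimate, where Vn is the volume of the cell centered at x that just encloses the kn nearest samples:

```latex
p_n(x) = \frac{k_n / n}{V_n}
```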

  7. kNN in Classification • Given c training sets from the c classes, the total number of samples is n = n1 + n2 + … + nc • *Given a point x at which we wish to determine the statistics, we find the hypersphere of volume V that just encloses k points from the combined set. If, within that volume, km of those points belong to class m, then we estimate the density for class m as written out below
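The class-conditional estimate referred to above, together with the posterior it implies, is usually written as follows (standard forms; nm denotes the number of training samples in class m):

```latex
p(x \mid \omega_m) \approx \frac{k_m}{n_m V},
\qquad
P(\omega_m \mid x) \approx \frac{k_m}{k}
```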

  8. kNN Classification Rule • The decision rule tells us to look in a neighborhood of the unknown feature vector for its k nearest samples. If, within that neighborhood, more samples lie in class i than in any other class, we assign the unknown to class i (a sketch of this rule follows).
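Below is a minimal sketch of this rule in plain NumPy. The function name, the toy data, and the use of the Euclidean metric are illustrative choices, not part of the original lecture.

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=5):
    """Assign x to the class most common among its k nearest training samples."""
    # Distances from x to every stored training sample (Euclidean).
    dists = np.linalg.norm(train_X - x, axis=1)
    # Indices of the k closest samples.
    nearest = np.argsort(dists)[:k]
    # Majority vote among their labels.
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy example: two Gaussian clouds in 2-D.
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),
                     rng.normal([3, 3], 1.0, (50, 2))])
train_y = np.array([0] * 50 + [1] * 50)
print(knn_classify(np.array([2.5, 2.0]), train_X, train_y, k=5))
```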

  9. Problems • What kind of distance should be used to measure "nearest"? The Euclidean metric is a reasonable choice • Computational burden • Massive storage burden • The distance from the unknown to every stored training sample must be computed

  10. Computational Complexity of kNN • High in both space (storage) and time (search) • Algorithms for reducing the computational burden: • Computing partial distances • Prestructuring • Editing the stored prototypes

  11. Method 1 – Partial Distance • Calculate the distance using only some subset r of the full d dimensions. If this partial distance is already too large, we do not need to compute further (see the sketch below) • This works because what we know about the subspace is indicative of the full space
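A sketch of the partial-distance idea for nearest-neighbor search, assuming squared Euclidean distance so that partial sums can only grow; the function name and loop structure are illustrative:

```python
import numpy as np

def partial_distance_nn(x, train_X):
    """Find the index of the nearest neighbor of x, abandoning each candidate
    as soon as its accumulated squared distance exceeds the best found so far."""
    best_idx, best_so_far = -1, np.inf
    for i, s in enumerate(train_X):
        acc = 0.0
        for j in range(len(x)):            # accumulate one dimension at a time
            acc += (x[j] - s[j]) ** 2
            if acc >= best_so_far:         # partial distance already too large
                break
        else:                              # all d dimensions used and still smaller
            best_so_far, best_idx = acc, i
    return best_idx
```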

  12. Method 2 – Prestructuring • Prestructure the samples in the training set into a search tree in which samples are selectively linked • Distances are calculated between the test sample and a few stored "root" samples, and then only the samples linked to the closest root are considered • Finding the true closest distance is no longer guaranteed. [The slide shows a scatter of x and o samples illustrating the linked structure.]
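One widely used concrete form of prestructuring is a k-d tree. The sketch below uses scipy.spatial.KDTree as a readily available implementation; an exact k-d tree query does return the true nearest neighbors, unlike the simpler root-linking scheme on the slide, but it illustrates the same idea of organizing the prototypes before any query arrives. The data and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(1)
train_X = rng.normal(size=(1000, 4))   # 1000 stored prototypes in 4-D

tree = KDTree(train_X)                 # build the search structure once
query = rng.normal(size=4)
dists, idxs = tree.query(query, k=3)   # distances and indices of the 3 nearest
print(idxs)
```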

  13. Method 3 – Editing/Pruning/Condensing • Eliminate "useless" samples during training • For example, eliminate samples that are surrounded by training points of the same category label • This leaves the decision boundaries unchanged (one such heuristic is sketched below)
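A minimal sketch of one editing heuristic in the spirit of the slide: discard every training sample whose k nearest neighbors (among the remaining samples) all share its label, since such interior points do not shape the decision boundary. The function name and the default k=5 are illustrative assumptions.

```python
import numpy as np

def edit_prototypes(train_X, train_y, k=5):
    """Return a reduced prototype set, keeping only samples that lie near
    points of another class (i.e., near the decision boundary)."""
    keep = []
    for i, x in enumerate(train_X):
        dists = np.linalg.norm(train_X - x, axis=1)
        dists[i] = np.inf                        # exclude the sample itself
        neighbors = np.argsort(dists)[:k]
        if not np.all(train_y[neighbors] == train_y[i]):
            keep.append(i)                       # some neighbor disagrees: keep it
    return train_X[keep], train_y[keep]
```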

  14. Discussion • Combining three methods • Other concerns • The choice of k • The choice of metrics

  15. Distance Metrics Used by kNN • Properties of a metric: • Nonnegativity: D(a,b) >= 0 • Reflexivity: D(a,b) = 0 iff a = b • Symmetry: D(a,b) = D(b,a) • Triangle inequality: D(a,b) + D(b,c) >= D(a,c) • Different metrics: • Minkowski metric • Manhattan distance (city-block distance) • Euclidean distance • When the order of the Minkowski metric tends to infinity, the distance becomes the maximum of the projected distances onto each of the d coordinate axes
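In symbols, the Minkowski family listed above is defined as follows (standard definition; the order is written as p here to avoid clashing with the k used for the number of neighbors):

```latex
L_p(\mathbf{a}, \mathbf{b}) = \left( \sum_{i=1}^{d} |a_i - b_i|^p \right)^{1/p},
\qquad
p = 1:\ \text{Manhattan},\quad
p = 2:\ \text{Euclidean},\quad
p \to \infty:\ \max_i |a_i - b_i|
```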

  16. Visualizing the Distances
