Gaussian Kernel Width Exploration and Cone Cluster Labeling For Support Vector Clustering • Department of Computer Science, University of Massachusetts Lowell • Sei-Hyung Lee, Karen Daniels • Nov. 28, 2007
Outline • Clustering Overview • SVC Background and Related Work • Selection of Gaussian Kernel Widths • Cone Cluster Labeling • Comparisons • Contributions • Future Work
Clustering Overview • Clustering • discovering natural groups in data • Clustering problems arise in • bioinformatics • patterns of gene expression • data mining/compression • pattern recognition/classification
Definition of Clustering • Definition by Everitt (1974): “A cluster is a set of entities which are alike, and entities from different clusters are not alike.” • If we assume that the objects to be clustered are represented as points in the measurement space, then “Clusters may be described as connected regions of a multi-dimensional space containing a relatively high density of points, separated from other such regions by a region containing a relatively low density of points.”
Sample Clustering Taxonomy (Zaiane 1999) • Partitioning (fixed number of clusters k) • Hierarchical • Density-based • Grid-based • Model-based: Statistical (COBWEB), Neural Network (SOM) • Hybrids are also possible. http://www.cs.ualberta.ca/~zaiane/courses/cmput690/slides/ (Chapter 8)
Comparison of Clustering Techniques (table) • k = number of clusters, i = number of iterations, N = number of data points, Nsv = number of support vectors, Nbsv = number of bounded support vectors. SVC time is for a single combination of parameters.
Jain et al. Taxonomy (1999) • Cross-cutting issues: Agglomerative vs. Divisive; Monothetic vs. Polythetic (sequential feature consideration); Hard vs. Fuzzy; Deterministic vs. Stochastic; Incremental vs. Non-incremental • Single-link distance between 2 clusters = minimum of distances between all inter-cluster pairs; complete-link distance between 2 clusters = maximum of distances between all inter-cluster pairs
More Recent Clustering Surveys • Clustering Large Datasets (Mercer 2003) • Hybrid Methods: e.g. Distribution-Based Clustering Algorithm for Clustering Large Spatial Datasets (Xu et al. 1998) • Hybrid: model-based, density-based, grid-based • Doctoral Thesis (Lee 2005) • Boundary-Detecting Methods: • AUTOCLUST (Estivill-Castro et al. 2000) • Voronoi modeling and Delaunay triangulation • Random Walks (Harel et al. 2001) • Delaunay triangulation modeling and k-nearest-neighbors • Random walk in weighted graph • Support Vector Clustering (Ben-Hur et al. 2001) • One-class Support Vector Machine + cluster labeling
Overview of SVM • Map non-linearly separable data into a feature space where they are linearly separable, via a non-linear mapping Φ • Class of hyperplanes: ⟨ω, Φ(x)⟩ + b = 0, where ω is the normal vector of a hyperplane and b is the offset from the origin
Overview of SVC • Support Vector Clustering (SVC) • Clustering algorithm using (one-class) SVM • Able to handle arbitrarily shaped clusters • Able to handle outliers • Able to handle high dimensions, but… • Needs input parameters • For the kernel function that defines the inner product in feature space, e.g. the Gaussian kernel width q in K(xi, xj) = exp(−q‖xi − xj‖²) • Soft margin C to control outliers
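To make the role of q concrete, here is a minimal NumPy sketch (not from the slides; names illustrative) of the Gaussian kernel matrix used throughout SVC:

```python
import numpy as np

def gaussian_kernel_matrix(X, q):
    """K[i, j] = exp(-q * ||x_i - x_j||^2) for all pairs of rows of X."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_norms = np.sum(X * X, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    np.maximum(sq_dists, 0.0, out=sq_dists)   # guard against tiny negative round-off
    return np.exp(-q * sq_dists)

# Larger q -> narrower kernel -> more support vectors and, typically, more clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
print(gaussian_kernel_matrix(X, q=1.0))
```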
SVC Main Idea • R: radius of the minimal hyper-sphere in feature space • a: center of the sphere • R(x): distance between Φ(x) and a • BSV: data x outside the sphere, R(x) > R; Num(BSV) is controlled by C • SV: data x on the surface of the sphere, R(x) = R; Num(SV) is controlled by q • Others: data x inside the sphere, R(x) < R • [figure: Gaussian kernel maps x to Φ(x) on the unit ball in feature space, with sphere center a and radius R] • Data space contours are not explicitly available. “Attract” the hyper-plane onto data points instead of “repel.”
Find Minimal Hyper-sphere (with BSVs) • Primal: minimize R² + C Σj ξj subject to ‖Φ(xj) − a‖² ≤ R² + ξj and ξj ≥ 0 for all j • Wolfe dual: maximize W = Σj βj K(xj, xj) − Σi,j βi βj K(xi, xj) subject to 0 ≤ βj ≤ C and Σj βj = 1 • Only points on the boundary contribute (0 < βj < C).
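The dual above is a small constrained quadratic program. The sketch below (not part of the slides) solves it with a general-purpose SciPy solver and classifies points by their multipliers according to the usual KKT conditions; a dedicated QP or SMO-style solver would normally be used, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def solve_svc_dual(K, C):
    """Wolfe dual of the minimal enclosing hyper-sphere:
       maximize sum_j b_j K_jj - sum_ij b_i b_j K_ij  s.t.  0 <= b_j <= C, sum_j b_j = 1."""
    N = K.shape[0]
    diag_K = np.diag(K)
    objective = lambda b: -(b @ diag_K - b @ K @ b)                  # negate: SciPy minimizes
    constraints = [{"type": "eq", "fun": lambda b: np.sum(b) - 1.0}]
    result = minimize(objective, np.full(N, 1.0 / N),
                      bounds=[(0.0, C)] * N, constraints=constraints, method="SLSQP")
    return result.x

def classify_points(beta, C, tol=1e-6):
    """KKT view: beta ~ C -> BSV (outside), 0 < beta < C -> SV (on surface), beta ~ 0 -> inside."""
    bsv = beta > C - tol
    sv = (beta > tol) & ~bsv
    return sv, bsv
```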
Relationship Between Minimal Hyper-sphere and Cluster Contours • R: radius of the minimal hyper-sphere • a: center of the sphere • R(x): distance between Φ(x) and a • Cluster contours in data space are the level set {x : R(x) = R}; points with R(x) < R fall inside them • Challenge: contour boundaries are not explicitly available • Number of clusters increases with increasing q
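R(x) is computable entirely through the kernel, even though the contours themselves are implicit. A minimal sketch for the Gaussian kernel (assumed names and shapes):

```python
import numpy as np

def squared_feature_distance(x, X, beta, q):
    """R^2(x) = K(x,x) - 2 sum_j beta_j K(x_j, x) + sum_ij beta_i beta_j K(x_i, x_j),
       the squared feature-space distance from Phi(x) to the sphere center a."""
    k_xx = 1.0                                              # Gaussian kernel: K(x, x) = 1
    k_xX = np.exp(-q * np.sum((X - x) ** 2, axis=1))        # K(x_j, x) for every data point x_j
    K = np.exp(-q * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
    return k_xx - 2.0 * beta @ k_xX + beta @ K @ beta

# The sphere radius R is R(v) at any support vector v (0 < beta_v < C);
# a point x lies inside a cluster contour exactly when R(x) <= R.
```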
SVC High-Level Pseudo-Code

SVC(X)
  q ← initial value; C ← initial value (= 1)
  loop
    K ← computeKernel(X, q)
    β ← solveLagrangian(K, C)
    clusters ← clusterLabeling(X, β)
    if clustering result is satisfactory, exit
    choose new q and/or C
  end loop
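A Python rendering of the same driver loop, assuming the helper functions sketched elsewhere in this document (gaussian_kernel_matrix, solve_svc_dual, cluster_labeling); the stopping test and the q-update rule here are placeholders, not the thesis's method:

```python
def svc(X, q=1.0, C=1.0, max_rounds=10, is_satisfactory=None):
    """High-level SVC driver: kernel -> dual (Lagrangian) -> cluster labeling, over (q, C) choices."""
    is_satisfactory = is_satisfactory or (lambda labels: True)   # default: stop after one round
    for _ in range(max_rounds):
        K = gaussian_kernel_matrix(X, q)            # compute kernel
        beta = solve_svc_dual(K, C)                 # solve the Lagrangian (Wolfe dual)
        labels = cluster_labeling(X, beta, q, C)    # e.g. sample-points technique or CCL
        if is_satisfactory(labels):
            break
        q *= 2.0   # placeholder update; the thesis generates q values with a secant-like method
    return labels, q, C
```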
Previous Work on SVC • Tax and Duin (1999): Novelty detection using (one-class) SVM • SVC suggested by A. Ben-Hur, V. Vapnik, et al. (2001): Complete Graph (CG), Support Vector Graph (SVG) • J. Yang, et al. (2002): Proximity Graph (PG) • J. Park, et al. (2004): Spectral Graph Partitioning • J. Lee, et al. (2005): Gradient Descent (GD) • W. Puma-Villanueva et al. (2005): Ensembles • S. Lee and K. Daniels (2004, 2005, 2006, 2007): Kernel width exploration and fast cluster labeling
Gradient Descent (GD) • [figure: support vectors, non-SV data points, and the stable equilibrium points they converge to]
Traditional Sample Points Technique • CG, SVG, PG, and GD use this technique. • To decide whether xi and xj belong to the same cluster, test m sample points on the line segment between them; the pair is connected only if every sample point lies inside the contours (see the sketch below). • [figure: cases ① and ② disconnected, case ③ connected]
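A minimal sketch of the pairwise sampling check and the resulting complete-graph labeling, which is where the O(N² Nsv m) cost of CG comes from; in_sphere is assumed to wrap the R(x) ≤ R test from the earlier snippet, and all names are illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def connected(x_i, x_j, in_sphere, m=10):
    """x_i and x_j are declared connected only if all m samples on the segment stay inside the contours."""
    return all(in_sphere(x_i + t * (x_j - x_i)) for t in np.linspace(0.0, 1.0, m))

def label_by_sampling(X, in_sphere, m=10):
    """Complete-graph labeling: test every pair, then take connected components of the adjacency matrix."""
    N = len(X)
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(i + 1, N):
            A[i, j] = A[j, i] = connected(X[i], X[j], in_sphere, m)
    _, labels = connected_components(csr_matrix(A), directed=False)
    return labels
```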
Problems of the Sample Points Technique • [figures: a False Negative and a False Positive connectivity decision for a pair xi, xj caused by the sample points]
Problems of SVC • Difficult to find appropriate q and C: no guidance for choosing q and C; too much trial and error • Slow cluster labeling: O(N² Nsv m) time for the CG method, where m is the number of sample points on the line segment connecting any pair of data points; the general size of the Delaunay triangulation in d dimensions is O(N^⌈d/2⌉) • Bad performance in high dimensions: as the number of principal components is increased, there is a performance degradation
Our q Exploration • Lemmas: if q = 0, then R² = 0; if q = ∞, then βi = 1/N for all i ∈ {1,…, N}; if q = ∞, then R² = 1 − 1/N; R² = 1 iff q = ∞ and N = ∞; if N is finite, then R² ≤ 1 − 1/N < 1 • Theorem: under certain circumstances, R² is a monotonically nondecreasing function of q • Secant-like algorithm generates the list of q values (see the sketch below)
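The slides do not spell out the secant-like algorithm, so the following is only a schematic illustration of the idea: exploit the monotone R²(q) curve and its finite-N bound 1 − 1/N to extrapolate each next q value along a secant. R2_of_q is assumed to solve the dual and return R² for a given q; nothing here is the thesis's exact procedure.

```python
def q_list_secant(R2_of_q, N, q1=1.0, max_len=50, eps=1e-3):
    """Schematic secant-like generation of a q list from the monotone curve R^2(q)."""
    target = 1.0 - 1.0 / N                     # R^2 is bounded above by 1 - 1/N for finite N
    qs, r2s = [0.0, q1], [0.0, R2_of_q(q1)]    # lemma: R^2(0) = 0
    while len(qs) < max_len and target - r2s[-1] > eps:
        slope = (r2s[-1] - r2s[-2]) / (qs[-1] - qs[-2])
        if slope <= 0.0:                       # curve has flattened; stop
            break
        q_next = qs[-1] + (target - r2s[-1]) / slope   # secant step toward the bound
        qs.append(q_next)
        r2s.append(R2_of_q(q_next))
    return qs[1:]
```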
q-list Length Analysis • The estimated q-list length depends only on spatial characteristics of the data set, not on the dimensionality of the data set or the number of data points • 89% accuracy w.r.t. the actual q-list length for all datasets considered
Our Recent q Exploration Work • The R²(q) curve typically has one critical radius of curvature, at q* • Approximate q* directly (without cluster labeling) • Use it as the starting q value in the sequence
q Exploration Results • 2D: on average the actual number is 32% of the estimate and 22% of the secant length • Higher dimensions: on average the actual number is 112% of the estimate and 82% of the secant length
Cone Cluster Labeling (CCL) • Motivation: avoid line-segment sampling • Approach: leverage the geometry of the feature space • For the Gaussian kernel: images of all data points are on the surface of the unit ball in feature space; a hyper-sphere in data space corresponds to a cone in feature space with apex at the origin • [figures: low-dimensional view of the high-dimensional feature space; sample 2D data space]
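Both Gaussian-kernel facts can be checked numerically in a few lines; this illustrative snippet verifies that every feature image has unit norm and that the kernel value is the cosine of the angle between two images:

```python
import numpy as np

q = 0.5
x, y = np.array([0.2, 1.0]), np.array([1.5, -0.3])

def k(a, b):
    """Gaussian kernel K(a, b) = exp(-q * ||a - b||^2) = <Phi(a), Phi(b)>."""
    return np.exp(-q * np.sum((a - b) ** 2))

print(k(x, x), k(y, y))   # both 1.0: every Phi(x) lies on the surface of the unit ball
print(k(x, y))            # in (0, 1]: equals cos(angle between Phi(x), Phi(y)), since norms are 1
```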
Cone Cluster Labeling • [figure: feature-space cones for support vector images vi and vj, each with base angle θ]
Cone Cluster Labeling • Cone base angles are all equal (the angle θ in the figure) • The cones have a′ in common • The Pythagorean Theorem holds in feature space • To derive the data-space hyper-sphere radius, use the Gaussian kernel relation between feature-space angles and data-space distances (see the sketch below)
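A structural sketch of the CCL labeling stage built on these observations. The radius in the comment follows from the Gaussian kernel relation exp(−q‖v − x‖²) ≥ cos θ, but cos_theta itself (the cone base angle) and every helper name are assumptions here, not the exact quantities derived in the thesis:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cone_cluster_labeling(X, support_vectors, cos_theta, q):
    """Each support vector v gets a data-space hyper-sphere of radius Z, where
       exp(-q ||v - x||^2) >= cos(theta)  <=>  ||v - x||^2 <= -ln(cos(theta)) / q.
       Support vectors with overlapping spheres share a cluster; every point then
       takes the label of its nearest support vector."""
    Z = np.sqrt(-np.log(cos_theta) / q)      # common radius: cone base angles are all equal
    V = support_vectors
    D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    _, sv_labels = connected_components(csr_matrix(D <= 2.0 * Z), directed=False)
    nearest_sv = np.argmin(np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2), axis=1)
    return sv_labels[nearest_sv]
```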
Cone Cluster Labeling • [figures: data-space hyper-sphere coverings P′ for q = 0.137 and q = 0.003]
Sample Higher Dimensional CCL Results in “Heat Map” Form • N = 12, d = 9, 3 clusters • N = 30, d = 25, 5 clusters • N = 205, d = 200, 5 clusters
Comparison – Cluster Labeling Algorithms (table) • m: the number of sample points • i: the number of iterations for convergence • Time is for a single (q, C) combination.
Contributions • Automatically generate Gaussian kernel width values • includes appropriate width values for our test data sets • yields reasonable cluster results from the q-list • Faster cluster labeling method • faster than the other SVC cluster labeling algorithms compared • good clustering quality
Future Work “The presence or absence of robust, efficient parallel clustering techniques will determine the success or failure of cluster analysis in large-scale data mining applications in the future.” - Jain et al. 1999 Make SVC scalable!