Data Mining: Concepts and Techniques — Chapter 7 —
Chapter 7. Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• A Categorization of Major Clustering Methods
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Grid-Based Methods
• Model-Based Methods
• Clustering High-Dimensional Data
• Constraint-Based Clustering
• Outlier Analysis
• Summary
What is Cluster Analysis?
• Cluster: a collection of data objects
  • Similar to one another within the same cluster
  • Dissimilar to the objects in other clusters
• Cluster analysis
  • Finding similarities between data objects according to the characteristics found in the data, and grouping similar data objects into clusters
• Unsupervised learning: no predefined classes
• Typical applications
  • As a stand-alone tool to get insight into the data distribution
  • As a preprocessing step for other algorithms
Clustering: Rich Applications and Multidisciplinary Efforts
• Pattern recognition
• Spatial data analysis
  • Create thematic maps in GIS by clustering feature spaces
  • Detect spatial clusters and use them in other spatial mining tasks
• Image processing
• Economic science (especially market research)
• WWW
  • Document classification
  • Cluster Weblog data to discover groups of similar access patterns
Examples of Clustering Applications
• Marketing: Help marketers discover distinct groups in their customer bases, then use this knowledge to develop targeted marketing programs
• Land use: Identification of areas of similar land use in an earth observation database
• Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
• City planning: Identifying groups of houses according to their house type, value, and geographical location
• Earthquake studies: Observed earthquake epicenters should cluster along continental faults
Quality: What Is Good Clustering?
• A good clustering method will produce high-quality clusters with
  • high intra-class similarity
  • low inter-class similarity
• The quality of a clustering result depends on both the similarity measure used by the method and its implementation
• The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns
Measure the Quality of Clustering
• Dissimilarity/similarity metric: similarity is expressed in terms of a distance function, typically a metric d(i, j)
• There is a separate "quality" function that measures the "goodness" of a cluster
• The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables
• Weights should be associated with different variables based on applications and data semantics
• It is hard to define "similar enough" or "good enough"
  • The answer is typically highly subjective
Requirements of Clustering in Data Mining
• Scalability
• Ability to deal with different types of attributes
• Ability to handle dynamic data
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input parameters
• Ability to deal with noise and outliers
• Insensitivity to the order of input records
• Ability to handle high dimensionality
• Incorporation of user-specified constraints
• Interpretability and usability
Chapter 7. Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• A Categorization of Major Clustering Methods
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Grid-Based Methods
• Model-Based Methods
• Clustering High-Dimensional Data
• Constraint-Based Clustering
• Outlier Analysis
• Summary
Data Structures
• Data matrix (two modes): n objects described by p variables

  $$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$

• Dissimilarity matrix (one mode): pairwise distances between the n objects

  $$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
Types of Data in Cluster Analysis
• Interval-scaled variables
• Binary variables
• Nominal, ordinal, and ratio variables
• Variables of mixed types
Interval-Valued Variables
• Standardize the data
  • Calculate the mean absolute deviation

    $$s_f = \tfrac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$

    where $m_f = \tfrac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$

  • Calculate the standardized measurement (z-score)

    $$z_{if} = \frac{x_{if} - m_f}{s_f}$$

• Using the mean absolute deviation is more robust than using the standard deviation
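As a quick illustration of this standardization, here is a minimal NumPy sketch; the function name and array layout are illustrative, not from the original slides:

```python
import numpy as np

def standardize(X):
    """Standardize interval-scaled data (one column per variable) using
    the mean absolute deviation, as defined on the slide above."""
    m = X.mean(axis=0)              # per-variable mean m_f
    s = np.abs(X - m).mean(axis=0)  # mean absolute deviation s_f
    return (X - m) / s              # z-score z_if = (x_if - m_f) / s_f

# Example: three objects measured on two variables with very different scales
X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
print(standardize(X))  # both columns now on a comparable scale
```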
Similarity and Dissimilarity Between Objects
• Distances are normally used to measure the similarity or dissimilarity between two data objects
• A popular choice is the Minkowski distance

  $$d(i,j) = \sqrt[q]{|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q}$$

  where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects, and q is a positive integer
• If q = 1, d is the Manhattan distance

  $$d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$$
Similarity and Dissimilarity Between Objects (Cont.)
• If q = 2, d is the Euclidean distance

  $$d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$$

• Properties
  • d(i, j) ≥ 0
  • d(i, i) = 0
  • d(i, j) = d(j, i)
  • d(i, j) ≤ d(i, k) + d(k, j)
• One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures
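The whole Minkowski family is a one-liner to compute; a small sketch (illustrative names, NumPy assumed) covering the Manhattan (q = 1) and Euclidean (q = 2) cases:

```python
import numpy as np

def minkowski(x, y, q=2):
    """Minkowski distance between two p-dimensional data objects.
    q = 1 gives the Manhattan distance, q = 2 the Euclidean distance."""
    return float((np.abs(np.asarray(x) - np.asarray(y)) ** q).sum() ** (1.0 / q))

x, y = [1, 2, 3], [4, 0, 3]
print(minkowski(x, y, q=1))  # Manhattan: 5.0
print(minkowski(x, y, q=2))  # Euclidean: ~3.606
```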
Binary Variables
• A contingency table for binary data, comparing objects i and j:

                Object j
                1      0      sum
    Object i 1  q      r      q+r
             0  s      t      s+t
       sum      q+s    r+t    p

• Distance measure for symmetric binary variables:

  $$d(i,j) = \frac{r + s}{q + r + s + t}$$

• Distance measure for asymmetric binary variables:

  $$d(i,j) = \frac{r + s}{q + r + s}$$

• Jaccard coefficient (similarity measure for asymmetric binary variables):

  $$sim_{Jaccard}(i,j) = \frac{q}{q + r + s}$$
Dissimilarity Between Binary Variables
• Example (patient records with binary attributes)
  • gender is a symmetric attribute
  • the remaining attributes are asymmetric binary
  • let the values Y and P be set to 1, and the value N be set to 0
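A small sketch of the symmetric/asymmetric distances and the Jaccard coefficient via the q, r, s, t counts; the 0/1 vectors below are illustrative stand-ins, not the patient table from the original slide:

```python
def binary_dissimilarity(i, j, asymmetric=False):
    """Dissimilarity between two 0/1 vectors using the contingency counts."""
    q = sum(a == 1 and b == 1 for a, b in zip(i, j))  # both 1
    r = sum(a == 1 and b == 0 for a, b in zip(i, j))  # i only
    s = sum(a == 0 and b == 1 for a, b in zip(i, j))  # j only
    t = sum(a == 0 and b == 0 for a, b in zip(i, j))  # both 0
    if asymmetric:                     # negative matches t are ignored
        return (r + s) / (q + r + s)
    return (r + s) / (q + r + s + t)

def jaccard(i, j):
    """Jaccard similarity for asymmetric binary variables."""
    return 1.0 - binary_dissimilarity(i, j, asymmetric=True)

a = [1, 0, 1, 0, 0, 0]
b = [1, 0, 1, 0, 1, 0]
print(binary_dissimilarity(a, b, asymmetric=True))  # 1/3
print(jaccard(a, b))                                # 2/3
```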
Nominal Variables
• A generalization of the binary variable in that it can take more than two states, e.g., red, yellow, blue, green
• Method 1: simple matching
  • m: # of matches, p: total # of variables

    $$d(i,j) = \frac{p - m}{p}$$

• Method 2: use a large number of binary variables
  • Create a new binary variable for each of the M nominal states
Ordinal Variables
• An ordinal variable can be discrete or continuous
• Order is important, e.g., rank
• Can be treated like an interval-scaled variable
  • Replace $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
  • Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by

    $$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$

  • Compute the dissimilarity using methods for interval-scaled variables
Ratio-Scaled Variables
• Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as $Ae^{Bt}$ or $Ae^{-Bt}$
• Methods:
  • Treat them like interval-scaled variables: not a good choice! (why? the scale can be distorted)
  • Apply a logarithmic transformation: $y_{if} = \log(x_{if})$
  • Treat them as continuous ordinal data, and treat their ranks as interval-scaled
Variables of Mixed Types
• A database may contain all six types of variables
  • symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio
• One may use a weighted formula to combine their effects

  $$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$

  • If f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, and $d_{ij}^{(f)} = 1$ otherwise
  • If f is interval-based: use the normalized distance
  • If f is ordinal or ratio-scaled
    • Compute the ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
    • Treat $z_{if}$ as interval-scaled
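A sketch of the weighted mixed-type formula under simplifying assumptions: interval variables are assumed pre-normalized to [0, 1], ordinal/ratio variables are passed as ranks with $M_f$ states, missing values are encoded as None, and all names are illustrative:

```python
def mixed_dissimilarity(x, y, types, M=None):
    """d(i,j) = sum_f delta_ij^(f) * d_ij^(f) / sum_f delta_ij^(f),
    skipping variables where either value is missing (delta = 0)."""
    num, den = 0.0, 0.0
    for f, (xf, yf) in enumerate(zip(x, y)):
        if xf is None or yf is None:      # delta_ij^(f) = 0
            continue
        kind = types[f]
        if kind in ("binary", "nominal"):
            d = 0.0 if xf == yf else 1.0
        elif kind == "interval":          # assumed normalized to [0, 1]
            d = abs(xf - yf)
        elif kind == "ordinal":           # z = (r - 1) / (M - 1), then interval
            d = abs((xf - 1) / (M[f] - 1) - (yf - 1) / (M[f] - 1))
        num += d
        den += 1.0
    return num / den

x = ["red", 1, 0.2, 3]
y = ["blue", 1, 0.5, 1]
types = ["nominal", "binary", "interval", "ordinal"]
print(mixed_dissimilarity(x, y, types, M={3: 5}))  # (1 + 0 + 0.3 + 0.5) / 4 = 0.45
```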
Vector Objects
• Vector objects: keywords in documents, gene features in micro-arrays, etc.
• Broad applications: information retrieval, biological taxonomy, etc.
• Cosine measure

  $$s(d_1, d_2) = \frac{d_1 \cdot d_2}{\|d_1\|\,\|d_2\|}$$

• A variant: the Tanimoto coefficient

  $$s(d_1, d_2) = \frac{d_1 \cdot d_2}{d_1 \cdot d_1 + d_2 \cdot d_2 - d_1 \cdot d_2}$$
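Both measures reduce to a few dot products; a minimal sketch with illustrative keyword-count vectors:

```python
import numpy as np

def cosine(d1, d2):
    """Cosine similarity between two vectors (e.g., keyword counts)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    return d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))

def tanimoto(d1, d2):
    """Tanimoto coefficient, a variant of the cosine measure."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    dot = d1 @ d2
    return dot / (d1 @ d1 + d2 @ d2 - dot)

x = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
y = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(cosine(x, y))    # ~0.315
print(tanimoto(x, y))  # 5/43 ~ 0.116
```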
Chapter 7. Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• A Categorization of Major Clustering Methods
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Grid-Based Methods
• Model-Based Methods
• Clustering High-Dimensional Data
• Constraint-Based Clustering
• Outlier Analysis
• Summary
Major Clustering Approaches (I)
• Partitioning approach:
  • Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
  • Typical methods: k-means, k-medoids, CLARANS
• Hierarchical approach:
  • Create a hierarchical decomposition of the set of data (or objects) using some criterion
  • Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON
• Density-based approach:
  • Based on connectivity and density functions
  • Typical methods: DBSCAN, OPTICS, DenClue
Major Clustering Approaches (II)
• Grid-based approach:
  • Based on a multiple-level granularity structure
  • Typical methods: STING, WaveCluster, CLIQUE
• Model-based approach:
  • A model is hypothesized for each of the clusters, and the method finds the best fit of the data to the given model
  • Typical methods: EM, SOM, COBWEB
• Frequent pattern-based approach:
  • Based on the analysis of frequent patterns
  • Typical methods: pCluster
• User-guided or constraint-based approach:
  • Clustering by considering user-specified or application-specific constraints
  • Typical methods: COD (obstacles), constrained clustering
Typical Alternatives to Calculate the Distance Between Clusters
• Single link: smallest distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \min_{t_{ip} \in K_i,\, t_{jq} \in K_j} dis(t_{ip}, t_{jq})$
• Complete link: largest distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \max_{t_{ip} \in K_i,\, t_{jq} \in K_j} dis(t_{ip}, t_{jq})$
• Average: average distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \operatorname{avg}_{t_{ip} \in K_i,\, t_{jq} \in K_j} dis(t_{ip}, t_{jq})$
• Centroid: distance between the centroids of two clusters, i.e., $dis(K_i, K_j) = dis(C_i, C_j)$
• Medoid: distance between the medoids of two clusters, i.e., $dis(K_i, K_j) = dis(M_i, M_j)$
  • Medoid: one chosen, centrally located object in the cluster
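These alternatives are easy to compute from raw points; a sketch assuming a Euclidean metric and clusters given as point arrays (names are illustrative):

```python
import numpy as np
from itertools import product

def cluster_distance(Ki, Kj, how="single"):
    """Inter-cluster distance under the alternatives listed above."""
    Ki, Kj = np.asarray(Ki, float), np.asarray(Kj, float)
    if how == "centroid":                       # distance between centroids
        return np.linalg.norm(Ki.mean(axis=0) - Kj.mean(axis=0))
    pair = [np.linalg.norm(p - q) for p, q in product(Ki, Kj)]
    return {"single": min,                      # smallest pairwise distance
            "complete": max,                    # largest pairwise distance
            "average": lambda d: sum(d) / len(d)}[how](pair)

A = [[0, 0], [1, 0]]
B = [[4, 0], [6, 0]]
print(cluster_distance(A, B, "single"))    # 3.0
print(cluster_distance(A, B, "complete"))  # 6.0
print(cluster_distance(A, B, "centroid"))  # 4.5
```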
Centroid, Radius, and Diameter of a Cluster (for numerical data sets)
• Centroid: the "middle" of a cluster

  $$C_m = \frac{\sum_{i=1}^{N} t_i}{N}$$

• Radius: square root of the average squared distance from the points of the cluster to its centroid

  $$R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_i - C_m)^2}{N}}$$

• Diameter: square root of the average squared distance between all pairs of points in the cluster

  $$D_m = \sqrt{\frac{\sum_{i=1}^{N} \sum_{j=1}^{N} (t_i - t_j)^2}{N(N-1)}}$$
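A sketch computing all three statistics for a numeric cluster, assuming Euclidean distances:

```python
import numpy as np

def centroid_radius_diameter(pts):
    """Centroid, radius, and diameter of a cluster per the definitions above."""
    pts = np.asarray(pts, float)
    N = len(pts)
    Cm = pts.mean(axis=0)                               # centroid
    Rm = np.sqrt(((pts - Cm) ** 2).sum(axis=1).mean())  # radius
    diffs = pts[:, None, :] - pts[None, :, :]           # all N*N pair differences
    Dm = np.sqrt((diffs ** 2).sum(axis=2).sum() / (N * (N - 1)))  # diameter
    return Cm, Rm, Dm

print(centroid_radius_diameter([[0, 0], [2, 0], [0, 2], [2, 2]]))
```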
Chapter 7. Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• A Categorization of Major Clustering Methods
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Grid-Based Methods
• Model-Based Methods
• Clustering High-Dimensional Data
• Constraint-Based Clustering
• Outlier Analysis
• Summary
Partitioning Algorithms: Basic Concept
• Partitioning method: construct a partition of a database D of n objects into a set of k clusters such that the sum of squared distances is minimized

  $$E = \sum_{m=1}^{k} \sum_{t_i \in K_m} (C_m - t_i)^2$$

• Given k, find a partition of k clusters that optimizes the chosen partitioning criterion
  • Global optimum: exhaustively enumerate all partitions
  • Heuristic methods: the k-means and k-medoids algorithms
  • k-means (MacQueen '67): each cluster is represented by the center of the cluster
  • k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw '87): each cluster is represented by one of the objects in the cluster
The K-Means Clustering Method
• Given k, the k-means algorithm is implemented in four steps:
  1. Partition the objects into k nonempty subsets
  2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., the mean point, of the cluster)
  3. Assign each object to the cluster with the nearest seed point
  4. Go back to Step 2; stop when there are no more new assignments
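A minimal Lloyd-style sketch of these four steps; initializing the seed points with randomly chosen objects is an assumption, and empty clusters are not handled:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means sketch following the four steps above."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centers = X[rng.choice(len(X), k, replace=False)]  # initial seed points
    for _ in range(max_iter):
        # Step 3: assign each object to the cluster with the nearest center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        # Step 2: recompute the centroids of the current partition
        new_centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
        if np.allclose(new_centers, centers):          # stop: no more change
            break
        centers = new_centers
    return labels, centers

X = [[1, 1], [1.5, 2], [8, 8], [9, 8.5], [1, 0.5], [8.5, 9]]
labels, centers = kmeans(X, k=2)
print(labels)
print(centers)
```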
The K-Means Clustering Method: Example
• With K = 2: arbitrarily choose K objects as the initial cluster centers; assign each object to the most similar center; update the cluster means; reassign the objects; repeat the update/reassign cycle until the assignments no longer change
Comments on the K-Means Method
• Strength: relatively efficient: O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations; normally k, t << n
  • For comparison: PAM is O(k(n-k)²) per iteration; CLARA is O(ks² + k(n-k))
• Comment: often terminates at a local optimum; the global optimum may be found using techniques such as deterministic annealing and genetic algorithms
• Weaknesses
  • Applicable only when the mean is defined; what about categorical data?
  • Need to specify k, the number of clusters, in advance
  • Unable to handle noisy data and outliers
  • Not suitable for discovering clusters with non-convex shapes
Variations of the K-Means Method
• A few variants of k-means differ in
  • Selection of the initial k means
  • Dissimilarity calculations
  • Strategies to calculate cluster means
• Handling categorical data: k-modes (Huang '98)
  • Replacing the means of clusters with modes
  • Using new dissimilarity measures to deal with categorical objects
  • Using a frequency-based method to update the modes of clusters
  • For a mixture of categorical and numerical data: the k-prototype method
What Is the Problem with the K-Means Method?
• The k-means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data
• K-medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in the cluster
The K-Medoids Clustering Method
• Find representative objects, called medoids, in clusters
• PAM (Partitioning Around Medoids, 1987)
  • Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if doing so improves the total distance of the resulting clustering
  • PAM works effectively for small data sets but does not scale well to large data sets
• CLARA (Kaufmann & Rousseeuw, 1990)
• CLARANS (Ng & Han, 1994): randomized sampling
• Focusing + spatial data structure (Ester et al., 1995)
A Typical K-Medoids Algorithm (PAM)
• With K = 2: arbitrarily choose k objects as the initial medoids and assign each remaining object to the nearest medoid; then, in a loop, randomly select a non-medoid object O_random, compute the total cost of swapping it with a medoid (the slide's example compares total costs of 20 and 26), and perform the swap only if the quality is improved; repeat until there is no change
PAM (Partitioning Around Medoids) (1987)
• PAM (Kaufman and Rousseeuw, 1987), built into S-Plus
• Uses real objects to represent the clusters
  1. Select k representative objects arbitrarily
  2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih
  3. For each pair of i and h,
    • If TC_ih < 0, i is replaced by h
    • Then assign each non-selected object to the most similar representative object
  4. Repeat steps 2-3 until there is no change
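A compact sketch of these steps: evaluate every (medoid i, non-medoid h) swap and accept it whenever the total cost decreases (TC_ih < 0). The cost function, initialization, and names are illustrative assumptions:

```python
import numpy as np
from itertools import product

def pam(X, k, seed=0):
    """Minimal PAM sketch: greedy medoid swapping until no improvement."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)   # pairwise distances
    medoids = list(rng.choice(len(X), k, replace=False))
    cost = lambda med: D[:, med].min(axis=1).sum()     # total distance to nearest medoid
    improved = True
    while improved:
        improved = False
        current = cost(medoids)
        for i, h in product(list(medoids), range(len(X))):
            if h in medoids:
                continue
            trial = [h if m == i else m for m in medoids]
            if cost(trial) < current:                  # i.e., TC_ih < 0
                medoids, current, improved = trial, cost(trial), True
    labels = D[:, medoids].argmin(axis=1)              # final assignment
    return medoids, labels

X = [[1, 1], [2, 1], [9, 9], [8, 9], [1, 2], [100, 100]]
print(pam(X, k=2))  # the outlier cannot drag a medoid the way it drags a mean
```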
PAM Clustering: Total Swapping Cost
• $TC_{ih} = \sum_j C_{jih}$, where $C_{jih}$ is the change in cost contributed by each non-selected object j when medoid i is swapped with non-medoid h
• [Figure: the four reassignment cases for an object j when i is swapped with h, involving j, the medoids i and t, and the candidate h]
What Is the Problem with PAM?
• PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean
• PAM works efficiently for small data sets but does not scale well to large data sets
  • O(k(n-k)²) per iteration, where n is the number of data objects and k is the number of clusters
• Sampling-based method: CLARA (Clustering LARge Applications)
CLARA (Clustering LARge Applications) (1990)
• CLARA (Kaufmann and Rousseeuw, 1990)
  • Built into statistical analysis packages, such as S+
• It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output
• Strength: deals with larger data sets than PAM
• Weaknesses:
  • Efficiency depends on the sample size
  • A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased
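A CLARA sketch reusing the pam function from the earlier sketch: cluster several random samples, then score each resulting medoid set on the whole data set and keep the best (the sample count and sample size are illustrative parameters):

```python
import numpy as np

def clara(X, k, n_samples=5, sample_size=40, seed=0):
    """Minimal CLARA sketch: PAM on samples, evaluated on the full data set."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    best_medoids, best_cost = None, np.inf
    for _ in range(n_samples):
        idx = rng.choice(len(X), min(sample_size, len(X)), replace=False)
        sample_medoids, _ = pam(X[idx], k)     # cluster the sample only
        medoids = idx[sample_medoids]          # map back to full-data indices
        D = np.linalg.norm(X[:, None] - X[medoids][None], axis=2)
        total = D.min(axis=1).sum()            # cost over *all* objects
        if total < best_cost:                  # keep the best clustering
            best_medoids, best_cost = medoids, total
    return best_medoids, best_cost
```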
CLARANS ("Randomized" CLARA) (1994)
• CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han '94)
• CLARANS draws a sample of neighbors dynamically
• The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids
• If a local optimum is found, CLARANS starts from a new, randomly selected node in search of a new local optimum
• It is more efficient and scalable than both PAM and CLARA
• Focusing techniques and spatial access structures may further improve its performance (Ester et al. '95)
Chapter 7. Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• A Categorization of Major Clustering Methods
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Grid-Based Methods
• Model-Based Methods
• Clustering High-Dimensional Data
• Constraint-Based Clustering
• Outlier Analysis
• Summary
Hierarchical Clustering
• Uses the distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition
• [Figure: objects a, b, c, d, e are merged step by step (steps 0-4) by the agglomerative approach (AGNES), and split in the reverse order by the divisive approach (DIANA)]
AGNES (Agglomerative Nesting)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., S-Plus
• Uses the single-link method and the dissimilarity matrix
• Merges the nodes that have the least dissimilarity
• Continues in a non-descending fashion
• Eventually all nodes belong to the same cluster
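A sketch of single-link agglomeration in the spirit of AGNES: start from singleton clusters and repeatedly merge the pair with the least dissimilarity, logging each merge (the brute-force loop is for clarity, not efficiency; names are illustrative):

```python
import numpy as np

def agnes_single_link(X, k_stop=1):
    """Agglomerative single-link clustering sketch; returns the merge log
    (which encodes the dendrogram) and the final clusters."""
    X = np.asarray(X, float)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)   # pairwise distances
    clusters = [[i] for i in range(len(X))]            # start: singletons
    merges = []
    while len(clusters) > k_stop:
        best = None
        for a in range(len(clusters)):                 # least-dissimilar pair
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))   # record the merge
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges, clusters

merges, _ = agnes_single_link([[0, 0], [0, 1], [5, 5], [5, 6], [10, 10]])
for m in merges:
    print(m)   # merges happen in non-descending order of dissimilarity
```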
Dendrogram: Shows How the Clusters Are Merged
• Decompose the data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram
• A clustering of the data objects is obtained by cutting the dendrogram at the desired level: each connected component then forms a cluster
DIANA (Divisive Analysis)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., S-Plus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own
Recent Hierarchical Clustering Methods
• Major weaknesses of agglomerative clustering methods
  • Do not scale well: time complexity of at least O(n²), where n is the number of total objects
  • Can never undo what was done previously
• Integration of hierarchical clustering with distance-based clustering
  • BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
  • ROCK (1999): clusters categorical data by neighbor and link analysis
  • CHAMELEON (1999): hierarchical clustering using dynamic modeling
BIRCH (1996)
• BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD '96)
• Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
  • Phase 1: scan the DB to build an initial in-memory CF-tree (a multi-level compression of the data that tries to preserve its inherent clustering structure)
  • Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
• Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
• Weaknesses: handles only numeric data, and is sensitive to the order of the data records
Clustering Feature Vector in BIRCH
• Clustering feature: CF = (N, LS, SS)
  • N: number of data points
  • LS: linear sum of the N points, $LS = \sum_{i=1}^{N} X_i$
  • SS: square sum of the N points, $SS = \sum_{i=1}^{N} X_i^2$
• Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16,30), (54,190))
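The CF vector is a one-pass summary, and CFs are additive (merging two sub-clusters just adds their components). This sketch reproduces the slide's example:

```python
import numpy as np

def clustering_feature(points):
    """Compute the BIRCH clustering feature CF = (N, LS, SS) of a sub-cluster."""
    pts = np.asarray(points, float)
    N = len(pts)
    LS = pts.sum(axis=0)           # linear sum of the points
    SS = (pts ** 2).sum(axis=0)    # per-dimension square sum
    return N, LS, SS

# Reproduces the slide's example: CF = (5, (16, 30), (54, 190))
print(clustering_feature([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]))
```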
CF-Tree in BIRCH
• Clustering feature:
  • A summary of the statistics for a given sub-cluster: the 0th, 1st, and 2nd moments of the sub-cluster from the statistical point of view
  • Registers crucial measurements for computing clusters and utilizes storage efficiently
• A CF-tree is a height-balanced tree that stores the clustering features for a hierarchical clustering
  • A nonleaf node in the tree has descendants or "children"
  • The nonleaf nodes store the sums of the CFs of their children
• A CF-tree has two parameters
  • Branching factor: the maximum number of children
  • Threshold: the maximum diameter of the sub-clusters stored at the leaf nodes