Chapter 12: Cluster analysis and segmentation of customers
Commercial applications
• A chain of radio stores uses cluster analysis to identify three customer types with varying needs.
• An insurance company uses cluster analysis to classify customers into segments such as the “self-confident customer”, the “price-conscious customer”, etc.
• A producer of copying machines succeeds in classifying industrial customers into “satisfied” and “non-satisfied or quarrelling” customers.
Figure 11.1 Relatedness of multivariate methods: cluster analysis and factor analysis. (The figure shows one input data matrix with observations Obs. 1 … Obs. m as rows and variables X1, X2 … Xn as columns: cluster analysis classifies the rows, i.e. the observations, while factor analysis classifies the columns, i.e. the variables.)
Dependence and independence methods
Dependence methods: we assume that a variable (e.g. Y) depends on (is caused or determined by) other variables (X1, X2, etc.). Examples: regression, ANOVA, discriminant analysis.
Independence methods: we do not assume that any variable is caused or determined by the others; basically, we only have X1, X2 … Xn (but no Y). Examples: cluster analysis, factor analysis, etc.
Dependence and independence methods
Dependence methods: the model is defined a priori (prior to the survey and/or estimation). Examples: regression, ANOVA, discriminant analysis.
Independence methods: the model is defined a posteriori (after the survey and/or estimation has been carried out). Examples: cluster analysis, factor analysis, etc. When using independence methods we let the data speak for themselves!
Dependence method: multiple regression. The primary focus is on the variables!
Independence method: cluster analysis. The primary focus is on the observations! (The figure shows the observations grouped into Cluster 1, Cluster 2 and Cluster 3.)
Cluster analysis output: a new cluster variable with a cluster number for each respondent.
Cluster analysis: a cross-tab between the cluster variable and background variables + opinions is established. (Example segment labels from the figure: “Younger male nerds”, “Core families with traditional values”, “Senior relaxers”.)
Cluster profiling (hypothetical): the figure plots mean agreement scores (scale 1–5, where 1 = totally agree) on the statements “Buy ecological food”, “Advertisements funny” and “Low price important” for Cluster 1, the “ecological shopper”, against Cluster 2, the “traditional shopper”. Note: finally, the clusters’ respective media behaviour needs to be uncovered.
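As an illustration of this profiling step, here is a minimal Python/pandas sketch. The DataFrame `df`, its cluster variable and the background/opinion columns are all invented for the example:

```python
# Minimal profiling sketch: a hypothetical respondent table with the new
# cluster variable plus invented background and opinion columns.
import pandas as pd

df = pd.DataFrame({
    "cluster":             [1, 1, 2, 2, 2, 3, 3],
    "gender":              ["m", "m", "f", "m", "f", "f", "m"],
    "buy_ecological":      [1, 2, 4, 5, 4, 3, 3],   # 1 = totally agree ... 5
    "low_price_important": [4, 5, 1, 2, 1, 3, 3],
})

# Cross-tab between the cluster variable and a background variable
print(pd.crosstab(df["cluster"], df["gender"]))

# Cluster profiles: mean opinion scores per cluster
print(df.groupby("cluster")[["buy_ecological", "low_price_important"]].mean())
```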
Governing principle: maximization of homogeneity within clusters and, simultaneously, maximization of heterogeneity across clusters.
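To make the principle concrete, a minimal NumPy sketch (data and labels invented) computes the within-cluster sum of squares, which homogeneity asks us to minimize, and the between-cluster sum of squares, which heterogeneity asks us to maximize:

```python
import numpy as np

def within_between_ss(X, labels):
    """Within- and between-cluster sums of squares for a given partition."""
    grand_mean = X.mean(axis=0)
    within = between = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        centroid = Xk.mean(axis=0)
        within += ((Xk - centroid) ** 2).sum()          # homogeneity: small
        between += len(Xk) * ((centroid - grand_mean) ** 2).sum()  # large
    return within, between

X = np.array([[1.0, 1.2], [0.9, 1.0], [5.0, 5.1], [5.2, 4.9]])
labels = np.array([0, 0, 1, 1])
w, b = within_between_ss(X, labels)
print(f"within-cluster SS = {w:.3f}, between-cluster SS = {b:.3f}")
```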
Figure 12.1 Overview of clustering methods
Non-overlapping (exclusive) methods:
• Hierarchical
 – Agglomerative: linkage methods (average: between (1), within (2), weighted; single: ordinary (3), density, two-stage density; complete (4)); centroid methods (centroid (5), median (6)); variance methods (Ward (7))
 – Divisive
• Non-hierarchical / partitioning / k-means: sequential threshold, parallel threshold, neural networks, optimized partitioning (8)
Overlapping methods:
• Overlapping k-centroids, overlapping k-means, latent class techniques, fuzzy clustering, Q-type factor analysis (9)
Note: Methods in italics are available in SPSS. Neural networks necessitate SPSS’ data mining tool Clementine.
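As a sketch of the partitioning/k-means branch of Figure 12.1, here is a minimal example using scikit-learn rather than SPSS; the toy data are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

# Six invented observations forming two obvious groups
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]], dtype=float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster membership:", km.labels_)        # the new cluster variable
print("cluster centroids:\n", km.cluster_centers_)
```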
Figure 12.2 Illustration of important clustering issues in Figure 12.1: non-overlapping versus overlapping methods, hierarchical (agglomerative vs divisive) versus non-hierarchical methods, and the distance criteria: single linkage (minimum distance), complete linkage (maximum distance), average linkage (average distance), centroid method (distance between centres) and Ward’s method (minimization of within-cluster variance).
Euclidean distance between points A = (x1, y1) and B = (x2, y2) (default in SPSS): d = √[(x2 − x1)² + (y2 − y1)²]. Other distances available in SPSS: city-block, which uses absolute differences instead of squared differences of coordinates; moreover Minkowski distance, cosine distance, Chebychev distance and Pearson correlation.
Euclidean distance, worked example for A = (1, 2) and B = (3, 5): d = √[(3 − 1)² + (5 − 2)²] = √13 ≈ 3.61.
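A minimal Python sketch reproducing this worked example, together with the city-block alternative mentioned above:

```python
import math

A, B = (1, 2), (3, 5)

# Euclidean: square root of summed squared coordinate differences
euclidean = math.sqrt((B[0] - A[0]) ** 2 + (B[1] - A[1]) ** 2)

# City-block: absolute, not squared, coordinate differences
city_block = abs(B[0] - A[0]) + abs(B[1] - A[1])

print(f"Euclidean:  {euclidean:.2f}")   # 3.61, as in the slide
print(f"City-block: {city_block}")      # 2 + 3 = 5
```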
Which two pairs of points are to be clustered first? (Scatter plot of points A–H.)
Maybe A/B and D/E (depending on the algorithm!).
Quo vadis, C? (To which cluster should point C go?)
How does one decide which cluster a “newcomer” point is to join? By measuring distances from the point to existing clusters or points:
• “Farthest neighbour” (complete linkage)
• “Nearest neighbour” (single linkage)
• “Neighbourhood centre” (average linkage)
Quo vadis, C? (continued): the figure annotates the distances from C to the members of clusters A/B and D/E, ranging from 7.0 to 12.0.
Complete linkage: minimize the longest distance from cluster to point. In the figure, d(C, A/B) = 10.5 and d(C, D/E) = 9.5, so C joins D/E.
Average linkage: minimize the average distance from cluster to point. In the figure, d(C, A/B) = 8.5 and d(C, D/E) = 9.0, so C joins A/B.
Single linkage: minimize the shortest distance from cluster to point. In the figure, d(C, A/B) = 7.0 and d(C, D/E) = 8.5, so C joins A/B.
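The three criteria can be compared in a minimal Python sketch. The point-to-point distances below are hypothetical values loosely based on the figures (d(C,A) = 7.0, d(C,B) = 10.5, d(C,D) = 8.5, d(C,E) = 9.5), chosen so the three rules reproduce the conclusions above:

```python
# Hypothetical point-to-point distances (loosely based on the slides)
dist = {("C", "A"): 7.0, ("C", "B"): 10.5,
        ("C", "D"): 8.5, ("C", "E"): 9.5}

def to_cluster(point, cluster, rule):
    """Distance from one point to a cluster under a given linkage rule."""
    d = [dist[(point, member)] for member in cluster]
    agg = {"single": min,                          # nearest neighbour
           "complete": max,                        # farthest neighbour
           "average": lambda v: sum(v) / len(v)}   # neighbourhood centre
    return agg[rule](d)

for rule in ("single", "complete", "average"):
    d_ab = to_cluster("C", ("A", "B"), rule)
    d_de = to_cluster("C", ("D", "E"), rule)
    print(f"{rule:8s}: d(C,A/B) = {d_ab:5.2f}, d(C,D/E) = {d_de:5.2f}"
          f" -> C joins {'A/B' if d_ab < d_de else 'D/E'}")
```

Single and average linkage send C to A/B while complete linkage sends it to D/E, so the choice of linkage criterion can change the final clustering.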
Single linkage pitfall: chaining (snake-like clusters). Once cluster formation begins, the closest remaining observation is always put into the existing cluster; as a result, A and C may merge into the same cluster while omitting B!
Single linkage advantage: a good outlier detection and removal procedure in cases with “noisy” data sets. (The figure shows a large “entropy group” of scattered points plus a few isolated outliers.)
Cluster analysis: more potential pitfalls & problems
• Do our data permit the use of means at all?
• Some methods (e.g. Ward’s) are biased toward producing clusters with approximately the same number of observations.
• Other methods (e.g. centroid) require truly metric-scaled data as input. So, strictly speaking, it is not allowable to use such an algorithm when clustering data containing interval scales (Likert or semantic differential scales).
Cluster analysis: a small artificial example with six points (the figure highlights four pairwise distances: 0.42, 0.58, 0.68 and 0.92). Note: 6 points yield n(n − 1)/2 = (6 · 5)/2 = 15 possible pairwise distances.
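A minimal sketch of the pairwise-distance count, assuming six invented 2-D points; SciPy's pdist returns the condensed vector of all n(n − 1)/2 distances:

```python
import numpy as np
from scipy.spatial.distance import pdist

points = np.random.default_rng(0).random((6, 2))  # six hypothetical points
d = pdist(points)                # condensed vector of Euclidean distances
print(len(d))                    # 15 = 6 * 5 / 2
```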
Dendrogram. Step 0: each observation (OBS 1–6) is treated as a separate cluster. (Horizontal axis: distance measure, from 0.2 to 1.0.)
Dendrogram (continued). Step 1: the two observations with the smallest pairwise distance are joined into Cluster 1.
Dendrogram (continued). Step 2: the two observations with the smallest distance among the remaining points/clusters are joined into Cluster 2.
Dendrogram (continued). Step 3: observation 3 joins Cluster 1.
Dendrogram (continued). Step 4: Clusters 1 and 2 from Step 3 are joined into a “supercluster”; a single observation remains unclustered (an outlier).
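The agglomerative steps above can be reproduced with a minimal SciPy sketch; the coordinates for OBS 1–6 are invented so that two pairs merge first, observation 3 then joins the first cluster, and one outlier merges last:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

X = np.array([[0.0, 0.0], [0.1, 0.1],    # OBS 1, 2: merge first (Cluster 1)
              [0.3, 0.2],                # OBS 3: joins Cluster 1 later
              [1.0, 1.0], [1.1, 0.9],    # OBS 4, 5: merge next (Cluster 2)
              [3.0, 3.0]])               # OBS 6: outlier, merged last

Z = linkage(X, method="single", metric="euclidean")  # agglomerative merges
dendrogram(Z, labels=[f"OBS {i}" for i in range(1, 7)])
plt.xlabel("observation"); plt.ylabel("distance")
plt.show()
```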
Textbooks in cluster analysis
• Brian S. Everitt, Cluster Analysis, 1981
• Maurice Lorr, Cluster Analysis for Social Scientists, 1983
• Charles Romesburg, Cluster Analysis for Researchers, 1984
• Aldenderfer and Blashfield, Cluster Analysis, 1984
Case: clustering of beer brands
• Brand profiles based on the 17 semantic differential scales.
• Purpose: to determine the market structure in terms of similar/different brands.
• Hypothesis: the clustering reflects the competitive structure among brands, as driven by consumer behaviour.
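A minimal sketch of how such a brand clustering might be run, assuming a hypothetical brands × 17-scales profile matrix (brand names and scores invented) and Ward's method:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
brands = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]
profiles = rng.random((len(brands), 17))   # 5 brands x 17 invented scale scores

Z = linkage(profiles, method="ward")       # Ward's variance method
membership = fcluster(Z, t=2, criterion="maxclust")  # cut tree into 2 clusters
for brand, c in zip(brands, membership):
    print(brand, "-> cluster", c)
```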