This talk outlines the concept of core-sets for weighted and outlier-resistant clustering problems with tame loss functions. It covers various clustering scenarios such as k-median, weighted k-median, k-median with penalties, k-line median, and more. The main technical result presented is a method for computing weighted core-sets for loss functions, tackling the challenge of handling arbitrary-weight centers. The talk delves into sensitivity analysis, algorithmic approaches, and the recursive robust median method for generating efficient core-sets in clustering tasks. Overall, the talk highlights the significance of core-sets in addressing complex clustering objectives efficiently.
Data reduction for weighted and outlier-resistant clustering
Leonard J. Schulman (Caltech), joint with Dan Feldman (MIT)
Talk outline
• Clustering-type problems:
  • k-median
  • weighted k-median
  • k-median with m outliers (small m)
  • k-median with penalty (clustering with many outliers)
  • k-line median
• Unifying framework: tame loss functions
• Core-sets, a.k.a. ε-approximations
• Common existence proof and algorithm
k-Median with penalty: good for outliers
[figure: 2-median clustering of a data set]
[figure: the same data set plus an outlier]
[figure: the same data set clustered with the h-robust loss function]
Why are all these problems in the same paper?
In each case the objective function is a suitably tame "loss function". The loss in representing a point p by a center c (of weight w) is:
• k-median: D(p) = dist(p,c)
• Weighted k-median: D(p) = w · dist(p,c)
• Robust k-median: D(p) = min{h, dist(p,c)}
What qualifies as a "tame" loss function?
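A minimal sketch (not from the slides) of the three loss functions on the real line, using a toy point set, a hypothetical center weight w, and a hypothetical penalty threshold h; it only illustrates how the robust loss caps an outlier's contribution to the clustering cost:

```python
def d_kmedian(p, c):
    return abs(p - c)                       # plain k-median loss

def d_weighted(p, c, w=2.0):                # w: hypothetical center weight
    return w * abs(p - c)                   # weighted k-median loss

def d_robust(p, c, h=1.0):                  # h: hypothetical penalty threshold
    return min(h, abs(p - c))               # k-median with penalty (h-robust loss)

def cost(P, C, loss):
    # each point is charged to its cheapest center; sum over all points
    return sum(min(loss(p, c) for c in C) for p in P)

P = [0.0, 0.1, 0.2, 5.0, 5.1, 100.0]        # last point is an outlier
C = [0.1, 5.05]                             # two candidate centers
print(cost(P, C, d_kmedian))                # ~95.25: the outlier dominates
print(cost(P, C, d_weighted))               # ~190.5: same, scaled by w
print(cost(P, C, d_robust))                 # ~1.3: the outlier contributes at most h
```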
Many examples of LgLgLp loss functions: robust M-estimators in statistics
[figure: common robust M-estimator loss functions; figure credit: Z. Zhang]
Weighted-k-clustering core-set for loss D
Handling arbitrary-weight centers is the "hard part".
Our main technical result
• For every LgLgLp loss function D on a metric space and every set P of n points, there is a weighted-(D,k)-core-set S of size |S| = O(log² n).
  (In more detail: |S| = (d·k^O(k)/ε²)·log² n in R^d. For finite metrics, d = log n.)
• S can be computed in time O(n).
Sensitivity [Langberg and S, SODA'11]
The sensitivity of a point p ∈ P determines how important it is to include p in a core-set:
s(p) = max_C  D_W(p,C) / Σ_{q∈P} D_W(q,C)
Why this works: If s(p) is small, then p has many "surrogates" in the data, and we can take any one of them for the core-set. If s(p) is large, then there is some query C for which p alone contributes a significant fraction of the loss, so we need to include p in any core-set.
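As a toy illustration of the definition only (not the talk's algorithm, which never enumerates queries), one can compute s(p) by brute force over a small family of candidate queries; here the queries are the k-subsets of P itself and D_W is taken as an unweighted nearest-center distance:

```python
from itertools import combinations

def loss(p, C):
    # loss of point p against a query C (a set of centers): distance to nearest center
    return min(abs(p - c) for c in C)

def sensitivities(P, k):
    # s(p) = max over queries C of  loss(p, C) / sum_q loss(q, C),
    # maximized here only over queries whose centers are drawn from P itself
    s = {p: 0.0 for p in P}
    for C in combinations(P, k):
        total = sum(loss(q, C) for q in P)
        if total == 0:
            continue
        for p in P:
            s[p] = max(s[p], loss(p, C) / total)
    return s

P = [0.0, 0.1, 0.2, 5.0, 5.1, 100.0]
print(sensitivities(P, k=2))   # the outlier 100.0 gets sensitivity near 1:
                               # it must be included in any core-set
```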
Total sensitivity
The total sensitivity T(P) is the sum of the sensitivities of all the points:
T(P) = Σ_{p∈P} s(p)
The total sensitivity of the problem is the maximum of T(P) over all input sets P.
Total sensitivity ~ n: cannot have small core-sets.
Total sensitivity constant or polylog: there may exist small core-sets.
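For context, the standard way sensitivity bounds are turned into a core-set is importance sampling: sample points with probability proportional to their sensitivity bounds and reweight the sample. The required sample size grows with T(P), which is why bounding the total sensitivity matters. A hedged sketch of that generic recipe (the talk's construction may differ in its exact sampling scheme and weights):

```python
import random

def sensitivity_sample(P, s, m):
    # sample m points with probability proportional to the sensitivity bound s(p),
    # and weight each sample by 1/(m * Pr[p]) so that the weighted loss of the
    # sample is an unbiased estimate of the loss of all of P for any query
    T = sum(s[p] for p in P)
    probs = {p: s[p] / T for p in P}
    sample = random.choices(P, weights=[probs[p] for p in P], k=m)
    return [(p, 1.0 / (m * probs[p])) for p in sample]
```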
The main thing we need to do in order to produce a small core-set for weighted-k-median:
For each p ∈ P, compute a good upper bound on s(p), in amortized O(1) time per point.
(The upper bounds should be good enough that their sum, i.e. the resulting bound on the total sensitivity, is small.)
Algorithm for computing sensitivities
Recursive-Robust-Median(P,k)
• Input:
  • A set P of n points in a metric space
  • An integer k ≥ 1
• Output:
  • A subset Q ⊆ P of Ω(n/k^k) points
We prove that any two points in Q can serve as each other's surrogates w.r.t. any query. Hence each point p ∈ Q has sensitivity s(p) ≤ O(1/|Q|).
Outer loop: Call Recursive-Robust-Median(P,k), then set P := P − Q. Repeat until P is empty.
Total sensitivity bound: T(P) ≤ O(# calls to Recursive-Robust-Median) ≤ O(k^k log n).
A detail
Actually it's more complicated than described, because we can't afford to look for a (1+ε)-approximation, or even a 2-approximation, to the best k-median of any b·n points (b constant). Instead we look for a bicriteria approximation: a 2-approximation of the best k-median of any b·n/2 points. A linear-time algorithm is given in [Feldman, Langberg STOC'11].
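Putting the last two slides together, a hedged sketch of the outer loop only; recursive_robust_median below is a placeholder standing in for the talk's subroutine (which internally uses the bicriteria approximation), not an implementation of it:

```python
def recursive_robust_median(P, k):
    # PLACEHOLDER ONLY: the real subroutine returns a set Q of roughly |P|/k^k
    # points that are mutual surrogates w.r.t. any query. Here we just peel off
    # an arbitrary block of that size so the outer loop runs end to end.
    m = max(1, len(P) // (k ** k))
    return P[:m]

def sensitivity_upper_bounds(P, k):
    # Outer loop from the slide: call the subroutine, assign each returned point
    # the bound s(p) <= O(1/|Q|), remove Q from P, and repeat until P is empty.
    bounds = {}
    remaining = list(P)
    while remaining:
        Q = recursive_robust_median(remaining, k)
        for p in Q:
            bounds[p] = 1.0 / len(Q)          # up to constant factors
        remaining = [p for p in remaining if p not in Q]
    return bounds

print(sensitivity_upper_bounds(list(range(20)), k=2))
```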
High-level intuition for the correctness of Recursive-Robust-Median
Consider any p in the "output" set Q. If D(p,C) is small for all queries C, then p has low sensitivity. If there is a query C for which D(p,C) is large, then in that query all points of Q are assigned to the same center c ∈ C, and are closer to each other than to c; so they are surrogates for one another.
Thank you