(Rare) Category Detection Using Hierarchical Mean Shift Pavan Vatturi (vatturi@eecs.oregonstate.edu) Weng-Keen Wong (wong@eecs.oregonstate.edu)
1. Introduction • Applications for surveillance, monitoring, scientific discovery and data cleaning require anomaly detection • Anomalies often identified as statistically unusual data points • Many anomalies are simply irrelevant or correspond to known sources of noise
1. Introduction Figure: example images from the Sloan Digital Sky Survey (http://www.sdss.org/iotw/archive.html) • Known objects: 99.9% of the data • Anomalies: 0.1% of the data • Of the anomalies, ~99% are uninteresting and ~1% are interesting Pelleg, D. (2004). Scalable and Practical Probability Density Estimators for Scientific Anomaly Detection. PhD Thesis, Carnegie Mellon University.
1. Introduction Category Detection [Pelleg and Moore 2004]: human-in-the-loop exploratory data analysis. Loop: Data Set → Build Model → Spot Interesting Data Points → Ask User to Label Categories of Interesting Data Points → Update Model with Labels → (repeat)
1. Introduction • User can: • Label a query data point under an existing category • Or declare the data point to belong to a previously undeclared category
1. Introduction • Goal: present to user a single instance from each category in as few queries as possible • Difficult to detect rare categories if class imbalance is severe • Interested in rare categories for anomaly detection
Outline • Introduction • Related Work • Background • Methodology • Results • Conclusion / Future Work
2. Related Work • Interleave [Pelleg and Moore 2004] • Nearest-Neighbor-based active learning for rare-category detection for multiple classes [He and Carbonell 2008] • Multiple output identification [Fine and Mansour 2006]
3. Background: Mean Shift [Fukunaga and Hostetler 1975] Figure: a query point is pulled toward the center of mass of the reference data set; the mean shift vector (computed with kernel k) follows the density gradient.
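The equation on this slide did not survive extraction. As a sketch of what was likely shown, the standard mean shift vector at query point x, with bandwidth h and profile g (the negative derivative of kernel k's profile), is:

```latex
m_h(x) \;=\; \frac{\displaystyle\sum_{i=1}^{n} x_i \, g\!\left(\left\|\tfrac{x - x_i}{h}\right\|^2\right)}
                  {\displaystyle\sum_{i=1}^{n} g\!\left(\left\|\tfrac{x - x_i}{h}\right\|^2\right)} \;-\; x
```

The first term is the kernel-weighted center of mass of the reference points around x; subtracting x gives the shift, which points in the direction of increasing density.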
3. Background: Mean Shift [Fukunaga and Hostetler 1975] Figure: repeated mean shift steps move the query point to its local center of mass until it converges to a cluster center.
3. Background: Mean Shift Blurring • Blurring: the query points are the same as the reference data set • Each iteration therefore moves every point, progressively blurring the original data set
3. Background: Mean Shift End result of applying mean shift to a synthetic data set
4. Methodology: Overview • Sphere the data • Hierarchical Mean Shift • Query user
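The first step, sphering, is not spelled out on the slide; a common choice (assumed here, not confirmed by the deck) is ZCA whitening, which gives the data zero mean and identity covariance so that a single bandwidth is meaningful in all directions:

```python
import numpy as np

def sphere(X):
    """Sphere (whiten) the data: zero mean, identity covariance.
    This is ZCA whitening; the paper may use a different variant."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                      # center
    cov = np.cov(Xc, rowvar=False)               # sample covariance
    vals, vecs = np.linalg.eigh(cov)             # symmetric eigendecomposition
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # inverse square root of cov
    return Xc @ W
```

ZCA is chosen here because it whitens while staying as close as possible to the original coordinates; PCA whitening would work equally well for bandwidth selection.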
4. Methodology: Hierarchical Mean Shift Repeatedly blur data using Mean Shift with increasing bandwidth: hnew = k * hold
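The loop above can be sketched as repeated blurring passes with a geometrically growing bandwidth. This is a minimal O(n²) illustration with a Gaussian kernel; the bandwidths, iteration count, and growth factor k are illustrative placeholders, and the actual system uses a kd-tree to avoid the all-pairs cost:

```python
import numpy as np

def blurring_mean_shift(X, h, iters=10):
    """One blurring pass: every point moves to the kernel-weighted
    mean of the current data set, flattening structure at scale h."""
    X = np.array(X, dtype=float)
    for _ in range(iters):
        # Pairwise squared distances between current points (O(n^2))
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-d2 / (2.0 * h * h))            # Gaussian kernel weights
        X = (W @ X) / W.sum(axis=1, keepdims=True) # move to weighted means
    return X

def hierarchical_mean_shift(X, h0=0.1, k=1.5, levels=5):
    """Repeatedly blur with h_new = k * h_old, recording each level.
    Small h preserves fine clusters; large h merges everything."""
    hierarchy, h = [], h0
    for _ in range(levels):
        X = blurring_mean_shift(X, h)
        hierarchy.append(X.copy())
        h *= k
    return hierarchy
```

Tracking which points coalesce at each level yields the cluster hierarchy that the ranking step below operates on.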
4. Methodology: Hierarchical Mean Shift • Mean Shift complexity is O(n2dm) where • n = # of data points • d = dimensionality of data points • m = # of mean shift iterations • Single kd-tree optimization used to speed up Hierarchical Mean Shift
4. Methodology: Querying the User Rank cluster centers for querying to the user. • Outlierness [Leung et al. 2000] for cluster Ci is based on its lifetime, the range of log-bandwidth over which the cluster survives: Lifetime(Ci) = log(bandwidth at which Ci is merged with other clusters) - log(bandwidth at which Ci is formed)
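Under that definition, the lifetime is just a difference of logs, and a cluster that survives n doublings of the bandwidth schedule (h_new = k * h_old) has lifetime n·log(k). A one-line sketch:

```python
import math

def lifetime(h_formed, h_merged):
    """Cluster lifetime in the scale-space sense of Leung et al. (2000):
    the log-bandwidth range over which the cluster exists."""
    return math.log(h_merged) - math.log(h_formed)
```

Clusters with long lifetimes persist across many scales and are ranked as more outlying, so their centers are queried first.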
4. Methodology: Querying the User Rank cluster centers for querying to user. • Compactness + Isolation [Leung et al. 2000] for Cluster Ci:
4. Methodology: Tiebreaker • Ties may occur in Outlierness or Compactness/Isolation values. • Highest average distance heuristic: choose cluster center with highest average distance from user-labeled points.
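The tiebreaker heuristic above can be sketched as follows; the function names and array layout are illustrative, not from the paper:

```python
import numpy as np

def break_ties(tied_centers, labeled_points):
    """Among cluster centers with tied ranking scores, return the index
    of the center with the highest average Euclidean distance to the
    points the user has already labeled."""
    tied = np.asarray(tied_centers, dtype=float)      # (t, d)
    labeled = np.asarray(labeled_points, dtype=float) # (m, d)
    # Mean distance from each tied center to all labeled points
    dists = np.linalg.norm(tied[:, None, :] - labeled[None, :, :], axis=-1)
    return int(np.argmax(dists.mean(axis=1)))
```

Preferring the center farthest from already-labeled points steers queries toward unexplored regions, where undiscovered categories are more likely to live.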
5. Results Data sets used in experiments Shuttle, OptDigits, OptLetters, and Statlog were subsampled to simulate class imbalance.
5. Results (Yeast) Category detection metric: # queries before user presented with at least one example from all categories
5. Results Number of hints to discover all classes
5. Results Area under the category detection curve
6. Conclusion / Future Work Conclusions • HMS-based methods consistently discover more categories in fewer queries than existing methods • Do not need a priori knowledge of data set properties, e.g., the total number of classes
6. Conclusion / Future Work Future Work • Better use of user feedback • Presentation of an entire cluster to the user instead of a representative data point • Improved computational efficiency • Theoretical analysis