
A Unified View of Kernel k-means, Spectral Clustering and Graph Cuts


Presentation Transcript


  1. A Unified View of Kernel k-means, Spectral Clustering and Graph Cuts Dhillon, Inderjit S., Yuqiang Guan, and Brian Kulis

  2. k-means and Kernel k-means
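
The slide contrasts ordinary k-means with its kernelized form, where distances to cluster means are computed entirely from the Gram matrix. A minimal sketch (not the authors' code; `kernel_kmeans` is a hypothetical helper) of plain, unweighted kernel k-means, assuming a precomputed n x n kernel matrix K:

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=100, seed=0):
    """Plain (unweighted) kernel k-means on an n x n Gram matrix K."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)              # random initial assignment
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue
            # ||phi(a_i) - m_c||^2 expanded purely in terms of kernel entries
            dist[:, c] = (diag
                          - 2.0 * K[:, idx].sum(axis=1) / idx.size
                          + K[np.ix_(idx, idx)].sum() / idx.size ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # stop when assignments are stable
            break
        labels = new_labels
    return labels
```

The key point the slide makes is visible in the distance computation: no explicit feature map phi is ever formed, only kernel evaluations.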

  3. Weighted Kernel k-means: Matrix Form
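
For reference, the weighted kernel k-means objective and its matrix (trace) form, in what I assume is the paper's standard notation: W = diag(w_1, ..., w_n) is the weight matrix, K the kernel matrix, m_c the weighted cluster mean in feature space, and Y an orthonormal weighted cluster-indicator matrix.

```latex
\min_{\{\pi_c\}_{c=1}^{k}} \; \sum_{c=1}^{k} \sum_{a_i \in \pi_c} w_i \,\bigl\| \phi(a_i) - m_c \bigr\|^2 ,
\qquad
m_c = \frac{\sum_{a_j \in \pi_c} w_j \, \phi(a_j)}{\sum_{a_j \in \pi_c} w_j}
```

Minimizing this objective is equivalent to the trace maximization

```latex
\max_{Y :\; Y^{\top} Y = I_k} \; \operatorname{trace}\!\bigl( Y^{\top} W^{1/2} K W^{1/2} Y \bigr)
```

which is the form the spectral relaxation on slide 9 starts from.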

  4. Spectral Methods

  5. Spectral Methods

  6. Represented in Matrix Form: Ratio association, Ratio cut, L for Normalized cut, Normalized association
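
As a rough summary of the correspondence the paper derives (A = affinity matrix, D = degree matrix, L = D - A the graph Laplacian, sigma a diagonal shift used to keep the kernel positive definite), each graph objective is recovered from weighted kernel k-means with a particular choice of weights W and kernel K, roughly:

  Ratio association:       W = I,  K = sigma*I + A
  Ratio cut:               W = I,  K = sigma*I - L
  Normalized association:  W = D,  K = D^{-1} A D^{-1}
  Normalized cut:          W = D,  K = sigma*D^{-1} + D^{-1} A D^{-1}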

  7. Weighted Graph Cuts: Weighted association, Weighted cut
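
Written out (a sketch of the definitions used here, with links(A, B) the total edge weight between vertex sets A and B, and w(A) the total vertex weight of A):

```latex
\mathrm{WAssoc}(G) = \sum_{c=1}^{k} \frac{\operatorname{links}(\mathcal{V}_c, \mathcal{V}_c)}{w(\mathcal{V}_c)},
\qquad
\mathrm{WCut}(G) = \sum_{c=1}^{k} \frac{\operatorname{links}(\mathcal{V}_c, \mathcal{V} \setminus \mathcal{V}_c)}{w(\mathcal{V}_c)}
```

Ratio and normalized association/cut are special cases obtained by choosing unit weights or degree weights for the vertices.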

  8. Conclusion • Spectral methods are a special case of weighted kernel k-means

  9. Solving the unified problem • A standard result in linear algebra states that if we relax the trace maximization so that Y is an arbitrary orthonormal matrix, the optimal Y has the form VkQ, where Vk consists of the leading k eigenvectors of W^{1/2}KW^{1/2} and Q is an arbitrary k × k orthogonal matrix. • Since these eigenvectors are not indicator vectors, we must postprocess them to obtain a discrete clustering of the points.
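
A minimal sketch of that relaxation step (a hypothetical helper, not the authors' code): form W^{1/2} K W^{1/2}, take its k leading eigenvectors, and hand them to the postprocessing described on the next slide.

```python
import numpy as np

def relaxed_Y(K, w, k):
    """Leading-k eigenvectors of W^{1/2} K W^{1/2}, with W = diag(w)."""
    w_half = np.sqrt(np.asarray(w, dtype=float))
    M = w_half[:, None] * K * w_half[None, :]     # W^{1/2} K W^{1/2}
    M = (M + M.T) / 2.0                           # guard against round-off asymmetry
    _, eigvecs = np.linalg.eigh(M)                # eigenvalues returned in ascending order
    Vk = eigvecs[:, -k:]                          # columns for the k largest eigenvalues
    return Vk                                     # any Vk @ Q with Q orthogonal is also optimal
```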

  10. From Eigenvectors to Cluster Indicators • Normalize U so that its L2 norm equals 1, then recover discrete cluster assignments
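
One common way to carry out this postprocessing (a sketch, assuming the row-normalization reading of the slide): scale each row of the eigenvector matrix to unit L2 norm and run ordinary k-means on the rows.

```python
import numpy as np
from sklearn.cluster import KMeans

def eigvecs_to_labels(Vk, k, seed=0):
    """Discretize relaxed eigenvectors: unit-normalize rows, then run k-means."""
    norms = np.linalg.norm(Vk, axis=1, keepdims=True)
    U = Vk / np.maximum(norms, 1e-12)             # each row scaled to unit L2 norm
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)
```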

  11. The Other Way • Using kernel k-means to solve the graph cut problem (random starting points plus EM-style updates; converges only to a local optimum). • To guarantee that kernel k-means converges, the kernel matrix must be positive definite. • This does not hold for an arbitrary kernel matrix.
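
A common fix is a diagonal shift. The sketch below shows the unweighted version of the idea (an assumption, not necessarily the exact regularization used in the paper): add the smallest constant sigma to the diagonal of K so that all eigenvalues become non-negative. For unweighted kernel k-means this shift adds the constant sigma*(n - k) to the objective, so the optimal partition is unchanged.

```python
import numpy as np

def shift_to_psd(K):
    """Smallest diagonal shift sigma such that K + sigma*I is positive semidefinite."""
    K = (K + K.T) / 2.0                           # symmetrize first
    lam_min = np.linalg.eigvalsh(K)[0]            # smallest eigenvalue (ascending order)
    sigma = max(0.0, -lam_min)
    return K + sigma * np.eye(K.shape[0]), sigma
```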

  12. The Effect of the Regularization • [slide shows the shifted distance formula, split into the cases where a_i is in the cluster and where a_i is not]

  13. Experimental results

  14. Results (ratio association)

  15. Results (normalized association)

  16. Image Segmentation

  17. Thank you. Any questions?
