
Transformation-invariant clustering using the EM algorithm

This paper presents an unsupervised learning approach for clustering images regardless of how they have been transformed. It proposes a probabilistic model of the data and uses the EM algorithm for clustering. The method incorporates both pre- and post-transformation noise and can be extended to time-series data and to additional transformations.


Presentation Transcript


  1. Transformation-invariant clustering using the EM algorithm. Brendan Frey and Nebojsa Jojic, IEEE Trans. on PAMI, 25(1), 2003. Presented by Yan Karklin, CNS presentation, 08/10/2004.

  2. Goal
  • unsupervised learning of image structure regardless of transformation
  • probabilistic description of the data
  • clustering as density modeling: grouping "similar" images together
  Invariance
  • a transformation traces out a manifold in data space
  • all points on the manifold are "equivalent"
  • the manifold is complex even for basic transformations
  • how to approximate it?

  3. Approximating the Invariance Manifold
  • approximate the manifold with a discrete set of points
  • sparse matrices Ti map the canonical feature z into the transformed feature x (observed)
  • expressed as a Gaussian probability model (a reconstruction is sketched below)
  • all possible transformations T are enumerated
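The Gaussian probability model referred to here appears as an equation image on the original slide. A plausible reconstruction, following the paper's setup (an assumption, not a transcript of the slide), with Ψ the post-transformation noise covariance and p(T) a prior over the enumerated transformations:

```latex
p(\mathbf{x} \mid \mathbf{z}, T) = \mathcal{N}(\mathbf{x};\, T\mathbf{z},\, \Psi),
\qquad
p(\mathbf{x} \mid \mathbf{z}) = \sum_{T \in \{T_1,\ldots,T_K\}} p(T)\,
  \mathcal{N}(\mathbf{x};\, T\mathbf{z},\, \Psi)
```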

  4. This is what it would look like for...
  • a 2x3 image with pixel-shift translations (wrap-around)
  • the canonical feature z, the six shift matrices {T1 ... T6}, and the resulting transformed images x (shown as figures on the original slide)
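A small NumPy/SciPy sketch of this example, building the six sparse wrap-around shift matrices for a 2x3 image; the helper name shift_matrix and the layout conventions are mine, not the paper's:

```python
import numpy as np
from scipy.sparse import csr_matrix

H, W = 2, 3                      # a 2x3 image, 6 pixels when flattened
n = H * W

def shift_matrix(dy, dx):
    """Sparse permutation matrix implementing a wrap-around (cyclic)
    pixel shift by (dy, dx) on a flattened H x W image."""
    rows, cols = [], []
    for i in range(H):
        for j in range(W):
            src = i * W + j                           # index in canonical feature z
            dst = ((i + dy) % H) * W + (j + dx) % W   # index in transformed feature x
            rows.append(dst)
            cols.append(src)
    return csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))

# Enumerate all 6 wrap-around translations T1..T6 of the 2x3 grid
Ts = [shift_matrix(dy, dx) for dy in range(H) for dx in range(W)]

z = np.arange(n, dtype=float)    # canonical feature z (flattened image)
for k, T in enumerate(Ts):
    print(f"T{k + 1} z =", T @ z)   # each transformed feature x = T z
```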

  5. The full statistical model
  • for one feature (one cluster): the canonical feature is Gaussian, pre-transformation, with noise Φ
  • data, given the latent representation: Gaussian, post-transformation, with noise Ψ
  • joint of all variables: product of these Gaussians and the priors over cluster and transformation
  • for multiple features (clusters): a mixture model
  (the slide's equations appear as images; a reconstruction is sketched below)
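A sketch of the equations these bullets point to, consistent with the paper's model; μ_c and Φ_c are the canonical mean and variance of cluster c, Ψ is the post-transformation noise, and π_c, ρ_T denote the cluster and transformation priors (the prior symbols are my notation, not necessarily the slide's):

```latex
p(c) = \pi_c, \qquad
p(\mathbf{z} \mid c) = \mathcal{N}(\mathbf{z};\, \boldsymbol{\mu}_c,\, \Phi_c), \qquad
p(T) = \rho_T, \qquad
p(\mathbf{x} \mid \mathbf{z}, T) = \mathcal{N}(\mathbf{x};\, T\mathbf{z},\, \Psi)

p(\mathbf{x}, \mathbf{z}, T, c) = \pi_c\, \rho_T\,
  \mathcal{N}(\mathbf{z};\, \boldsymbol{\mu}_c,\, \Phi_c)\,
  \mathcal{N}(\mathbf{x};\, T\mathbf{z},\, \Psi)

p(\mathbf{x}) = \sum_{c} \sum_{T} \pi_c\, \rho_T\,
  \mathcal{N}(\mathbf{x};\, T\boldsymbol{\mu}_c,\, T\Phi_c T^{\top} + \Psi)
```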

  6. The full statistical model
  • the generative equation (shown as an image on the slide; a sampling sketch follows below)
  • each "feature" has a canonical mean and a canonical variance
  • an image contains one of the canonical features (mixture model) that has undergone one transformation
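As a concrete illustration of the generative process, here is a minimal sampling sketch in Python/NumPy; the function and argument names (sample_image, mus, Phis, Ts, Psi, pis, rhos) are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_image(mus, Phis, Ts, Psi, pis, rhos):
    """One draw from the generative model: pick a cluster c, draw the
    canonical feature z ~ N(mu_c, Phi_c), pick a transformation T,
    then emit the observed image x ~ N(T z, Psi)."""
    c = rng.choice(len(mus), p=pis)                   # which canonical feature
    z = rng.multivariate_normal(mus[c], Phis[c])      # pre-transformation noise (Phi)
    t = rng.choice(len(Ts), p=rhos)                   # which transformation
    x = rng.multivariate_normal(Ts[t] @ z, Psi)       # post-transformation noise (Psi)
    return x, z, c, t
```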

  7. Inference
  • integrating out z, the likelihood of an image given the transformation and cluster is Gaussian
  • marginals for inferring the latent variables T, c, z (equations shown as images; a reconstruction follows below)
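A reconstruction of the Gaussian marginal and the inference quantities, using the notation introduced above (standard Gaussian identities, not copied from the slide):

```latex
p(\mathbf{x} \mid T, c) = \mathcal{N}\!\left(\mathbf{x};\, T\boldsymbol{\mu}_c,\, T\Phi_c T^{\top} + \Psi\right)

P(T, c \mid \mathbf{x}) \propto \pi_c\, \rho_T\,
  \mathcal{N}\!\left(\mathbf{x};\, T\boldsymbol{\mu}_c,\, T\Phi_c T^{\top} + \Psi\right)

\mathbb{E}[\mathbf{z} \mid \mathbf{x}, T, c] = \boldsymbol{\mu}_c +
  \Phi_c T^{\top}\left(T\Phi_c T^{\top} + \Psi\right)^{-1}
  (\mathbf{x} - T\boldsymbol{\mu}_c)
```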

  8. Adapting the rest of the parameters
  • pre-transformation noise Φ
  • post-transformation noise Ψ
  • all parameters learned with EM (a simplified sketch follows below)
  • E-step: assume the parameters are known, infer P(z, T, c | x)
  • M-step: update the parameters
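A simplified, self-contained EM iteration under stated assumptions, not the paper's exact updates: the pre- and post-transformation noise are lumped into a single isotropic variance, the prior over transformations is uniform, and the Ts are dense permutation matrices (so T^{-1} = T^T). It only illustrates the shape of the E-step over (c, T) and the M-step averaging of back-transformed data:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, Ts, mus, pis, sigma2):
    """One EM iteration of a simplified transformation-invariant mixture.
    X: (N, D) flattened images; Ts: dense permutation matrices;
    mus: list of cluster means; pis: mixing weights; sigma2: isotropic noise."""
    N, D = X.shape
    C, K = len(mus), len(Ts)

    # E-step: responsibilities over (cluster c, transformation T) for each image
    logr = np.zeros((N, C, K))
    for c in range(C):
        for k, T in enumerate(Ts):
            logr[:, c, k] = (np.log(pis[c]) - np.log(K)
                             + multivariate_normal.logpdf(X, T @ mus[c], sigma2 * np.eye(D)))
    logr -= logr.max(axis=(1, 2), keepdims=True)
    r = np.exp(logr)
    r /= r.sum(axis=(1, 2), keepdims=True)

    # M-step: re-estimate means from data mapped back to the canonical pose,
    # then the mixing weights and the shared noise variance
    new_mus, new_pis = [], np.zeros(C)
    for c in range(C):
        acc, w = np.zeros(D), 0.0
        for k, T in enumerate(Ts):
            acc += (r[:, c, k, None] * (X @ T)).sum(axis=0)  # X @ T applies T^T to each row
            w += r[:, c, k].sum()
        new_mus.append(acc / max(w, 1e-12))
        new_pis[c] = w / N
    resid = sum(r[:, c, k] * ((X - Ts[k] @ new_mus[c]) ** 2).sum(axis=1)
                for c in range(C) for k in range(K))
    return new_mus, new_pis, resid.sum() / (N * D)
```

The paper's full updates also re-estimate the pre- and post-transformation covariances Φ and Ψ from the posterior over z; they are omitted here for brevity.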

  9. Experiments
  • recovering 4 clusters
  • 4 clusters learned without modeling transformations
  (results shown as figures on the original slide)

  10. Pre/post transformation noise

  11. Pre/post transformation noise
  Learned mean and variance images for each model (shown as figures on the original slide):
  • single Gaussian model of the image: mean μ, variance Φ
  • transformation-invariant model, no post-transformation noise: μ, Φ
  • transformation-invariant model, with post-transformation noise: μ, Φ, Ψ

  12. Conclusions
  Strengths:
  • fast (uses sparse matrices and the FFT; see the sketch below)
  • incorporates pre- and post-transformation noise
  • works on artificial data, clustering simple image sets, and cleaning up somewhat contrived examples
  • can be extended to make use of time-series data and to account for more transformations
  Limitations:
  • poor transformation model: fixed, pre-specified transformations, which must be sparse
  • poor feature model: Gaussian representation of structure
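On the speed point: for wrap-around translations and isotropic noise, the quantities needed in the E-step can be evaluated for every shift at once with a 2-D FFT instead of looping over the sparse Ti. A sketch of that trick (my illustration, not code from the paper):

```python
import numpy as np

def shift_sq_dists(x, mu):
    """Squared distance between image x and every wrap-around shift of mu,
    computed for all H*W shifts at once via circular cross-correlation:
    ||x - shift(mu)||^2 = ||x||^2 + ||mu||^2 - 2 * corr.  With isotropic
    noise these distances determine the E-step posterior over translations."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(mu))))
    # corr[dy, dx] = sum_{i,j} x[i, j] * mu[(i - dy) % H, (j - dx) % W]
    return (x ** 2).sum() + (mu ** 2).sum() - 2.0 * corr
```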
