
Random Projections for Manifold Learning: Dimensionality Reduction Techniques

This presentation explores how random projections enable linear, nonadaptive dimensionality reduction that preserves manifold information for compressed sensing. It discusses Whitney's Embedding Theorem, recovery algorithms, and intrinsic dimensionality estimation. The Isomap and Grassberger-Procaccia algorithms are examined, along with the ML-RP algorithm for manifold learning. Lower bounds on the number of projections for these methods are detailed, with insights on sampling and preserving geodesic distances on smooth manifolds.


Presentation Transcript


  1. Random Projections of Signal Manifolds, Michael Wakin and Richard Baraniuk • Random Projections for Manifold Learning, Chinmay Hegde, Michael Wakin and Richard Baraniuk • Random Projections of Smooth Manifolds, Richard Baraniuk and Michael Wakin • Presented by: John Paisley, Duke University

  2. Overview/Motivation • Random projections allow for linear, nonadaptive dimensionality reduction. • If we can ensure that the manifold information is preserved under these projections, we can apply any manifold learning technique in the compressed space and know the results will be (essentially) the same. • Therefore we can sense compressively, bypassing full-resolution acquisition and directly sensing the compressed (dimensionality-reduced) signal.

  3. Random Projections of Signal Manifolds (ICASSP 2006) • This paper: If we have manifold information, we can perform compressive sensing using significantly fewer measurements. • Whitney's Embedding Theorem: For a noiseless manifold with intrinsic dimensionality K, this theorem implies that a signal x in R^N, projected into R^M by an M x N orthonormal matrix P (so y = Px), can be recovered with high probability whenever M > 2K. • Note that K is the intrinsic dimensionality, which is different from (and less than) the sparsity level.
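As a rough illustration (not from the paper), the measurement model is just multiplication by a wide random matrix with orthonormal rows; the sizes below are arbitrary choices:

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper): ambient
# dimension N, intrinsic dimension K, and M > 2K measurements.
N, K = 1000, 10
M = 2 * K + 1

rng = np.random.default_rng(0)

# Random M x N matrix with orthonormal rows: orthonormalize the
# columns of a Gaussian matrix, then transpose.
Q, _ = np.linalg.qr(rng.standard_normal((N, M)))
P = Q.T                      # shape (M, N), rows orthonormal

x = rng.standard_normal(N)   # stand-in for a signal on a K-dim manifold
y = P @ x                    # M nonadaptive linear measurements
```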

  4. Random Projections of Signal Manifolds (ICASSP 2006) • The recovery algorithm considered here is a simple search through the projected manifold for the nearest neighbor. • Consider the case where the data is noisy, so the signal lies slightly off the manifold, and define the estimate as the point on the manifold whose projection is nearest to the observed measurement y.
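A minimal sketch of this nearest-neighbor recovery, assuming a toy one-parameter signal manifold; the model f, the parameter grid, and all sizes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def recover_nn(y, P, manifold_fn, thetas):
    """Return the manifold point whose projection is closest to y."""
    candidates = np.array([manifold_fn(t) for t in thetas])  # (T, N)
    proj = candidates @ P.T                                  # (T, M)
    idx = np.argmin(np.linalg.norm(proj - y, axis=1))
    return candidates[idx], thetas[idx]

N, M = 256, 8
rng = np.random.default_rng(1)
P = rng.standard_normal((M, N)) / np.sqrt(M)   # random projection

# Hypothetical 1-D signal manifold parameterized by t.
f = lambda t: np.cos(t * np.arange(N) / N)
x = f(3.0) + 0.01 * rng.standard_normal(N)     # noisy, slightly off-manifold
y = P @ x
x_hat, t_hat = recover_nn(y, P, f, np.linspace(0, 10, 2001))
```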

  5. Random Projections of Signal Manifolds (ICASSP 2006)

  6. Random Projections for Manifold Learning (NIPS 2007) • How does a random projection of a manifold affect our ability to estimate the intrinsic dimensionality of the manifold and to embed that manifold into a Euclidean space that preserves geodesic distances (e.g., via the Isomap algorithm)? How many projections are needed? • Grassberger-Procaccia (GP) algorithm: A common algorithm for estimating the intrinsic dimensionality of a manifold. • The correlation dimension estimate can be written as C(r1)/C(r2) = (r1/r2)^K, where K is the intrinsic dimensionality. This method uses the fact that the volume of the intersection of a K-dimensional object with a hypersphere of radius r is proportional to r^K.
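A hedged sketch of the GP estimate in the ratio form above; the radii r1, r2 and the circle test manifold are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist

# C(r) is the fraction of point pairs within distance r; since
# C(r) ~ r^K, we get K ~ log(C(r1)/C(r2)) / log(r1/r2).
def gp_dimension(X, r1, r2):
    d = pdist(X)                   # all pairwise Euclidean distances
    C = lambda r: np.mean(d < r)   # empirical correlation integral
    return np.log(C(r1) / C(r2)) / np.log(r1 / r2)

# Test on a 1-D manifold (a circle) in R^3: the estimate comes out
# close to 1.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(gp_dimension(X, r1=0.1, r2=0.4))   # ~1.0
```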

  7. Random Projections for Manifold Learning (NIPS 2007) • Isomap algorithm: Produces a low-dimensional embedding in which Euclidean distances approximate the geodesic distances along the original manifold.
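For concreteness, a minimal Isomap run using scikit-learn's implementation on a standard synthetic manifold; the dataset and the n_neighbors value are illustrative:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# A 2-D manifold embedded in R^3.
X, color = make_swiss_roll(n_samples=1500, random_state=0)

# Z lives in R^2; Euclidean distances in Z approximate geodesic
# distances along the roll.
Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```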

  8. Random Projections for Manifold Learning (NIPS 2007) • Lower bound on M sufficient for the GP algorithm to succeed on the projected data; the bound and its proof are given in the paper.

  9. Random Projections for Manifold Learning (NIPS 2007) • Lower bound on M sufficient for the Isomap algorithm to succeed on the projected data; the bound and its proof are given in the paper.

  10. Random Projections for Manifold Learning (NIPS 2007) • ML-RP algorithm (manifold learning using random projections) • An iterative procedure developed in the paper to determine a sufficient number of projections M; a simplified sketch follows below.
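A simplified sketch of the ML-RP loop, assuming scikit-learn's Isomap and a residual-variance stopping rule; the residual computation and stopping criterion here are assumptions, not the paper's exact algorithm:

```python
import numpy as np
from sklearn.manifold import Isomap

def residual_variance(D_true, D_embed):
    # 1 - R^2 between the two distance matrices (Isomap's usual diagnostic).
    r = np.corrcoef(D_true.ravel(), D_embed.ravel())[0, 1]
    return 1.0 - r ** 2

def ml_rp(X, K, M_max, tol=1e-3, seed=0):
    """Grow the number of random projections until Isomap's residual
    variance stabilizes."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    P = rng.standard_normal((1, N)) / np.sqrt(N)   # start with M = 1 row
    prev = np.inf
    for M in range(1, M_max + 1):
        iso = Isomap(n_neighbors=10, n_components=K)
        Z = iso.fit_transform(X @ P.T)             # Isomap on projected data
        D_geo = iso.dist_matrix_                   # graph geodesic estimates
        D_emb = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        res = residual_variance(D_geo, D_emb)
        if abs(prev - res) < tol:                  # residual has stabilized
            return P, res
        prev = res
        P = np.vstack([P, rng.standard_normal((1, N)) / np.sqrt(N)])
    return P, prev

# Illustrative usage on a toy manifold:
# from sklearn.datasets import make_swiss_roll
# X, _ = make_swiss_roll(1000, random_state=0)
# P, res = ml_rp(X, K=2, M_max=20)
```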

  11. Random Projections for Manifold Learning (NIPS 2007)

  12. Random Projections for Manifold Learning (NIPS 2007)

  13. Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)

  14. Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics) Sketch of proof • Sample points from the manifold so that the geodesic distance from any point on the manifold to its nearest sample is below some threshold; likewise, sample points from the tangent spaces of the manifold so that every tangent vector is close to a sample. Then apply the Johnson-Lindenstrauss (JL) lemma to guarantee that a random projection preserves the relative distances among all of these (finitely many) sampled points. Finally, combine this with the sampling density, via some additional theorems, to extend the distance preservation from the samples to every point on the manifold; a small numerical illustration of the JL step follows below.
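A small numerical check of the JL step in this argument: random projections preserve pairwise distances among finitely many sampled points up to a (1 ± eps) factor. The sizes and the tolerance below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n_pts = 2000, 300, 50

X = rng.standard_normal((n_pts, N))           # stand-in "sampled" points in R^N
P = rng.standard_normal((M, N)) / np.sqrt(M)  # JL-style random map to R^M
Y = X @ P.T

# Every pairwise distance should be preserved within ~20% here.
for i in range(n_pts):
    for j in range(i + 1, n_pts):
        ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
        assert 0.8 < ratio < 1.2
```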
