Manifold Models for Signal Acquisition, Compression and Processing Richard Baraniuk, Mark Davenport, Marco Duarte, Chinmay Hegde (Rice University), Michael Wakin (Colorado School of Mines) Supported by NSF, ONR, ARO, AFOSR, DARPA, Texas Instruments
Pressure is on Digital Sensors • The success of digital data acquisition is placing increasing pressure on signal/image processing hardware and software to support higher resolution / denser sampling • ADCs, cameras, imaging systems, microarrays, … × large numbers of sensors • image databases, camera arrays, distributed wireless sensor networks, … × increasing numbers of modalities • acoustic, RF, visual, IR, UV, x-ray, gamma ray, … = a deluge of sensor data • how to efficiently acquire, fuse, process, communicate?
Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2x Fourier bandwidth): too much data! • compress data using a sparse basis expansion (JPEG, JPEG2000, …) • pipeline: sample → compress → transmit/store → receive → decompress
Sparsity / Compressibility [figure: image pixels and their large wavelet coefficients (blue = 0)] • Sparse: K-term basis expansion yields exact representation • Compressible: K-term basis expansion yields close approximation
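The slides illustrate this with wavelet coefficients of an image; as a stand-in, here is a minimal numerical sketch of K-term compressibility in a DCT basis (an illustrative assumption, not the slides' wavelet example): keep only the K largest-magnitude coefficients and measure the approximation error.

```python
import numpy as np
from scipy.fft import dct, idct

# Smooth test signal: its DCT coefficients decay quickly (compressible)
N = 1024
t = np.linspace(0, 1, N)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

c = dct(x, norm="ortho")                  # orthonormal DCT coefficients
for K in (5, 10, 20):
    cK = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-K:]     # indices of the K largest coefficients
    cK[keep] = c[keep]
    xK = idct(cK, norm="ortho")           # K-term approximation
    err = np.linalg.norm(x - xK) / np.linalg.norm(x)
    print(f"K = {K:2d}: relative approximation error {err:.3e}")
```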
Sample / Compress • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate • compress data using a sparse basis expansion • pipeline: sample → compress (sparse / compressible wavelet transform) → transmit/store → receive → decompress
What’s Wrong with this Picture? • Why go to all the work to acquire N samples only to discard all but K pieces of data? • sampling is linear processing under a linear signal model (bandlimited subspace), while compression is nonlinear processing under a nonlinear signal model (union of subspaces) • pipeline: sample → compress (sparse / compressible wavelet transform) → transmit/store → receive → decompress
Compressive Sensing • Directly acquire “compressed” data • Replace samples by more general “measurements” • pipeline: compressive sensing → transmit/store → receive → reconstruct
Compressive Sensing • When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss • Dimensionality reduction: a random projection will work [figure: measurements y = Φx of a sparse signal x with K nonzero entries] [Candes-Romberg-Tao, Donoho, 2004]
Why CS Works • A random projection Φ is not full rank, but it stably embeds signals with concise geometrical structure • sparse signal models: x is K-sparse • compressible signal models: x is well approximated by its K largest coefficients • holds with high probability provided M is large enough • Stable embedding: preserves structure • distances between points, angles between vectors, …
Stable Embedding • K-sparse signals live on a union of K-dimensional hyperplanes aligned with the coordinate axes in R^N [figure: union of K-planes]
Stable Embedding [figure: K-planes and their projection] • For all K-sparse x1 and x2, (1 − δ) ||x1 − x2||² ≤ ||Φx1 − Φx2||² ≤ (1 + δ) ||x1 − x2||²
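An empirical check of this stable embedding (a sketch under assumed toy parameters, not from the slides): draw random K-sparse pairs and compare distances before and after a random Gaussian projection Φ.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 80, 6
Phi = rng.standard_normal((M, N)) / np.sqrt(M)      # random measurement matrix

def random_ksparse():
    """Random K-sparse vector in R^N."""
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    return x

ratios = []
for _ in range(2000):
    x1, x2 = random_ksparse(), random_ksparse()
    ratios.append(np.linalg.norm(Phi @ (x1 - x2)) / np.linalg.norm(x1 - x2))
print("distance ratio range:", min(ratios), max(ratios))   # concentrates near 1
```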
CS Signal Recovery • Recover the sparse/compressible signal x from CS measurements y = Φx via optimization: min ||x||_1 subject to Φx = y, a linear program [figure: K-sparse model, union of K-dim planes; recovery by linear programming]
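A minimal sketch of this recovery step (assumed toy sizes, not the slides' experiment): basis pursuit posed as a linear program and solved with scipy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

def recover_l1(Phi, y):
    """Basis pursuit: min ||x||_1 subject to Phi @ x = y, posed as a linear program."""
    M, N = Phi.shape
    # Variables z = [x; t] with |x_i| <= t_i; minimize sum(t)
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])             # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])        # Phi x = y
    bounds = [(None, None)] * N + [(0, None)] * N    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

# Demo: exact recovery of a K-sparse signal from M < N random measurements
rng = np.random.default_rng(1)
N, M, K = 128, 40, 5
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = recover_l1(Phi, Phi @ x)
print("recovery error:", np.linalg.norm(x - x_hat))
```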
CS Imaging: Single-Pixel Camera [figure: target image (65536 pixels) and reconstructions from 1300 measurements (2%) and 11000 measurements (16%)]
Stable Embedding • A random projection is not full rank, but it stably embeds signals with concise geometrical structure • sparse signal models: x is K-sparse • compressible signal models • with high probability provided M is large enough • Q: What about other concise signal models? • Result: smooth K-dimensional manifolds in R^N
Stable Manifold Embedding • Theorem [B and Wakin, FOCM, 2007]: Let M in R^N be a compact K-dimensional manifold with • condition number 1/τ (curvature, self-avoiding) • volume V • Let Φ be a random M×N orthoprojector with M sufficiently large (M scales linearly in the manifold dimension K and only logarithmically in N, V, and 1/τ) • Then with probability at least 1 − ρ, the following statement holds: For every pair x1, x2 in M, (1 − ε) √(M/N) ≤ ||Φx1 − Φx2|| / ||x1 − x2|| ≤ (1 + ε) √(M/N)
Stable Manifold Embedding • Sketch of proof: • construct a sampling of points on the manifold at fine resolution and from local tangent spaces • apply the Johnson-Lindenstrauss lemma to these points (concentration of measure) • extend to the entire manifold • Implication: nonadaptive (even random) linear projections can efficiently capture & preserve the structure of a manifold • See also: Indyk and Naor, Agarwal et al., Dasgupta and Freund
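A small numerical illustration of the theorem (a sketch with an assumed translating-pulse manifold, not the paper's construction): project points from a 1-D manifold in R^N with a random orthoprojector and check that pairwise distances, rescaled by √(N/M), stay close to their original values.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 512, 32

def pulse(center, width=10.0):
    """Point on a 1-D manifold in R^N: a unit-norm Gaussian pulse at `center`."""
    n = np.arange(N)
    p = np.exp(-0.5 * ((n - center) / width) ** 2)
    return p / np.linalg.norm(p)

# Random M x N orthoprojector: orthonormal rows extracted from a Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((N, M)))
Phi = Q.T

# Sample points on the manifold and compare pairwise distances after projection
X = np.stack([pulse(c) for c in rng.uniform(50, N - 50, size=40)])
Y = X @ Phi.T
ratios = []
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        d_full = np.linalg.norm(X[i] - X[j])
        d_proj = np.linalg.norm(Y[i] - Y[j]) * np.sqrt(N / M)   # undo sqrt(M/N)
        ratios.append(d_proj / d_full)
print("distance ratio range:", min(ratios), max(ratios))        # stays near 1
```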
Application: Compressive Detection/Classification via Smashed Filtering
Information Scalability • Many applications involve signal inference and not reconstruction: detection < classification < estimation < reconstruction (in order of increasing computational complexity; full reconstruction requires linear programming) • Good news: CS supports efficient learning, inference, and processing directly on compressive measurements • Random projections ~ sufficient statistics for signals with concise geometrical structure • Leverages the stable embedding of smooth manifolds
Matched Filter • Detection/classification with K unknown articulation parameters • Ex: position and pose of a vehicle in an image • Ex: time delay of a radar signal return • Matched filter: joint parameter estimation and detection/classification • compute sufficient statistic for each potential target and articulation • compare “best” statistics to detect/classify
Matched Filter Geometry • Detection/classification with K unknown articulation parameters • Images are points in R^N • Classify by finding the closest target template to the data for each class (additive white Gaussian noise): distance or inner product • target templates come from a generative model or training data (points) • As the template articulation parameter changes, the templates map out a K-dim nonlinear manifold • Matched filter classification = closest manifold search [figure: data point, target templates, articulation parameter space]
Recall: CS for Manifolds • Recall the Theorem: random measurements preserve manifold structure • Enables parameter estimation and MF detection/classification directly on compressive measurements • K very small in many applications
Example: Matched Filter • Detection/classification with K=3 unknown articulation parameters • horizontal translation • vertical translation • rotation
Smashed Filter • Detection/classification with K=3 unknown articulation parameters (manifold structure) • Dimensionally reduced matched filter directly on compressive measurements
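A minimal smashed-filter sketch (assumed 1-D shift articulation and synthetic pulse templates, not the slides' image experiment): classify noisy compressive measurements by nearest compressed template over all classes and articulations.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 256, 20
n = np.arange(N)

def template(cls, shift):
    """Unit-norm pulse template; the two classes differ only in pulse width."""
    width = 8.0 if cls == 0 else 16.0
    p = np.exp(-0.5 * ((n - shift) / width) ** 2)
    return p / np.linalg.norm(p)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)      # random measurement matrix
shifts = np.arange(40, N - 40, 2)                   # sampled articulation manifold

def smashed_filter(y):
    """Nearest compressed template over both classes and all candidate shifts."""
    best = None
    for cls in (0, 1):
        for s in shifts:
            d = np.linalg.norm(y - Phi @ template(cls, s))
            if best is None or d < best[0]:
                best = (d, cls, s)
    return best[1], best[2]                         # estimated class and shift

# Noisy compressive measurements of a class-1 signal shifted to position 117
y = Phi @ template(1, 117) + 0.05 * rng.standard_normal(M)
print(smashed_filter(y))                            # expect class 1, shift near 117
```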
Smashed Filter • Random shift and rotation (K=3 dim. manifold) • Noise added to measurements • Goal: identify the most likely position for each image class, then identify the most likely class using a nearest-neighbor test [plots: classification rate (%) and average shift estimate error vs. number of measurements M, for increasing noise levels]
Manifold Learning • Given training points in R^N, learn the mapping to the underlying K-dimensional articulation manifold • ISOMAP, LLE, HLLE, … • Ex: images of a rotating teapot; articulation space = circle
Compressive Manifold Learning • The ISOMAP algorithm is based on geodesic distances between points • Random measurements preserve these distances • Theorem: If M is sufficiently large, then the ISOMAP residual variance in the projected domain is bounded by an additive error factor [Hegde et al., NIPS ’08] [figure: translating disk manifold (K=2); embeddings from the full data (N=4096) and from M = 100, 50, and 25 random measurements]
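A sketch of compressive manifold learning on a toy translating-disk manifold (smaller images than the slides' N=4096 example; sizes and parameters are assumptions): run scikit-learn's Isomap on the raw images and on random compressive measurements of them.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(4)
side = 32
N, M = side * side, 100
yy, xx = np.mgrid[0:side, 0:side]

def disk_image(cx, cy, r=5.0):
    """Binary disk image, flattened to a vector in R^N."""
    return (((xx - cx) ** 2 + (yy - cy) ** 2) <= r ** 2).astype(float).ravel()

# 400 samples from the K=2 translating-disk manifold (articulation = disk center)
centers = rng.uniform(8, side - 8, size=(400, 2))
X = np.stack([disk_image(cx, cy) for cx, cy in centers])

# Random compressive measurements of each image
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Y = X @ Phi.T

emb_full = Isomap(n_neighbors=10, n_components=2).fit_transform(X)   # full data
emb_cs = Isomap(n_neighbors=10, n_components=2).fit_transform(Y)     # compressive
print(emb_full.shape, emb_cs.shape)   # both yield 2-D embeddings of the 400 images
```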
Multisensor Inference • Example: Network of J cameras observing an articulating object • Each camera’s images lie on a K-dim manifold in R^N • How to efficiently fuse imagery from J cameras to solve an inference problem while minimizing network communication?
Multisensor Fusion • Fusion: stack corresponding image vectors taken at the same time • The fused images still lie on a K-dim manifold in R^(JN), the “joint manifold”
Joint Manifolds • Given J submanifolds M1, …, MJ of R^N • each K-dimensional • homeomorphic (we can continuously map between any pair) • Define the joint manifold as the concatenation of M1, …, MJ: stack the corresponding points into R^(JN) • Example: joint articulation
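A minimal sketch of building joint-manifold points (toy shifted-disk camera views; the view geometry is an assumption): concatenate the J per-camera image vectors that share the same articulation.

```python
import numpy as np

rng = np.random.default_rng(5)
side = 32
N, J, K = side * side, 3, 2
yy, xx = np.mgrid[0:side, 0:side]

def view(cx, cy, offset, r=5.0):
    """Toy camera view: each camera sees the same disk shifted by its own offset."""
    img = ((xx - (cx + offset)) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(float)
    return img.ravel()

offsets = [0.0, 2.0, -3.0]          # fixed per-camera viewing geometry

def joint_point(cx, cy):
    """Concatenate the J corresponding views into one point in R^(J*N)."""
    return np.concatenate([view(cx, cy, o) for o in offsets])

p = joint_point(16.0, 12.0)
print(p.shape)   # (J*N,): a single point on the K=2 dimensional joint manifold
```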
Joint Manifolds: Properties • The joint manifold inherits properties from its component manifolds • compactness • smoothness • volume: bounded in terms of the component volumes • condition number (1/τ): bounded in terms of the component condition numbers • These translate into algorithm performance gains • Bounds are often loose in practice
Manifold Learning via Joint Manifolds • Goal: learn the embedding of a 2D translating ellipse (with noise) • N = 45x45 = 225 pixels, J = 20 views at different angles [figures: embeddings learned separately for each view vs. the embedding learned jointly]
Manifold Learning via JM+CS • Goal: learn the embedding via random compressive measurements • N = 45x45 = 225 pixels, J = 20 views, M = 100 measurements per view [figures: embeddings learned separately vs. the embedding learned jointly]
Multisensor Fusion via JM+CS • Can take random CS measurements of the stacked images and process or make inferences [figure: comparison with unfused CS and with unfused, non-CS processing]
Multisensor Fusion via JM+CS • Can compute CS measurements in-place • ex: as we transmit to collection/processing point
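A sketch of why the measurements can be computed in-place (an illustration under assumed block structure, not the slides' protocol): a measurement matrix acting on the stacked vector splits into per-camera blocks, so each node adds its own contribution Φ_j x_j to a running M-vector as the data is relayed toward the collection point.

```python
import numpy as np

rng = np.random.default_rng(6)
N, J, M = 1024, 3, 200

# Per-camera blocks of the global measurement matrix Phi = [Phi_1 ... Phi_J]
Phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]
images = [rng.standard_normal(N) for _ in range(J)]    # stand-ins for camera data

# In-place accumulation along the relay path: each node adds its own Phi_j @ x_j,
# so only the running M-vector is ever transmitted
y = np.zeros(M)
for Phi_j, x_j in zip(Phis, images):
    y += Phi_j @ x_j

# Same result as measuring the stacked (joint-manifold) vector directly
Phi = np.hstack(Phis)                      # (M, J*N)
x_joint = np.concatenate(images)           # point in R^(J*N)
print(np.allclose(y, Phi @ x_joint))       # True
```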
Simulation Results • J=3 CS cameras, each with N=320x240 resolution • M=200 random measurements per camera • Two classes: truck w/ cargo (class 1), truck w/ no cargo (class 2) • Goal: classify a test image • Smashed filtering variants compared: independent cameras, majority vote, joint manifold [figure: example images of the two classes; plot: classification performance of the joint-manifold approach vs. the others]
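A toy sketch of the decision rules compared above (synthetic templates, not the truck imagery): per-camera smashed-filter decisions combined by majority vote vs. a single decision on the concatenated (joint-manifold) measurements.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)
N, J, M = 256, 3, 20

# Two class templates (stand-ins for the two truck classes) and per-camera matrices
templates = {1: rng.standard_normal(N), 2: rng.standard_normal(N)}
Phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]

def camera_decision(y, Phi):
    """Independent smashed-filter decision for one camera."""
    return min(templates, key=lambda c: np.linalg.norm(y - Phi @ templates[c]))

def joint_decision(ys):
    """Joint-manifold decision on the concatenated measurements."""
    y_joint = np.concatenate(ys)
    dists = {c: np.linalg.norm(y_joint - np.concatenate([Phi @ t for Phi in Phis]))
             for c, t in templates.items()}
    return min(dists, key=dists.get)

# Noisy compressive measurements of a class-1 scene at each camera
true_cls = 1
ys = [Phi @ templates[true_cls] + 0.8 * rng.standard_normal(M) for Phi in Phis]

votes = [camera_decision(y, Phi) for y, Phi in zip(ys, Phis)]
majority = Counter(votes).most_common(1)[0][0]
print("votes:", votes, "majority:", majority, "joint:", joint_decision(ys))
```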