Randomized Dimensionality Reduction with Applications to Signal Processing and Communications
Richard Baraniuk, Rice University
The Digital Universe
• Size: 281 billion gigabytes generated in 2007; digital bits > stars in the universe; growing by a factor of 10 every 5 years; > Avogadro's number (6.02 x 10^23) in 15 years
• Growth fueled by multimedia data: audio, images, video, surveillance cameras, sensor nets, …
• In 2007, digital data generated > total available storage; by 2011, ½ of the digital universe will have no home
[Source: IDC Whitepaper "The Diverse and Exploding Digital Universe," March 2008]
Act I: What's wrong with today's multimedia sensor systems? Why go to all the work to acquire massive amounts of multimedia data only to throw much/most of it away?
Act II: A way out: dimensionality reduction (compressive sensing) enables the design of radically new sensors and systems
Act III: Compressive sensing in action: new cameras, imagers, ADCs, …
Act IV: Intersections with information theory: new problems, new solutions
Sense by Sampling [diagram: sample]
Sense by Sampling [diagram: sample … too much data!]
Sense then Compress [diagram: sample → compress (JPEG, JPEG2000, …) → decompress]
Sparsity [figure: pixels → large wavelet coefficients (blue = 0)]
Sparsity [figure: pixels → large wavelet coefficients (blue = 0); wideband signal samples (time) → large Gabor (time-frequency) coefficients]
What's Wrong with this Picture? • Why go to all the work to acquire N samples only to discard all but K pieces of data? [diagram: sample → compress → decompress]
What's Wrong with this Picture? • Sampling: linear processing, linear signal model (bandlimited subspace) • Compression: nonlinear processing, nonlinear signal model (union of subspaces) [diagram: sample → compress → decompress]
Compressive Sensing • Directly acquire "compressed" data via dimensionality reduction • Replace samples by more general "measurements" [diagram: compressive sensing → recover]
Sampling • Signal x is K-sparse in basis/dictionary Ψ • WLOG assume sparse in the space domain (Ψ = identity) [figure: sparse signal x, K nonzero entries]
Sampling • Signal x is K-sparse in basis/dictionary Ψ • WLOG assume sparse in the space domain • Sampling yields measurements y [figure: sparse signal x, measurements y, K nonzero entries]
Compressive Sampling • When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss through linear dimensionality reduction: y = Φx, where Φ is an M x N matrix with M << N [figure: sparse signal x, M measurements y, K nonzero entries]
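To make the measurement step concrete, here is a minimal numerical sketch (mine, not from the talk; the dimensions N, K, M and the use of NumPy are illustrative assumptions): a length-N signal with K nonzero entries is condensed into M << N non-adaptive random measurements y = Φx.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, M = 1024, 10, 100                            # ambient dimension, sparsity, number of measurements

    x = np.zeros(N)                                    # K-sparse signal in the space (identity) domain
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random M x N measurement matrix
    y = Phi @ x                                        # M linear, non-adaptive measurements

    print(y.shape)                                     # (100,): far fewer numbers than the N = 1024 samples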
How Can It Work? • Projection Φ is not full rank (M < N) … and so loses information in general • Ex: infinitely many x's map to the same y (they differ by an element of the null space of Φ)
How Can It Work? • Projection Φ is not full rank … and so loses information in general • But we are only interested in sparse vectors [figure: Φ, N columns]
How Can It Work? • Projection Φ is not full rank … and so loses information in general • But we are only interested in sparse vectors • Φ is effectively M x K [figure: Φ, N columns, K active]
How Can It Work? • Projection Φ is not full rank … and so loses information in general • But we are only interested in sparse vectors • Design Φ so that each of its M x K submatrices is full rank (ideally an orthobasis) [figure: Φ, N columns]
Geometrical Situation • Sparse signal: all but K coordinates are zero • Q: What is the structure of the set of all K-sparse vectors? [figure: sparse signal, K nonzero entries]
Geometrical Situation • Sparse signal: all but K coordinates are zero • Model: union of K-dimensional subspaces aligned with the coordinate axes (highly nonlinear!) [figure: sparse signal, K nonzero entries]
Stable Embedding • An information-preserving projection Φ preserves the geometry of the set of sparse signals • How to do this? Ensure that Φ preserves distances between points in the union of K-dim subspaces [figure: K-dim subspaces and their images under Φ]
Restricted Isometry Property (RIP) • RIP of order 2K implies Φ preserves distances between any two K-sparse vectors x1, x2: (1 - δ) ||x1 - x2||_2^2 ≤ ||Φx1 - Φx2||_2^2 ≤ (1 + δ) ||x1 - x2||_2^2 (x1, x2 each K-sparse, so x1 - x2 is 2K-sparse) • RIP: any selection of 2K columns of Φ is close to an orthobasis [figure: Φ, N columns]
Restricted Isometry Property (RIP) • RIP of order 2K implies Φ preserves distances between any two K-sparse vectors (x1, x2 each K-sparse; x1 - x2 is 2K-sparse) • RIP: any selection of 2K columns of Φ is close to an orthobasis • Unfortunately, designing a Φ that verifiably satisfies the RIP is a combinatorial, NP-complete problem [figure: Φ, N columns]
Insight from the '80s [Kashin, Gluskin] • Draw Φ at random: iid Gaussian entries, iid Bernoulli (±1) entries, … • Then Φ has the RIP with high probability provided M = O(K log(N/K)) [figure: Φ, N columns]
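A rough empirical check of this claim (my sketch; the constant multiplying K log(N/K) and the number of trials are arbitrary choices): draw an iid Gaussian Φ and verify that it nearly preserves the norms of many random 2K-sparse vectors.

    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 1024, 10
    M = int(2 * K * np.log(N / K))                     # on the order of K log(N/K) rows

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # iid Gaussian, columns have unit expected norm

    ratios = []
    for _ in range(2000):
        z = np.zeros(N)                                # random 2K-sparse vector (a difference of two K-sparse ones)
        z[rng.choice(N, 2 * K, replace=False)] = rng.standard_normal(2 * K)
        ratios.append(np.linalg.norm(Phi @ z) / np.linalg.norm(z))

    print(min(ratios), max(ratios))                    # both ends stay in a moderate band around 1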
One Proof Technique • Consider one K-dim subspace • Cover the unit sphere within that subspace with a point cloud of Q points • Apply the Johnson-Lindenstrauss lemma to show isometry for all points in the cloud: O(log Q) random projections preserve inter-point distances in a cloud of Q points in R^N whp • Extend from the cloud to the entire K-dim subspace • Extend via a union bound to all K-dim subspaces [B, Davenport, DeVore, Wakin, Constructive Approximation, 2008] [figure: K-dim subspace]
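The Johnson-Lindenstrauss step is easy to see numerically; a hedged sketch (the dimensions and the factor of 8 are arbitrary choices): project Q points in R^N down to O(log Q) dimensions with a random Gaussian matrix and compare all pairwise distances before and after.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    N, Q = 1000, 200
    M = int(8 * np.log(Q))                             # O(log Q) random projections

    points = rng.standard_normal((Q, N))               # a cloud of Q points in R^N
    A = rng.standard_normal((M, N)) / np.sqrt(M)       # random projection to R^M
    proj = points @ A.T

    distortions = [np.linalg.norm(proj[i] - proj[j]) / np.linalg.norm(points[i] - points[j])
                   for i, j in combinations(range(Q), 2)]
    print(min(distortions), max(distortions))          # every pairwise distance is preserved to within a modest factor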
Randomized Dim. Reduction • Measurements y = random linear combinations of the entries of x • No information loss for sparse vectors whp [figure: sparse signal x, M measurements y, K nonzero entries]
CS Signal Recovery • Goal: Recover signal x from measurements y • Problem: Random projection Φ is not full rank (ill-posed inverse problem) • Solution: Exploit the sparse/compressible geometry of the acquired signal
CS Signal Recovery • Random projection Φ not full rank • Recovery problem: given y = Φx, find x • Null space: all x' with Φx' = y form an (N-M)-dim hyperplane at a random angle • Search in this translated null space for the "best" x according to some criterion • ex: least squares
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: x_hat = argmin ||x'||_2 s.t. Φx' = y • Closed-form solution: x_hat = Φ^T (Φ Φ^T)^{-1} y
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: x_hat = argmin ||x'||_2 s.t. Φx' = y • Closed-form solution: x_hat = Φ^T (Φ Φ^T)^{-1} y • Wrong answer! (the minimum-L2-norm solution is almost never sparse)
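A small numerical sketch of why least squares fails here (mine; the dimensions are illustrative): the minimum-L2-norm solution matches the measurements exactly, but it spreads energy over essentially all N coordinates instead of the K true ones.

    import numpy as np

    rng = np.random.default_rng(3)
    N, K, M = 256, 8, 64
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x

    x_l2 = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)     # closed-form minimum-L2-norm solution Phi^T (Phi Phi^T)^{-1} y

    print(np.allclose(Phi @ x_l2, y))                  # True: consistent with the measurements...
    print(int(np.sum(np.abs(x_l2) > 1e-6)))            # ...but has on the order of N nonzero entries, not K
    print(np.linalg.norm(x_l2 - x))                    # and it is far from the true signal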
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: x_hat = argmin ||x'||_0 s.t. Φx' = y ("find the sparsest vector in the translated nullspace")
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: x_hat = argmin ||x'||_0 s.t. Φx' = y ("find the sparsest vector in the translated nullspace") • Correct! But an NP-complete algorithm
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: convexify, replacing the L0 "norm" with the L1 norm: x_hat = argmin ||x'||_1 s.t. Φx' = y [Candès, Romberg, Tao; Donoho]
Signal Recovery • Recovery: given y = Φx (ill-posed inverse problem), find x (sparse) • Optimization: x_hat = argmin ||x'||_1 s.t. Φx' = y (convexified) • Correct! A polynomial-time algorithm (linear programming)
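And a matching sketch of the convexified recovery, min ||x'||_1 s.t. Φx' = y, posed as a linear program; the split x' = u - v with u, v ≥ 0 and the use of scipy.optimize.linprog are my illustrative choices, and any LP or convex solver would do.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    N, K, M = 256, 8, 64
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x

    # min ||x'||_1 s.t. Phi x' = y, written as an LP over x' = u - v with u, v >= 0
    c = np.ones(2 * N)                                 # objective: sum(u) + sum(v) = ||x'||_1 at the optimum
    A_eq = np.hstack([Phi, -Phi])                      # equality constraint Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    x_l1 = res.x[:N] - res.x[N:]

    print(np.linalg.norm(x_l1 - x))                    # essentially zero: the K-sparse signal is recovered exactly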
Summary: CS • Encoding: y = Φx, random linear combinations of the entries of x • Decoding: recover x from y via optimization [figure: sparse signal x, M measurements y, K nonzero entries]
CS Hallmarks
• Stable: acquisition/recovery process is numerically stable
• Asymmetrical (most processing at decoder): conventional: smart encoder, dumb decoder; CS: dumb encoder, smart decoder
• Democratic: each measurement carries the same amount of information; robust to measurement loss and quantization ("digital fountain" property)
• Random measurements are effectively encrypted
• Universal: the same random projections / hardware can be used for any sparse signal class (generic)
Universality • Random measurements can be used for signals sparse in any basis
Universality • Random measurements can be used for signals sparse in any basis: x = Ψα with the coefficient vector α K-sparse, so y = Φx = (ΦΨ)α and recovery proceeds on α [figure: sparse coefficient vector α, K nonzero entries]
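A sketch of this point with an assumed DCT sparsity basis (the basis choice, dimensions, and use of scipy.fft.idct are mine): the same generic random Φ measures x = Ψα, and the recovery problem simply involves the combined matrix ΦΨ acting on the sparse coefficient vector α.

    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(4)
    N, K, M = 256, 8, 64
    alpha = np.zeros(N)                                # K-sparse coefficient vector
    alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

    Psi = idct(np.eye(N), axis=0, norm='ortho')        # orthonormal DCT synthesis basis: x = Psi @ alpha
    x = Psi @ alpha                                    # dense in the signal domain, K-sparse in the DCT domain

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # the same generic random measurements as before
    y = Phi @ x

    A = Phi @ Psi                                      # recovery sees y = A @ alpha, the same sparse problem as before
    print(np.allclose(y, A @ alpha), A.shape)          # True (64, 256)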
Gerhard Richter, 4096 Farben / 4096 Colours, 1974 • 254 cm x 254 cm, lacquer on canvas • Catalogue Raisonné: 359 • Museum Collection: Staatliche Kunstsammlungen Dresden (on loan) • Sales history: 11 May 2004, Christie's New York, Post-War and Contemporary Art (Evening Sale), Lot 34, US$3,703,500
"Single-Pixel" CS Camera [diagram: scene → random pattern on DMD array → single photon detector → image reconstruction or processing] • w/ Kevin Kelly
"Single-Pixel" CS Camera [diagram: scene → random pattern on DMD array → single photon detector → image reconstruction or processing] • Flip mirror array M times to acquire M measurements • Sparsity-based (linear programming) recovery
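A toy simulation of this measurement process (my sketch; the scene, pattern statistics, and dimensions are invented for illustration, and the real hardware sums the light optically on one detector): each measurement is the inner product of the scene with a random on/off mirror pattern.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 64                                             # hypothetical n x n scene
    scene = np.zeros((n, n))
    scene[20:44, 20:44] = 1.0                          # a simple bright square as the "image"

    M = 1200                                           # number of mirror flips = number of measurements
    patterns = rng.choice([0.0, 1.0], size=(M, n * n))  # random on/off DMD mirror patterns
    y = patterns @ scene.ravel()                       # y[m]: total light reaching the single detector for pattern m

    print(y.shape)                                     # (1200,) measurements for a 4096-pixel scene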
First Image Acquisition [figure: target (65,536 pixels); reconstruction from 1,300 measurements (2%); reconstruction from 11,000 measurements (16%)]
Utility? [figure: Fairchild 100-Mpixel CCD; single-pixel camera components: DMD array and single photon detector]
CS Low-Light Imaging with PMT • True-color low-light imaging • 256 x 256 image with 10:1 compression [Nature Photonics, April 2007]