
Michael Elad The Computer Science Department

Explore Compressed Sensing (CS) and optimize linear projections for better results in signal compression and reconstruction processes. Learn about sparse and redundant signal modeling and the criteria for optimal projections.



  1. Optimized Projection Directions for Compressed Sensing. Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel. The IV Workshop on SIP & IT, Holon Institute of Technology, June 20th, 2007

  2. What is This Talk About? • Compressed Sensing (CS): • An emerging field of research, • Deals with the combination of sensing and compression, • Offers novel sampling results for signals, • Leans on sparse & redundant modeling of signals, • Key players: Candes, Tao, Romberg, Donoho, Tropp, Baraniuk, Gilbert, DeVore, Strauss, Cohen, … A key ingredient in the use of Compressed-Sensing is Linear Projections. In this talk we focus on this issue, offering a way to design these projections to yield better CS performance.

  3. This lecture is based on the paper: M. Elad, "Optimized Projections for Compressed-Sensing", to appear in IEEE Trans. on Signal Processing. Agenda: • 1. What is Compressed-Sensing (CS)? – A little bit of background • 2. The Choice of Projections for CS – Criteria for optimality • 3. The Obtained Results – Simulations • 4. What Next? – Conclusions & open problems

  4. The Typical Signal Acquisition Scenario: sample a signal very densely, and then compress the information for storage or transmission. A typical example: • This 6.1 Mega-Pixel digital camera senses 6.1e+6 samples to construct an image. • The image is then compressed using JPEG to an average size smaller than 1MB – a compression ratio of ~20.

  5. A natural question to ask is: could the two processes (sensing & compression) be combined? The answer is YES! This is what Compressed-Sensing (CS) is all about. [Candes, Romberg & Tao `04, Donoho `06, Candes `06, Tsaig & Donoho `06]

  6. How Could CS be Done? Given a signal of interest, instead of the traditional pipeline … • Sense all n samples, • Compress the raw data, • Decompress for later processing, … apply the following: • Sense p<<n values, • Reconstruct x from these measurements for later processing. These p values represent the signal in a compressed form.

  7. A Few Fundamental Questions Must be Answered: • What functions fi(x) to use? Linear projections fi(x)=xTvi are appealing due to their simplicity. • How many measurements (p) to take? This depends on the complexity (degrees of freedom) of x. • How can we reconstruct x from the measured values? This depends on the model we assume for x.

  8. Sparsity and Redundancy • At the heart of CS lies a specific choice of a model for our signals. • The model used in recent work that studies CS is based on Sparse and Redundant Representations. • We assume that each of our signals can be represented as a linear combination of few columns of a matrix (dictionary) D of size n×K. • Thus, we define the family of signals, Ω, to be such that Ω = {x : x = Dα, ||α||0 ≤ T}.
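This model can be sketched in a few lines of NumPy (the random Gaussian dictionary and the dimensions follow the experiments later in the talk; they are an illustrative choice, not part of the model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, T = 200, 400, 4  # signal length, number of atoms, sparsity

# A redundant dictionary with L2-normalized columns (random Gaussian here)
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)

# A signal from the family Omega: x = D @ alpha with ||alpha||_0 <= T
alpha = np.zeros(K)
support = rng.choice(K, size=T, replace=False)
alpha[support] = rng.standard_normal(T)
x = D @ alpha
```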

  9. Ω n p Compressed–Sensing A sparse & random vector Reconstruct x from y (i.e. Px) Sense p<<n values by Px Multiply by D Generation of a signal from Ω Optimized Projection Directions for Compressed Sensing

  10. The Obtained Linear System: instead of the original system x = Dα (with D of size n×K), we get a new one, y = PDα, which is really a smaller system with the effective dictionary PD of size p×K.

  11. Reconstructing x: 1. Solve min ||α||0 subject to PDα = y. 2. Set x = Dα. • The following are known: • D – part of the model for Ω; • P – our choice of projections; • y – the set of p measurements; and • α is expected to be sparse !!!

  12. In Practice? Is there a practical reconstruction algorithm? • Replace ||α||0 by ||α||1: Basis Pursuit. • Build the support of α greedily: Matching Pursuit. A General Rule* claims: for signals in Ω, if p>2T up to a constant (and a log factor!?), we get a perfect recovery of the signal with an overwhelming probability. * See [Candes, Romberg & Tao `04, Donoho `06, Candes `06, Tsaig & Donoho `06].
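A minimal sketch of the greedy route, here in its Orthogonal Matching Pursuit variant (the function name and interface are mine, not from the slides):

```python
import numpy as np

def omp(A, y, T):
    """Greedily build a T-sparse alpha such that A @ alpha approximates y."""
    alpha = np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    support = []
    for _ in range(T):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit all chosen coefficients by least squares on the support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    alpha[support] = coef
    return alpha
```

In the CS setting the matrix is A = PD, and the recovered signal is then x = D @ omp(P @ D, y, T).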

  13. Agenda: • 1. What is Compressed-Sensing (CS)? – A little bit of background • 2. The Choice of Projections for CS – Criteria for optimality • 3. The Obtained Results – Simulations • 4. What Next? – Conclusions & open problems

  14. What P to Use? Reconstruction is performed by approximating the solution of the above system, and the success (or failure) of this approach depends strongly on a proper choice of P. • The choices recommended in the literature include several random options: • Gaussian IID entries; • Binary (±1) IID entries; and • random sampling of Fourier rows. • An important incentive to choose one of the above options is the (relative) ease with which theoretical guarantee theorems can be developed. • Could we propose a better way to design the projection directions P?

  15. Better P – Phase I: HERE IS A GREAT IDEA – how about choosing P that optimizes the recovery performance on a group of "training" signals? Here is how such a thing could be done: Step 1: Gather a large set of example signals having sparse representations. Step 2: Solve for the P that minimizes the reconstruction error when each example is sensed by P and then recovered by a pursuit algorithm. However, this is a bi-level optimization problem, hard to minimize w.r.t. P – TOO COMPLICATED.

  16. Better P – Phase II: Design P such that pursuit methods are likely to perform better – i.e., optimize performance indirectly. But … how? Here is a Theorem we could rely on [Donoho & Elad `02] [Gribonval & Nielsen `04] [Tropp `04]: • Suppose that x0=Dα0, where ||α0||0 < (1+1/μ(D))/2. • Then: • The vector α0 is the unique sparsest solution of the system x0=Dα, and • Both pursuit algorithms are guaranteed to find it. μ is a property of the matrix D, describing its columns' dependence.

  17. How Can this Be Used? Adapting the above Theorem, the implication is: when seeking the sparsest solution to the system y=PDα, if the solution satisfies ||α||0 < (1+1/μ(PD))/2, then pursuit methods succeed, and find this (sparsest) solution.

  18. The Rationale: a signal from Ω is generated by taking a sparse & random vector α and multiplying by D; we then sense p<<n values by Px. Can we recover it? Yes, provided that T < (1+1/μ(PD))/2. Great!! So, it is time to talk about μ. We could design P to lead to the smallest possible measure μ.

  19. Let's Talk about μ • Compute the Gram matrix G = DTD (assuming L2-normalized columns of D). • The Mutual Coherence μ(D) is the largest off-diagonal entry of G in absolute value. • Minimizing μ(PD): finding P such that the worst-possible pair of columns in PD is as distant as possible – a well-defined (but not too easy) problem. • The above is possible but … worthless for our needs! μ is a "worst-case" measure – it does not reveal the average performance of pursuit methods.
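The coherence computation above can be sketched in a few lines (the helper name is mine):

```python
import numpy as np

def mutual_coherence(A):
    """mu(A): largest |off-diagonal| entry of the Gram matrix of column-normalized A."""
    A = A / np.linalg.norm(A, axis=0)  # enforce L2-normalized columns
    G = A.T @ A
    np.fill_diagonal(G, 0.0)           # ignore the unit diagonal
    return float(np.max(np.abs(G)))
```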

  20. Better P – Phase III (and Last): We propose to use an average measure μt, taking into account all off-diagonal entries in |G| (with G the Gram matrix of the normalized PD) that are above a pre-specified threshold t, and averaging them. Instead of working with a fixed threshold t, we can also work with t%, representing an average of the top t% entries in |G|. For t%=100, μt is an average of all the off-diagonal entries of |G|; as t% shrinks, it becomes the largest entry in |G|, i.e., μ itself.
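Both variants of the measure can be sketched as follows (the function names and the exact tie-handling are my assumptions):

```python
import numpy as np

def mu_t(A, t):
    """Average of the off-diagonal entries of |G| that are at least t."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    off = G[~np.eye(G.shape[0], dtype=bool)]   # off-diagonal entries only
    big = off[off >= t]
    return float(big.mean()) if big.size else 0.0

def mu_t_percent(A, t_percent):
    """Average of the top t% off-diagonal entries of |G|."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    off = np.sort(G[~np.eye(G.shape[0], dtype=bool)])[::-1]  # descending
    m = max(1, int(round(off.size * t_percent / 100.0)))
    return float(off[:m].mean())
```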

  21. So, Our Goal Now is to … minimize the average coherence measure μt w.r.t. P. We need a numerical algorithm to do it: given D, an initial Pinit, and the parameters t, iter, γ, the algorithm outputs Popt.

  22. The Involved Forces: Defining G as the Gram matrix of PD, we know that the following properties must hold true: 1. The rank of G must be p, 2. The square-root of G should be factorized to PD, 3. Some entries in G should be as small as possible. • We propose an algorithm that projects iteratively onto each of the above constraints, gradually getting a better P. • Closely related to the work in [Tropp, Dhillon, Heath & Strohmer `05].

  23. The Numerical Algorithm: Set k=0 & P0=Pinit, then iterate: 1. Normalize the columns of PkD. 2. Compute the Gram matrix G. 3. Shrink the large entries of G by the factor γ (the shrink curve y=φ(x) follows y=x for small inputs and y=γx beyond the threshold). 4. Reduce the rank of G to p. 5. Compute the square root of G. 6. Update P by minimizing the squared Frobenius-norm fit to PD. Set k=k+1 and repeat until k=iter, then output P.
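Putting the steps above together as a sketch (this is my reading of the loop, with a quantile-based top-t threshold, an SVD-based rank-p square root, and a pseudo-inverse least-squares update; the actual algorithm may differ in such details):

```python
import numpy as np

def optimize_projections(D, p, t=0.2, gamma=0.5, iters=50, seed=0):
    """Iteratively shrink the large Gram entries of PD and refit P."""
    rng = np.random.default_rng(seed)
    n, K = D.shape
    P = rng.standard_normal((p, n))            # P0 = Pinit, here random
    for _ in range(iters):
        E = P @ D
        E = E / np.linalg.norm(E, axis=0)      # 1. normalize the columns of PD
        G = E.T @ E                            # 2. compute the Gram matrix
        absG = np.abs(G - np.diag(np.diag(G)))
        thr = np.quantile(absG[absG > 0], 1.0 - t)
        G[absG >= thr] *= gamma                # 3. shrink the top-t fraction
        U, s, Vt = np.linalg.svd(G)            # 4.+5. rank-p square root of G
        S = np.sqrt(s[:p])[:, None] * Vt[:p]
        P = S @ np.linalg.pinv(D)              # 6. min ||S - PD||_F over P
    return P
```

Because the shrink targets exactly the top-t fraction of |G|, the average of those entries (the μt objective) should drop relative to the random initialization.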

  24. Agenda: • 1. What is Compressed-Sensing (CS)? – A little bit of background • 2. The Choice of Projections for CS – Criteria for optimality • 3. The Obtained Results – Simulations • 4. What Next? – Conclusions & open problems

  25. A Core Experiment Details: • We build a random matrix D of size 200×400 (n=200, K=400) with Gaussian zero-mean IID entries. • We find a 30×200 matrix P (p=30) according to the above-described algorithm. • We present the obtained results and the convergence behavior. • Parameters: t=20%, γ=0.5.

  26. Entries in |G| [Figure: the histogram of the entries in |G|, and the matrices |G| as a function of the iteration, shown after 0, 5, 10, 15 and 20 iterations.]

  27. Convergence [Figure: the value of μt as a function of the iteration, for γ = 0.95, 0.85, 0.75, 0.65 and 0.55.]

  28. CS Performance (1): Effect of P Details: • D: 80×120 random. • 100,000 signal examples to test on; each has T=4 non-zeros in its (random) α. • p varied in the range [16,40]. • CS performance: before and after optimizing P, for both BP and OMP. • Optimization: t=20%, γ=0.95, 1000 iterations. • Shown: average results over 10 experiments. [Figure: relative # of errors (log scale) vs. p, for OMP and BP with random vs. optimized P.]

  29. CS Performance (2): Effect of T Details: • D: 80×120 random. • 100,000 signal examples to test on; each has T (varying) non-zeros in its α. • T varied in the range [1,7]. • Fixed p=25. • CS performance: before and after optimizing P, for both BP and OMP. • Optimization: t=20%, γ=0.95, 1000 iterations. • Shown: average results over 10 experiments. [Figure: relative # of errors (log scale) vs. T – cardinality of the input signals – for OMP and BP with random vs. optimized P.]

  30. CS Performance (3): Effect of n Details: • D: n×1.5n (random), with n varied in the range [40,160]. • 100,000 signal examples to test on; each has T non-zeros in its α. • T=n/20. • p=n/4. • CS performance: before and after optimizing P – OMP and BP. • Optimization: t=20%, γ=0.95, 1000 iterations. • … [Figure: relative # of errors (log scale) vs. n, for OMP and BP with random vs. optimized P.]

  31. CS Performance (4): Effect of t% Details: • D: 80×120 random. • 100,000 signal examples to test on; each has T=4 non-zeros in its α. • p=30, T=4: fixed. • CS performance: before and after optimizing P. • Optimization: t varied in the range [1,100]%, γ=0.95, 1000 iterations. • … [Figure: relative # of errors (log scale) vs. t%, for OMP and BP with optimized P.]

  32. Agenda: • 1. What is Compressed-Sensing (CS)? – A little bit of background • 2. The Choice of Projections for CS – Criteria for optimality • 3. The Obtained Results – Simulations • 4. What Next? – Conclusions & open problems

  33. Conclusions: • Compressed-Sensing (CS) is an emerging & exciting research field, combining sensing and compression of signals. • Sparse Representation over a dictionary D is the model used for deriving "sampling theorems" in CS work. • A linear projection P is used for generating the compressed & sensed measurements in CS. How should P be chosen? • Ideally, we want to choose P to optimize CS performance for the class of signals in mind. • Since that is too difficult, we propose to optimize w.r.t. a different measure, indirectly affecting CS performance. • The measure we propose is an Average Mutual Coherence, and we present an algorithm for its minimization. • Various experiments that we provide show that the damn thing works rather well. • More work is required in order to further improve this method in various ways – see next slide.

  34. Future Work • Could we optimize the true performance of BP/OMP using the bi-level optimization we have shown? • How about optimizing P w.r.t. a simplified pursuit algorithm like simple thresholding? • What to do when the dimensions involved are huge? For example, when using the curvelet, contourlet, or steerable-wavelet transforms – in these cases the proposed algorithm is impractical! • Could we find a clear theoretical way to tie the proposed measure (average coherence) to pursuit performance? • Maybe there is a better (yet simple) alternative measure. What is it?
