
Efficient Algorithms for Robust One-bit Compressive Sensing

Explore advanced algorithms in one-bit compressive sensing for efficient vector and support set recovery. Learn about theoretical guarantees, sample complexity, and the latest research directions.


Presentation Transcript


  1. Efficient Algorithms for Robust One-bit Compressive Sensing 张利军 (Lijun Zhang), 南京大学 (Nanjing University) http://cs.nju.edu.cn/zlj

  2. Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion

  3. Compressive Sensing • The Basic Formulation: y = Ax, where y ∈ ℝ^m, A ∈ ℝ^{m×n}, x ∈ ℝ^n, m ≪ n

  4. Compressive Sensing • The Basic Formulation: y = Ax, where y ∈ ℝ^m (the measurement vector), A ∈ ℝ^{m×n} (the sensing matrix), x ∈ ℝ^n, m ≪ n

  5. Compressive Sensing • The Basic Formulation: y = Ax, where y ∈ ℝ^m, A ∈ ℝ^{m×n}, x ∈ ℝ^n, m ≪ n • Impossible unless we make some assumptions

  6. Compressive Sensing • The Basic Formulation: y = Ax, where y ∈ ℝ^m, A ∈ ℝ^{m×n}, x ∈ ℝ^n, m ≪ n • Assumptions • x is sparse • A has some special property • Restricted Isometry Property (RIP)
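To make the setup concrete, here is a minimal Python sketch of this measurement model. The variable names and parameter values are my own; a Gaussian sensing matrix is used because it satisfies the RIP with high probability when m is large enough.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1000, 100, 10                # dimensionality, measurements, sparsity (m << n)

# s-sparse signal: s non-zero entries at random positions
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian sensing matrix (satisfies RIP with high probability for m large enough)
A = rng.standard_normal((m, n)) / np.sqrt(m)

y = A @ x                              # m real-valued linear measurements of x
```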

  7. History • Parameter Estimation (1795) [Davenport et al., 2012] • Lasso [Tibshirani, 1996]

  8. History • Parameter Estimation (1795) [Davenport et al., 2012] • Lasso [Tibshirani, 1996] • Compressive Sensing [Candès and Tao, 2005] [Donoho, 2006] • Perfect Recovery under the RIP condition • Beyond the Nyquist Sampling Theorem

  9. History • Parameter Estimation (1795) [Davenport et al., 2012] • Lasso [Tibshirani, 1996] • Compressive Sensing [Candès and Tao, 2005] [Donoho, 2006] • Perfect Recovery under the RIP condition • Johnson–Lindenstrauss lemma [Mendelson et al., 2008]

  10. History • Parameter Estimation (1795) [Davenport et al., 2012] • Lasso [Tibshirani, 1996] • Compressive Sensing [Candès and Tao, 2005] [Donoho, 2006]

  11. Research Directions • Theoretical Analysis • Upper Bound: there exists an algorithm such that for all signals the recovery error is small • Lower Bound: there exists a signal such that for all algorithms the recovery error is large

  12. Research Directions • Theoretical Analysis • Upper Bound: there exists an algorithm such that for all signals the recovery error is small • Lower Bound: there exists a signal such that for all algorithms the recovery error is large • Algorithm Design • Make the algorithm more efficient • Make the algorithm more practical [Zhang et al., 2015]
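One way to write the two statements more formally, in generic notation of my own (Σ_s for the set of s-sparse signals, ε(m, s, n) for the target error level, which are not symbols from the slides):

```latex
% Upper bound: some recovery algorithm \hat{x}(\cdot) works for every s-sparse signal
\exists\, \hat{x}(\cdot) \ \text{such that}\ \forall\, x \in \Sigma_s:\quad
    \|\hat{x}(y) - x\|_2 \;\le\; \varepsilon(m, s, n)

% Lower bound: every algorithm fails on some s-sparse signal
\forall\, \hat{x}(\cdot)\ \exists\, x \in \Sigma_s:\quad
    \|\hat{x}(y) - x\|_2 \;\ge\; c\,\varepsilon(m, s, n)
```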

  13. One-Bit Compressive Sensing • The Basic Formulation: y = sign(Ax), where y ∈ {−1, +1}^m, A ∈ ℝ^{m×n}, x ∈ ℝ^n • m could be larger than n, since each measurement carries only a single bit

  14. One-Bit Compressive Sensing • The Basic Formulation: y = sign(Ax), where y ∈ {−1, +1}^m, A ∈ ℝ^{m×n}, x ∈ ℝ^n • The Goal • Vector Recovery • Support Set Recovery
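Continuing the toy setup above, a sketch of the one-bit measurement model and the two recovery goals (again with my own variable names and parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1000, 2000, 10          # m may exceed n: each measurement carries only one bit

x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)            # only the direction of x is identifiable from signs

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                # one-bit measurements in {-1, +1}^m

# Goals:
#   vector recovery      -> an estimate x_hat with small ||x_hat - x||_2
#   support set recovery -> recover the index set `support`
```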

  15. Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion

  16. Terminology • Sparsity s, Measurements m, Dimensionality n • Convergence (Recovery) Rate • Given s, m, n, the order of the recovery error ‖x̂ − x‖₂ • Sample (Measurement) Complexity • To ensure ‖x̂ − x‖₂ ≤ ε, the order of m
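A pair of illustrative helpers (my own, not from the talk) showing how each quantity can be measured in simulation. Since one-bit measurements lose the scale of x, the error is computed between unit-normalized vectors.

```python
import numpy as np

def recovery_error(x_true, x_hat):
    """l2 distance between unit-normalized vectors (the scale of x is lost in one-bit CS)."""
    u = x_true / np.linalg.norm(x_true)
    v = x_hat / max(np.linalg.norm(x_hat), 1e-12)
    return np.linalg.norm(u - v)

def support_recovered(x_true, x_hat, s):
    """True if the s largest-magnitude entries of x_hat are exactly the true support."""
    true_support = set(np.flatnonzero(x_true).tolist())
    estimated = set(np.argsort(-np.abs(x_hat))[:s].tolist())
    return estimated == true_support
```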

  17. Related Work: Noiseless • The Seminal Work [Boufounos and Baraniuk, 2008] • Non-convex Optimization, No Guarantee

  18. Related Work: Noiseless • The Seminal Work [Boufounos and Baraniuk, 2008] • Non-convex Optimization, No Guarantee ?

  19. Related Work: Noiseless • The Seminal Work [Boufounos and Baraniuk, 2008] • Non-convex Optimization, No Guarantee • The 1st with Theoretical Guarantee [Jacques et al., 2013] • Non-convex Optimization

  20. Related Work: Noiseless • The Seminal Work [Boufounos and Baraniuk, 2008] • Non-convex Optimization, No Guarantee • The 1st with Theoretical Guarantee [Jacques et al., 2013] • Non-convex Optimization • An Efficient Two-stage Algorithm [Gopi et al., 2013] • Sample Complexity

  21. Related Work: Noiseless • The 1st Convex Formulation [Plan and Vershynin, 2013a]

  22. Related Work: Noiseless • The 1st Convex Formulation [Plan and Vershynin, 2013a] • Advantage • Convex Optimization, With Guarantee • Exactly Sparse Vector: • Approximately Sparse Vector:

  23. Related Work: Noiseless • The 1st Convex Formulation [Plan and Vershynin, 2013a] • Advantage • Convex Optimization, With Guarantee • Exactly Sparse Vector: • Approximately Sparse Vector: • Limitation • Sample Complexity
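As a point of reference, the linear program of [Plan and Vershynin, 2013a], as I recall it (minimize the ℓ₁ norm subject to sign consistency and a normalization constraint), can be written with a generic convex solver. This sketch is my reconstruction, not code from the talk.

```python
import cvxpy as cp
import numpy as np

def pv_linear_program(A, y):
    """Hedged sketch of the noiseless one-bit CS linear program (as I recall it):
    minimize ||x||_1  subject to  y_i <a_i, x> >= 0  and  sum_i y_i <a_i, x> = m."""
    m, n = A.shape
    x = cp.Variable(n)
    consistency = cp.multiply(y, A @ x)            # y_i <a_i, x>, elementwise
    constraints = [consistency >= 0,               # signs of <a_i, x> agree with y
                   cp.sum(consistency) == m]       # normalization, rules out x = 0
    cp.Problem(cp.Minimize(cp.norm(x, 1)), constraints).solve()
    return x.value / np.linalg.norm(x.value)       # report a unit-norm estimate
```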

  24. Related Work: Noisy • Noisy Observation Model: y_i = sign(⟨a_i, x⟩) w.p. 1 − p, and y_i = −sign(⟨a_i, x⟩) w.p. p

  25. Related Work: Noisy • Noisy Observation Model: y_i = sign(⟨a_i, x⟩) w.p. 1 − p, and y_i = −sign(⟨a_i, x⟩) w.p. p • Several Heuristic Algorithms • [Yan et al., 2012] [Movahed et al., 2012]

  26. Related Work: Noisy • Noisy Observation Model: y_i = sign(⟨a_i, x⟩) w.p. 1 − p, and y_i = −sign(⟨a_i, x⟩) w.p. p • Several Heuristic Algorithms • [Yan et al., 2012] [Movahed et al., 2012] • The 1st with Guarantee [Plan and Vershynin, 2013b] • Exactly Sparse & Approximately Sparse
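A small sketch of how such noisy observations can be simulated, assuming the random sign-flip model suggested by the "w.p." fragments on the slide (the flip model and all names are my own choices):

```python
import numpy as np

def noisy_one_bit(A, x, flip_prob, rng):
    """One-bit measurements where each sign is flipped independently with probability flip_prob."""
    clean = np.sign(A @ x)
    flips = rng.random(A.shape[0]) < flip_prob
    return np.where(flips, -clean, clean)

rng = np.random.default_rng(1)
n, m, s = 1000, 500, 10
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = noisy_one_bit(A, x, flip_prob=0.1, rng=rng)    # roughly 10% of the signs are wrong
```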

  27. Summary • Sample Complexity of 1-bit CS

  28. Summary • Sample Complexity of 1-bit CS • Two entries in the table are marked as improvable

  29. Summary • Sample Complexity

  30. Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion

  31. Our Motivation • This Direction is Relatively New • Current results are unsatisfactory • Our Key Observation

  32. Our Motivation • This Direction is Relatively New • Current results are unsatisfactory • Our Key Observation Applying Techniques in Classification to One-bit CS

  33. Our Passive Algorithm • An ℓ₁-norm Regularized Formulation

  34. Our Passive Algorithm • An ℓ₁-norm Regularized Formulation • Comparison with [Plan and Vershynin, 2013b]: the objective splits into an empirical risk term plus a regularization term

  35. Our Passive Algorithm • An ℓ₁-norm Regularized Formulation • Closed-Form Solution • Proved by Analyzing the Dual Problem
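A hedged sketch of what such a closed form can look like, assuming the formulation minimizes −⟨Aᵀy, x⟩/m + γ‖x‖₁ over the unit ℓ₂ ball (my reconstruction; the exact formulation is not in the transcript): soft-threshold the correlation vector Aᵀy/m and normalize the result.

```python
import numpy as np

def passive_estimate(A, y, gamma):
    """Hedged reconstruction of an l1-regularized passive estimator:
    soft-threshold the correlation vector A^T y / m at level gamma, then
    normalize (the scale of x cannot be recovered from one-bit data anyway)."""
    m = A.shape[0]
    v = A.T @ y / m                                         # correlation with the labels
    u = np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)     # soft-thresholding
    norm = np.linalg.norm(u)
    return u / norm if norm > 0 else u
```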

  36. Theoretical Guarantees • Recovery Rate

  37. Theoretical Guarantees • Sample Complexity • Improves upon the previous result of [Plan and Vershynin, 2013b] • Matches the minimax rate of CS [Raskutti et al., 2011]
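For context, the minimax rate referred to here is, up to constants and noise-level factors, the familiar sparse-estimation rate; stated loosely from my recollection of [Raskutti et al., 2011], with Σ_s again denoting the s-sparse signals:

```latex
% Minimax l2-error for estimating an s-sparse x in R^n from m measurements,
% stated loosely (up to constants and noise-level factors):
\inf_{\hat{x}} \; \sup_{x \in \Sigma_s} \; \mathbb{E}\,\|\hat{x} - x\|_2
    \;\asymp\; \sqrt{\frac{s \log (n/s)}{m}}
```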

  38. Techniques for Proof • Non-smooth Convex Optimization • Concentration Inequality

  39. Adaptive Algorithm • Key Idea: the learner can choose the measurement vectors • A Reminder [Donoho, 2006]: in traditional CS, adaptivity is NOT useful! • Why does it work in One-bit CS? Because active learning works in binary classification

  40. Challenges • Active Learning Based on Binary Loss • Good Theoretical Guarantee • Inefficient • Active Learning Based on Convex Loss • Efficient (e.g., SVM Active Learning) • How to bound the recovery error in terms of the convex risk? • Much more difficult than previously thought [Hanneke and Yang, 2010]

  41. Our Adaptive Algorithm • The Basic Idea • At the Beginning, Random Sampling

  42. Our Adaptive Algorithm • The Basic Idea • At the Beginning, Random Sampling • Shrink the Sampling Space, as we learn

  43. Our Adaptive Algorithm • The Basic Idea • At the Beginning, Random Sampling • Shrink the Sampling Space, as we learn • Sampling Space becomes smaller, as we know more
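Purely to illustrate "shrinking the sampling space" (a toy two-stage scheme of my own, not the algorithm presented in the talk): spend a first random batch estimating the support, then restrict later measurement vectors to the estimated support.

```python
import numpy as np

def toy_adaptive_one_bit(x, s, m1, m2, rng):
    """Toy two-stage illustration of shrinking the sampling space.
    NOT the algorithm from the talk: stage 1 uses random measurements to guess the
    support, stage 2 spends its measurements only on the estimated support."""
    n = x.size

    # Stage 1: random Gaussian measurements over the whole space.
    A1 = rng.standard_normal((m1, n))
    y1 = np.sign(A1 @ x)
    corr = A1.T @ y1 / m1
    est_supp = np.argsort(-np.abs(corr))[:s]         # crude support estimate

    # Stage 2: shrink the sampling space to the estimated support.
    A2 = np.zeros((m2, n))
    A2[:, est_supp] = rng.standard_normal((m2, s))
    y2 = np.sign(A2 @ x)

    # Re-estimate on the restricted coordinates and normalize.
    x_hat = np.zeros(n)
    x_hat[est_supp] = A2[:, est_supp].T @ y2 / m2
    return x_hat / max(np.linalg.norm(x_hat), 1e-12)
```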

  44. Our Adaptive Algorithm • Data are not i.i.d. • The Concentration Inequality Fails • A Common Dilemma in Active Learning • [Yang and Hanneke, 2013] • To Remedy

  45. The Detail

  46. Theoretical Guarantee • Exactly Sparse

  47. Theoretical Guarantee • Approximately Sparse

  48. Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion

  49. Experiments

  50. Experiments
