Efficient Algorithms for Robust One-bit Compressive Sensing  Lijun Zhang (张利军), Nanjing University (南京大学)  http://cs.nju.edu.cn/zlj
Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion
Compressive Sensing • The Basic Formulation: recover $x$ from $y = Ax$, where $y \in \mathbb{R}^m$ (measurements), $A \in \mathbb{R}^{m \times n}$ (sensing matrix), $x \in \mathbb{R}^n$, and $m \ll n$ • Impossible unless we make some assumptions • Assumptions • $x$ is sparse • $A$ has some special property • Restricted Isometry Property (RIP)
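Not part of the original slides: a minimal sketch of the sensing model just described, assuming a Gaussian sensing matrix (the slides only require a matrix with a special property such as RIP); all dimensions are illustrative.

```python
import numpy as np

# Standard compressive-sensing model: y = A x with a sparse x and far fewer
# measurements than unknowns (m << n).
rng = np.random.default_rng(0)
n, m, s = 1000, 100, 10                       # dimensionality, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)           # s-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix (assumption)
y = A @ x                                     # m linear measurements
print(y.shape)                                # (100,) -- far fewer measurements than unknowns
```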
History • Parameter Estimation (1795) [Davenport et al., 2012] • Lasso [Tibshirani, 1996] • Compressive Sensing [Candès and Tao, 2005] [Donoho, 2006] • Perfect recovery under the RIP condition • Goes beyond the Nyquist Sampling Theorem • Related to the Johnson–Lindenstrauss lemma [Mendelson et al., 2008]
Research Directions • Theoretical Analysis • Upper Bound: there exists an algorithm that succeeds for all sparse signals • Lower Bound: there exists a hard instance such that, for all algorithms, the error cannot be made smaller (see the formal statements below) • Algorithm Design • Make the algorithm more efficient • Make the algorithm more practical [Zhang et al., 2015]
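The point of these two bullets is the order of the quantifiers. One standard way to make them precise is the display below, where $\Sigma_s$ denotes the set of $s$-sparse vectors and $\varepsilon, \varepsilon'$ are the respective bounds; the lower bound is written in the usual minimax form (for every algorithm there is a hard signal), which is how such statements are typically formalized.

```latex
% Upper bound: one algorithm works for every s-sparse signal
\exists\, \mathcal{A}\;\; \forall\, x \in \Sigma_s:\quad
  \|\mathcal{A}(y) - x\|_2 \;\le\; \varepsilon(s, m, n).

% Lower bound (minimax form): every algorithm is defeated by some s-sparse signal
\forall\, \mathcal{A}\;\; \exists\, x \in \Sigma_s:\quad
  \|\mathcal{A}(y) - x\|_2 \;\ge\; \varepsilon'(s, m, n).
```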
One-Bit Compressive Sensing • The Basic Formulation: recover $x$ from $y = \mathrm{sign}(Ax)$, where $y \in \{-1, +1\}^m$, $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$; here $m$ could be larger than $n$ • The Goal • Vector Recovery • Support Set Recovery
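Continuing the earlier sketch (again not from the slides): the one-bit model keeps only the sign of each linear measurement, so all magnitude information about $x$ is lost and recovery is only possible up to scale.

```python
import numpy as np

# One-bit measurements: only the sign of each linear measurement is observed.
rng = np.random.default_rng(0)
n, m, s = 1000, 2000, 10                      # here m may exceed n
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                            # y in {-1, +1}^m (np.sign(0) would give 0)
# The scale of x is unrecoverable from y alone: sign(A @ (2 * x)) equals y entrywise.
```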
Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion
Terminology • Sparsity $s$, Measurements $m$, Dimensionality $n$ • Convergence (Recovery) Rate • Given $s$, $m$, $n$, the order of the recovery error $\|\hat{x} - x\|_2$ • Sample (Measurement) Complexity • To ensure $\|\hat{x} - x\|_2 \le \epsilon$, the order of $m$ required
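The two notions are inverse views of the same bound. As a generic worked example (the exponent $\alpha$ is a placeholder, not a result from the talk): if a method guarantees the rate on the left, then solving the rate for $m$ gives the sample complexity on the right.

```latex
\|\hat{x} - x\|_2 \;=\; O\!\left(\Big(\tfrac{s \log (n/s)}{m}\Big)^{1/\alpha}\right)
\quad\Longrightarrow\quad
m \;=\; O\!\left(\frac{s \log (n/s)}{\epsilon^{\alpha}}\right)
\;\text{ suffices for }\; \|\hat{x} - x\|_2 \le \epsilon .
```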
Related Work: Noiseless • The Seminal Work [Boufounos and Baraniuk, 2008] • Non-convex Optimization, No Guarantee • The 1st with Theoretical Guarantee [Jacques et al., 2013] • Non-convex Optimization • An Efficient Two-stage Algorithm [Gopi et al., 2013] • Sample Complexity
Related Work: Noiseless • The 1st Convex Formulation [Plan and Vershynin, 2013a] • Advantage • Convex Optimization, With Guarantees • For both Exactly Sparse and Approximately Sparse Vectors • Limitation • Sample Complexity
Related Work: Noisy • Noisy Observation Model • Each measured sign may be flipped with some probability (w.p.) • Several Heuristic Algorithms • [Yan et al., 2012] [Movahed et al., 2012] • The 1st with Guarantee [Plan and Vershynin, 2013b] • Exactly Sparse & Approximately Sparse
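Not from the slides: a minimal sketch of one common noisy observation model, in which each sign is independently flipped; the flip probability `p` and the Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np

# Noisy one-bit measurements: each observed sign is independently flipped
# with probability p (a common noise model; p and A are illustrative).
rng = np.random.default_rng(0)
n, m, s, p = 1000, 2000, 10, 0.1
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
y_clean = np.sign(A @ x)
flips = rng.random(m) < p
y = np.where(flips, -y_clean, y_clean)        # observed (noisy) one-bit measurements
```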
Summary • Sample Complexity of 1-bit CS • The sample complexities of existing methods are improvable
Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion
Our Motivation • This Direction is Relatively New • Current results are unsatisfactory • Our Key Observation • Applying Techniques from Classification to One-bit CS
Our Passive Algorithm • An $\ell_1$-norm Regularized Formulation (the objective combines an empirical risk term with a regularization term) • Comparison with [Plan and Vershynin, 2013b] • Closed-Form Solution (see the sketch below) • Proved by Analyzing the Dual Problem
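The exact objective is omitted on the slide, so the sketch below is only an illustration of the flavor of such closed-form solutions, not the authors' formulation: it minimizes a generic objective $\tfrac{1}{2}\|x\|_2^2 - \langle \tfrac{1}{m}A^\top y, x\rangle + \lambda\|x\|_1$ (an assumption), whose minimizer is the soft-thresholding of the back-projection $A^\top y / m$, followed by a normalization since one-bit measurements lose scale.

```python
import numpy as np

def soft_threshold(v, lam):
    """Entrywise soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def passive_one_bit_estimate(A, y, lam):
    """Generic l1-regularized passive estimator (illustrative, not the authors'
    exact formulation): minimizes
        0.5 * ||x||_2^2 - <A^T y / m, x> + lam * ||x||_1,
    whose closed-form minimizer is soft-thresholding of A^T y / m."""
    m = A.shape[0]
    v = A.T @ y / m
    x_hat = soft_threshold(v, lam)
    norm = np.linalg.norm(x_hat)
    return x_hat / norm if norm > 0 else x_hat
```

For instance, with `A` and `y` from the noisy sketch above, `passive_one_bit_estimate(A, y, lam=0.05)` returns a unit-norm sparse estimate; larger `lam` yields sparser estimates.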
Theoretical Guarantees • Recovery Rate • Sample Complexity • Improves upon the previous bound of [Plan and Vershynin, 2013b] • Matches the minimax rate of CS [Raskutti et al., 2011]
Techniques for Proof • Non-smooth Convex Optimization • Concentration Inequality
Adaptive Algorithm • Key Idea: the learner can choose the measurement vectors adaptively • A Reminder [Donoho, 2006]: in traditional CS, adaptive sampling is NOT useful! • Why does it work in One-bit CS? Because active learning works in binary classification
Challenges • Active Learning Based on the Binary Loss • Good Theoretical Guarantee • Inefficient • Active Learning Based on a Convex Loss • Efficient (e.g., SVM Active Learning) • How to bound the recovery error? (i.e., relate the convex risk to the recovery error) • Much more difficult than previously thought [Hanneke and Yang, 2010]
Our Adaptive Algorithm • The Basic Idea (see the sketch below) • At the Beginning, Random Sampling • Shrink the Sampling Space as we learn • The sampling space becomes smaller as we know more
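The slides state only the random-then-shrink pattern, so the loop below is a purely illustrative skeleton of that pattern: the shrinking rule used here (restricting new measurement vectors to the largest coordinates of the current estimate) is a hypothetical placeholder, not the authors' sampling rule, and the back-projection estimate is a stand-in for the actual estimator.

```python
import numpy as np

def adaptive_one_bit_sketch(x_true, rounds=5, per_round=200, keep_frac=0.5, seed=0):
    """Illustrative skeleton of adaptive one-bit sensing: start with random
    sampling, then progressively shrink the sampling space as the estimate
    improves.  The shrinking rule (zeroing measurement-vector entries outside
    the largest coordinates of the current estimate) is a placeholder only."""
    rng = np.random.default_rng(seed)
    n = x_true.size
    active = np.arange(n)                     # current sampling space (all coordinates)
    estimate = np.zeros(n)
    for _ in range(rounds):
        # Draw measurement vectors supported on the current sampling space.
        A = np.zeros((per_round, n))
        A[:, active] = rng.standard_normal((per_round, active.size))
        y = np.sign(A @ x_true)               # one-bit responses to the chosen queries
        estimate = A.T @ y / per_round        # crude back-projection estimate
        # Shrink: keep only the coordinates where the estimate is largest.
        k = max(1, int(keep_frac * active.size))
        active = np.argsort(-np.abs(estimate))[:k]
    norm = np.linalg.norm(estimate)
    return estimate / norm if norm > 0 else estimate
```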
Our Adaptive Algorithm • Adaptively chosen data are non-i.i.d. • The standard concentration inequality fails • A Common Dilemma in Active Learning [Yang and Hanneke, 2013] • To Remedy
Theoretical Guarantees • Exactly Sparse Case • Approximately Sparse Case
Outline • Background • Related Work • Our Algorithms • Experiments • Conclusion