Unrolling: A principled method to develop deep neural networks
Chris Metzler, Ali Mousavi, Richard Baraniuk
Rice University
Solving Imaging Inverse Problems

Traditional Methods (e.g., ADMM)
• Have well-understood behavior, e.g., iterations are refining an estimate
• Convergence guarantees
• Interpretable priors
• Are easy to apply

Deep Neural Networks (DNNs)
• Blackbox methods which learn complex functions
• What happens between layers is an open research question
• Priors are learned with training data
• Demonstrate state-of-the-art performance on a variety of imaging tasks

Is there a way to combine the strengths of both?
This talk
• Describe unrolling: a process to turn an iterative algorithm into a deep neural network
  • Can use training data
  • Is interpretable
  • Maintains convergence and performance guarantees
• Apply unrolling to the Denoising-based AMP algorithm to solve the compressive imaging problem
The unrolling process [Gregor and LeCun 2010, Kamilov and Mansour 2015, Borgerding and Schniter 2016]
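The original slides illustrate unrolling with figures. As a rough sketch of the idea (not from the talk; the names, dimensions, and initialization are illustrative), the code below unrolls an ISTA-like iteration into a fixed number of layers, each with its own learnable matrices and threshold, in the spirit of LISTA [Gregor and LeCun 2010]:

```python
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm (elementwise shrinkage).
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class UnrolledISTA(nn.Module):
    def __init__(self, m, n, n_layers=10):
        super().__init__()
        # One set of parameters per layer; in practice they would be
        # initialized from the measurement matrix A rather than at random.
        self.W = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, m))
                                   for _ in range(n_layers)])
        self.S = nn.ParameterList([nn.Parameter(torch.eye(n))
                                   for _ in range(n_layers)])
        self.lam = nn.ParameterList([nn.Parameter(torch.tensor(0.1))
                                     for _ in range(n_layers)])

    def forward(self, y):
        # y: (m, batch) measurements; x: (n, batch) estimate.
        x = torch.zeros(self.S[0].shape[0], y.shape[-1], device=y.device)
        for W, S, lam in zip(self.W, self.S, self.lam):
            x = soft_threshold(W @ y + S @ x, lam)  # one iteration = one layer
        return x
```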
Training the Network
• The unrolled network has free parameters; A and A^H can also be treated as parameters to learn
• Feed training data, i.e., (x, y) pairs, through the network, calculate errors, and backpropagate the error gradients
[Gregor and LeCun 2010, Kamilov and Mansour 2015, Borgerding and Schniter 2016]
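A minimal training loop matching this description (illustrative only; the optimizer, loss, and function names are assumptions, not the authors' code):

```python
import torch

def train_unrolled(net, loader, n_epochs=50, lr=1e-3):
    # loader yields (x, y) pairs: ground-truth signals x and their measurements y.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(n_epochs):
        for x, y in loader:
            x_hat = net(y)            # run the unrolled iterations
            loss = loss_fn(x_hat, x)  # reconstruction error
            opt.zero_grad()
            loss.backward()           # error gradients flow back through every layer
            opt.step()
    return net
```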
Performance [Borgerding and Schniter 2016]
The Compressive Imaging Problem
Target (x) → Sensor Array (y)
• Every measurement has a cost: $, time, bandwidth, etc.
• Compressed sensing enables us to recover x with fewer measurements
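For reference (not stated on the original slide), the underlying measurement model is the standard compressed-sensing setup y = A x + noise, where A is an M x N measurement matrix with M << N. Because the system is underdetermined, recovering x requires a prior, e.g., sparsity in a transform domain or, as in D-AMP below, an image denoiser.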
Compressive Imaging Applications
• Higher Resolution Seismic Imaging
• Higher Resolution Synthetic Aperture Radar
• Faster Medical Imaging
• Low Cost High Speed Imaging
• Low Cost Infrared/Hyperspectral Imaging
[Veeraraghavan et al. 2011, Baraniuk 2007]
Compressive Imaging Problem
[Figure: the set of all natural images]
Traditional Methods vs DNNs

Traditional Methods (D-AMP)
• Well understood behavior
• Recovery guarantees
• Noise sensitivity analysis
• More accurate
• Slower
• Uses denoisers to impose priors

DNNs (Ali's talk)
• Black box: why does it work? When will it fail?
• Less accurate
• Much faster
• Learns priors with training data

Learned D-AMP gets the strengths of both
Denoising-based AMP (D-AMP) [M. et al. 2016]
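For reference, the D-AMP iteration from [M. et al. 2016] is, roughly (a sketch in LaTeX notation, with A^H the conjugate transpose and m the number of measurements):

x^{t+1} = D_{\hat\sigma^t}(x^t + A^H z^t)
z^t = y - A x^t + \frac{z^{t-1}}{m} \, \mathrm{div}\, D_{\hat\sigma^{t-1}}(x^{t-1} + A^H z^{t-1})
\hat\sigma^t = \|z^t\|_2 / \sqrt{m}

Here D_\sigma is an off-the-shelf denoiser (e.g., BM3D), and the divergence term is the Onsager correction that keeps the effective noise at each iteration approximately white and Gaussian.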
Neural Network (NN) Recovery Methods [Mousavi and Baraniuk 2017]
Our contribution
• Unroll D-AMP to form a NN: Learned D-AMP
• Efficiently train a 200-layer network
• Demonstrate the proposed algorithm is fast, flexible, and effective
  • >10x faster than D-AMP
  • Handles arbitrary right-rotationally-invariant matrices
  • State-of-the-art recovery accuracy
Unroll D-AMP
• Need a denoiser that can be trained
CNN-based Denoiser
• Deep convolutional networks are now state-of-the-art image denoisers [Zhang et al. 2016]
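A sketch of such a denoiser in the style of DnCNN [Zhang et al. 2016] (the depth, width, and class names here are illustrative assumptions, not the exact configuration used in Learned D-AMP):

```python
import torch.nn as nn

def make_dncnn(depth=17, channels=64):
    # Conv+ReLU, then (depth-2) blocks of Conv+BatchNorm+ReLU, then Conv; all 3x3 filters.
    layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(channels, channels, 3, padding=1),
                   nn.BatchNorm2d(channels),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(channels, 1, 3, padding=1))  # outputs a noise estimate
    return nn.Sequential(*layers)

class ResidualDenoiser(nn.Module):
    # Residual learning: the network predicts the noise and subtracts it from the input.
    def __init__(self):
        super().__init__()
        self.body = make_dncnn()

    def forward(self, noisy):
        return noisy - self.body(noisy)
```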
Learned D-AMP
• Place a DNN-based denoiser into unrolled D-AMP to form Learned D-AMP
• We need to train this huge network
Training
• The challenge: the network is 200 layers deep and has over 6 million free parameters
• Solution: [1] proved that for D-AMP layer-by-layer training is optimal

[1] M. et al. "From denoising to compressed sensing." IEEE Transactions on Information Theory 62.9 (2016): 5117-5144.
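A hypothetical sketch of that layer-by-layer (greedy) procedure (the layer interface and hyperparameters are assumptions): each layer is trained to minimize the reconstruction error at its own output while the already-trained earlier layers are held fixed.

```python
import torch

def train_layer_by_layer(layers, loader, n_epochs=20, lr=1e-3):
    # layers: list of modules, each mapping (current estimate x_t, measurements y) -> x_{t+1}.
    for t, layer in enumerate(layers):
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        loss_fn = torch.nn.MSELoss()
        for _ in range(n_epochs):
            for x, y in loader:
                with torch.no_grad():             # earlier layers are already trained
                    x_t = torch.zeros_like(x)
                    for prev in layers[:t]:
                        x_t = prev(x_t, y)
                loss = loss_fn(layer(x_t, y), x)  # fit only the current layer's output
                opt.zero_grad()
                loss.backward()
                opt.step()
    return layers
```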
Training Details
• 400 training images were broken into ~300,000 50x50 patches
• Patches were randomly flipped and/or rotated during training
• 10 identical networks were trained to remove additive white Gaussian noise at 10 different noise levels
• Stopped training when validation error stopped improving; generally 10 to 20 epochs
• Trained on a 3584-core Titan X GPU for between 3 and 5 hours
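A minimal sketch of the flip/rotate augmentation mentioned above (illustrative; the authors' actual augmentation code is not shown in the talk):

```python
import numpy as np

rng = np.random.default_rng()

def augment(patch):
    # Randomly rotate and/or flip a training patch (one of the 8 dihedral symmetries).
    patch = np.rot90(patch, k=rng.integers(4))
    if rng.integers(2):
        patch = np.fliplr(patch)
    return patch
```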
High-Res 10% Gaussian Matrix
BM3D-AMP (199 sec) vs Learned D-AMP (62 sec)
Performance: Gaussian Measurements
• 1 dB better than BM3D-AMP
Runtime: Gaussian Measurements
• >10x faster than BM3D-AMP
Computational Complexity
• Computational complexity is dominated by matrix multiplies
Performance: Coded Diffraction Measurements
• 2 dB better than BM3D-AMP
Runtime: Coded Diffraction Measurements
• >17x faster than BM3D-AMP (at 128x128)
High-Res 5% Coded Diffraction Matrix
TVAL3 (6.85 sec) vs Learned D-AMP (1.22 sec)
High-Res 5% Coded Diffraction Matrix
BM3D-AMP (75.04 sec) vs Learned D-AMP (1.22 sec)
Summary
• Unrolling turns an iterative algorithm into a deep neural net
  • Illustrated here with D-AMP → Learned D-AMP
• Learned D-AMP is fast, flexible, and effective
  • >10x faster than D-AMP
  • Handles arbitrary right-rotationally-invariant matrices
  • State-of-the-art recovery accuracy
Acknowledgments
NSF GRFP