
Understanding and evaluating blind deconvolution algorithms


Presentation Transcript


  1. Understanding and evaluating blind deconvolution algorithms. Anat Levin (1,2), Yair Weiss (1,3), Fredo Durand (1), Bill Freeman (1,4). (1) MIT CSAIL, (2) Weizmann Institute, (3) Hebrew University, (4) Adobe

  2. Blind deconvolution. [Figure: blurred image = sharp image ⊗ kernel; both the sharp image and the kernel are unknown.]
  • Rich literature, no perfect solution: Fergus et al. 06, Levin 06, Jia 07, Joshi et al. 08, Shan et al. 08
  • In this talk:
  • No new algorithm
  • What makes blind deconvolution hard?
  • Quantitatively evaluate recent algorithms on the same dataset

  3. Blind deconvolution. The blur model is y = k ⊗ x + n: the blurred image y is the known input; the sharp image x and the blur kernel k are unknown and need to be estimated; n is noise.
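A minimal sketch of this forward model, assuming an illustrative box kernel and Gaussian noise level (neither comes from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.random((64, 64))                   # stand-in for the sharp image x
k = np.ones((5, 5)) / 25.0                 # assumed 5x5 box blur kernel, sums to 1
n = 0.01 * rng.standard_normal((64, 64))   # assumed Gaussian noise

y = convolve2d(x, k, mode="same", boundary="symm") + n   # blurred observation y
```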

  4. Natural image priors. Derivative distributions in natural images are sparse. [Figure: log-probability of the derivative histogram from a natural image, overlaid with parametric models.] Parametric models for the log-probability of a derivative x: Gaussian −x², Laplacian −|x|, and sparser models −|x|^0.5 and −|x|^0.25.
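These models differ only in the exponent on the derivative magnitude. A minimal sketch of the corresponding unnormalized log-probability; using horizontal derivatives only is a simplification:

```python
import numpy as np

def deriv_logprob(img, alpha):
    """Unnormalized log p(img) = -sum |horizontal derivative|^alpha."""
    dx = np.diff(img, axis=1)
    return -np.sum(np.abs(dx) ** alpha)

# alpha = 2 gives the Gaussian model, alpha = 1 the Laplacian,
# alpha = 0.5 or 0.25 the sparse models on this slide.
```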

  5. Sparse priors in image processing
  • Denoising: Simoncelli et al., Roth & Black
  • Inpainting: Sapiro et al., Levin et al.
  • Super resolution: Tappen et al.
  • Transparency: Levin et al.
  • Demosaicing: Tappen et al., Hel-Or et al.
  • Non-blind deconvolution: Levin et al.

  6. Naïve MAP_{x,k} estimation. Find a kernel k and latent image x minimizing λ ‖k ⊗ x − y‖² + Σ_i |∇x_i|^α, where the first term is the convolution constraint and the second is the sparse prior. The sparse prior should favor sharper x explanations.
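A sketch of that cost as code; the weight lam and exponent alpha are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.signal import convolve2d

def mapxk_cost(x, k, y, lam=1e3, alpha=0.5):
    """lam * ||k (*) x - y||^2 + sum |grad x|^alpha."""
    data = np.sum((convolve2d(x, k, mode="same") - y) ** 2)   # convolution constraint
    prior = (np.sum(np.abs(np.diff(x, axis=1)) ** alpha)      # sparse prior on horizontal
             + np.sum(np.abs(np.diff(x, axis=0)) ** alpha))   # and vertical derivatives
    return lam * data + prior
```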

  7. The MAP_{x,k} paradox. P(x = y, k = δ) > P(x = x₀, k = k₀): the blurred image paired with a delta kernel scores higher than the true sharp image paired with the true kernel. Claim 1: Let x₀ be an arbitrarily large image sampled from a sparse prior, and y = k₀ ⊗ x₀. Then the delta explanation is favored.

  8. The MAP_{x,k} failure. [Figure: a sharp image and its blurred version; which one does the prior score higher?]

  9. The MAP_{x,k} failure. [Figure: red windows mark where p(sharp x) > p(blurred x), shown for 15x15, 25x25 and 45x45 windows, using simple derivative filters [-1,1], [-1;1] and FoE filters (Roth & Black).]
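A sketch of the per-window test behind this figure; the window size, the exponent alpha, and using horizontal derivatives only are simplifying assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def sharp_wins_map(x_sharp, kernel, win=15, alpha=0.5):
    """Boolean map: True where the sparse-prior score of the sharp window
    beats the blurred window (the 'red windows' on this slide)."""
    x_blur = convolve2d(x_sharp, kernel, mode="same", boundary="symm")
    def window_score(img):
        d = np.abs(np.diff(img, axis=1)) ** alpha    # |derivative|^alpha per pixel
        return -convolve2d(d, np.ones((win, win)), mode="same")  # per-window sum
    return window_score(x_sharp) > window_score(x_blur)
```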

  10. The MAP_{x,k} failure: intuition. With k = [0.5, 0.5], P(step edge) > P(blurred step edge): the sharp step has the cheaper sum of derivatives, since blur splits its single derivative into two smaller ones, and under a sparse prior splitting raises the cost. But P(impulse) < P(blurred impulse): blurring an impulse halves its total derivative contrast, so the blurred version has the cheaper sum of derivatives.
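A worked numeric check of this intuition; alpha = 0.5 is an illustrative exponent:

```python
import numpy as np

def cost(signal, alpha=0.5):
    """Sparse-prior cost: sum |derivative|^alpha (lower = more probable)."""
    return np.sum(np.abs(np.diff(signal)) ** alpha)

k = np.array([0.5, 0.5])
step    = np.array([0., 0., 0., 1., 1., 1.])
impulse = np.array([0., 0., 1., 0., 0., 0.])

for name, s in [("step", step), ("impulse", impulse)]:
    print(name, cost(s), cost(np.convolve(s, k, mode="same")))
# step:    1.0 vs ~1.41 -> the sharp step is cheaper (more probable)
# impulse: 2.0 vs ~1.41 -> the blurred impulse is cheaper
```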

  11. Blur reduces derivative contrast. For a real image row, P(sharp real image) < P(blurred real image): the blurred version is cheaper. Noise and texture behave as impulses, so the total derivative contrast is reduced by blur.

  12. Why does MAP_{x,k} fail?
  • Too few measurements? No: it fails even with an infinitely large image
  • Wrong prior? No: it fails even for signals sampled from the prior
  • The problem is the choice of estimator

  13. MAP_k estimation. MAP_{x,k} estimates x and k simultaneously: argmax_{x,k} P(x, k | y). MAP_k estimates k alone, marginalizing over x: argmax_k P(k | y), where P(k | y) = ∫ P(x, k | y) dx.

  14. Results in this paper. Let x be an arbitrarily large image sampled from a sparse prior, and y = k ⊗ x. Then:
  Claim 1: the MAP_{x,k} estimator fails; the delta explanation is favored.
  Claim 2: the MAP_k estimator succeeds; P(k | y) is maximized by the true kernel.

  15. Intuition: dimensionality asymmetry. The sharp image x is large (~10^5 unknowns), the kernel k is small (~10^2 unknowns), and the blurred image y provides ~10^5 measurements.
  MAP_{x,k}: estimation unreliable; the number of measurements is always lower than the number of unknowns: #y < #x + #k.
  MAP_k: estimation reliable; many measurements for large images: #y >> #k.

  16. Approximate MAP_k strategies
  • Marginalization over x is challenging to compute (a naive Monte Carlo sketch follows this list)
  • Approximation strategies:
  • Independence assumption in derivatives space: Levin NIPS 06
  • Variational approximation: Miskin and Mackay 00, Fergus et al. SIGGRAPH 06
  • Laplace approximation: Brainard and Freeman 97, Bronstein et al. 05
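For intuition only, a crude Monte Carlo sketch of the marginal ∫ p(y|x,k) p(x) dx on a tiny 1D signal. The signal length, iid Laplace prior, noise level, and candidate kernels are all illustrative assumptions; the estimator's high variance at realistic sizes is exactly why the structured approximations above are used instead:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 6, 0.5
x_true = rng.laplace(size=n)                   # sparse signal from an iid Laplace prior
k_true = np.array([0.5, 0.5])

def conv(x, k):                                # causal 2-tap convolution:
    xs = np.zeros_like(x)                      # y[i] = k[0]*x[i] + k[1]*x[i-1]
    xs[..., 1:] = x[..., :-1]
    return k[0] * x + k[1] * xs

y = conv(x_true, k_true) + sigma * rng.standard_normal(n)

S = rng.laplace(size=(200_000, n))             # Monte Carlo samples x ~ p(x)
for name, k in [("delta", np.array([1., 0.])), ("true k", k_true)]:
    ll = -np.sum((y - conv(S, k)) ** 2, axis=1) / (2 * sigma**2)  # log p(y|x,k)
    log_p_y_k = np.logaddexp.reduce(ll) - np.log(len(S))  # log mean of p(y|x_i,k)
    print(name, log_p_y_k)
```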

  17. Evaluation on 1D signals. [Figure: MAP_{x,k} favors the delta solution; exact MAP_k, MAP_k with the variational approximation (Fergus et al.), and MAP_k with a Gaussian prior all favor the correct solution, despite the wrong prior!]
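With a Gaussian prior the marginalization over x is closed-form, so exact MAP_k scoring is easy to demonstrate in 1D. This is a sketch under assumed sizes, noise level, and kernels: with x ~ N(0, C) and y = T_k x + noise, p(y | k) = N(y; 0, T_k C T_kᵀ + s² I):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, s = 64, 0.01                                 # signal length and noise std (assumed)

def conv_matrix(k, n):
    """n x n Toeplitz matrix implementing causal convolution with k."""
    col = np.zeros(n)
    col[:len(k)] = k
    return toeplitz(col, np.zeros(n))

x = rng.standard_normal(n)                      # x ~ N(0, I): the Gaussian prior, C = I
k_true = np.array([0.25, 0.5, 0.25])
y = conv_matrix(k_true, n) @ x + s * rng.standard_normal(n)

def log_p_y_given_k(k):
    """Closed-form marginal likelihood: y | k ~ N(0, T T^T + s^2 I)."""
    T = conv_matrix(np.asarray(k, dtype=float), n)
    S = T @ T.T + s**2 * np.eye(n)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y))

print("delta :", log_p_y_given_k([1.0]))
print("true k:", log_p_y_given_k(k_true))       # typically the higher score
```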

  18. Ground truth data acquisition 4 images x 8 kernels = 32 test images Data available online: http://www.wisdom.weizmann.ac.il/~levina/

  19. Comparison. [Figure: deconvolution results against ground truth for Fergus et al. SIGGRAPH 06 (MAP_k, variational approx.), MAP_{x,k}, MAP_k with a Gaussian prior, and Shan et al. SIGGRAPH 08 (adjusted MAP_{x,k}).]

  20. Evaluation. [Plot: cumulative histogram of deconvolution successes, bin r = #{ deconvolution error < r }, reported as a percentage of successes; curves for Fergus variational MAP_k, Shan et al. SIGGRAPH 08, MAP_{x,k} with a sparse prior, and MAP_k with a Gaussian prior.]
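The success metric itself is simple to compute; a sketch with made-up placeholder error values (not the paper's numbers):

```python
import numpy as np

# hypothetical per-image deconvolution error ratios (placeholders)
errors = np.array([1.2, 1.5, 2.0, 2.1, 3.5, 4.0, 8.0])

for r in range(2, 11):
    pct = 100.0 * np.sum(errors < r) / len(errors)
    print(f"bin {r}: {pct:.0f}% of test images with error < {r}")
```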

  21. Problem: the uniform blur assumption is unrealistic. [Figure: variation of dot traces at the 4 corners of the frame, showing that real camera-shake blur varies spatially.] Note: opposite conclusion by Fergus et al., 2006.

  22. Summary
  • A good estimator is more important than the correct prior:
  - The MAP_k approach can do deconvolution even with a Gaussian prior
  - The MAP_{x,k} approach fails even with a sparse prior
  • The spatially uniform blur assumption is invalid
  • Comparing blind deconvolution algorithms on the same dataset, Fergus et al. 06 significantly outperforms all alternatives
  • Ground truth data available online
