
Optimizing and Learning for Super-resolution


Presentation Transcript


  1. Optimizing and Learning for Super-resolution Lyndsey C. Pickup, Stephen J. Roberts & Andrew Zisserman Robotics Research Group, University of Oxford

  2. The Super-resolution Problem Given a number of low-resolution images differing in: • geometric transformations • lighting (photometric) transformations • camera blur (point-spread function) • image quantization and noise. Estimate a high-resolution image:

  3. Low-resolution image 1

  4. Low-resolution image 2

  5. Low-resolution image 3

  6. Low-resolution image 4

  7. Low-resolution image 5

  8. Low-resolution image 6

  9. Low-resolution image 7

  10. Low-resolution image 8

  11. Low-resolution image 9

  12. Low-resolution image 10

  13. Super-Resolution Image

  14. Generative Model (diagram): the high-resolution image x is mapped through W1, W2, W3, W4 (registrations, lighting and blur) to give the low-resolution images y1, y2, y3, y4.

  15. Generative Model • We have: a set of low-resolution input images, y. • We don’t have: the geometric registrations, the photometric registrations, or the point-spread function.

  16. Maximum a Posteriori (MAP) Solution (diagram: x mapped through W1–W4 to y1–y4) • Standard method: • Compute registrations from the low-res images. • Solve for the SR image, x, using gradient descent. [Irani & Peleg ‘90, Capel ’01, Baker & Kanade ’02, Borman ‘04]
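
For reference, the standard MAP estimate this slide refers to can be written, in notation consistent with the slides (with the fixed registrations, lighting and blur collected in the W matrices), as

    \hat{x}_{\text{MAP}} = \arg\max_x \; p(x \mid \mathbf{y}, W)
                         = \arg\max_x \; p(\mathbf{y} \mid x, W)\, p(x),

which is then maximized over x by gradient descent once the W have been fixed.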

  17. What’s new (diagram: x mapped through W1–W4 to y1–y4) • We solve for the registrations and the SR image jointly: given the low-res images, y, we solve for the SR image x and the mappings, W, simultaneously. • We also find appropriate values for parameters in the prior term at the same time. • Hardie et al. ’97: adjust registration while optimizing the super-resolution estimate. • Exhaustive search limits them to translation only. • A simple smoothness prior softens image edges.
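
Written out, one way to express the joint estimate described here (omitting any prior over the registrations, which this sketch treats as flat) is

    \{\hat{x}, \hat{W}\} = \arg\max_{x,\, W} \; p(x, W \mid \mathbf{y})
                         = \arg\max_{x,\, W} \; p(\mathbf{y} \mid x, W)\, p(x),

so the registration and lighting parameters inside W are refined against the same objective as the super-resolution image itself.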

  18. Overview of rest of talk • Simultaneous Approach • Model details • Initialisation technique • Optimization loop • Learning values for the prior’s parameters • Results on real images

  19. Maximum a Posteriori (MAP) Solution: the generative model (diagram: x mapped through W1–W4 to y1–y4). Take the image x. Warp, with parameters Φ. Blur by the point-spread function. Decimate by the zoom factor. Corrupt with additive Gaussian noise to give each low-resolution image y.
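
The four steps listed here correspond to the usual linear-Gaussian forward model; one formulation consistent with the slide, for the k-th low-resolution image, is

    \mathbf{y}_k = W_k(\Phi_k)\, \mathbf{x} + \boldsymbol{\epsilon}_k, \qquad \boldsymbol{\epsilon}_k \sim \mathcal{N}(\mathbf{0}, \sigma^2 I),

where W_k composes the warp with parameters Φ_k, the blur by the point-spread function, and the decimation by the zoom factor. The affine lighting model mentioned later in the talk would add a per-image multiplicative gain and additive offset to this expression.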

  20. Details of Huber Prior • The Huber function ρ(z, α) is quadratic in the middle and linear in the tails. • The corresponding probability distribution p(z | α, ν) is like a heavy-tailed Gaussian (plot: red for large α, blue for small α). • This is applied to image gradients in the SR image estimate.
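
For reference, one common parameterization of the Huber function matching this description (quadratic within ±α, linear outside, with value and slope agreeing at the join) is

    \rho(z, \alpha) =
    \begin{cases}
      z^2, & |z| \le \alpha \\
      2\alpha|z| - \alpha^2, & |z| > \alpha,
    \end{cases}

and the associated heavy-tailed prior is applied to the image gradients g_i of the SR estimate, e.g. p(x \mid \alpha, \nu) \propto \exp\big(-\tfrac{1}{\nu} \sum_i \rho(g_i, \alpha)\big); the exact normalization and the precise role of the strength parameter ν follow the paper.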

  21. Details of Huber Prior • Advantages: simple, edge-preserving, leads to a convex form for the MAP equations. • Solutions as α and ν vary (figure panels): ground truth; α=0.05, ν=0.05; α=0.01, ν=0.01; α=0.01, ν=0.005; α=0.1, ν=0.4; the results range from sharper edges to too much or too little smoothing.

  22. Advantages of Simultaneous Approach • Learn from lessons of Bundle Adjustment: improve results by optimizing the scene estimate and the registration together. • Registration can be guided by the super-resolution model, not by errors measured between warped, noisy low-resolution images. • Use a non-Gaussian prior which helps to preserve edges in the super-resolution image.

  23. Overview of Simultaneous Approach • Start from a feature-based RANSAC-like registration between low-res frames. • Select blur kernel, then use average image method to initialise registrations and SR image. • Iterative loop: • Update Prior Values • Update SR estimate • Update registration estimate

  24. Average Image Initialisation • Use the average image as an estimate of the super-resolution image (see paper). • Minimize the error between the average image and the low-resolution image set. • Use an early-stopped iterative ML estimate of the SR image to sharpen up this initial estimate (figure: ML-sharpened estimate).
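
A minimal sketch of this initialisation, assuming hypothetical helpers warp_to_hr_frame (map one low-resolution frame into the high-resolution frame under its current registration), forward_model (warp + blur + decimate) and forward_model_adjoint (its transpose), might look like:

    import numpy as np

    def average_image_init(low_res, registrations, hr_shape,
                           n_ml_steps=10, step=0.1):
        """Average-image initialisation followed by a few early-stopped
        ML gradient steps to sharpen the estimate (illustrative sketch)."""
        # Average image: warp each low-res frame to the high-res frame and average.
        acc = np.zeros(hr_shape)
        weight = np.zeros(hr_shape)
        for y, reg in zip(low_res, registrations):
            warped, mask = warp_to_hr_frame(y, reg, hr_shape)  # hypothetical helper
            acc += warped
            weight += mask
        x = acc / np.maximum(weight, 1e-8)

        # Early-stopped ML sharpening: minimise the data error only,
        # stopping after a few iterations rather than running to convergence.
        for _ in range(n_ml_steps):
            grad = np.zeros(hr_shape)
            for y, reg in zip(low_res, registrations):
                residual = forward_model(x, reg) - y           # hypothetical helper
                grad += forward_model_adjoint(residual, reg, hr_shape)
            x -= step * grad
        return x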

  25. Optimization Loop • Update prior’s parameter values (next section) • Update estimate of SR image • Update estimate of registration and lighting values, which parameterize W • Repeat till converged.
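
A minimal sketch of this loop, with the three updates left as hypothetical stand-in functions for the steps on the slide:

    import numpy as np

    def joint_map_loop(low_res, x, registrations, prior_params,
                       max_iters=50, tol=1e-4):
        """Alternate the three updates until the SR estimate stops changing."""
        prev_x = x.copy()
        for _ in range(max_iters):
            prior_params  = update_prior_parameters(low_res, x, registrations, prior_params)
            x             = update_sr_estimate(low_res, x, registrations, prior_params)
            registrations = update_registration(low_res, x, registrations, prior_params)
            # Converged when the relative change in the SR estimate is small.
            if np.linalg.norm(x - prev_x) <= tol * np.linalg.norm(prev_x):
                break
            prev_x = x.copy()
        return x, registrations, prior_params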

  26. Joint MAP Results (figure: registration-fixed MAP compared with joint MAP, shown for decreasing prior strength).

  27. Learning Prior Parameters α, ν • Split the low-res images into two sets: use the first set to obtain an SR image, and find the error on the validation set.

  28. Learning Prior Parameters α, ν • Split the low-res images into two sets: use the first set to obtain an SR image, and find the error on the validation set. • But what if one of the validation images is mis-registered?

  29. Learning Prior Parameters α, ν • Instead, we select pixels from across all images, choosing differently at each iteration. • We evaluate an SR estimate using the unmarked pixels, then use the forward model to compare the estimate to the held-out (red-marked) pixels.

  30. Learning Prior Parameters α, ν • Instead, we select pixels from across all images, choosing differently at each iteration. • We evaluate an SR estimate using the unmarked pixels, then use the forward model to compare the estimate to the held-out (red-marked) pixels.

  31. Learning Prior Parameters α, ν • To update the prior parameters: • Re-select a cross-validation pixel set. • Run the super-resolution image MAP solver for a small number of iterations, starting from the current SR estimate. • Predict the low-resolution pixels of the validation set, and measure error. • Use gradient descent to minimise the error with respect to the prior parameters.
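
A sketch of one such update, assuming hypothetical helpers map_solver (a few MAP iterations over the non-held-out pixels, started from the current SR estimate) and predict_low_res (the forward model for one low-resolution frame), with the gradient over (α, ν) taken by finite differences:

    import numpy as np

    def cross_validation_prior_update(low_res, x, registrations, alpha, nu,
                                      holdout_frac=0.1, step=0.1, eps=1e-3,
                                      rng=None):
        """One cross-validation update of the Huber prior parameters (sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        # Re-select the cross-validation pixel set across all low-res images.
        holdout = [rng.random(y.shape) < holdout_frac for y in low_res]

        def validation_error(a, v):
            # A few MAP iterations on the non-held-out pixels (hypothetical helper).
            x_new = map_solver(low_res, holdout, x, registrations, a, v, n_iters=5)
            # Predict the held-out pixels through the forward model, measure error.
            err = 0.0
            for y, reg, m in zip(low_res, registrations, holdout):
                err += np.sum((predict_low_res(x_new, reg)[m] - y[m]) ** 2)
            return err

        # Gradient descent on the validation error w.r.t. the prior parameters.
        e0 = validation_error(alpha, nu)
        g_alpha = (validation_error(alpha + eps, nu) - e0) / eps
        g_nu = (validation_error(alpha, nu + eps) - e0) / eps
        return alpha - step * g_alpha, nu - step * g_nu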

  32. Results: Eye Chart (figure: the MAP version, fixing registrations then super-resolving, compared with the joint MAP version with adaptation of the prior’s parameter values).

  33. Results: Groundhog Day

  34. Results: Groundhog Day • The blur estimate can still be altered to change the SR result. • More ringing and artefacts appear in the regular MAP version than in the simultaneous version. (Figure panels: regular MAP vs. simultaneous, at blur radius = 1, 1.4 and 1.8.)

  35. Lola Rennt

  36. Real Data: Lola Rennt

  37. Real Data: Lola Rennt

  38. Real Data: Lola Rennt

  39. Real Data: Lola Rennt

  40. Conclusions • Joint MAP solution: better results by optimizing the SR image and registration parameters simultaneously. • Learning prior values: preserve image edges without having to estimate image statistics in advance. • DVDs: automatically zoom in on regions, with registrations up to a projective transform and an affine lighting model. • Further work: marginalize over the registration – see NIPS 2006.
