
Computational Photography Course: Techniques and Presentations | Fall 2005

Explore computational photography concepts, assignments, and project updates. Learn about light fields, ray representation, and image interpolation.


Presentation Transcript


  1. Northeastern University, Fall 2005 • CSG242: Computational Photography • Ramesh Raskar, Mitsubishi Electric Research Labs • Northeastern University, Nov 20, 2005 • Course webpage: http://www.merl.com/people/raskar/photo/course/

  2. Plan for Today • Exam review (avg 84.5, max 89, min 76) • Light fields • Assignment 4 • Tools: gradient domain and graph cuts • Paper reading • 2 per student, 15 mins each; reading list on the web • Starts Nov 10th • Course feedback

  3. Reading Paper Presentations • 15 minutes: 10-minute presentation, 5 minutes for discussion • Use PowerPoint slides; bring your own laptop or put the slides on a USB drive (print the slides to be safe) • Format: • Motivation • Approach (new contributions) • Results • Your own view of what is useful and what the limitations are • Your ideas for improvements to the technique or new applications (at least 2 new ideas) • It is difficult to explain all the technical details in 15 minutes, so focus on the key concepts and give an intuition about what is new. Ignore second-order details in the paper; instead, describe them in the context of the results. Keep the description of the approach simple; a rule of thumb: no more than 3 equations in your presentation. • Most of the authors have PowerPoint slides on their websites, so feel free to use those slides and modify them. But be careful: do not simply present all their slides in sequence. Focus on only the key concepts and add your own views. If the slides are not available on the author's website, copy-paste images from the PDF to create your slides. Sometimes you can email the author, and s/he will send you the slides.

  4. Tentative Schedule • Nov 30th • Project update • Special lecture: Video Special Effects • Dec 2nd (Friday) • HW 4 due at midnight • Dec 7th • In-class exam (instead of HW 5) • Special lecture: Mok3 • Dec 15th (exam week) • Final project presentations

  5. The Matrix: A Real Revolution • Warner Brothers 1999 A Neo-Realism!

  6. Assignment 4: Playing with Epsilon Views • See course webpage for details • Resynthesizing images from epsilon views (rebinning of rays) • http://groups.csail.mit.edu/graphics/pubs/siggraph2000_drlf.pdf • In this assignment, you will use multiple pictures taken from slightly varying positions to create a large synthetic aperture and multiple-center-of-projection (MCOP) images. You will create (i) an image with a programmable plane of focus and (ii) a see-through effect. • (A) Available data set: http://www.eecis.udel.edu/~yu/Teaching/toyLF.zip (use only 16 images along the horizontal translation) • (B) Your own data set: take at least 12-16 pictures by translating a camera (push broom). The foreground scene is a flat striped paper; the background scene is a flat book cover or painting. Choose objects with vibrant, bright, saturated colors. Instead of translating the camera, you may find it easier to translate the scene. Put the digital camera in remote-capture time-lapse interval mode (5-second interval). • Effect 1: Programmable focus: rebin rays to focus on the front plane; rebin rays to focus on the back plane; rebin rays to focus on the back plane while rejecting the front plane. • Effect 2: MCOP images: rebin rays to create a single image with multiple views.
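The rebinning behind Effect 1 amounts to a shift-and-add over the image stack. A minimal sketch, assuming a purely horizontal camera translation and a user-supplied per-camera pixel disparity for the desired focal plane; the `synthetic_aperture` helper and its arguments are illustrative, not part of the assignment handout:

```python
import numpy as np

def synthetic_aperture(images, disparity):
    """Average camera-translated images after shifting each one by its
    per-camera disparity: points on the chosen focal plane align and stay
    sharp, while everything off that plane blurs out (simple shift-and-add).
    Illustrative helper; uses wrap-around shifts, so crop borders in practice."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for i, img in enumerate(images):
        shift = int(round(i * disparity))   # horizontal shift for camera i
        acc += np.roll(img, shift, axis=1)  # align this view to the focal plane
    return acc / len(images)
```

Choosing the disparity selects the focal plane: a disparity of 0 focuses at infinity, while larger magnitudes focus on nearer planes.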

  7. Light Fields, Lumigraph, and Image-based Rendering

  8. Ray Origin: Pinhole Camera • A camera captures a set of rays • A pinhole camera captures a set of rays passing through a common 3D point

  9. Camera Arrays and Light Fields • Digital cameras are cheap • Low image quality • No sophisticated aperture (depth of field) effects • Build up an array of cameras • It captures an array of images • That is, an array of sets of rays

  10. Why is it useful? • If we want to render a new image • We can look up each new ray in the light field ray database • Interpolate each ray (we will see how) • No 3D geometry is needed

  11. Key Problem • How do we represent a ray? • What coordinate system should we use? • How do we represent a line in 3D space? • Two points on the line (3D + 3D = 6D). Problem? • A point and a direction (3D + 2D = 5D). Problem? • Any better parameterization?

  12. Two Plane Parameterization • Each ray is parameterized by its two intersection points with two fixed planes. • For simplicity, assume the two planes are z = 0 (st plane) and z = 1 (uv plane) • Alternatively, we can view (s, t) as the camera index and (u, v) as the pixel index
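The two-plane parameterization can be sketched in a few lines: intersect a ray with the z = 0 (st) and z = 1 (uv) planes and keep the four in-plane coordinates. The function name is hypothetical:

```python
import numpy as np

def ray_to_st_uv(origin, direction):
    """Parameterize a ray by its intersections with the planes z = 0
    (giving (s, t)) and z = 1 (giving (u, v)).  Assumes the ray is not
    parallel to the planes, i.e. direction[2] != 0."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    t0 = -o[2] / d[2]           # ray parameter where it crosses z = 0
    t1 = (1.0 - o[2]) / d[2]    # ray parameter where it crosses z = 1
    s, t = (o + t0 * d)[:2]
    u, v = (o + t1 * d)[:2]
    return s, t, u, v
```

Four numbers per ray (4D) instead of 5D or 6D, at the cost of excluding rays parallel to the two planes.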

  13. Ray Representation • For the moment, let's consider just a 2D slice of this 4D ray space • Rendering new pictures = interpolating rays • How to represent rays? 6D? 5D? 4D?

  14. Light Field Rendering • For each desired ray (s, t, u, v): • Compute its intersections with the (u, v) and (s, t) planes • Blend the closest rays • What does "closest" mean?

  15. Light Field Rendering • Linear interpolation between the two nearest samples • What happens in higher dimensions? Bilinear, trilinear, quadralinear interpolation
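Quadralinear interpolation blends the 2^4 = 16 light field samples surrounding a fractional (s, t, u, v) with separable linear weights along each axis. A small illustrative sketch, assuming the query coordinates lie strictly inside the sampled grid:

```python
import numpy as np

def quadralinear(L, s, t, u, v):
    """Blend the 16 nearest rays of a 4D light field L[s, t, u, v]
    with separable linear weights along each of the four axes."""
    lo = [int(np.floor(c)) for c in (s, t, u, v)]      # lower grid corner
    fr = [c - l for c, l in zip((s, t, u, v), lo)]     # fractional offsets
    out = 0.0
    for corner in range(16):                            # the 2^4 neighbours
        ofs = [(corner >> k) & 1 for k in range(4)]
        w = 1.0
        for k in range(4):
            w *= fr[k] if ofs[k] else 1.0 - fr[k]       # separable weight
        out += w * L[tuple(l + o for l, o in zip(lo, ofs))]
    return out
```

Because the weights are separable, this is just linear interpolation applied once per dimension, which is exactly why it is cheap but, as the next slide shows, prone to aliasing.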

  16. Quadralinear Interpolation • Serious aliasing artifacts

  17. Ray Structure • All the rays in a light field passing through a single 3D geometry point trace out a line • 2D light field and its 2D EPI (epipolar plane image)
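One way to visualize this ray structure is to build an epipolar-plane image (EPI) by stacking the same scanline from every camera of a horizontal-translation sequence; a scene point then appears as a slanted line whose slope encodes its disparity (and hence depth). A minimal sketch with an illustrative function name:

```python
import numpy as np

def epi_slice(image_stack, row):
    """Stack one scanline (the same row) from each camera of a
    horizontal camera-translation sequence into a (camera, pixel)
    image: the 2D EPI of that scanline."""
    return np.stack([img[row] for img in image_stack], axis=0)
```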

  18. Why Aliasing? • We study aliasing in the spatial domain • Next class, we will study the frequency domain (take a quick review of the Fourier transform if you can)

  19. Better Reconstruction Methods • Assume some geometric proxy • Dynamically Reparameterized Light Fields

  20. Focal Surface Parameterization

  21. Focal Surface Parameterization (2) • Intersect each new ray with the focal plane • Trace the ray back into the data cameras • Interpolate the corresponding rays • But this needs a ray-plane intersection for every ray • Can we avoid that?

  22. Using the Focal Plane • Relative parameterization is hardware friendly • [s', t'] corresponds to a particular texture • [u', v'] corresponds to the texture coordinate • The focal plane can be encoded as a disparity across the light field
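Under this relative parameterization, looking up the ray for data camera (s, t) reduces to an index shift by the focal plane's disparity times the camera offset, with no ray-plane intersection. A hedged sketch over a discrete light field array `lf[s, t, u, v]`; the names and nearest-integer rounding are illustrative:

```python
import numpy as np

def focal_plane_lookup(lf, s, t, u, v, disparity):
    """Relative two-plane lookup: the pixel in data camera (s, t) that
    sees the same focal-plane point as pixel (u, v) of the reference
    camera (0, 0) is offset by disparity * (camera offset).  No
    ray-plane intersection is needed at render time."""
    du = int(round(disparity * s))
    dv = int(round(disparity * t))
    return lf[s, t, u + du, v + dv]
```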

  23. Results • Quadralinear interpolation vs. focal plane at the monitor

  24. Aperture Filtering • Simple and easy to implement • But it causes aliasing (as we will see later in the frequency analysis) • Can we blend more than 4 neighboring rays? 8? 16? 32? Or more? • Aperture filtering

  25. Small aperture size

  26. Large aperture size

  27. Using very large size aperture

  28. Variable focus

  29. Close focal surface

  30. Distant focal surface

  31. Synthetic aperture photography

  32. Synthetic aperture photography

  33. Synthetic aperture photography

  34. Synthetic aperture photography

  35. Synthetic aperture photography

  36. Synthetic aperture photography

  37. Synthetic aperture videography

  38. Long-range synthetic aperture photography • Width of aperture: 6' • Number of cameras: 45 • Spacing between cameras: 5" • Camera field of view: 4.5°

  39. The scene • Distance to occluder: 110' • Distance to targets: 125' • Field of view at target: 10'

  40. People behind bushes • close to sunset, so input images were noisy • noise decreases as sqrt ( number of cameras ) → camera arrays facilitate low-light imaging
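The sqrt(N) claim is easy to check numerically: averaging N independent, identically noisy frames of a constant scene reduces the noise standard deviation by about 1/sqrt(N). A small simulation with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_after_averaging(n_cameras, sigma=1.0, n_pixels=100_000):
    """Simulate n_cameras exposures of a constant scene corrupted by
    independent Gaussian noise, average them, and measure the residual
    noise.  Expected result: roughly sigma / sqrt(n_cameras)."""
    frames = rng.normal(0.0, sigma, size=(n_cameras, n_pixels))
    return frames.mean(axis=0).std()
```

With 45 cameras the noise drops by a factor of about sqrt(45) ≈ 6.7, which is why the array helps so much near sunset.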

  41. Effective depth of field • 35mm-format camera lens with equivalent field of view: 460mm • Typical depth of field for an f/4 460mm lens: 10' (allowing a 1-pixel circle of confusion in a 640 x 480 pixel image) • Effective depth of field of our array: 1'
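The 10' figure is consistent with the standard thin-lens approximation DOF ~ 2*N*c*s^2/f^2 (valid when the subject is well inside the hyperfocal distance). The sketch below assumes a 36 mm-wide full-frame sensor for the 1-pixel circle of confusion; that sensor width is an assumption, not stated on the slide:

```python
def depth_of_field_mm(f_number, coc_mm, subject_mm, focal_mm):
    """Total depth of field, thin-lens near-field approximation:
    DOF ~ 2 * N * c * s^2 / f^2 (subject distance << hyperfocal)."""
    return 2.0 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# Slide numbers: f/4, 460 mm lens, subject at 125 ft, 1-pixel circle of
# confusion on an assumed 36 mm-wide sensor sampled at 640 pixels.
coc = 36.0 / 640                                    # ~0.056 mm per pixel
dof_ft = depth_of_field_mm(4, coc, 125 * 304.8, 460) / 304.8
# dof_ft comes out close to 10 feet, matching the slide's figure
```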

  42. Synthetic aperture photography using an array of mirrors • 11-megapixel camera (4064 x 2704 pixels) • 18 x 12 inch effective aperture, 9 feet to scene • 22 mirrors, tilted inwards → 22 views, each 750 x 500 pixels

  43. Perspective? Or Not? It’s pretty cool, dude!

  44. Multiperspective Camera?

  45. Assignment 4: Playing with Epsilon Views (repeat of slide 6; see above for full details)

  46. Computational Illumination • Presence or Absence • Flash/No-flash • Light position • Multi-flash for depth edges • Programmable dome (image re-lighting and matting) • Light color/wavelength • Spatial Modulation • Synthetic Aperture Illumination • Temporal Modulation • TV remote, Motion Tracking, Sony ID-cam, RFIG • Natural lighting condition • Day/Night Fusion

  47. A Night-Time Scene: Objects are Difficult to Understand due to Lack of Context • Dark buildings • Reflections on buildings • Unknown shapes
