
Progressively Refined Reflectance Fields from Natural Illumination


Presentation Transcript


  1. Progressively Refined Reflectance Fields from Natural Illumination Wojciech Matusik Matt Loper Hanspeter Pfister

  2. Motivation • Complex natural scenes are difficult to acquire • Acquisition needs to be easy and robust • Image-based lighting offers high realism • We would like to relight image-based models at any scale (from small objects to cities)

  3. Motivation • Image-based Relighting • no scene geometry – just images • no assumptions about scene reflectance properties

  4. Previous Work • Forward Approaches • Georghiades 99, Debevec 2000, Malzbender 01, Masselus 02, Peers 03 • Inverse Approaches • Zongker 99, Chuang 00, Wexler 02 • Pre-computed Light Transport • Sloan 02, Ng 03

  5. Reflectance Field • 8D function [Debevec 2000]: R(ui, vi, θi, φi; ur, vr, θr, φr), where (ui, vi) and (θi, φi) are the position and direction of the incident ray, and (ur, vr) and (θr, φr) those of the reflected ray

  6. Reflectance (Weighting) Function • Assumes incident illumination originates at infinity, parameterized by direction (θi, φi) • x, y are image space coordinates

  7. Light Transport Model • Light flow in the scene can be modeled as a multiple-input / multiple-output linear system: B = T L, where L is the incident light unrolled to a vector, T is the scene light transport matrix, and B is the observed image unrolled to a vector

  8. Light Transport Model • Solve independently for each output pixel as a multiple-input / single-output linear system: bi = Ti · L, where Ti is the scene light transport vector for pixel i, L is the incident light, and bi is the observed pixel value
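The linear model on slides 7 and 8 can be sketched in a few lines of numpy. The sizes below are illustrative assumptions, not from the talk: the incident illumination is unrolled to a 16-vector and the observed image to a 4-vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_light, n_pixels = 16, 4          # toy sizes, assumptions for illustration

T = rng.random((n_pixels, n_light))  # scene light transport matrix (unknown in practice)
L = rng.random(n_light)              # incident light, unrolled to a vector

# Multiple-input / multiple-output system: the whole observed image at once.
B = T @ L

# Per-pixel view (slide 8): each output pixel bi is an independent
# single-output system, the dot product of transport vector Ti with L.
b0 = T[0] @ L
```

Estimating T row by row from observed (L, b) pairs is exactly the inverse problem the following slides address.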

  9. Representation • Approximate Ti as a sum of 2D rectangular kernels Rk,i in the (θi, φi) domain, each with weight wk,i

  10. Inverse Estimation • Given input images Lj we record observed pixel values bij: bij = Ti · Lj • Given the matrix L of input illuminations and the vector bi of observations, the goal is to estimate Ti: • Positions and sizes of the rectangular kernels Rk,i • Weights wk,i

  11. Estimating Kernel Weights • Assume that we know sizes and positions of the kernels Rk,i and would like to compute their weights • Efficient solution using quadratic programming
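With kernel positions and sizes fixed, the weight solve is a small linear problem per output pixel: each input illumination contributes one equation whose coefficients are the sums of that illumination over the kernels. The talk uses quadratic programming; the sketch below substitutes unconstrained least squares and synthetic data, so all names (`kernel_row`, the rectangle list, the weights) are illustrative assumptions, not the authors' solver.

```python
import numpy as np

def kernel_row(light_img, rects):
    # For one input illumination, integrate it over each rectangular
    # kernel R_k (rects are (y0, y1, x0, x1) in the light-image domain).
    return np.array([light_img[y0:y1, x0:x1].sum() for (y0, y1, x0, x1) in rects])

rng = np.random.default_rng(1)
H = W = 8
rects = [(0, 4, 0, 4), (0, 4, 4, 8), (4, 8, 0, 8)]  # assumed fixed kernels
true_w = np.array([0.9, 0.1, 0.4])                   # synthetic ground truth

lights = [rng.random((H, W)) for _ in range(10)]     # input illuminations L_j
A = np.vstack([kernel_row(Lj, rects) for Lj in lights])
b = A @ true_w                                       # simulated observations b_ij

# A QP solver would also enforce constraints (e.g. nonnegative weights);
# with none, the problem reduces to linear least squares.
w, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noise-free synthetic observations the least-squares solve recovers the weights exactly.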

  12. Estimating Kernel Positions &amp; Sizes • Hierarchical kd-tree subdivision of the input image domain into kernels • At each level choose the subdivision that reduces the error the most • Kernels are non-overlapping
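The greedy subdivision on slide 12 can be sketched as follows: repeatedly split the rectangle whose best axis-aligned split reduces the squared approximation error the most, keeping the kernels non-overlapping. This is a toy stand-in that approximates a known 2D target by piecewise-constant rectangles; the paper's actual criterion operates on the relighting error, and all function names here are hypothetical.

```python
import numpy as np

def sse(patch):
    # Squared error of approximating a patch by a single constant (its mean).
    return ((patch - patch.mean()) ** 2).sum()

def best_split(img, rect):
    # Try every axis-aligned split of rect; return (error_reduction, children).
    y0, y1, x0, x1 = rect
    base = sse(img[y0:y1, x0:x1])
    best = (0.0, None)
    for y in range(y0 + 1, y1):        # horizontal splits
        gain = base - sse(img[y0:y, x0:x1]) - sse(img[y:y1, x0:x1])
        if gain > best[0]:
            best = (gain, [(y0, y, x0, x1), (y, y1, x0, x1)])
    for x in range(x0 + 1, x1):        # vertical splits
        gain = base - sse(img[y0:y1, x0:x]) - sse(img[y0:y1, x:x1])
        if gain > best[0]:
            best = (gain, [(y0, y1, x0, x), (y0, y1, x, x1)])
    return best

def subdivide(img, n_kernels):
    # Greedy hierarchical subdivision: at each step split the rectangle
    # whose best split reduces the approximation error the most.
    rects = [(0, img.shape[0], 0, img.shape[1])]
    while len(rects) < n_kernels:
        gains = [best_split(img, r) for r in rects]
        i = int(np.argmax([g for g, _ in gains]))
        if gains[i][1] is None:        # no split improves the error
            break
        rects[i:i + 1] = gains[i][1]   # replace parent with its two children
    return rects
```

Because children exactly tile their parent, the resulting kernels partition the domain, matching the non-overlap property on the slide.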

  13. Kernel Subdivisions • Figure: kernel subdivision patterns for specular, refractive, hard shadow, subsurface scattering, and glossy scene elements at increasing numbers of subdivisions

  14. Spatial Correction • The kernel search strategy does not always work • Solution, for each output pixel: • try kernel positions and sizes of the neighboring output pixels • try shifted versions of the current kernels • solve for new weights • keep the new kernels if the error decreases

  15. Integration with Incident Illumination • Relighting is very efficient • For each output pixel i, sum the weighted integrals of the new illumination over the kernels Rk,i • The incident illumination is stored as a summed-area table, so each rectangle integral is evaluated in constant time
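The constant-time evaluation on slide 15 rests on the summed-area table trick: any rectangle sum becomes four table lookups, so relighting a pixel costs O(number of kernels) regardless of kernel size. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def summed_area_table(img):
    # S[y, x] = sum of img[:y, :x]; zero-padded so rectangle sums
    # need no edge-case handling.
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(0).cumsum(1)
    return S

def rect_sum(S, y0, y1, x0, x1):
    # Sum of img[y0:y1, x0:x1] in O(1) via four table lookups.
    return S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]

def relight_pixel(S, rects, weights):
    # b_i = sum_k w_{k,i} * (integral of new illumination over R_{k,i})
    return sum(w * rect_sum(S, *r) for w, r in zip(weights, rects))
```

The table is built once per new illumination, after which every output pixel reuses it.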

  16. Data Acquisition • We have built two acquisition systems • Indoor scenes / small objects • Outdoor scenes (city)

  17. Acquisition System I

  18. Example Input Images

  19. Results • Refractive and specular elements (prediction vs. actual)

  20. Results – New Illumination

  21. Results - White Vertical Bar (actual vs. prediction)

  22. Results • Diffuse elements, shadows (estimate vs. actual)

  23. Results - White Vertical Bar

  24. Results • Subsurface scattering (actual vs. estimate)

  25. Results - White Vertical Bar

  26. Results • Glossy elements and interreflections (actual vs. estimate)

  27. Results - White Vertical Bar

  28. Results • One shifted version of the same image used as input illumination

  29. Acquisition System II • Two synchronized cameras (Camera #1, Camera #2)

  30. Example Observed Images

  31. Results – Relighting The City • White vertical bar

  32. Lessons • Inverse approaches benefit from good kernel search strategies &amp; more computation power • Inverse approaches are more efficient than forward approaches • Challenges: • the scene needs to be static • a varied set of input illuminations is required • the illumination is not at infinity

  33. Conclusions • Advantages of our algorithm: • Natural Illumination Input • All-frequency Robustness • Compact Representation • Progressive Refinement • Fast Evaluation • Simplicity

  34. Future Work • New acquisition systems • object and camera are fixed w.r.t. each other and they rotate in a single, natural environment • Combining representations from different viewpoints and proxy geometry • Coarse-to-fine estimation in the observed image space • start with low resolution observed images & search exhaustively for the best kernels • propagate the kernels to higher resolution images

  35. Acknowledgements • Jan Kautz • Barb Cutler • Jennifer Roderick Pfister • EGSR Reviewers
