CS395/495, Spring 2004. IBMR: Image Based Modeling and Rendering. Introduction. Jack Tumblin, jet@cs.northwestern.edu
Admin: How this course works • Refer to class website (soon): http://www.cs.northwestern.edu/~jet/Teach/2004_3spr_IBMR/IBMRsyllabus2004.htm • Tasks: reading, lectures, class participation, projects • Evaluation: • Progressive programming project • Take-home midterm • In-class project demo; no final exam
GOAL: First-Class Primitive • Want images as ‘first-class’ primitives • Useful as BOTH input and output • Convert to/from traditional scene descriptions • Want to mix real & synthetic scenes freely • Want to extend photography • Easily capture scene: shape, movement, surface/BRDF, lighting, … • Modify & render the captured scene data • --BUT-- • Images hold only PARTIAL scene information. “You can’t always get what you want” (Mick Jagger, 1968)
Back To Basics: Scene & Image • Light + 3D scene: illumination, shape, movement, surface BRDF, … • 2D image: collection of rays through a point; each ray through the image plane I(x,y) is indexed by position (x,y) and angle (θ,φ)
Trad. Computer Graphics • Light + 3D scene (illumination, shape, movement, surface BRDF, …) → 2D image I(x,y): collection of rays (position (x,y), angle (θ,φ)) through a point • Reduced, incomplete information
Trad. Computer Vision • 2D image I(x,y): collection of rays (position (x,y), angle (θ,φ)) through a point → light + 3D scene (illumination, shape, movement, surface BRDF, …) • !TOUGH! ‘ILL-POSED’: needs many simplifications, external knowledge, …
IBMR Goal: Bidirectional Rendering • Both forward and ‘inverse’ rendering! • Traditional computer graphics: ‘3D scene’ description (camera pose, camera view geometry, scene illumination, object shape and position, surface reflectance, transparency, …) → 2D display image(s) • IBMR: 2D display image(s) → ‘optical’ description of the scene (?! new research ?)
OLDEST IBR: Shadow Maps (1984) Fast Shadows from Z-buffer hardware: 1) Make the “Shadow Map”: • Render image seen from light source, BUT • Keep ONLY the Z-buffer values (depth) 2) Render Scene from Eyepoint: • Pixel + Z depth gives 3D position of surface; • Project 3D position into Shadow map image • If Shadow Map depth < 3D depth, SHADOW!
Early IBR: QuickTime VR (Chen, Williams ’93) 1) Four Planar Images → 1 Cylindrical Panorama: re-sampling required! Planar pixels: equal distance on the x,y image plane (tan⁻¹ in angle). Cylinder pixels: horizontal: equal angle on the cylinder (θ); vertical: equal distance in y (tan⁻¹)
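The re-sampling step can be sketched directly from those two pixel spacings: each panorama column is an equal-angle step θ, which lands at planar coordinate f·tan(θ), and the vertical coordinate stretches by 1/cos(θ). A nearest-neighbour NumPy sketch (single image, no blending; the function name and `f` convention are assumptions):

```python
import numpy as np

def planar_to_cylindrical(img, f):
    """Resample a planar (pinhole) image onto a cylinder of radius f.

    img: (H, W) or (H, W, C) array; f: focal length in pixels.
    Sketch only: nearest-neighbour lookup, no interpolation."""
    H, W = img.shape[:2]
    out = np.zeros_like(img)
    cx, cy = (W - 1) / 2.0, (H - 1) / 2.0
    for v in range(H):
        for u in range(W):
            theta = (u - cx) / f            # equal-angle horizontal step
            h = (v - cy) / f                # equal-height vertical step
            x = np.tan(theta) * f + cx      # back to planar coordinates
            y = h / np.cos(theta) * f + cy  # vertical stretch off-axis
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < W and 0 <= yi < H:
                out[v, u] = img[yi, xi]
    return out
```

Along the optical axis (θ = 0) the mapping is the identity; distortion grows toward the image edges, which is why the four planar views must be re-sampled before they can be stitched into one panorama.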
Early IBR: QuickTime VR (Chen, Williams ’93) 2) Windowing, Horizontal-only Reprojection
View Interpolation: How? • But what if no depth is available? • Traditional Stereo Disparity Map: pixel-by-pixel search for correspondence
View Interpolation: How? • Store depth at each pixel, then reproject • Or use a coarse/simple 3D model
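The "store depth, then reproject" idea is 3D image warping: un-project each pixel with its depth, move it into the new camera's frame, and project again. A NumPy sketch under simple assumptions (shared intrinsics `K`, a rigid transform `T_src_to_dst`; names are illustrative):

```python
import numpy as np

def reproject(depth, K, T_src_to_dst):
    """Forward-warp source pixel coordinates into a new view using
    per-pixel depth (3D warping sketch; no occlusion handling).

    depth: (H, W) depths; K: 3x3 intrinsics shared by both views;
    T_src_to_dst: 4x4 rigid transform from source to target camera.
    Returns (H, W, 2) target-image coordinates for each source pixel."""
    H, W = depth.shape
    K_inv = np.linalg.inv(K)
    coords = np.zeros((H, W, 2))
    for v in range(H):
        for u in range(W):
            # Un-project pixel (u, v) to a 3D point in the source frame.
            p3 = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))
            # Move it into the target frame and re-project.
            q = T_src_to_dst @ np.append(p3, 1.0)
            q = K @ q[:3]
            coords[v, u] = q[:2] / q[2]    # target pixel position
    return coords
```

A real renderer must additionally resolve occlusions (z-buffering the warped samples) and fill holes where no source pixel maps, which is where the disparity-map and correspondence machinery on the previous slide comes in.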
Plenoptic Array: ‘The Matrix Effect’ • Brute force! Simple arc, line, or ring array of cameras with a synchronized shutter (http://www.ruffy.com/firingline.html) • Warp/blend between images to change viewpoint on the ‘time-frozen’ scene
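The simplest possible version of that warp/blend step is a cross-dissolve between the two nearest cameras in the array; a production system would also warp along correspondences before blending. A sketch (function name and parameterization are assumptions):

```python
import numpy as np

def time_frozen_view(frames, t):
    """Fake a continuous viewpoint along a synchronized camera array
    by blending the two neighbouring views (naive cross-dissolve).

    frames: list of (H, W, C) images, one per camera, all captured at
    the same instant; t: continuous camera index in [0, len(frames)-1]."""
    i = int(np.floor(t))
    i = min(i, len(frames) - 2)    # clamp so i+1 stays in range
    a = t - i                      # blend weight between cameras i, i+1
    return (1 - a) * frames[i] + a * frames[i + 1]
```

Pure blending ghosts badly when adjacent cameras are far apart; the view-morphing technique on the following slides fixes this by aligning corresponding points before interpolating.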
Plenoptic Function (Adelson, Bergen ’91) • For a given scene, describe ALL rays through • ALL pixels, of • ALL cameras, at • ALL wavelengths, • ALL time: F(x, y, z, θ, φ, λ, t) • The “eyeballs everywhere” function (5-D in position and direction, × 2 more dimensions for wavelength and time!)
Seitz: ‘View Morphing’ SIGG ’96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm • 1) Manually set some corresponding points (eye corners, etc.) • 2) Pre-warp and post-warp to match points in 3D • 3) Reproject for virtual cameras
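Once the pre-warp has rectified the two views to parallel image planes, step 2 reduces to linear interpolation of corresponding points, and step 3 is a post-warp homography placing the virtual camera. A point-level sketch (names and the optional `H_post` argument are illustrative, not from Seitz's code):

```python
import numpy as np

def view_morph(pts0, pts1, s, H_post=None):
    """View-morphing sketch on matched feature points.

    pts0, pts1: (N, 2) corresponding points in two PRE-WARPED
    (parallel) views; s in [0, 1] picks the in-between viewpoint.
    H_post: optional 3x3 post-warp homography for the virtual view."""
    p = (1 - s) * pts0 + s * pts1    # step 2: morph the in-between view
    if H_post is None:
        return p
    # Step 3: post-warp with a homography (homogeneous coordinates).
    ph = np.hstack([p, np.ones((len(p), 1))]) @ H_post.T
    return ph[:, :2] / ph[:, 2:3]
```

The key insight of the paper is that this interpolation between rectified views is physically valid: the in-between image is what a real camera on the baseline would have seen, not just a plausible-looking blend.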
‘Scene’ causes Light Field • Light field: holds all outgoing light rays • Shape, position, movement • Emitted light; reflected, scattered light • BRDF, texture, scattering • Cameras capture a subset of these rays.
? Can we recover Shape ? Can you find ray intersections? Or ray depth? Ray colors might not match for non-diffuse materials (BRDF)
? Can we recover Surface Material ? Can you find ray intersections? Or ray depth? Ray colors might not match for non-diffuse materials (BRDF)
Hey, wait… • Light field describes light LEAVING the enclosing surface…. • ? Isn’t there a complementary ‘light field’ for the light ENTERING the surface? • *YES* ‘Image-Based Lighting’ too!
‘Full 8-D Light Field’ (10-D, actually, with wavelength λ and time t) • Cleaner formulation: • Orthographic camera, positioned on a sphere around the object/scene • Orthographic projector (a ‘laser brick’), positioned on a sphere around the object/scene • (plus wavelength and time): F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t)
‘Full 8-D Light Field’ (10-D, actually: time t, wavelength λ) • ! Complete ! Geometry, lighting, BRDF, … ! NOT REQUIRED ! • Preposterously huge (?!?! an 8-D function sampled at image resolution !?!?), but • Hugely redundant: • Wavelength → RGB triplets, ignore time, and restrict eyepoint movement: maybe ~3- or 4-D? • Very similar images: use warping rules? (SIGG2002) • Exploit movie storage/compression methods?
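The "preposterously huge" claim is easy to quantify with a back-of-envelope sketch (hypothetical sampling rates, one byte per colour channel, wavelength collapsed to RGB and time ignored):

```python
def lightfield_bytes(n_pos=256, n_ang=256, channels=3):
    """Naive storage for a fully sampled 8-D light field: n_pos steps
    in each of (x, y) and n_ang steps in each of (theta, phi), for
    BOTH the camera and the projector, one byte per channel.
    Sampling rates are illustrative, not from the slides."""
    rays_per_side = n_pos * n_pos * n_ang * n_ang   # one 4-D ray space
    return rays_per_side ** 2 * channels            # camera x projector
```

At a modest 256 samples per dimension this comes to roughly 5×10¹⁹ bytes (tens of exabytes), which is exactly why the redundancy-exploiting tricks above are not optional.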
IBR-Motivating Opinions “Computer Graphics: Hard” • Complex! Geometry, texture, lighting, shadows, compositing, BRDF, interreflections, … • Irregular! Visibility, topology, the rendering equation, … • Isolated! Tough to use real objects in CGI • Slow! Compute-bound, off-line only, … “Digital Imaging: Easy” • Simple! More quality? Just pump more pixels! • Regular! Vectorized, compressible, pipelined … • Accessible! Use real OR synthetic (CGI) images! • Fast! Scalable, image reuse, a path to interactivity …
Practical IBMR What useful partial solutions are possible? • Texture Maps++: Panoramas, Env. Maps • Image(s)+Depth: (3D shell) • Estimating Depth & Recovering Silhouettes • ‘Light Probe’ measures real-world light • Structured Light to Recover Surfaces… • Hybrids: BTF, stitching, …
Conclusion • Heavy overlap with computer vision: be careful not to re-invent & re-name! • Elegant geometry is at the heart of it all, even surface reflectance, illumination, etc. • THUS: we’ll dive into geometry first; all the rest is built on it!