CS 551 / CS 645 Shadow Models
Administrative • Midterm exam in 9 days • Today we’re finishing FvD chapter 16
Recap: Lighting Models • Ambient Light • Diffuse Light • Phong Lighting • Gouraud Shading • Interpolate vertex colors along edges • Phong Shading
Ambient Light Sources • A scene lit only with an ambient light source:
Directional Light Sources • The same scene lit with a directional and an ambient light source
Other Light Sources • Spotlights are point sources whose intensity falls off directionally. • Requires color, point direction, and falloff parameters • Supported by OpenGL
Computing Diffuse Reflection • The angle θ between the surface normal and the incoming light is the angle of incidence: Idiffuse = kd Ilight cos θ • In practice we use vector arithmetic: Idiffuse = kd Ilight (n · l)
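A minimal sketch of this computation in Python (numpy for the vector math; the coefficient value and the example vectors are illustrative):

```python
import numpy as np

def diffuse(n, l, kd, i_light):
    """Lambertian diffuse term: Idiffuse = kd * Ilight * max(n . l, 0).

    n and l must be unit vectors; the max() clamps surfaces facing
    away from the light to zero instead of letting cos go negative.
    """
    return kd * i_light * max(np.dot(n, l), 0.0)

# Example: light arriving 45 degrees off the surface normal
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
print(diffuse(n, l, kd=0.8, i_light=1.0))  # ~0.566 = 0.8 * cos 45°
```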
Specular Reflection • Shiny surfaces exhibit specular reflection • Polished metal • Glossy car finish • A light shining on a specular surface causes a bright spot known as a specular highlight • Where these highlights appear is a function of the viewer’s position, so specular reflectance is view-dependent
The Optics of Reflection • Reflection follows the law of reflection (Snell’s law proper governs refraction): • The incoming ray and reflected ray lie in a plane with the surface normal • The angle that the reflected ray forms with the surface normal equals the angle formed by the incoming ray and the surface normal: θ(l)ight = θ(r)eflection
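In vector form the reflected direction is r = 2(n · l)n − l, with l pointing from the surface toward the light; this formula is not spelled out on the slide, but it follows directly from the two laws above. A quick sketch, assuming unit vectors:

```python
import numpy as np

def reflect(l, n):
    """Mirror l about the unit normal n: r = 2(n . l)n - l.

    Both l and r point away from the surface, so each makes the
    same angle with n, as the law of reflection requires.
    """
    return 2.0 * np.dot(n, l) * n - l

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
print(reflect(l, n))  # [0, -0.707, 0.707]: same angle, opposite side
```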
Phong Lighting • The most common lighting model in computer graphics was suggested by Phong: Ispecular = ks Ilight (v · r)^nshiny • The nshiny term is a purely empirical constant that varies the rate of falloff • Though this model has no physical basis, it works (sort of) in practice
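A sketch of the complete model (ambient + diffuse + specular); the coefficient names ka, kd, ks follow the usual convention and are not taken from the slide:

```python
import numpy as np

def phong(n, l, v, i_light, i_ambient, ka, kd, ks, n_shiny):
    """Phong lighting at a point: ambient + diffuse + specular.

    n, l, v are unit vectors: the surface normal, the direction to
    the light, and the direction to the viewer.  Larger n_shiny
    values make the highlight fall off faster (a tighter spot).
    """
    r = 2.0 * np.dot(n, l) * n - l               # reflection of l about n
    diff = kd * max(np.dot(n, l), 0.0)
    spec = ks * max(np.dot(v, r), 0.0) ** n_shiny
    return ka * i_ambient + i_light * (diff + spec)
```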
Flat Shading • The simplest approach, flat shading, calculates illumination at a single point for each polygon: • If an object really is faceted, is this accurate? • No: • For point sources, the direction to light varies across the facet • For specular reflectance, direction to eye varies across the facet
Gouraud Shading • This is the most common approach • Perform Phong lighting at the vertices • Linearly interpolate the resulting colors over faces • Along edges • Along scanlines • This is what OpenGL does • Does this eliminate the facets?
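A sketch of the innermost step, assuming the left and right edge colors for the current scanline have already been interpolated down the polygon edges:

```python
import numpy as np

def gouraud_span(c_left, c_right, x_left, x_right):
    """Linearly interpolate a color across one scanline span."""
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / max(x_right - x_left, 1)
        yield x, (1.0 - t) * c_left + t * c_right   # lerp the color

# Example: a 5-pixel span fading from red to blue
for x, c in gouraud_span(np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]), 10, 14):
    print(x, c)
```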
Phong Shading • Phong shading is not the same as Phong lighting, though they are sometimes mixed up • Phong lighting: the empirical model we’ve been discussing to calculate illumination at a point on a surface • Phong shading: linearly interpolating the surface normal across the facet, applying the Phong lighting model at every pixel • Same input as Gouraud shading • Usually gives very smooth-looking results • But considerably more expensive
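Phong shading moves the lighting evaluation inside the pixel loop; a sketch, where light_at stands in for a per-pixel lighting call such as the phong() sketch above:

```python
import numpy as np

def phong_shade_span(n_left, n_right, x_left, x_right, light_at):
    """Interpolate normals across a span and light every pixel.

    The interpolated normal must be renormalized: a blend of two
    unit vectors is generally shorter than unit length.
    """
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / max(x_right - x_left, 1)
        n = (1.0 - t) * n_left + t * n_right
        n /= np.linalg.norm(n)      # renormalize before lighting
        yield x, light_at(x, n)     # full lighting model per pixel
```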
Shortcomings of Shading • Polygonal silhouettes remain • Perspective distortion not captured in interpolation down scanlines • Interpolation dependent on polygon orientation • Shared vertices • Bad averaging to compute vertex normals
Shadows • Fake it • Preprocess polygons and the shadows they cast on the ground plane • Add ‘shadow polygons’ to the ground plane when necessary • Shortcomings • Object->Object shadows are missed • Lights and objects cannot move
Scan-line Shadow Generation • Project each polygon’s shadow onto every other polygon • Create a set of ‘shadow polygons’ • When scan-line rendering, use the regular polygons to choose color, but check the shadow polygons to adjust intensity • Shortcomings • Slow • Lights and objects cannot move
Shadow Volumes • Create a polygonal volume from multiple ‘shadow’ polygons anchored at the light and passing through the faces of the object
Shadow Volumes • Use shadow polygons like clipping planes • Shadow an object if it is contained within a volume • How do we compute this containment? (one framing is sketched below)
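One way to frame the containment test, a sketch assuming each shadow volume is convex and stored as the planes of its shadow polygons (each plane as a point on it plus an inward-facing normal; this representation is illustrative):

```python
import numpy as np

def in_shadow_volume(p, planes):
    """True if point p lies inside a convex shadow volume.

    planes is a list of (point_on_plane, inward_normal) pairs; p is
    inside the volume iff it is on the inner side of every plane.
    """
    return all(np.dot(n, p - q) >= 0.0 for q, n in planes)
```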
Shadow Volumes • Examples [figure: shadow-volume examples]
Z-buffer Shadow Maps • Z-buffer • Created as a byproduct of scan conversion • Each scan-converted polygon is represented as a location in pixel space and a depth, or distance, from the camera • Of all polygons projecting onto a pixel, store the z-value of the closest one in the z-buffer • Imagine each pixel having an (r, g, b, z) tuple • We’ll use the z-buffer again when we talk about visibility
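A minimal sketch of the per-pixel depth test performed during scan conversion (buffer names and sizes are illustrative):

```python
import numpy as np

W, H = 640, 480
framebuffer = np.zeros((H, W, 3))
zbuffer = np.full((H, W), np.inf)   # start "infinitely far away"

def write_pixel(x, y, z, color):
    """Keep this fragment only if it is closer than what is stored."""
    if z < zbuffer[y, x]:
        zbuffer[y, x] = z           # record the new closest depth
        framebuffer[y, x] = color
```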
Z-buffer Shadow Maps • Store the z-buffer for the image from the viewpoint of the light • Compute the image and z-buffer from the viewer’s position, but… • For each pixel in image, find its object coordinates (x, y, z) • Map this (x, y, z) into light space (x’, y’, z’)
Z-buffer Shadow Maps • Project (x’, y’, z’) into the light’s z-buffer and compare z’ to the stored z-buffer value, zlight • If zlight < z’, the point is shadowed by something closer to the light • Otherwise, the point is visible from the light and should be illuminated • Beware of numerical precision • A polygon should not shadow itself
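A sketch of the whole test; to_light_space() is a hypothetical stand-in for the world-to-light transform, light_zbuffer is the depth image saved from the light’s render, and the bias term is the usual guard against the self-shadowing precision problem noted above:

```python
def is_lit(p_world, light_zbuffer, to_light_space, bias=1e-3):
    """Shadow-map test for one visible point.

    Maps the point into the light's pixel/depth space, then compares
    its depth z' against the closest depth zlight the light recorded
    for that pixel.  The bias keeps a surface from shadowing itself.
    """
    x, y, z = to_light_space(p_world)       # (x', y', z')
    z_light = light_zbuffer[int(y), int(x)]
    return z_light + bias >= z              # lit unless something is closer
```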
Global Illumination • We’ve glossed over how light really works • And we will continue to do so… • One step better • Global Illumination • The notion that a point is illuminated by more than light from local lights; it is illuminated by all the emitters and reflectors in the global scene
The ‘Rendering Equation’ • Jim Kajiya (current head of Microsoft Research) developed this equation in 1986 (reconstructed below) • e(x, x’) is the intensity emitted from x’ to x • r(x, x’, x’’) is the intensity of light arriving from x’’ that x’ reflects toward x • S is all points on all surfaces
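For reference, the equation in roughly the form Kajiya gave it, using the slide’s e and r; the g(x, x’) factor (not named on the slide) is the geometry term, which is zero when x and x’ cannot see each other:

```latex
I(x, x') = g(x, x')\left[\, e(x, x') + \int_{S} r(x, x', x'')\, I(x', x'')\, dx'' \right]
```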
The ‘Rendering Equation’ • The light that hits x from x’ is the direct illumination from x’ and all the light reflected by x’ from all x’’ • To implement: • Must handle recursion effectively • Must support diffuse and specular light • Must model object shadowing
Recursive Ray Tracing • Cast a ray from the viewer’s eye through each pixel • Compute intersection of this ray with objects from scene • Closest intersecting object determines color
Recursive Ray Tracing • Cast a ray from the intersected object to the light sources and determine shadow/lighting conditions • Also spawn secondary rays • Reflection rays and refraction rays • Use the surface normal as a guide • If another object is hit, determine the light it contributes by recursing through ray tracing
Recursive Ray Tracing • Stop recursing when: • ray fails to intersect an object • user-specified maximum depth is reached • system runs out of memory • Common numerical accuracy error • Spawn secondary ray from intersection point • Secondary ray falsely re-intersects the same object at the spawn point (the usual epsilon-offset fix appears in the sketch below)
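A structural sketch of the recursion; scene, intersect(), and the hit fields are illustrative stand-ins rather than a real API, and the EPS offset is the standard fix for the self-intersection error just described:

```python
EPS = 1e-4   # nudge secondary rays off the surface to avoid self-hits

def trace(origin, direction, scene, depth=0, max_depth=5):
    """Recursively trace one ray and return its color contribution."""
    if depth > max_depth:
        return scene.background        # stop: maximum depth reached
    hit = scene.intersect(origin, direction)
    if hit is None:
        return scene.background        # stop: ray left the scene
    color = hit.shade_local(scene)     # shadow rays to the lights
    p = hit.point + EPS * hit.normal   # spawn from just off the surface
    if hit.reflective:
        color += hit.k_reflect * trace(p, hit.reflect_dir, scene, depth + 1)
    if hit.transparent:                # refraction starts just inside
        color += hit.k_refract * trace(hit.point - EPS * hit.normal,
                                       hit.refract_dir, scene, depth + 1)
    return color
```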
Recursive Ray Tracing • Still producing PhDs after all these years • Many opportunities to improve the efficiency and accuracy of ray tracing • Reduce the number of rays cast • Accurately capture shadows caused by non-lights (ray tracing from the light source) • Expensive to recompute as the eyepoint changes
Radiosity • Ray tracing models specular reflection and refractive transparency, but still uses an ambient term to account for other lighting effects • Radiosity is the rate at which energy is emitted or reflected by a surface • By conserving light energy in a volume, these radiosity effects can be traced
Radiosity • The radiosity of patch i is Bi = ei + ri Σj Fj-i Bj (Aj / Ai) • ei is the rate at which light is emitted • ri is patch i’s reflectivity • Fj-i is the form factor • fraction of light leaving patch j that reaches patch i, as determined by the orientation of both patches and any obstructions • Use Aj/Ai to scale units to total light emitted by j as received per unit area by i
Radiosity • Compute an n-by-n matrix of form factors to store the radiosity relationships between each patch and every other patch • The form factor geometry is developed below; a sketch of solving the resulting system follows
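Once the form factors are known, the coupled radiosity equations can be solved iteratively; a sketch using simple fixed-point (Jacobi-style) iteration, with illustrative input names:

```python
import numpy as np

def solve_radiosity(e, rho, F, A, iters=100):
    """Iterate Bi = ei + ri * sum_j Fj-i * Bj * (Aj / Ai) to a fixed point.

    e:   emission rate per patch           shape (n,)
    rho: reflectivity per patch            shape (n,)
    F:   form factors, F[j, i] = Fj-i      shape (n, n)
    A:   patch areas                       shape (n,)
    """
    B = e.copy()
    for _ in range(iters):
        gathered = (F * (B * A)[:, None]).sum(axis=0) / A
        B = e + rho * gathered
    return B
```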
Radiosity • Spherical projections to model the form factor • Project polygon Aj onto a unit hemisphere centered at (and tangent to) Ai • Contributes cos θj / r² • Project this projection onto the base of the hemisphere • Contributes cos θi • Divide this area by the area of the circle base • Contributes π
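Combining the three contributions gives the standard differential form factor (a reconstruction from the steps above; r is the distance between the two patch points):

```latex
dF_{j \to i} = \frac{\cos\theta_i \, \cos\theta_j}{\pi r^2} \, dA_j
```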
Radiosity • Analytic solution over the hemisphere is expensive • Use a rectangular approximation, the hemicube • cosine terms for the top and sides are simplified
Radiosity • Radiosity is expensive to compute • Get your PhD by improving it • Emitted light and viewpoint can change • Light angles and object positions cannot • Specular reflection information is not modeled
View-dependent vs. View-independent • Ray tracing models specular reflection well, but diffuse reflection is approximated • Radiosity models diffuse reflection accurately, but specular reflection is ignored • Combine the two...
Texture Mapping • Limited ability to generate complex surfaces with geometry • Images can convey the illusion of geometry • Painting images onto polygons is called texture mapping
Texture Mapping • A texture map is an image: a two-dimensional array of color values (texels) • Texels are indexed by the texture’s (u, v) space • At each screen pixel, a texel can substitute for a polygon’s surface property • We must map (u, v) space to the polygon’s (s, t) space
Texture Mapping • The (u, v) to (s, t) mapping can be explicitly set at vertices by storing texture coordinates with each vertex • How do we compute the mapping for points in between? • Watch for aliasing • Watch for many-to-one mappings • Watch for perspective foreshortening effects under linear interpolation (see the sketch below)
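The usual remedy for the foreshortening problem is perspective-correct interpolation: u/w, v/w, and 1/w vary linearly in screen space, so interpolate those and divide per pixel. A sketch for a single parameter t along a span (names illustrative):

```python
def perspective_correct_uv(uv0, uv1, w0, w1, t):
    """Perspective-correct texture coordinates at t in [0, 1].

    Lerping (u, v) directly in screen space is wrong under
    perspective; u/w, v/w, and 1/w do lerp correctly, so we
    interpolate those and divide back out at the end.
    """
    inv_w = (1 - t) / w0 + t / w1
    u = ((1 - t) * uv0[0] / w0 + t * uv1[0] / w1) / inv_w
    v = ((1 - t) * uv0[1] / w0 + t * uv1[1] / w1) / inv_w
    return u, v
```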
Bump Mapping • Use textures to give the appearance of modified surface geometry • Use texel values to modify the surface normals of the polygon • Texel values correspond to a height field • The height field models a rough surface • The partial derivatives of the bump map specify the change to the surface normal
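A sketch of the normal perturbation, assuming a flat local frame with the u and v tangents along x and y (a common simplification; the height array and strength factor are illustrative):

```python
import numpy as np

def bumped_normal(height, iu, iv, strength=1.0):
    """Perturb a flat surface normal using a height-field texture.

    Central differences approximate the partial derivatives dh/du
    and dh/dv at interior texel (iu, iv); those slopes tilt the
    normal away from the unperturbed (0, 0, 1).
    """
    dh_du = (height[iv, iu + 1] - height[iv, iu - 1]) * 0.5
    dh_dv = (height[iv + 1, iu] - height[iv - 1, iu]) * 0.5
    n = np.array([-strength * dh_du, -strength * dh_dv, 1.0])
    return n / np.linalg.norm(n)
```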