
Creating shadows



  1. Creating shadows Computer Graphics methods

  2. Problem statement • The goal is to produce realistic-looking images (for games, 3D rendering, etc.) from 3D objects • 3D objects can be described in different ways – with polygonal meshes, curves, etc. Here we deal only with polygonal objects.

  3. Simplified illumination model (self-shadow) • Final Color ~ Light Color * (N·L) + Object Color, where N·L is the scalar (dot) product of the polygon normal N with the direction to the light source L • The normal of a polygon that faces away from the light source has a negative scalar product with the direction to the light source, so that polygon is rendered darker • The normal of a polygon that faces the light source has a positive scalar product with the direction to the light source, so that polygon is rendered brighter
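
A minimal sketch of this per-polygon lighting term in Python/NumPy; the clamping of negative N·L to zero and the specific colors are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def shade(normal, to_light, light_color, object_color):
    """Approximate per-polygon color: light_color * max(N.L, 0) + object_color."""
    n = normal / np.linalg.norm(normal)
    l = to_light / np.linalg.norm(to_light)
    n_dot_l = max(np.dot(n, l), 0.0)        # polygons facing away get no direct light
    return light_color * n_dot_l + object_color

light_color  = np.array([0.8, 0.8, 0.8])
object_color = np.array([0.1, 0.1, 0.3])
# A polygon whose normal points toward the light comes out brighter than one facing away.
print(shade(np.array([0.0, 1.0, 0.0]),  np.array([0.0, 1.0, 1.0]), light_color, object_color))
print(shade(np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.0, 1.0]), light_color, object_color))
```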

  4. Simplified illumination model (self-shadow) – cont. • Flat shading: a single color is computed for the whole polygon from its face normal • Gouraud shading: colors are computed at the polygon vertices and, for each point inside the polygon, those vertex colors are interpolated • Phong (smooth) shading: for each point inside the polygon, the vertex normals are interpolated and lighting is evaluated there
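
A sketch of the interpolation step that both smooth-shading variants share, assuming barycentric weights for a point inside a triangle; the triangle data and weights below are made up for illustration.

```python
import numpy as np

def barycentric_mix(values, weights):
    """Interpolate per-vertex values (colors or normals) with barycentric weights."""
    return sum(w * v for w, v in zip(weights, values))

vertex_colors  = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
vertex_normals = [np.array([0.0, 1.0, 0.0]), np.array([0.3, 0.9, 0.0]), np.array([-0.3, 0.9, 0.0])]
weights = (0.5, 0.25, 0.25)                  # barycentric coordinates of the shaded point

gouraud_color = barycentric_mix(vertex_colors, weights)   # Gouraud: interpolate vertex colors
phong_normal  = barycentric_mix(vertex_normals, weights)  # Phong: interpolate vertex normals...
phong_normal /= np.linalg.norm(phong_normal)              # ...renormalize, then light per pixel
print(gouraud_color, phong_normal)
```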

  5. We want cast shadows • Self-shadowing alone does not make the scene look realistic enough: • We would like it to look like this: But all the polygons of the flat floor have normals pointing in the same direction! Where will the shadow effect come from? We need to "force" darker coloring onto the floor, so we need a different algorithm.

  6. From 3D to 2D • Images are created by rendering geometric objects of a 3D "world". That means taking information such as coordinates, lights, colors, etc. from the data structures/files that describe those models and producing a 2D picture of them (the image on the screen) • We want the resulting picture to look good rather than be physically correct (exact computations are time consuming), so the mathematical calculations for shadow generation are often approximate rather than exact.

  7. Algorithms for cast shadows • There are many algorithms for shadow generation. The first ones go back to the '70s, but as technology moves on, more of the hardware's potential can be used to save time and improve shadow quality (next week) • We will see the following classic algorithms: • Shadow Volumes (projected shadow polygons) [Crow 77] – geometry-based algorithm • Two-pass object-precision shadow algorithm [Atherton et al. 78] – object-space algorithm • Shadow Mapping (two-pass Z-buffer shadow algorithm) [Williams 78] – image-space algorithm • Projected Planar Shadows [Blinn 88]

  8. 1. Shadow Volumes (Crow's algorithm) • A shadow volume is defined by the light source and an occluding object (the red triangle) and is bounded by a set of invisible shadow polygons • It is computed as an extrusion of the object's silhouette (contour edges) along the light direction, as in the sketch below • The shadow polygons are not rendered; they are used to determine whether other objects are in shadow (any surface lying between those shadow polygons is shadowed)
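
A sketch of the silhouette-extraction and extrusion steps under simplified assumptions: a triangle mesh with known edge adjacency, a point light, and the rule that an edge is a silhouette edge when exactly one of its two adjacent triangles faces the light. The mesh, adjacency table, and extrusion length are hypothetical.

```python
import numpy as np

def faces_light(triangle, light_pos):
    """True if the triangle's geometric normal points toward the light."""
    a, b, c = triangle
    normal = np.cross(b - a, c - a)
    return np.dot(normal, light_pos - a) > 0.0

def silhouette_edges(triangles, adjacency, light_pos):
    """Edges whose two adjacent triangles disagree about facing the light."""
    lit = [faces_light(t, light_pos) for t in triangles]
    return [edge for edge, (i, j) in adjacency.items() if lit[i] != lit[j]]

def extrude_edge(p0, p1, light_pos, length=1e3):
    """Shadow-volume quad: the edge extruded away from the light (semi-infinite in principle)."""
    d0 = (p0 - light_pos) / np.linalg.norm(p0 - light_pos)
    d1 = (p1 - light_pos) / np.linalg.norm(p1 - light_pos)
    return [p0, p1, p1 + length * d1, p0 + length * d0]

# Two triangles sharing one edge; only the second faces the light, so the shared edge is a silhouette.
tris = [np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 1.]]),
        np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])]
adjacency = {((0., 0., 0.), (0., 0., 1.)): (0, 1)}
print(silhouette_edges(tris, adjacency, np.array([5., 5., 0.])))
```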

  9. Shadow Volumes – cont. • Consider a vector from the viewpoint V to a point P. The point P is in shadow if this vector intersects more front-facing (relative to the viewer) shadow polygons than back-facing ones. (A polygon is "back facing" if its normal points away from the viewing direction V; "front facing" is defined similarly.)

  10. Z-buffer (depth buffer) • A two-dimensional array in which each entry holds the z-value (depth value) of one pixel • When polygons are rendered (in scan-line order, row by row), some may be occluded by others. We want to change a pixel's color (repaint it) only if the new polygon is closer, and never repaint pixels that already show closer geometry. Example: pixels A and B first receive the green polygon's color; pixel A must then be updated with the color of the orange polygon in front of it, while pixel B must remain unchanged.

  11. Z-buffer (depth buffer) • Each entry of the Z-buffer is initialized with the "deepest" z-value. When a new polygon is scan-converted, the z-values of the pixels it covers are compared with the values stored in the Z-buffer. If the polygon's pixels are closer, their colors are changed and the corresponding Z-buffer entries are updated with the polygon's z-values; otherwise no change takes place.
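
A minimal sketch of that per-pixel test, assuming smaller z means closer to the viewer and plain NumPy arrays for the two buffers; the resolution and "deepest" value are illustrative.

```python
import numpy as np

WIDTH, HEIGHT, FAR = 640, 480, np.inf
depth_buffer = np.full((HEIGHT, WIDTH), FAR)        # initialized to the "deepest" value
color_buffer = np.zeros((HEIGHT, WIDTH, 3))

def write_fragment(x, y, z, color):
    """Repaint a pixel only if the incoming fragment is closer than what is already stored."""
    if z < depth_buffer[y, x]:
        depth_buffer[y, x] = z                      # remember the new, closer depth
        color_buffer[y, x] = color                  # and the new color
    # else: a closer, already-visible surface keeps the pixel unchanged

write_fragment(10, 20, 5.0, (0.0, 1.0, 0.0))        # green polygon at depth 5
write_fragment(10, 20, 2.0, (1.0, 0.5, 0.0))        # closer orange polygon wins the pixel
write_fragment(10, 20, 8.0, (0.0, 0.0, 1.0))        # farther polygon is rejected
```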

  12. Stencil buffer • The stencil buffer is a per-pixel test buffer similar to the Z-buffer • Here it is used to count how many times we enter or leave the shadow volume • Each time a front-facing shadow polygon covers the current pixel, the counter is incremented • Each time a back-facing shadow polygon covers the current pixel, the counter is decremented

  13. Shadow Volumes – cont. • The algorithm is geometry based because it requires connectivity information for the polygonal meshes in the scene to efficiently compute the silhouette of each shadow-casting object • For each object and light source, compute the object's silhouette from the light source's viewpoint • Extrude each silhouette to form a semi-infinite volume • Feed the shadow polygons into the regular Z-buffer as fully transparent polygons • For all shadow polygons that are front facing from the viewpoint: if the Z-buffer test passes, increment the stencil buffer value • For all shadow polygons that are back facing from the viewpoint: if the Z-buffer test passes, decrement the stencil buffer value • A sketch of this counting pass follows below
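
A sketch of the stencil counting pass under simplified assumptions: the scene's depth buffer has already been filled, each shadow polygon is reduced to the set of pixels it covers together with its depth at each pixel, and its facing is precomputed. This mirrors the depth-test-based counting described above, not a full rasterizer.

```python
import numpy as np

def stencil_pass(depth_buffer, shadow_polygons):
    """Count front-facing (+1) and back-facing (-1) shadow polygons that pass the depth test."""
    stencil = np.zeros(depth_buffer.shape, dtype=int)
    for poly in shadow_polygons:
        delta = 1 if poly["front_facing"] else -1
        for (x, y), z in poly["fragments"]:          # pixels covered by this shadow polygon
            if z < depth_buffer[y, x]:               # Z-buffer test against the stored scene depth
                stencil[y, x] += delta
    return stencil

# A pixel with stencil > 0 lies inside a shadow volume and is drawn in shadow; 0 means lit.
depth_buffer = np.full((4, 4), 10.0)
polys = [{"front_facing": True,  "fragments": [((1, 1), 3.0), ((2, 1), 3.0)]},
         {"front_facing": False, "fragments": [((2, 1), 7.0)]}]
print(stencil_pass(depth_buffer, polys))             # pixel (1, 1) ends up in shadow, (2, 1) stays lit
```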

  14. Shadow Volumes – cont. • A processed polygon that passes the Z-buffer test is the polygon currently closest to the viewer • A pixel is lit if its stencil counter is zero and is in shadow if the counter is positive • Disadvantages: • If the viewer is inside a shadow volume, the above criteria for shadow determination are wrong • The method is viewpoint dependent (the counting of crossed shadow polygons is based on the viewpoint vector, so it must be redone when the viewpoint changes)

  15. 2. Two-pass object-precision shadow algorithm • The algorithm operates in object space (on coordinates, polygons, etc.) • The same visibility computation is used twice, once for the viewpoint and once for the light source • The shadows are calculated based only on the (constant) light source location and do not depend on the (changing) viewpoint (in contrast to Shadow Volumes) • The algorithm first determines the surfaces visible from the light source's viewpoint. The output of this step is a list of lit polygons

  16. Two-pass object-precision shadow algorithm – cont. • Polygons are sorted by their nearest z coordinate relative to the light source's viewpoint. The polygon closest to the light is used to clip all the other polygons. Polygons (or parts of polygons) that lie behind the clipping polygon are invisible to the light source (they are in shadow); polygons visible to the light are marked as lit. • The resulting classified polygons are merged with the original polygon database. The same method is then used to determine the polygons visible from the viewer's viewpoint. Polygons invisible to the viewer are deleted; polygons visible to the viewer that were marked lit are rendered with full color, and visible polygons that correspond to unlit parts from the previous step are shadowed. A brute-force version of this lit/unlit classification is sketched below.
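
Not the polygon-clipping pass described above, but a hedged, much simpler illustration of the same object-space idea: classify each polygon as lit or unlit by testing whether its centroid can see the point light, using a standard Möller-Trumbore ray/triangle intersection. The triangles and light position are made-up inputs.

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection; returns the hit distance t, or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                                   # ray parallel to the triangle's plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def classify_lit(triangles, light_pos):
    """Mark each triangle lit or unlit by shooting a ray from its centroid toward the light."""
    labels = []
    for i, tri in enumerate(triangles):
        centroid = tri.mean(axis=0)
        to_light = light_pos - centroid
        dist = np.linalg.norm(to_light)
        direction = to_light / dist
        occluded = False
        for j, other in enumerate(triangles):
            if j == i:
                continue
            t = ray_hits_triangle(centroid + 1e-4 * direction, direction, other)
            if t is not None and t < dist:
                occluded = True                       # something sits between this polygon and the light
                break
        labels.append("unlit" if occluded else "lit")
    return labels

floor   = np.array([[-2., 0., -2.], [2., 0., -2.], [0., 0., 2.]])
blocker = np.array([[-1., 1., -1.], [1., 1., -1.], [0., 1., 1.]])
print(classify_lit([floor, blocker], light_pos=np.array([0., 5., 0.])))   # ['unlit', 'lit']
```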

  17. Two-pass object-precision shadow algorithm – cont. • Disadvantages: • The computation grows with the square of the amount of data (a straightforward implementation tests every polygon against the others to decide whether it is covered by unlit polygons, i.e. whether it is shadowed; a more sophisticated implementation requires reorganizing the data structures, which is also time consuming) • If a polygon is only partly visible from the light source, it must be split into a lit and an unlit part, so the number of polygons can grow

  18. 3. Shadow Mapping (two-pass Z-buffer shadow algorithm) • Shadow mapping is an image-space algorithm, which means that no knowledge of the scene's geometry is required to carry out the "in shadow" computations (the geometry is used only to fill the Z-buffer, together with knowledge of the light and viewer locations) • Compute the discrete visibility of the scene from the light source to decide whether a pixel is shadowed. [Figures: the scene from the light source and the scene from the eye]

  19. Shadow Mapping (two-pass Z-buffer shadow algorithm) – cont. • Pass 1: • Render the scene from the light source's "viewpoint" • For each pixel, save only the z depth, not the color. [Figure: visualization of the Z-buffer contents; greater intensity means greater distance from the light]

  20. Shadow Mapping (two-pass Z-buffer shadow algorithm) – cont. • Pass 2: • Render the scene from the eye's viewpoint, and • Transform every pixel P visible to the eye into light-source space: • (x, y, z) → (xL, yL, zL) (where this point is located from the light's point of view) • Go to location (xL, yL) in the Z-buffer from Pass 1 and read the value stored there: Z-light-buff • Compare zL with Z-light-buff • If Z-light-buff is closer to the light than zL (the depth of the transformed point), something occludes P (the value stored in the Z-buffer belongs to the polygon closest to the light source, and it is not zL) • Otherwise P is illuminated by the light source (it is seen both by the viewer and by the light source) • A sketch of this lookup is given below
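
A sketch of the Pass 2 test, assuming the Pass 1 depth map is a NumPy array indexed by the light-space texel (xL, yL), that depth grows with distance from the light, and that a small bias is subtracted to counter the self-shadowing artifact mentioned on the next slide; the world-to-light-space transform itself is abstracted away as an input.

```python
import numpy as np

def in_shadow(shadow_map, xL, yL, zL, bias=1e-3):
    """Pass 2 test: P is in shadow if something closer to the light was stored at (xL, yL)."""
    z_light_buff = shadow_map[yL, xL]           # depth of the surface nearest the light at that texel
    return z_light_buff < zL - bias             # the bias fights quantization-induced self-shadowing

# Pass 1 produced a depth map from the light's viewpoint (here a made-up 4x4 example).
shadow_map = np.full((4, 4), 10.0)
shadow_map[1, 2] = 4.0                          # some occluder lies 4 units from the light at texel (2, 1)

print(in_shadow(shadow_map, xL=2, yL=1, zL=7.0))    # True: a surface at depth 4 hides this point
print(in_shadow(shadow_map, xL=2, yL=1, zL=4.0))    # False: this is the surface the light sees
```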

  21. Shadow Mapping (two-pass Z-buffer shadow algorithm) – cont. • Disadvantage: since it uses discrete sampling, it must deal with various aliasing artifacts (such as erroneous self-shadowing) • When a point on a surface seen from the eye's point of view is transformed into the light's coordinate system, the computation is not exact due to Z-buffer quantization. Everything begins to shadow itself, producing a vivid moiré pattern.

  22. 3.5 A hybrid approach • One more shadow algorithm that deserves mention is McCool's clever idea of shadow volume reconstruction from depth maps [McCool 2000] • The algorithm is a hybrid of the shadow map and shadow volume algorithms and does not require a polygonal representation of the scene • Instead of finding the silhouette edges via a dot product per model edge (as in shadow volumes), a depth map of the scene from the light's point of view is acquired (as in shadow mapping), and the silhouette edges are extracted from it using computer-vision techniques. From these edges the shadow volumes are constructed (the silhouette edges are extruded)

  23. 4. Projected planar shadows • Projected planar shadows are the simplest shadow-generation algorithm still in wide use. It suffices for scenes with single objects casting shadows onto a plane. The main idea is to draw a projection of the object onto the plane • The idea: given a plane n·X + d = 0 (n is the plane normal, X is a general point (x, y, z), n·X is a scalar product, d is a scalar offset) and a point light source L, construct a projection matrix M that projects each vertex v onto the plane: Mv = p, where p is the corresponding point on the plane.

  24. Projected planar shadows – cont. • l is the location vector of the light source • By multiplying the vertices of the object by this matrix we obtain the outline of the shadow it casts on the plane n·x + d = 0 • The slide gives the matrix for the general case of an arbitrary projection plane; Blinn, in his paper, provides the formulae for the plane z = 0. A standard form of the general-plane matrix is sketched below.
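
A hedged reconstruction of that projection matrix (the slide's own image is not reproduced here): for the plane n·X + d = 0 and a point light at l, the textbook general-plane form consistent with Mv = p is M = (n·l + d)·I4 − l_h·n_hᵀ, where l_h = (lx, ly, lz, 1) and n_h = (nx, ny, nz, d) are the homogeneous light and plane vectors. This may differ in notation from Blinn's paper.

```python
import numpy as np

def planar_shadow_matrix(n, d, l):
    """4x4 matrix projecting homogeneous points onto the plane n.X + d = 0 from a point light at l."""
    plane = np.append(n, d)                    # n_h = (nx, ny, nz, d)
    light = np.append(l, 1.0)                  # l_h = (lx, ly, lz, 1)
    return np.dot(plane, light) * np.eye(4) - np.outer(light, plane)

def project(M, v):
    """Apply M to a vertex and divide by the homogeneous w to get the point p on the plane."""
    p = M @ np.append(v, 1.0)
    return p[:3] / p[3]

# Shadow of a vertex on the floor plane y = 0 (n = (0, 1, 0), d = 0), light above and to the side.
M = planar_shadow_matrix(n=np.array([0.0, 1.0, 0.0]), d=0.0, l=np.array([2.0, 10.0, 0.0]))
print(project(M, np.array([1.0, 1.0, 0.0])))   # lands on the plane: the y component is 0
```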

  25. Projected planar shadows – cont. • Disadvantage: Blinn's model does not really suffice on its own, because modern scenes are often so complex that calculating shadows onto all possible receiving surfaces this way would be too time consuming

  26. Umbra and penumbra • Until now, all the algorithms were limited to a point light source • Soft shadows (penumbra) are produced by an extended or distributed light source; they provide a more realistic look but require more calculation (one common approximation is sketched below) • Hard shadows (umbra) are produced by a point light source; they are simpler to implement, but their look is too "computerized"
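
A hedged sketch of one common way to approximate a penumbra, which the slides do not spell out: treat the area light as many point-light samples, run any hard-shadow test against each sample, and average the results. The blocker geometry and shadow_test below are hypothetical stand-ins for whichever of the previous algorithms is actually used.

```python
import numpy as np

def soft_shadow_factor(point, light_samples, shadow_test):
    """Fraction of the area light visible from the point: 1.0 is fully lit, 0.0 is full umbra."""
    visible = sum(0 if shadow_test(point, sample) else 1 for sample in light_samples)
    return visible / len(light_samples)

# Hypothetical hard-shadow test: a 1x1 square blocker at height y = 1 hides the light from points below it.
def shadow_test(point, light_pos):
    t = (1.0 - point[1]) / (light_pos[1] - point[1])        # where the ray to the light crosses y = 1
    hit = point + t * (light_pos - point)
    return abs(hit[0]) < 0.5 and abs(hit[2]) < 0.5          # inside the blocker -> hard shadow

# Jittered samples over a square area light centered at (0, 5, 0).
rng = np.random.default_rng(0)
samples = [np.array([x, 5.0, z]) for x, z in rng.uniform(-1.0, 1.0, size=(64, 2))]
print(soft_shadow_factor(np.array([0.6, 0.0, 0.0]), samples, shadow_test))   # roughly half lit: penumbra
```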

  27. References • Crow, F. C., "Shadow Algorithms for Computer Graphics" (1977) • Blinn, J., "Me and My (Fake) Shadow" (1988) • "An Introduction to Stencil Shadow Volumes" • "Shadow Mapping and Shadow Volumes" (http://www.devmaster.net/articles) • Foley, J. D. et al., Computer Graphics: Principles and Practice, 2nd edition
