Genesis of Image Space NPR
Saito, T. and Takahashi, T. Comprehensible Rendering of 3-D Shapes. Proc. of SIGGRAPH '90.
Image Space Algorithms
Saito, T. and Takahashi, T. Comprehensible Rendering of 3-D Shapes. Proc. of SIGGRAPH '90.
• Operations on G-buffers extract certain properties into various images
• Combine these images with rendered images
G-buffers?
Computer-Generated Images
• Special kinds of recording equipment yield special images
  • x-ray images
  • thermal images
  • sonar images
G-buffers
• Translate this approach to computer graphics
• Rendering algorithms create images that show scene properties normally hidden from the viewer:
  • object ID
  • distance to view plane
  • surface normal
  • patch coordinates (u,v) for spline surfaces
  • …
• These images are called G-buffers (geometric buffers)
G-buffers
• Pixel color now encodes 3D information, not just illumination
• Reveals information about the underlying geometry
• Operations on G-buffers (a minimal data-layout sketch follows below):
  • combination
  • edge detection
  • …
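As a rough illustration, a G-buffer can be thought of as an image whose "pixels" store geometric attributes rather than (or in addition to) shaded color. The sketch below is only one possible layout under that assumption; the struct and field names are ours, not Saito and Takahashi's.

```cpp
// Minimal sketch of a per-pixel G-buffer record (illustrative, not from the paper).
#include <cstdint>
#include <vector>

struct GBufferPixel {
    float    r, g, b;     // conventionally shaded color (RGB-buffer)
    uint32_t objectId;    // object ID-buffer
    float    depth;       // distance to the view plane (depth-buffer)
    float    nx, ny, nz;  // surface normal (normal-buffer)
    float    u, v;        // patch coordinates for spline surfaces
};

// A G-buffer is then simply an image of such records, filled during rendering
// and post-processed afterwards (combination, edge detection, ...).
struct GBuffer {
    int width = 0, height = 0;
    std::vector<GBufferPixel> pixels;  // row-major, width * height entries

    GBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
    GBufferPixel&       at(int x, int y)       { return pixels[y * width + x]; }
    const GBufferPixel& at(int x, int y) const { return pixels[y * width + x]; }
};
```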
[Figure: RGB-buffer]
[Figure: Object ID-buffer]
[Figure: Depth-buffer]
[Figure: Normal-buffer]
Saito, T. and Takahashi, T. Comprehensible Rendering of 3-D Shapes. Proc. of SIGGRAPH '90.
• Data structures + algorithms:
  • Drawing discontinuities, edges, contour lines, and curved hatching from the image buffer
• Edge classification:
  • Profile: the border line of an object on the screen
  • Internal: a line where two faces meet
• Images generated:
  • Depth
  • First-order differential
  • Second-order differential
  • Profile
  • Internal edge
Depth Image
• Grayscale image that maps [dmin, dmax] to [0, 255]
• Normalization equalizes the gradient of the depth image with the slope of the surface
• In OpenGL, the depth image content can be extracted with glReadPixels using GL_DEPTH_COMPONENT (a sketch follows below)
[Figure: shaded image and corresponding depth image; one pixel length (eye coordinates), distance from viewpoint to screen, and depth of object (eye coordinates) used for normalization]
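A minimal sketch of the depth-buffer readback mentioned above, assuming a current OpenGL context and a finished frame; the helper name readDepthImage and the choice of leaving the far-plane background white are our assumptions.

```cpp
// Grab the depth buffer with glReadPixels(GL_DEPTH_COMPONENT) and remap
// [dmin, dmax] to [0, 255] to build a grayscale depth image.
#include <GL/gl.h>
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> readDepthImage(int width, int height)
{
    std::vector<GLfloat> depth(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    // Find the occupied depth range [dmin, dmax], ignoring the far-plane background (depth == 1.0).
    float dmin = 1.0f, dmax = 0.0f;
    for (float d : depth) {
        if (d < 1.0f) { dmin = std::min(dmin, d); dmax = std::max(dmax, d); }
    }

    std::vector<uint8_t> gray(depth.size(), 255);  // background stays white
    const float range = std::max(dmax - dmin, 1e-6f);
    for (size_t i = 0; i < depth.size(); ++i) {
        if (depth[i] < 1.0f)
            gray[i] = static_cast<uint8_t>(255.0f * (depth[i] - dmin) / range);
    }
    return gray;  // row-major, bottom-up, as glReadPixels returns it
}
```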
Sobel's Filter
A sketch of the Sobel operator over a depth image follows below.
[Figure: depth image, its first-order differential, and its second-order differential]
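The sketch below applies a plain Sobel operator to a grayscale depth image to obtain a first-order differential; applying it again to the gradient-magnitude image gives a crude second-order differential. The flat array layout and function name are our assumptions.

```cpp
// Sobel gradient magnitude over a grayscale image (e.g., the depth image).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> sobelMagnitude(const std::vector<uint8_t>& img, int w, int h)
{
    static const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };   // d/dx kernel
    static const int ky[3][3] = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} };   // d/dy kernel

    std::vector<uint8_t> out(img.size(), 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int gx = 0, gy = 0;
            for (int j = -1; j <= 1; ++j)
                for (int i = -1; i <= 1; ++i) {
                    int p = img[(y + j) * w + (x + i)];
                    gx += kx[j + 1][i + 1] * p;
                    gy += ky[j + 1][i + 1] * p;
                }
            int mag = static_cast<int>(std::sqrt(float(gx * gx + gy * gy)));
            out[y * w + x] = static_cast<uint8_t>(std::min(mag, 255));
        }
    }
    return out;
}
```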
Normalization of Images
• Profile image: distinguishes discontinuities from continuous changes
• Internal edge image: limits the gradient to eliminate 0th-order discontinuities
[Figure: profile image and internal edge image]
Operations on G-buffers (so far…): Edge Detection
• RGB-buffer: discontinuities in brightness (illumination), i.e., shadows, materials, objects
• z-buffer: discontinuities in depth, i.e., object boundaries, but also boundaries within one object (creases)
• OID-buffer: discontinuities in "objects", i.e., object silhouettes (a combination sketch follows below)
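One simple combination, sketched below under our own naming assumptions: an edge in the object-ID buffer is any pixel whose ID differs from a neighbor's, and OR-ing those edges with thresholded depth-image edges (e.g., the Sobel result above) gives silhouettes plus creases.

```cpp
// Combine object-ID discontinuities with depth-image edges into one edge mask.
#include <cstdint>
#include <vector>

std::vector<uint8_t> combinedEdges(const std::vector<uint32_t>& objectId,
                                   const std::vector<uint8_t>& depthEdges,  // 0..255 gradient magnitude
                                   int w, int h, uint8_t depthThreshold = 32)
{
    std::vector<uint8_t> edges(w * h, 0);
    for (int y = 1; y < h; ++y) {
        for (int x = 1; x < w; ++x) {
            int i = y * w + x;
            bool idEdge = objectId[i] != objectId[i - 1] ||   // differs from left neighbor
                          objectId[i] != objectId[i - w];     // differs from row neighbor
            bool depthEdge = depthEdges[i] > depthThreshold;
            if (idEdge || depthEdge) edges[i] = 255;          // mark edge pixel
        }
    }
    return edges;
}
```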
Schofield, S. Non-photorealistic Rendering: A Critical Examination and Proposed System. PhD thesis, School of Art and Design, Middlesex University, May 1994. http://www.microgds.com/index.shtml
Using Normal Maps to Find Creases and Boundaries
Decaudin, P. Cartoon-looking rendering of 3D scenes. Research Report #2919, INRIA Rocquencourt, 1996.
We can augment the silhouette edges computed with the depth map by also using surface normals. We do this with a normal map, an image that represents the surface normal at each point on an object. The values in the (R, G, B) color components of a point on the normal map correspond to the (x, y, z) surface normal at that point.
[Figures: depth map, normal map]
To compute the normal map for an object with a graphics package:
• First, we set the object color to white and the material property to diffuse reflection.
• We then place a red light on the X axis, a green light on the Y axis, and a blue light on the Z axis, all facing the object. Additionally, we put lights with negative intensity on the opposite side of each axis.
• We then render the scene to produce the normal map. Each light illuminates a point on the object in proportion to the dot product of the surface normal with the light's axis. An example is shown in Figure (c, d).
• We can then detect edges in the normal map. These edges detect changes in surface orientation and can be combined with the edges of the depth map to produce a reasonably good silhouette image (Figure (e)). A direct normal-encoding sketch follows below.
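The colored-light setup above effectively writes the surface normal into the RGB channels. As an alternative sketch, a unit normal can also be packed directly into a color; the n → 0.5n + 0.5 mapping used here is a common convention of ours, not necessarily Decaudin's exact encoding.

```cpp
// Pack a unit surface normal into an 8-bit RGB pixel of a normal map.
#include <cstdint>

struct Rgb8 { uint8_t r, g, b; };

Rgb8 encodeNormal(float nx, float ny, float nz)
{
    auto pack = [](float c) {                 // map [-1, 1] -> [0, 255]
        float v = 0.5f * c + 0.5f;
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        return static_cast<uint8_t>(v * 255.0f + 0.5f);
    };
    return { pack(nx), pack(ny), pack(nz) };
}
```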
Outline drawing with image processing. (a) Depth map. (b) Edges of the depth map. (c) Normal map. (d) Edges of the normal map. (e) The combined edge images.
Outline detection of a more complex model. (a) Depth map. (b) Depth map edges. (c) Normal map. (d) Normal map edges. (e) Combined depth and normal map edges.
Rossignac, J. and van Emmerik, M. Hidden contours on a frame-buffer. Proc. of the 7th Eurographics Workshop on Computer Graphics Hardware, 1992.
• Creates visible silhouette edges with constant thickness at the same depth value as the corresponding polygon edge
• Works well when the dihedral angle between adjacent front- and back-facing polygons is not large
• As the line width increases, gaps may occur between silhouette edges
Rossignac, J. and van Emmerik, M. Hidden contours on a frame-buffer. Proc. of the 7th Eurographics Workshop on Computer Graphics Hardware, 1992.
• Fill the background with white
• Enable back-face culling, set the depth function to "Less Than"
• Render front-facing polygons in white
• Enable front-face culling, set the depth function to "Less Than or Equal To"
• In black, draw back-facing polygons in wire-frame mode
• Repeat for a new viewpoint (an OpenGL sketch follows below)
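A minimal sketch of the two-pass technique listed above, using legacy fixed-function OpenGL; drawScene() is an assumed callback that issues the scene's polygons, and the rest is standard GL state.

```cpp
// Two-pass frame-buffer silhouettes: white front faces, then black back-face wireframe.
#include <GL/gl.h>

void renderSilhouettes(void (*drawScene)())
{
    // Fill the background with white.
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // Pass 1: front-facing polygons, filled white, depth function "Less Than".
    glCullFace(GL_BACK);
    glDepthFunc(GL_LESS);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glColor3f(1.0f, 1.0f, 1.0f);
    drawScene();

    // Pass 2: back-facing polygons as black wireframe, depth function "Less Than or Equal To",
    // so their edges appear exactly where they meet the front faces (the silhouettes).
    glCullFace(GL_FRONT);
    glDepthFunc(GL_LEQUAL);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glLineWidth(2.0f);                // wider lines may open gaps, as noted above
    glColor3f(0.0f, 0.0f, 0.0f);
    drawScene();

    // Restore defaults for subsequent rendering.
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glDepthFunc(GL_LESS);
    glCullFace(GL_BACK);
}
```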