Non-Photoreal Rendering (and other stuff) CSE 191A: Seminar on Video Game Programming Lecture 10: Non-Photoreal Rendering UCSD, Spring, 2003 Instructor: Steve Rotenberg
Info • Rockstar positions • Next year • Homework
NPR • Medium • Substrate simulation (paper, canvas…) • Medium simulation (pencil, pen, watercolor, oil paint…) • Rendering • Strokes • Edge handling • Shading & filling • Stylistic Simulation • Impressionism, cartoon, mosaic, technical illustration… • Vision & computer interpretation
Medium Simulation • For offline NPR, one can do detailed simulations of the actual substrate & drawing medium • Paper/substrate surface & volumetric characteristics • Pencil/paper interaction & tone transfer • Dynamic processes (watercolor diffusion, paint mixing…) • Real-time NPR will most likely use higher-level abstractions, but it is still important to think about the physical processes involved in drawing and painting
Strokes • Entire pictures can be described in terms of the individual brush strokes used to create them. • Brush strokes can carry lots of information, including: • Path • Tool orientation • Pressure • Ink/tone transfer rates • Other specific information • One doesn’t have to render the entire image with brush strokes, but they are a useful concept for a lot of NPR applications, especially edges.
Edge Rendering • Boundary edge • Creases & hard edges • Material edge • Silhouette edge
Surface Angle Edge Rendering • This technique uses spherical environment mapping to render darkened edges • Normals are transformed to view space (camera relative), and the x and y components of the transformed normal then map to a texture coordinate:
n' = n · W · C⁻¹
t.x = 0.5 · (n'.x + 1.0)
t.y = 0.5 · (n'.y + 1.0)
• Issues • Can be difficult to tune • Requires smooth shaded normals, so it can be difficult to apply to faceted objects
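The texture-coordinate mapping can be sketched as follows, assuming the normal has already been transformed into view space; the names are illustrative, not from the slides:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Map a view-space normal's x and y components into [0,1] texture
// coordinates for a spherical environment map whose outer ring holds
// the darkened edge color.
Vec2 EdgeMapTexCoord(const Vec3& viewNormal)
{
    Vec2 t;
    t.x = 0.5f * (viewNormal.x + 1.0f);
    t.y = 0.5f * (viewNormal.y + 1.0f);
    return t;
}
```

A normal pointing straight at the camera lands in the center of the map, while a normal perpendicular to the view direction (a grazing angle, hence an edge) lands on the rim — which is why the darkened color is painted there.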
Line Edge Rendering • With this scheme, edges are explicitly rendered as lines. • Boundary edges, material edges, and creases are identified offline and always rendered • Soft edges (potential silhouettes) are identified offline and tested at runtime. • Every soft edge contains pointers to the two triangles it connects. • If one triangle is facing the viewer and the other is not, then the edge must be a silhouette and gets rendered. • Z-Buffer biasing (or projection biasing) is used to clean up z-fighting
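The runtime silhouette test described above can be sketched like this, assuming each soft edge stores the face normals of its two triangles and a vector from the edge toward the viewer (names are assumptions):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A soft edge joins two triangles. The edge is a silhouette exactly
// when one triangle faces the viewer and the other does not.
bool IsSilhouette(const Vec3& normalA, const Vec3& normalB,
                  const Vec3& toViewer)
{
    bool frontA = Dot(normalA, toViewer) > 0.0f;
    bool frontB = Dot(normalB, toViewer) > 0.0f;
    return frontA != frontB;  // exactly one triangle is front facing
}
```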
Geometric Edge Rendering • There are a variety of specific techniques, but they all use geometric manipulation of model polygons to extend edges • Typically, front and back facing polygons are handled separately. Usually front facing ones are rendered as normal. Back facing polys are extended or moved in some fashion.
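One common variant of this idea (a sketch of the general family, not necessarily the specific method the slide has in mind) re-renders back faces with every vertex pushed outward along its normal, so the extruded back faces peek out around the model as an outline:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Push a vertex outward along its (unit) normal by outlineWidth.
// Rendering the back faces of this extruded copy in the outline
// color produces edges around the silhouette of the normal model.
Vec3 ExtrudeVertex(const Vec3& position, const Vec3& normal,
                   float outlineWidth)
{
    return { position.x + normal.x * outlineWidth,
             position.y + normal.y * outlineWidth,
             position.z + normal.z * outlineWidth };
}
```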
Image Based Edge Rendering • With this technique, the geometry is rendered into the framebuffer and z-buffer. In some variations, the x,y,z value of the normal is written into the framebuffer RGB. • After the image is rendered, the z-buffer and/or framebuffer is processed to identify discontinuities
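A minimal sketch of the discontinuity pass over the z-buffer, assuming a row-major depth buffer and a hand-tuned threshold (both are assumptions, not from the slides):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Mark a pixel as an edge when its depth differs from a right or
// lower neighbor by more than a threshold. A real pass would also
// check normals and use a larger neighborhood.
bool IsDepthEdge(const std::vector<float>& depth, int w, int h,
                 int x, int y, float threshold)
{
    float d = depth[y * w + x];
    if (x + 1 < w && std::fabs(depth[y * w + x + 1] - d) > threshold)
        return true;
    if (y + 1 < h && std::fabs(depth[(y + 1) * w + x] - d) > threshold)
        return true;
    return false;
}
```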
Stroke-Based Edges • First, edges are found using any suitable algorithm • Visible edges are processed and converted into 2D strokes • Strokes are rendered using any stylized algorithm desired
Cartoon Shading • Traditionally, cartoon characters are ‘shaded’ with discrete colors rather than smooth gradations. A simple example is the two-tone light/shadow approach. Observe that the shading boundary will probably not fall on actual vertices and polygon edges. • A simple way to do cartoon shading is by using a 1-dimensional texture map that has the desired ‘color ramp’. Instead of the lighting calculations outputting a color, they output a 1-D texture coordinate. • Another option is the use of ‘normal maps’, but these are mostly useful in more static lighting conditions. • Both of these techniques allow for arbitrary color ramps with hard and/or soft boundaries
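The 1-D ramp idea can be sketched on the CPU as follows; the two-tone ramp and the 0.5 boundary are illustrative tuning choices, not values from the slides:

```cpp
#include <algorithm>
#include <cassert>

struct Color { float r, g, b; };

// The lighting calculation outputs a 1-D texture coordinate instead
// of a color: clamp the diffuse term N·L into [0,1].
float ToonRampCoord(float nDotL)
{
    return std::max(0.0f, std::min(1.0f, nDotL));
}

// CPU stand-in for sampling a hard two-tone ramp texture with
// nearest filtering; the 0.5 boundary is an assumed tuning value.
Color SampleTwoToneRamp(float u, const Color& shadow, const Color& lit)
{
    return (u < 0.5f) ? shadow : lit;
}
```

Because the ramp lives in a texture, the shading boundary is resolved per pixel by the texture hardware, which is why it does not need to fall on vertices or polygon edges.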
Stylistic Rendering • Edge handling • Medium simulation • Image space patterns • Surface patterns
Surface Based Style Patterns • “Real-Time Hatching”, Praun, et al., SIGGRAPH 2001
Style Patterns • Klein, et al., SIGGRAPH 2000
Mosaics & Patterns • Using off-screen rendering, an image can be first rendered as a normal bitmap, and then re-sampled by an irregular pattern of colored polygons
Image Processing Effects • By using off-screen rendering to implement a two-pass scheme, one can do whatever type of image processing desired to achieve whatever effect necessary (subject to hardware limitations, of course) • Examples • Solarization • Thresholding • Edge detection • Blurring
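One of the listed effects, thresholding, can be sketched as a pass over an off-screen luminance buffer; the buffer layout is an assumption for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Second pass of the two-pass scheme: map every luminance sample to
// pure black or pure white depending on a cutoff value.
std::vector<unsigned char> Threshold(const std::vector<unsigned char>& lum,
                                     unsigned char cutoff)
{
    std::vector<unsigned char> out(lum.size());
    for (std::size_t i = 0; i < lum.size(); ++i)
        out[i] = (lum[i] >= cutoff) ? 255 : 0;
    return out;
}
```

The other listed effects fit the same shape: solarization and blurring are likewise per-pixel or neighborhood operations on the off-screen image before it is presented.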
Non-Photoreal Animation? • Squash & stretch • Free-form deformations • Cartoon physics (& modal dynamics) • Nonlinear camera projections • Silly particles & effects
C++ • Advantages • Availability on a wide range of platforms (including brand new ones) • Standardization (code portability, people portability) • Performance • Compatibility with DirectX, OpenGL, and middleware • Disadvantages • Can lead to memory/performance bloat if not used carefully • Has been surpassed by ‘cleaner’ languages
Object Oriented Programming • Use objects for things that should be objects. • Use member functions for things that should be member functions. • Use virtual functions only when there is a clear need for polymorphism • Prefer containment over inheritance when appropriate • Define clear software layers • Use coding conventions • Code mitosis & refactoring • Use a lot of automated testing
Levelization • Minimize cyclic and complex dependencies • Define explicit levels and boundaries for class interaction • Collect groups of related classes into packages
Game Library Levelization • Core libraries (data structures, file IO, math routines…) • Device libraries (rendering, audio, input, networking) • Development libraries (widgets, testing frameworks…) • Graphics (culling, effects, lighting) • Physics (collision detection, particles, rigid bodies…) • Character animation • Game components (weapons, HUD…) • AI • Game
Software Engineering References • “Large Scale C++ Software Design”, Lakos, 1996 • “Design Patterns”, Gamma, et al., 1995 • “Extreme Programming Explained”, Beck, 2000 • “Agile Software Development”, Cockburn, 2002
Core Components • Rigid body motion • Collision detection • Rigid body collision response • Wheel physics (suspension, friction) • Engine/drivetrain
Car Update
For each car {
    Update engine based on current throttle
    For each wheel {
        Compute suspension & friction forces based on the final configuration of the previous frame, but use the new engine torque & current control inputs
        Apply forces to rigid body
    }
    Apply other forces to car (gravity…)
    Move car to new candidate position
}
Detect & resolve rigid collisions
For each car {
    For each wheel {
        Do collision probe to find new wheel orientation
    }
}