KIPA Game Engine Seminars Day 6 Jonathan Blow Ajou University December 2, 2002
Level-of-Detail Method Overview • Traditional Purpose: Speed Boost • Ideal: Always render a fixed number of triangles • Doesn’t matter how far your view stretches into the distance • Diagram of pixel tessellation • Object detail / triangle count as a function of distance
Future Purpose: Geometric Antialiasing • Discussion of scenes with many small objects far away • In a rendering paradigm like MCRT (Monte Carlo ray tracing) we get a certain amount of antialiasing for free • When projecting geometry onto the screen, we do not; we need to implement something that provides antialiasing for us
Level-of-Detail Methods • Static mesh switching • Progressive mesh • Continuous-LOD mesh • Issues involving big objects (static and progressive mesh not good enough?)
Static mesh switching • Pre-generate a series of meshes decreasing in detail • Switch between them based on z distance of the mesh from the camera • Perhaps be more analytical and switch based on max. projected pixel error? • Nobody actually does this because it is far too conservative
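To make the switching concrete: below is a minimal sketch of distance-based selection. This is not code from the seminar; the names LodLevel and select_lod are illustrative.

```cpp
#include <cstddef>
#include <vector>

// One pre-generated mesh plus the camera distance at which it takes over.
// Levels are assumed sorted by increasing switch_distance, with level 0
// being the full-detail mesh.
struct LodLevel {
    int   mesh_id;          // handle to a pre-built mesh (placeholder)
    float switch_distance;  // use this level once the object is at least this far away
};

// Pick which pre-generated mesh to draw, based purely on z distance.
// The more analytical variant would precompute a max projected pixel
// error per level and switch when that error falls below a threshold.
int select_lod(const std::vector<LodLevel>& levels, float distance_to_camera) {
    std::size_t chosen = 0;
    for (std::size_t i = 0; i < levels.size(); i++) {
        if (distance_to_camera >= levels[i].switch_distance)
            chosen = i;
        else
            break;
    }
    return levels[chosen].mesh_id;
}
```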
Progressive Mesh • Generate one sequence of collapses that takes you from high-res to 1 triangle • Dynamically select number of triangles at runtime • Works well with modern 3D hardware since you only modify a little bit of the index buffer at a time.
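A minimal sketch of the runtime side, assuming the common arrangement (not spelled out in the slide) where vertices are pre-sorted so that vertex i collapses onto collapse_map[i], with collapse_map[i] < i:

```cpp
#include <cstddef>
#include <vector>

// Chase a vertex down the pre-computed collapse chain until it lands on
// a vertex that is still live at the current detail level.
static int resolve(const std::vector<int>& collapse_map, int v, int live_verts) {
    while (v >= live_verts)
        v = collapse_map[v];
    return v;
}

// Rebuild the index buffer for a given vertex budget.  Triangles whose
// corners collapse together degenerate and are dropped, which is how the
// triangle count shrinks.  In practice you would patch only the indices
// that changed since the last frame rather than rebuilding from scratch.
std::vector<int> build_indices(const std::vector<int>& full_indices,
                               const std::vector<int>& collapse_map,
                               int live_verts) {
    std::vector<int> out;
    for (std::size_t i = 0; i + 2 < full_indices.size(); i += 3) {
        int a = resolve(collapse_map, full_indices[i],     live_verts);
        int b = resolve(collapse_map, full_indices[i + 1], live_verts);
        int c = resolve(collapse_map, full_indices[i + 2], live_verts);
        if (a != b && b != c && a != c)  // skip degenerate triangles
            out.insert(out.end(), { a, b, c });
    }
    return out;
}
```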
Progressive Mesh Disadvantages • Relies on frame coherence (bad!) • Interferes with triangle stripping and vertex cache sorting (they become mutually exclusive) • High code complexity; it makes everything else more complicated and adds restrictions throughout • Example of normal map generation restricted to object space
Continuous Level-of-Detail Algorithms • Lindstrom-Koller, ROAM, Röttger quadtree algorithm • Dynamically update tessellation based on estimate of screen-space error • Crack fixing between adjacent blocks, etc
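The screen-space error estimate these algorithms share looks roughly like this; the exact constants and error definitions vary from paper to paper, so treat it as a generic sketch:

```cpp
#include <cmath>

// Estimate how many pixels an object-space geometric error 'delta'
// covers when seen from 'distance'.  fov_y is the vertical field of
// view in radians; screen_height is in pixels.
float projected_error_pixels(float delta, float distance,
                             float fov_y, float screen_height) {
    // Pixels per world unit for an object at unit distance from the eye.
    float scale = screen_height / (2.0f * std::tan(fov_y * 0.5f));
    return delta * scale / distance;
}

// A block or vertex is refined when its projected error exceeds a
// user-chosen pixel tolerance, e.g.:
//   if (projected_error_pixels(delta, dist, fov, 768.0f) > 1.0f) refine();
```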
Continuous LOD • Example of binary triangle trees • There are other formats (quadtree, diamond, etc) but the ideas are similar
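A sketch of the bintree node and the recursive "forced split" that keeps neighboring triangles crack-free, in the ROAM style (field names follow the usual ROAM conventions; the child and neighbor re-wiring is deliberately elided):

```cpp
// Minimal binary-triangle-tree node.  Each right triangle knows its two
// children and its three edge neighbors.
struct BinTriNode {
    BinTriNode* left_child     = nullptr;
    BinTriNode* right_child    = nullptr;
    BinTriNode* base_neighbor  = nullptr;  // across the hypotenuse
    BinTriNode* left_neighbor  = nullptr;
    BinTriNode* right_neighbor = nullptr;
};

// Split a triangle along its hypotenuse.  If the base neighbor is
// coarser than us (its base neighbor is not us), it must be split
// first; afterwards t->base_neighbor should be re-pointed at the new
// same-level child (that wiring is omitted here).
void split(BinTriNode* t) {
    if (t->left_child) return;  // already split
    if (t->base_neighbor && t->base_neighbor->base_neighbor != t)
        split(t->base_neighbor);  // forced split of the coarser neighbor
    t->left_child  = new BinTriNode;
    t->right_child = new BinTriNode;
    // A same-level base neighbor is split together with us as a
    // "diamond" so the shared edge midpoint matches on both sides;
    // that, plus the children's neighbor pointers, is the fiddly part.
}
```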
Continuous LOD Disadvantages • Extremely complicated implementations • Slow on modern hardware • Extreme reliance on frame coherence (bad!) • Not conducive to unified rendering (hard to make work on curved surfaces, arbitrary topologies)
Continuous LOD • Has a lot of hype in the amateur and academic communities • Is currently not competitive with other LOD approaches • This is not likely to change any time soon
Introduction • We need an effective way to benchmark / judge LOD schemes • The academic world is not really doing this right now! • We need a standard set of data with comparable results • University of Waterloo Brag Zone for image compression
LOD Metric? • We often create metrics for taking each small step in a geometric reduction • We don’t have a metric for comparing a fully reduced mesh with the source model or another reduced mesh • Because our mesh representations are so ad hoc
Image Compression guys have a metric • (even though they know it’s not that good) • PSNR measures difference between compressed image and original • They know it has problems (not perceptually driven) and are working on a better metric • But at least they have a way of comparing results, which means they are sort of doing science!
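For reference, PSNR is simple enough to fit on a slide; this is the standard definition for 8-bit images:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Peak signal-to-noise ratio between an original and a compressed 8-bit
// image, in decibels.  Higher is better; identical images give infinity.
double psnr(const std::vector<std::uint8_t>& original,
            const std::vector<std::uint8_t>& compressed) {
    double mse = 0.0;
    for (std::size_t i = 0; i < original.size(); i++) {
        double d = double(original[i]) - double(compressed[i]);
        mse += d * d;
    }
    mse /= double(original.size());
    if (mse == 0.0) return INFINITY;
    return 10.0 * std::log10(255.0 * 255.0 / mse);  // 255 = peak signal value
}
```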
Metric ideas • “Sum of closest-point distances” • Continuous, which is good • Very expensive to compute • Non-monotonic (!), which is bad; usually monotonic for small changes, though, which might be good enough • Ignores texture warping, which is bad (unless we try it in 5-dimensional space) • Ignores vertex placement, which matters for rasterization (iterated vertex properties!); example of a big flat area • Ignores cracks in destination model
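A naive sketch of the closest-point idea, sampled vertex-to-vertex; a real implementation would sample the surfaces themselves and use a spatial structure, and the quadratic cost here is exactly the "very expensive" problem noted above:

```cpp
#include <cfloat>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sum, over every vertex of mesh A, of the distance to the closest
// vertex of mesh B.  Note it is not symmetric; sum both directions if
// you want a symmetric measure.
float sum_of_closest_point_distances(const std::vector<Vec3>& a,
                                     const std::vector<Vec3>& b) {
    float total = 0.0f;
    for (const Vec3& p : a) {
        float best = FLT_MAX;
        for (const Vec3& q : b)
            best = std::fmin(best, dist2(p, q));
        total += std::sqrt(best);
    }
    return total;
}
```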
Lindstrom/Turk screenspace LOD comparison • Guide compression of a mesh by taking snapshots of it from many different viewpoints and PSNR’ing the images • This can work but PSNR is not necessarily stable with respect to small image-space motions
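The shape of the comparison is roughly as follows. Mesh, Camera, and render_model are hypothetical placeholders standing in for your renderer, and psnr is the function sketched earlier; none of this is from the paper's actual code.

```cpp
#include <cstdint>
#include <vector>

struct Mesh   { /* your mesh data (placeholder) */ };
struct Camera { /* your viewpoint parameters (placeholder) */ };

// Hypothetical: draw a mesh from a viewpoint into a grayscale buffer.
std::vector<std::uint8_t> render_model(const Mesh&, const Camera&);
// PSNR between two equally-sized images, as sketched earlier.
double psnr(const std::vector<std::uint8_t>&, const std::vector<std::uint8_t>&);

// Judge a simplification by what it looks like: render the original and
// the simplified model from many viewpoints and average the image PSNRs.
double image_space_similarity(const Mesh& original, const Mesh& simplified,
                              const std::vector<Camera>& viewpoints) {
    if (viewpoints.empty()) return 0.0;
    double total = 0.0;
    for (const Camera& cam : viewpoints)
        total += psnr(render_model(original, cam),
                      render_model(simplified, cam));
    return total / double(viewpoints.size());
}
```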
Lindstrom/Turk screenspace LOD comparison • (Talking about paper, showing figures from it)
The Fundamental Problem • Our rendering methods are totally ad hoc; we have 3 different things: • Vertices • Topology • Texture • Building a metric that uniformly integrates these things is very difficult.
Complexity of metric • The more complicated a metric is, the more difficult it is to program correctly and to be sure we are using it correctly • That our simplest possible metric should be something so complicated … that is a bad sign.
Compare with Voxels • Voxel geometry representations can basically use something like PSNR directly; no need for complicated metrics • Lightfields can also (though it’s a little harder)
“Digital Geometry Processing” • Work by Peter Schröder at Caltech, and many others • Attempts to develop DSP-like ideas for geometry manipulation • Heavy use of subdivision surfaces
How DGP works • Apply a scaled filter kernel to the neighborhood of a vertex • Like wavelet image analysis in its multiscale aspects • But unlike wavelets/DSP in that the inputs/outputs are not homogeneous • What exactly is the high-pass residual after a low-pass filter? • This is because of that whole topology-different-from-vertices thing
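A generic sketch of one such filtering step: plain Laplacian ("umbrella") smoothing, not Schröder's actual machinery. Each vertex moves toward the average of its one-ring neighbors; the displacement this removes is one candidate answer to the high-pass-residual question.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One low-pass step: move each vertex 'weight' of the way toward the
// average of its one-ring.  neighbors[i] lists the vertices adjacent to
// vertex i; the mesh connectivity itself is untouched, which is the
// vertices-versus-topology inhomogeneity the slide complains about.
std::vector<Vec3> lowpass_step(const std::vector<Vec3>& verts,
                               const std::vector<std::vector<int>>& neighbors,
                               float weight) {
    std::vector<Vec3> out = verts;
    for (std::size_t i = 0; i < verts.size(); i++) {
        const std::vector<int>& ring = neighbors[i];
        if (ring.empty()) continue;
        Vec3 avg = { 0, 0, 0 };
        for (int j : ring) {
            avg.x += verts[j].x; avg.y += verts[j].y; avg.z += verts[j].z;
        }
        float n = float(ring.size());
        avg.x /= n; avg.y /= n; avg.z /= n;
        out[i].x += weight * (avg.x - verts[i].x);
        out[i].y += weight * (avg.y - verts[i].y);
        out[i].z += weight * (avg.z - verts[i].z);
    }
    return out;
}
```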
Actual effective DGP would be …? • I don’t know. (It’s a hard problem!) • Spherical harmonics would work, for shapes representable as functions over the sphere
What I Use • Garland/Heckbert Error Quadric Simplification • Static Mesh Switching • I want to do a unified renderer this way (characters, terrain, big airplanes, whatever) • People seem to think crack fixing is hard but it is actually easy • Maybe that’s why people haven’t tried this yet?
Discussion of Garland/Heckbert Algorithm • (whiteboard)
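For reference, the heart of the algorithm fits in a few lines. Following the paper: each plane p = (a, b, c, d), with ax + by + cz + d = 0 and (a, b, c) unit length, contributes the outer product p pT to a vertex's 4x4 quadric Q, and the cost of placing the vertex at v = (x, y, z, 1) is vT Q v, the sum of squared distances to all accumulated planes.

```cpp
// Error quadric in the Garland/Heckbert style.
struct Quadric {
    double m[4][4] = {};  // symmetric 4x4, zero-initialized

    // Accumulate the quadric of one plane ax + by + cz + d = 0,
    // with (a, b, c) of unit length.
    void add_plane(double a, double b, double c, double d) {
        double p[4] = { a, b, c, d };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += p[i] * p[j];
    }

    // Quadrics are additive: an edge collapse merges the two endpoint
    // quadrics by summing them.
    void add(const Quadric& q) {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += q.m[i][j];
    }

    // Cost of placing a vertex at (x, y, z): vT Q v with v = (x, y, z, 1).
    double error(double x, double y, double z) const {
        double v[4] = { x, y, z, 1.0 };
        double e = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                e += v[i] * m[i][j] * v[j];
        return e;
    }
};
```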
Garland/Heckbert References • “Surface Simplification Using Quadric Error Metrics” • “Simplifying Surfaces with Color and Texture using Quadric Error Metrics”
G/H is also useful if you are making progressive meshes • It just tells you how to collapse the mesh; it doesn’t dictate how you use that information.
Review of GH Algorithm In Code • (looking at the code)
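The code examined in the seminar is not reproduced here; as a stand-in, here is a skeleton of the paper's greedy loop. Pair and the commented-out steps are placeholders, and Quadric is the struct from the earlier sketch, restated in condensed form so this compiles on its own.

```cpp
#include <functional>
#include <queue>
#include <vector>

// Condensed restatement of the Quadric struct sketched earlier.
struct Quadric {
    double m[4][4] = {};
    void add(const Quadric& q) {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += q.m[i][j];
    }
};

// A candidate edge collapse: v2 merged into v1, with the cost
// vT (Q1 + Q2) v evaluated at the optimal position.
struct Pair {
    int v1, v2;
    double cost;
    bool operator>(const Pair& o) const { return cost > o.cost; }
};

// Greedy loop: repeatedly perform the cheapest collapse until the
// target face count is reached.
void simplify(std::vector<Quadric>& quadrics,
              std::priority_queue<Pair, std::vector<Pair>, std::greater<Pair>>& pairs,
              int target_faces, int& current_faces) {
    while (current_faces > target_faces && !pairs.empty()) {
        Pair p = pairs.top();
        pairs.pop();
        // 1. Skip stale pairs whose endpoints already moved (not shown).
        // 2. The surviving vertex inherits the summed quadric.
        quadrics[p.v1].add(quadrics[p.v2]);
        // 3. Move v1 to the cost-minimizing position, rewire v2's
        //    triangles onto v1, and update current_faces (not shown).
        // 4. Recompute costs for pairs touching v1 and re-queue them.
    }
}
```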