Computer Graphics Research at Virginia David Luebke, Department of Computer Science
Outline • My current research • Perceptually Driven Interactive Rendering • Perceptual level of detail control • Wacky new algorithms • Scanning Monticello • Graphics resources • Building an immersive display • Building a rendering cluster?
Perceptual Rendering • Next few slides from a recent talk • Apologies to UVA vision group
Perceptually Guided Interactive Rendering David Luebke University of Virginia
Motivation: Stating The Obvious • Interactive rendering of large-scale geometric datasets is important • Scientific and medical visualization • Architectural and industrial CAD • Training (military and otherwise) • Entertainment
Motivation: Model Size • Incredibly, 3-D models are growing as fast as hardware is getting faster…
Big Models: Submarine Torpedo Room 1994: 700,000 polygons Courtesy General Dynamics, Electric Boat Div.
Big Models: Coal-fired Power Plant 1997: 13 million polygons (Anonymous)
Big Models: Plant Ecosystem Simulation 1998: 16.7 million polygons Deussen et al: Realistic Modeling of Plant Ecosystems
Big Models: Double Eagle Container Ship 2000: 82 million polygons Courtesy Newport News Shipbuilding
Big Models: The Digital Michelangelo Project 2000 (David): 56 million polygons 2001 (St. Matthew): 372 million polygons Courtesy Digital Michelangelo Project
(Part Of) The Solution: Level of Detail • Clearly, much of this geometry is redundant for a given view • The idea: simplify complex models by reducing the level of detail used for small, distant, or unimportant regions
Traditional Level of Detail: In A Nutshell… • Create levels of detail (LODs) of objects: 249,924 polys 62,480 polys 7,809 polys 975 polys Courtesy Jon Cohen
Traditional Level of Detail: In A Nutshell… • Distant objects use coarser LODs:
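The distance-based switching on these slides can be sketched as follows. This is an illustration only: the mesh names echo the bunny LODs above, but the switch distances are made-up values, not ones from the talk.

```python
# Discrete LOD selection: pick the first LOD whose switch distance
# covers the object's distance from the eyepoint.
# Switch distances (world units) are illustrative, not from the talk.
LODS = [
    (10.0, "bunny_249924.mesh"),       # full resolution when close
    (25.0, "bunny_62480.mesh"),
    (50.0, "bunny_7809.mesh"),
    (float("inf"), "bunny_975.mesh"),  # coarsest LOD beyond 50 units
]

def select_lod(distance):
    """Return the mesh for the first LOD whose switch distance
    is at least the object's distance from the viewer."""
    for switch_distance, mesh in LODS:
        if distance <= switch_distance:
            return mesh
    return LODS[-1][1]
```

Because the LODs were created in a preprocess, this run-time choice is just a table lookup per object per frame.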
The Big Question How should we evaluate and regulate the visual fidelity of our simplifications?
Measuring Fidelity • Fidelity of a simplification to the original model is often measured geometrically: METRO by Visual Computing Group, CNR-Pisa
Measuring Visual Fidelity • However… • The most important measure of fidelity is usually not geometric but perceptual: does the simplification look like the original? • Therefore: • We are developing a principled framework for LOD in interactive rendering, based on perceptual measures of visual fidelity
Perceptually Guided LOD: Questions And Issues • Several interesting offshoots: • Imperceptible simplification • When can we claim simplification is undetectable? • Best-effort simplification • How best to spend a limited time/polygon budget? • Silhouette preservation • Silhouettes are important. How important? • Gaze-directed rendering • When can we exploit reduced visual acuity?
Related Work: Perceptually Guided Rendering • Lots of excellent research on perceptually guided rendering • But most work has focused on offline rendering algorithms (e.g., path tracing) • Different time frame! • Seconds or minutes vs. milliseconds • Sophisticated metrics: • Visual masking, background adaptation, etc…
Perceptually Guided LOD: Our Approach • Approach: test folds (local simplification operations) against a perceptual model to determine if they would be perceptible [figure: folding a cluster of vertices into node A and unfolding it back]
Perception 101:The Contrast Sensitivity Function • Perceptual scientists have long used contrast gratings to measure limits of vision: • Bars of sinusoidally varying intensity • Can vary: • Contrast • Spatial frequency • Eccentricity • Velocity • Etc…
Perception 101: The Contrast Sensitivity Function • Contrast grating tests produce a contrast sensitivity function • Threshold contrast vs. spatial frequency • CSF predicts the minimum detectable static stimuli
Your Personal CSF Campbell-Robson Chart by Izumi Ohzawa
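A widely used analytic fit to the CSF is the Mannos-Sakrison model; a sketch (an illustrative choice of model, not necessarily the one used in this work):

```python
import math

def csf(f):
    """Mannos-Sakrison analytic fit to the contrast sensitivity
    function: relative sensitivity vs. spatial frequency f in
    cycles/degree. Sensitivity peaks in the mid frequencies and
    falls off at low and high frequencies; threshold contrast is
    the reciprocal of sensitivity."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))
```

The band-pass shape is exactly what the Campbell-Robson chart demonstrates: the visible boundary between grating and uniform gray is highest in the middle frequencies.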
Framework: View-Dependent Simplification • Next: need a framework for simplification • We use view-dependent simplification for LOD management • Traditional LOD: create several discrete LODs in a preprocess, pick one at run time • View-dependent LOD: create data structure in preprocess, extract an LOD for the given view
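A common refinement criterion in view-dependent LOD is screen-space geometric error: unfold a node whenever its object-space error would project to more than a threshold number of pixels. A minimal sketch with illustrative names and default values (not VDSlib's actual API):

```python
import math

def projected_error_pixels(node_error, distance, fov_y_deg, viewport_height):
    """Project a node's object-space error (world units) to pixels
    at the given distance, for a symmetric perspective camera."""
    # World-space size of one pixel at this distance.
    pixel_size = (2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0)
                  / viewport_height)
    return node_error / pixel_size

def should_refine(node_error, distance, fov_y_deg=60.0,
                  viewport_height=1080, threshold_pixels=1.0):
    """Refine (unfold) the node if its simplification error
    would be larger than threshold_pixels on screen."""
    return projected_error_pixels(
        node_error, distance, fov_y_deg, viewport_height) > threshold_pixels
```

Because the test depends on distance, nearby regions of the same object refine further than distant ones, which is precisely the behavior the next slides illustrate.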
View-Dependent LOD: Examples • Show nearby portions of object at higher resolution than distant portions View from eyepoint Bird's-eye view
View-Dependent LOD: Examples • Show silhouette regions of object at higher resolution than interior regions
View-Dependent LOD: Examples • Show more detail where the user is looking than in their peripheral vision: 34,321 triangles
View-Dependent LOD: Examples • Show more detail where the user is looking than in their peripheral vision: 11,726 triangles
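Gaze-directed rendering exploits the falloff of resolvable spatial frequency with retinal eccentricity: detail the viewer cannot resolve in the periphery need not be drawn. A sketch using a standard hyperbolic falloff, with illustrative constants (a foveal cutoff and a half-acuity eccentricity; not values from the talk):

```python
def max_resolvable_frequency(eccentricity_deg, f0=60.0, e2=2.3):
    """Peak resolvable spatial frequency (cycles/degree) at a given
    eccentricity from the gaze point, using a hyperbolic falloff.
    f0 (foveal cutoff) and e2 (eccentricity at which acuity halves)
    are illustrative constants."""
    return f0 * e2 / (e2 + eccentricity_deg)
```

Any fold whose induced frequencies lie above this cutoff at the fold's eccentricity is invisible, which is why the second bunny needs only a third of the triangles of the first.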
View-Dependent LOD: Implementation • We use VDSlib, our public-domain library for view-dependent simplification • Briefly, VDSlib uses a big data structure called the vertex tree • Hierarchical clustering of model vertices • Updated each frame for current simplification
The Vertex Tree: Region Of Effect • Folding a node affects a limited region: • Some triangles change shape upon folding • Some triangles disappear completely [figure: folding and unfolding node A and its region of effect]
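The perceptibility test for folds can be sketched as: estimate the contrast a fold would induce over its region of effect, find the lowest spatial frequency that region can carry, and fold only if the induced contrast is below threshold at that worst-case frequency. This is illustrative only, using the Mannos-Sakrison CSF fit and a made-up peak sensitivity rather than the exact formulation behind VDSlib:

```python
import math

def _normalized_csf(f):
    """Mannos-Sakrison normalized CSF fit (peaks near 8 cycles/deg)."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

def contrast_threshold(f, peak_sensitivity=100.0):
    """Minimum detectable contrast at spatial frequency f.
    peak_sensitivity is an illustrative scale factor."""
    return 1.0 / (peak_sensitivity * _normalized_csf(f))

def fold_is_imperceptible(induced_contrast, region_extent_deg):
    """A fold whose region of effect spans region_extent_deg degrees
    of visual angle cannot induce spatial frequencies below one cycle
    per 2*extent; if the contrast it induces is below threshold at
    that worst-case frequency, the fold is predicted to be invisible."""
    lowest_frequency = 1.0 / (2.0 * region_extent_deg)
    return induced_contrast < contrast_threshold(lowest_frequency)
```

The conservative step is using the lowest inducible frequency: because the CSF is band-pass, a tiny region of effect pushes the frequency past the visible range, so small folds pass the test even at fairly high contrast.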
Wacky New Algorithms • I am interested in exploring new perceptually-driven rendering algorithms • Don't necessarily fit constraints of today's hardware • Ex: frameless rendering • Ex: I/O differencing (time permitting) • Give the demo, show the movie…
Non-Photorealistic Rendering (time permitting) • Fancy name, simple idea: Make computer graphics that don’t look like computer graphics
NPRlib • NPRlib: flexible callback-driven NP rendering Bunny: Traditional CG Rendering
Non-Photorealistic Rendering • NPRlib: flexible callback-driven NP rendering Bunny: Pencil-Sketch Rendering
Non-Photorealistic Rendering • NPRlib: flexible callback-driven NP rendering Bunny: Charcoal Smudge Rendering
Non-Photorealistic Rendering • NPRlib: flexible callback-driven NP rendering Bunny: Two-Tone Rendering
Scanning Monticello • Fairly new technology: scanning the world
Scanning Monticello • Want a flagship project to showcase this • Idea: scan Thomas Jefferson's Monticello • Historic preservation • Virtual tours • Archeological and architectural research, documentation, and dissemination • Great driving problem for scanning & rendering research • Results from the first pilot project • Show some data…
Graphics Resources • 2 SGI Octanes • Midrange graphics hardware • SGI InfiniteReality2 • 2 x 225 MHz R10K, 1 GB RAM, 4 MB cache • High-end graphics hardware: 13 million triangles/sec, 64 MB texture memory • Hot new PC platforms (P3s and P4s) • High-end cards built on nVidia's best chipsets • Stereo glasses, digital video card, miniDV stuff • Quad Xeon on loan • Software! • Maya, Renderman, Lightscape, Multigen, etc.
Graphics Resources • Building an immersive display • NSF grant to build a state-of-the-art immersive display: • 6 projectors, 3 screens, passive stereo • High-end wide-area head tracker • 8-channel spatial audio • PCs to drive it all • Need some help building it…
Graphics Resources • Building a rendering cluster? • Trying to get money to build a high-end rendering cluster for wacky algorithms • 12 dual-Xeon PCs: • 1 GB RAM • 72 GB striped RAID • nVidia GeForce3 • Gigabit interconnect • Don't have the money yet, but do have 6 hot Athlon machines
More Information • I only take students who've worked with me or otherwise impressed me • Summer work: best • Semester work: fine, but harder • Interested in graphics? • Graphics Lunch: Fridays @ noon, OLS 228E • An informal seminar/look at cool graphics papers • Everyone welcome, bring your own lunch