An Efficient Spatio-Temporal Architecture for Animation Rendering Vlastimil Havran, Cyrille Damez, Karol Myszkowski, and Hans-Peter Seidel Max-Planck-Institut für Informatik, Saarbrücken, Germany
Motivation • In the traditional approach to rendering high-quality animation sequences, every frame is considered separately. • Temporal coherence is poorly exploited: • redundant computations. • The visual sensitivity to temporal detail cannot be properly accounted for: • overly conservative stopping conditions, • temporal aliasing.
Goal • Develop an architecture for efficient rendering of high-quality animations that better exploits the spatio-temporal coherence between frames: • Visibility: multi-frame ray tracing • Global illumination: bi-directional path tracing extended to re-use samples between frames • Texturing and shading: sharing information between frames • Motion blur: conservative computation in 2D image space • Cache use: coherent patterns of access to data structures in memory (along motion compensation trajectories) • Memory requirements: unlimited image resolution
Previous Work • Industrial solutions such as Photorealistic RenderMan, the Maya Rendering System, or Softimage • frame-by-frame computation • postprocessing of the animation to decrease temporal aliasing
Previous Work • Temporal Coherence in Ray Tracing • Coherence of shaded pixels • Interpolation between frames [Maisel and Hegron 92] • Reprojection [Adelson and Hughes 95] • 4D radiance interpolants [Bala et al. 99] • Coherence in acceleration data structures • Space-time solutions [Glassner 88, Groeller 91] • Static vs. dynamic objects [Besuievsky and Pueyo 01, Lext and Moller 01, and Wald et al. 02]
Previous Work • Temporal Coherence in Global Illumination • Coherence of illumination samples • Render cache [Walter et al. 99] • Shading cache [Tole et al. 02] • Reusing photon hit points [Myszkowski et al. 01] • Selective Photon Tracing [Dmitriev 02] • Coherence in visibility computation • Global lines [Besuievsky and Pueyo 01] • Coherence in generation of random numbers • Random number sequence associated with each light transport path [Lafortune 96, Jensen 01, Wald et al. 02]
Space-Time Architecture: Principle • Compute samples using a variant of path tracing • Pixel color = mean of sample values • Two types of samples: • Native samples: • expensive, • computed from scratch • Recycled samples: • cheap, • based on previous computations of native samples (using reprojection)
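The native/recycled distinction above can be sketched as follows. This is a minimal illustration, not the authors' code: the class name and interface are assumptions; the point is only that both sample kinds contribute equally to the per-pixel mean.

```python
class PixelAccumulator:
    """Accumulates radiance samples for one pixel of one frame;
    the final pixel color is the mean of all sample values."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add_native(self, value):
        # Native sample: computed from scratch (expensive path trace).
        self.total += value
        self.count += 1

    def add_recycled(self, value):
        # Recycled sample: reprojected from a native sample computed
        # for a nearby frame (cheap).
        self.total += value
        self.count += 1

    def color(self):
        return self.total / self.count if self.count else 0.0
```

Because recycled samples are much cheaper than native ones, keeping their proportion high (the paper reports only 2.4-4.7% native samples) is what drives the overall speedup.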
Motion Compensation • Camera and object motion compensation • Memory access coherence
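A rough sketch of camera motion compensation, under simplifying assumptions (axis-aligned pinhole camera, translation only; the helper names are hypothetical): a sample's world-space hit point is reprojected into each frame's camera, and recycled samples are written along the resulting image-space trajectory, which keeps memory accesses coherent.

```python
def reproject(point, cam_pos, focal=1.0):
    """Project a world point with a pinhole camera translated to
    cam_pos, looking down +z (illustration only)."""
    x, y, z = (point[i] - cam_pos[i] for i in range(3))
    return (focal * x / z, focal * y / z)

def trajectory(point, cam_positions):
    """Image-space positions of one sample across consecutive frames.
    Writing recycled samples along this motion-compensated path gives
    coherent access patterns into the frame buffers."""
    return [reproject(point, c) for c in cam_positions]
```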
Multi-Frame Ray Tracing • Aggregate queries • Single ray geometry • Results possible for all frames in [f(i-R-S), f(i+R)] • Two types of visibility queries • Ray shooting • Visibility between two points • Three settings of visibility queries • Exact result for single frame f(i) • Exact for single frame f(i) + validity for all other frames • Exact results for all frames in [f(i-R), f(i+R)]
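The aggregate-query idea can be sketched as follows. This is an assumed interface, not the paper's implementation: a single ray is intersected once against the static geometry, and only the dynamic objects are re-tested per frame, yielding exact results for every frame in [f(i-R), f(i+R)].

```python
def shoot_multi_frame(ray, static_scene, dynamic_scenes, i, R):
    """Return {frame: nearest hit distance or None} for frames i-R..i+R.

    static_scene:   callable ray -> distance or None (shared by frames)
    dynamic_scenes: dict frame -> callable ray -> distance or None
    """
    t_static = static_scene(ray)  # computed once, reused for all frames
    results = {}
    for f in range(i - R, i + R + 1):
        t_dyn = dynamic_scenes[f](ray)
        hits = [t for t in (t_static, t_dyn) if t is not None]
        results[f] = min(hits) if hits else None
    return results
```

The saving comes from amortizing the static-scene traversal, which typically dominates, over all frames in the range.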
Multi-Frame Ray Tracing Construction of Spatial Data Structure • Instantiation of dynamic objects for the range of frames • Separation of static and dynamic objects • Construction of a global kd-tree over static objects • Hierarchical clustering of dynamic objects • Construction of kd-trees for the clusters of objects • Insertion of the cluster kd-trees into global kd-tree leaves • Refinement of global kd-tree leaves + cutting off empty space (efficiency improvement methods) Additional techniques used • Mailboxes (caches) for ray transforms, objects, and kd-trees • Frame masks for inserted kd-trees and shadow cache
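The construction pipeline above can be outlined structurally. All types here are assumptions passed in from outside (the real kd-tree builder and clustering are far more involved); the sketch only shows the two-level organization: one global tree over static geometry, with per-cluster trees for dynamic objects inserted into its leaves and tagged with their frames of validity.

```python
def build_spatial_structure(static_objs, dynamic_objs, frames,
                            build_kdtree, cluster):
    """Two-level acceleration structure for a range of frames.

    build_kdtree: callable objects -> tree (tree has .insert(sub, valid_frames))
    cluster:      callable objects -> list of object groups
    """
    # 1. Global kd-tree over static geometry, shared by all frames.
    global_tree = build_kdtree(static_objs)
    # 2. Hierarchical clustering of the dynamic objects...
    for group in cluster(dynamic_objs):
        sub = build_kdtree(group)
        # 3. ...each cluster tree is inserted into the global leaves it
        #    overlaps, with a frame mask recording when it is valid.
        global_tree.insert(sub, valid_frames=frames)
    return global_tree
```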
The Animation Buffer • Iterates over all pixels in S consecutive frames • If more samples are required: • compute a native sample for frame f(i), • reproject and recycle it for all frames in [f(i-R), f(i+R)] • S+2R frames are kept in the buffer • Frames leaving the buffer are saved to disk
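The buffer bookkeeping above can be sketched with two small helpers (hypothetical names, not the paper's code): native samples are computed for S consecutive frames, each is recycled to the R frames before and after it, so S + 2R frame buffers must be resident at once.

```python
def frames_in_buffer(i_first, S, R):
    """Frames that must be resident while native samples are computed
    for frames i_first .. i_first + S - 1; there are S + 2R of them."""
    return list(range(i_first - R, i_first + S + R))

def recycle_targets(i, R):
    """Frames that receive a reprojected copy of a native sample
    computed for frame i."""
    return list(range(i - R, i + R + 1))
```

Frames that fall out of the window as the iteration advances are complete (no later native sample can recycle into them) and can be flushed to disk.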
Shading Computation • A simplified version of the RenderMan Shading Language • Each shader is decomposed into • a view-independent component • re-usable, shared between frames • a view-dependent component • recomputed for each frame
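The decomposition can be illustrated as follows; this is a minimal sketch with an assumed interface, not the actual shading system. The view-independent part (e.g. a diffuse texture lookup) is evaluated once per hit point and cached across frames; only the view-dependent part is re-evaluated per frame.

```python
class SplitShader:
    """Shader split into a cacheable view-independent component and a
    per-frame view-dependent component."""

    def __init__(self, view_independent, view_dependent):
        self._vi = view_independent   # hit -> value, shared between frames
        self._vd = view_dependent     # (hit, view_dir) -> value
        self._cache = {}

    def shade(self, hit, view_dir):
        if hit not in self._cache:    # computed once, reused by all frames
            self._cache[hit] = self._vi(hit)
        return self._cache[hit] + self._vd(hit, view_dir)
```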
Motion Blur • Accuracy & Quality • The same sample point is considered for multiple frames • In other, frame-by-frame architectures the motion of objects must be computed explicitly with additional samples • Temporal changes in shading are properly accounted for • difficult in other architectures • Efficiency • 2D computation
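A simplified sketch of 2D image-space motion blur (this linear-splat helper is an illustration under stated assumptions, not the paper's conservative algorithm): because each sample is known in several consecutive frames, its image-space trajectory is available for free, and blur can be computed by distributing the sample along that 2D path instead of tracing extra temporal rays.

```python
def splat_blurred(sample_value, path, accum, n_steps=8):
    """Distribute a sample's value along its image-space path.

    path:  list of (x, y) positions over the shutter interval;
           only the endpoints are used (linear motion assumed).
    accum: dict (px, py) -> accumulated value.
    """
    weight = 1.0 / n_steps
    for k in range(n_steps):
        t = k / (n_steps - 1) if n_steps > 1 else 0.0
        x = path[0][0] + t * (path[-1][0] - path[0][0])
        y = path[0][1] + t * (path[-1][1] - path[0][1])
        px, py = int(x), int(y)
        accum[(px, py)] = accum.get((px, py), 0.0) + weight * sample_value
    return accum
```

Note that the splat conserves the sample's energy: the weights sum to one regardless of how many pixels the path crosses.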
Results • Speedup for bi-directional path tracing: • moving camera, moving objects: 7.7 • moving objects only: 13.3 • moving camera only: 8.8 • Time per frame: 240-415 sec. • Proportion of native samples: 2.4-4.7% • Cost of native samples (profiler): 44-64% of the whole computation time • Disk caching overhead: 10% slowdown (for 1% of main memory used)
Motion blur renderings • Time per frame: 120 sec. and 150 sec. • Motion blur computational overhead: 25%
Conclusions • We presented an efficient architecture for rendering high-quality animations. • In our architecture, path tracing, texturing and shading, and motion blur can all be computed efficiently. • The architecture is particularly efficient for scenes in which a limited number of objects moves locally and the camera motion is slow. • For more complex motion scenarios, the animation segment length can be reduced, which in the limit reduces to traditional frame-by-frame computation. • Data structures handling dynamic objects require additional memory, which is acceptable on modern computers. • The memory overhead of storing multiple images is negligible due to efficient buffering.
Future Work • Reusing samples for many pixels in the same frame as suggested in [Bekaert et al. 2002, Szirmay-Kalos 2002] • Improving the efficiency of visibility tests for reprojected samples using a shaft culling approach. • Skipping some visibility tests based on the spatio-temporal coherence of neighboring samples as implemented in the Maya Rendering System [Sung et al. 2002]
Thanks! • Polina Kondratieva for rewriting RenderMan Shaders • Markus Weber for help with preparing the scenes • Partial support of IST-2001-34744 project RealReflect • All anonymous reviewers for their comments
Space-Time Architecture: Reprojection • [Diagram: light reflected at a surface point yields one native sample, which is recycled N times by reprojection into neighboring frames]
Rendering animations - ray tracing with shaders • Speedup for ray tracing (moving camera, moving objects): 2.6 • Time per frame: 770 sec. (deterministic reflections must be recomputed!)