

  1. Real-Time Volume Graphics

  2. Dealing With Large Volumes

  3. Large Volumes - Motivation
  • Long-leg study: 512x512x1747 @ 16 bit ~ 1 GB

  4. Large Volumes - Motivation
  • Visible Female CT scan: 512x512x1724 @ 16 bit ~ 1 GB

  5. Large Volumes - Motivation
  • Visible Male cryosection RGB data: 2048x1216x1877 @ 24 bit ~ 14 GB

  6. Large Volumes - Motivation
  • 4D cardiac data: 512x512x240 @ 16 bit, 20 frames ~ 2.5 GB

  7. Large Volumes - Caches
  • Memory + bandwidth!
  [Figure: memory hierarchy: disk (160 MB/sec) -> host memory (4 GB, ~5 GB/sec, L1/L2/L3 caches) -> AGP memory (AGP8x: 2 GB/sec) -> graphics card memory (256 MB, >30 GB/sec, texture cache)]

  8. Large Volumes - Introduction
  • Problems with large volumes on GPUs
  • Each voxel is accessed multiple times:
    • Filtering
    • Gradient computation
    • Oversampling
    => Huge memory bandwidth required (see the sketch below)
  • Limited memory bandwidth: GPU >30 GB/s, AGP8x 2 GB/s
  • Limited GPU memory: typically 256 MB
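
As a back-of-the-envelope illustration of the bandwidth problem (the sample and fetch counts are hypothetical, not figures from the slides):

```python
# Rough per-frame bandwidth for texture-based volume rendering.
# Illustrative numbers: 512^3 voxels at 16 bit, 2x oversampling along
# the ray, 8 fetches per trilinear sample plus 6 for on-the-fly gradients.
voxels = 512 ** 3
bytes_per_voxel = 2
samples = voxels * 2                      # 2x oversampling
fetches = 8 + 6                           # trilinear + central differences
total = samples * fetches * bytes_per_voxel
print(f"{total / 2**30:.1f} GiB of texture reads per frame")   # ~7.0 GiB
```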

  9. Large Volumes - Overview
  • Bricking
  • Multi-resolution techniques
  • Compression
  • Packing techniques
  • Wavelets
  • Vector quantization
  • Procedural description of detail
  • Optimizations

  10. Large Volumes - Bricking
  • Subdivide the volume into smaller blocks
  • Allocate memory for one block on the GPU
  • Copy into GPU memory and render one block at a time
  • One voxel of overlap for continuous interpolation (see the sketch below)
  [Figure: Brick1/Brick2 with no overlap vs. one voxel overlap]
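
A minimal CPU-side sketch of the brick decomposition, with NumPy standing in for the texture upload; the brick size and test volume are made up:

```python
import numpy as np

def make_bricks(volume, brick_size):
    """Split a 3D volume into bricks that overlap by one voxel on each
    shared face, so trilinear interpolation never has to read across a
    brick boundary (each brick carries its neighbor's border voxels)."""
    bricks = []
    step = brick_size - 1          # advance by size-1 => one-voxel overlap
    for z in range(0, volume.shape[0] - 1, step):
        for y in range(0, volume.shape[1] - 1, step):
            for x in range(0, volume.shape[2] - 1, step):
                brick = volume[z:z + brick_size,
                               y:y + brick_size,
                               x:x + brick_size]
                bricks.append(((z, y, x), brick))
    return bricks

volume = np.random.randint(0, 4096, (64, 64, 64), dtype=np.uint16)
bricks = make_bricks(volume, 17)   # 17^3 bricks, 16 voxels of unique data
print(len(bricks), "bricks")       # 4*4*4 = 64 bricks
```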

  11. Large Volumes - Bricking
  • Performance mainly limited by AGP transfer
  • Keep a subsampled copy of the data in GPU memory for interaction
  • Bricking only for the final-quality image
  [Figure: memory hierarchy from slide 7, highlighting the AGP8x (2 GB/sec) transfer path]

  12. Large Volumes - Multi-Resolution VR
  LaMar et al., Multi-Resolution Techniques for Interactive Texture-Based Volume Visualization, IEEE Visualization '99
  • Octree-based decomposition of the volume into bricks
  • Render bricks at different resolutions (see the sketch below), depending on:
    • Distance to viewer
    • Focus point
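
A toy level-selection rule in the spirit of the LaMar et al. criteria; the distance thresholds and the one-level-per-doubling mapping are invented for illustration, not taken from the paper:

```python
import numpy as np

def pick_level(brick_center, eye, focus, max_level=3):
    """Pick an octree level per brick (0 = full resolution): coarser with
    distance from the viewer, and one extra level away from the focus."""
    d_eye = np.linalg.norm(brick_center - eye)
    d_focus = np.linalg.norm(brick_center - focus)
    level = int(np.log2(1.0 + d_eye / 64.0))   # coarser per distance doubling
    if d_focus > 128.0:                        # away from the focus point
        level += 1
    return min(level, max_level)

eye = np.array([0.0, 0.0, -256.0])
focus = np.array([128.0, 128.0, 128.0])
for center in ([0, 0, 0], [128, 128, 128], [255, 255, 255]):
    print(center, "-> level", pick_level(np.array(center, float), eye, focus))
```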

  13. Large Volumes - Multi-Resolution VR
  • Weiler et al., Level-of-Detail Volume Rendering via 3D Textures, Volume Visualization and Graphics Symposium 2000
  • Extends the approach to fix interpolation at the boundaries between resolution levels

  14. Large Volumes - Compression
  • Texture compression OpenGL extensions: 2D: EXT_texture_compression_s3tc, 3D: NV_texture_compression_vtc
  • Hardware implementation in several graphics chips, e.g. NVIDIA GeForce series, ATI Radeon series
  • Disadvantages of S3TC:
    • Moderate compression ratios
    • Block artifacts, inappropriate for non-smooth data
    • Fixed compression scheme
    • Only for RGB(A) data

  15. Large Volumes - Wavelets
  • Volume data:
    • Mostly smooth
    • Fine high-frequency detail in certain regions
  • Wavelets:
    • Analyze data at different resolutions and frequencies
    • Many coefficients are very small => compression
    • Hierarchy of signals => multi-resolution reconstruction

  16. Large Volumes - Wavelets
  Guthe et al., Interactive Rendering of Large Volume Data Sets, IEEE Visualization '02
  • Hierarchical wavelet representation:
    • Divide data into blocks of (2k)³ voxels (k = 16 works well)
    • Apply wavelet filters => low-pass-filtered block + wavelet coefficients (one level sketched below)
    • Group 8 blocks
    • Repeat until 1 block is left
  • 2 encoding schemes
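
A one-level sketch of the blockwise decomposition, assuming a simple average-plus-residual split rather than Guthe et al.'s actual filter bank; the dead-zone threshold is made up:

```python
import numpy as np

def haar3d_level(block):
    """One decomposition level: a half-resolution low-pass block plus
    detail coefficients, stored here as the residual against the
    upsampled low-pass block."""
    b = block.astype(np.float32)
    low = b.reshape(b.shape[0] // 2, 2, b.shape[1] // 2, 2,
                    b.shape[2] // 2, 2).mean(axis=(1, 3, 5))
    detail = b - low.repeat(2, 0).repeat(2, 1).repeat(2, 2)
    return low, detail

g = np.linspace(0.0, 1.0, 32, dtype=np.float32)
block = g[:, None, None] * g[None, :, None] * g[None, None, :]  # smooth data
low, detail = haar3d_level(block)
kept = np.count_nonzero(np.abs(detail) > 0.01)   # dead-zone threshold
print(f"low-pass {low.shape}, keep {kept}/{detail.size} detail coefficients")
```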

  17. Large Volumes - Wavelets
  • Decompression of blocks during rendering on the CPU
  • Rendering of blocks on the GPU
  • Caching strategy

  18. Large Volumes - Wavelets
  • Adjust data resolution to screen resolution:
    • Project voxel spacing to the screen
    • Refine if above screen resolution (see the sketch below)
  • View-dependent priority schedule
  • Interactive walkthrough of the Visible Female/Male
  • Still AGP-bandwidth-bound => GPU decompression possible? Google "dwtgpu"
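
A sketch of the refinement test: project the voxel spacing through a standard perspective model and refine while one voxel covers more than one pixel. The field of view and screen size are made-up parameters:

```python
import math

def needs_refinement(voxel_spacing, distance, fov_y=math.radians(45),
                     screen_height=1024):
    """Refine a block if one voxel projects to more than one pixel.
    Perspective projection: projected size ~ spacing / distance * focal."""
    focal = screen_height / (2.0 * math.tan(fov_y / 2.0))
    projected_pixels = voxel_spacing / distance * focal
    return projected_pixels > 1.0

print(needs_refinement(voxel_spacing=2.0, distance=500.0))   # True: refine
print(needs_refinement(voxel_spacing=2.0, distance=4000.0))  # False: coarse OK
```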

  19. Large Volumes - Packing
  Kraus et al., Adaptive Texture Maps, Graphics Hardware Workshop '02
  Two levels of the data representation:
  • Index data (upper level):
    • Each cell/texel of a coarse grid corresponds to one data block
    • Each cell/texel specifies coordinates and scaling factors of the corresponding data block
  • Packed data (lower level):
    • All data blocks packed into one uniform grid/texture

  20. Large Volumes - Packing
  Two steps of sampling adaptive textures (see the sketch below):
  • Read the index data and calculate coordinates for the second step
  • Read and interpolate the actual texture data from the packed data
  • Decoding in the fragment stage maps (s,t) to (s',t')
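
A 2D NumPy stand-in for the two sampling steps; the 4x4 index grid, the trivial packing layout, and the nearest-neighbor fetch are simplifications of the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
packed = rng.random((64, 64), dtype=np.float32)   # all blocks in one texture
index = np.zeros((4, 4, 3), dtype=np.float32)     # per cell: (s0, t0, scale)
for i in range(4):
    for j in range(4):
        index[i, j] = (j * 16, i * 16, 16.0)      # trivial identity packing

def sample_adaptive(s, t):
    """Step 1: read the index cell covering (s,t) and compute packed-texture
    coordinates. Step 2: fetch the actual data from the packed texture."""
    cell_i, cell_j = int(t * 4), int(s * 4)
    s0, t0, scale = index[cell_i, cell_j]
    frac_s, frac_t = s * 4 - cell_j, t * 4 - cell_i     # position inside cell
    return packed[int(t0 + frac_t * scale),             # nearest-neighbor
                  int(s0 + frac_s * scale)]             # (no filtering here)

print(sample_adaptive(0.3, 0.7))
```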

  21. Large Volumes - Packing
  Wei Li et al., Texture Partitioning and Packing for Accelerating Texture-Based Volume Rendering, GI 2003
  • Partition texture space with a box-growing algorithm
  • Based on similar densities and non-zero gradient magnitudes

  22. Large Volumes - Packing
  • Determine gradient sub-textures with non-zero magnitude
  • Pack sub-textures into a single smaller texture
  • Decoding in the vertex stage using texture coordinates

  23. Large Volumes - VQ
  • Image formats with palettes specify for each pixel one index into a color palette (= codebook); on the GPU, decoding is a dependent texture read (see the sketch below)
  [Figure: index data + palette (codebook) -> dependent texture read -> decoded data]
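
The palette analogy in a few lines of NumPy (shapes are made up); fancy indexing plays the role of the dependent texture read:

```python
import numpy as np

# Indexed decoding, as in paletted images: each 8-bit index is mapped
# through the codebook by a dependent lookup.
codebook = np.random.rand(256, 3).astype(np.float32)  # palette: 256 RGB entries
index_volume = np.random.randint(0, 256, (32, 32, 32), dtype=np.uint8)

decoded = codebook[index_volume]                      # dependent lookup
print(decoded.shape)                                  # (32, 32, 32, 3)
```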

  24. Large Volumes - VQ
  Schneider/Westermann, Compression Domain Volume Rendering, IEEE Visualization 2003
  • 3-level hierarchical decomposition (sketched below):
    • Partition data into blocks of size 4³
    • Downsample to 2³, store the difference vector (64 components)
    • Downsample to 1, store the difference vector (8 components)
  • Vector-quantize the 64- and 8-component difference vectors
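
A sketch of the three-level decomposition of a single 4³ block; this follows the slide's description, not necessarily the paper's exact normalization:

```python
import numpy as np

def decompose_block(block4):
    """Split one 4^3 block into a mean value, an 8-component difference
    at the 2^3 level, and a 64-component difference at the 4^3 level.
    The two difference vectors are what gets vector-quantized."""
    b = block4.astype(np.float32)
    level2 = b.reshape(2, 2, 2, 2, 2, 2).mean(axis=(1, 3, 5))  # 2^3 downsample
    mean = level2.mean()                                       # single value
    diff8 = (level2 - mean).ravel()                            # 8 components
    up = level2.repeat(2, 0).repeat(2, 1).repeat(2, 2)
    diff64 = (b - up).ravel()                                  # 64 components
    return mean, diff8, diff64

block = np.random.rand(4, 4, 4)
mean, diff8, diff64 = decompose_block(block)
print(mean, diff8.shape, diff64.shape)                         # scalar (8,) (64,)
```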

  25. Large Volumes - VQ
  • Codebooks: 256x64 and 256x8 images, accessed via dependent texture lookup
  • Codebook generation: modified LBG algorithm (Linde, Buzo and Gray); see the sketch below
  • A series of PCA splits (principal component analysis) determines the initial codebook
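
A compact LBG sketch with PCA-based splitting; real implementations add empty-cell handling and convergence tests, so treat this as illustrative only:

```python
import numpy as np

def lbg(vectors, codebook_size, iters=10):
    """LBG: grow the codebook by splitting each cell along its principal
    component (PCA split), then refine with k-means-style updates."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        assign = np.argmin(((vectors[:, None] - codebook[None]) ** 2)
                           .sum(-1), axis=1)
        new = []
        for c in range(len(codebook)):
            cell = vectors[assign == c]
            if len(cell) < 2:                      # too small to split
                new.append(codebook[c])
                continue
            _, _, vt = np.linalg.svd(cell - cell.mean(0),
                                     full_matrices=False)
            step = 0.5 * cell.std() * vt[0]        # along the principal axis
            new += [cell.mean(0) + step, cell.mean(0) - step]
        codebook = np.array(new)
        for _ in range(iters):                     # Lloyd/k-means refinement
            assign = np.argmin(((vectors[:, None] - codebook[None]) ** 2)
                               .sum(-1), axis=1)
            for c in range(len(codebook)):
                if np.any(assign == c):
                    codebook[c] = vectors[assign == c].mean(0)
    return codebook[:codebook_size]

vecs = np.random.rand(2000, 8).astype(np.float32)  # e.g. the 8-vectors
print(lbg(vecs, 256).shape)                        # (256, 8)
```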

  26. Large Volumes - VQ
  • With a codebook of length 256, a 1024³ volume is reduced to 3x256³ bytes = 48 MB, i.e. it fits easily into texture memory
  • Compression of 4D sequences: store the complete sequence in GPU memory (shockwave sequence; original: 1.5 GB, compressed: 70 MB)
  • Currently only nearest-neighbor filtering => decouple decompression and rendering

  27. Large Volumes – Procedural Approach
  Ebert et al., Texturing and Modeling: A Procedural Approach, Academic Press, 1998
  [Figure: coarse volume for macrostructure combined with procedural noise for microstructure. Images courtesy of Joe Michael Kniss]

  28. Large Volumes – Procedural Approach
  Kniss et al., Interactive Translucent Volume Rendering and Procedural Modeling, IEEE Visualization '02:
  • Perturb the data access (instead of the data), as sketched below
  • Good for distortion of boundaries
  • Implemented using offset textures
  [Figure: original texture; multiple scaled versions of the perturbation texture; summed offset vectors; result of the dependent fetch. Images courtesy of Joe Michael Kniss]
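
A 2D sketch of the offset-texture idea (texture sizes, scales, and amplitude are invented): the perturbation shifts where we read, not what is stored:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((64, 64), dtype=np.float32)              # original texture
perturb = rng.random((16, 16, 2), dtype=np.float32) - 0.5  # offset vectors

def fetch_perturbed(s, t, scales=(1.0, 0.5, 0.25), amplitude=4.0):
    """Sum several scaled copies of the offset texture, then use the
    result to displace the dependent fetch into the data texture."""
    offset = np.zeros(2, dtype=np.float32)
    for k, scale in enumerate(scales):
        u = int(s * 16 * (2 ** k)) % 16        # tile the offset texture
        v = int(t * 16 * (2 ** k)) % 16
        offset += scale * perturb[v, u]
    x = int(s * 64 + amplitude * offset[0]) % 64   # perturbed data access
    y = int(t * 64 + amplitude * offset[1]) % 64
    return data[y, x]

print(fetch_perturbed(0.4, 0.6))
```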

  29. Optimizations - Shading
  • Pre-computation of gradients is impractical due to memory requirements
  • Calculate the gradient vector on the fly:
    • Fetch neighbors and compute the normal in a fragment program (see the sketch below)
    • Requires a lot of texture fetches (6 for central differences)
  • Only possible on newer graphics hardware (NVIDIA NV3x, ATI R3xx)
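
A direct transcription of the six-fetch central-difference gradient, with NumPy indexing standing in for the fragment-program texture reads:

```python
import numpy as np

def gradient_central(volume, x, y, z):
    """On-the-fly gradient via central differences: six neighbor fetches
    per sample instead of a precomputed gradient volume."""
    g = np.array([
        volume[z, y, x + 1] - volume[z, y, x - 1],
        volume[z, y + 1, x] - volume[z, y - 1, x],
        volume[z + 1, y, x] - volume[z - 1, y, x],
    ], dtype=np.float32) * 0.5
    n = np.linalg.norm(g)
    return g / n if n > 0 else g               # normalized for shading

vol = np.random.rand(32, 32, 32).astype(np.float32)
print(gradient_central(vol, 16, 16, 16))
```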

  30. Optimizations - Shading
  • Shade only where necessary: shade only if alpha is above a threshold (see the sketch below)
  • Two passes:
    • 1st pass: render unshaded; set z/alpha where alpha is above the threshold
    • 2nd pass: use the early-z/stencil test; shade where the z/alpha test succeeds
  • Requires early-z/stencil test (ATI R3xx)
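
A CPU analogue of the two-pass scheme: the boolean mask stands in for the z/stencil buffer, and the expensive shading runs only where the alpha test passed (the threshold and lighting term are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 256, 256
alpha = rng.random((h, w), dtype=np.float32)       # from the unshaded pass
color = rng.random((h, w, 3), dtype=np.float32)

mask = alpha > 0.5                                 # the "set z/stencil" test
shaded = color.copy()
light = np.array([0.3, 0.8, 0.5], dtype=np.float32)
diffuse = 0.2 + 0.8 * (color[mask] @ light)        # stand-in shading term
shaded[mask] *= diffuse[:, None]                   # shade masked pixels only

print(f"shaded {mask.mean():.0%} of the pixels")
```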

  31. Optimizations – Space Leaping
  • Don't access memory that contains no data (see the sketch below):
    • Subdivide the volume into blocks
    • Store the minimum/maximum value per block
    • Check with the transfer function and min/max whether a block is non-empty
    • Render a block only if it is not empty
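
A sketch of block-based empty-space skipping; the prefix-sum test over the transfer function's alpha channel is one possible way to check a [min, max] range, not necessarily what the original systems did:

```python
import numpy as np

def build_minmax(volume, block=8):
    """Per-block min/max values, computed once per volume."""
    v = volume.reshape(volume.shape[0] // block, block,
                       volume.shape[1] // block, block,
                       volume.shape[2] // block, block)
    return v.min(axis=(1, 3, 5)), v.max(axis=(1, 3, 5))

def nonempty_blocks(vmin, vmax, tf_alpha):
    """A block is skippable iff the transfer function assigns zero alpha
    to its whole [min, max] value range."""
    cum = np.cumsum(tf_alpha > 0)              # prefix count of opaque bins
    opaque = cum[vmax] - np.where(vmin > 0, cum[vmin - 1], 0)
    return opaque > 0

vol = np.random.randint(0, 90, (64, 64, 64))   # mostly "empty" values
vol[16:32, 16:32, 16:32] = 120                 # one dense region
tf_alpha = np.zeros(256)
tf_alpha[100:150] = 1.0                        # TF opaque only in [100, 150)
vmin, vmax = build_minmax(vol)
print(f"render {nonempty_blocks(vmin, vmax, tf_alpha).mean():.0%} of blocks")
```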

  32. Optimizations – Ray Termination
  Krueger/Westermann, Acceleration Techniques for GPU-based Volume Rendering, IEEE Visualization 2003
  • Terminate rays (pixels) that have accumulated maximum opacity (see the sketch below)
  • Termination is done in a separate pass:
    • Render the bounding box with the framebuffer as a texture
    • Check for each pixel whether alpha is above the threshold
    • Set the z/stencil value if above the threshold
  • Requires early-z/stencil test (ATI R3xx)
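
A per-ray sketch of front-to-back compositing with early termination; the 0.95 threshold and the constant samples are illustrative:

```python
def composite_ray(samples, alpha_threshold=0.95):
    """Front-to-back compositing with early ray termination: stop once the
    accumulated opacity exceeds the threshold. On the GPU the test runs in
    a separate pass that sets z/stencil to kill the remaining fragments."""
    color_acc, alpha_acc = 0.0, 0.0
    for i, (c, a) in enumerate(samples):
        color_acc += (1.0 - alpha_acc) * a * c       # "under" operator
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= alpha_threshold:
            return color_acc, alpha_acc, i + 1       # terminated early
    return color_acc, alpha_acc, len(samples)

samples = [(0.8, 0.2)] * 256                         # (color, alpha) per sample
color, alpha, used = composite_ray(samples)
print(f"terminated after {used}/256 samples, alpha={alpha:.3f}")
```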

  33. Conclusions
  • Large volumes can be rendered with GPUs
  • Compression and adaptive algorithms are required for interactive performance
  • Lots of optimizations are possible
  • Early-z/stencil tests are very useful

  34. Questions?
  • Thank you!
  • Acknowledgements:
    • NVIDIA and ATI for the demo hardware
    • For some of the images and figures: Joe Michael Kniss, Wei Li, Jens Schneider, Martin Kraus, Stefan Guthe
