1. 21 January 2010 Visual Lunch, Swansea Univ. Dynamic Chunking for Out-of-Core Volume Visualization Applications Dan R. Lipsa(1,2,3), R. Daniel Bergeron(2), Ted M. Sparr(2) and Robert S. Laramee(3)
2. Dynamic Chunking 2 Motivation for Our Work Visualization algorithms are commonly designed to work with the entire dataset loaded into main memory.
Scientific datasets are very large: the Visible Woman Data Set is 7.2 GB and the Sloan Digital Sky Survey is 818 GB.
While visualization is one of the most effective techniques for analyzing large datasets, it usually requires loading the entire dataset into main memory. However, today's data are measured in tens of GB or even TB, which is much larger than a typical computer's main memory.
Out-of-core algorithms process data that cannot fit in main memory.
The goal of our work is to speed up a few common out-of-core visualization algorithms.
3. Data Storage 3 Linear Storage A 2D array is stored in a file by traversing its axes using nested loops, so data that are near in the 2D array can be far apart in the file (see records 0 and 8).
The file system prefetches and caches data in linear fashion, which means data that are near in the 2D array might not be in the cache when needed.
So file system prefetching and caching are not effective for n-D data stored using linear storage: the OS makes the wrong choice when prefetching, as the sketch below illustrates.
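To make the layout concrete, here is a minimal sketch (not from the paper; COLS and RECORD_SIZE are assumed example values) of the file offset of a record in a linearly stored 2D array:

class LinearOffset {
    static final int COLS = 1024;        // columns in the 2D array (assumed example value)
    static final int RECORD_SIZE = 3;    // bytes per record (assumed example value)

    // Offset of record (row, col) when the array is written row after row.
    static long offset(long row, long col) {
        return (row * COLS + col) * RECORD_SIZE;
    }
}

Records (0, 0) and (1, 0) are vertical neighbours in the 2D array, yet their file offsets differ by COLS * RECORD_SIZE bytes, so sequential prefetching of the file rarely brings a record's 2D neighbours into the cache.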
4. Data Storage 4 Chunked Storage Data that are near in the 2D array are also near in the file (see records 0 and 2).
File system prefetching and caching work well with n-D data stored using chunked storage, as sketched below.
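For comparison, a minimal sketch of one possible chunked layout (the chunk edge, column count and record size are assumed example values, not the paper's parameters):

class ChunkedOffset {
    static final int COLS = 1024;        // columns in the 2D array (assumed example value)
    static final int RECORD_SIZE = 3;    // bytes per record (assumed example value)
    static final int CHUNK_EDGE = 64;    // each chunk holds CHUNK_EDGE x CHUNK_EDGE records (assumed)

    // Offset of record (row, col) when the array is written chunk by chunk.
    static long offset(long row, long col) {
        long chunksPerRow = COLS / CHUNK_EDGE;
        long chunkIndex = (row / CHUNK_EDGE) * chunksPerRow + (col / CHUNK_EDGE);
        long within = (row % CHUNK_EDGE) * CHUNK_EDGE + (col % CHUNK_EDGE);
        return (chunkIndex * CHUNK_EDGE * CHUNK_EDGE + within) * RECORD_SIZE;
    }
}

Vertical neighbours inside the same chunk, such as (0, 0) and (1, 0), are now only CHUNK_EDGE * RECORD_SIZE bytes apart instead of COLS * RECORD_SIZE bytes, so linear prefetching is far more likely to have them cached.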
5. Dynamic Chunking 5 Dynamic Chunking (DC) File chunking requires data reorganization, which might be impractical
Goal: provide some of the benefits of file chunking, on a linear file, without having to reorganize the file.
Our approach is to dynamically create and cache n-D chunks in the main memory.
6. Dynamic Chunking 6 Idea Application reads a record
DC module reads a 2D block that contains the record
saves the entire block in its local cache
Note that several read operations on the linear file are needed to load one block (see the sketch below).
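A minimal sketch of this idea, with assumed block and volume sizes and illustrative names (not the authors' code): the DC module rounds the record's coordinates down to the block corner and issues one read per block row.

import java.io.IOException;
import java.io.RandomAccessFile;

class BlockLoader {
    static final int COLS = 1024;        // columns in the full 2D array (assumed example value)
    static final int RECORD_SIZE = 3;    // bytes per record (assumed example value)
    static final int BLOCK_EDGE = 64;    // each block is BLOCK_EDGE x BLOCK_EDGE records (assumed)

    // Read the whole block containing record (row, col); one seek + read per block row.
    static byte[] readBlock(RandomAccessFile file, long row, long col) throws IOException {
        long blockRow = (row / BLOCK_EDGE) * BLOCK_EDGE;   // top-left corner of the block
        long blockCol = (col / BLOCK_EDGE) * BLOCK_EDGE;
        byte[] block = new byte[BLOCK_EDGE * BLOCK_EDGE * RECORD_SIZE];
        for (int r = 0; r < BLOCK_EDGE; r++) {
            long offset = ((blockRow + r) * COLS + blockCol) * RECORD_SIZE;
            file.seek(offset);
            file.readFully(block, r * BLOCK_EDGE * RECORD_SIZE, BLOCK_EDGE * RECORD_SIZE);
        }
        return block;
    }
}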
7. Dynamic Chunking 7 Cache Block Table Dynamic chunking splits the 2D area into 2D blocks and creates a table that stores a reference to each of these blocks.
A block is loaded on demand, as soon as a record from the block is needed.
We use an LRU block replacement algorithm so that the cache keeps the most recently used blocks (see the sketch below).
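One simple way to realize such a block table in Java (an illustrative sketch, not the authors' implementation) is a LinkedHashMap in access order, which evicts the least recently used block when the cache is full:

import java.util.LinkedHashMap;
import java.util.Map;

class BlockCache extends LinkedHashMap<Long, byte[]> {
    private final int maxBlocks;         // capacity of the cache, in blocks

    BlockCache(int maxBlocks) {
        super(16, 0.75f, true);          // accessOrder = true gives LRU ordering
        this.maxBlocks = maxBlocks;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
        return size() > maxBlocks;       // evict the least recently used block
    }
}

On a read, the DC module looks up the record's block index in the cache; on a miss it loads the block (for example with a loader like the one sketched earlier) and inserts it, letting the LRU policy discard the oldest block when the table is full.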
8. Dynamic Chunking Optimizations 8 Block Size Optimization Larger blocks improve performance
Goal: use the maximum block size so that the working set of the application fits in the main memory.
Two block size optimization techniques: analytical (for slice working sets) and adaptive (for working sets of other shapes); a sketch of the analytical case follows.
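As an illustration of the analytical case (the formulation below is an assumption consistent with the stated goal, not taken from the paper): an axis-aligned slice through an N x N x N volume with blocks of edge B touches (N/B)^2 blocks, which occupy about N^2 * B * bytesPerVoxel bytes of cache, so the largest admissible block edge for a memory budget M is roughly M / (N^2 * bytesPerVoxel).

class BlockSizeEstimate {
    // Largest block edge (in voxels) such that one axis-aligned slice working set
    // still fits in the given memory budget; illustrative formula, assumed inputs.
    static long sliceBlockEdge(long n, long bytesPerVoxel, long memoryBudgetBytes) {
        return memoryBudgetBytes / (n * n * bytesPerVoxel);
    }
}

With the numbers used later in the results (n = 256, 3 bytes per voxel, a 30 MB budget) this gives a block edge of about 160 voxels, which would in practice be rounded down to a convenient size no larger than the volume extent.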
9. Dynamic Chunking Optimizations 9 Larger Blocks Improve Performance
10. Dynamic Chunking Optimizations 10 Block Size Optimization
11. Dynamic Chunking 11 Results We process a sub-volume of 256x256x256 voxels with 3 bytes per voxel (48 MB) from a volume of 1024x1216x2048 voxels (7.2 GB).
Implemented in Java, using Java Bindings for OpenGL (Jogl) for rendering
JVM memory is set to 30 MB, less than the size of the sub-volume
Slicing Application
Ray Casting Application
12. Results 12 Dynamic Chunking versus Chunking
13. Results 13 Block Size Optimization
14. Dynamic Chunking 14 Conclusions Dynamic chunking speeds up out-of-core volume visualization applications without the need to reorganize data.
Block size optimization further improves performance
15. International Symposium on Visual Computing (ISVC) 2009 Las Vegas, NV, USA
30 November – 2 December
16. Visual Computing Computer Vision
Computer Graphics
Virtual Reality
Visualization
17. Special Tracks 3D Mapping, Modeling and Surface Reconstruction
Object Recognition
Deformable Models: Theory and Applications
Visualization Enhanced Data Analysis for Health Applications
Computational Bio-imaging
Visual Computing for Robotics
Optimization for Vision, Graphics and Medical Imaging: Theory and Applications
Semantic Robot Vision Challenge
18. Keynote speakers Prof. Pietro Perona, Department of Electrical Engineering, California Institute of Technology (Caltech), USA
Dr. Rakesh (Teddy) Kumar, Vision and Robots, Sarnoff Corporation, USA
Prof. Larry Davis, Department of Computer Science, University of Maryland, USA
Prof. Demetri Terzopoulos, Department of Computer Science, University of California at Los Angeles (UCLA), USA
Prof. Tao Ju, Department of Computer Science and Engineering, Washington University, USA
Prof. Nassir Navab, Informatics Institute I16, Technical University of Munich, Germany
19. Las Vegas, USA