Rapid Visualization of Large Point-Based Surfaces Tamy Boubekeur Florent Duguet Christophe Schlick Presented by Xavier Granier
Large Data Acquisition (Figures: complex archaeological artifact, Duguet et al. VAST 2004; David laser scans, Digital Michelangelo Project) • Sub-millimeter acquisition and archival of statues and archaeological artifacts • Billions of samples for an accurate representation • Need for specific methods to visualize such objects
Large Objects - Prior Art • Visualization of gigantic meshes • Out-of-core rendering [Lindstrom 2003], [Tetra-Puzzles - Cignoni 2004] • Multiscale approach [Far Voxels - Gobbetti 2005] • Using points • As rendering primitives [QSplat - Rusinkiewicz 2000] [Dachsbacher 2003] • As surface representation [VAST - Duguet 2004]
Acquisition & Visualization • Classical pipeline: Real World → 2.5D scans → registration/merging → large PBS → surface reconstruction → large mesh → appearance-preserving simplification → in-core rendering → real-time rendering → 3D image • PBS: point-based surface • Requires generating and storing an intermediate large mesh
Problem • Surface reconstruction • Simplification of the resulting large meshes • Preprocessing for out-of-core rendering …are time-consuming tasks: hours to weeks of computation on a single workstation for scanned statues.
Bottleneck for visualization • Same pipeline: Real World → 2.5D scans → registration/merging → large PBS → surface reconstruction → large mesh → appearance-preserving simplification → in-core rendering → real-time rendering → 3D image • Time-consuming tasks: surface reconstruction, appearance-preserving simplification, and the generation and storage of the intermediate large mesh
Appearance Preserving Simplification • Reducing the complexity of 3D objects while maintaining as much of their appearance as possible • Usual solution: large mesh → coarse mesh + high-resolution textures (normal, color) • Requires mesh generation and simplification (Figures: coarse mesh, normal mapping)
Solution: removing the bottleneck • Remove the time-consuming tasks (surface reconstruction and appearance-preserving simplification) together with the generation and storage of the intermediate large mesh
Removing the intermediate mesh • New pipeline: Real World → 2.5D scans → registration → large PBS → appearance-preserving generation → real-time rendering → 3D image • Uses only the registered point cloud • Direct processing for visualization
Our fast conversion pipeline • No surface reconstruction at full resolution • No global surface parameterization • Direct conversion from PBS to an appearance-preserving representation • Surfel: surface element, a sampled point with an associated sampled normal, color, etc. (Figure: our approach)
1 - Out-Of-Core Resampling • The first reading pass • Filtering the registered PBS through a regular grid • Keeping at most one representative per cell • Similar to out-of-core simplification for meshes [Lindstrom 2000] • Typical output: a few tens of thousands of points (in-core point cloud)
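To make the grid filtering concrete, here is a minimal sketch of the resampling pass in Python/NumPy, assuming the registered points are streamed from disk in chunks; the chunk iterator, the `cell_size` parameter, and the first-sample-wins rule are illustrative choices, not details from the presentation.

```python
# Minimal sketch of grid-based out-of-core resampling (step 1).
# Points are streamed one chunk at a time; only one representative
# per grid cell is kept in memory.
import numpy as np

def resample_stream(point_chunks, cell_size):
    """Keep at most one representative (position, normal) per grid cell."""
    representatives = {}                      # cell index -> (position, normal)
    for positions, normals in point_chunks:   # each chunk: (N,3) float arrays
        cells = np.floor(positions / cell_size).astype(np.int64)
        for p, n, c in zip(positions, normals, map(tuple, cells)):
            if c not in representatives:      # first sample falling in the cell wins
                representatives[c] = (p, n)
    pts = np.array([p for p, _ in representatives.values()])
    nrm = np.array([n for _, n in representatives.values()])
    return pts, nrm
```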
2 - Local coarse mesh generation • Building an octree over the simplified point cloud • Local generation of pieces of surface: Surfel Strips [Boubekeur 2005], a lower-dimensional triangulation • Overlap between neighboring mesh pieces provides hole-free visualization • Each piece processed independently (Figure: collection of surfel strips)
2 - Surfel Stripping [Boubekeur 2005] • Partitioning criterion: height-field predicate • Local 2D Delaunay triangulation • Fast cache-coherent stripping [Reuter 2005] (Figure: local partition → projection → 2D triangulation → fast stripping → surfel strip)
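As an illustration of the surfel-strip construction, the following sketch projects one local point partition onto its average plane and triangulates it in 2D. It assumes the partition already satisfies the height-field predicate, and SciPy's Delaunay routine stands in for the fast cache-coherent stripping used in the presentation.

```python
# Sketch of one Surfel Strip: project a local partition onto its average
# plane and triangulate in 2D (lower-dimensional triangulation).
import numpy as np
from scipy.spatial import Delaunay

def surfel_strip(points):
    """points: (N,3) array of one octree-cell partition -> triangle indices."""
    centroid = points.mean(axis=0)
    # Average plane via PCA: the two largest principal axes span the plane.
    _, _, vt = np.linalg.svd(points - centroid)
    u, v = vt[0], vt[1]                      # in-plane basis vectors
    uv = np.c_[(points - centroid) @ u,      # 2D coordinates of the projection
               (points - centroid) @ v]
    tri = Delaunay(uv)                       # local 2D Delaunay triangulation
    return tri.simplices                     # (M,3) indices into the 3D points
```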
2 - Local coarse mesh generation • Overlapping decimation: reducing redundant/unnecessary triangles • Output: mesh clustered in an octree • Mesh = collection of Surfel Strips • Each surfel strip independently generated • Bounding quad in the average plane (Figure: direct visualization of surfel strips)
3 - Out-Of-Core normal mapping • Second reading pass • Filtering all the points through the octree • Projecting each point's normal onto the texture of the intersected leaf • Output: textured surfel strips • Coarse mesh + sparse normal map • Holes in the normal map = texels where no normal was projected
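A rough sketch of this second pass is given below: each full-resolution point is projected onto the bounding quad of the leaf it falls in, and its normal is written into that leaf's normal-map texel. The `Leaf` structure, its fields, and the last-sample-wins rule are assumptions made for illustration, not the presentation's actual data structures.

```python
# Sketch of the second out-of-core pass: splat a point's normal into the
# sparse normal map of the surfel strip (leaf) it falls in.
import numpy as np

class Leaf:
    def __init__(self, origin, u, v, extent, res):
        self.origin, self.u, self.v = origin, u, v   # bounding quad in the average plane
        self.extent, self.res = extent, res          # quad size, texture resolution
        self.normal_map = np.zeros((res, res, 3))    # sparse normal map
        self.coverage = np.zeros((res, res), bool)   # which texels received a normal

def splat_normal(leaf, position, normal):
    d = position - leaf.origin
    s = np.clip((d @ leaf.u) / leaf.extent, 0.0, 0.999)   # planar (s,t) parameterization
    t = np.clip((d @ leaf.v) / leaf.extent, 0.0, 0.999)
    i, j = int(t * leaf.res), int(s * leaf.res)
    leaf.normal_map[i, j] = normal                         # last sample wins (could average)
    leaf.coverage[i, j] = True
```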
4 - Normal map diffusion • Filling holes in the normal maps: diffusion with push-pull • Per-surfel-strip diffusion • Quad-tree construction • Hierarchical hole filling • Smoothing (Figures: coarse surfel-strip topology, normal mapping)
4 - Normal map diffusion • 1. Quad-tree construction (PUSH) • 2. Hierarchical hole filling (PULL) • 3. Iterative smoothing (gradient constrained) • Output: coarse mesh + high-resolution normal map • View-coherent packing of textures (Figures: surfel strips, diffused per-surfel-strip normal map)
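The push-pull diffusion can be sketched as follows for one strip's sparse normal map: coarser levels are built by averaging covered texels (push), then holes at each finer level are filled from the level above (pull). The gradient-constrained smoothing pass is omitted, and the square power-of-two map size is an assumption of this sketch.

```python
# Sketch of per-strip push-pull hole filling on a sparse normal map.
import numpy as np

def push_pull(normal_map, coverage):
    """normal_map: (R,R,3) array, coverage: (R,R) bool mask of filled texels."""
    levels = [(normal_map.copy(), coverage.astype(float))]
    # PUSH: build coarser levels by averaging the covered texels of each 2x2 block.
    while levels[-1][0].shape[0] > 1:
        n, w = levels[-1]
        n4 = n.reshape(n.shape[0] // 2, 2, n.shape[1] // 2, 2, 3)
        w4 = w.reshape(w.shape[0] // 2, 2, w.shape[1] // 2, 2)
        wsum = w4.sum(axis=(1, 3))
        avg = (n4 * w4[..., None]).sum(axis=(1, 3)) / np.maximum(wsum[..., None], 1e-8)
        levels.append((avg, np.minimum(wsum, 1.0)))
    # PULL: fill the holes of each finer level from the level above.
    for lvl in range(len(levels) - 2, -1, -1):
        n, w = levels[lvl]
        coarse = np.repeat(np.repeat(levels[lvl + 1][0], 2, axis=0), 2, axis=1)
        hole = w < 1.0
        n[hole] = coarse[hole]
    filled = levels[0][0]
    # Renormalize the diffused normals.
    return filled / np.maximum(np.linalg.norm(filled, axis=-1, keepdims=True), 1e-8)
```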
Results • Omphalos (10 M samples) • Drums (20 M samples) • Dancer (30 M samples) • Real-time rendering of archaeological artifacts on a single workstation
Results St Matthew (186 M samples) Real-time rendering of archaeological artifacts on a single workstation
Performance • Timing, frame rate, and memory footprint on a single workstation (Intel Pentium IV 3.4 GHz, 1.5 GB RAM)
Detail preservation • Small details are represented only in the normal map, stored in GPU texture memory (Figures: surfel strips only vs. normal-mapped surfel strips)
Comparison to QSplat (Figures: our approach, QSplat) • Better hardware support (coarse mesh + normal maps) • Real-time rendering at high resolution and high frame rate (2 to 3 orders of magnitude faster) • Mipmapping = automatic hardware filtering
Advantages • No surface reconstruction of the full PBS • No complex processing of the full PBS • No global parameterization for normal mapping, only local planar projection • Very fast processing • Final in-core model mostly stored as textures in GPU memory • Automatic hardware filtering
Limitations • Still a simplification approach • Out-of-core resampling can miss small topological details • Could use adaptive out-of-core resampling methods [Schaefer 2003] • After tests: no significant difference on our data sets – very large objects can be resampled on a simple grid [Lindstrom 2000]
Conclusion • An easy-to-implement pipeline for visualizing large scanned objects • Suitable for very large and dense point clouds • Can preserve any sampled surface property: normal, color, etc. • Scanned objects such as statues and other archaeological artifacts can be stored as simple unorganized point clouds
On-going work • Multi-scale GPU-friendly structures • See Surfel Stripping [Boubekeur, Graphite 2005] • Processing and visualization of larger scenes on single workstations • 10 billion samples? • 100 billion samples? (on-the-fly surface synthesis) • Advanced comparisons • Sequential point trees + splatting on today's GPUs [Dachsbacher 2003] [Botsch 2003] … still less efficient than our approach (no true hardware support for point-based rendering)
Thank you for your attention ! http://www.labri.fr/~boubek