Real-time Streaming and Rendering of Terrains
Soumyajit Deb (Microsoft Research India, Bangalore), Shiben Bhattacharjee, Suryakant Patidar, P. J. Narayanan (CVIT, IIIT Hyderabad)
Why Stream Terrains?
• Central storage of data
• Bulky models, important data
• Support heterogeneous clients
• Easy updates
• Low client wait time
• Dynamism
Example model size: 540 million samples
Uncompressed size: over 350 MB
Time to transfer at 512 kbps: over 1 hour
Wait time for streaming: less than 1 minute
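A quick back-of-the-envelope check of the numbers above (assuming the full 350 MB model is sent over a sustained 512 kbps link, ignoring protocol overhead):

```latex
% Rough transfer-time estimate for the uncompressed model:
% 350 MB is roughly 350 x 8 = 2800 megabits
\[
t \;\approx\; \frac{2800\ \text{Mb}}{0.512\ \text{Mb/s}}
  \;\approx\; 5470\ \text{s}
  \;\approx\; 1.5\ \text{hours}
\]
```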
Challenges
• No temporal partitioning
• Low bandwidth and changing latency
• Varied client types
• Dynamic entities
• Client synchronization
• Collaboration between users
[Diagram: a central server streams to a low-end client on low bandwidth (256 kbps) and a high-end client on high bandwidth (3 Mbps)]
Serving vs. Streaming Content
[Diagram contrasting serving (faithful reproduction, e.g. photo albums, music files) with streaming (real-time delivery, e.g. video/movies)]
Previous Work: Terrain Rendering
• Geometry Clipmaps – Losasso, Hoppe (SIGGRAPH 04)
• ROAM – Duchaineau et al. (VIS 97)
• Strip Masks – Pouderoux, Marvie (CGIT Australia 2005)
• Terrain Geomorphing – Wagner (ShaderX2, 2003)
Previous Work: Geometry Streaming
• ARTE – Schneider, Martin (Computers & Graphics 2003)
• Streaming Remote Walkthroughs – Teler (EG 01)
• Protected 3D Rendering – Levoy et al. (SIGGRAPH 04)
• Windows Live Local and Google Earth/Maps
• Design of a Geometry Server – Deb & Narayanan (ICVGIP 2004)
• Streaming Terrain Rendering – Deb, Narayanan & Bhattacharjee (Sketch, SIGGRAPH 2006)
Requirements
Provide the best frame rate in all situations
• Adapt to the client capabilities
  • Computation & rendering power, memory, etc.
• Adapt to the connection parameters
  • Bandwidth, latency, fluctuations
• Adapt to user motion
• Handle dynamic objects
• Transparent to the user
  • Remotely served objects are just like local ones
  • Modify them, mix them
Client & Network: Summary
[Chart summarizing client rendering capability against network characteristics]
Resources, Issues and Solutions
[Diagram mapping issues to solutions. Issues: client capabilities, network performance, optimal use of bandwidth, latency immunity, interactive frame rates, highest-quality rendering. Solutions: client-side visibility culling, speed-based optimization, progressive refinement, level of detail, latency-dependent prediction and prefetching, compression, multiple height-map resolutions, client-side caching]
Terrain Streaming: Overview
• Server and client modules exchange data
• The user program connects to the client module
[Diagram: a server module with a scene database serves, over the network, multiple clients, each running a client module and terrain renderer]
Terrain Representation
• The terrain is represented by equal-sized tiles
• For a 2ⁿ×2ⁿ terrain, m discrete LODs, m ≤ n
• Lower LODs are obtained by dropping alternate heights or averaging them
[Figure: a tile of side Sx at levels l = 0, 1, 2, 3]
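As a concrete illustration of the last bullet, here is a minimal C++ sketch of building the next-coarser LOD from a tile of height samples, either by dropping alternate heights or by averaging 2×2 blocks. The `Tile` struct and `downsample` function are illustrative names, not part of the described system.

```cpp
#include <cstddef>
#include <vector>

// Illustrative tile of height samples (side = number of samples per edge).
struct Tile {
    std::size_t side;
    std::vector<float> h;                       // row-major, side*side samples
    float at(std::size_t i, std::size_t j) const { return h[i * side + j]; }
};

// Build the next-coarser LOD: either keep every other sample ("drop")
// or average each 2x2 block, as described on the slide.
Tile downsample(const Tile& t, bool average) {
    Tile out{t.side / 2, std::vector<float>((t.side / 2) * (t.side / 2))};
    for (std::size_t i = 0; i < out.side; ++i)
        for (std::size_t j = 0; j < out.side; ++j)
            out.h[i * out.side + j] = average
                ? 0.25f * (t.at(2*i, 2*j)   + t.at(2*i+1, 2*j) +
                           t.at(2*i, 2*j+1) + t.at(2*i+1, 2*j+1))
                : t.at(2*i, 2*j);               // drop alternate heights
    return out;
}
```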
View Frustum Culling
• Terrains don't require 3D view frustum culling
• Project the view frustum onto the base plane and find the tiles inside its bounding box
• For a tile Bt at distance dt, the level of detail is L = floor(dt / l)
[Figure: projection of the view frustum onto the baseline, with accepted tiles at L = 0, 1, 2 and a rejected tile]
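A minimal sketch of this culling and LOD selection, assuming the frustum has already been projected onto the base plane and reduced to an axis-aligned footprint in tile coordinates. The identifiers (`Footprint`, `cullAndSelectLod`, `levelInterval`) are illustrative, not the paper's.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// 2D footprint of the view frustum on the terrain base plane (axis-aligned
// bounding box of the projected frustum corners), in tile coordinates.
struct Footprint { int x0, y0, x1, y1; };

struct VisibleTile { int x, y, lod; };

// Keep tiles inside the footprint and assign each the LOD L = floor(dt / l),
// where dt is the tile's distance from the viewer and l (levelInterval) is
// the per-level distance interval from the slide.
std::vector<VisibleTile> cullAndSelectLod(const Footprint& fp,
                                          float viewX, float viewY,
                                          float tileSize, float levelInterval,
                                          int maxLod) {
    std::vector<VisibleTile> out;
    for (int y = fp.y0; y <= fp.y1; ++y)
        for (int x = fp.x0; x <= fp.x1; ++x) {
            float cx = (x + 0.5f) * tileSize, cy = (y + 0.5f) * tileSize;
            float dt = std::hypot(cx - viewX, cy - viewY);
            int lod = std::min(maxLod, int(dt / levelInterval)); // floor(dt / l)
            out.push_back({x, y, lod});
        }
    return out;
}
```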
Smooth LOD Transition
• For a tile Bt: level of detail L = floor(dt / l) and blending factor α = frac(dt / l)
• To avoid popping artifacts, we blend the two LODs with the factor α
• The height of a point is: h = (1 − α)·h_l(2i, 2j) + α·h_(l+1)(i, j)
[Figure: same frustum-projection diagram as the previous slide]
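The blend can be applied per vertex, as in this small sketch; `hFine` stands for h_l(2i, 2j), `hCoarse` for h_(l+1)(i, j), and the function name is illustrative.

```cpp
#include <cmath>

// Blend a vertex height between level l and the coarser level l+1 to avoid
// popping: h = (1 - alpha) * h_l(2i, 2j) + alpha * h_{l+1}(i, j),
// with alpha = frac(dt / levelInterval) as on the slide.
float blendedHeight(float hFine, float hCoarse, float dt, float levelInterval) {
    float alpha = dt / levelInterval;
    alpha -= std::floor(alpha);               // fractional part in [0, 1)
    return (1.0f - alpha) * hFine + alpha * hCoarse;
}
```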
Terrain Streaming Basics
• Tile Transmission
  • Multiple LODs
  • Residue compression
• Tile Selection
  • Potentially visible tile detection
• Object Selection
  • Object LOD is selected based on the anchored tile's LOD
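Residue compression is not detailed on the slide; the sketch below only illustrates the general idea of sending a coarse level plus residues, with the predictor (upsampled from the coarser level) and the actual codec left out. All names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Residues for one refinement step: the difference between the true level-l
// heights and a prediction upsampled from the coarser level l+1. The server
// can quantize/compress these small values; the client adds them back to
// reconstruct the finer level.
std::vector<float> computeResidues(const std::vector<float>& fine,
                                   const std::vector<float>& predictedFromCoarse) {
    std::vector<float> res(fine.size());
    for (std::size_t k = 0; k < fine.size(); ++k)
        res[k] = fine[k] - predictedFromCoarse[k];
    return res;
}

// Client-side reconstruction: prediction + received residues.
std::vector<float> reconstruct(const std::vector<float>& predictedFromCoarse,
                               const std::vector<float>& residues) {
    std::vector<float> fine(residues.size());
    for (std::size_t k = 0; k < residues.size(); ++k)
        fine[k] = predictedFromCoarse[k] + residues[k];
    return fine;
}
```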
Performance Optimization
• Speculative Prefetching – prefetch tiles along the predicted path of motion
• Client-side Caching – cache tiles in the viewer's vicinity; a Least Potentially Visible (LPV) scheme is used for cache replacement
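A minimal sketch of a client-side tile cache with LPV-style replacement. The paper's LPV scheme maintains sorted X/Y lists (see the LPV Scheme slide); for brevity this sketch evicts the cached tile farthest from the predicted viewer tile, which captures the same intent. All identifiers are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct TileData { std::vector<float> heights; };
using TileKey = std::pair<int, int>;                 // (tileX, tileY)

// Client-side tile cache: when full, drop the cached tile that is least
// potentially visible, approximated here as the one farthest from the
// predicted viewer tile.
class TileCache {
public:
    explicit TileCache(std::size_t capacity) : capacity_(capacity) {}

    void insert(TileKey key, TileData data, TileKey predictedViewerTile) {
        if (tiles_.size() >= capacity_)
            evictLeastPotentiallyVisible(predictedViewerTile);
        tiles_[key] = std::move(data);
    }

    const TileData* find(TileKey key) const {
        auto it = tiles_.find(key);
        return it == tiles_.end() ? nullptr : &it->second;
    }

private:
    void evictLeastPotentiallyVisible(TileKey viewer) {
        auto victim = tiles_.end();
        float worst = -1.0f;
        for (auto it = tiles_.begin(); it != tiles_.end(); ++it) {
            float d = std::hypot(float(it->first.first  - viewer.first),
                                 float(it->first.second - viewer.second));
            if (d > worst) { worst = d; victim = it; }
        }
        if (victim != tiles_.end()) tiles_.erase(victim);
    }

    std::size_t capacity_;
    std::map<TileKey, TileData> tiles_;
};
```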
Dynamic Entities
• Dynamic entities change form/position/appearance
• Dynamic VE (virtual environment) – addition and deletion of entities
• Multiple clients synchronize to maintain a consistent state
• Use lazy updates
Client-side Modifications
• Client-created annotations
• Stored at the server, treated as dynamic entities
• Potential for sharing annotations across users
Transmission Data Reduction (typical numbers for each request)
• Original model: 350 MB
• After visibility detection: 20 MB
• After LOD optimization: 2 MB
• After client-side caching: 0.8 MB
• After PTC compression: 300 KB
Graphs
• Clients with lower capability and/or network bandwidth receive less data
• The highest-detail tiles contribute substantially to the data rate
Graphs
• Both clients maintain a steady frame rate in both bandwidth settings
• The quality factor improves with higher available bandwidth
Conclusions and Future Work
• Efficient terrain streaming over low bandwidths
• Support for dynamic entities and environments
• Supports collaboration between clients
• In the future, we will support real-time, in-place terrain editing and deformations
• Use programmable GPUs for high performance at the client's end
Geometry Server: Highlights
• Server Module:
  • Visibility computation, LoD selection, client book-keeping, dynamic object updates
• Client Module:
  • Server interface, user program interface, caching, LPV visibility, user motion prediction
• Transparent Serving:
  • Open SceneGraph objects; add/modify remote objects in the local scene graph
• Dynamic Objects:
  • Server informs clients of changes; lazy updates
Geometry Server Design
[Architecture diagram: the server module (scene database, visibility culler, predictor, client tracker, client notifier, client interfaces, server-side processing with LOD, speed-based and texture optimizers, network module) connected over the network to the client module (server-side communication, client cache of vertex & texture data, renderer, control program / user interface, user program)]
Quantifying System Performance
• Walkthrough performance is directly proportional to achieved quality and frame rate
• Network performance is directly proportional to bandwidth utilization
• We define τ_qual and τ_net empirically
• We use simple linear models, with scope for improved analysis in the future
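One hedged reading of these bullets as formulas (the actual definitions of τ_qual and τ_net are fixed empirically in the paper; this only restates the stated proportionalities):

```latex
% Q: achieved rendering quality, F: achieved frame rate,
% B_used / B_avail: fraction of the available bandwidth actually utilized
\[
\tau_{\mathrm{qual}} \;\propto\; Q \cdot F,
\qquad
\tau_{\mathrm{net}} \;\propto\; \frac{B_{\mathrm{used}}}{B_{\mathrm{avail}}}
\]
```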
LPV Scheme
[Diagram: cached tiles A–I around the viewer, sorted into an X-List and a Y-List by their x and y positions relative to the viewer (Xl, Xv, Xr; Yl, Yv, Yr)]
Requirements
• Content optimization based on available bandwidth and client capabilities
• High-quality rendering
• Adaptability to changing latencies
• Progressive refinement / graceful degradation
• Continuous connection monitoring
Tile Stitching
• Smooth transition between level l and l+1
• Finally, we stitch the corners of each tile to remove discontinuities