
Ray Space Factorization for From-Region Visibility

A paper discussing an algorithm to reduce network traffic by sending only visible geometry to clients for 3D scene rendering. Presented by Alexandre Mattos.


Presentation Transcript


  1. Ray Space Factorization for From-Region Visibility Authors: Tommer Leyvand, Olga Sorkine, Daniel Cohen-Or Presenter: Alexandre Mattos

  2. Goal • Real-time walkthroughs of large 3D scenes • Server contains all world geometry and needs to transmit it to clients • Only send geometry that is visible to client • Reduce network traffic • Client needs to compute what geometry to render

  3. Strategy • Point-wise visibility • What user can see from their exact current location • Needs to be calculated every time player moves • Will not work due to network latency • Might not work on client either

  4. Strategy • Divide scene into view cells • As user moves around, calculate visible geometry for that view cell and adjacent cells

  5. From-Region Visibility • Given a view cell, compute what is visible from that view cell • An object is visible if there is at least one ray exiting the view cell that intersects that object

  6. From-Region Visibility

  7. Assumptions • Scenes are largely 2.5D + ε • Not much vertical complexity • Example – skyscrapers of varying heights • Will first explain the algorithm assuming 2.5D

  8. Dividing Up the Problem • Paper splits the problem into easier-to-solve vertical and horizontal components • Determine if objects occlude each other vertically or horizontally and then combine the results • Build K-d Tree over entire scene • Allows front to back traversal of scene
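
A minimal sketch of the front-to-back traversal mentioned above, assuming a simple two-field node layout (`KdNode` and its fields are illustrative names, not the paper's implementation):

```python
# Hypothetical minimal K-d tree with front-to-back traversal relative to a
# viewpoint (or view-cell center). Leaves hold geometry; inner nodes split
# space on one axis.

class KdNode:
    def __init__(self, axis=None, split=None, left=None, right=None, objects=None):
        self.axis = axis              # splitting axis (0 = x, 1 = y); None for a leaf
        self.split = split            # splitting coordinate
        self.left = left              # child on the side < split
        self.right = right            # child on the side >= split
        self.objects = objects or []  # geometry stored in a leaf

def traverse_front_to_back(node, viewpoint, visit):
    """Visit leaves nearest the viewpoint first, enabling early occlusion culling."""
    if node is None:
        return
    if node.axis is None:             # leaf: hand its geometry to the callback
        visit(node.objects)
        return
    # Descend first into the child on the viewpoint's side of the split plane.
    if viewpoint[node.axis] < node.split:
        near, far = node.left, node.right
    else:
        near, far = node.right, node.left
    traverse_front_to_back(near, viewpoint, visit)
    traverse_front_to_back(far, viewpoint, visit)
```

Because near children are visited first, geometry already drawn into the occlusion map can cull everything that follows it in traversal order.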

  9. View Cell Parameterization • Define two concentric squares • One is view cell • One is outside view cell • Parameterize inner and outer square with S and T respectively

  10. Rays (S,T) • Can define all rays (view directions) leaving view cell with (S, T)
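
A sketch of the (S, T) parameterization under stated assumptions: both squares are axis-aligned and centered at the origin, and the perimeter parameter walks the boundary starting on the right edge. The function names and the walk convention are illustrative, not the paper's.

```python
import math

def exit_point(origin, direction, half):
    """First boundary crossing of a ray leaving the square [-half, half]^2."""
    best = math.inf
    for axis in (0, 1):
        if direction[axis] == 0:
            continue
        for face in (-half, half):
            t = (face - origin[axis]) / direction[axis]
            other = origin[1 - axis] + t * direction[1 - axis]
            # Accept only forward hits that land on the square's boundary.
            if t > 0 and abs(other) <= half + 1e-9:
                best = min(best, t)
    return (origin[0] + best * direction[0], origin[1] + best * direction[1])

def perimeter_param(p, half):
    """Map a boundary point to a perimeter parameter in [0, 4)."""
    x, y = p
    eps = 1e-6
    if abs(x - half) < eps:
        return (y + half) / (2 * half)           # right edge
    if abs(y - half) < eps:
        return 1 + (half - x) / (2 * half)       # top edge
    if abs(x + half) < eps:
        return 2 + (half - y) / (2 * half)       # left edge
    return 3 + (x + half) / (2 * half)           # bottom edge

def ray_to_st(origin, direction, r_inner, r_outer):
    """Map a ray leaving the view cell to its (S, T) parameter pair."""
    s = perimeter_param(exit_point(origin, direction, r_inner), r_inner)
    t = perimeter_param(exit_point(origin, direction, r_outer), r_outer)
    return s, t
```

For example, a ray from the cell center straight along +x crosses the middle of both right edges, so it maps to (0.5, 0.5) under this convention.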

  11. Horizontal Component • Orthographically project all geometry onto the ground • Geometry has a mapping to parameter space

  12. Key Insight • Render geometry in parameter space front to back • If parameter space for geometry is already rendered, then geometry is occluded
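
The key insight can be sketched in software as a boolean occlusion map over a discretized (S, T) grid; class name and resolution are illustrative, not the paper's GPU implementation:

```python
# Software occlusion map over discretized (S, T) parameter space. Geometry is
# processed front to back; if every cell of its footprint is already covered,
# the geometry is occluded, otherwise it is visible and extends the map.

class OcclusionMap:
    def __init__(self, res=64):
        self.res = res
        self.covered = [[False] * res for _ in range(res)]

    def is_occluded(self, footprint):
        """footprint: iterable of (s_index, t_index) cells the geometry maps to."""
        return all(self.covered[s][t] for s, t in footprint)

    def render(self, footprint):
        """Mark the geometry's parameter-space footprint as covered."""
        for s, t in footprint:
            self.covered[s][t] = True
```

In a front-to-back pass one would test each object with `is_occluded` and, if it is visible, add it to the output set and call `render` so it can occlude later geometry.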

  13. Vertical Component • (S, T) define a vertical plane • The intersection of this plane with a triangle is a line segment, which casts a directional umbra

  14. Vertical Component • Object is occluded if it is contained within vertical umbra

  15. Vertical Component • One way to solve the problem: • Traverse scene front to back and maintain aggregated umbra

  16. Video • In the 2.5D case only • Objects are visible if the slope of their umbra is larger than that of the current aggregated umbra
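
In 2.5D this slope test per (S, T) slice reduces to a horizon-style scan; a minimal sketch, assuming ground-level objects described by (distance, height) pairs (not the paper's exact formulation):

```python
# 2.5D vertical test for one (S, T) slice: the aggregated umbra is a single
# slope, and an object rising above that slope is visible and becomes the new
# umbra boundary. Objects must be supplied front to back.

def slope(distance, height):
    """Slope of the line from the view cell (ground level) to an object's top edge."""
    return height / distance

def process_slice(objects_front_to_back):
    """objects: list of (distance, height); returns indices of visible objects."""
    umbra_slope = 0.0                 # nothing occluded yet
    visible = []
    for i, (d, h) in enumerate(objects_front_to_back):
        s = slope(d, h)
        if s > umbra_slope:           # pokes above the aggregated umbra
            visible.append(i)
            umbra_slope = s           # the object extends the umbra
        # else: fully contained in the umbra -> occluded
    return visible
```

For instance, a short object at distance 2 behind a same-height object at distance 1 has a smaller slope and is culled, while a taller object at distance 2 survives.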

  17. Putting It Together • For all geometry, render it in 3D as (S, T, α), where α is the angle of the umbra at that point • Using graphics hardware, we can do occlusion tests for all geometry

  18. Hardware Implementation • Disable Z-buffer updates and render geometry • If a single pixel renders, it is visible • To update occlusion map, render geometry with Z-buffer updates enabled
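
The two-pass hardware pattern above can be emulated in software with a tiny depth buffer; this is an illustrative sketch of the test/update scheme, not real GPU code (on hardware the test pass would use an occlusion query with depth writes masked):

```python
import math

class DepthBuffer:
    """Minimal emulated Z-buffer: smaller z is closer."""
    def __init__(self, w, h):
        self.depth = [[math.inf] * w for _ in range(h)]

    def draw(self, pixels, write):
        """pixels: iterable of (x, y, z). Returns how many passed the depth test."""
        passed = 0
        for x, y, z in pixels:
            if z < self.depth[y][x]:  # depth test: closer than what is stored
                passed += 1
                if write:             # Z-buffer updates enabled
                    self.depth[y][x] = z
        return passed

def is_visible(buf, pixels):
    # Pass 1: Z-writes disabled; a single passing pixel means "visible".
    if buf.draw(pixels, write=False) > 0:
        # Pass 2: re-render with Z-writes enabled to update the occlusion map.
        buf.draw(pixels, write=True)
        return True
    return False
```

Geometry drawn behind an already-rendered occluder fails the test pass entirely and is reported occluded without touching the buffer.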

  19. Algorithm • Traverse K-d tree

  20. Extension from 2.5D to 3D • Comparing a single α is no longer sufficient

  21. 3D Umbra • Need to keep four angles (α1, α2, α3, α4) to represent the umbra uniquely

  22. Merging Umbrae • Many cases to handle • Umbrae may be disjoint
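
One way to picture the merge cases, simplifying each umbra in a vertical slice to a single angular interval (lo, hi) rather than the paper's four-angle representation; this sketch is illustrative only:

```python
# Conservative umbra merge in one vertical slice. Overlapping intervals merge
# exactly into their union; disjoint intervals cannot be represented by a
# single interval, so we conservatively keep the wider one (which may report
# some occluded geometry as visible, but never the reverse).

def merge_umbrae(a, b):
    """a, b: (lo, hi) angular intervals; returns a single conservative umbra."""
    if a[1] >= b[0] and b[1] >= a[0]:            # intervals overlap -> exact union
        return (min(a[0], b[0]), max(a[1], b[1]))
    # Disjoint case: keep the wider umbra.
    return a if (a[1] - a[0]) >= (b[1] - b[0]) else b
```

Dropping the smaller disjoint umbra is what makes the single-umbra scheme conservative; keeping more umbrae (as the next slide discusses) tightens the result.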

  23. Hardware Implementation • For all geometry render it in 3D as (S, T, V) where V = (α1, α2, α3, α4) • Use a pixel/fragment shader that checks whether a pixel is visible based on V • Pass V values to graphics card in a buffer • Render (S, T, X) where X is index into buffer for V

  24. Hardware Limitations • Can only maintain one aggregated umbra per vertical slice • Pack two 16-bit floats into one 32-bit float to allow two aggregated umbrae • Use many buffers to store multiple umbrae • Paper claims that one umbra is sufficient because umbrae merge rapidly
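
The packing trick can be sketched with Python's half-precision `struct` support; the function names are illustrative, and on real hardware the same idea would be expressed in a shader:

```python
import struct

def pack_two_halves(a, b):
    """Pack two floats as IEEE 754 half precision into one 32-bit word."""
    raw = struct.pack('<ee', a, b)       # two 16-bit halves, little endian
    return struct.unpack('<I', raw)[0]   # reinterpret as one 32-bit unsigned int

def unpack_two_halves(word):
    """Recover the two half-precision values from a packed 32-bit word."""
    raw = struct.pack('<I', word)
    return struct.unpack('<ee', raw)
```

Values exactly representable in half precision (e.g. 1.5 and -0.25) round-trip exactly; arbitrary angles incur half-precision rounding, which is part of why the scheme stays conservative.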

  25. Results • Buildings are 9–12 units tall and rotated at most 30 degrees • Box model contains random boxes in random orientations • Vienna model

  26. Results • City model: half-umbra vs. full-umbra • Box model: resolution effect (VS is 10,072 triangles) • Vienna model

  27. Discussion • Algorithm is sensitive to how much it has to render • Works well for dense scenes because the occlusion map quickly covers the entire scene • For the minor cost of rendering one extra K-d tree node, they can double the model size • If the visibility set (VS) is large, then a lot of geometry is rendered and the algorithm slows down • Can calculate the PVS over several frames

  28. Discussion • No tests were done for sparse models • Umbrae will not converge rapidly • Many K-d tree nodes need to be tested • The algorithm prefers horizontal occlusion over vertical occlusion • Horizontal occlusion is exact up to rendering resolution • Vertical occlusion is conservative, depending on how many umbrae are used
