
CAP4730: Computational Structures in Computer Graphics

This outline covers the goal of visible surface determination, the role of surface normals, backface culling, depth buffers, BSP trees, and more in computer graphics. Learn about optimizing computations and algorithms for quick, accurate surface rendering.

Presentation Transcript


  1. CAP4730: Computational Structures in Computer Graphics Visible Surface Determination

  2. Outline • The goal of visible surface determination • Normals • Backface Culling • Depth Buffer • BSP Trees • Determining if something isn’t in the view frustum (research topics)

  3. Goal of Visible Surface Determination To draw only the surfaces (triangles) that are visible, given a view point and a view direction

  4. Three reasons to not draw something • 1. It isn’t in the view frustum • 2. It is “back facing” • 3. Something is in front of it (occlusion) • We need to do this computation quickly. How quickly?

  5. Surface Normal • Surface Normal - a vector perpendicular to the surface • Three non-collinear points (the vertices of a triangle) also describe a plane. The normal is the vector perpendicular to this plane.

  6. Normals

  7. How do we compute a normal? • Q: Given a triangle with vertices V0, V1, V2, how do we compute a normal? A: N = (V1 - V0) X (V2 - V0) • But the cross product is anti-commutative: (V1 - V0) X (V2 - V0) != (V2 - V0) X (V1 - V0), so the order of the operands matters.
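
A minimal C++ sketch of this computation, using a hypothetical Vec3 helper type (not from the slides):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    // Anti-commutative: cross(a, b) == -cross(b, a), so operand order matters.
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Unit normal of a triangle with counterclockwise vertex order V0, V1, V2.
Vec3 triangleNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    Vec3 c = cross(sub(v1, v0), sub(v2, v0));               // (V1 - V0) x (V2 - V0)
    float len = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);
    return { c.x / len, c.y / len, c.z / len };             // assumes a non-degenerate triangle
}
```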

  8. Vertex Order • Vertex order matters. By convention, the side from which the vertices appear in counterclockwise order is labelled the “front” of the triangle. Think: right-handed coordinate system.

  9. What do the normals tell us? Q: How can we use normals to tell us which “face” of a triangle we see?

  10. Examine the angle between the normal N and the view direction V: the triangle is front facing if V · N < 0

  11. Viewing Coordinates If we are in viewing coordinates, how can we simplify our comparison? Think about the different components of the normals you want and don’t want.

  12. Backface Culling • Before scan converting a triangle, determine if it is facing you • Compute the dot product between the view vector (V) and the triangle normal (N) • In viewing coordinates this simplifies to examining only the z component of the normal • If Nz < 0 then it is a front-facing triangle, and you should scan convert it • What surface visibility problems does this solve? Not solve? • Review the OpenGL code (sketched below)
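
A sketch of both forms of the test, assuming the hypothetical Vec3 type above; the OpenGL state calls are the standard way to let the pipeline do the culling:

```cpp
#include <GL/gl.h>

// Manual test in viewing coordinates: the view vector reduces to the z axis,
// so only the z component of the normal matters (front facing when Nz < 0
// under the slides' convention).
bool isFrontFacing(const Vec3& normal) {
    return normal.z < 0.0f;
}

// Equivalent OpenGL state: cull back faces, with counterclockwise winding
// defining the front face.
void enableBackfaceCulling() {
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
}
```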

  13. Multiple Objects • If we want to draw multiple objects, we can sort them in z and draw back to front. What are the advantages? Disadvantages? • Called the Painter’s Algorithm (or splatting).

  14. Painter’s Algorithm Subtleties • What do we mean by “sort in z”? That is, for a triangle, what is its representative z value? • Minimum z • Maximum z • Polygon’s centroid • Work cost = sort + draw • We still use the Painter’s Algorithm for blended objects (discussed in the Blending Lesson) • An object space visibility algorithm
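
A sketch of the sort-and-draw step in C++, using the polygon centroid as the representative z (one of the choices above); Triangle and drawTriangle are assumed placeholders, and Vec3 is the helper from the earlier sketch:

```cpp
#include <algorithm>
#include <vector>

struct Triangle { Vec3 v0, v1, v2; };

void drawTriangle(const Triangle& t);   // assumed: scan converts one triangle

float centroidZ(const Triangle& t) {
    return (t.v0.z + t.v1.z + t.v2.z) / 3.0f;
}

// Painter's Algorithm: sort back to front, then draw in that order so nearer
// triangles overwrite farther ones.  Work cost = sort + draw; it breaks down
// for cyclically overlapping or intersecting triangles.
void painterDraw(std::vector<Triangle> tris) {
    // Assumes z increases away from the viewer (as in normalized screen
    // coordinates), so the largest representative z is drawn first.
    std::sort(tris.begin(), tris.end(), [](const Triangle& a, const Triangle& b) {
        return centroidZ(a) > centroidZ(b);
    });
    for (const Triangle& t : tris) drawTriangle(t);
}
```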

  15. Side View

  16. Side View - What is a solution?

  17. Even Worse… Why?

  18. Painter’s Algorithm • Pros: No extra memory; Relatively fast; Easy to understand and implement • Cons: Precision issues (and additional work to handle them); Sort stage; Intersecting objects

  19. Depth Buffers Goal: We want to only draw something if it appears in front of what is already drawn. What does this require? Can we do this on a per object basis?

  20. Depth Buffers • We can’t do it object based; it must be image based. What do we know about the (x, y, z) points where the objects overlap? Remember our “eye” or “camera” is at the origin of our view coordinates. What does that mean we need to store?

  21. Side View

  22. Algorithm • We need to have an additional value for each pixel that stores the depth value. • What is the data type for the depth value? • How much memory does this require? • Playstation 1 had 2 MB. • The first 512 x 512 framebuffer cost $50,000 • Called Depth Buffering or Z buffering

  23. Depth Buffer Algorithm • Begin frame • Clear color • Clear depth to z = z_max • Draw triangles • When scan converting: if the new pixel’s z is less than the z value stored at the pixel, set the color and store the new z at the pixel • What does it mean if the new pixel’s z is greater than the stored z? • Why do we clear the depth buffer? • Now we see why it is sometimes called the z buffer
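
A minimal software sketch of this per-pixel test, assuming a hypothetical framebuffer with one color and one depth value per pixel:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DepthFrameBuffer {
    int width = 0, height = 0;
    std::vector<uint32_t> color;   // packed RGBA, one entry per pixel
    std::vector<float>    depth;   // window-space depth, one entry per pixel

    // Begin frame: clear color to the background and depth to z_max.
    void clear(uint32_t background) {
        std::fill(color.begin(), color.end(), background);
        std::fill(depth.begin(), depth.end(), 1.0f);   // z_max for depth in [0, 1]
    }

    // Called for every pixel produced while scan converting a triangle.
    void writePixel(int x, int y, float zNew, uint32_t c) {
        int i = y * width + x;
        if (zNew < depth[i]) {     // the new fragment is in front of what is stored
            depth[i] = zNew;
            color[i] = c;
        }
        // else: something already drawn is closer, so the fragment is discarded
    }
};
```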

  24. Computing the new pixel’s z • Q: We can compute z_nsc (z in normalized screen coordinates) at the vertices, but what is z_nsc as we scan convert? • A: We interpolate z_nsc while we scan convert, too!
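
For example, across a single scanline span the interpolation is just a linear step in z, exactly like interpolating color. A sketch using the DepthFrameBuffer above (the span endpoints are assumed to come from edge interpolation):

```cpp
// Interpolate z linearly across one scanline span while scan converting.
void scanSpan(DepthFrameBuffer& fb, int y, int xLeft, int xRight,
              float zLeft, float zRight, uint32_t c) {
    float dz = (xRight > xLeft) ? (zRight - zLeft) / float(xRight - xLeft) : 0.0f;
    float z = zLeft;
    for (int x = xLeft; x <= xRight; ++x) {
        fb.writePixel(x, y, z, c);   // depth test happens per pixel
        z += dz;
    }
}
```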

  25. Metrics for Visibility Algorithms • Running Time - 1 extra compare per pixel • Storage Overhead - 1 extra field per pixel • Overdraw - we still have to scan convert all triangles • Memory Bandwidth - how much we have to retrieve from memory; this increases • Precision - let’s examine the possible values of z and what they mean

  26. Z Buffer Precision • What does the number of bits in a depth buffer element mean? • The mapping of z from eye space to normalized screen space is not linear, so we do not have the same precision across z (we divided by z). • In fact, half of our precision lies between z = 0 and z = 0.5. What does this mean? What happens if we do NOT have enough precision?
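
The nonlinearity follows from the standard perspective depth mapping (depth varies with 1/z). A small C++ sketch, with near = 1 and far = 100 chosen purely as illustrative values:

```cpp
#include <cstdio>

// Map the eye-space distance d in front of the camera to window depth in [0, 1]
// using the standard perspective projection; depth varies with 1/d, so most of
// the representable values land near the near plane.
float windowDepth(float d, float n, float f) {
    float zNdc = (f + n) / (f - n) - (2.0f * f * n) / ((f - n) * d);
    return 0.5f * zNdc + 0.5f;
}

int main() {
    float n = 1.0f, f = 100.0f;                       // illustrative near/far planes
    float samples[] = { 1.0f, 2.0f, 10.0f, 50.0f, 100.0f };
    for (float d : samples)
        std::printf("eye distance %6.1f -> window depth %.4f\n", d, windowDepth(d, n, f));
    // With n = 1 and f = 100, d = 2 already maps to about 0.505: roughly half
    // of the depth range is spent on the first two units in front of the camera.
    return 0;
}
```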

  27. Z Fighting • If we do not have enough precision in the depth buffer, we cannot determine which fragment should be “in front”. • What does this mean for the near and far planes? • We want them to approximate our view volume as closely as possible.

  28. Z Fighting Zoomed In Run Demo

  29. Don’t forget • Even in 1994, memory wasn’t cheap: a 1024 x 768 buffer of 16-bit depth values needs about 1.6 MB of additional memory. • Depth buffers weren’t common until relatively recently because of this. • Since we have to draw every triangle, fill rate goes UP. Current graphics cards reach fill rates of many billions of pixels per second. • An image space algorithm • Let’s review the OpenGL code (sketched below)
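
The OpenGL side is a few lines of standard state setup (the windowing system must also be asked for a depth buffer when the context is created); a minimal sketch:

```cpp
#include <GL/gl.h>

// Standard OpenGL depth buffering: enable the test once, clear depth each frame.
void setupDepthTest() {
    glEnable(GL_DEPTH_TEST);   // enable the per-fragment depth comparison
    glDepthFunc(GL_LESS);      // keep a fragment only if its depth is smaller
    glClearDepth(1.0);         // the "z_max" value written by the clear
}

void beginFrame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear color and depth together
}
```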

  30. Depth Buffer Algorithm • Pros: Easy to understand and implement; per-pixel “correct” answer; no preprocess; draw objects in any order; no need to subdivide objects • Cons: Z precision; additional memory; Z fighting

  31. BSP Trees (Fuchs et al., 1980) • Binary Space Partitioning • Used by Doom and most games before depth buffers (circa 1994-95) • Given a world, we want to build a data structure that, given any viewpoint, can return a sorted list of objects • What assumptions are we making? • Note, what happens in those “old” games like Doom?

  32. BSP Trees • Two stages: • preprocess - done once, “offline” • runtime - what we do per frame • Draw parallels to Doom • Since this is easier in 2D, note that the “old” FPS games were not truly 3D (Doom’s BSP is built in 2D).

  33. BSP Algorithm • For a viewpoint, determine which side of the node’s splitting plane it sits on. • Recursively draw the objects on the far half of the tree, then the near half: • farside.draw(viewpoint) • nearside.draw(viewpoint) • Intuition - we draw things farther away first • Is this an image space or object space algorithm?
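
A sketch of that traversal, assuming a hypothetical BSPNode that stores a splitting plane, the polygons lying on it, and two children; Vec3, Triangle, and drawTriangle are the placeholders from the earlier sketches:

```cpp
#include <vector>

struct Plane { Vec3 normal; float d; };          // plane: normal . p + d = 0

float signedDistance(const Plane& pl, const Vec3& p) {
    return pl.normal.x * p.x + pl.normal.y * p.y + pl.normal.z * p.z + pl.d;
}

struct BSPNode {
    Plane splitter;
    std::vector<Triangle> onPlane;   // polygons coplanar with the splitting plane
    BSPNode* front = nullptr;        // subtree on the positive side of the plane
    BSPNode* back  = nullptr;        // subtree on the negative side of the plane

    // Painter's-style traversal: draw the far side, then the polygons on the
    // splitting plane, then the near side, so nearer geometry is drawn last.
    void draw(const Vec3& viewpoint) const {
        bool viewerInFront = signedDistance(splitter, viewpoint) >= 0.0f;
        const BSPNode* farSide  = viewerInFront ? back  : front;
        const BSPNode* nearSide = viewerInFront ? front : back;

        if (farSide) farSide->draw(viewpoint);
        for (const Triangle& t : onPlane) drawTriangle(t);
        if (nearSide) nearSide->draw(viewpoint);
    }
};
```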

  34. BSP Trees • Pros: The preprocess step means fast determination of what we can and can’t see; Works in 3D (Quake 1); Shares the Painter’s Algorithm’s pros • Cons: Still has intersecting object problems; Requires a static scene

  35. Determining if something is viewable • View frustum culling (football example) • Cells and Portals • definitions • cell • portal • preprocess step • runtime computation • where do we see it? • Quake3
