CG Algorithms and Implementation: “From Vertices to Fragments” • Angel: Chapter 6 • OpenGL Programming and Reference Guides, other sources • ppt from Angel, AW, van Dam, etc. • CSCI 6360/4360
Introduction: Implementation -- Algorithms • Angel’s chapter title: “From Vertices to Fragments” • … and in the chapter are “various” algorithms • From one perspective of pipeline: • Next steps in viewing pipeline: • Clipping • Eliminating objects (and parts of objects) that lie outside the view volume, and so are not visible in the image • Rasterization • Produces fragments from remaining objects • Hidden surface removal (visible surface determination) • Determines which object fragments are visible
Introduction: Implementation -- Algorithms • Need to consider another perspective, as well • Next steps in viewing pipeline: • Clipping • Rasterization • Produces fragments from remaining objects • Hidden surface removal (visible surface determination) • Determines which object fragments are visible • Show objects (surfaces) not blocked by objects closer to camera • And next week …
Introduction: Implementation -- Algorithms • Next steps in viewing pipeline: • Clipping • Rasterization • Hidden surface removal (visible surface determination) • Will consider the above in some detail in order to give a feel for the computational cost of these elements • “in some detail” = algorithms for implementing • … algorithms that are efficient • Same algorithms for any standard API • Will see different algorithms for same basic tasks
About Implementation Strategies • Angel: At most abstract level … • Start with application program generated vertices • Do stuff like transformation, clipping, … • End up with pixels in a frame buffer • Can consider two basic strategies (see the sketch after this slide) • Will see again in hidden surface removal • Object-oriented -- An object at a time … • For each object • Render the object • Each object has a series of steps • Image-oriented -- A pixel at a time … • For each pixel • Assign a frame buffer value • Such scanline-based algorithms exploit the fact that in images values from one pixel to another often don’t change much • Coherence • So, can use value of a pixel in calculating value of next pixel • Incremental algorithm
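A minimal C sketch (not from the slides) contrasting the two loop structures; Object, WIDTH, HEIGHT, and the framebuffer array are hypothetical placeholders standing in for real scene and display data.

    /* Hedged sketch: the two rendering strategies as loop structures. */
    #include <stddef.h>

    #define WIDTH  640
    #define HEIGHT 480

    typedef struct { int id; /* vertices, color, ... */ } Object;

    static unsigned framebuffer[HEIGHT][WIDTH];

    /* Object-oriented: one object at a time; each object walks its own pipeline
       (transform, clip, rasterize) and writes whatever pixels it covers. */
    static void render_object_order(const Object *objs, size_t n)
    {
        for (size_t i = 0; i < n; ++i) {
            /* render objs[i]: transform, clip, rasterize, write its pixels */
            (void)objs[i];
        }
    }

    /* Image-oriented: one pixel at a time; coherence lets each pixel's value
       be computed incrementally from its neighbor's. */
    static void render_image_order(const Object *objs, size_t n)
    {
        (void)objs; (void)n;
        for (int y = 0; y < HEIGHT; ++y)
            for (int x = 0; x < WIDTH; ++x)
                framebuffer[y][x] = 0u;  /* value of the closest fragment at (x, y) */
    }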
Tasks to Render a Geometric Entity (1): Review and Angel Explication • Angel introduces more general terms and ideas than just the OpenGL pipeline … • Recall, chapter title “From Vertices to Fragments” … and even pixels • From definition in user program to (possible) display on output device • Modeling, geometry processing, rasterization, fragment processing • Modeling • Performed by application program, e.g., create sphere polygons (vertices) • Angel example of spheres and creating data structure for OpenGL use • Product is vertices (and their connections) • Application might even reduce “load”, e.g., no back-facing polygons
Tasks to Render a Geometric Entity (2): Review and Angel Explication • Geometry Processing • Works with vertices • Determine which geometric objects appear on display • 1. Perform clipping to view volume • Changes object coordinates to eye coordinates • Transforms vertices to normalized view volume using projection transformation • 2. Primitive assembly • Clipping an object (and its surfaces) can result in new surfaces (e.g., shorter line, polygon of different shape) • Working with these “new” elements to “re-form” (clipped) objects is primitive assembly • Necessary for, e.g., shading • 3. Assignment of color to vertex • Modeling and geometry processing called “front-end processing” • All involve 3-d calculations and require floating-point arithmetic
Tasks to Render a Geometric Entity (3): Review and Angel Explication • Rasterization • Only x, y values needed for (2-d) frame buffer • … as the frame buffer is what is displayed • Rasterization, or scan conversion, determines which fragments are displayed (put in frame buffer) • For polygons, rasterization determines which pixels lie inside the 2-d polygon determined by the projected vertices • Colors • Most simply, fragments (and their pixels) are determined by interpolation of vertex shades & put in frame buffer • Output of rasterizer is in units of the display (window coordinates)
Tasks to Render a Geometric Entity (4): Review and Angel Explication • Fragment Processing • Colors • OpenGL can merge color (and lighting) results of rasterization stage with geometric pipeline • E.g., shaded, texture-mapped polygon (next chapter) • Lighting/shading values of vertex merged with texture map • For translucence, must allow light to pass through fragment • Blending of colors uses a combination of fragment colors and the colors already in the frame buffer • e.g., multiple translucent objects • Hidden surface removal performed fragment by fragment using depth information • Anti-aliasing also dealt with at this stage
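A minimal sketch of the standard “over” blend commonly used for translucency: the new value is a weighted mix of the incoming fragment color and the color already in the frame buffer. The function name and per-channel floats are illustrative assumptions, not the course’s code.

    /* Hedged sketch: standard "over" blending per color channel.
       src_alpha in [0, 1]; the usual source-alpha / one-minus-source-alpha blend. */
    float blend_over(float src, float dst, float src_alpha)
    {
        return src_alpha * src + (1.0f - src_alpha) * dst;
    }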
Efficiency and Algorithms • For cg illumination/shading, saw how the role of efficiency drove algorithms • Phong shading is “good enough” to be perceived as “close enough” to real world • Close attention to algorithmic efficiency • Similarly, for geometric processing, which is performed constantly, efficiency is a prime consideration • Will consider efficient algorithms for: • Clipping • Line drawing • Visible surface determination
Recall, Clipping … • Scene's objects are clipped against clip space bounding box • Eliminates objects (and pieces of objects) not visible in image • Efficient clipping algorithms exist for homogeneous clip space • Perspective division divides all coordinates by the homogeneous coordinate, w • Clip space becomes Normalized Device Coordinate (NDC) space after perspective division
Clipping: Efficiency Matters • Clipping is performed many times in cg pipeline • Depending on algorithms, number of lines, polygons • Different kinds of clipping • 2D against clipping window, 3D against clipping volume • Easy for line segments of polygons • Polygons can be handled in other ways, too • E.g., bounding boxes • Hard for curves and text • Convert to lines and polygons first • Will see, 1st example of a cg algorithm • Designed for very efficient execution • Efficiency includes: • Multiplication vs. addition • Use of Boolean vs. arithmetic operations • Integer vs. real • Space complexity • Even the “constant” in time complexities
Clipping - 2D Line Segments • Could clip using brute force • Compute intersections with all sides of clipping window • Computing intersections is expensive • To explicitly find intersection, essentially solve y = mx + b • Use the line's endpoints to find slope and intercept • Then test whether the intersection lies on the window edge • Requires multiplication/division
Cohen-Sutherland Algorithm: An Example Clipping Algorithm • Cohen-Sutherland clipping algorithm considers all different cases for where line may be wrt clipping region • E.g., will first eliminate as many cases as possible without computing intersections • E.g., both ends of line outside (line C-D), or inside (line A-B) • Again, computing intersections is expensive • Start with four lines that determine sides of clipping window • As if extending sides, top, and bottom of window out • Will use xmin, xmax, ymin, ymax in algorithm (the lines x = xmin, x = xmax, y = ymin, y = ymax)
Consider Cases: Where Endpoints Are • Based on relationship of endpoints and clipping region (xmin, xmax, ymin, ymax), will define cases • Case 1: • Both endpoints of line segment inside all four lines • Draw (accept) line segment as is • Case 2: • Both endpoints outside all lines and on same side of a line • Discard (reject) the line segment • “trivially reject” • Case 3: • One endpoint inside, one outside • Must do at least one intersection • Case 4: • Both endpoints outside, not on same side • May have part inside • Must do at least one intersection
Defining Outcodes: A Representation for Efficiency • For each line endpoint, define an outcode • One outcode for each endpoint, (x1, y1) and (x2, y2) • 4 bits for each endpoint: b0b1b2b3 • b0 = 1 if y > ymax, 0 otherwise • b1 = 1 if y < ymin, 0 otherwise • b2 = 1 if x > xmax, 0 otherwise • b3 = 1 if x < xmin, 0 otherwise • Examples in red with blue ends at right: • Tedious, but automatic • E.g., left line outcodes: 0000, 0000 • E.g., right line outcodes: 0110, 0010 • Outcodes divide space into 9 regions • Computation of outcode requires at most 4 subtractions • E.g., y1 - ymax • Testing of outcodes can be done with bitwise comparisons (see the sketch below)
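A minimal C sketch of the outcode computation, assuming the bit layout from the slide (b0 = above, b1 = below, b2 = right, b3 = left); the enum names and the packing of the four bits into one int are my own choices, not Angel’s code.

    /* Hedged sketch: compute the 4-bit Cohen-Sutherland outcode for one endpoint. */
    enum { ABOVE = 8, BELOW = 4, RIGHT = 2, LEFT = 1 };   /* b0, b1, b2, b3 */

    int outcode(double x, double y,
                double xmin, double xmax, double ymin, double ymax)
    {
        int code = 0;
        if (y > ymax) code |= ABOVE;   /* b0: above the window    */
        if (y < ymin) code |= BELOW;   /* b1: below the window    */
        if (x > xmax) code |= RIGHT;   /* b2: right of the window */
        if (x < xmin) code |= LEFT;    /* b3: left of the window  */
        return code;
    }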
Now will consider each case (Cases 1-4 above) using outcodes …
Using Outcodes, Case 1: Example from Angel • Based on relationship of endpoints and clipping region (xmin, xmax, ymin, ymax), will define cases • Case 1: • Both endpoints of line segment inside all four lines • Draw (accept) line segment as is • AB: outcode(A) = outcode(B) = 0 • A = 0000, B = 0000 • Accept line segment
Using Outcodes, Case 2: Example from Angel • Case 2: • Both endpoints outside all lines and on same side of a line • Discard (reject) the line segment • “trivially reject” • EF: outcode(E) && outcode(F) ≠ 0 • && is bitwise logical AND • E = 0010, F = 0010 • Both outcodes have 1 bit in same place • Line segment is outside of corresponding side of clipping window • Reject – typically, most frequent case
Using Outcodes, Case 3: Example from Angel • Case 3: • One endpoint inside, one outside • Must do at least one intersection • CD: outcode(C) = 0, outcode(D) ≠ 0 • C = 0000, D = anything else • Here, D = 0010 • Do need to compute intersection • Location of 1 in outcode(D) determines which edge to intersect with • So, “shortened” line C – D’ is what is displayed • Note: • If there were a segment from A to a point in a region with 2 ones in outcode, might have to do two intersections
Using Outcodes, Case 4: Example from Angel • Case 4: • Both endpoints outside, not on same side • May have part inside • Must do at least one intersection • GH, IJ (same outcodes): neither outcode is zero, but && of the two outcodes = zero • G (and I) = 0001, H (and J) = 1000 • Test for intersection • If found, shorten line segment by intersecting with one of the sides of the window • Compute outcode of intersection (new endpoint of shortened line segment) • (Recursively) re-execute algorithm (full loop sketched below)
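A minimal sketch of the whole Cohen-Sutherland loop, built on the outcode() helper above. It shortens the segment against one window edge at a time and re-tests, exactly as the cases describe; the function name and in/out parameter style are my own assumptions, not Angel’s code.

    /* Hedged sketch: Cohen-Sutherland line clipping.
       Returns 1 and updates the endpoints if any part is visible, else returns 0. */
    int clip_segment(double *x0, double *y0, double *x1, double *y1,
                     double xmin, double xmax, double ymin, double ymax)
    {
        for (;;) {
            int out0 = outcode(*x0, *y0, xmin, xmax, ymin, ymax);
            int out1 = outcode(*x1, *y1, xmin, xmax, ymin, ymax);

            if ((out0 | out1) == 0) return 1;    /* Case 1: trivially accept */
            if ((out0 & out1) != 0) return 0;    /* Case 2: trivially reject */

            /* Cases 3 and 4: pick an endpoint that is outside, clip it against
               one edge named by a set bit, then loop with the shortened segment. */
            int out = out0 ? out0 : out1;
            double x, y;
            if (out & ABOVE)      { x = *x0 + (*x1 - *x0) * (ymax - *y0) / (*y1 - *y0); y = ymax; }
            else if (out & BELOW) { x = *x0 + (*x1 - *x0) * (ymin - *y0) / (*y1 - *y0); y = ymin; }
            else if (out & RIGHT) { y = *y0 + (*y1 - *y0) * (xmax - *x0) / (*x1 - *x0); x = xmax; }
            else                  { y = *y0 + (*y1 - *y0) * (xmin - *x0) / (*x1 - *x0); x = xmin; }

            if (out == out0) { *x0 = x; *y0 = y; }
            else             { *x1 = x; *y1 = y; }
        }
    }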
Efficiency and Extension to 3D • Very efficient in many applications • Clipping window small relative to size of entire database • Most line segments are outside one or more sides of the window and can be eliminated based on their outcodes • Inefficient when the algorithm has to be re-executed for line segments that must be shortened in more than one step • For 3 dimensions • Use 6-bit outcodes • When needed, clip line segment against the planes
Rasterization • End of geometry pipeline (processing) – • Putting values in the frame buffer (or raster) • E.g., write_pixel(x, y, color) • At this stage, fragments – clipped, colored, etc. at level of vertices – are turned into values to be displayed • (deferring for a moment the question of hidden surfaces and colors) • Essential question is “how to go from vertices to display elements?” • E.g., lines • Algorithmic efficiency is a continuing theme
Drawing Algorithms • As noted, implemented in graphics processor • Used bazillions of times per second • Line, curve, … algorithms • Line is paradigm example • most common 2D primitive - done 100s or 1000s or 10s of 1000s of times each frame • even 3D wireframes are eventually 2D lines • optimized algorithms contain numerous tricks/techniques that help in designing more advanced algorithms • Will develop a series of strategies, towards efficiency
Drawing Lines: Overview • Recall, fundamental “challenge” of computer graphics: • Representing the analog (physical) world on a discrete (digital) device • Consider a very low resolution display: • Sampling a continuous line on a discrete grid introduces sampling errors: the “jaggies” • For horizontal, vertical and diagonal lines all pixels lie on the ideal line: special case • For lines at arbitrary angle, pick pixels closest to the ideal line • Will consider several approaches • But, “fast” will be best
Strategy 1: Really Basic Algorithm • First, the (really) basic algorithm: • Find equation of line that connects 2 points, P = (Px, Py) and Q = (Qx, Qy) • y = mx + b • m = Δy / Δx, where Δx = xend – xstart, Δy = yend – ystart • Starting with the leftmost point P, • increment x by 1 and calculate y = mx + b at each x value • where m = slope, b = y intercept

for x = Px to Qx
    y = round(m * x + b)   // compute y
    write_pixel(x, y)

• This works, but uses computationally expensive operations (multiply) at each step • Perhaps worked for your homework • And do note that when a pixel is turned on, it in fact only approximates the ideal line
Strategy 2: Incremental Algorithm, 1 • So, (really) basic algorithm:

for x = Px to Qx
    y = round(m * x + b)   // compute y
    write_pixel(x, y)

• Can modify basic algorithm to be an incremental algorithm • Use current state of computation in finding next state • i.e., incrementally going toward the solution • Not “recompute” the entire solution, as above – • Not the same computation regardless of where we are • Use partial solution, here, last y value, to find next value • Modify (really) basic algorithm to just add slope, vs. multiply – next slide

m = Δy / Δx        // compute slope (to be added)
y = m * Px + b     // still multiply to get first y value
for x = Px to Qx
    write_pixel(x, round(y))
    y = y + m      // increment y for next value, just by adding

• Make incremental calculations based on preceding step to find next y value • Works here because going one unit to the right, incrementing x by 1 and y by the slope m
Strategy 2: Incremental Algorithm, 2 • Incremental algorithm

m = Δy / Δx        // slope
y = m * Px + b     // first y value
for x = Px to Qx
    write_pixel(x, round(y))
    y = y + m      // increment y for next

• Definite improvement over basic algorithm • Still problems • Still too slow • Rounding to integers takes time • Variables y and m must be real (or fractional binary) because slope is a fraction • Ideally, want just integer operations (see the C sketch below)
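A minimal C version of the incremental (DDA) algorithm above, assuming x1 > x0 and a shallow slope (|m| ≤ 1); write_pixel() is a placeholder for the frame-buffer write, not a real API call.

    /* Hedged sketch: incremental (DDA) line drawing.
       One floating-point add per pixel, but still a round per step. */
    #include <math.h>

    void write_pixel(int x, int y);   /* placeholder, assumed defined elsewhere */

    void dda_line(int x0, int y0, int x1, int y1)
    {
        double m = (double)(y1 - y0) / (double)(x1 - x0);  /* slope, computed once */
        double y = y0;                                     /* first y value        */
        for (int x = x0; x <= x1; ++x) {
            write_pixel(x, (int)lround(y));  /* rounding still costs time   */
            y += m;                          /* incremental: add slope only */
        }
    }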
Strategy 3: Midpoint Line Algorithm • Midpoint line algorithm (MLA) considers that the “ideal” line is in fact approximated on a raster (pixel based) display • Hence, there will be “error” between where the ideal line lies and how it is represented by turning on pixels • Will use amount of “possible error” to decide which pixel to turn on for successive steps • … and will do this by only adding and comparing (=, >, <)
Strategy 3: MLA, 1 • Assume that the (ideal) line's slope is shallow and positive (0 < m < 1) • Other slopes can be handled by suitable reflections about the principal axes • Note we are calculating the “ideal line” and turning on pixels as an approximation • Assume that we have just selected the pixel P at (xp, yp) • Next, must choose between: • pixel to right (pixel E), or • pixel one right and one up (pixel NE) • Let Q be intersection point of the line being scan-converted with the grid line at x = xp + 1 • Note that pixel turned on is not exactly on the ideal line – so, “error”
Strategy 3 - MLA, 2 • Observe on which side of the (ideal) line the midpoint M lies: • E pixel closer to line if midpoint lies above line • NE pixel closer to line if midpoint lies below line • (Ideal) line passes between E and NE • Point closer to point Q must be chosen • Either E or NE • Error: vertical distance between chosen pixel and actual line - always ≤ ½ • Here, algorithm chooses NE as next pixel for line shown • Now, find a way to calculate on which side of line midpoint lies
MLA – Use Equation of Line for Selection • How to choose which pixel, based on M and distance of M from ideal line • Line equation as a function, f(x): • y = m * x + B • y = (dy/dx) * x + B • And line equation as an implicit function: • f(x, y) = a*x + b*y + c = 0 • From above, algebraically (multiply by dx): dx * y = dy * x + B * dx, i.e., dy * x - dx * y + B * dx = 0 • So, algebraically, a = dy, b = -dx, c = B*dx; a > 0 for y0 < y1 • Properties (proof by case analysis): • f(xm, ym) = 0 when any point M is on line • f(xm, ym) < 0 when any point M is above line • f(xm, ym) > 0 when any point M is below line - here • Decision (to choose E or NE) will be based on value of function at midpoint • M is at (xp+1, yp+½), the midpoint between E and NE
MLA, Decision Variable • So, find a way to (efficiently) calculate on which side of line midpoint lies • And that’s what we just saw • E.g., f(xm, ym) < 0 when any point M above line • Decision variable d: • Only need sign (fast) of f (xp+1, yp+1/2) • to see where the line lies, • and then pick nearest pixel. • d = f (xp+1, yp+1/2) • if d > 0 choose pixel NE. • if d < 0 choose pixel E. • if d = 0 choose either one consistently. • Next, how to update d: • On basis of picking E or NE, • figure out the location of M to that pixel, and the corresponding value of d for the next grid line
How to update d, if E was chosen: • M is incremented by one step in the x direction • Recall, f(x, y) = a*x + b*y + c = 0, and rewrite • To get the incremental difference, ΔE, subtract d_old from d_new:

d_new = f(xp + 2, yp + ½) = a(xp + 2) + b(yp + ½) + c
d_old = a(xp + 1) + b(yp + ½) + c

• Derive value of decision variable at next step incrementally without computing f(M) directly (recall “incremental algorithm”): • d_new = d_old + ΔE = d_old + dy • ΔE = a = dy • ΔE can be thought of as the correction, or update factor, to take d_old to d_new • (and this is the insight: “carrying along the error, vs. recalculating”) • d_new = d_old + a • Called “forward difference”
How to update d, if NE was chosen: • M is incremented by one step each in both the x and y directions • d_new = f(xp + 2, yp + 3/2) • d_new = a(xp + 2) + b(yp + 3/2) + c • Subtract d_old from d_new to get the incremental difference • d_new = d_old + a + b • ΔNE = a + b = dy - dx • So, incrementally, • d_new = d_old + ΔNE = d_old + dy - dx
MLA Summary • At each step, algorithm chooses between 2 pixels based on sign of decision variable, d, calculated in previous iteration • Update decision variable, d, by adding either ΔE or ΔNE to old value of d, depending on choice of pixel • Note - First pixel is first endpoint (x0, y0), so can directly calculate the initial value of d for choosing between E and NE • First midpoint is at (x0 + 1, y0 + ½):

F(x0 + 1, y0 + ½) = a(x0 + 1) + b(y0 + ½) + c = a*x0 + b*y0 + c + a + b/2 = F(x0, y0) + a + b/2

• But (x0, y0) is a point on the line, so F(x0, y0) = 0 • Therefore, d_start = a + b/2 = dy - dx/2 • Use d_start to choose the second pixel, etc. • To eliminate the fraction in d_start: redefine F by multiplying it by 2, F(x, y) = 2(ax + by + c) • This multiplies each constant and the decision variable by 2, but does not change the sign (integer-only version sketched below)
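A minimal C sketch of the complete midpoint line algorithm for the first octant (0 ≤ slope ≤ 1, x0 < x1), with everything pre-scaled by 2 so the decision variable stays an integer. It reuses the write_pixel() placeholder from the earlier sketch and is my own arrangement of the steps summarized above.

    /* Hedged sketch: midpoint line algorithm, integer-only inner loop. */
    void midpoint_line(int x0, int y0, int x1, int y1)
    {
        int dy  = y1 - y0, dx = x1 - x0;
        int d   = 2 * dy - dx;          /* d_start = dy - dx/2, scaled by 2           */
        int dE  = 2 * dy;               /* increment when E chosen:  a = dy           */
        int dNE = 2 * (dy - dx);        /* increment when NE chosen: a + b = dy - dx  */
        int x = x0, y = y0;

        write_pixel(x, y);
        while (x < x1) {
            if (d <= 0) {
                d += dE;                /* midpoint on or above the line: choose E */
            } else {
                d += dNE;               /* midpoint below the line: choose NE      */
                ++y;
            }
            ++x;
            write_pixel(x, y);
        }
    }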
Hidden Surface Removal • Or, Visible Surface Determination (VSD)
Recall, Projection … • Projectors • View plane (or film plane) • Direction of projection • Center of projection • Eye, projection reference point
About Visible Surface Determination, 1 • Have been considering models, and how to create images from models • e.g., when viewpoint/eye/COP changes, transform locations of vertices (polygon edges) of model to form image • In fact, projectors are extended from front and back of all polygons • Though only concerned with “front” polygons • Projectors from front (visible) surface only
About Visible Surface Determination, 2 • To form image, must determine which objects in scene are obscured by other objects • Occlusion • Definition of visible surface determination (VSD): • Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the objects are visible • Also called Hidden Surface Removal (HSR)
Visible Surface Determination: Historical notes • Problem first posed for wireframe rendering • doesn’t look too “real” (and in fact is ambiguous) • Solution called “hidden-line (or surface) removal” • Lines themselves don’t hide lines • Lines must be edges of opaque surfaces that hide other lines • Some techniques show hidden lines as dotted or dashed lines for more info • Hidden surface removal appears as one stage
Classes of VSD Algorithms • Different VSD algorithms have advantages and disadvantages: 0. “Conservative” visibility testing: • only trivial reject - does not give final answer • E.g., back-face culling, canonical view volume clipping • Have to feed results to algorithms mentioned below 1. Image precision • resolve visibility at discrete points in image • Z-buffer, scan-line (both in hardware), ray-tracing 2. Object precision • resolve for all possible view directions from a given eye point
Image Precision • Resolve visibility at discrete points in image • Sample model, then resolve visibility • Ray tracing, Z-buffer, scan-line • Operate on display primitives, e.g., pixels, scan-lines • Visibility resolved to the precision of the display • (very) High Level Algorithm:

for (each pixel in image, i.e., from COP to model) {
    1. determine object closest to viewer pierced by projector through pixel
    2. draw pixel in appropriate color
}

• Complexity: • O(n · p), where n = objects, p = pixels, from above for loop • Or just: at each pixel consider all objects and find closest point (Z-buffer sketch below)
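A minimal sketch of the most common image-precision approach, a Z-buffer: for every fragment, keep it only if it is closer than the depth already stored at its pixel. The array sizes, names, and the per-fragment call are illustrative assumptions, not a specific API.

    /* Hedged sketch: depth (Z) buffer for image-precision visibility. */
    #include <float.h>

    #define W 640
    #define H 480

    static float    depth[H][W];   /* closest z seen so far at each pixel */
    static unsigned color[H][W];   /* color of that closest fragment      */

    void clear_depth_buffer(void)
    {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                depth[y][x] = FLT_MAX;   /* "infinitely far" */
    }

    /* Called for every fragment produced by rasterization. */
    void depth_test(int x, int y, float z, unsigned c)
    {
        if (z < depth[y][x]) {      /* nearer than anything drawn here so far */
            depth[y][x] = z;
            color[y][x] = c;
        }
    }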
Object Precision, 1 • Resolve for all possible view directions from a given eye point • Each polygon is clipped by the projections of all other polygons in front of it • Irrespective of view direction or sampling density • Resolve visibility exactly, then sample the results • Invisible surfaces are eliminated and visible sub-polygons are created • e.g., variations on painter's algorithm, polygons clipping polygons, 3-D depth sort, BSP: binary-space partitions
Object Precision, 2 • (very) High Level Algorithm:

for (each object in the world) {
    1. determine parts of object whose view is unobstructed by other parts of it or any other object
    2. draw those parts in the appropriate color
}

• Complexity: • O(n²), where n = number of objects, from above for loop • Or just: must consider all objects (visibility) interacting with all others • (but, even when n << p, “steps” are longer, as a constant factor)
“Ray Casting” for VSD • Recall ray tracing • For each pixel, follow the path of “light” back to the source, considering surface properties (smoothness, color) • “Color” of pixel is result • For “ray casting”, • Just follow a ray from the COP to the first polygon encountered – that polygon will be visible, and the others won’t be • Conceptually simple, but not used • An image precision algorithm • Time proportional to number of pixels
Painter’s Algorithm • Another simple algorithm … • Way to resolve visibility • Create a drawing order; each polygon overwriting the previous ones guarantees correct visibility at any pixel resolution • Strategy is to work back to front • Find a way to sort polygons by depth (z), then draw them in that order • Sort polygons by smallest (farthest) z-coordinate in each polygon • Draw most distant polygon first, then work forward towards the viewpoint (“painter’s algorithm”) • An object precision algorithm • Time proportional to number of objects/polygons (sketch below)
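A minimal sketch of the ordering step, assuming the slide’s convention that the smallest z is the farthest: sort by each polygon’s smallest z and draw in that order, letting nearer polygons overwrite. Polygon, min_z, and draw_polygon() are hypothetical placeholders; cycles and intersections (next slide) are not handled.

    /* Hedged sketch: painter's algorithm ordering step. */
    #include <stdlib.h>

    typedef struct { float min_z; /* plus vertices, color, ... */ } Polygon;

    void draw_polygon(const Polygon *p);   /* placeholder, assumed defined elsewhere */

    static int by_min_z(const void *a, const void *b)
    {
        float za = ((const Polygon *)a)->min_z;
        float zb = ((const Polygon *)b)->min_z;
        return (za > zb) - (za < zb);       /* ascending: smallest (farthest) z first */
    }

    void painters_algorithm(Polygon *polys, size_t n)
    {
        qsort(polys, n, sizeof polys[0], by_min_z);
        for (size_t i = 0; i < n; ++i)
            draw_polygon(&polys[i]);        /* later (nearer) polygons overwrite earlier */
    }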
Painter’s Algorithm: Problems • Intersecting polygons present a problem • Even non-intersecting polygons can form a cycle with no valid visibility order