Week 6 - Monday CS361
Last time • What did we talk about last time? • Visual appearance • Lights • Materials • Sensors
RGB • We will primarily focus on the RGB system for representing color • With Red, Green, and Blue components, you can combine them to make most (but not all) visible colors • Combining colors is an additive process: • With no colors, the background is black • Adding colors never makes a darker color • Pure Red added to pure Green added to pure Blue makes White • RGB is a good model for computer screens
Luminance • If the R, G, B values happen to be the same, the color is a shade of gray • 255, 255, 255 = White • 128, 128, 128 = Gray • 0, 0, 0 = Black • To convert a color to a shade of gray, use the following formula: • Value = .3R + .59G + .11B • Based on the way the human eye perceives colors as light intensities
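A minimal C# sketch of that formula using XNA's Color type (the helper name is ours, not part of XNA):

```csharp
using Microsoft.Xna.Framework;

static class GrayscaleHelper
{
    // Convert a color to a shade of gray using the perceptual weights
    // Value = .3R + .59G + .11B
    public static Color ToGray(Color c)
    {
        int value = (int)(0.3f * c.R + 0.59f * c.G + 0.11f * c.B);
        return new Color(value, value, value);
    }
}
```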
Brightness and Contrast • We can adjust the brightness of a picture by multiplying each pixel's R, G, and B values by a scalar b • b ∈ [0, 1) darkens • b ∈ (1, ∞) brightens • We can adjust the contrast of a picture by multiplying each pixel's R, G, and B values by a scalar c and then adding -128c + 128 to the value • c ∈ [0, 1) decreases contrast • c ∈ (1, ∞) increases contrast • After adjustments, values must be clamped to the range [0, 255] (or whatever the range is)
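A sketch of both adjustments on a single channel value in [0, 255] (the method names are made up for illustration):

```csharp
using System;

static class PixelAdjust
{
    // Brightness: scale the channel by b, then clamp to [0, 255]
    public static int Brighten(int channel, float b)
    {
        return Math.Min(255, Math.Max(0, (int)(channel * b)));
    }

    // Contrast: scale by c, then add -128c + 128 so that 128 stays fixed
    public static int Contrast(int channel, float c)
    {
        int v = (int)(channel * c + (-128f * c + 128f));
        return Math.Min(255, Math.Max(0, v));
    }
}
```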
HSV • HSV • Hue (which color) • Saturation (how colorful the color is) • Value (how bright the color is) • Hue is represented as an angle between 0° and 360° • Saturation and value are often given between 0 and 1 • Saturation in HSV is not the same as in HSL
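The slides don't give the conversion, but a common RGB-to-HSV computation looks roughly like this sketch (treat it as illustrative, not authoritative):

```csharp
using System;

static class HsvHelper
{
    // Convert r, g, b in [0, 1] to hue (degrees), saturation, and value
    public static void FromRgb(float r, float g, float b,
                               out float h, out float s, out float v)
    {
        float max = Math.Max(r, Math.Max(g, b));
        float min = Math.Min(r, Math.Min(g, b));
        float delta = max - min;

        v = max;                                  // value: how bright the color is
        s = max == 0f ? 0f : delta / max;         // saturation: how colorful it is

        if (delta == 0f)     h = 0f;              // a gray: hue is arbitrary
        else if (max == r)   h = 60f * (((g - b) / delta) % 6f);
        else if (max == g)   h = 60f * ((b - r) / delta + 2f);
        else                 h = 60f * ((r - g) / delta + 4f);
        if (h < 0f) h += 360f;                    // keep hue as an angle in [0°, 360°)
    }
}
```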
XNA basics • Initialize() method • LoadContent() method • Update() method • Draw() method • Texture2D objects • Sprites
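As a reminder of how those pieces fit together, a minimal XNA Game skeleton (class and asset names are placeholders):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class DemoGame : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D texture;

    public DemoGame() { graphics = new GraphicsDeviceManager(this); }

    protected override void Initialize() { base.Initialize(); }   // non-content setup

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        // texture = Content.Load<Texture2D>("someSprite");        // hypothetical asset name
    }

    protected override void Update(GameTime gameTime) { base.Update(gameTime); }  // game logic

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}
```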
Rendering • What do we have? • Virtual camera (viewpoint) • 3D objects • Light sources • Shading • Textures • What do we want? • 2D image
Graphics rendering pipeline • For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline • This pipeline contains three conceptual stages: the Application, Geometry, and Rasterizer stages
Application stage • The application stage is the stage completely controlled by the programmer • As the application develops, its implementation can be changed in many ways to improve performance • The output of the application stage is rendering primitives: • Points • Lines • Triangles
Important jobs of the application stage • Reading input • Managing non-graphical output • Texture animation • Animation via transforms • Collision detection • Updating the state of the world in general
Acceleration • The Application Stage also handles a lot of acceleration • Most of this acceleration is telling the renderer what NOT to render • Acceleration algorithms • Hierarchical view frustum culling • BSP trees • Quadtrees • Octrees
Geometry stage • The output of the Application Stage is polygons • The Geometry Stage processes these polygons using the following pipeline: model and view transform, vertex shading, projection, clipping, and screen mapping
Model Transform • Each 3D model has its own coordinate system called model space • When combining all the models in a scene together, the models must be converted from model space to world space • After that, we still have to account for the position of the camera
Model and View Transform • We transform the models into camera space or eye space with a view transform • Then, the camera will sit at (0,0,0), looking into negative z
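In XNA this typically uses the built-in Matrix helpers; a small sketch with placeholder positions:

```csharp
using Microsoft.Xna.Framework;

class TransformDemo
{
    // Model transform: orient the model and place it in world space
    Matrix world = Matrix.CreateRotationY(MathHelper.PiOver4)
                 * Matrix.CreateTranslation(10f, 0f, -5f);

    // View transform: after this, the camera sits at (0,0,0) looking down negative z
    Matrix view = Matrix.CreateLookAt(
        new Vector3(0f, 5f, 20f),   // camera position (placeholder)
        Vector3.Zero,               // point the camera looks at
        Vector3.Up);                // up direction
}
```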
Vertex Shading • Figuring out the effect of light on a material is called shading • This involves computing a (sometimes complex) shading equation at different points on an object • Typically, information is computed on a per-vertex basis and may include: • Location • Normals • Colors
Projection • Projection transforms the view volume into a standardized unit cube • Vertices then have a 2D location and a z-value • There are two common forms of projection: • Orthographic: Parallel lines stay parallel, objects do not get smaller in the distance • Perspective: The farther away an object is, the smaller it appears
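Both projections are available on XNA's Matrix class; a sketch with placeholder near/far planes and aspect ratio:

```csharp
using Microsoft.Xna.Framework;

class ProjectionDemo
{
    // Perspective: objects farther away appear smaller
    Matrix perspective = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,   // vertical field of view (45 degrees)
        16f / 9f,             // aspect ratio (placeholder)
        0.1f,                 // near plane
        1000f);               // far plane

    // Orthographic: parallel lines stay parallel, no foreshortening
    Matrix orthographic = Matrix.CreateOrthographic(
        16f, 9f,              // width and height of the view volume
        0.1f, 1000f);         // near and far planes
}
```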
Clipping • Clipping processes the polygons based on their location relative to the view volume • A polygon completely inside the view volume is unchanged • A polygon completely outside the view volume is ignored (not rendered) • A polygon partially inside is clipped • New vertices on the boundary of the volume are created • Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time
Screen mapping • Screen mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates • A few oddities: • It probably won't make a difference to us, but XNA (and DirectX 9 and earlier) has a quirky pixel coordinate system in which coordinates refer to the center of a pixel • Also, XNA conforms to the Windows standard of pixel (0,0) being in the upper left of the screen • OpenGL conforms to the Cartesian system with pixel (0,0) in the lower left of the screen
Backface culling • Backface culling removes all polygons that are not facing toward the screen • A simple dot product is all that is needed • This step is done in hardware in XNA and OpenGL • You just have to turn it on • Beware: If you screw up your normals, polygons could vanish
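The facing test itself is just a sign check on a dot product; a sketch (the helper is ours, and in practice you simply enable the hardware setting shown in the comment):

```csharp
using Microsoft.Xna.Framework;

static class CullingDemo
{
    // viewDirection points from the camera toward the triangle;
    // if the normal points the same general way, the triangle faces away from us
    public static bool IsBackFacing(Vector3 normal, Vector3 viewDirection)
    {
        return Vector3.Dot(normal, viewDirection) >= 0f;
    }
}

// In XNA, hardware culling is enabled inside a Game subclass with:
//   GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
```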
Rasterizer Stage • The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in the screen space • Doing so is called: • Rasterization • Scan Conversion • Note that the word pixel is actually short for "picture element"
More pipelines • As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages: triangle setup, triangle traversal, pixel shading, and merging
Triangle Setup and Traversal • Setup • Data for each triangle is computed • This could include normals • Traversal • Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel • The properties of this fragment are created by interpolating data from the vertices • These are done with fixed-operation (non-customizable) hardware
Pixel Shading • This is where the magic happens • Given the data from the other stages, per-pixel shading (coloring) happens here • This stage is programmable, allowing for many different shading effects to be applied • Perhaps the most important effect is texturing or texture mapping
Texturing • Texturing is gluing a (usually) 2D image onto a polygon • To do so, we map texture coordinates onto polygon coordinates • Pixels in a texture are called texels • This is fully supported in hardware • Multiple textures can be applied in some cases
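In XNA, texture coordinates are attached per vertex; a minimal sketch using the built-in VertexPositionTexture type (positions and coordinates are placeholders):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class TexturedTriangleDemo
{
    // One triangle with (u, v) texture coordinates in [0, 1],
    // mapping corners of the image onto corners of the geometry
    VertexPositionTexture[] vertices =
    {
        new VertexPositionTexture(new Vector3(-1f,  1f, 0f), new Vector2(0f, 0f)),
        new VertexPositionTexture(new Vector3( 1f,  1f, 0f), new Vector2(1f, 0f)),
        new VertexPositionTexture(new Vector3(-1f, -1f, 0f), new Vector2(0f, 1f)),
    };
}
```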
Merging • The final screen data containing the colors for each pixel is stored in the color buffer • The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel • Deeply linked with merging is visibility: The final color of the pixel should be the one corresponding to a visible polygon (and not one behind it) • The Z-buffer is often used for this
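A toy software sketch of Z-buffered merging, just to illustrate the idea (real hardware does this per fragment with configurable tests):

```csharp
static class ZBufferDemo
{
    // depthBuffer and colorBuffer hold one entry per pixel;
    // a fragment only wins if it is closer than what is already stored
    public static void MergeFragment(
        float[] depthBuffer, int[] colorBuffer,
        int pixelIndex, float fragmentDepth, int fragmentColor)
    {
        if (fragmentDepth < depthBuffer[pixelIndex])
        {
            depthBuffer[pixelIndex] = fragmentDepth;
            colorBuffer[pixelIndex] = fragmentColor;
        }
    }
}
```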
More pipes! • Modern GPUs are generally responsible for the Geometry and Rasterizer Stages of the overall rendering pipeline • The functional stages inside them are color-coded by programmability: • Red is fully programmable • Purple is configurable • Blue is not programmable at all
Programmable Shaders • You can do all kinds of interesting things with programmable shading, but the technology is still evolving • Modern shader models such as Shader Model 4.0 and 5.0 use a common-shader core • Strange as it may seem, this means that vertex, pixel, and geometry shaders use the same language
Vertex shader • Supported in hardware by all modern GPUs • For each vertex, it modifies, creates, or ignores: • Color • Normal • Texture coordinates • Position • It must also transform vertices from model space to homogeneous clip space • Vertices cannot be created or destroyed, and results cannot be passed from vertex to vertex • Massive parallelism is possible
Geometry shader • Newest shader added to the family, and optional • Comes right after the vertex shader • Input is a single primitive • Output is zero or more primitives • The geometry shader can be used to: • Tessellate simple meshes into more complex ones • Make limited copies of primitives • Stream output is possible
Pixel shader • Clipping and triangle setup are fixed-function • Everything else that determines the final color of the fragment is done here • We say fragment because we aren't actually shading a full pixel, just the part of a triangle that covers a pixel • A lot of the work is based on the lighting model • The pixel shader cannot look at neighboring pixels • Except that some gradient information can be given • Multiple render targets mean that many different colors for a single fragment can be computed and stored in different buffers
Merging stage • Fragment colors are combined into the frame buffer • This is where stencil and Z-buffer operations happen • It's not fully programmable, but there are a number of settings that can be used • Multiplication • Addition • Subtraction • Min/max
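In XNA 4.0 these settings live on BlendState and DepthStencilState; a sketch of one possible configuration (the particular blend values are illustrative):

```csharp
using Microsoft.Xna.Framework.Graphics;

static class BlendDemo
{
    // Build a BlendState describing how fragment colors merge with the frame buffer
    public static BlendState MakeAdditiveBlend()
    {
        return new BlendState
        {
            ColorBlendFunction = BlendFunction.Add,   // Subtract, Min, and Max are also available
            ColorSourceBlend = Blend.SourceAlpha,
            ColorDestinationBlend = Blend.One
        };
    }
}

// Usage inside a Game subclass, before drawing:
//   GraphicsDevice.BlendState = BlendDemo.MakeAdditiveBlend();
//   GraphicsDevice.DepthStencilState = DepthStencilState.Default;  // enables the Z-buffer
```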
Vector operations • We will be interested in a number of operations on vectors, including: • Addition • Scalar multiplication • Dot product • Norm • Cross product
Interpretations • A vector can either be a point in space or an arrow (direction and distance) • The norm of a vector is its distance from the origin (or the length of the arrow) • In R² and R³, the dot product satisfies u · v = ||u|| ||v|| cos θ, where θ is the smallest angle between u and v
Cross product • The cross product of two vectors finds a vector that is orthogonal to both • For 3D vectors u and v in an orthonormal basis, the cross product is w = u × v = (u_y v_z - u_z v_y, u_z v_x - u_x v_z, u_x v_y - u_y v_x)
Cross product rules • Also: • w ⊥ u and w ⊥ v • u, v, and w form a right-handed system
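XNA's Vector3 type provides all of these operations; a short sketch checking the orthogonality claims above:

```csharp
using System;
using Microsoft.Xna.Framework;

class VectorDemo
{
    static void Main()
    {
        Vector3 u = new Vector3(1f, 0f, 0f);
        Vector3 v = new Vector3(0f, 1f, 0f);

        Console.WriteLine(Vector3.Dot(u, v));   // 0: u and v are orthogonal
        Console.WriteLine(u.Length());          // 1: the norm (distance from the origin)

        Vector3 w = Vector3.Cross(u, v);        // (0, 0, 1): orthogonal to both
        Console.WriteLine(Vector3.Dot(w, u));   // 0, so w ⊥ u
        Console.WriteLine(Vector3.Dot(w, v));   // 0, so w ⊥ v
    }
}
```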
Matrix operations • We will be interested in a number of operations on matrices, including: • Addition • Scalar multiplication • Transpose • Trace • Matrix-matrix multiplication • Determinant • Inverse
Matrix-matrix multiplication • Multiplication MN is legal only if M is p x q and N is q x r • Each row of M and each column of N are combined with a dot product and put in the corresponding row and column element
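A plain C# sketch of the definition, where each output entry is the dot product of a row of M with a column of N:

```csharp
static class MatrixMultiply
{
    // M is p x q, N is q x r; the result is p x r
    public static double[,] Multiply(double[,] M, double[,] N)
    {
        int p = M.GetLength(0), q = M.GetLength(1), r = N.GetLength(1);
        var result = new double[p, r];
        for (int i = 0; i < p; i++)
            for (int j = 0; j < r; j++)
                for (int k = 0; k < q; k++)
                    result[i, j] += M[i, k] * N[k, j];  // row i of M · column j of N
        return result;
    }
}
```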
Determinant • The determinant is a measure of the magnitude of a square matrix • We'll focus on determinants for 2 x 2 and 3 x 3 matrices
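A sketch of the 2 x 2 case and of the 3 x 3 case by expansion along the first row (plain C#, names are ours):

```csharp
static class Determinant
{
    // | a b |
    // | c d |  =  ad - bc
    public static double Det2(double a, double b, double c, double d)
    {
        return a * d - b * c;
    }

    // Expansion along the first row of a 3 x 3 matrix m
    public static double Det3(double[,] m)
    {
        return m[0, 0] * Det2(m[1, 1], m[1, 2], m[2, 1], m[2, 2])
             - m[0, 1] * Det2(m[1, 0], m[1, 2], m[2, 0], m[2, 2])
             + m[0, 2] * Det2(m[1, 0], m[1, 1], m[2, 0], m[2, 1]);
    }
}
```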
Adjoint • The adjoint of a matrix is a form useful for transforming surface normals • We can also use the adjoint when finding the inverse of a matrix • We need the subdeterminant d_ij (the determinant of what is left after deleting row i and column j of M) to define the adjoint • The adjoint A of an arbitrarily sized matrix M has entries [A]_ij = (-1)^(i+j) d_ji • For a 3 x 3, this is the transpose of the matrix of signed 2 x 2 subdeterminants (the cofactors)
Multiplicative inverse of a matrix • For a square matrix M where |M| ≠ 0, there is a multiplicative inverse M^-1 such that MM^-1 = I • For cases up to 4 x 4, we can use the adjoint: M^-1 = adj(M) / |M| • Properties of the inverse: • (M^-1)^T = (M^T)^-1 • (MN)^-1 = N^-1 M^-1
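A self-contained sketch of the adjoint-based inverse for the 3 x 3 case (the cyclic-index trick builds the cofactor signs and the transpose into one step; names are ours):

```csharp
static class Inverse3
{
    static double Sub2(double a, double b, double c, double d) => a * d - b * c;

    // M^-1 = adj(M) / |M|, valid only when |M| != 0
    public static double[,] Invert(double[,] m)
    {
        // Determinant by expansion along the first row
        double det = m[0, 0] * Sub2(m[1, 1], m[1, 2], m[2, 1], m[2, 2])
                   - m[0, 1] * Sub2(m[1, 0], m[1, 2], m[2, 0], m[2, 2])
                   + m[0, 2] * Sub2(m[1, 0], m[1, 1], m[2, 0], m[2, 1]);
        if (det == 0) throw new System.InvalidOperationException("singular matrix");

        var inv = new double[3, 3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
            {
                // Entry (i, j) of the adjoint: a 2 x 2 subdeterminant taken with
                // cyclic indices, which supplies the cofactor sign and the transpose
                inv[i, j] = Sub2(m[(j + 1) % 3, (i + 1) % 3], m[(j + 1) % 3, (i + 2) % 3],
                                 m[(j + 2) % 3, (i + 1) % 3], m[(j + 2) % 3, (i + 2) % 3]) / det;
            }
        return inv;
    }
}
```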
Orthogonal matrices • A square matrix is orthogonal if and only if its transpose is its inverse • MM^T = M^TM = I • Lots of special things are true about an orthogonal matrix M: • |M| = ±1 • M^-1 = M^T • M^T is also orthogonal • ||Mu|| = ||u|| • Mu ⊥ Mv iff u ⊥ v • If M and N are orthogonal, so is MN • An orthogonal matrix is equivalent to an orthonormal basis of vectors lined up together
Homogeneous notation • We add an extra value to our vectors • It's a 0 if it's a direction • It's a 1 if it's a point • Now we can do a rotation, scale, or shear with a matrix that has an extra row and column
Translations • Then, we multiply by a translation matrix (which doesn't affect a direction vector, since its extra coordinate is 0) • A 3 x 3 matrix cannot translate a vector
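XNA mirrors this distinction: Vector3.Transform treats its argument as a point (w = 1), while Vector3.TransformNormal treats it as a direction (w = 0), so only the point is translated. A short sketch:

```csharp
using System;
using Microsoft.Xna.Framework;

class HomogeneousDemo
{
    static void Main()
    {
        Matrix translate = Matrix.CreateTranslation(5f, 0f, 0f);

        Vector3 point = new Vector3(1f, 2f, 3f);
        Vector3 direction = new Vector3(1f, 2f, 3f);

        // Point (w = 1): the translation is applied
        Console.WriteLine(Vector3.Transform(point, translate));           // (6, 2, 3)

        // Direction (w = 0): the translation has no effect
        Console.WriteLine(Vector3.TransformNormal(direction, translate)); // (1, 2, 3)
    }
}
```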