CSC 308 – Graphics Programming Visual Realism Information modified from Ferguson's "Computer Graphics via Java" and Shirley's "Fundamentals of Computer Graphics" Dr. Paige H. Meeker Computer Science Presbyterian College, Clinton, SC
Concepts • When we use computers to create real or imagined scenes, we use attributes of our visual realm – shapes of objects are revealed by light, hidden by shadow, and color is used to create a mood. To create these scenes, we use the procedures used in other media, considering the composition of our scene, lighting, model surfaces and materials, camera angles, etc. • Ironically, we go to a lot of trouble to create a 3D scene that we can only see on a 2D monitor. The process of converting our 3D scene to produce a 2D image is called rendering. (The word comes from architecture, where a 2D drawing of a design is referred to as a "rendering.") There are three approaches to rendering scenes:
Wireframe Rendering • Advantages: • Simplest approach • Represents the object as if it had no surfaces at all – only composed of "wire-like" edges • Easy and fast for the computer to calculate • A part of all 3D animation systems • Allows real-time interaction with the model • Disadvantages: • The objects are transparent • Can be "ambiguous" – difficult to tell which of the "wires" are at the front and which are at the back
Hidden Line Rendering • Advantages • Takes into account that an object has surfaces and that these surfaces hide the surfaces behind them • Continues to represent the objects as lines, but some lines are hidden by the surfaces in front of them • Disadvantages • Computationally more complicated than wireframe rendering • Takes longer to render / updates more slowly • Recognizes the existence of surfaces, but tells you nothing about the character of those surfaces (i.e. no color or material information)
Shaded Surface Rendering (aka Rendering) • Advantages • Provides information about surface characteristics, lighting, and shading • Disadvantages • More complicated to compute and takes even longer to render
Steps in Rendering Process • Generally, you can think of producing a 2D rendering of a 3D scene as a six-step process: • 1. Obtaining the geometry of the model – includes characters, props, and sets • 2. Placing the camera – also called the "point of view"; we maneuver our virtual camera in XYZ space in order to view the portion of our scene we are most interested in • 3. Defining the light sources – design and place the lights within the scene; there can be many lights in one scene, and they can have various characteristics (like changes of color) • 4. Defining the surface characteristics – specify color, texture, shininess, reflectivity, and transparency • 5. Choosing the shading technique – related to defining the surface characteristics • 6. Running the rendering algorithm – then you may save and output your image
Hidden Line Removal (aka Surface Culling) - Introduction • Depth cueing • Surfaces • Vectors/normals • Hidden face culling • Convex/concave solids
Hidden Line Removal • No one best algorithm • Look at a simple approach for convex solids • Based upon working out which way a surface is pointing relative to the viewer • For a solid to be convex, a line drawn from any point on one surface to a point on any other surface must pass entirely through the interior of the solid. (Illustration: a convex solid versus a concave solid.)
Based on surfaces, not lines • TO IMPLEMENT: we need a Surface data structure • (the existing WireframeSurface is made up of lines)
Flat surfaces • A key requirement of our surfaces is that they are FLAT (contained within a single plane). • The easiest way to ensure this is by using only three points to define the surface… • (any triangle MUST be flat – think about it) • …but as long as you promise not to do anything that will bend a flat surface, we can allow surfaces to be defined by as many points as you like. • Surfaces are single sided
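As a rough sketch of such a data structure (illustrative Java only, not code from Ferguson's book; the Point3D and Surface names are my own), a flat surface can simply store its defining points in the order they are added. Keeping that order matters, because the normal calculation later on relies on the points being given anti-clockwise.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper class: a point in 3D space.
class Point3D {
    final double x, y, z;
    Point3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

// Hypothetical flat surface: three or more coplanar points, stored in the
// order they are added (anti-clockwise, as discussed later).
class Surface {
    private final List<Point3D> points = new ArrayList<>();

    void addPoint(Point3D p) { points.add(p); }

    List<Point3D> getPoints() { return points; }
}
```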
Which way does a surface point? • Vector mathematics defines the concept of a surface’s normal vector. • A surface’s normal vector is simply an arrow that is perpendicular to that surface (i.e. it sticks straight out)
Determining visibility • Consider the six faces of a cube and their normal vectors. • Vectors N1 and N2 are the normals to surfaces 1 and 2 respectively. • Vector L points from surface 1 to the viewpoint. • It can be seen that surface 1 is visible to the viewer, whilst surface 2 cannot be seen from that position.
Determining visibility • Mathematically, a surface is visible from the position given by L if: −90° < θ < 90° • Where θ is the angle between L and N. • Equivalently, cos θ > 0
Determining visibility • Fortunately we can calculate cos θ from the direction of L (lx, ly, lz) and N (nx, ny, nz) • This is due to the well-known result in vector mathematics – the dot product (or scalar product) – whereby: L · N = |L||N| cos θ = lx·nx + ly·ny + lz·nz, so cos θ = (lx·nx + ly·ny + lz·nz) / (|L||N|)
Determining visibility • Alternatively: cos θ = L · N = lx·nx + ly·ny + lz·nz • Where L and N are unit vectors (i.e. of length 1)
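A minimal sketch of these formulas in Java (the class and method names here are illustrative, not part of the course code):

```java
// Illustrative helpers for the dot product and cos(theta).
class VectorMath {
    static double dot(double lx, double ly, double lz,
                      double nx, double ny, double nz) {
        return lx * nx + ly * ny + lz * nz;   // L . N in component form
    }

    static double length(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);   // |vector|
    }

    static double cosTheta(double lx, double ly, double lz,
                           double nx, double ny, double nz) {
        // cos(theta) = (L . N) / (|L| |N|)
        return dot(lx, ly, lz, nx, ny, nz)
                / (length(lx, ly, lz) * length(nx, ny, nz));
    }
}
```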
How do we work out L · N? • At this point we know: • we need to calculate cos θ • the values of lx, ly, lz • The only things we are missing are nx, ny, nz
Calculating the normal vector • If you multiply any two vectors using the vector product, the result is another vector that is perpendicular (i.e. normal) to the plane that contained the two original vectors.
IMPORTANT • We need to adopt the convention that the calculated normal vector points away from the observer when the angle between the two initial vectors is measured in a clockwise direction. • Failure to do this will lead to MAJOR confusion when you try to implement this.
Calculating the normal • Where do we find two vectors that we can multiply? • Answer: we can manufacture them artificially from the points that define the plane we want the normal of.
Calculating the normal • By subtracting the coordinates of consecutive points we can form vectors which are guaranteed to lie in the plane of the surface under consideration.
Calculating the normal • We define the vectors to be anti-clockwise when viewing the surface from the interior • (imagine the surface is part of a cube and you're looking at it from INSIDE the cube). • Following the anti-clockwise convention mentioned above, we have produced what is known as an outward normal.
IMPORTANT • An important consequence of this is that when you specify the points that define a surface in a program, you MUST add them in anti-clockwise order
Calculating the normal • This is the definition of the vector product: for A = (ax, ay, az) and B = (bx, by, bz), N = A × B = (ay·bz − az·by, az·bx − ax·bz, ax·by − ay·bx) • The components of N are the nx, ny, nz we were missing.
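Continuing the sketch (again with illustrative names, reusing the hypothetical Point3D class from earlier; this method could sit alongside the VectorMath helpers above), the outward normal can be formed from three consecutive points of a surface whose vertices were added anti-clockwise as seen from inside the solid:

```java
// Outward normal from three consecutive points of a surface.
static double[] surfaceNormal(Point3D p0, Point3D p1, Point3D p2) {
    // Two vectors lying in the plane of the surface
    double ax = p1.x - p0.x, ay = p1.y - p0.y, az = p1.z - p0.z;
    double bx = p2.x - p1.x, by = p2.y - p1.y, bz = p2.z - p1.z;
    // Their vector (cross) product is perpendicular to that plane
    return new double[] {
        ay * bz - az * by,   // nx
        az * bx - ax * bz,   // ny
        ax * by - ay * bx    // nz
    };
}
```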
Visibility • At this point we know: • we need to calculate cos θ • the values of lx, ly, lz • the values of nx, ny, nz
Visibility • If L · N > 0 (i.e. cos θ > 0), then draw the surface; else don't! (or draw dashes)
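Putting it together, the back-face test is a one-line check; a sketch, assuming the components of L and N are already available as arrays:

```java
// Back-face test: the surface faces the viewer when L . N > 0 (cos(theta) > 0).
// l and n hold the components (lx, ly, lz) and (nx, ny, nz).
static boolean isVisible(double[] l, double[] n) {
    double dot = l[0] * n[0] + l[1] * n[1] + l[2] * n[2];
    return dot > 0;
}
```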
More complex shapes • Concave objects • Multiple objects
More complex shapes • In these cases, each surface must be considered individually. Two different types of approach are possible: • Object space algorithms – examine each face in space to determine its visibility • Image space algorithms – at each screen pixel position, determine which face element is visible • Roughly speaking, the relative efficiency of an image space algorithm increases with the complexity of the scene being represented, but the drawing can often be simplified for convex objects – even a single one – by first removing the surfaces that face away from the viewer.
Hidden Surface Removal • Algorithms that sort all the points, lines, and surfaces of an object and decide which are visible and which are not. • The visible surfaces are then kept and the hidden surfaces are removed.
Object Space • Make the calculations in three dimensions. • Require intensive computing • Generate data useful for rendering textures, shadows, and antialiasing • EXAMPLE: Ray Tracing
Image Space • Retain depth information of the objects in the scene • Sort from a lateral position • Sort only to the resolution of the display device • Efficient, but discard some of the original 3D information used for shadowing, texturing, and antialiasing.
Ray Casting • From the "eye" (or "camera"), a ray is cast through the first pixel of the screen • The eye follows the ray until it either hits the surface of an object or exits the viewable world • If the ray hits an object, the program calculates the color of the object at the point where it has been hit. This becomes the color of the pixel through which the ray was cast. • This repeats for all the pixels of the image
Ray Casting • How do we know what object to render, if there is more than one object in the path of the ray?
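One common answer – and the idea that the z-buffer below makes systematic – is to keep whichever intersection lies nearest to the eye along the ray. A sketch, where Ray, SceneObject, intersect() and colorAt() are hypothetical names rather than any particular library's API:

```java
import java.util.List;

// Hypothetical scene object: reports where a ray hits it and the colour there.
interface SceneObject {
    double intersect(Ray ray);        // distance along the ray to the hit, negative for a miss
    int colorAt(Ray ray, double t);   // colour of the surface at that hit point
}

class Ray { /* origin and direction omitted for brevity */ }

class RayCaster {
    // Shade one pixel: test the ray against every object and keep the nearest hit.
    static int shadePixel(Ray ray, List<SceneObject> scene, int backgroundColor) {
        double nearest = Double.POSITIVE_INFINITY;
        SceneObject hit = null;
        for (SceneObject obj : scene) {
            double t = obj.intersect(ray);
            if (t > 0 && t < nearest) {   // closer than anything found so far
                nearest = t;
                hit = obj;
            }
        }
        return (hit == null) ? backgroundColor : hit.colorAt(ray, nearest);
    }
}
```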
Common Algorithms • Painter’s Algorithm • Z-Buffer Algorithm
Painter’s Algorithm • Sort all objects by depth • Start rendering those objects furthest away • Each new object covers up distant objects that have already been rendered • Like a painter who creates the background before painting an object in the foreground • Time consuming! (Esp. for large numbers of objects)
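A sketch of the idea (assuming the hypothetical Surface class also exposes a double depth() method, e.g. the z-coordinate of its centre, and that draw() rasterises one surface):

```java
import java.util.Comparator;
import java.util.List;

class PainterRenderer {
    // Sort by depth, then draw back to front so nearer surfaces paint over
    // more distant ones.
    static void paintScene(List<Surface> surfaces) {
        surfaces.sort(Comparator.comparingDouble(Surface::depth).reversed());
        for (Surface s : surfaces) {
            draw(s);   // hypothetical rasterisation of one surface
        }
    }

    static void draw(Surface s) { /* fill the surface's projected polygon */ }
}
```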
Painter’s Algorithm The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted. Image from Wikipedia.com
Problems with Painter’s • Overlapping polygons can cause the algorithm to fail. • Led to the development of Z-buffer techniques Image from Wikipedia.com
Z-Buffering • aka Depth Buffering • Makes use of a block of memory that stores distance information from the object's surface to the "eye". Large numbers mean the surface is far away; smaller numbers mean it is closer. • Renders objects in any order without presorting them • When a ray hits an object, the depth in Z is calculated and stored in the Z-buffer at the entry corresponding to the pixel through which the ray was cast. • When the ray hits a second object, the depth is again calculated and compared with the previously stored value. If it is less (closer) than the stored value, the new value overwrites the old value. • Usually done in hardware; sometimes in software • http://en.wikipedia.org/wiki/Z-buffering
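A sketch of the per-pixel test, assuming every z-buffer entry starts at a very large value (e.g. Double.MAX_VALUE) and that larger z means farther from the eye, as described above:

```java
// One z-buffer comparison: overwrite the pixel only when the new hit is closer.
static void plot(int x, int y, double z, int color,
                 double[][] zBuffer, int[][] frameBuffer) {
    if (z < zBuffer[y][x]) {          // new hit is closer than what is stored
        zBuffer[y][x] = z;            // remember the new depth...
        frameBuffer[y][x] = color;    // ...and overwrite the pixel colour
    }
}
```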
The depth-sort (painter's) algorithm is based upon sorting the surfaces by their z-coordinates. The algorithm can be summarised thus: • Sort the surfaces into order of increasing depth; record the maximum z-value and the z-extent of each surface • Resolve any depth ambiguities • Draw all the surfaces, starting with the largest z-value
Ambiguities • Ambiguities arise when the z-extents of two surfaces overlap.
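A sketch of the overlap test itself, assuming each surface's minimum and maximum z-coordinates are known:

```java
// Two surfaces P and Q are ambiguous when their z-extents overlap,
// i.e. neither lies entirely behind the other along z.
static boolean zExtentsOverlap(double pMinZ, double pMaxZ,
                               double qMinZ, double qMaxZ) {
    return pMinZ <= qMaxZ && qMinZ <= pMaxZ;
}
```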
Resolving Ambiguities • An algorithm exists for ambiguity resolution • Where two shapes P and Q have overlapping z-extents, perform the following 5 tests (in sequence of increasing complexity). • If any test fails, draw P first.
Is P not completely on the side of Q further from the viewer?