
Last Time






Presentation Transcript


  1. Last Time
  • Shading Interpolation
  • Texture Mapping introduction
  (c) University of Wisconsin, CS 559

  2. Today
  • Texture Mapping details
  • Modeling Intro (maybe)
  • Homework 5

  3. Basic OpenGL Texturing
  • Specify texture coordinates for the polygon:
    • Use glTexCoord2f(s,t) before each vertex
    • Eg: glTexCoord2f(0,0); glVertex3f(x,y,z);
  • Create a texture object and fill it with texture data:
    • glGenTextures(num, &indices) to get identifiers for the objects
    • glBindTexture(GL_TEXTURE_2D, identifier) to bind the texture; following texture commands refer to the bound texture
    • glTexParameteri(GL_TEXTURE_2D, …, …) to specify parameters for use when applying the texture
    • glTexImage2D(GL_TEXTURE_2D, …) to specify the texture data (the image itself)
  MORE…

  4. Basic OpenGL Texturing (cont)
  • Enable texturing: glEnable(GL_TEXTURE_2D)
  • State how the texture will be used: glTexEnvf(…)
  • Texturing is done after lighting
  • You're ready to go…
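The calls on slides 3 and 4 can be collected into one setup-and-draw sketch. This is GL state configuration and assumes a GL context already exists; the names `setup_texture`, `draw_quad`, and `image` are illustrative, not from the slides.

```c
/* Sketch of the texturing setup from slides 3-4.  Assumes a current GL
   context and that `image` points at 64x64 tightly packed RGB texels. */
#include <GL/gl.h>

void setup_texture(const unsigned char *image)
{
    GLuint texID;
    glGenTextures(1, &texID);              /* get an identifier */
    glBindTexture(GL_TEXTURE_2D, texID);   /* subsequent calls refer to it */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* rows are tightly packed */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, image);
    glEnable(GL_TEXTURE_2D);
}

void draw_quad(void)
{
    glBegin(GL_QUADS);                     /* texture coord before each vertex */
    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
    glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glEnd();
}
```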

  5. Nasty Details
  • There is a large range of functions for controlling the layout of texture data:
    • You must state how the data in your image is arranged
    • Eg: glPixelStorei(GL_UNPACK_ALIGNMENT, 1) tells OpenGL not to skip bytes at the end of a row
    • You must state how you want the texture to be put in memory: how many bits per "pixel", which channels, …
  • Textures must have width and height that are powers of 2
    • Common sizes are 32x32, 64x64, 256x256
    • Smaller uses less memory, and there is a finite amount of texture memory on graphics cards

  6. Controlling Different Parameters
  • The "pixels" in the texture map may be interpreted as many different things. For example:
    • As colors in RGB or RGBA format
    • As grayscale intensity
    • As alpha values only
  • The data can be applied to the polygon in many different ways:
    • Replace: replace the polygon color with the texture color
    • Modulate: multiply the polygon color by the texture color or intensity
    • Similar to compositing: composite texture with base color using an operator

  7. Example: Diffuse Shading and Texture
  • Say you want to have an object textured and have the texture appear to be diffusely lit
  • Problem: the texture is applied after lighting, so how do you adjust the texture's brightness?
  • Solution:
    • Make the polygon white and light it normally
    • Use glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
    • Use GL_RGB for the internal format
    • Then the texture color is multiplied by the surface (fragment) color, and alpha is taken from the fragment

  8. Some Other Uses
  • There is a "decal" mode for textures, which replaces the surface color with the texture color, as if you had stuck on a decal
    • Texturing happens after lighting, so the lighting info is lost
  • BUT, you can use the texture to store lighting info and generate better-looking lighting (overriding OpenGL's lighting)
    • Put the color information in the polygon, and use the texture for the brightness information
    • Called "light maps"
    • Normally uses multiple texture layers: one for color, one for light

  9. Textures and Aliasing
  • Textures are subject to aliasing:
    • A polygon pixel maps into a texture image, essentially sampling the texture at a point
    • The situation is very similar to resizing an image, but the resize ratio may change across the image
  • Standard approaches:
    • Pre-filtering: filter the texture down before applying it
    • Post-filtering: take multiple samples from the texture and filter them before applying to the polygon fragment

  10. Point Sampled Texture Aliasing
  • Note that the back row is a very poor representation of the true image
  [Figures: the texture map; the polygon far from the viewer in perspective projection; the rasterized and textured result]

  11. Mipmapping (Pre-filtering)
  • If a textured object is far away, one screen pixel (on an object) may map to many texture pixels
    • The problem is how to combine them
  • A mipmap is a low-resolution version of a texture
    • The texture is filtered down as a pre-processing step: gluBuild2DMipmaps(…)
    • When the textured object is far away, use the mipmap chosen so that one image pixel maps to at most four mipmap pixels
  • A full set of mipmaps requires about one third more storage than the original texture (a 4/3 factor: 1 + 1/4 + 1/16 + …)

  12. Many Texels for Each Pixel
  [Figures: the texture map with screen pixels drawn on it, where some pixels cover many texture elements (texels); the polygon far from the viewer in perspective projection]

  13. Mipmaps
  [Figures: mipmap levels for far objects, for middle objects, and for near objects]

  14. Mipmap Math
  • Define a scale factor, ρ = texels/pixel
    • A texel is a pixel from a texture
    • ρ is actually the maximum of the x and y scale factors
  • The scale factor may vary over a polygon
    • It can be derived from the transformation matrices
  • Define λ = log2 ρ
    • λ tells you which mipmap level to use
    • Level 0 is the original texture, level 1 is the next smallest texture, and so on
    • If λ < 0, then multiple pixels map to one texel: magnification

  15. Post-Filtering
  • You tell OpenGL what sort of post-filtering to do
  • Magnification: when λ < 0 the image pixel is smaller than the texel:
    • glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, type)
    • type is GL_LINEAR or GL_NEAREST
  • Minification: when λ > 0 the image pixel is bigger than the texel:
    • GL_TEXTURE_MIN_FILTER
    • Can choose to:
      • Take the nearest point in the base texture: GL_NEAREST
      • Linearly interpolate the nearest 4 pixels in the base texture: GL_LINEAR
      • Take the nearest mipmap, then take the nearest point or interpolate within it: GL_NEAREST_MIPMAP_NEAREST or GL_LINEAR_MIPMAP_NEAREST
      • Interpolate between the two nearest mipmaps, using nearest or interpolated points from each: GL_NEAREST_MIPMAP_LINEAR or GL_LINEAR_MIPMAP_LINEAR

  16. Filtering Example
  • Sample point s=0.12, t=0.1, with ρ=1.4, λ=0.49 (mipmap levels 0, 1, 2 shown in the figure)
  • NEAREST_MIPMAP_NEAREST: level 0, pixel (0,0)
  • LINEAR_MIPMAP_NEAREST: level 0, combination of pixels (0,0), (1,0), (1,1), (0,1)
  • NEAREST_MIPMAP_LINEAR: level 0, pixel (0,0) × 0.51 + level 1, pixel (0,0) × 0.49

  17. Boundaries
  • You can control what happens if a point maps to a texture coordinate outside of the texture image
  • All texture images are assumed to go from (0,0) to (1,1) in texture space
  • The problem is how to extend the image to make an infinite space
  • Repeat: assume the texture is tiled
    • glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
  • Clamp to Edge: the texture coordinates are clamped to valid values, and then used
    • glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
  • Can specify a special border color:
    • glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, color), where color is a 4-element RGBA array

  18. Repeat Border
  [Figure: the texture tiled across the plane; the base image occupies (0,0) to (1,1)]

  19. Clamp Border
  [Figure: edge texels extended outside the base image, which occupies (0,0) to (1,1)]

  20. Border Color
  [Figure: a constant border color outside the base image, which occupies (0,0) to (1,1)]

  21. Other Texture Stuff
  • Texture data must be in fast memory: it is accessed for every pixel drawn
    • If you exceed it, performance will degrade horribly
    • Skilled artists can pack textures for different objects into one image
  • Texture memory is typically limited, so a range of functions is available to manage it
  • Specifying texture coordinates can be annoying, so there are functions to automate it
  • Sometimes you want to apply multiple textures to the same point: multitexturing is now in most new hardware

  22. Yet More Texture Stuff
  • There is a texture matrix: apply a matrix transformation to texture coordinates before indexing the texture
  • There are "image processing" operations that can be applied to the pixels coming out of the texture
  • There are 1D and 3D textures
    • Mapping works essentially the same
    • 3D textures are very memory intensive, and how they are used is very application dependent
    • 1D saves memory if the texture is inherently 1D, like stripes

  23. Procedural Texture Mapping
  • Instead of looking up an image, pass the texture coordinates to a function that computes the texture value on the fly
    • RenderMan, the Pixar rendering language, does this
    • Available in a limited form with vertex shaders on current-generation hardware
  • Advantages:
    • Near-infinite resolution with small storage cost
    • The idea works for many other things
  • Has the disadvantage of being slow in many cases

  24. Other Types of Mapping
  • Environment mapping looks up incoming illumination in a map
    • Simulates reflections from shiny surfaces
  • Bump mapping computes an offset to the normal vector at each rendered pixel
    • No need to put bumps in the geometry, but the silhouette looks wrong
  • Displacement mapping adds an offset to the surface at each point
    • Like putting bumps in the geometry, but simpler to model
  • All are available in software renderers, like RenderMan-compliant renderers
  • All of these are becoming available in hardware

  25. The Story So Far
  • We've looked at images and image manipulation
  • We've looked at rendering from polygons
  • Next major section: Modeling

  26. Modeling Overview
  • Modeling is the process of describing an object
  • Sometimes the description is an end in itself
    • Eg: Computer Aided Design (CAD), Computer Aided Manufacturing (CAM)
    • The model is an exact description
  • More typically in graphics, the model is then used for rendering (we will work on this assumption)
    • The model only exists to produce a picture
    • It can be an approximation, as long as the visual result is good
  • The computer graphics motto: "If it looks right, it is right"
    • Doesn't work for CAD

  27. Issues in Modeling
  • There are many ways to represent the shape of an object
  • What are some things to think about when choosing a representation?

  28. Categorizing Modeling Techniques
  • Surface vs. Volume
    • Sometimes we only care about the surface
      • Rendering and geometric computations
    • Sometimes we want to know about the volume
      • Eg: medical data with information attached to the space
    • Some representations are best thought of as defining the space filled, rather than the surface around the space
  • Parametric vs. Implicit
    • Parametric generates all the points on a surface (volume) by "plugging in a parameter", eg (sin θ cos φ, sin θ sin φ, cos θ)
    • Implicit models tell you if a point is on (in) the surface (volume), eg x² + y² + z² − 1 = 0

  29. Techniques We Will Examine
  • Polygon meshes
    • Surface representation, parametric representation
  • Prototype instancing and hierarchical modeling
    • Surface or volume, parametric
  • Volume enumeration schemes
    • Volume, parametric or implicit
  • Parametric curves and surfaces
    • Surface, parametric
  • Subdivision curves and surfaces
  • Procedural models
