1. Vertex and Pixel Programs Soon Tee Teoh
CS 116B
2. Remember the Graphics Pipeline?
3. The Standard Graphics Pipeline What we have done so far is called a fixed-function pipeline.
The per-vertex operations are fixed. The lighting functions are fixed to the Phong lighting model.
Similarly, the per-fragment operations are fixed. The Gouraud shading model is used if smooth shading is requested.
However, with the new generation of graphics processors, the vertex processor and fragment processor are now both programmable.
4. Programmable Shaders We now have
Programmable Vertex Shaders (or Vertex Programs), and
Programmable Fragment Shaders (or Fragment Programs)
These are written in a high-level language, compiled, and loaded onto the GPU.
At the time each vertex undergoes per-vertex operations, the vertex program is run on the vertex.
Similarly for the fragment program.
Some high-level shading languages include
Microsoft's High-Level Shading Language (HLSL),
Cg (C for Graphics), and
GLSL (OpenGL Shading Language)
We introduce Cg
Cg works with both OpenGL and Microsoft's DirectX
Download Cg from http://developer.nvidia.com/object/cg_toolkit.html
5. Cg Cg is a high-level language for the GPU.
Syntactically, it is similar to C.
Write your Cg code in its own file, for example myvertexshader.cg
In your C/C++ main source code, include the Cg header files, so that you can link your C/C++ code to your Cg code:
#include <Cg/cg.h>
#include <Cg/cgGL.h>
Then, in your C/C++ code, call the following two functions to create and load your Cg program:
CGprogram cgCreateProgramFromFile(CGcontext context, CGenum programType,
                                  const char *fileName, CGprofile profile,
                                  const char *entry, const char **args);
void cgGLLoadProgram(CGprogram program);
You can create and load multiple Cg programs. To use a particular Cg program that you have loaded, call
void cgGLBindProgram(CGprogram program);
void cgGLEnableProfile(CGprofile profile);
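Putting these calls together, a minimal host-side setup might look like the sketch below. The context, profile, and program variable names, the file name "myvertexshader.cg", and the entry-point name "main" are illustrative, not taken from the slides.

#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGcontext myContext;
CGprofile myVertexProfile;
CGprogram myVertexProgram;

void initCg(void)
{
    myContext = cgCreateContext();                          /* one context per application */
    myVertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);   /* best vertex profile the GPU supports */
    cgGLSetOptimalOptions(myVertexProfile);

    myVertexProgram = cgCreateProgramFromFile(myContext, CG_SOURCE,
        "myvertexshader.cg", myVertexProfile, "main", NULL);
    cgGLLoadProgram(myVertexProgram);
}

void drawWithShader(void)
{
    cgGLBindProgram(myVertexProgram);
    cgGLEnableProfile(myVertexProfile);
    /* ... draw geometry here ... */
    cgGLDisableProfile(myVertexProfile);
}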
6. Cg Program Inputs Varying inputs are used for data that is specified with each element of the stream of input data.
For example, the varying inputs to a vertex program are the per-vertex values that are specified in vertex arrays.
For a fragment program, the varying inputs are the interpolants, such as texture coordinates.
Uniform inputs are used for values that are specified separately from the main stream of input data, and don't change with each stream element.
For example, a vertex program typically requires a transformation matrix as a uniform input.
Often, uniform inputs are thought of as graphics state.
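As a small sketch (not taken from the slides), the distinction shows up directly in a Cg vertex program's parameter list: per-vertex varying inputs carry binding semantics such as POSITION, while the transformation matrix is declared uniform.

// Illustrative only: one varying input and one uniform input.
void main(float4 position : POSITION,          // varying: a new value for every vertex
          out float4 oPosition : POSITION,
          uniform float4x4 modelViewProj)      // uniform: the same value for every vertex
{
    oPosition = mul(modelViewProj, position);  // transform into clip space
}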
7. Vertex Program Outputs The following binding semantics are available in all Cg vertex profiles for output from vertex programs: POSITION, PSIZE, FOG, COLOR0, COLOR1, and TEXCOORD0 through TEXCOORD7.
In OpenGL, these predefined names implicitly specify the mapping of the outputs to particular hardware registers.
All vertex programs must declare and set a vector output that uses the POSITION binding semantic. This value is required for rasterization.
Note that the values associated with some vertex output semantics, in particular POSITION, are intended for and consumed by the rasterizer. These values cannot actually be read by the fragment program, even though they appear among its inputs.
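For example, a vertex program's outputs are often gathered into a struct whose members carry these binding semantics (a sketch, with illustrative names):

struct VertexOutput {
    float4 position : POSITION;   // required; consumed by the rasterizer
    float4 color    : COLOR0;     // interpolated, visible to the fragment program
    float2 texCoord : TEXCOORD0;  // interpolated, visible to the fragment program
};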
8. Cg Example: Vertex Lighting
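The slide's code is not reproduced here; the sketch below shows what a simple per-vertex diffuse lighting program in Cg might look like, assuming the matrix, light position, light color, and diffuse material color are supplied as uniform parameters with these illustrative names.

void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          out float4 oPosition : POSITION,
          out float4 oColor    : COLOR,
          uniform float4x4 modelViewProj,
          uniform float3 lightPosition,   // assumed to be in object space
          uniform float3 lightColor,
          uniform float3 Kd)              // diffuse material color
{
    oPosition = mul(modelViewProj, position);

    float3 N = normalize(normal);
    float3 L = normalize(lightPosition - position.xyz);
    float diffuse = max(dot(N, L), 0.0);

    oColor.rgb = Kd * lightColor * diffuse;   // color computed once per vertex
    oColor.a   = 1.0;
}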
9. Passing uniform inputs from C program to Cg program
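The usual mechanism is to look up a named uniform parameter of the loaded program and set its value before drawing; the parameter name "lightPosition" and the program handle below are illustrative.

CGparameter lightPosParam =
    cgGetNamedParameter(myVertexProgram, "lightPosition");

cgGLSetParameter3f(lightPosParam, 0.0f, 10.0f, 5.0f);   /* update each frame if the light moves */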
10. Passing Normal and Position To perform fragment (Phong) rendering, the fragment shader needs to know the position and normal of the fragment.
Since the fragment program cannot read the POSITION input, we need to use another parameter to pass position.
So, we pass the position and normal in the TEXCOORD0 and TEXCOORD1 parameters, respectively.
Note that these texture-coordinate parameters are automatically interpolated across the primitive when they are passed from the vertex program to the fragment program.
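A sketch of this arrangement on the vertex-program side (struct and parameter names are illustrative):

struct V2F {
    float4 clipPosition : POSITION;   // used by the rasterizer only
    float3 objPosition  : TEXCOORD0;  // readable in the fragment program
    float3 normal       : TEXCOORD1;  // interpolated across the triangle
};

V2F main(float4 position : POSITION,
         float3 normal   : NORMAL,
         uniform float4x4 modelViewProj)
{
    V2F OUT;
    OUT.clipPosition = mul(modelViewProj, position);
    OUT.objPosition  = position.xyz;
    OUT.normal       = normal;
    return OUT;
}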
11. Cg Example: Fragment Lighting
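Again, the slide's code is not reproduced; as a sketch, a per-fragment Phong computation that consumes the interpolated position and normal from TEXCOORD0 and TEXCOORD1 might look like this (all uniform parameter names are illustrative).

float4 main(float3 objPosition : TEXCOORD0,
            float3 normal      : TEXCOORD1,
            uniform float3 eyePosition,
            uniform float3 lightPosition,
            uniform float3 lightColor,
            uniform float3 Kd,
            uniform float3 Ks,
            uniform float  shininess) : COLOR
{
    float3 N = normalize(normal);        // re-normalize after interpolation
    float3 L = normalize(lightPosition - objPosition);
    float3 V = normalize(eyePosition - objPosition);
    float3 H = normalize(L + V);         // half-angle vector

    float diffuse  = max(dot(N, L), 0.0);
    float specular = (diffuse > 0.0) ? pow(max(dot(N, H), 0.0), shininess) : 0.0;

    return float4(Kd * lightColor * diffuse + Ks * lightColor * specular, 1.0);
}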
12. Vertex vs. Fragment Lighting
13. Passing Matrix Information from OpenGL to Cg In the examples provided by the Cg download, the matrix calculations are all manually programmed.
But, suppose we want to use the OpenGL projection and modelview matrices and pass them to the Cg shaders.
From http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=47
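One common way to do this (not necessarily exactly what the lesson does) is cgGLSetStateMatrixParameter, which copies OpenGL's tracked matrices into a named Cg parameter; a sketch, assuming the vertex program declares a uniform float4x4 named modelViewProj:

CGparameter mvpParam =
    cgGetNamedParameter(myVertexProgram, "modelViewProj");

/* Each frame, after the usual glMatrixMode/glLoadIdentity/glTranslatef calls,
   copy the concatenated modelview and projection matrices into the parameter. */
cgGLSetStateMatrixParameter(mvpParam,
                            CG_GL_MODELVIEW_PROJECTION_MATRIX,
                            CG_GL_MATRIX_IDENTITY);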
14. Notes on Other Examples Reflection Cube Map
This example loads a DDS cubemap texture, which is simply six images.
DDS is a Microsoft Direct3D format that supports compression.
Then, the vertex program reflects the eye-to-vertex ray about the surface normal.
Then, the fragment program uses the reflected ray to index the cubemap and find the color.
Additionally, this program implements software transformations and mouse rotations.
Press '+' or '-' to increase or decrease reflectivity.
The fragment program uses the lerp() function to interpolate between the native and reflected colors.
Refraction Cube Map
Same as Reflection Cube Map, except in the vertex program, the refracted ray is calculated instead of the reflected ray
Cg has built-in reflect() and refract() functions to reflect and refract vectors, respectively (see the sketch below).
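As a sketch of how these pieces fit together (names are illustrative), the fragment program can index a cube map along the interpolated reflected direction and blend with lerp, while the vertex program produces the direction with reflect() or refract().

// Fragment-program sketch: sample the environment along the reflected
// direction and blend it with the surface's own color.
float4 main(float3 reflectedDir : TEXCOORD0,
            float4 decalColor   : COLOR,
            uniform samplerCUBE environmentMap,
            uniform float reflectivity) : COLOR
{
    float4 reflectedColor = texCUBE(environmentMap, reflectedDir);
    return lerp(decalColor, reflectedColor, reflectivity);
}

// In the vertex program, the direction comes from the built-ins:
//   float3 R = reflect(I, N);             // I = incident (eye-to-vertex) vector, N = normal
//   float3 T = refract(I, N, etaRatio);   // etaRatio = ratio of indices of refraction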
15. Reflection and Refraction
16. Reflection Vertex and Fragment Programs
17. Refraction Vertex and Fragment Programs
18. Chromatic Dispersion
19. Chromatic Dispersion Vertex Program
20. Chromatic Dispersion Fragment Program
21. History First chip capable of programmable shading: Nvidia's GeForce 3 (March 2001).
October 2002: the ATI Radeon 9700, a Direct3D 9.0 accelerator with vertex and pixel shaders capable of looping and floating-point arithmetic.
An early application: pixel shading was used for bump mapping, for example.
22. GPGPU General-purpose computing on Graphics Processing Units
Although the GPU was designed for graphics, programmers can use it to perform non-graphics computations traditionally performed on the CPU.
This is possible because of the programmable stages and high-precision arithmetic in the rendering pipeline.
23. Stream Processing The GPU has a specialized design.
It is particularly suited for stream processing.
GPUs can process independent vertices and fragments massively in parallel.
A GPU runs a program on these data elements in parallel very efficiently.
An ideal GPU application has a large dataset, high parallelism, and minimal dependency between data elements.
Applications include Digital Image Processing, Fast Fourier Transform, Video Processing, Ray-tracing, Neural Networks, Cryptography, Grid-Computing
24. Standard GPU Computational Resources Programmable processors: Vertex and fragment pipelines
Rasterizer: Interpolates per-vertex properties such as texture coordinates and color to create fragments
Texture unit (a read-only memory interface)
Frame buffer (a write-only memory interface)