Emerging Technologies for Games: Capability Testing and DirectX10 Features. CO3303 Week 13
Today’s Lecture • Hardware Capabilities • Techniques and Capabilities • Geometry Shader Stage • Stream Output Stage • DirectX Resources
Hardware Capabilities • All lab material so far has made assumptions about graphics capabilities • Real-world applications should not do this • Sometimes we need to query the system capabilities: • At least ensure the minimum spec is met • Adapt to machines of different power: • Enhancements available for high-end hardware • “Degrade gracefully” for lower-spec machines • Also optimise data for the abilities of the given system
Typical Capabilities • Some key graphics capabilities: • Available screen resolutions, refresh rates • Depth and stencil buffer formats / bits per pixel • Anti-aliasing abilities (FSAA / MSAA) • Texture capabilities: • Minimum / maximum size • Pixel formats • Render-target / cube map / instancing etc. support • Etc. • Many render states also have capabilities to indicate the extent of their availability • E.g. Level of anisotropic filtering
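A minimal sketch of querying two of these capabilities in application code, assuming an already-created ID3D10Device* (the formats chosen are illustrative):

#include <d3d10.h>

bool CanRenderToFloat16( ID3D10Device* pd3dDevice )
{
    // Ask which uses this pixel format supports (texture, render target, etc.)
    UINT formatSupport = 0;
    pd3dDevice->CheckFormatSupport( DXGI_FORMAT_R16G16B16A16_FLOAT, &formatSupport );
    return (formatSupport & D3D10_FORMAT_SUPPORT_RENDER_TARGET) != 0;
}

UINT MsaaQualityLevels( ID3D10Device* pd3dDevice )
{
    // How many quality levels does 4x MSAA offer for a typical back-buffer format?
    // A result of 0 means 4x MSAA is unavailable for this format
    UINT qualityLevels = 0;
    pd3dDevice->CheckMultisampleQualityLevels( DXGI_FORMAT_R8G8B8A8_UNORM, 4, &qualityLevels );
    return qualityLevels;
}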
Testing Capabilities • Earlier DirectX versions needed intricate capability testing • Backwards compatibility meant that you didn’t know if you were running on legacy hardware, a problem for games • However, DirectX10 and above define a minimum spec • Makes capability testing easier • No need to detect / support legacy hardware • Still need some testing to check for advanced features • Consoles are largely unaffected by such matters • Specification is fixed • Still need to check for: • Hard drive size, HDTV resolutions, peripherals (controllers, cameras, etc.)
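One way this fixed-minimum-spec idea appears in code is the "feature levels" mechanism of the newer Direct3D 11 API. A hedged sketch (the particular levels requested here are illustrative), where a single device-creation call reports the highest level the hardware supports:

#include <d3d11.h>

// Ask for the best of several feature levels; achievedLevel reports what we got
D3D_FEATURE_LEVEL requestedLevels[] = { D3D_FEATURE_LEVEL_11_0,
                                        D3D_FEATURE_LEVEL_10_1,
                                        D3D_FEATURE_LEVEL_10_0 };
D3D_FEATURE_LEVEL achievedLevel;
ID3D11Device*        pDevice  = NULL;
ID3D11DeviceContext* pContext = NULL;

HRESULT hr = D3D11CreateDevice( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                requestedLevels, 3, D3D11_SDK_VERSION,
                                &pDevice, &achievedLevel, &pContext );
if (SUCCEEDED(hr) && achievedLevel >= D3D_FEATURE_LEVEL_11_0)
{
    // High-end path: enable advanced features here
}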
Shaders Revisited • Shaders have been a central topic of the course • The most important area of modern graphics • Shaders also have capabilities… • Shader hardware version • Shaders are compiled to machine code • The shader version defines the instruction set available • Higher shader versions have more instructions, e.g. for & if statements, and higher-level functions • They also have more registers, and raise limits such as the number of instructions in a shader, depth of nesting, etc. • Should provide alternative shaders: • For high and low spec machines (see the sketch below)
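A sketch of compiling the same HLSL source against two shader-model profiles (the entry point name EarthPS and the highSpec flag are assumptions for illustration):

#include <d3dcompiler.h>

ID3DBlob* CompilePixelShader( const char* source, size_t sourceLen, bool highSpec )
{
    // Choose a shader-model profile to suit the detected hardware
    const char* profile = highSpec ? "ps_5_0" : "ps_4_0";

    ID3DBlob* pCode   = NULL;
    ID3DBlob* pErrors = NULL;
    D3DCompile( source, sourceLen, "earth.fx", NULL, NULL,
                "EarthPS", profile,   // assumed entry point; chosen profile
                0, 0, &pCode, &pErrors );
    return pCode;  // NULL on failure; pErrors would hold compiler messages
}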
Multiple Passes • Complex materials need several rendering passes • i.e. render the same polygons multiple times • Each time with a different render state/shader • Each pass blended with the ones below • Example: Earth shader used in some labs: • Pass 1: Render Earth surface – diffuse lighting, texture changes between night and day based on light level • Pass 2: Render clouds – diffuse lighting, moving UVs, blue tint at a glancing angle, alpha blend with Earth • Pass 3: Render outer atmosphere - inside out (reverse culling), exaggerated diffuse lighting, alpha blend - less alpha (i.e. more transparent) at glancing angle
Effects Files for Capabilities • Using effect (.fx) files, we can collect together shader passes and their render states into techniques • Provide a range of techniques for different hardware specifications • If any one pass in a technique fails capability testing, then degrade to a simpler technique • The DirectX effects file system makes this quite simple
Effect File Example

// First technique – two passes
technique EarthWithSpecular
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_5_0, Earth1VS() ) );
        SetGeometryShader( CompileShader( gs_5_0, Earth1GS() ) );
        SetPixelShader( CompileShader( ps_5_0, Earth1PS() ) );
    }
    pass P1
    {
        SetVertexShader( CompileShader( vs_5_0, Clouds1VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_5_0, Clouds1PS() ) );  // Assumed cloud pixel shader to complete the pass
    }
}

// Second, simpler technique – different shaders, one pass
technique EarthSimple
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, Earth2VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, Earth2PS() ) );
    }
}
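On the application side, the fallback might look like this (a sketch assuming pEffect is an already-loaded ID3D10Effect*, using the technique names from the example above):

// Try the full technique first; fall back if the hardware rejects any pass
ID3D10EffectTechnique* pTechnique = pEffect->GetTechniqueByName( "EarthWithSpecular" );
if ( !pTechnique->IsValid() )
{
    pTechnique = pEffect->GetTechniqueByName( "EarthSimple" );
}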
DirectX 10 Pipeline • Looked at most of the DirectX 10 pipeline in Computer Graphics • However, we didn’t focus on some of the key new features of DX10: • Geometry Shader Stage • Shaders to manipulate primitives rather than vertices or pixels • Stream-Output Stage • Send data back to GPU memory • Unified access to memory resources at all stages • Memory can be reinterpreted
Geometry Shaders • The new geometry shader stage processes primitives • Triangles or lines • In practice, a geometry shader is much like a vertex shader, but working on multiple vertices at once • All three vertices of a triangle, or both points of a line • Allows techniques that can’t be done with only a single vertex • E.g. Calculate a face normal (cross product of the triangle edges) • It operates on the output of the vertex shader • This is slightly counter-intuitive • Would expect to start at the highest level and work down • Raises questions, e.g. which shader does the matrix transformations? • The geometry shader can also create or delete primitives • i.e. The output geometry can be different from the input
Geometry Input / Output • The geometry shader input is an array of vertices • The number depends on the primitive type used • E.g. 3 for triangles, 2 for lines • Geometry shader output is a stream of primitives • The stream type must be specified: • E.g. Triangles, lines etc. • But always output as a strip, not a list • The output stream doesn’t need to match the input • E.g. Could input triangles, but output lines (make a silhouette) • Or input points and output triangles (particle system) • Can output any number of primitives (including 0) • Allows the generation / deletion of primitives
Geometry Shader Example • Example of a geometry shader function header:

struct GS_VertIn  { ... };  // Input vertex data (pos, normal, etc.)
struct GS_VertOut { ... };  // Output vertex data (may be different)

[maxvertexcount(6)]  // Maximum number of output vertices (depends on technique used in shader code)
void MakeWireframe
(
    // Input array of 3 vertices (i.e. model data is triangles)
    triangle GS_VertIn inVerts[3],

    // Output of shader is a stream of vertices paired into lines
    inout LineStream<GS_VertOut> outStrip
)
{
    // Example code on next slide
}
Geometry Shader Example • Example of geometry shader code:

// Output the outline of the triangle (i.e. wireframe)
outStrip.Append( inVerts[0] );  // Add vertex to output stream
outStrip.Append( inVerts[1] );  // Another vertex to output a line
outStrip.RestartStrip();        // End of line strip, start a new one

outStrip.Append( inVerts[1] );  // Second line…
outStrip.Append( inVerts[2] );
outStrip.RestartStrip();

outStrip.Append( inVerts[2] );  // Third line…
outStrip.Append( inVerts[0] );
outStrip.RestartStrip();

// Effectively outputting a line list here to illustrate the methods
// Would be better in this case to output a single line strip
Primitive Types / Adjacency • There is a list of possible input primitives • Some types include adjacency information • E.g. Triangle list with adjacency • Each triangle is specified with six vertices • The 3 in the triangle, and the 3 adjacent to it in the model • Allows more advanced techniques • E.g. Silhouettes • Note: adjacency data must be pre-calculated for the model (not automatic)
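On the application side, adjacency is requested through the primitive topology. A minimal sketch (assuming pd3dDevice is a valid ID3D10Device* and the index buffer already contains the pre-calculated adjacency data):

// Each primitive now carries 6 indices: the triangle's 3 vertices plus the 3 adjacent ones
pd3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST_ADJ );

// The matching geometry shader input is then: triangleadj GS_VertIn inVerts[6]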
Geometry Shader Uses • Geometry shaders have a wide range of uses: • Distorting or animating geometry, especially using face normals • Face normals cannot be calculated in vertex shaders • Silhouettes using adjacency data • If one triangle faces the camera (use its face normal), but an adjacent one faces away, then the edge between them is a silhouette edge • Creating extra view-dependent geometry • Tessellation of geometry edges for smoother silhouettes • Create “fins” for fur rendering (see later lab) • Particle systems without instancing: • Input a point list or a line list, and generate particle geometry in the shader – a quad for each point / line • Often more efficient than instancing • And many more, with new ideas emerging all the time…
Geometry Shader Considerations • Geometry shaders are not needed for “traditional” geometry rendering methods • Static models, lit with standard lighting approaches • Set the geometry shader to “NULL” • Performance of geometry shaders can be an issue: • Older DirectX 10 GPUs are not efficient at generating large amounts of new geometry • So certain techniques, e.g. tessellation, are possible with geometry shaders, but not appropriate (see later lecture on tessellation) • Newer hardware is getting better, but this is an area where fall-back methods may be appropriate
The Stream Output Stage • The data output from the geometry shader can be written back into GPU memory • A very powerful new feature: • Allows the GPU to perform general purpose processing (GPGPU) • E.g. A particle system can be done in two passes on the GPU: • Pass 1: Render with GPU as normal • Pass 2: Update particle positions on GPU, writing back to memory • No CPU intervention – efficient • Esp. with geometry shader “instancing”
Stream Output Considerations • Stream-Output cannot output to the same buffer that is being input from • However, this is usually what we want to do • Work around this by using double buffering • Create two identical buffers of data • Input from one, and output to the other • Swap the buffers between frames • Often need multiple passes to render / update geometry • Some vertex data may be needed for only one or the other pass • I.e. Likely to be some data redundancy • Example in GPU particle system lab later
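A sketch of the double-buffering pattern (assuming the two buffers were created with D3D10_BIND_STREAM_OUTPUT, the update technique's pass has already been applied, and the geometry shader declares a stream-output signature; the names here are illustrative):

void UpdateParticlesOnGPU( ID3D10Device* pd3dDevice,
                           ID3D10Buffer* pParticleBuffer[2],
                           int& current, UINT stride, UINT numParticles )
{
    UINT offset = 0;

    // Read particles from the current buffer, stream updated ones into the other
    pd3dDevice->SOSetTargets( 1, &pParticleBuffer[1 - current], &offset );
    pd3dDevice->IASetVertexBuffers( 0, 1, &pParticleBuffer[current], &stride, &offset );
    pd3dDevice->Draw( numParticles, 0 );  // or DrawAuto() once the GPU owns the count

    // Unbind the stream-output target, then swap the buffers' roles for next frame
    ID3D10Buffer* pNullBuffer = NULL;
    pd3dDevice->SOSetTargets( 1, &pNullBuffer, &offset );
    current = 1 - current;
}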
Resources in DirectX 10 • One of the most important changes in DirectX 10 and above is the generalisation of resources • DirectX10 considers all kinds of buffer / texture as different kinds of "resource" • A "resource" is simply a block of memory interpreted in a particular way • Very flexible: areas of GPU memory may be reinterpreted • E.g. Reinterpret a texture as a depth buffer (useful) • Means changes to the setup of textures and buffers • Although the concepts are unchanged • There is a new resource type – constant buffers (see the sketch below) • Replaces constant tables as the way to access shader constants
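A minimal sketch of creating and using a constant buffer (the PerFrameData layout is an assumption for illustration; it must match the cbuffer declared in the shader):

#include <d3d10.h>

struct PerFrameData  // Assumed layout; must match the shader's cbuffer
{
    float worldViewProj[16];
    float time;
    float padding[3];  // Constant buffer sizes must be multiples of 16 bytes
};

ID3D10Buffer* CreatePerFrameBuffer( ID3D10Device* pd3dDevice )
{
    D3D10_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(PerFrameData);
    desc.Usage     = D3D10_USAGE_DEFAULT;
    desc.BindFlags = D3D10_BIND_CONSTANT_BUFFER;

    ID3D10Buffer* pBuffer = NULL;
    pd3dDevice->CreateBuffer( &desc, NULL, &pBuffer );
    return pBuffer;
}

// Each frame: copy new values up and bind to the vertex shader's slot 0
// pd3dDevice->UpdateSubresource( pBuffer, 0, NULL, &frameData, 0, 0 );
// pd3dDevice->VSSetConstantBuffers( 0, 1, &pBuffer );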