Week 8 - Wednesday CS361
Last time
• What did we talk about last time?
• Textures
• Anisotropic filtering
• Volume textures
• Cube maps
• Texture caching and compression
• Procedural texturing
• Texture animation
• Material mapping
• Alpha mapping
• Bump mapping
• Normal maps
• Parallax mapping
• Relief mapping
• Heightfield texturing
Radiometry
• Radiometry is the measurement of electromagnetic radiation (for us, specifically light)
• Light is the flow of photons
• We'll generally think of photons as particles, rather than waves
• Photon characteristics:
• Frequency ν = c/λ (hertz)
• Wavelength λ = c/ν (meters)
• Energy Q = hν (joules), where h is Planck's constant
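As a quick worked example of these formulas (an addition, not from the slide), take a green photon with λ = 555 nm:

\nu = \frac{c}{\lambda} = \frac{3.00 \times 10^{8}\ \text{m/s}}{555 \times 10^{-9}\ \text{m}} \approx 5.4 \times 10^{14}\ \text{Hz}

Q = h\nu \approx \left(6.626 \times 10^{-34}\ \text{J·s}\right)\left(5.4 \times 10^{14}\ \text{Hz}\right) \approx 3.6 \times 10^{-19}\ \text{J}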
Radiometric quantities
• We'll be interested in the following radiometric quantities
Concrete examples
• Radiant flux Φ: energy per unit time, i.e. power (watts, W)
• Irradiance E: radiant flux per unit area through a surface (W/m²)
• Radiant intensity I: radiant flux per unit solid angle (W/sr)
Radiance
• The radiance L is what we care about, since that's what sensors detect
• We can think of radiance as the portion of irradiance arriving from within a solid angle
• Or, we can think of radiance as the portion of a light's intensity that flows through a surface
• Radiance doesn't change with distance
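For reference (an addition, not on the slide), the standard definition of radiance captures both views at once: it is flux per unit projected area per unit solid angle,

L = \frac{d^{2}\Phi}{dA\,\cos\theta\,d\omega} \qquad \left[\text{W}/(\text{m}^{2}\,\text{sr})\right]

where θ is the angle between the surface normal and the direction of interest. Radiance is constant along a ray through empty space, which is why it doesn't fall off with distance.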
Photometry
• Radiometry just deals with physics
• Photometry takes everything from radiometry and weights it by the sensitivity of the human eye
• Photometry is just trying to account for the eye's differing sensitivity to different wavelengths
Photometric units
• Because they're just rescalings of radiometric units, every photometric unit is based on a radiometric one
• Luminance is often used to describe the brightness of surfaces, such as LCD screens
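To make the rescaling concrete (an addition, not on the slide): each wavelength is weighted by the CIE luminous efficiency curve V(λ), which peaks at 555 nm, where one watt of radiant flux corresponds to 683 lumens:

\Phi_v = 683 \int \Phi_e(\lambda)\, V(\lambda)\, d\lambda

Here Φv is luminous flux in lumens and Φe is spectral radiant flux in watts per unit wavelength.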
Colorimetry
• Colorimetry is the science of quantifying human color perception
• The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human-perceivable color space
• RGB is a transform from these values into monochromatic red, green, and blue colors
• RGB can only express colors inside the triangle on the CIE chromaticity diagram whose corners are its three primaries
• As you know, there are other color spaces as well (HSV, HSL, etc.)
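As a concrete instance of such a transform (an addition; these particular coefficients are for linear sRGB with a D65 white point, which may not be the exact space used in class):

\begin{pmatrix} R \\ G \\ B \end{pmatrix} =
\begin{pmatrix}
 3.2406 & -1.5372 & -0.4986 \\
-0.9689 &  1.8758 &  0.0415 \\
 0.0557 & -0.2040 &  1.0570
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}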
Types of lights
• Real light behaves consistently (but in a complex way)
• For rendering purposes, we often divide light into categories that are easy to model
• Directional lights (like the sun)
• Omni lights (located at a point, but illuminate evenly in all directions)
• Spotlights (located at a point, with intensity that varies with direction; see the sketch after this list)
• Textured lights (give light projections variety in shape or color)
• Similar to gobos, if you know anything about stage lighting
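Here is a minimal sketch of the spotlight falloff mentioned above (my addition, not from the slides; SpotFactor, SpotDirection, and SpotExponent are assumed names):

// Hypothetical helper: cone falloff for a spotlight
// lightDir: normalized direction from the surface point toward the light
// SpotDirection: normalized direction the spotlight points
// SpotExponent: higher values give a tighter, sharper cone
float SpotFactor(float3 lightDir, float3 SpotDirection, float SpotExponent)
{
    // Cosine of the angle between the spot axis and the direction to the point
    float cosAngle = saturate(dot(-lightDir, SpotDirection));
    return pow(cosAngle, SpotExponent);
}

Multiplying an omni light's contribution by this factor turns it into a spotlight.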
XNA lights
• With a programmable pipeline, you can express lighting models of limitless complexity
• The old DirectX fixed-function pipeline provided a few stock lighting models:
• Ambient lights
• Omni lights
• Spotlights
• Directional lights
• All lights have diffuse, specular, and ambient color
• Let's see how to implement these lighting models with shaders
Ambient lights
• Ambient lights are very simple to implement in shaders
• We've already seen the code
• The vertex shader must simply transform the vertex into clip space (world × view × projection)
• The pixel shader colors each fragment a constant color
• We could modulate this by a texture if we were using one
Ambient light declarations

float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientColor = float4(1, 0, 0, 1);
float AmbientIntensity = 0.5;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};
Ambient light vertex shader

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    // Transform the vertex into clip space: world, then view, then projection
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    return output;
}
Ambient light pixel shader and technique

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Every fragment gets the same constant ambient color
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
Directional lights in XNA
• Directional lights model light from a very long distance with parallel rays, like the sun
• A directional light has only color (specular and diffuse) and direction
• They are virtually free from a computational perspective
• Directional lights are also the standard model for BasicEffect
• You don't have to use a shader to use them
• Let's look at a diffuse shader first
Diffuse light declarations
• We add values for the diffuse light intensity and direction
• We add a WorldInverseTranspose to transform the normals
• We also add normals to our input and color to our output

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 LightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};
Diffuse light vertex shader
• Color depends on the surface normal dotted with the light vector

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // Transform the normal into world space with the inverse transpose
    float4 normal = mul(input.Normal, WorldInverseTranspose);

    // Lambertian term: surface normal dotted with the light direction
    float lightIntensity = dot(normal.xyz, LightDirection);
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);

    return output;
}
Diffuse light pixel shader
• No real differences here
• The diffuse and ambient colors are added together
• The technique is exactly the same

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Combine the interpolated diffuse color with the ambient term
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}
Specular lighting
• Adding a specular component to the diffuse shader requires incorporating the view vector
• It will be included in the shader file and be set as a parameter in the C# code
Specular light declarations
• The view vector is added to the declarations
• As are specular colors and a shininess parameter

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float3 CameraPosition;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 LightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;

float Shininess = 200;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 1;
Specular light structures
• The output adds a normal so that the reflection vector can be computed in the pixel shader

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : TEXCOORD0;
};
Specular vertex shader
• The same computations as the diffuse shader, but we also store the normal in the output

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = normalize(mul(input.Normal, WorldInverseTranspose));
    float lightIntensity = dot(normal.xyz, LightDirection);
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);

    // Pass the world-space normal along for per-pixel specular computation
    output.Normal = normal.xyz;

    return output;
}
Specular pixel shader
• Here we finally have a real computation, because we need to use the pixel normal (which is interpolated from the vertices) in combination with the view vector
• The technique is the same

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 light = normalize(LightDirection);
    float3 normal = normalize(input.Normal);

    // Reflect the light direction about the surface normal
    float3 reflect = normalize(2 * dot(light, normal) * normal - light);

    float3 view = normalize(mul(normalize(CameraPosition), World));

    // Phong specular term: reflection vector dotted with the view vector
    float dotProduct = saturate(dot(reflect, view));
    float4 specular = SpecularIntensity * SpecularColor *
        pow(dotProduct, Shininess) * length(input.Color);

    return saturate(input.Color + AmbientColor * AmbientIntensity + specular);
}
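Note that this shader uses the Phong reflection vector. A Blinn-Phong variant (a sketch, not from the slides) would instead use a half vector between the light and view directions; the fragment below reuses the light, view, normal, and Shininess names from the shader above:

// Blinn-Phong variant: the half vector stands in for the reflection vector
// ("halfVector" avoids the reserved HLSL type name "half")
float3 halfVector = normalize(light + view);
float specularTerm = pow(saturate(dot(normal, halfVector)), Shininess);

The half vector is cheaper to compute per pixel than the full reflection vector and produces similar highlights.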
Point lights in XNA
• Point lights model omni lights at a specific position
• They generally attenuate (get dimmer) over distance and have a maximum range
• DirectX's fixed-function pipeline offered constant, linear, and quadratic attenuation terms
• You can choose attenuation levels through shaders (a sketch follows below)
• They are more computationally expensive than directional lights because a light vector has to be computed for every pixel
• It is possible to implement point lights in a deferred shader, lighting only those pixels that actually get used
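For reference, here is a minimal sketch of the classic fixed-function attenuation model (my code; Kc, Kl, and Kq are assumed names for the constant, linear, and quadratic coefficients):

// Classic attenuation: 1 / (Kc + Kl*d + Kq*d^2)
// d is the distance from the light to the surface point
float Attenuation(float d, float Kc, float Kl, float Kq)
{
    return 1.0 / (Kc + Kl * d + Kq * d * d);
}

The point light shader a few slides below instead uses a radius-based falloff, pow(1 - d/LightRadius, 2), which conveniently reaches exactly zero at the light's maximum range.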
Point light declarations
• We add a light position

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float3 CameraPosition;

float3 LightPosition;
float4 LightColor = float4(1, 1, 1, 1);
float LightRadius = 50;

float4 SpecularColor = float4(1, 1, 1, 1);
float Shininess = 200;
float SpecularIntensity = 1;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float DiffuseIntensity = 1.0;
Point light structures
• We no longer need a color in the output
• We do need the vector from the surface location to the camera
• We also need the world position of the fragment

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Normal : TEXCOORD0;
    float3 View : TEXCOORD2;
    float3 WorldPosition : TEXCOORD3;
};
Point light vertex shader
• We compute the normal, the vector to the camera, and the world position

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // World-space normal, view vector, and position for per-pixel lighting
    output.Normal = mul(input.Normal, WorldInverseTranspose).xyz;
    output.View = CameraPosition - worldPosition.xyz;
    output.WorldPosition = worldPosition.xyz;

    return output;
}
Point light pixel shader
• Lots of junk in here

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 Normal = normalize(input.Normal);

    // Vector from the fragment to the light; its length drives attenuation
    float3 LightDirection = LightPosition - input.WorldPosition;
    float attenuation = pow(1.0f - saturate(length(LightDirection) / LightRadius), 2);
    LightDirection = normalize(LightDirection);

    float3 View = normalize(input.View);

    // Lambertian diffuse term
    float DiffuseColor = dot(Normal, LightDirection);

    // Phong specular term via the reflection vector
    float3 Reflect = normalize(2 * DiffuseColor * Normal - LightDirection);
    float Specular = pow(saturate(dot(Reflect, View)), Shininess);

    return AmbientColor * AmbientIntensity +
        attenuation * LightColor * DiffuseIntensity * DiffuseColor +
        SpecularIntensity * SpecularColor * Specular;
}
Next time…
• BRDFs
• Implementing BRDFs
• Texture mapping and bump mapping in shaders
Reminders
• Finish reading Chapter 7