Advanced Lighting With Spherical Harmonics
Light Propagation Volumes • Developed by Crytek and used in Crysis 2 on PC, Xbox 360 and PS3 • Uses Reflective Shadow Maps • And propagation volumes to speed up rendering
Goal • Global Illumination • The Goal is to have light “Bounce” off of Surfaces as it does in the Real World • It allows the environment to look much more realistic • It has the greatest effect in environments with low ambient light
Instant Radiosity • The father of Reflective Shadow Mapping • The idea is that lights will “project” more lights onto other surfaces • Has led to many other techniques
Instant Radiosity • The “projected” lights will light other surfaces • The “Bounce” has been achieved
But • This is incredibly slow • The time to calculate becomes the number of lights, multiplied by the number of projected lights raised to the power of the number of bounces, multiplied by the number of surfaces affected • Plus the positions of the projected lights are determined by ray tracing
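A back-of-the-envelope sketch of that cost growth, with purely illustrative scene numbers (none of these figures come from the slides):

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Hypothetical figures, purely for illustration.
    const double lights          = 4;     // real lights in the scene
    const double projectedLights = 256;   // "projected" lights spawned per light
    const double bounces         = 2;     // number of indirect bounces
    const double surfaces        = 1000;  // surfaces affected by each light

    // Cost ~ lights * projectedLights^bounces * surfaces affected.
    const double work = lights * std::pow(projectedLights, bounces) * surfaces;
    std::printf("Approximate lighting evaluations: %.0f\n", work);
    return 0;
}
```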
Reflective Shadow Mapping • Starts with standard shadow mapping
Shadow Mapping • Create a Z-buffer or depth buffer from the perspective of the light • Using this information you can determine whether a point on a surface is in shadow or not
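A minimal CPU-side sketch of the depth comparison a shadow-map lookup performs (in a real renderer this happens in a pixel shader; the buffer, resolution and names here are assumptions for illustration):

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Hypothetical shadow map: depths stored from the light's point of view.
constexpr int kShadowRes = 512;
std::array<float, kShadowRes * kShadowRes> g_shadowDepth{};

// lightSpacePos: the surface point projected into the light's space,
// with x and y in [0,1] texture coordinates and z the depth from the light.
bool InShadow(const Vec3& lightSpacePos, float bias = 0.002f) {
    const int u = static_cast<int>(lightSpacePos.x * (kShadowRes - 1));
    const int v = static_cast<int>(lightSpacePos.y * (kShadowRes - 1));
    const float storedDepth = g_shadowDepth[v * kShadowRes + u];
    // If something nearer to the light was rendered into the map,
    // this point is occluded (the bias avoids self-shadow acne).
    return lightSpacePos.z > storedDepth + bias;
}
```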
Reflective Shadow Mapping • Transform Z-buffer to world space and create lights at the Z-buffer locations • But we need more information • So when we render the initial Z-buffer we also render colour and surface normal
Reflective Shadow Mapping • Using the colour we can determine what colour the newly created light will be • Using the surface normal we can determine what direction the new light should face
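A sketch of turning one reflective-shadow-map texel into a "bounce" light, assuming the map stores world position, normal and colour per texel as the slides describe (structure and function names are made up for illustration):

```cpp
struct Vec3 { float x, y, z; };

// One texel of the reflective shadow map, rendered from the light's view.
struct RsmTexel {
    Vec3 worldPos;   // Z-buffer value transformed back to world space
    Vec3 normal;     // surface normal at that point
    Vec3 colour;     // surface colour at that point
};

// The "bounce" light created from it.
struct VirtualPointLight {
    Vec3 position;
    Vec3 direction;  // faces away from the surface it was spawned on
    Vec3 colour;     // tinted by the surface it bounced off
};

VirtualPointLight MakeVpl(const RsmTexel& t, const Vec3& lightColour) {
    VirtualPointLight vpl;
    vpl.position  = t.worldPos;
    vpl.direction = t.normal;                      // new light faces along the normal
    vpl.colour    = { t.colour.x * lightColour.x,  // bounced light picks up
                      t.colour.y * lightColour.y,  // the surface colour
                      t.colour.z * lightColour.z };
    return vpl;
}
```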
Reflective Shadow Mapping • Removes the Overhead of Raytracing to find new light positions • But • Does not remove the Overhead of Rendering hundreds or even thousands of lights • Adds the Overhead of Rendering many Z-buffers
Imperfect Shadow Mapping • Same steps as Reflective Shadow Mapping • But • When we are creating a reflective shadow map we create many Z-buffers in many different render targets, many of which are of very low resolution • Low resolution means less information about the geometry is needed
Imperfect Shadow Mapping • Instead render all the small depth buffers in one large buffer • The graphics card has to change state less • When rendering the Z-buffers use Stochastic depth Sampling
Stochastic Sampling • We only need a little information; we can figure the rest out • So we take fewer depth samples • What we end up with is a depth texture with “holes” in it • These holes are where we did not take a sample
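A sketch of the idea: splat only a random subset of depth samples into a small buffer, leaving the "holes" the slide mentions to be filled in later (resolution, probability and names are illustrative):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y, depth; };   // already in the light's space, x and y in [0,1]

// depthBuffer should be initialised to 1.0f (far plane) before calling.
void StochasticDepthFill(const std::vector<Sample>& samples,
                         std::vector<float>& depthBuffer, int res,
                         float keepProbability = 0.25f) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> coin(0.0f, 1.0f);
    for (const Sample& s : samples) {
        if (coin(rng) > keepProbability) continue;   // skipped sample -> a "hole"
        const int u = static_cast<int>(s.x * (res - 1));
        const int v = static_cast<int>(s.y * (res - 1));
        float& d = depthBuffer[v * res + u];
        if (s.depth < d) d = s.depth;                // keep the nearest depth
    }
}
```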
Imperfect Shadow Mapping • Essentially tries to remove the cost of rendering lots of shadow maps • Works well for diffuse surfaces (Regular) • Does Not work well for glossy surfaces
Many LODs • A newer technique than Imperfect shadow mapping • Again similar to reflective shadow mapping • But uses a different way of reducing the cost of rendering many depth textures
Many LODs • Both Imperfect shadow mapping and Reflective shadow mapping use level of detail to reduce the cost of rendering depth textures • But traditional LOD is good at producing high-quality reductions • And bad at producing the very low quality versions needed here
Many LODs • To solve this, Many LODs uses a different method • It renders points instead of triangles • The points are rendered from a bounding sphere hierarchy of the geometry • Consider that the depth texture only cares about the shape of the object, and the depth textures can be as small as 16 × 16 pixels or even smaller
Many LODs • At such a low resolution triangle rendering becomes inefficient: it requires pixels to be shaded dozens of times more than they need to be • If a triangle is smaller than the size of a pixel then that pixel is being written to more than four times • Point rendering will only render that pixel once
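A sketch of the point-based approach: walk a bounding-sphere hierarchy and emit a single splat point once a sphere covers roughly a pixel or less in the tiny depth texture, instead of rasterising sub-pixel triangles (the hierarchy layout and threshold are assumptions):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct SphereNode {
    Vec3  centre;
    float radius;
    std::vector<SphereNode> children;   // empty for leaves
};

// Collect splat points for a tiny depth buffer. Once a sphere would cover
// about one pixel, a single point is enough to represent the whole subtree.
void CollectSplats(const SphereNode& node, float pixelWorldSize,
                   std::vector<Vec3>& outPoints) {
    const float approxPixels = node.radius / pixelWorldSize;
    if (approxPixels <= 1.0f || node.children.empty()) {
        outPoints.push_back(node.centre);   // one splat covers this subtree
        return;
    }
    for (const SphereNode& child : node.children)
        CollectSplats(child, pixelWorldSize, outPoints);
}
```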
Many LODs • Reduces the cost of rendering depth textures about as much as Imperfect shadow mapping • But maintains greater quality • Works well displaying diffuse surfaces • Works well displaying glossy surfaces
Many LODs • Is also used for rendering lots of small reflections • And refractions • Both of which also require very small buffers, for which point rendering becomes much more efficient than triangle rendering
Temporal Coherence • Lights don’t move around all that often • So why do we need to calculate the positions, colour and direction of the extra lights every frame • Why not keep information from previous frames
Temporal Coherence • Due to floating point precision error a slow moving reflective shadow map will “jitter” • Temporal Coherence will stop that • And lower the average cost of rendering
Temporal Coherence • Will reduce average cost of rendering depth textures • Will either increase the memory requirement or will lower the quality of the result • Will not reduce the cost of lighting
Temporal Coherence • You can either store results from previous frames (increasing memory requirements) • Or you can decide to only calculate the depth textures every X frames (reducing the quality)
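A minimal sketch of the second option: recompute the reflective shadow map only every N frames and reuse the cached result in between (the class, interval and render callback are illustrative assumptions):

```cpp
#include <functional>
#include <utility>

struct RsmData { /* depth, colour and normal buffers for one light */ };

// Pays the rendering cost only every `interval` frames,
// trading some quality for a lower average cost.
class CachedRsm {
public:
    CachedRsm(int interval, std::function<RsmData()> render)
        : interval_(interval), render_(std::move(render)) {}

    const RsmData& Get(int frameNumber) {
        if (!valid_ || frameNumber - lastUpdate_ >= interval_) {
            cache_ = render_();          // recompute the reflective shadow map
            lastUpdate_ = frameNumber;
            valid_ = true;
        }
        return cache_;                   // otherwise reuse the (slightly stale) data
    }

private:
    int interval_;
    std::function<RsmData()> render_;
    RsmData cache_{};
    int lastUpdate_ = 0;
    bool valid_ = false;
};
```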
But • All of these extra techniques focus on reducing the cost of calculating the depth textures • This lowers the Overhead of the creation of the “bounce” lights • None of these techniques reduce the Overhead of rendering these lights
Light Propagation Volumes • Aims to reduce the Overhead of rendering the lights • Uses reflective shadow mapping to find locations for the first set of “bounce” lights • But instead of creating actual lights, each one is added as a virtual point light to a volume
Volume • Imagine a 3D grid and at each cell on that grid you determine what light enters and exits it • You could use that grid to determine what light hits a surface within that grid
Volume • The only way to truly know what light enters each cell is to render the entire environment into a cube map • Meaning for every cell on the grid you have to render the entire environment again • And then when you want to use that data, for each pixel you are shading you have to read every texel of the cube map and average out the results
Volume • So instead of averaging out half an entire cube map per pixel we use spherical harmonics • We then have four coefficients per colour channel, so the equivalent of three pixels • And there is a simple DirectX call that will convert our cube map into the spherical harmonics
Volume • However we still have to render the entire environment into each grid cell • So instead we forget about getting perfectly accurate light flow and just worry about the lights that are generated from the reflective shadow map • We still want to end up with spherical harmonics at the end, but we don’t want to have to render cube maps
Volume • So instead of cube maps we create three volume textures (think of them as 3D arrays of floats) • And perform a pixel shader on them • Then perform the math to create a light in spherical harmonics and add it to the grid
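A CPU-side sketch of that math: the first four (2-band) spherical harmonic basis functions are evaluated in the light's direction and the colour-weighted coefficients are accumulated into its grid cell. The published technique injects a clamped cosine lobe, but the structure of the calculation is the same; all names here are illustrative:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// First four real SH basis functions (band 0 and band 1) evaluated in direction d.
// Constants: 1/(2*sqrt(pi)) and sqrt(3/(4*pi)).
std::array<float, 4> EvalSH2(const Vec3& d) {
    return { 0.282095f,
             0.488603f * d.y,
             0.488603f * d.z,
             0.488603f * d.x };
}

struct GridCell {
    // Four coefficients per colour channel, the "three pixels" on the slide.
    std::array<float, 4> r{}, g{}, b{};
};

// Accumulate one virtual point light into the cell it falls in.
void InjectVpl(GridCell& cell, const Vec3& vplDirection, const Vec3& vplColour) {
    const std::array<float, 4> sh = EvalSH2(vplDirection);
    for (int i = 0; i < 4; ++i) {
        cell.r[i] += sh[i] * vplColour.x;
        cell.g[i] += sh[i] * vplColour.y;
        cell.b[i] += sh[i] * vplColour.z;
    }
}
```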
Volume • We have then essentially rendered to spherical harmonics • But lights will now only affect surfaces in the same cell of the volume as they are in • So we must Propagate
Propagate • If we use the values in the neighbouring cells we can calculate what light from that cell should flow into this cell • If we do this multiple times then the light will flow as far as it normally would
Propagate • As we propagate the light we can also check if there is any geometry that would block the flow between two cells • Giving you a shadowing effect • And if you know the average surface normal for that geometry then you can “bounce” the light into a corresponding cell • This bounce would also have an altered colour based on the average colour of the blocking geometry
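A simplified sketch of one gather step for a single cell and a single colour channel, ignoring the blocking geometry described above (the constants are the standard 2-band SH basis; everything else is an illustrative assumption):

```cpp
#include <algorithm>
#include <array>

struct Vec3 { float x, y, z; };

using SH4 = std::array<float, 4>;   // 2-band SH coefficients, one colour channel

SH4 EvalSH2(const Vec3& d) {
    // 1/(2*sqrt(pi)) and sqrt(3/(4*pi)) * {y, z, x}
    return { 0.282095f, 0.488603f * d.y, 0.488603f * d.z, 0.488603f * d.x };
}

float DotSH(const SH4& a, const SH4& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}

// One gather step for a single cell: take the light flowing in from the six
// face neighbours and accumulate it as new lobes pointing into this cell.
SH4 PropagateCell(const std::array<SH4, 6>& neighbours) {
    static const Vec3 toCell[6] = {   // direction from each neighbour towards this cell
        { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
    };
    SH4 result{};
    for (int n = 0; n < 6; ++n) {
        const SH4 dir = EvalSH2(toCell[n]);
        // How much of the neighbour's light is heading our way...
        const float flux = std::max(0.0f, DotSH(neighbours[n], dir));
        // ...re-injected as a lobe oriented along that direction.
        for (int i = 0; i < 4; ++i) result[i] += flux * dir[i];
    }
    return result;
}
```

Running this over every cell, several iterations in a row, is what lets the light flow as far as it normally would.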
Propagate • To get the blocking geometry we use the geometry shader and compare each triangle’s world space position against the positions of the cells of the volume • This is done in the geometry shader and not the vertex shader because the vertex shader does not give a surface normal
Sampling • When we have to shade a surface we now have the lighting info (and we didn’t have to render the entire world multiple times) • Find the closest cell to the point you are shading (in the pixel shader), grab the spherical harmonic coefficients and .....
Sampling • What do the spherical harmonics actually give us? • The spherical harmonics are a low frequency representation of the light coming from all directions • Think of being inside a blurry glass ball; that would be a good representation of what the spherical harmonics give us
Sampling • So all we need to know is what light is coming in the direction of the surface normal we are shading • Or, for specular, the light along the reflection vector computed from the viewing angle and surface normal
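A sketch of that lookup for one colour channel: evaluate the SH basis in the direction of the surface normal and dot it with the cell's coefficients (for the specular term the reflection vector would be used instead of the normal); names are illustrative:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

using SH4 = std::array<float, 4>;   // four coefficients for one colour channel

SH4 EvalSH2(const Vec3& d) {
    return { 0.282095f, 0.488603f * d.y, 0.488603f * d.z, 0.488603f * d.x };
}

// Approximate light arriving at a surface from the cell's stored SH:
// evaluate the basis along the surface normal and dot with the coefficients
// (clamped so we never subtract light).
float SampleCell(const SH4& cellCoeffs, const Vec3& surfaceNormal) {
    const SH4 basis = EvalSH2(surfaceNormal);
    float result = 0.0f;
    for (int i = 0; i < 4; ++i) result += cellCoeffs[i] * basis[i];
    return result > 0.0f ? result : 0.0f;
}
```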
Light Propagation Volumes • When rendering many lights we normally use deferred rendering • In light propagation volumes, if we want to add another light all we have to do is add it to the list and then, on the GPU, calculate its spherical harmonics in the pixel shader; that is the only extra overhead of adding an extra light (we do have the base cost of propagation though)
Light Propagation Volumes • In deferred lighting, if we want to add a new light then for every surface it touches, per pixel, we need to calculate its effect and add it to the light currently affecting that pixel • With light propagation volumes adding a new light is basically only writing to three pixels, whereas deferred could potentially cover hundreds of pixels