SI31 Advanced Computer Graphics (AGR)
Lecture 18: Image-based Rendering, Light Maps, What We Did Not Cover, Learning More...
Model-based Rendering
• Conventional approach is to:
  • create a 3D model of a virtual world
  • project each object to 2D and render it into the frame buffer
• Scene complexity is a major factor:
  • real-time walkthroughs of complex scenes need powerful processing
  • this affects major application areas such as computer games and VR
Image-based Rendering
• Goal:
  • make rendering time independent of scene complexity
• Approach:
  • make use of pre-calculated imagery
  • there are many variations - we look at just two
• Question:
  • where have we met pre-calculated imagery before?
Image Caching - Impostors
• Basic idea:
  • cache the image of an object rendered in one frame for re-use in subsequent frames
• Technique:
  • project the bounding box of the object onto the image plane to get a rectangular extent for that view (sketched below)
  • capture the image and put it in texture memory
  • for the next view, render an 'impostor' - a quadrilateral in a plane parallel to the initial view plane - and texture map it with the original image
  • texture mapping uses the current view, so the image is warped appropriately
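A minimal sketch of the extent calculation, assuming the camera is described by a single 4x4 view-projection matrix; the names (impostor_extent, view_proj) are illustrative and not from any particular graphics API.

```python
import numpy as np

def impostor_extent(bbox_min, bbox_max, view_proj):
    """Project the 8 corners of an object's bounding box and return the
    screen-space rectangle (in normalised device coordinates) that the
    impostor quad must cover for this view."""
    xs, ys, zs = zip(bbox_min, bbox_max)
    corners = np.array([[x, y, z, 1.0] for x in xs for y in ys for z in zs])
    clip = corners @ view_proj.T              # transform to clip space
    ndc = clip[:, :2] / clip[:, 3:4]          # perspective divide -> NDC x, y
    return ndc.min(axis=0), ndc.max(axis=0)   # lower-left and upper-right corners
```

The image rendered into this rectangle is stored in texture memory and mapped onto the impostor quad in later frames.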
Image Caching
• Validity of impostors:
  • once the view direction changes substantially, the impostor is no longer valid (one possible test is sketched below)
  • the object is then re-rendered
• Hierarchical image caching:
  • use BSP trees to cluster objects in a hierarchy
  • distant objects can be clustered and a single image used to render the whole cluster
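One possible validity test, sketched under the assumption that we stored the eye position used when the impostor was captured; the 2-degree threshold is an illustrative parameter, not a value from the lecture.

```python
import numpy as np

def impostor_valid(object_centre, cached_eye, current_eye, max_angle_deg=2.0):
    """Keep the cached impostor while the viewing direction towards the
    object has changed by less than a small angular threshold."""
    d_old = np.asarray(object_centre, float) - np.asarray(cached_eye, float)
    d_new = np.asarray(object_centre, float) - np.asarray(current_eye, float)
    cos_a = np.dot(d_old, d_new) / (np.linalg.norm(d_old) * np.linalg.norm(d_new))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= max_angle_deg
```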
Environment Mapping - Revision
• Pre-computation:
  • from the object at the centre of the scene we rendered 6 views and stored the resulting images as the walls of a surrounding box
  • this caches the light arriving at the object from different directions
• At rendering time:
  • the specular reflection calculation bounced a viewing ray onto a point on the interior of the box and used its colour as the specular colour of the object (a look-up sketch follows below)
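A minimal sketch of the box look-up: given a reflected viewing ray, choose which of the six cached images (box walls) it hits and where. The face naming and orientation conventions here are assumptions for illustration.

```python
import numpy as np

def box_lookup(direction):
    """Map a reflected ray direction to (face, u, v), with u, v in [0, 1],
    indexing one of the six pre-rendered images on the surrounding box."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                       # hits a +/-x wall
        face, u, v, m = ('+x' if x > 0 else '-x'), -z * np.sign(x), y, ax
    elif ay >= az:                                  # hits a +/-y wall
        face, u, v, m = ('+y' if y > 0 else '-y'), x, -z * np.sign(y), ay
    else:                                           # hits a +/-z wall
        face, u, v, m = ('+z' if z > 0 else '-z'), x * np.sign(z), y, az
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)
```

The colour found at (u, v) in that image is used as the specular colour of the object.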
Light Fields
• Concept:
  • for every point, cache the light (radiance) emanating from that point in each direction
  • rendering then involves looking up a (very large) table
  • five dimensions: (x,y,z) to give position and (θ,φ) to give direction
  • in 'free' space, radiance is constant along a line, so we have a 4D light field - we pre-compute the radiance along all lines in the space
Indexing the Lines
• Use two parallel planes - think of these as lying between the viewer and the scene
• [Figure: the (u,v) plane nearer the viewer, the (s,t) plane nearer the scene; a line through both planes carries radiance L(u,v,s,t)]
• For each point on the (u,v) grid, we have a line to every point on the (s,t) grid - i.e. a 4D set of lines - known as a light slab (see the coordinate sketch below)
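A minimal sketch of turning a ray into light-slab coordinates, assuming for illustration that the (u,v) plane sits at z = 0 and the (s,t) plane at z = 1.

```python
import numpy as np

def ray_to_slab_coords(origin, direction):
    """Intersect a ray with the two parameter planes and return (u, v, s, t)."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)      # assumed not parallel to the planes
    t_uv = (0.0 - o[2]) / d[2]            # ray parameter at the (u,v) plane, z = 0
    t_st = (1.0 - o[2]) / d[2]            # ray parameter at the (s,t) plane, z = 1
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t
```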
Constructing a Light Field
• Place a camera on the (u,v) plane
• For each point on the grid (u_i, v_j), render the scene and store the image as Image_ij(s_k, t_l)
• This gives a 2D array of images!
• Do this from all six surrounding directions of the scene - i.e. six light slabs
Rendering
• The rendering operation is now a linear look-up operation on our 2D array of images
• For example, any ray in a ray tracing approach corresponds to a particular 4D point (u,v,s,t) - we look up its value in the light field (using interpolation if it is not exactly on a grid point), as sketched below
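A minimal sketch of the look-up, assuming the light slab is stored as a 4D array L indexed by grid coordinates (u, v, s, t); the array name and regular grid layout are assumptions for illustration.

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Quadrilinearly interpolate radiance at a continuous (u, v, s, t)."""
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]          # lower grid corner
    frac = [c - l for c, l in zip(coords, lo)]       # fractional offsets
    result = 0.0
    for corner in range(16):                         # 2^4 surrounding samples
        idx, weight = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(min(lo[axis] + bit, L.shape[axis] - 1))
            weight *= frac[axis] if bit else 1.0 - frac[axis]
        result += weight * L[tuple(idx)]
    return result
```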
Compression
• The technique is only feasible because there is coherence between successive images
• Hence the 2D array of images can be compressed by factors of over 100
Model-based versus Image-based Rendering
• [Figure: diagram contrasting the two pipelines; labels include: virtual world, real world, model construction, image acquisition, off-line rendering, model, images, image analysis, real-time rendering, image-based rendering, real-time interactive flythrough]
The Problem with Gouraud...
• Gouraud shading is the established technique for rendering but has well-known limitations
• Vertex lighting only works well for small polygons...
• ... but we don't want lots of polygons!
Pre-Compute the Lighting
• The solution is to pre-compute some canonical lighting effects as texture maps
• For example...
Rendering using Light Maps
• Suppose we want to show the effect of a wall light:
  • create the wall as a single polygon
  • apply vertex lighting
  • apply the texture map
  • in a second rendering pass, apply the light map to the wall (the combining step is sketched below)
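A minimal sketch of the combining step expressed as an image operation, assuming the base texture and the light map are arrays of RGB values in [0, 1]; in practice the modulation is performed per pixel by the graphics hardware during the second pass.

```python
import numpy as np

def apply_light_map(base_texture, light_map):
    """Modulate a base texture by a light map (component-wise multiply)."""
    return np.clip(base_texture * light_map, 0.0, 1.0)

# Example: a bright patch in the light map lights up that part of the wall.
wall = np.full((4, 4, 3), 0.8)                   # plain wall texture
lmap = np.full((4, 4, 3), 0.2)                   # dim everywhere...
lmap[1:3, 1:3] = 1.0                             # ...except near the wall light
print(apply_light_map(wall, lmap)[0, 0], apply_light_map(wall, lmap)[1, 1])
```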
Light Maps
• Widely used in the games industry
• The latest graphics cards allow multiple texture maps per pixel (multitexturing)
Parametric Surface Representation
• Rather than require the user to represent curved surfaces as an 'IndexedFaceSet' of flat polygons, some modelling systems allow representation as Bezier or spline surfaces (patch evaluation is sketched below)
• See Hearn & Baker, Chap 10
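A minimal sketch of evaluating one bicubic Bezier patch from a 4x4 grid of control points, using the Bernstein form; the control-point layout is an assumption for illustration.

```python
import numpy as np

def bernstein3(t):
    """The four cubic Bernstein basis functions at parameter t."""
    s = 1.0 - t
    return np.array([s**3, 3*s*s*t, 3*s*t*t, t**3])

def bezier_patch_point(control_points, u, v):
    """control_points has shape (4, 4, 3); returns the surface point at (u, v)."""
    return np.einsum('i,j,ijk->k', bernstein3(u), bernstein3(v), control_points)

# Example: a flat 4x4 control grid in the z = 0 plane stays in that plane.
grid = np.array([[[i, j, 0.0] for j in range(4)] for i in range(4)], float)
print(bezier_patch_point(grid, 0.5, 0.5))        # [1.5, 1.5, 0.0]
```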
Constructive Solid Geometry
• Rather than model objects as surfaces, some systems work in terms of solid objects - a field known as Constructive Solid Geometry (CSG)
• Primitive objects (sphere, cylinder, torus, ...) are combined by operators (union, intersection, difference)
• The result is always a solid
• Rendering is typically done via ray tracing (the point classification it relies on is sketched below)
• See Hearn & Baker, Chap 10
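A minimal sketch of CSG point classification, which is the basic test a CSG ray tracer applies along each ray; the primitives and the example tree below are illustrative assumptions.

```python
import numpy as np

def sphere(centre, radius):
    """A solid primitive, represented by its point-membership test."""
    return lambda p: np.linalg.norm(np.asarray(p, float) - centre) <= radius

def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# Example: a sphere with a smaller sphere carved out of its right-hand side.
solid = difference(sphere(np.array([0.0, 0.0, 0.0]), 1.0),
                   sphere(np.array([0.8, 0.0, 0.0]), 0.5))
print(solid([0.0, 0.0, 0.0]), solid([0.9, 0.0, 0.0]))    # True False
```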
Volume Graphics
• A very new approach is to model using volumes - with varying transparency (ray compositing is sketched below)
• OpenGL Volumizer adds this capability to OpenGL - see: www.sgi.com/software/volumizer
• Mitsubishi Volume Pro 500 board - see: www.rtviz.com
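A minimal sketch of ray marching a volume with front-to-back compositing of colour and opacity; the samples along the ray and the transfer function are illustrative assumptions (this is not the OpenGL Volumizer API).

```python
import numpy as np

def composite_ray(samples, transfer):
    """samples: scalar values along one ray; transfer: value -> ((r, g, b), alpha)."""
    colour, transmittance = np.zeros(3), 1.0
    for s in samples:
        rgb, alpha = transfer(s)
        colour += transmittance * alpha * np.asarray(rgb, float)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                  # early ray termination
            break
    return colour

# Example transfer function: denser samples are brighter and more opaque.
print(composite_ray(np.linspace(0.0, 1.0, 16), lambda s: ((s, s, s), 0.1 * s)))
```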
Procedural Modelling
• Objects can be defined procedurally - i.e. by mathematical functions
• Fractals are a well-known example (one construction is sketched below)
• See Hearn & Baker, Chap 10, and The Fractory: library.advanced.org/3288/
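A minimal sketch of one well-known procedural construction, midpoint displacement, which generates a fractal height profile; the roughness and depth values are illustrative assumptions.

```python
import random

def midpoint_displace(heights, roughness=0.5, depth=6):
    """Repeatedly insert a randomly displaced midpoint between neighbours."""
    for level in range(depth):
        scale = roughness ** (level + 1)          # smaller bumps at finer levels
        refined = []
        for a, b in zip(heights, heights[1:]):
            refined.append(a)
            refined.append(0.5 * (a + b) + random.uniform(-scale, scale))
        refined.append(heights[-1])
        heights = refined
    return heights

profile = midpoint_displace([0.0, 1.0, 0.0])
print(len(profile), min(profile), max(profile))
```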
Other Important Topics
• Colour
• Anti-aliasing
• Animation
• ... and much more!
Learning More
• Journals:
  • IEEE Computer Graphics and Applications
  • Computer Graphics Forum
  • Computers and Graphics
• Conferences:
  • ACM SIGGRAPH (Proceedings published as ACM Computer Graphics)
  • Eurographics
  • Eurographics UK