Introduction to Further Programming for 3D Applications Faculty of Computing, Engineering and Technology Staffordshire University Further Programming for 3D Applications CE00849-2 Bob Hobbs
Outline • Module Details • What is 3D programming? • Typical Processing Steps • Modelling and Rendering • Applications • Summary
Module Details • Teaching Team • Bob Hobbs r.g.hobbs@staffs.ac.uk • Dr. Len Noriega l.a.noriega@staffs.ac.uk • Semester 1, 15 CATS credits • 3 hours per week • 1 hour lecture: Tue 12pm • 2 hours practical: Mon 1pm, Tue 4pm & Thu 9am
Module Details • Course Handbook & Lecture Notes: http://www.soc.staffs.ac.uk/rgh1 • Assessment Details • 50% assignment work • 50% two-hour exam
Program of Study • Week 01 Introduction • Week 02 General 3D concepts, Motion and Collision • Week 03 Manipulating the Matrix stack • Week 04 Model Importation and Binary File handling • Week 05 Model Interaction and Viewing transforms • Week 06 Windowing and GUI elements • Week 07 Lighting and Shading Techniques • Week 08 Lighting and Shading Techniques • Week 09 Textures & Texture Mapping • Week 10 LOD, mipmapping, tessellation • Week 11 Vertex and Pixel Shading • Week 12 Assessments
Hierarchy of Models • Behaviour • Bio-Mechanics • Physics • Geometry
How does this work? • Simulation Loop • read input sensors • update objects • render scene in display • Uses traditional 3D graphics methods to render or ‘draw’ the scene
Simulation Loop • Read sensors • Check any defined actions • Update objects with sensor input • Objects perform tasks • Step along any defined paths • Render universe
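A minimal sketch of this loop in C is shown below; the function names (read_sensors, update_objects, render_scene) are placeholders for illustration, not part of any particular library.

    /* Sketch of the simulation loop: read input, update objects, render.
       The helper functions are illustrative stubs. */
    #include <stdio.h>
    #include <stdbool.h>

    static bool running = true;
    static int  frame   = 0;

    static void read_sensors(void)   { /* poll keyboard, mouse, trackers, ... */ }
    static void update_objects(void) { /* apply actions, step objects along paths */ }
    static void render_scene(void)   { printf("rendering frame %d\n", frame); }

    int main(void)
    {
        while (running) {
            read_sensors();     /* 1. read input sensors      */
            update_objects();   /* 2. update world objects    */
            render_scene();     /* 3. render scene in display */
            if (++frame == 3)   /* stop this demo after a few iterations */
                running = false;
        }
        return 0;
    }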
Introducing OpenGL • Graphics basics: • Transform geometry (object → world, world → eye) • Apply perspective projection (eye → screen) • Clip to the view frustum • Perform visible-surface processing (Z-buffer) • Calculate surface lighting etc. • Implementing all this is a lot of work (surprise) • OpenGL provides a standard implementation • So why study the basics?
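As a rough illustration of how these steps map onto OpenGL and GLU calls (covered in later weeks), a sketch of a viewing and projection setup is given below. It assumes a GL context has already been created by some windowing toolkit, which OpenGL itself does not provide.

    /* Sketch: the classic pipeline steps expressed as fixed-function GL calls. */
    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_view(int width, int height)
    {
        /* eye -> screen: perspective projection */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double)width / height, 0.1, 100.0);

        /* world -> eye: position the virtual camera */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,    /* eye position          */
                  0.0, 0.0, 0.0,    /* point being looked at */
                  0.0, 1.0, 0.0);   /* up direction          */

        /* object -> world transforms (glTranslatef/glRotatef) are applied
           per object on the modelview stack when drawing */

        /* visible-surface processing via the Z-buffer */
        glEnable(GL_DEPTH_TEST);

        /* basic surface lighting */
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
    }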
OpenGL Design Goals • SGI’s design goals for OpenGL: • Hardware independence without sacrificing performance • Natural, concise API with some built-in extensibility • OpenGL has become a standard because: • It doesn’t try to do too much • Only renders the image, doesn’t manage windows, etc. • No high-level animation, modeling, sound (!), etc. • It does enough • Useful rendering effects + high performance • It is promoted by SGI (& Microsoft, half-heartedly)
OpenGL: Conventions • Functions in OpenGL start with gl • Functions starting with glu are utility functions (e.g., gluLookAt()) • Functions starting with glX are for interfacing with the X Window System (e.g., in gfx.c) • Function names indicate argument type/# • Functions ending with f take floats • Functions ending with i take ints, functions that end with v take an array, with b take byte, etc. • Ex: glColor3f() takes 3 floats, but glColor4fv() takes an array of 4 floats
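A small illustration of the suffix convention, assuming a current GL context (the colour values are arbitrary):

    #include <GL/gl.h>

    void set_orange(void)
    {
        GLfloat orange[4] = { 1.0f, 0.5f, 0.0f, 1.0f };

        glColor3f(1.0f, 0.5f, 0.0f);   /* ...3f : three floats         */
        glColor4fv(orange);            /* ...4fv: array of four floats */
        glColor3ub(255, 128, 0);       /* ...3ub: three unsigned bytes */
    }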
OpenGL: Specifying Geometry • Geometry in OpenGL consists of a list of vertices in between calls to glBegin() and glEnd() • A simple example: telling GL to render a triangle
    glBegin(GL_POLYGON);
    glVertex3f(x1, y1, z1);
    glVertex3f(x2, y2, z2);
    glVertex3f(x3, y3, z3);
    glEnd();
• Usage: glBegin(geomtype) where geomtype is: • Points, lines, polygons, triangles, quadrilaterals, etc...
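Below is one way to turn the fragment above into a complete, minimal program. GLUT is used here purely to obtain a window and a GL context; it is an assumption of this sketch, not necessarily the windowing layer used in the practicals.

    /* "Hello triangle": a single polygon rendered with glBegin()/glEnd(). */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_POLYGON);                 /* three vertices -> one triangle */
            glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();

        glFlush();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(400, 400);
        glutCreateWindow("GL_POLYGON example");
        glutDisplayFunc(display);
        glutMainLoop();                      /* hands control to GLUT's event loop */
        return 0;
    }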
What is 3D rendering? • Generally deals with graphical display of 3D objects as seen by viewer • Synthetic image will vary according to: viewing direction, viewer position, illumination, object properties, ... • [Diagram: object, projection onto 2D surface, viewer]
What is 3D Computer Graphics? • 3D graphics: generation of graphical display (rendering) of 3D object(s) from specification (model(s)) • [Diagram: specification, modelling, rendering, graphical display]
Typical Processing Steps • Wireframe polygonal model (vertices, facets) • Transformation, hidden surface removal, shading (using light source and viewpoint) • Result: solid object
Typical Processing Steps • Modelling: numerical description of scene objects, illumination, and viewer • Rendering: operations that produce view of scene projected onto view surface • [Diagram: object model(s), illumination model, and viewing and projection specification feed the graphics engine, which produces the graphical display]
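As a sketch of how some of these modelling inputs (light source, object properties, wireframe vs. solid output) appear in fixed-function OpenGL, assuming lighting has been enabled elsewhere:

    #include <GL/gl.h>

    void describe_scene(int wireframe)
    {
        GLfloat light_pos[4] = { 1.0f, 1.0f, 1.0f, 0.0f };  /* directional light */
        GLfloat diffuse[4]   = { 0.8f, 0.6f, 0.4f, 1.0f };  /* surface colour    */

        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);   /* the light source   */
        glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);    /* object properties  */
        glShadeModel(GL_SMOOTH);                        /* per-vertex shading */

        /* wireframe polygonal model or solid, shaded object */
        glPolygonMode(GL_FRONT_AND_BACK, wireframe ? GL_LINE : GL_FILL);
    }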
Modelling • Human Head Model (1438 facets)
Modelling • Human Head Model (7258 facets)
Modelling • Teacher and Board Model (2074 facets)
Rendering • Shaded Human Head (1438 facets)
Rendering • Shaded Teacher and Board
Scene Graphs • In VR programming the structure used is a scene graph, which is a special tree structure designed to store information about a scene. • Typical elements include • geometries • positional information • lights • fog
Simple scene graph • [Diagram of a simple scene graph: a root node with fog, light, group, geom and xform nodes]
Scene Graph Nodes • Content Nodes • contain basic elements of a scene • geometry • light • position • fog • Group Nodes • no content • link the hierarchy • allow grouping of nodes sharing a common state • [Diagram: a parent node with two children, Child #1 and Child #2]
Example Hierarchy • [Diagram: a Root node with a Light, Group "Dog" and Group "Lampost"; the groups contain transform nodes (Xform T1, Xform T2) and geometry nodes (Geom Dog, Geom Lampost)]
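A minimal scene-graph sketch in C, using the node types from the slides above. The struct layout and the pairing of transforms with geometry are illustrative assumptions, not the structure of any particular library.

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { NODE_GROUP, NODE_XFORM, NODE_GEOM, NODE_LIGHT, NODE_FOG } NodeType;

    #define MAX_CHILDREN 8

    typedef struct Node {
        NodeType     type;
        const char  *name;
        struct Node *children[MAX_CHILDREN];
        int          num_children;
    } Node;

    static Node *make_node(NodeType type, const char *name)
    {
        Node *n = calloc(1, sizeof(Node));
        n->type = type;
        n->name = name;
        return n;
    }

    static void add_child(Node *parent, Node *child)
    {
        if (parent->num_children < MAX_CHILDREN)
            parent->children[parent->num_children++] = child;
    }

    /* Depth-first traversal: this is where a renderer would push/pop
       transforms and draw geometry. */
    static void traverse(const Node *n, int depth)
    {
        printf("%*s%s\n", depth * 2, "", n->name);
        for (int i = 0; i < n->num_children; i++)
            traverse(n->children[i], depth + 1);
    }

    int main(void)
    {
        Node *root     = make_node(NODE_GROUP, "Root");
        Node *dog      = make_node(NODE_GROUP, "Group Dog");
        Node *lamppost = make_node(NODE_GROUP, "Group Lampost");
        Node *t1       = make_node(NODE_XFORM, "Xform T1");
        Node *t2       = make_node(NODE_XFORM, "Xform T2");

        add_child(root, make_node(NODE_LIGHT, "Light"));
        add_child(root, dog);
        add_child(root, lamppost);
        add_child(dog,  t1);
        add_child(t1,   make_node(NODE_GEOM, "Geom Dog"));
        add_child(lamppost, t2);
        add_child(t2,   make_node(NODE_GEOM, "Geom Lampost"));

        traverse(root, 0);   /* print the hierarchy */
        return 0;
    }

Each iteration of the simulation loop would update the transform nodes and then traverse the graph from the root to redraw the world.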
Applications • VR programming is used in many applications, e.g. • Entertainment (computer games, ‘movie’ special effects, ...) • Human-computer interaction (GUI, ...) • Science, education, medicine (visualisation, ...) • Business (marketing, ...) • Art
Summary • A simulation consists of a series of scenes • Scenes are defined by a scene graph, which may describe one object or a related collection of objects • Each iteration of the simulation loop determines actions, translations (along paths) and other inputs which affect the properties of the objects • NOT animation!!! • The world is redrawn using the rendering process