Welcome to CSCI 480, Computer Graphics • Introduction • Instructor: Erin Shaw, TA: TBD • Class web page • http://www.isi.edu/~shaw/cs480/index.html • Assignment • Reading: Chapter 1 of Angel textbook • Project 1: Follow the homework icon to Project 1 • Online registration and survey: TBP
What is Computer Graphics? • Computer graphics is the technology for presenting information • Towards this end, we need to... • process, transform, and display data • ...and must take the following into account • Origin (where does it come from?) • Throughput (how much of it can I process?) • Latency (how long do I have to wait?) • Presentation (what does it look like?)
We will cover... • Graphics programming algorithms • Graphics data structures • Color & human vision • Graphical interface design & programming • Geometry & modeling • ...and, by necessity, OpenGL • It is not a course in specific tools: • Image/illustration packages (Adobe Photoshop, Illustrator) • CAD packages (AutoCAD) • Rendering packages (Renderman, Lightscape) • Modeling packages (3D Studio MAX) • Animation packages (Flash, Digimation) • CSCI 480 is Computer Science!
CSCI 480 goals • The goals of this semester are many-fold • hands-on graphics programming experience with industry tools • emphasis on 3D and rendering • mathematical underpinnings • familiarity with the field • If only we had more time!
Who am I? • Non-faculty • Research scientist at USC’s Information Sciences Institute (since 1995) • Distance education, virtual intelligent tutors, UI design; past research in CG and digital libraries • Graphics background • MS from the Program of Computer Graphics at Cornell University • Employee #12 of the startup Edsun Labs, which built a new CEG chip for anti-aliasing
Lecture • Graphics applications • Display architectures and devices • Images • Light • Human visual system • Camera models • APIs and OpenGL • Rendering pipeline
Graphics displays • Motivation • How do we keep the display screen refreshed as more data is displayed at higher resolutions? • The frame buffer (video memory) holds the screen data • Buffer size (width × height) = screen resolution • Buffer depth = no. of bits per pixel, which determines the no. of colors • number of colors = 2^(number of bits) • 1 bit per pixel: the bit can be 0 or 1, i.e. black or white • 8 bits per pixel gives 2^8 = 256 colors, or intensities
Graphics displays • Color systems require at least 3 times as much memory • Typically an 8-bit buffer for each color channel (RGB) • True color is at least 24 bits per pixel • 32-bit systems are RGBA, where A is alpha, the transparency value • Video memory is highly specialized • VideoRAM, DynamicRAM, SDRAM, etc. • article on accelerator boards
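As a quick sanity check on these numbers, here is a minimal sketch (the 1024 × 768 resolution and the list of depths are example values of mine, not from the slides) computing color counts and frame-buffer memory in C:

    #include <stdio.h>

    int main(void) {
        int width = 1024, height = 768;      /* example screen resolution */
        int depths[] = { 1, 8, 24, 32 };     /* bits per pixel */
        for (int i = 0; i < 4; i++) {
            int bpp = depths[i];
            double colors = 1.0;             /* 2^bpp distinct values */
            for (int b = 0; b < bpp; b++) colors *= 2.0;
            double bytes = (double)width * height * bpp / 8.0;
            printf("%2d bpp: %.0f colors, %.0f KB of video memory\n",
                   bpp, colors, bytes / 1024.0);
        }
        return 0;
    }

At 32 bits per pixel, a 1024 × 768 frame buffer already needs 3 MB, which is why video memory is specialized.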
Graphics displays • Rasterization • Is the process of converting geometric entities (lines and curves) to pixel assignments in the frame buffer • Most monitors and printers are raster-based • Rasterization can be done in hardware, software, or a combination of the two • Rasterization is the final step of the rendering pipeline
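To make rasterization concrete, here is a minimal sketch of the classic DDA line algorithm, one of several ways to convert a line segment to pixel assignments (set_pixel is a hypothetical frame-buffer write, here replaced by a print):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical frame-buffer write; here it just prints the pixel. */
    static void set_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

    /* DDA: step one pixel at a time along the major axis,
       interpolating the minor axis in floating point. */
    void draw_line(int x0, int y0, int x1, int y1) {
        int dx = x1 - x0, dy = y1 - y0;
        int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
        if (steps == 0) { set_pixel(x0, y0); return; }
        double xinc = (double)dx / steps, yinc = (double)dy / steps;
        double x = x0, y = y0;
        for (int i = 0; i <= steps; i++) {
            set_pixel((int)lround(x), (int)lround(y));
            x += xinc;
            y += yinc;
        }
    }

Hardware rasterizers use integer-only variants (e.g. Bresenham’s algorithm), but the idea is the same.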
Lecture • Graphics applications • Display architectures and devices • Images • Light • Human visual system • Camera models • APIs and OpenGL • Rendering pipeline
Display devices • More on display devices • http://www.pctechguide.com/07panels.htm • http://www.magnavox.com:81/electreference/videohandbook/tvset.html • http://144.126.176.216/Displays/c3_s1.htm • http://www.ul.ie/~flanagan/ghardw/lcd.html
Lecture • Graphics applications • Display architectures and devices • Images • Light • Human visual system • Camera models • APIs and OpenGL • Rendering pipeline
Images • We will follow the book’s top-down approach • Instead of starting with 2D and generalizing, we will jump right into 3D, using OpenGL • 3D CG images are synthetic • That is, they do not exist physically • Often this is obvious; sometimes it is not, and making it not obvious is the goal of physically-based rendering • We want to create synthetic images the same way we create traditional images
Images • 3D objects are first modeled • Typically with a CAD program that outputs either a list of polygons (vertices) or a scene graph of primitive objects • At this stage they exist independently of a viewer • object space versus camera (eye) space
Images • 3D objects are then rendered • We account for the viewer at this stage • We are the cameras in our world (but synthetic cameras are more versatile!) • Images are two-dimensional • Images would be black without light!
What is light? • Visible light is electromagnetic radiation • Specifically, it’s the portion of the electromagnetic spectrum that the eye can detect • Electromagnetic radiation = radiant energy • Characterized by either • wavelength (λ), in nanometers (nm) • frequency (f), in hertz (Hz)
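The two characterizations are linked by the speed of light, $c = \lambda f$. As a quick worked example (the 550 nm value is mine, chosen as a typical green wavelength):

$$ f = \frac{c}{\lambda} = \frac{3 \times 10^{8}\,\mathrm{m/s}}{550 \times 10^{-9}\,\mathrm{m}} \approx 5.5 \times 10^{14}\,\mathrm{Hz} $$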
[Figure: the electromagnetic spectrum, ordered by increasing wavelength / decreasing frequency: cosmic rays, gamma rays, x-rays, ultraviolet, visible light, infrared, radar, radio (short wave, TV, FM), and AC electricity. Axes show frequency in Hz and wavelength in nm, spanning roughly 10^-5 nm (about 3.9 × 10^-13 inch) to 10^16 nm (thousands of miles).]
Imaging systems • Three examples • Human visual system • Pinhole camera • Graphics renderer • Our goal is to understand how images are formed by the eye, by a camera, and finally by a computer
Human visual system • 1. Light enters through the cornea • 2. The iris opens and closes the pupil to adjust the amount of light • 3. The lens helps focus the image on the retina • 4. Photoreceptors in the retina collect light and convert the energy to impulses • Reference: http://members.aol.com/osleye/Main.htm
Visual acuity • Visual acuity is defined as • a measure of the ability of the eye to distinguish detail • Four factors affect visual acuity • size, luminance, contrast, time • The amount of light reaching the eye can be described as • brightness (a subjective interpretation) • luminance (an objective physical quantity) • radiance (also an objective physical quantity)
Radiometry vs photometry • Radiometry • Physical measurement (all electromagnetic energy) • Used by optical and radiation engineers • Photometry • How a human observer responds to light • Perceptual measurement (visible light only) • Used by illumination engineers and perceptual psychologists
Radiometric vs photometric • In computer graphics both are used!

    Radiometric                      Photometric
    Radiant energy (joule)           Luminous energy (talbot)
    Radiant power (W = J/sec)        Luminous power (lm = talbot/sec)
    Irradiance (W/m²)                Illuminance (lm/m²)
    Radiant exitance (W/m²)          Luminous exitance (lm/m²)
    Radiance (W/(sr·m²))             Luminance (cd/m²)

• *W = watt, m = meter, sr = steradian, lm = lumen, cd = candela
Photoreceptors • Rods • Occupy the peripheral retina • Responsible for detection of movement, shapes, and nighttime vision • Cones • Responsible for fine detail and color vision • Occupy a small portion of the retina (the macula) • Three subtypes: red, green, and blue
Photoreceptor sensitivity • [Figure: sensitivity curve of each RGB cone type, peaking near 445 nm (blue), 535 nm (green), and 570 nm (red).] • Red, green & blue cones are sensitive to different frequencies of light (guess which!)
Photoreceptor sensitivity • A great image from the Architectural Science Lab at the Univ. of Western Australia, showing the three characteristic peaks of sensitivity within the red/orange, green and blue frequency bands
Human visual system • What to take away from this discussion • The basic system of image formation • The RGB computer color model is based on the tristimulus theory of vision, which derives from these sensitivity curves • There is a lot of processing after an image is formed that we will not (cannot) model
Pinhole camera • Simplistic model • a box with a small hole in one side • the hole admits only a single ray of light from each point in the scene • [Figure: pinhole camera with its center at the origin; d is the distance to the image plane]
Pinhole camera • Align the image plane along the z axis, at z = −d • Find the image point using similar triangles • yp / −d = y / z, so yp = −y d / z • xp / −d = x / z, so xp = −x d / z • The point (xp, yp, −d) is the projection of (x, y, z)
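A minimal sketch of this projection in code (the function and variable names are mine; z is assumed nonzero and negative, i.e. the point lies in front of the pinhole):

    #include <stdio.h>

    /* Project a 3D point through a pinhole at the origin onto the
       image plane z = -d, using the similar-triangle relations above. */
    void project(double x, double y, double z, double d,
                 double *xp, double *yp) {
        *xp = -x * d / z;
        *yp = -y * d / z;
    }

    int main(void) {
        double xp, yp;
        project(1.0, 2.0, -5.0, 1.0, &xp, &yp);  /* point at z = -5, d = 1 */
        printf("image point: (%g, %g, %g)\n", xp, yp, -1.0);
        return 0;
    }

Note the division by z: objects twice as far away project half as large, which is exactly perspective foreshortening.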
Pinhole camera • The field of view (or view angle), θ, is the angle subtended by the largest object whose projection fits on the view plane • θ = 2 tan⁻¹(h / 2d), where h is the height of the view plane
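A quick worked case (the numbers are mine): for a view plane of height $h = 2$ at distance $d = 1$,

$$ \theta = 2\tan^{-1}\frac{h}{2d} = 2\tan^{-1}(1) = 90^\circ $$

so moving the image plane closer to the pinhole (smaller d) widens the field of view.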
Pinhole camera • The depth of field is the range of distances from the lens over which objects are in focus • In a pinhole camera, the depth of field is infinite (a perfect lens!), but very little light gets in (an exception is sunlight), so images are dark • If we replace the pinhole with a lens • we can capture brighter images • we can vary the focal length, d, which in turn changes the field of view (the zoom of a camera) and affects the depth of field
Synthetic camera model • The camera system we’ll use in CG is analogous to the other imaging systems • The focal length determines the projection plane (the image plane), e.g. the plane z = −10 • The image is projected point by point onto the projection plane • The field of view is simulated using a clipping window, or frustum • Depth of field is more difficult to simulate
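In OpenGL, the API we will use, the synthetic camera is typically specified along these lines; the particular angles and distances below are illustrative values of mine, not from the slides:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_camera(void) {
        /* Field of view and clipping frustum: 60-degree vertical view
           angle, 4:3 aspect ratio, near and far clipping planes. */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

        /* Viewer position and orientation: eye at (0, 0, 10) looking
           at the origin, with +y as "up". */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 10.0,   /* eye position */
                  0.0, 0.0, 0.0,    /* point looked at */
                  0.0, 1.0, 0.0);   /* up direction */
    }

The view angle and near/far planes of gluPerspective are exactly the clipping frustum described above.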
How shall we model light? • Particle model at large scale • Geometrical optics • Radiometry • Wave model at small scale • Physical optics • Maxwell’s equations
APIs • Application programming interfaces (APIs) shield users from implementation details • The layers: Application program → Graphics library (API) → Hardware → Display • OpenGL, PHIGS, Direct3D, VRML, and Java3D are all graphics APIs
OpenGL • For a synthetic camera model, the OpenGL API functions must allow the user to specify: • Objects • polygons, points, spheres, curves, surfaces • Code that defines a polygon in OpenGL (GL_POLYGON is the object type; the arguments to glVertex3f are the x, y, z coordinates of each 3D point): • glBegin(GL_POLYGON); • glVertex3f(0.0, 0.0, 0.0); • glVertex3f(0.0, 1.0, 0.0); • glVertex3f(0.0, 0.0, 1.0); • glEnd();
OpenGL • Note: • Both modeling and rendering can be performed in OpenGL • Often, the two are separate and performed by different applications, e.g. AutoCAD (modeler) and Lightscape (renderer) • Rendering packages typically take as input the output of the popular CAD modelers
OpenGL • For a synthetic camera model, the OpenGL API functions must also allow the user to specify: • Viewer (camera) • see Figure 1.23 on p. 23 of the textbook • position, orientation, focal length • Light sources • location, color, direction, strength • Material properties • color, transparency, reflectivity, smoothness, etc.
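A minimal sketch of how lights and materials are specified in OpenGL’s fixed-function API (the particular positions and colors are illustrative values of mine):

    #include <GL/gl.h>

    void setup_light_and_material(void) {
        /* Light source: position (w = 1 means a positional light,
           not a direction) and color. */
        GLfloat light_pos[]   = { 1.0f, 1.0f, 1.0f, 1.0f };
        GLfloat light_color[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  light_color);
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);

        /* Material: a dull red surface with a small specular term;
           shininess controls the size of the highlight. */
        GLfloat mat_diffuse[]  = { 0.8f, 0.1f, 0.1f, 1.0f };
        GLfloat mat_specular[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        glMaterialfv(GL_FRONT, GL_DIFFUSE,   mat_diffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR,  mat_specular);
        glMaterialf (GL_FRONT, GL_SHININESS, 32.0f);
    }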
Pipeline architecture • Pipelines • Data goes to processing step a; the resulting data is passed to processing step b, and so on • Motivation • Raster displays became universal • In CG, the exact same operations are performed on millions of vertices per scene • Four major steps • transform (4), clip (7), project (5), rasterize (7)
Rendering pipeline • Modeling coords • modeling transform • World coords (object space) • visibility determination • lighting • viewing transform • View coords (eye space) • clip to hither and yon (the near and far planes) • projection transform
Rendering pipeline • Normalized device coords (clip space) • clip to left, right, top, bottom • scale and translate (workstation transform) • Device (screen) coords (image space) • hidden surface removal • rasterization • Display
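To tie the stages together, here is a heavily simplified sketch of the coordinate flow for a single vertex (the names and the row-major matrix layout are mine; a real pipeline uses the same 4×4 homogeneous transforms but interleaves lighting, clipping, and rasterization):

    #include <stdio.h>

    typedef struct { double x, y, z, w; } Vec4;

    /* Multiply a homogeneous point by a 4x4 matrix (row-major). */
    Vec4 transform(const double m[16], Vec4 v) {
        Vec4 r;
        r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
        r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
        r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
        r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
        return r;
    }

    /* One vertex through the pipeline: model -> world -> eye -> clip
       space, perspective divide to normalized device coordinates,
       then the scale-and-translate (workstation) transform. */
    void pipeline(const double model[16], const double view[16],
                  const double proj[16], Vec4 v, int width, int height) {
        v = transform(model, v);                 /* modeling transform   */
        v = transform(view, v);                  /* viewing transform    */
        v = transform(proj, v);                  /* projection transform */
        double xn = v.x / v.w, yn = v.y / v.w;   /* perspective divide   */
        double sx = (xn + 1.0) * 0.5 * width;    /* NDC [-1,1] to        */
        double sy = (1.0 - yn) * 0.5 * height;   /* screen coordinates   */
        printf("screen: (%.1f, %.1f)\n", sx, sy);
    }

Every stage is the same small matrix multiply, which is why the pipeline maps so well onto hardware.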