Computer Graphics and Visualization (Course code: 10CS65)
Unit-1: Introduction
Engineered for Tomorrow
Department: Computer Science and Engineering
Date: 15.12.2013
INTRODUCTION • Applications of computer graphics • A graphics system • Images: physical and synthetic • Imaging systems • The synthetic camera model • The programmer’s interface • Graphics architectures • Programmable pipelines • Performance characteristics • Graphics programming • The Sierpinski gasket • Programming two-dimensional applications.
Computer Graphics • What is it? • Overview of what we will cover • A Graphics Overview • Graphics Theory • A Graphics Software System: OpenGL • Our approach will be top-down • We want you to start writing application programs that generate graphical output as quickly as possible
Computer Graphics • Computer graphics deals with all aspects of creating images with a computer • Hardware • CPUs • GPUs • Software • OpenGL • DirectX • Applications
Computer Graphics • Using a computer as a rendering tool for the generation (from models) and manipulation of images is called computer graphics • More precisely: image synthesis
Applications of computer graphics The development of Computer Graphics has been driven by the needs of the user community and by the advances in hardware and software. • Display of Information • Design • Simulation & Animation • User Interfaces
Applications of computer graphics • Computer-Aided Design: for engineering and architectural systems, etc. Objects may be displayed in wireframe outline form. A multi-window environment is also favored for producing various zooming scales and views. Animations are useful for testing performance. • Presentation Graphics: to produce illustrations that summarize various kinds of data. Besides 2D, 3D graphics are good tools for reporting more complex data. • Computer Art: painting packages are available. With a cordless, pressure-sensitive stylus, artists can produce electronic paintings that simulate different brush strokes, brush widths, and colors. Photorealistic techniques, morphing, and animation are very useful in commercial art. For film, 24 frames per second are required; for a video monitor, 30 frames per second.
Applications of computer graphics • Entertainment: motion pictures, music videos, TV shows, and computer games. • Education and Training: training with computer-generated models of specialized systems, such as the training of ship captains and aircraft pilots. • Visualization: for analyzing scientific, engineering, medical, and business data or behavior. Converting data to visual form can help in understanding massive volumes of data very efficiently. • Image Processing: applying techniques to modify or interpret existing pictures; widely used in medical applications. • Graphical User Interface: multiple windows, icons, and menus allow a computer to be used more efficiently.
A graphics system • A Graphics system has 5 main elements: • Input Devices • Processor • Memory • Frame Buffer • Output Devices
Pixels and the Frame Buffer • A picture is produced as an array (raster) of picture elements (pixels). • These pixels are collectively stored in the frame buffer. • Properties of the frame buffer: • Resolution – the number of pixels in the frame buffer • Depth or precision – the number of bits used for each pixel • E.g.: a 1-bit-deep frame buffer allows 2 colors; an 8-bit-deep frame buffer allows 256 colors. • A frame buffer is implemented either with special types of memory chips or as a part of system memory. • In simple systems the CPU does both normal and graphical processing. • Graphics processing – taking specifications of graphical primitives from the application program and assigning values to the pixels in the frame buffer. This is also known as rasterization or scan conversion.
A graphics system Interactive Graphics System
Output Devices • Various parts of a CRT: • Electron gun – emits an electron beam which strikes the phosphor coating, causing it to emit light. • Deflection plates – control the direction of the beam. The output of the computer is converted by digital-to-analog converters to voltages across the x and y deflection plates. • Refresh rate – in order to view a flicker-free image, the image on the screen has to be retraced by the beam at a high rate (modern systems operate at 85 Hz). • 2 types of refresh: • Noninterlaced display: pixels are displayed row by row at the refresh rate. • Interlaced display: odd rows and even rows are refreshed alternately.
Shadow-Mask CRT • Just behind the phosphor-coated face of the CRT there is a metal plate, the shadow mask, pierced with small round holes in a triangular pattern. • The shadow-mask tube uses three guns, grouped in a triangle or delta, responsible for the red, green, and blue components of the light output of the CRT. • The deflection system of the CRT operates on all three electron beams simultaneously, bringing all three to the same point of focus on the shadow mask. Where the three beams encounter holes in the mask, they pass through and strike the phosphor. • The phosphor is laid down very carefully in groups of three spots – one red, one green, and one blue – under each hole in the mask, so that each spot is struck only by electrons from the appropriate gun. • The effect of the mask is thus to “shadow” the spots of red phosphor from all but the red beam, and likewise for the green and blue phosphor spots. • We can therefore control the light output in each of the three component colors by modulating the beam current of the corresponding gun.
Images: Physical and Synthetic • Computer graphics generates pictures with two aims: • to create realistic images • to create images very close to “traditional” imaging methods • The usual approach: construct raster images from simple 2D entities: • Points • Lines • Polygons • Define objects based upon a 2D representation. • Because such functionality is supported by most present computer graphics systems, we will learn to create images this way, rather than expand a limited model.
Objects and Viewers • Image-Formation Process (the Two Entities): • The Object • The Viewer • The object exists in space independent of any image-formation process, and of any viewer.
(Figure: a projection model – a 3D object passes through the projection, shading, and lighting models of a synthetic camera to produce the output image)
What Now? • Both the object and the viewer exist in a 3D world; however, the image they define is 2D. • Image formation: the object plus the viewer’s specifications yield an image. • Coming up: • Chapter 2: OpenGL; build simple objects. • Chapter 9: interactive objects; objects’ relations with each other.
Objects, Viewers & Cameras • The object and the viewer exist in E3. • The image is formed: • in the human visual system (HVS) – on the retina • in the film plane, if a camera is used • Object(s) and viewer(s) live in E3; pictures live in E2. • The transformation from E3 to E2 is a projection.
Light and Images • Much information was missing from the preceding picture: • We have yet to mention light! If there were no light sources, the objects would be dark, and there would be nothing visible in our image. • We have not mentioned how color enters the picture, • or what effects different kinds of surfaces have on the image.
Lights & Images • Light sources are characterized by: • position • monochromatic / color • Without light sources the scene would look very dark and flat. • Shadows and reflections are very important for realistic perception. • Geometric optics is used for light modeling.
Imitating real life • Taking a more physical approach, we can start with the following arrangement: • The details of the interaction between light and the surfaces of the object determine how much light enters the camera.
Light properties Light is a form of electromagnetic radiation: Visible spectrum 350 – 780 nm
Trace a Ray! • Ray tracing – building an imaging model by following light from a source. • A ray is a semi-infinite line that emanates from a point and “travels” to infinity in a particular direction. • Only a portion of these rays contributes to the image on the film plane of the camera. • Surfaces may be: • diffusing • reflecting • refracting
Ray Tracing • Ray tracing is an image-formation technique that is based on these ideas and that can form the basis for producing computer-generated images. A different approach must be used: • the intensity of each pixel must be computed, and all contributions must be taken into account • a ray is “followed” in the opposite direction, from the camera into the scene; when it intersects a surface it is split into two rays • contributions from light sources and reflections from other surfaces are accumulated
The Pinhole Camera • Looks like this: • A Pinhole camera is a box • with a small hole in the center on one side, • and the film on the opposite side • And you could take pictures with it in the old days
Pinhole Camera • A box with a small hole • film plane at z = −d
Pinhole Camera • The point (x, y, z) projects to the point (xp, yp, −d) on the film plane, with xp = −x/(z/d) and yp = −y/(z/d). • The angle of view, or field, of the camera is the angle subtended by the film plane as seen from the pinhole. • The ideal pinhole camera has an infinite depth of field.
The Human Visual System • Our extremely complex visual system has all the components of a physical imaging system, such as a camera or a microscope.
Human Visual System - HVS • Rods and cones are excited by electromagnetic energy in the range 350–780 nm. • The sizes of the rods and cones determine the resolution of the HVS – our visual acuity. • The sensors in the human eye do not react uniformly to light energy at different wavelengths.
Human Visual System - HVS Courtesy of http://www.webvision.med.utah.edu/into.html
Human Visual System • The HVS responds differently to single-frequency red, green, and blue light. • The relative brightness response at different frequencies is captured by the Commission Internationale de l’Eclairage (CIE) standard observer curve. • The curve matches the sensitivity of the monochromatic sensors used in black-and-white film and video cameras. • The eye is most sensitive to GREEN.
Human Visual System • There are three different cone types in the HVS: • blue, green, and yellow – the last often reported as red for compatibility with cameras and film.
Synthetic Camera Model • A computer-generated image based on an optical system follows the synthetic camera model. • The viewer, behind the camera, can move the back of the camera – a change of the distance d, i.e., additional flexibility. • Object and viewer specifications are independent – they are different functions within a graphics library. (Figure: imaging system)
Synthetic Camera Model • The object specification is independent of the viewer specification. • In a graphics library we would expect separate functions for specifying objects and the viewer. • We can compute the image using simple trigonometric calculations. • (a) – the situation with a camera; (b) – the mathematical model, with the image plane moved in front of the camera. • Center of projection – the center of the lens. • Projection plane – the film plane.
Synthetic Camera Model • Not all objects can be seen: there is a limit due to the viewing angle. • Solution: a clipping rectangle, or clipping window, placed in front of the camera. • (a) and (b) show the case when the clipping rectangle is shifted aside – only part of the scene is projected.
Some Adjustments • Symmetry in projections • Move the image plane in front of the lens
Constraints • Clipping • We must also consider the limited size of the image.
The Programmer’s Interface • There are numerous ways for a user to interact with a graphics system using input devices – pads, mice, keyboards, etc. • Different systems use different orientations of the coordinate system: canvas versus OpenGL, etc.
Application Programmer’s Interfaces • What is an API? • Why do you want one? • API functionality should match the conceptual model • Synthetic Camera Model used for APIs like OpenGL, PHIGS, Direct 3D, Java3D, VRML etc.
Pen Plotter Model – a typical example of “sequential access.” Two basic functions for drawing:
moveto(x, y) – pen up
lineto(x, y) – pen down
moveto(0,0);
lineto(1,0); lineto(1,1); lineto(0,1); lineto(0,0); { draws a unit square }
moveto(0,1); lineto(0.5,1.866); lineto(1.5,1.866); lineto(1.5,0.866); lineto(1,0);
moveto(1,1); lineto(1.5,1.866); { completes a cube drawn in oblique projection }
Three-Dimensional APIs-Synthetic Camera Model • If we are to follow the synthetic camera model, we need functions in the API to specify: • Objects • The Viewer • Light Sources • Material Properties
Objects • Objects are defined by points or vertices, line segments, polygons, etc., combined to represent complex objects. • API primitives are those displayed rapidly on the hardware. • Usual API primitives: • points • line segments • polygons • text • The following code fragment defines a triangle in OpenGL:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
The Viewer • Camera specification in APIs: • position – usually the center of the lens • orientation – a camera coordinate system with its origin at the center of the lens; the camera can rotate around those three axes • focal length of the lens – determines the size of the image on the film (actually the viewing angle) • film plane – the camera's back has a height and a width
Application Programmer’s Interface • Two coordinate systems are used: • world coordinates, where the object is defined • camera coordinates, where the image is to be produced • A transformation converts between the coordinate systems; alternatively the camera can be specified directly, e.g.: gluLookAt(cam_x, cam_y, cam_z, look_at_x, look_at_y, look_at_z, …) and gluPerspective(field_of_view, …) • Lights – location, strength, color, directionality. • Materials – properties are attributes of objects. • The observed visual properties of objects are given jointly by the material and light properties.