Formation et Analyse d’Images, Session 3. Daniela Hall, 3 October 2005
Course Overview • Session 1 (19/09/05) • Overview • Human vision • Homogeneous coordinates • Camera models • Session 2 (26/09/05) • Tensor notation • Image transformations • Homography computation • Session 3 (3/10/05) • Camera calibration • Reflection models • Color spaces • Session 4 (10/10/05) • Pixel based image analysis • The 17/10/05 course is replaced by Modélisation surfacique
Course overview • Session 5 + 6 (24/10/05) 9:45 – 12:45 • Kalman filter • Tracking of regions, pixels, and lines • Session 7 (7/11/05) • Gaussian filter operators • Session 8 (14/11/05) • Scale Space • Session 9 (21/11/05) • Contrast description • Hough transform • Session 10 (5/12/05) • Stereo vision • Session 11 (12/12/05) • Epipolar geometry • Session 12 (16/01/06): exercises and questions
Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces
Bi-linear interpolation • The bilinear approach computes the weighted average of the four neighboring pixels. The pixels are weighted according to the area. (Figure: sample point Ps surrounded by the four pixels Ps0, Ps1, Ps2, Ps3, with weight areas A, B, C, D.)
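A minimal sketch of this weighting scheme, assuming a grayscale image stored as a NumPy array indexed as img[row, col] and a sample position strictly inside the image:

```python
import numpy as np

def bilinear_interpolate(img, x, y):
    """Sample the image at the sub-pixel position (x, y) as the
    area-weighted average of the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0          # fractional offsets in the pixel cell
    # Each neighbour is weighted by the area of the opposite sub-rectangle.
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0]
            + dx * dy * img[y1, x1])
```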
Camera calibration • Assuming that the camera performs an exact perspective projection, the image formation process can be expressed as a projective mapping from R3 to R2: PI = MIS PS • Camera calibration: the process of estimating MIS from a set of point correspondences PS ↔ PI. • Advantage: the intrinsic and extrinsic camera parameters don't need to be known; they are estimated automatically. Ref: CVonline, LOCAL_COPIES/MOHR_TRIGGS/node16.html
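For concreteness, a tiny sketch of this mapping; the matrix values below are invented for illustration only:

```python
import numpy as np

# Hypothetical projection matrix MIS (values invented for illustration).
M = np.array([[800.0,   0.0, 320.0, 0.0],
              [  0.0, 800.0, 240.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

P_S = np.array([0.1, -0.2, 2.0, 1.0])    # homogeneous scene point
P_I = M @ P_S                            # homogeneous image point
i, j = P_I[0] / P_I[2], P_I[1] / P_I[2]  # divide out the projective scale
print(i, j)                              # pixel coordinates
```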
Calibration • Construct a calibration object whose 3D geometry is known. • Measure the image coordinates. • Determine the correspondences between 3D points RSk and image points PIk. • MIS has 12 entries but is only defined up to scale, so there are 11 DoF; each correspondence gives 2 equations, so we need at least 5½ correspondences.
Calibration • For each correspondence between scene point RSk = (xk, yk, zk, 1)T and image point PIk = (ik, jk), the projection PIk = MIS RSk gives two equations: ik = (m11 xk + m12 yk + m13 zk + m14) / (m31 xk + m32 yk + m33 zk + m34) and jk = (m21 xk + m22 yk + m23 zk + m24) / (m31 xk + m32 yk + m33 zk + m34) • Writing these for k = 1, ..., 6 gives 12 equations, from which MIS can be computed.
Calibration using many points • For k = 5½ correspondences, M has exactly one solution. • This solution depends on precise measurements of the 3D and 2D points. • If you use another 5½ points, you will get a different solution. • A more stable solution is found by using a large number of points and doing an optimisation.
Calibration using many points • For each point correspondence we know (i, j) and R = (x, y, z, 1)T. • We want to know MIS. Multiplying out the two projection equations per correspondence gives linear equations in the 12 entries of MIS, which stack into a homogeneous system A m = 0. • Solve this system with your favorite algorithm (least squares, Levenberg-Marquardt, SVD, ...).
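A minimal sketch of this linear estimation (a DLT-style solve, assuming at least 6 correspondences passed as NumPy arrays); the null-space vector of A is the right singular vector with the smallest singular value:

```python
import numpy as np

def calibrate_dlt(scene_pts, image_pts):
    """Estimate the 3x4 projection matrix MIS from correspondences.

    scene_pts: (n, 3) array of 3D points, n >= 6
    image_pts: (n, 2) array of corresponding (i, j) image points
    """
    rows = []
    for (x, y, z), (i, j) in zip(scene_pts, image_pts):
        # Two linear equations per correspondence in the 12 entries of M.
        rows.append([x, y, z, 1, 0, 0, 0, 0, -i*x, -i*y, -i*z, -i])
        rows.append([0, 0, 0, 0, x, y, z, 1, -j*x, -j*y, -j*z, -j])
    A = np.asarray(rows)
    # The solution of A m = 0 (up to scale) is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```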
Estimation of MIS • When the intrinsic (Ci, Cj, Di, Dj, F) and extrinsic (3D camera position and orientation) parameters are known, MIS can be computed directly as the product of the intrinsic projection matrix and the extrinsic rotation-translation matrix. • If one parameter is not precisely known, or you wish a stable estimate of MIS, do calibration with a large number of points.
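A sketch of this direct construction, assuming (Ci, Cj) is the image center in pixels, (Di, Dj) the pixel dimensions, and F the focal length; this intrinsic parameterization is an assumption, the course's exact matrix may differ:

```python
import numpy as np

def projection_matrix(F, Di, Dj, Ci, Cj, R, t):
    """Build MIS = K [R | t] from known camera parameters.

    R: 3x3 rotation (scene -> camera), t: translation 3-vector.
    """
    # Intrinsics: focal length in pixel units plus image center
    # (assumed parameterization, see lead-in above).
    K = np.array([[F / Di, 0.0,    Ci],
                  [0.0,    F / Dj, Cj],
                  [0.0,    0.0,    1.0]])
    Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    return K @ Rt  # 3x4 projection matrix
```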
Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces
Light • (Figure: light source, surface normal N, and camera, with angles i, e, g.) • N: surface normal • i: angle between incoming light and normal • e: angle between normal and camera • g: angle between light and camera
Spectrum • A light source is characterised by its spectrum. • The spectrum gives the quantity of photons per frequency. • A frequency is usually described by its wavelength λ. • The visible spectrum ranges from 380 nm to 720 nm. • Cameras can see a larger spectrum, depending on their CCD chip.
Albedo • Albedo is the fraction of incident light that is reflected by a body or surface. • Reflectance function: the reflected light is described as a function of the viewing geometry and the wavelength, r(i, e, g, λ).
Reflectance functions • Specular reflection • example mirror • Lambertian reflection • diffuse reflection, example paper, snow
Specular reflection • (Figure: specular reflection geometry: the incoming ray at angle i is mirrored about the normal N towards the camera at angle e, with angle g between light and camera.)
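A small sketch contrasting the two reflectance functions, assuming unit vectors and a Phong-style specular lobe (the specular formula is a common stand-in for an ideal mirror, not necessarily the model used in the course):

```python
import numpy as np

def lambertian(albedo, n, l):
    """Diffuse reflection: intensity depends only on the angle i
    between the light direction l and the surface normal n."""
    return albedo * max(0.0, float(np.dot(n, l)))

def specular_phong(n, l, v, shininess=32):
    """Mirror-like reflection: strongest when the view direction v
    lines up with the mirror direction of l about n (Phong model,
    an assumed stand-in for an ideal mirror)."""
    r = 2.0 * np.dot(n, l) * n - l       # mirror direction of l about n
    return max(0.0, float(np.dot(r, v))) ** shininess

n = np.array([0.0, 0.0, 1.0])            # surface normal
l = np.array([0.0, 0.6, 0.8])            # towards the light
v = np.array([0.0, -0.6, 0.8])           # towards the camera (mirror of l)
print(lambertian(0.9, n, l), specular_phong(n, l, v))
```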
Dichromatic reflectance model • The reflected light R is the sum of the light reflected at the surface RS and the light reflected from the material body RL: R = RS + RL • RS has the same spectrum as the light source. • The spectrum of RL is "filtered" by the material (photons are absorbed, which changes the emitted light). • Luminance depends on surface orientation. • The spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.
Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces
Color perception • The retina is composed of rods and cones. • Rods - provide "scotopic" or low intensity vision. • Provide our night vision ability for very low illumination, • Are a thousand times more sensitive to light than cones, • Are much slower to respond to light than cones, • Are distributed primarily in the periphery of the visual field.
Color perception • Cones - provide "photopic" or high acuity vision. • Provide our day vision, • Produce high resolution images, • Determine overall brightness or darkness of images, • Provide our color vision, by means of three types of cones: • "L" or red, long wavelength sensitive, • "M" or green, medium wavelength sensitive, • "S" or blue, short wavelength sensitive. • Cones enable our day vision and color vision. Rods take over in low illumination. However, rods cannot detect color which is why at night we see in shades of gray. • source: http://www.hf.faa.gov/Webtraining/VisualDisplays/
Color perception • Rod sensitivity: peak at 498 nm. • Cone sensitivity: red or "L" cones peak at 564 nm; green or "M" cones peak at 533 nm; blue or "S" cones peak at 437 nm. • (Diagram: wavelength sensitivities of the different cones and the rods; note the overlap in sensitivity between the green and red cones.)
Camera sensitivity • (Figure: receptive spectra S(λ) of CCD and vidicon sensors over 400 to 1000 nm.) • The observed light intensity depends on: • the source spectrum S(λ) • the reflectance of the observed point (i,j): P(i,j,λ) • the receptive spectrum of the camera: c(λ) • p(i,j) = p0 ∫ S(λ) P(i,j,λ) c(λ) dλ, where p0 is the gain.
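A sketch of this sensor model as a discrete sum over sampled wavelengths; the spectra below are invented placeholder shapes, real curves would come from measurements:

```python
import numpy as np

# Wavelength samples over the sensor's receptive range (nm).
lam = np.linspace(400, 1000, 121)

# Placeholder spectra (invented Gaussian shapes for illustration).
S = np.exp(-((lam - 600) / 150) ** 2)   # light source spectrum S(lambda)
P = np.exp(-((lam - 550) / 80) ** 2)    # reflectance of the observed point
c = np.exp(-((lam - 700) / 200) ** 2)   # receptive spectrum of the camera
p0 = 1.5                                # sensor gain

# p = p0 * integral of S * P * c, approximated by the trapezoid rule.
p = p0 * np.trapz(S * P * c, lam)
print(p)
```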
Classical RGB camera • The filters follow a convention of the International Illumination Commission (CIE). • They are functions of λ: r(λ), g(λ), b(λ). • They are close to the sensitivity of the human color vision system.
Color bands (channels) • It is not possible to perceive the spectrum directly. • Color is a projection of the spectrum to the spectrum of the sensors. • Humans (and cameras) probe the spectrum at 3 positions.
Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces
Color spaces • RGB color space • CMY color space • YIQ color space • HLS color space
RGB color space • A CCD camera provides RGB images. • The luminance axis is r = g = b (the diagonal). • Each axis has 256 (8-bit) different values. • Number of RGB colors: 256³ = 16,777,216
Hering color space • Opponent color space. • Obtained from RGB space by a linear transformation. • Axes: luminance, C1 (red-green), C2 (red+green-blue)
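A sketch of one such transformation; the weights below follow the slide's axis labels, but the exact coefficients used in the course may differ:

```python
import numpy as np

# RGB -> opponent (Hering) transform following the slide's labels;
# the exact coefficients are an assumption.
M_opp = np.array([[1.0,  1.0,  1.0],   # L  = R + G + B (luminance)
                  [1.0, -1.0,  0.0],   # C1 = R - G (red-green)
                  [1.0,  1.0, -1.0]])  # C2 = R + G - B (red+green-blue)

def rgb_to_hering(rgb):
    return M_opp @ np.asarray(rgb, dtype=float)

print(rgb_to_hering([0.8, 0.4, 0.2]))  # -> [L, C1, C2]
```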
CMY color space • Cyan, magenta, yellow • CMYK: CMY + black color channel
YIQ color space • An approximation of an opponent coding: • Y: luminance • I: red - cyan • Q: magenta - green • Used in US TVs (NTSC coding). Black-and-white TVs display only the Y channel.
HLS space • Hue, luminance, saturation space. • L = R + G + B • S = 1 - 3·min(R,G,B)/L • (Figure: HLS coordinates: luminance axis L, hue angle T, saturation S.)
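A sketch using the slide's formulas for L and S; the hue formula below is one standard choice and is an assumption, since the slide does not give it:

```python
import numpy as np

def rgb_to_hls(r, g, b):
    """Convert an RGB triple to (H, L, S) as defined on the slide.

    L and S follow the slide's formulas; the hue angle H uses a
    standard formula (assumption: not given on the slide).
    """
    L = r + g + b                       # luminance: L = R + G + B
    S = 1.0 - 3.0 * min(r, g, b) / L    # saturation: S = 1 - 3 min/L
    # Hue as an angle in the chrominance plane (one common definition).
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    H = np.degrees(np.arccos(num / den)) if den > 0 else 0.0
    if b > g:                           # hue lives on a 0-360 degree circle
        H = 360.0 - H
    return H, L, S

print(rgb_to_hls(0.8, 0.4, 0.2))
```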
Influence of color spaces for image analysis • According to the dichromatic reflectance model: • luminance depends on surface orientation, • the spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material. • In HLS space, luminance is separated from chrominance. For object recognition robust to changes in light source direction, use only the chrominance plane for identification, as sketched below. • In RGB space, changes in luminance influence all 3 channels, so this technique cannot be used directly (do a transformation to Hering space first).
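A minimal sketch of this idea: describe an object by a normalised hue-saturation histogram and drop luminance entirely (bin counts and value ranges are arbitrary choices for illustration):

```python
import numpy as np

def chrominance_histogram(hues, sats, n_bins=16):
    """2D hue-saturation histogram of an object's pixels.

    Dropping luminance makes the descriptor change little when the
    light source direction (and hence the luminance) changes.
    """
    hist, _, _ = np.histogram2d(hues, sats, bins=n_bins,
                                range=[[0.0, 360.0], [0.0, 1.0]])
    return hist / hist.sum()            # normalise to compare objects

# Example with synthetic chrominance values for 500 object pixels.
rng = np.random.default_rng(0)
hist = chrominance_histogram(rng.uniform(10, 40, 500),
                             rng.uniform(0.4, 0.8, 500))
```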