
Formation et Analyse d’Images Session 3


Presentation Transcript


  1. Formation et Analyse d’Images Session 3 Daniela Hall 3 October 2005

  2. Course Overview • Session 1 (19/09/05) • Overview • Human vision • Homogeneous coordinates • Camera models • Session 2 (26/09/05) • Tensor notation • Image transformations • Homography computation • Session 3 (3/10/05) • Camera calibration • Reflection models • Color spaces • Session 4 (10/10/05) • Pixel based image analysis • The course on 17/10/05 is replaced by Modelisation surfacique

  3. Course overview • Session 5 + 6 (24/10/05) 9:45 – 12:45 • Kalman filter • Tracking of regions, pixels, and lines • Session 7 (7/11/05) • Gaussian filter operators • Session 8 (14/11/05) • Scale Space • Session 9 (21/11/05) • Contrast description • Hough transform • Session 10 (5/12/05) • Stereo vision • Session 11 (12/12/05) • Epipolar geometry • Session 12 (16/01/06): exercises and questions

  4. Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces

  5. Bilinear interpolation • The bilinear approach computes the weighted average of the four neighbouring pixels. • The pixels are weighted according to the area. • (Figure: sample point Ps surrounded by pixels Ps0, Ps1, Ps2, Ps3; the four areas are labelled A, B, C, D.)
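
A minimal sketch of the computation, assuming a greyscale numpy image indexed as img[row, col] (the function name is hypothetical):

```python
import numpy as np

def bilinear(img, x, y):
    """Interpolate the intensity at a real-valued position (x, y).

    Each of the four neighbouring pixels is weighted by the area of
    the sub-rectangle opposite it; the four weights sum to 1.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])
```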

  6. Camera calibration • Assuming that the camera performs an exact perspective projection, the image formation process can be expressed as a projective mapping from R3 to R2: P_I = M_IS P_S • Camera calibration is the process of estimating M_IS from a set of point correspondences R_S ↔ P_I. • Advantage: the intrinsic and extrinsic camera parameters do not need to be known; they are estimated automatically. Ref: CVonline, LOCAL_COPIES/MOHR_TRIGGS/node16.html
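
As a sketch of the mapping (function name hypothetical), applying M_IS to a homogeneous scene point and dividing out the projective scale:

```python
import numpy as np

def project(M_IS, P_S):
    """Map a homogeneous scene point P_S = (x, y, z, 1) to image
    coordinates (i, j) via the 3x4 projection matrix M_IS."""
    p = M_IS @ np.asarray(P_S, dtype=float)
    return p[0] / p[2], p[1] / p[2]   # divide out the projective scale
```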

  7. Calibration • Construct a calibration object whose 3D positions are known. • Measure the image coordinates. • Determine the correspondences between 3D points R_S^k and image points P_I^k. • M_IS has 3×4 = 12 entries defined only up to scale, so we have 11 DoF; each correspondence provides 2 equations, so we need at least 5 ½ correspondences.

  8. Calibration • For each correspondence between scene point R_S^k = (x_k, y_k, z_k, 1)^T and image point P_I^k = (i_k, j_k), the projection P_I = M_IS R_S gives • i_k = (m_11 x_k + m_12 y_k + m_13 z_k + m_14) / (m_31 x_k + m_32 y_k + m_33 z_k + m_34) • j_k = (m_21 x_k + m_22 y_k + m_23 z_k + m_24) / (m_31 x_k + m_32 y_k + m_33 z_k + m_34) • which gives the equations for k = 1, ..., 6, from which M_IS can be computed

  9. Calibration using many points • For k = 5 ½ correspondences, M has exactly one solution (up to scale). • The solution depends on precise measurement of the 3D and 2D points. • If you use another 5 ½ points you will get a different solution. • A more stable solution is found by using a large number of points and doing an optimisation.

  10. Calibration using many points • For each point correspondence we know (i, j) and R = (x, y, z, 1)^T; we want to find M_IS. • Writing m_1, m_2, m_3 for the rows of M_IS, each correspondence yields two linear equations in its entries: m_1 · R − i (m_3 · R) = 0 and m_2 · R − j (m_3 · R) = 0. • Stack the equations for all correspondences and solve with your favourite algorithm (least squares, Levenberg-Marquardt, SVD, ...), as sketched below.
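
A minimal sketch of the SVD variant (the function name and array layout are assumptions):

```python
import numpy as np

def calibrate(scene_pts, image_pts):
    """Estimate the 3x4 matrix M_IS from n >= 6 correspondences.

    scene_pts: (n, 3) array of 3D points; image_pts: (n, 2) array of (i, j).
    Builds the 2n x 12 system from the two equations per correspondence
    and takes the homogeneous least-squares solution via SVD.
    """
    rows = []
    for (x, y, z), (i, j) in zip(scene_pts, image_pts):
        R = [x, y, z, 1.0]
        rows.append(R + [0.0] * 4 + [-i * c for c in R])   # m1.R - i (m3.R) = 0
        rows.append([0.0] * 4 + R + [-j * c for c in R])   # m2.R - j (m3.R) = 0
    A = np.asarray(rows)
    # The solution is the right singular vector of the smallest singular
    # value; ||m|| = 1 fixes the arbitrary projective scale.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```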

  11. Estimation of M_IS • When the intrinsic (Ci, Cj, Di, Dj, F) and extrinsic (3D camera position and orientation) camera parameters are known, compute M_IS directly. • If one parameter is not precisely known, or you want a stable estimate of M_IS, do the calibration with a large number of points.
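
The slide's explicit formula was a figure that did not survive extraction. A standard reconstruction (an assumption, with the extrinsics expressed as a scene-to-camera rotation R and translation t) is M_IS = K [R | t]:

```python
import numpy as np

def build_M_IS(F, Di, Dj, Ci, Cj, R, t):
    """Compose M_IS = K [R | t] from the slide's intrinsic parameters.

    F: focal length; Di, Dj: pixel scale factors; Ci, Cj: image centre;
    R: 3x3 rotation, t: 3-vector translation (scene to camera).
    """
    K = np.array([[F * Di, 0.0,    Ci],
                  [0.0,    F * Dj, Cj],
                  [0.0,    0.0,    1.0]])
    return K @ np.hstack([np.asarray(R), np.asarray(t).reshape(3, 1)])
```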

  12. Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces

  13. Light • (Figure: a surface with normal N; the incoming light makes angle i with N, the camera makes angle e with N, and g is the angle between light and camera.) • N: surface normal • i: angle between the incoming light and the normal • e: angle between the normal and the camera • g: angle between the light and the camera

  14. Spectrum • A light source is characterised by its spectrum. • The spectrum gives the quantity of photons per frequency. • A frequency is usually described by its wavelength λ. • The visible spectrum runs from 380 nm to 720 nm. • Cameras can see a wider spectrum, depending on their CCD chip.

  15. Albedo • (Figure: the same light/normal/camera geometry as on slide 13.) • Albedo is the fraction of light that is reflected by a body or surface. • Reflectance function: R(i, e, g, λ), the fraction of the incident light of wavelength λ that is reflected for the geometry (i, e, g).

  16. Reflectance functions • Specular reflection • example: a mirror • Lambertian reflection • diffuse reflection; examples: paper, snow

  17. Specular reflection • (Figure: the incoming ray is mirrored about the normal N, so the reflected ray leaves at e = i.)

  18. Lambertian reflection
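
The slide's formula is in a missing figure; Lambert's law states that the reflected intensity depends only on cos i = n · l, independent of the viewing angle e. A minimal sketch under that assumption:

```python
import numpy as np

def lambertian(albedo, n, light_dir):
    """Reflected intensity under Lambert's law: proportional to
    cos i = n . l, independent of the viewing angle e."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    l = np.asarray(light_dir, float) / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(n @ l))   # clamp: no light from behind
```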

  19. Dichromatic reflectance model • The reflected light R is the sum of the light reflected at the surface, R_S, and the light reflected from the material body, R_L: R = R_S + R_L • R_S has the same spectrum as the light source • The spectrum of R_L is "filtered" by the material (photons are absorbed, which changes the emitted light) • The luminance depends on the surface orientation • The spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.

  20. Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces

  21. Color perception • The retina is composed of rods and cones. • Rods - provide "scotopic" or low intensity vision. • Provide our night vision ability for very low illumination, • Are a thousand times more sensitive to light than cones, • Are much slower to respond to light than cones, • Are distributed primarily in the periphery of the visual field.

  22. Color perception • Cones - provide "photopic" or high acuity vision. • Provide our day vision, • Produce high resolution images, • Determine overall brightness or darkness of images, • Provide our color vision, by means of three types of cones: • "L" or red, long wavelength sensitive, • "M" or green, medium wavelength sensitive, • "S" or blue, short wavelength sensitive. • Cones enable our day vision and color vision. Rods take over in low illumination. However, rods cannot detect color which is why at night we see in shades of gray. • source: http://www.hf.faa.gov/Webtraining/VisualDisplays/

  23. Color perception • Rod sensitivity: peak at 498 nm. • Cone sensitivity: red or "L" cones peak at 564 nm; green or "M" cones peak at 533 nm; blue or "S" cones peak at 437 nm. • (The accompanying diagram shows the wavelength sensitivities of the different cones and the rods; note the overlap in sensitivity between the green and red cones.)

  24. Camera sensitivity • (Figure: spectral sensitivity curves of CCD and vidicon sensors, plotted against λ from 400 to 1000 nm.) • The observed light intensity depends on: • the source spectrum S(λ) • the reflectance of the observed point (i, j): P(i, j, λ) • the receptive spectrum of the camera: c(λ) • p(i, j) = p0 ∫ S(λ) P(i, j, λ) c(λ) dλ, where p0 is the gain
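
A discretised version of the integral above (a sketch; the sampling grid and function name are assumptions):

```python
import numpy as np

def observed_intensity(p0, wavelengths, S, P, c):
    """Discretised p(i,j) = p0 * integral of S(l) P(i,j,l) c(l) dl.

    wavelengths: sampling grid in nm; S, P, c: the source spectrum, the
    point's reflectance, and the camera sensitivity on that grid.
    """
    return p0 * np.trapz(S * P * c, wavelengths)
```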

  25. Classical RGB camera • The filters follow a convention of the International Commission on Illumination (CIE). • They are functions of λ: r(λ), g(λ), b(λ). • They are close to the sensitivities of the human color vision system.

  26. Color pixels

  27. Color bands (channels) • It is not possible to perceive the spectrum directly. • Color is a projection of the spectrum onto the spectra of the sensors. • Humans (and cameras) probe the spectrum at 3 positions.

  28. Session Overview • Camera calibration • Light • Reflection models • Human color perception • Color spaces

  29. Color spaces • RGB color space • CMY color space • YIQ color space • HLS color space

  30. RGB color space • A CCD camera provides RGB images • The luminance axis is r = g = b (the diagonal) • Each axis has 256 (8-bit) different values • Number of RGB colors: 256^3 = 16,777,216

  31. Hering color space • Opponent color space • Obtained from RGB space by a linear transformation. • Axes: luminance, C1 (red – green), C2 (red + green – blue)
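
A sketch of the transformation under one common opponent-space convention; the slide only names the axes, so the exact weights are an assumption:

```python
import numpy as np

RGB_TO_HERING = np.array([[1.0,  1.0,  1.0],    # L  = R + G + B
                          [1.0, -1.0,  0.0],    # C1 = R - G
                          [1.0,  1.0, -2.0]])   # C2 = R + G - 2B

def rgb_to_hering(rgb):
    """Map an (R, G, B) triple to the opponent axes (L, C1, C2)."""
    return RGB_TO_HERING @ np.asarray(rgb, dtype=float)
```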

  32. CMY color space • Cyan, magenta, yellow • CMYK: CMY plus a black (K) channel
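
A sketch of the standard conversion (not given on the slide) for RGB values normalised to [0, 1]:

```python
def rgb_to_cmyk(r, g, b):
    """RGB in [0, 1] to CMYK. CMY is the subtractive complement of RGB;
    K extracts the common grey component shared by all three inks."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)
    if k == 1.0:                       # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k
```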

  33. YIQ color space • YIQ is obtained from RGB by a linear transformation that approximates: • Y: luminance • I: red – cyan • Q: magenta – green • Used by US TVs (NTSC coding). Black-and-white TVs display only the Y channel.
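
The slide's matrix did not survive extraction; the standard NTSC coefficients (from the NTSC specification, not the slide) are:

```python
import numpy as np

RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],    # Y: luminance
                       [0.596, -0.274, -0.322],    # I: red - cyan axis
                       [0.211, -0.523,  0.312]])   # Q: magenta - green axis

def rgb_to_yiq(rgb):
    """Black-and-white TVs display only the first output, Y."""
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)
```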

  34. HLS space • Hue, luminance, saturation space. • L = R + G + B • S = 1 − 3·min(R, G, B)/L • (Figure: the HLS solid with axes L (luminance), T (hue), S (saturation).)
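
A sketch of the conversion: L and S exactly as defined on the slide; the hue formula is the usual geometric one and is an assumption (the slide leaves it implicit):

```python
import numpy as np

def rgb_to_hls(r, g, b):
    """Convert an RGB triple to (hue in degrees, luminance, saturation)."""
    L = r + g + b
    S = 1.0 - 3.0 * min(r, g, b) / L if L > 0 else 0.0
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    H = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))) if den > 0 else 0.0
    if b > g:                          # hue is an angle on the color circle
        H = 360.0 - H
    return H, L, S
```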

  35. Influence of color spaces on image analysis • According to the dichromatic reflectance model: • the luminance depends on the surface orientation, • the spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material. • In HLS space, luminance is separated from chrominance. For object recognition that is robust to changes in the light source direction, use only the chrominance plane for identification. • In RGB space, changes in luminance influence all 3 channels, so the above technique cannot be used directly (do a transformation to Hering space first).
