
Opto-Electronics Image Processing


Presentation Transcript


  1. Opto-Electronics Image Processing 邵晓鹏 xpshao@xidian.edu.cn Office: 029-88204271 Mobile: 13571985296

  2. Image Samples

  3. References • Milan Sonka, Vaclav Hlavac, Roger Boyle, Image Processing, Analysis, and Machine Vision, Second Edition, 人民邮电出版社, 2002. • Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Second Edition, 电子工业出版社, 2002. • John C. Russ, The Image Processing Handbook, Fifth Edition, CRC Press, 2007.

  4. Course Contents • Introduction • The digitized image and its properties • Data structures for image analysis • Image pre-processing • Segmentation • Shape representation and description • Object recognition • Mathematical morphology • Texture • Image understanding

  5. 1. Introduction • Introduction to Digital Image Processing • Vision allows humans to perceive and understand the world surrounding us. • Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image. • Giving computers the ability to see is not an easy task - we live in a three-dimensional (3D) world, and when computers try to analyze objects in 3D space, available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information.

  6. In order to simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image understanding. • Low-level methods usually use very little knowledge about the content of images. • High-level processing is based on knowledge, goals, and plans of how to achieve those goals. Artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image.

  7. Low-level digital image processing • Image acquisition • An image is captured by a sensor (such as a TV camera) and digitized. • Preprocessing • The computer suppresses noise (image pre-processing) and possibly enhances some object features which are relevant to understanding the image. Edge extraction is an example of processing carried out at this stage. • Image segmentation • The computer tries to separate objects from the image background. • Object description and classification • Object description and classification in a totally segmented image is also understood as part of low-level image processing.

  8. Chapter 2 The digitized image and its properties

  9. Overview: • 2.1 Basic Concepts • Image functions • The Dirac distribution and convolution • The Fourier transform • Images as a stochastic process • 2.2 Image digitization • Sampling • Quantization • Color images • 2.3 Digital image properties • Metric and topological properties of digital images • Histograms • Visual perception of the image • Image quality • Noise in images

  10. 2.1 Basic concepts • Fundamental concepts and mathematical tools are introduced in this chapter which will be used throughout the course. • A signal is a function depending on some variable with physical meaning. • Signals can be • one-dimensional (e.g., dependent on time), • two-dimensional (e.g., images dependent on two co-ordinates in a plane), • three-dimensional (e.g., describing an object in space), • or higher-dimensional. • A scalar function may be sufficient to describe a monochromatic image, while vector functions are needed to represent, for example, color images consisting of three component colors.

  11. Image functions • The image can be modeled by a continuous function of two or three variables: • the arguments are co-ordinates x, y in a plane, and if images change in time a third variable t might be added. • The image function values correspond to the brightness at image points. • The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.). • Brightness integrates different optical quantities - using brightness as a basic quantity allows us to avoid describing the very complicated process of image formation.

  12. Image functions • The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing information about brightness points an intensity image. • The real world which surrounds us is intrinsically 3D. • The 2D intensity image is the result of a perspective projection of the 3D scene. • When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears because such a transformation is not one-to-one. • Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem.

  13. Image functions • Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision. • The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which is dependent on a number of independent factors such as • object surface reflectance properties (given by the surface material, microstructure, and marking), • illumination properties, • and object surface orientation with respect to the viewer and light source.

  14. Image functions • Some scientific and technical disciplines work with 2D images directly; for example, • an image of the flat specimen viewed by a microscope with transparent illumination, • a character drawn on a sheet of paper, • the image of a fingerprint, etc. • Many basic and useful methods used in digital image analysis do not depend on whether the object was originally 2D or 3D. • Much of the material in this class restricts itself to the study of such methods -- the problem of 3D understanding is addressed in Computer Vision classes offered by the Computer Science department.

  15. Image functions • Related disciplines are photometry, which is concerned with brightness measurement, and colorimetry, which studies light reflectance or emission depending on wavelength. • A light source energy distribution C(x, y, t, λ) depends in general on image co-ordinates (x, y), time t, and wavelength λ.

  16. Image functions • For the human eye and most technical image sensors (e.g., TV cameras) the brightness f depends on the light source energy distribution C and the spectral sensitivity of the sensor, S(λ) (dependent on the wavelength). • A monochromatic image f(x, y, t) provides the brightness distribution.

  17. Image functions • In a color or multispectral image, the image is represented by a real vector function f where, for example, there may be red, green and blue components. • Image processing often deals with static images, in which time t is constant. • A monochromatic static image is represented by a continuous image function f(x,y) whose arguments are two co-ordinates in the plane.

  18. Image functions • Computerized image processing uses digital image functions which are usually represented by matrices, so co-ordinates are integer numbers. • The customary orientation of co-ordinates in an image is in the normal Cartesian fashion (horizontal x axis, vertical y axis), although the (row, column) orientation used in matrices is also quite often used in digital image processing. • The range of image function values is also limited; by convention, in monochromatic images the lowest value corresponds to black and the highest to white. • Brightness values bounded by these limits are gray levels.
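  As a minimal illustration of this matrix representation in C (the image name f and the sizes M, N are placeholders, chosen to match Algorithm 2.1 later in this chapter):

    #define M 512   /* number of rows    (placeholder size) */
    #define N 512   /* number of columns (placeholder size) */

    /* A monochromatic digital image as a matrix of gray levels,   */
    /* indexed f[row][column]; for an 8-bit image, by convention   */
    /* 0 is black and 255 is white.                                */
    unsigned char f[M][N];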

  19. The Dirac distribution and convolution • This material can be found in the Signals and Systems course.

  20. The Fourier transform • We will discuss this topic in the next lecture.

  21. Images as a stochastic process • Images f(x,y) can be treated as deterministic functions or as realizations of stochastic processes. • Mathematical tools used in image description have roots in linear system theory, integral transformations, discrete mathematics and the theory of stochastic processes.

  22. 2.2 Image digitization • An image captured by a sensor is expressed as a continuous function f(x,y) of two co-ordinates in the plane. • Image digitization means that the function f(x,y) is sampled into a matrix with M rows and N columns. • Image quantization assigns to each continuous sample an integer value. • The continuous range of the image function f(x,y) is split into K intervals.

  23. The finer the sampling (i.e., the larger M and N) and quantization (the larger K), the better the approximation of the continuous image function f(x,y). • Two questions should be answered in connection with image function sampling: • First, the sampling period should be determined -- the distance between two neighboring sampling points in the image. • Second, the geometric arrangement of sampling points (the sampling grid) should be set.

  24. 2.2.1 Sampling • A continuous image function f(x,y) can be sampled using a discrete grid of sampling points in the plane. • The image is sampled at points x = jΔx, y = kΔy. • Two neighboring sampling points are separated by distance Δx along the x axis and Δy along the y axis. Distances Δx and Δy are called the sampling intervals, and the matrix of samples constitutes the discrete image.

  25. The ideal sampling s(x,y) in the regular grid can be represented using a collection of Dirac distributions, written out below. • The sampled image is the product of the continuous image f(x,y) and the sampling function s(x,y).
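  Written out (a reconstruction of Eq. 2.32 from the surrounding text; the slide shows the equation only as an image):

    s(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(x - j\Delta x,\; y - k\Delta y)    (2.32)

    f_s(x, y) = f(x, y)\, s(x, y)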

  26. The collection of Dirac distributions in equation 2.32 can be regarded as periodic with periods Δx, Δy and expanded into a Fourier series (assuming that the sampling grid covers the whole plane (infinite limits)) (Eq. 2.33), • where the coefficients of the Fourier expansion can be calculated as given in Eq. 2.34.
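  In standard form (reconstructed; the slides give only the equation numbers), the Fourier series and its coefficients are:

    s(x, y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} a_{mn} \exp\left[2\pi i \left(\frac{mx}{\Delta x} + \frac{ny}{\Delta y}\right)\right]    (2.33)

    a_{mn} = \frac{1}{\Delta x\,\Delta y} \int_{-\Delta x/2}^{\Delta x/2} \int_{-\Delta y/2}^{\Delta y/2} s(x, y) \exp\left[-2\pi i \left(\frac{mx}{\Delta x} + \frac{ny}{\Delta y}\right)\right] dx\, dy    (2.34)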

  27. Noting that only the term for j=0 and k=0 in the sum is nonzero in the range of integration, the coefficients are as given in Eq. 2.35. • Noting that the integral in equation 2.35 is identically equal to one, the coefficients can be expressed as given in Eq. 2.36, and Eq. 2.32 can be rewritten as Eq. 2.37. • In the frequency domain this yields Eq. 2.38.
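  Carrying these steps out (again a reconstruction in standard notation):

    a_{mn} = \frac{1}{\Delta x\,\Delta y} \int_{-\Delta x/2}^{\Delta x/2} \int_{-\Delta y/2}^{\Delta y/2} \delta(x, y) \exp\left[-2\pi i \left(\frac{mx}{\Delta x} + \frac{ny}{\Delta y}\right)\right] dx\, dy    (2.35)

    a_{mn} = \frac{1}{\Delta x\,\Delta y}    (2.36)

    s(x, y) = \frac{1}{\Delta x\,\Delta y} \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} \exp\left[2\pi i \left(\frac{mx}{\Delta x} + \frac{ny}{\Delta y}\right)\right]    (2.37)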

  28. Thus the Fourier transform of the sampled image is the sum of periodically repeated Fourier transforms F(u,v) of the image, as written out below. • Periodic repetition of the Fourier transform result F(u,v) may under certain conditions cause distortion of the image, which is called aliasing; this happens when the individual shifted components F(u,v) overlap.
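  The frequency-domain statement (Eq. 2.38, reconstructed):

    F_s(u, v) = \frac{1}{\Delta x\,\Delta y} \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} F\left(u - \frac{m}{\Delta x},\; v - \frac{n}{\Delta y}\right)    (2.38)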

  29. There is no aliasing if the image function f(x,y) has a band-limited spectrum, i.e., its Fourier transform F(u,v) = 0 outside a certain interval of frequencies: for |u| > U and |v| > V. • As you know from general sampling theory, overlapping of the periodically repeated results of the Fourier transform F(u,v) of an image with a band-limited spectrum can be prevented if the sampling interval is chosen according to Eq. 2.39, shown below.
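  The sampling condition (Eq. 2.39, reconstructed from the band limits U, V defined above):

    \Delta x \le \frac{1}{2U}, \qquad \Delta y \le \frac{1}{2V}    (2.39)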

  30. This is the Shannon sampling theorem, which has a simple physical interpretation in image analysis: the sampling interval should be chosen such that it is less than or equal to half of the smallest interesting detail in the image. • The sampling function is not the Dirac distribution in real digitizers -- narrow impulses with limited amplitude are used instead. • As a result, in real image digitizers a sampling interval about ten times smaller than that indicated by the Shannon sampling theorem is used, because the algorithms for image reconstruction use only a step function. For example, if the smallest interesting detail is 1 mm across, the theorem allows a 0.5 mm sampling interval, but a practical digitizer would use roughly 0.05 mm.

  31. Practical examples of digitization using a flatbed scanner and TV cameras help to understand the reality of sampling. [Figure: an original image with resampled versions labeled "half", "twice", and "10x".]

  32. A continuous image is digitized at sampling points. • These sampling points are ordered in the plane, and their geometric relation is called the grid. • Grids used in practice are mainly square or hexagonal (Figure 2.4).

  33. One infinitely small sampling point in the grid corresponds to one picture element (pixel) in the digital image. • The set of pixels together covers the entire image. • Pixels captured by a real digitization device have finite size. • The pixel is a unit which is not further divisible; sometimes pixels are also called points.

  34. 2.2.2 Quantization • The magnitude of the sampled image is expressed as a digital value in image processing. • The transition between continuous values of the image function (brightness) and its digital equivalent is called quantization. • The number of quantization levels should be high enough for human perception of fine shading details in the image.

  35. Most digital image processing devices use quantization into k equal intervals. • If b bits are used, the number of brightness levels is k = 2^b. • Eight bits per pixel are commonly used; specialized measuring devices use twelve and more bits per pixel.
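  A minimal sketch of uniform quantization in C (the function and parameter names are hypothetical; it assumes a brightness already normalized to [0, 1]):

    /* Quantize a continuous brightness v in [0.0, 1.0] into k = 2^b */
    /* equal intervals; returns an integer level in [0, k-1].        */
    unsigned int quantize(double v, unsigned int b)
    {
        unsigned int k = 1u << b;                   /* number of levels */
        unsigned int level = (unsigned int)(v * k);
        return level < k ? level : k - 1;           /* clamp v = 1.0    */
    }

  For example, quantize(0.5, 8) returns level 128 on an 8-bit scale of 256 levels.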

  36. 2.3 Digital image properties • Metric and topological properties of digital images • Histograms • Visual perception of the image • Image quality • Noise in images

  37. 2.3.1 Metric and topological properties of digital images • Some intuitively clear properties of continuous images have no straightforward analogy in the domain of digital images.

  38. 2.3.1.1 Metric properties of digital images • Distance is an important example. • The distance between two pixels in a digital image is a significant quantitative measure. • The distance between points with co-ordinates (i,j) and (h,k) may be defined in several different ways.

  39. Distance • The Euclidean distance (Eq. 2.42): D_E((i,j),(h,k)) = sqrt[(i-h)^2 + (j-k)^2] • The city block distance (Eq. 2.43): D_4((i,j),(h,k)) = |i-h| + |j-k| • The chessboard distance (Eq. 2.44): D_8((i,j),(h,k)) = max(|i-h|, |j-k|)
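  These three distances can be computed directly; a small sketch in C (the function names are mine, not from the text):

    #include <math.h>
    #include <stdlib.h>

    /* Euclidean distance, Eq. 2.42 */
    double d_euclidean(int i, int j, int h, int k)
    {
        double di = i - h, dj = j - k;
        return sqrt(di * di + dj * dj);
    }

    /* City block distance D4, Eq. 2.43 */
    int d_city_block(int i, int j, int h, int k)
    {
        return abs(i - h) + abs(j - k);
    }

    /* Chessboard distance D8, Eq. 2.44 */
    int d_chessboard(int i, int j, int h, int k)
    {
        int di = abs(i - h), dj = abs(j - k);
        return di > dj ? di : dj;
    }

  For the pixels (0,0) and (2,3): D_E = sqrt(13) ≈ 3.61, D_4 = 5, D_8 = 3.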

  40. Pixel adjacency is another important concept in digital images. • 4-neighborhood: the four pixels that share an edge with the given pixel. • 8-neighborhood: the eight edge- and corner-adjacent pixels (Fig. 2.6). • It will become necessary to consider important sets consisting of several adjacent pixels -- regions.
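  As a small illustration (the array names are mine), the co-ordinate offsets of the two neighborhoods:

    /* 4-neighborhood: pixels sharing an edge with (i, j). */
    static const int n4[4][2] = {
        {-1, 0}, {1, 0}, {0, -1}, {0, 1}
    };

    /* 8-neighborhood: the 4-neighbors plus the four diagonal */
    /* (corner-adjacent) pixels.                              */
    static const int n8[8][2] = {
        {-1, 0}, {1, 0}, {0, -1}, {0, 1},
        {-1, -1}, {-1, 1}, {1, -1}, {1, 1}
    };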

  41. A region is a contiguous set. • Contiguity paradoxes of the square grid ... Fig. 2.7, 2.8.

  42. One possible solution to contiguity paradoxes is to treat objects using the 4-neighborhood and background using the 8-neighborhood (or vice versa). • A hexagonal grid solves many problems of the square grid ... any point in the hexagonal raster has the same distance to all its six neighbors.

  43. The border of a region R is the set of pixels within the region that have one or more neighbors outside R ... inner borders and outer borders exist. • An edge is a local property of a pixel and its immediate neighborhood -- it is a vector given by a magnitude and direction. • The edge direction is perpendicular to the gradient direction, which points in the direction of image function growth. • Border and edge ... the border is a global concept related to a region, while an edge expresses local properties of an image function.

  44. Crack edges ... four crack edges are attached to each pixel, defined by its relation to its 4-neighbors. The direction of a crack edge is that of increasing brightness and is a multiple of 90 degrees, while its magnitude is the absolute difference between the brightnesses of the relevant pair of pixels. (Fig. 2.9)

  45. 2.3.1.2 Topological properties of digital images • Topological properties of images are invariant to rubber sheet transformations. • Stretching does not change the contiguity of the object parts and does not change the number of holes in regions. • One such image property is the Euler-Poincaré characteristic, defined as the difference between the number of regions and the number of holes in them.
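  In symbols (with R denoting the number of regions and H the number of holes, notation introduced here for brevity): θ = R - H. For example, a ring-shaped object is one region containing one hole, so θ = 0.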

  46. The convex hull is used to describe topological properties of objects. • The convex hull is the smallest region which contains the object, such that any two points of the region can be connected by a straight line all of whose points belong to the region.

  47. 2.3.2 Histograms • The brightness histogram provides the frequency of each brightness value z in the image.

  48. Algorithm 2.1 • Assign zero values to all elements of the array h. • For all pixels (x,y) of the image f, increment h(f(x,y)) by one.

    int h[256] = {0};                 /* one histogram bin per gray level */
    for (int i = 0; i < M; i++)       /* all rows                         */
        for (int j = 0; j < N; j++)   /* all columns                      */
            h[f[i][j]]++;             /* count this pixel's brightness    */

  49. Histograms may have many local maxima ... • histogram smoothing suppresses them; a sketch follows.
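  A minimal sketch of histogram smoothing by local averaging (a common simple choice; the function name, the window half-width K, and the fixed 256-bin size are assumptions of this sketch, not taken from the slides):

    /* Smooth a 256-bin histogram h into hs with a moving average of */
    /* width 2*K+1; the window shrinks near the ends of the range.   */
    void smooth_histogram(const int h[256], double hs[256], int K)
    {
        for (int z = 0; z < 256; z++) {
            int sum = 0, count = 0;
            for (int d = -K; d <= K; d++) {
                int zi = z + d;
                if (zi >= 0 && zi < 256) {   /* stay inside the histogram */
                    sum += h[zi];
                    count++;
                }
            }
            hs[z] = (double)sum / count;     /* local average */
        }
    }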
