
Chapter 4


Presentation Transcript


  1. Chapter 4 Edge/line detection and segmentation

  2. CONTENT • Edge/Line Detection • Gradient Operators • Roberts Operator • Sobel Operator • Prewitt Operator • Laplacian Operators • Segmentation • Region Growing and Shrinking • Clustering Techniques • Boundary Detection • Compass Masks • Kirsch compass masks • Robinson compass masks

  3. Edge/Line Detection • Edge detection operators are often implemented with convolution masks. • ED operators are often discrete approximations to differential operators. • ED operators may return magnitude and direction information; some return magnitude only. • The Hough transform is used for line finding, but it can be extended to find arbitrary shapes. • Edge directions and lines are perpendicular to each other, because the edge direction is the direction of change in gray level.
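To make the convolution-mask idea concrete: applying a 3×3 mask M to an image I at row r and column c produces the sum of products over the neighborhood (written here in the correlation form; a true convolution flips the mask, which makes no difference for symmetric masks):

\[
g(r,c) = \sum_{i=-1}^{1} \sum_{j=-1}^{1} M(i,j)\, I(r+i,\, c+j)
\]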

  4. Figure 4.2-1 Edges and lines are perpendicular. The line shown here is vertical and the edge direction is horizontal. In this case the transition from black to white occurs along a row; this is the edge direction, but the line is vertical, along a column.

  5. Edge/Line Detection (cont’d) • To deal with noise, a trade-off must be made between sensitivity and accuracy in edge detection (Figure 4.2-2). • Potential edge points are found by examining the relationship a pixel has with its neighbors; an edge implies a change in gray level. • Edges may exist anywhere and be defined by color, texture, shadow, etc., and may not necessarily separate real-world objects. • A real edge in an image tends to change slowly, compared to the ideal edge model, which is abrupt (Figure 4.2-4).

  6. TYPES OF EDGES • Variation of Intensity / Gray Level • Step Edge • Ramp Edge • Line Edge • Roof Edge

  7. Steps in Edge Detection • Filtering – Filter the image to improve the performance of the edge detector with respect to noise • Enhancement – Emphasize pixels having a significant change in local intensity • Detection – Identify edge points, e.g. by thresholding • Localization – Locate the edge accurately and estimate the edge orientation

  8. Figure 4.2-2 Noise in images requires trade-offs between sensitivity and accuracy for edge detectors. a) noisy image, b) edge detector too sensitive: many edge points found that are attributable to noise, c) edge detector not sensitive enough: loss of valid edge points, d) reasonable result obtained by a compromise between sensitivity and accuracy; noise may be further mitigated by postprocessing

  9. Roberts operator • A simple approximation to the first derivative. • Marks edge points only; does not return any information about the edge orientation. • The simplest of the edge detection operators; it works best with binary images. • There are two forms of the Roberts operator. • The first consists of the square root of the sum of the squared differences of the diagonal neighbors (see the equation below).
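The equation itself appears only as an image in the original slide; with I(r,c) denoting the gray level at row r and column c, the first form is usually written as (the exact pixel indexing may differ slightly from the slide):

\[
\sqrt{\big[I(r,c) - I(r-1,c-1)\big]^2 + \big[I(r,c-1) - I(r-1,c)\big]^2}
\]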

  10. The second form of the Roberts operator is the sum of the magnitudes of the differences of the diagonal neighbors. • It is often used in practice due to its computational efficiency: it is typically faster for a computer to find an absolute value than to find a square root.
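Again the formula is an image in the slide; the second form is usually written as:

\[
\big|I(r,c) - I(r-1,c-1)\big| + \big|I(r,c-1) - I(r-1,c)\big|
\]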

  11. Roberts Operator - Example • The output image has been scaled by a factor of 5 • Spurious dots indicate that the operator is susceptible to noise The primary disadvantage of the Roberts operator is its high sensitivity to noise, because very few pixels are used to approximate the gradient.

  12. Sobel operator • Approximates the gradient by using a row mask and a column mask, which approximate the first derivative in each direction. • The Sobel edge detection masks look for edges in both the horizontal and vertical directions, and then combine this information into a single metric. [Mask images in the slide: vertical edge (Gy) and horizontal edge (Gx); see the reconstruction below]
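The mask images are not reproduced in the transcript. The standard Sobel masks, using the slide's labeling of Gx as the horizontal-edge (row) mask and Gy as the vertical-edge (column) mask, are (sign conventions vary between texts):

\[
G_x = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
\qquad
G_y = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}
\]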

  13. These masks are each convolved with the image. • At each pixel location we now have two numbers: s1, corresponding to the result from the vertical edge mask, and s2, from the horizontal edge mask. • We use these numbers to compute two metrics, the edge magnitude and the edge direction, defined as follows:
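The definitions referred to are images in the slide; the usual forms are (whether s1 or s2 appears in the numerator of the arctangent depends on the text's convention):

\[
\text{edge magnitude} = \sqrt{s_1^2 + s_2^2},
\qquad
\text{edge direction} = \tan^{-1}\!\left(\frac{s_1}{s_2}\right)
\]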

  14. The Gx mask highlights edges in the horizontal direction, while the Gy mask highlights edges in the vertical direction. After taking the magnitude of both, the resulting output detects edges in both directions.

  15. Sobel Operator - Example • Compare the output of the Sobel operator with that of the Roberts operator: • The spurious edges are still present, but they are relatively less intense compared to genuine lines • The Roberts operator has missed a few edges • The Sobel operator detects thicker edges [Figure: outputs of the Sobel (top) and Roberts operators]

  16. Prewitt • Similar to the Sobel, but with different mask coefficients. • The masks are each convolved with the image. • At each pixel location we find two numbers: p1, corresponding to the result from the vertical edge mask, and p2, from the horizontal edge mask. • We use these results to determine two metrics, the edge magnitude and the edge direction.

  17. [Prewitt mask images in the slide: vertical edge mask and horizontal edge mask; see the reconstruction below]
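The Prewitt masks themselves are images in the slide; the standard forms are:

\[
\text{vertical edge: } \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}
\qquad
\text{horizontal edge: } \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}
\]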

  18. As with the Sobel edge detector, the direction lies 90 degrees from the apparent direction of the line or curve. • The Prewitt is easier to calculate than the Sobel, since the only coefficients are 1s, which makes it easier to implement in hardware. • However, the Sobel is defined to place emphasis on the pixels closer to the mask center, which may be desirable for some applications.

  19. Gradient (First Order Derivative) Methods - Summary • Noise – simple edge detectors are affected by noise; filters can be used to reduce it • Edge thickness – the edge is several pixels wide for the Sobel operator, so the edge is not localized properly • The Roberts operator is very sensitive to noise • The Sobel operator performs averaging and emphasizes the pixels closer to the center of the mask; it is less affected by noise and is one of the most popular edge detectors.

  20. Laplacian operators • The three Laplacian masks presented below represent different approximations of the Laplacian, which is the two dimensional version of the second derivative. • Unlike the Sobel and Prewitt edge detection masks, the Laplacian masks are rotationally symmetric, which means edges at all orientations contribute to the result.
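The three masks are shown only as images in the slide; commonly used 3×3 Laplacian approximations (which may differ from the slide's set in sign or scale) include:

\[
\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}
\quad
\begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix}
\quad
\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}
\]

Note that each mask's coefficients sum to zero, so a constant (flat) region produces no response.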

  21. They are applied by selecting one mask and convolving it with the image. • The sign of the result (positive or negative) from two adjacent pixel locations provides directional information, and tells us which side of the edge is brighter.

  22. Compass Masks • The Kirsch and Robinson edge detection masks are called compass masks since they are defined by taking a single mask and rotating it to the eight major compass orientations: North, Northwest, West, Southwest, South, Southeast, East, and Northeast.

  23. Kirsch Compass Masks • Taking a single mask and rotating it to 8 major compass orientations: N, NW, W, SW, S, SE, E, and NE. • The edge magnitude = The maximum value found by the convolution of each mask with the image. • The edge direction is defined by the mask that produces the maximum magnitude.

  24. Kirsch Compass Masks (Cont.) • The Kirsch masks are defined as follows: • EX: If NE produces the maximum value, then the edge direction is Northeast
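The mask table is an image in the slide; a representative Kirsch mask (the one often labeled North) uses coefficients 5 and -3 around a zero center:

\[
\begin{bmatrix} 5 & 5 & 5 \\ -3 & 0 & -3 \\ -3 & -3 & -3 \end{bmatrix}
\]

The other seven masks rotate the 5/-3 pattern through the remaining compass directions; which index (k0, k1, ...) corresponds to which direction follows the slide's own convention, which is not reproduced here.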

  25. The edge magnitude is defined as the maximum value found at each point by the convolution of each of the masks with the image. • The edge direction is defined by the mask that produces the maximum magnitude; for instance, k0 corresponds to a horizontal edge, whereas k5 corresponds to a diagonal edge in the Northeast/Southwest direction (remember, edges are perpendicular to lines). • The last four masks are actually the same as the first four, but flipped about a central axis.

  26. Robinson compass masks • Are used in a manner similar to the Kirsch masks, but are easier to implement, as they rely only on coefficients of 0, 1, and 2, and are symmetrical about their directional axis (the axis with the zeros), which corresponds to the line direction. • We only need to compute the results for four of the masks; the results from the other four can be obtained by negating the results from the first four.

  27. Robinson Compass Masks • Similar to the Kirsch masks, with mask coefficients of 0, 1, and 2:
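The eight Robinson masks are images in the slide; two representative masks (the two that coincide with the Sobel masks, as noted on the next slide) are:

\[
\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
\]

The remaining six rotate this pattern in 45-degree steps around the center.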

  28. Robinson compass masks • The edge magnitude is defined as the maximum value found at each point by the convolution of each of the masks with the image. • The edge direction is defined by the mask that produces the maximum magnitude. • It is interesting to note that masks N and SE are the same as the Sobel masks. • Any of the edge detection masks can be extended by rotating them in the manner of the compass masks, which allows us to extract explicit information about edges in any direction (see the sketch below).
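A minimal MATLAB sketch of the compass-mask procedure, using the negation symmetry noted above (the last four masks are the negatives of the first four, so the maximum response over all eight equals the maximum absolute response over the first four). The image file name and the mask ordering are illustrative assumptions:

```matlab
img = im2double(imread('cameraman.tif'));   % assumption: any gray-level image

% Four of the eight Robinson masks; the other four are their negatives.
m1 = [-1 0 1; -2 0 2; -1 0 1];
m2 = [ 0 1 2; -1 0 1; -2 -1 0];
m3 = [ 1 2 1;  0 0 0; -1 -2 -1];
m4 = [ 2 1 0;  1 0 -1; 0 -1 -2];
masks = {m1, m2, m3, m4};

responses = zeros([size(img) 4]);
for k = 1:4
    responses(:,:,k) = filter2(masks{k}, img);   % correlate each mask with the image
end

% Edge magnitude: maximum absolute response; edge direction: index of the
% winning mask (the sign of that response tells which mask of the pair won).
[mag, dirIndex] = max(abs(responses), [], 3);

imshow(mag, []);   % display the edge magnitude map
```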

  29. Assignment 3 - Edge Map In Matlab Program • Implement all methods in this presentation • Set up the edge detection mask(s) • Use the convolution method (filter2 function) • Calculate the edge magnitude • Show the resulting edge map • No calculation of edge direction is required. Submit on 2 March 2011 (a skeleton for one operator is sketched below).
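A skeleton for one operator (Sobel), following the steps listed above; the input file name is a placeholder, and the other operators differ only in the masks used:

```matlab
img = im2double(imread('input.png'));    % read the image and convert to double

gx = [-1 -2 -1; 0 0 0; 1 2 1];           % set up the edge detection masks
gy = [-1 0 1; -2 0 2; -1 0 1];

s1 = filter2(gx, img);                   % apply each mask with filter2
s2 = filter2(gy, img);

mag = sqrt(s1.^2 + s2.^2);               % edge magnitude (no direction needed)

imshow(mag, []);                         % show the resulting edge map
```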

  30. Segmentation • The goal of segmentation is to find regions that represent objects or meaningful parts of objects. • Image segmentation methods look for regions that have some measure of homogeneity within themselves, or some measure of contrast with the objects on their border. • Three categories of image segmentation methods: (1) region growing and shrinking, (2) clustering methods, (3) boundary detection.

  31. Region Growing and Shrinking • Region growing and shrinking methods segment the image into regions, operating principally in the row and column, (r, c), based image space. • Local methods process a small area of the image at a time. • Global methods consider the entire image during processing. • Methods that combine local and global techniques, such as split and merge, are referred to as state space techniques and use graph structures to represent the regions and their boundaries.

  32. Region Growing and Shrinking • The data structure most commonly used for splitting and merging of regions is the quadtree. • A tree is a data structure whose nodes point to (connect) other elements. The top element is called the parent, and the connected elements are called children. • In a quadtree, each node can have four children.

  33. Figure 4.3-1 Quadtree Data Structure. a) A partitioned image where Ri represents different regions, b) The corresponding quadtree data structure
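As a sketch of quadtree splitting in practice, MATLAB's Image Processing Toolbox provides qtdecomp (an assumption that the toolbox is available; qtdecomp also requires a square image with power-of-two dimensions):

```matlab
img = im2double(imread('cameraman.tif'));   % 256 x 256 example image
S = qtdecomp(img, 0.2);                     % split any block whose gray-level range exceeds 0.2

% S is sparse: a nonzero entry at (r, c) gives the size of the homogeneous
% block whose upper-left corner is at (r, c).
blockSizes = full(S(S > 0));
disp(unique(blockSizes)');                  % block sizes produced by the split
```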

  34. Clustering Techniques • Individual elements are placed into groups; these groups are based on some measure of similarity within the group. • The difference from region growing is that domains other than the row and column, (r, c), based image space may be considered as the primary domain for clustering.

  35. Clustering Techniques • The simplest method is to divide the space of interest into regions by selecting the center or median along each dimension and splitting it there; this can be done iteratively, until the space is divided into the specific number of regions needed. • This method is used in the SCT/Center and PCT/Median segmentation algorithms. • This method is only effective if the space being used and the overall algorithm are designed intelligently, because a center or median split alone may not find good clusters (a minimal example of such a split follows).
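A minimal sketch of a single median split in RGB color space, illustrating the idea behind the SCT/Center and PCT/Median style algorithms rather than either algorithm itself; the image name is a placeholder:

```matlab
rgb = im2double(imread('peppers.png'));     % assumption: any RGB image
r = rgb(:,:,1); g = rgb(:,:,2); b = rgb(:,:,3);

% Split each color axis at its median; every pixel falls into one of
% 2^3 = 8 clusters according to which side of each median it lies on.
labels = (r > median(r(:))) * 4 + (g > median(g(:))) * 2 + (b > median(b(:))) + 1;

imagesc(labels); axis image;                % visualize the resulting regions
```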

  36. Recursive region splitting is a clustering method that has become a standard technique. • This method uses thresholding of histograms to segment the image. • A set of histograms is calculated for a specific set of features, and then each of these histograms is searched for distinct peaks (Figure 4.3-5). • The best peak is selected and the image is split into regions based on this thresholding of the histogram.

  37. One of the first algorithms based on these concepts proceeds as follows: • 1) Consider the entire image as one region and compute histograms for each component of interest (for example red, green, and blue for a color image). • 2) Apply a peak-finding test to each histogram, select the best peak, and put thresholds on either side of the peak. Segment the image into two regions based on this peak. • 3) Smooth the binary thresholded image so only a single connected subregion is left. • 4) Repeat steps 1-3 for each region until no new histograms have significant peaks (a sketch of the inner histogram-and-threshold step follows).
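A sketch of the inner step only (histogram, strongest peak, thresholds on either side of it); the peak choice here is a naive maximum rather than a proper peak-finding test, and the peak half-width is an illustrative assumption:

```matlab
img = imread('cameraman.tif');              % assumption: any gray-level image
[counts, bins] = imhist(img);               % gray-level histogram
[~, peakIdx] = max(counts);                 % strongest peak (naive choice)
peakValue = bins(peakIdx);

halfWidth = 20;                             % illustrative half-width around the peak
inPeak = img >= peakValue - halfWidth & img <= peakValue + halfWidth;

imshow(inPeak);                             % the two regions: inPeak and ~inPeak
```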

  38. Boundary Detection • Boundary detection, as a method of image segmentation, is performed by finding the boundaries between objects, thus indirectly defining the objects. • This method usually begins by marking points that may be part of an edge. • These points are then merged into line segments, and the line segments are in turn merged into object boundaries.

  39. One method to do this is to consider the histogram of the edge detection results, looking for the best valley manually (Figure 4.3-11). • With a bimodal histogram, a histogram with two major peaks, an analytical solution is available to find a good threshold value. • A bimodal histogram is typical for applications where we have one object against a background of high contrast.
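The slide does not name the analytical solution; one standard choice for a bimodal histogram (an assumption here, not necessarily the method in the source text) is Otsu's method, available in MATLAB as graythresh:

```matlab
img = imread('coins.png');      % objects against a contrasting background
t = graythresh(img);            % Otsu threshold, returned in [0, 1]
bw = imbinarize(img, t);        % split into object and background regions
imshow(bw);
```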
