
5. Detection of Regions of Interest



  1. Joonas Vanninen Antonio Palomino Alarcos 5. Detection of Regions of Interest

  2. One of the objectives of biomedical image analysis • The characteristics of the regions are examined later in detail • Segmentation is the process of dividing an image into different parts based on • Discontinuity • Similarity • Many of the methods can be used more generally in the detection of features Detection of Regions of Interest

  3. If the gray levels of the objects of interest are known, the image can be thresholded to include only them • Doesn’t generally produce uniform regions Thresholding
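
A minimal sketch of range thresholding with NumPy; the gray-level limits t_low and t_high are assumed to be known in advance, as the slide presumes:

```python
import numpy as np

def threshold_range(image, t_low, t_high):
    """Binary mask of pixels whose gray levels lie in [t_low, t_high]."""
    return (image >= t_low) & (image <= t_high)

# Illustrative use on an 8-bit image (the limits are arbitrary here):
# mask = threshold_range(image, 100, 160)
```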

  4. Detection of isolated points and lines • Can be useful in noise removal and the analysis of particles • Isolated points can be detected with the following convolution mask • Can be thresholded • Straight lines can be detected with masks
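
A sketch using the standard 3×3 isolated-point mask and a horizontal-line mask as assumed stand-ins for the masks shown on the slide, with SciPy for the convolution:

```python
import numpy as np
from scipy import ndimage

# Assumed textbook masks: the point mask responds strongly only where a pixel
# differs from all of its neighbors; the line mask responds to horizontal lines.
POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])
HORIZONTAL_LINE_MASK = np.array([[-1, -1, -1],
                                 [ 2,  2,  2],
                                 [-1, -1, -1]])

def detect_isolated_points(image, threshold):
    """Convolve with the point mask and threshold the magnitude of the response."""
    response = ndimage.convolve(image.astype(float), POINT_MASK)
    return np.abs(response) >= threshold
```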

  5. 5.3 Edge detection

  6. Edges • An edge: • A large change in the gray level • The change is in a particular direction, depending upon the orientation of the edge • Can be measured for example with derivatives or gradients:

  7. Gradient masks • Derivatives and gradients can be approximated from differences with convolution masks • Prewitt operators:

  8. Gradient masks (II) • Sobel operators have larger weights for the pixels in the same row / column: • Roberts operators use 2 x 2 neighborhoods to compute cross-differences

  9. Gradient masks (III) • With two masks we can get a vector value for the gradient:
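
A sketch of how two masks yield a gradient vector per pixel, here with the Prewitt pair (Sobel or Roberts masks could be substituted); the magnitude and direction computation is the standard one:

```python
import numpy as np
from scipy import ndimage

# Prewitt masks for the horizontal and vertical differences.
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
PREWITT_Y = PREWITT_X.T

def gradient_vector(image):
    """Gradient magnitude (edge strength) and direction from two convolution masks."""
    img = image.astype(float)
    gx = ndimage.convolve(img, PREWITT_X)
    gy = ndimage.convolve(img, PREWITT_Y)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction
```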

  10. Laplacian operators • A second-order difference operator • Omnidirectional: sensitive to edges in all directions but can’t detect the direction of the edge • Sensitive to noise since there is no averaging • Positive and negative values for each edge • Zero crossings in between can be used to find the local maxima of the first-order gradients

  11. Laplacian of Gaussian • The noise in an image can be reduced by first convolving it with a Gaussian: • The order of the operators can be changed:

  12. Laplacian of Gaussian (II) • The result is called the Laplacian of Gaussian operator, LoG • Often referred to as the Mexican hat function • Can be approximated by the difference between two Gaussians, the DoG operator
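
A short sketch of the LoG and its DoG approximation using SciPy's filters; the choice of σ and the ratio k ≈ 1.6 between the two Gaussians are assumptions, not values from the slides:

```python
from scipy import ndimage

def log_response(image, sigma=2.0):
    """Laplacian of Gaussian: Gaussian smoothing followed by the Laplacian."""
    return ndimage.gaussian_laplace(image.astype(float), sigma=sigma)

def dog_response(image, sigma=2.0, k=1.6):
    """Difference of Gaussians, an approximation of the LoG."""
    img = image.astype(float)
    return ndimage.gaussian_filter(img, sigma) - ndimage.gaussian_filter(img, k * sigma)
```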

  13. Uses zero-crossings of the image convolved with the LoG operator to represent edges • Problems: • If the edges are not well separated, zero crossings may also represent local minima (false zero crossings) • The edge localization may be poor Marr-Hildreth edge detector
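
A minimal sketch of the Marr-Hildreth idea: filter with the LoG and mark sign changes between neighboring pixels as edges. The slope threshold used to suppress weak (false) zero crossings is an assumed addition:

```python
import numpy as np
from scipy import ndimage

def marr_hildreth_edges(image, sigma=2.0, slope_threshold=0.0):
    """Edge map from zero-crossings of the LoG-filtered image."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    # Sign changes between horizontally adjacent pixels.
    h = (np.sign(log[:, :-1]) != np.sign(log[:, 1:])) & \
        (np.abs(log[:, :-1] - log[:, 1:]) > slope_threshold)
    # Sign changes between vertically adjacent pixels.
    v = (np.sign(log[:-1, :]) != np.sign(log[1:, :])) & \
        (np.abs(log[:-1, :] - log[1:, :]) > slope_threshold)
    edges[:, :-1] |= h
    edges[:-1, :] |= v
    return edges
```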

  14. Different structures are visible at different scales, controlled by the parameter σ of the Gaussian • Ideally an edge would be seen with as many scales as possible • A stability map measures the persistence of boundaries over a range of filter scales Scale-space

  15. An ideal detector for step-type edges corrupted by additive white noise, defined by three criteria: • Detection: no false or missing edges • Localization: detected edges are spatially near the real ones • A single output for a single edge • The image is convolved with a Gaussian • The gradient is estimated for each pixel • The direction of the gradient is the normal of the edge • The amplitude of the gradient is the strength of the edge Canny edge detector

  16. Non-maximal suppression: the values of gradients that are not local maxima are set to zero • The gradients are hysteresis thresholded: a pixel is considered to be an edge pixel if • It has a gradient value larger than the higher threshold, or • It has a gradient value larger than the lower threshold and it is spatially connected to another edge pixel • The zero-crossings of the second derivative in the direction of the normal can also be used • This can be used for sub-pixel accuracy Canny edge detector (II)
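
All of these steps (Gaussian smoothing, gradient estimation, non-maximal suppression and hysteresis thresholding) are performed by scikit-image's Canny implementation; a minimal sketch, assuming skimage is available and using illustrative parameter values:

```python
from skimage import feature

def canny_edges(image, sigma=1.4, low=0.1, high=0.3):
    """Binary Canny edge map; the thresholds act on the gradient magnitude."""
    return feature.canny(image, sigma=sigma, low_threshold=low, high_threshold=high)
```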

  17. Highpass filters in the Fourier domain can be used to find edges • High-frequency noise → use a bandpass filter • The LoG filter: a high-frequency-emphasising Laplacian combined with a Gaussian lowpass filter • Use of the frequency domain may be computationally advantageous if the LoG is specified with a large array (large σ) Fourier-domain methods
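
A sketch of LoG filtering carried out in the Fourier domain with NumPy's FFT; the transfer function used here, the Laplacian response −4π²(u²+v²) multiplied by a Gaussian lowpass in the frequency conventions of np.fft.fftfreq, is an assumption of the sketch rather than a formula taken from the slides:

```python
import numpy as np

def log_filter_fourier(image, sigma=2.0):
    """Apply a LoG filter by pointwise multiplication in the Fourier domain."""
    img = image.astype(float)
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]   # cycles/pixel along rows
    v = np.fft.fftfreq(img.shape[1])[None, :]   # cycles/pixel along columns
    r2 = u ** 2 + v ** 2
    # Laplacian (highpass) times Gaussian (lowpass) = LoG transfer function.
    H = -4.0 * np.pi ** 2 * r2 * np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * r2)
    return np.real(np.fft.ifft2(F * H))
```

For large σ the equivalent spatial mask becomes large, which is when the frequency-domain route tends to pay off.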

  18. Edges are usually not linked • The similarity of edge pixels can be measured by: • The strength of the gradient • The direction of the gradient • The most similar pixels should be used to link edges to each other Edge linking

  19. Segmentation and Region Growing

  20. Dividing the image into regions that could correspond to ROIs is an important prerequisite for applying image analysis techniques • Computer analysis of images usually starts with segmentation • Reduces pixel data to region-based information about the objects present in the image Image Segmentation

  21. Thresholding techniques • Assumption: all pixels whose values lie within a certain range belong to the same class • Threshold may be determined based upon the histogram of the image • Boundary-based methods • Assumption: pixel values change rapidly at the boundaries between regions • Intensity discontinuities lying at the boundaries between objects and backgrounds must be detected Segmentation Techniques (I)

  22. Region-based methods • Assumption: neighboring pixels within a region have similar values • May be divided into two groups • Region Splitting and Merging • Region Growing • Hybrid techniques • Combine boundary and region criteria Segmentation Techniques (II)

  23. Optimal Thresholding • Noise modifies the gray levels into distributions represented by Gaussian PDFs • The probability of erroneous classification is minimized by differentiating it with respect to T, equating the result to zero and making some simplifying assumptions (σ1 = σ2 = σ)
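
Carrying out that differentiation gives the standard closed form T = (μ1 + μ2)/2 + σ²/(μ1 − μ2) · ln(P2/P1); a small sketch, with the class statistics assumed to be known (e.g. estimated from the histogram):

```python
import numpy as np

def optimal_threshold(mu1, mu2, sigma, p1, p2):
    """Optimal threshold for two Gaussian classes with equal variance sigma.

    Derived by setting the derivative of the classification-error
    probability to zero; with equal priors it reduces to (mu1 + mu2) / 2.
    """
    return (mu1 + mu2) / 2.0 + sigma ** 2 / (mu1 - mu2) * np.log(p2 / p1)

# Illustrative values (not from the slides):
# T = optimal_threshold(mu1=60.0, mu2=140.0, sigma=15.0, p1=0.3, p2=0.7)
```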

  24. Region-oriented segmentation • This method partitions R (the entire space of the given image) into n subregions Ri such that: • The union of all Ri is R • Each Ri is a connected region • Ri ∩ Rj = ∅ for i ≠ j • F(Ri) = TRUE for i = 1, 2, …, n • Results are highly dependent upon the procedure used to select the seed pixels and the inclusion criteria used

  25. Initially, we divide the given image arbitrarily into a set of disjoint quadrants • If F(Ri) = FALSE for any quadrant, subdivide that quadrant into subquadrants • Iterate the procedure until no further changes are made • The splitting procedure could result in adjacent regions that are similar, so a merging step is required: merge adjacent regions Ri and Rk if F(Ri ∪ Rk) = TRUE • Iterate until no further merging is possible Splitting and merging of regions
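
A sketch of the splitting phase as a recursive quadtree, with a simple standard-deviation homogeneity test standing in for F(R) (the predicate and its tolerance are assumptions); a merging pass over adjacent leaves would follow:

```python
import numpy as np

def F(region, max_std=10.0):
    """Assumed homogeneity predicate: TRUE when gray levels are nearly uniform."""
    return region.std() <= max_std

def split_quadrants(image, y=0, x=0, h=None, w=None, min_size=4, regions=None):
    """Recursively subdivide quadrants for which F(R) = FALSE."""
    if regions is None:
        regions, (h, w) = [], image.shape
    quadrant = image[y:y + h, x:x + w]
    if F(quadrant) or h <= min_size or w <= min_size:
        regions.append((y, x, h, w))          # accept this quadrant as a region
        return regions
    h2, w2 = h // 2, w // 2
    split_quadrants(image, y,      x,      h2,     w2,     min_size, regions)
    split_quadrants(image, y,      x + w2, h2,     w - w2, min_size, regions)
    split_quadrants(image, y + h2, x,      h - h2, w2,     min_size, regions)
    split_quadrants(image, y + h2, x + w2, h - h2, w - w2, min_size, regions)
    return regions

# Merging step (not shown): join adjacent regions Ri, Rk whenever F(Ri U Rk) = TRUE.
```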

  26. Region growing using an additive tolerance • A neighboring pixel f(m,n) is appended to the region if |f(m,n) − f(seed)| ≤ T • T ≡ ‘additive tolerance level’ • Problem: the size and shape of the region depend on the seed pixel selected
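
A minimal sketch of additive-tolerance region growing with a breadth-first scan of 4-connected neighbors, comparing each candidate against the original seed value:

```python
import numpy as np
from collections import deque

def grow_region_additive(image, seed, T):
    """Grow a region from seed (row, col): a 4-connected neighbor f(m,n) is
    appended if |f(m,n) - f(seed)| <= T."""
    img = image.astype(float)
    seed_value = img[seed]
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            m, n = r + dr, c + dc
            if (0 <= m < img.shape[0] and 0 <= n < img.shape[1]
                    and not region[m, n]
                    and abs(img[m, n] - seed_value) <= T):
                region[m, n] = True
                queue.append((m, n))
    return region
```

Replacing seed_value with a running mean, or with the gray level of the current center pixel, gives the modified criteria described on the next slide.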

  27. Modified Criteria • Running-mean algorithm • The new pixel is compared with the mean gray level (running mean) of the region being grown • ”Current center pixel” method • After a pixel C is appended to the region, its 4 (or 8) connected neighbours are checked for inclusion in the region by comparison with the gray level of C

  28. Region growing using a multiplicative tolerance • A relative difference, based upon a ”multiplicative tolerance level” (τ), could be employed • f(m,n) ≡ gray level of the pixel being checked • μRc ≡ the reference value: the original seed pixel value, the current center pixel value, or the running-mean gray level

  29. The last methods present difficulties in the selection of the range of the tolerance value • Possible solution: make use of some of the characteristics of the HVS • A new parameter, the ”just-noticeable difference” (JND), is used: JND = L·CT • L ≡ background luminance • CT ≡ threshold contrast Region growing based upon the human visual system

  30. JND-Background Gray Level function • Determination of the JND as a function of background gray level is needed to apply this method • It is possible to determine this relationship based upon psychophysical experiments

  31. HVS-based region-growing algorithm • It starts with a 4-connected neighbor-pixel grouping; the grouping condition is defined in terms of the JND • Removal of small regions is performed • Merging of connected regions is performed if two neighboring regions meet the JND condition • The procedure is iterated until no neighboring region satisfies the JND condition

  32. Detection of Objects of Known Geometry

  33. The Hough transform • Hough domain: straight lines are characterized by the pair of parameters (m,c) • m is the slope • c is the intercept • Disadvantage: m and c have unbounded ranges • Parametric representation • θ limited to [0,π] (or to [0,2π]) • ρ limited by the size of the image • Limits of (ρ,θ) affected by the choice of the origin

  34. Properties of Straight Lines • If the normal parameters of the line are (ρ0,θ0) • Derived properties: • A point in the (x,y) space corresponds to a sinusoidal curve in the (ρ,θ) space • A point in the (ρ,θ) space corresponds to a straight line in the (x,y) space • Points on the same straight line in the (x,y) space correspond to curves through a common point in the (ρ,θ) space • Points on the same curve in the parameter space correspond to lines through a common point in the (x,y) space

  35. Discretize the (ρ,θ) space into accumulator cells by quantizing ρ and θ • Accumulator cells are incremented by one for each pixel with a value of 1 (a new curve ρ = x(n)cosθ + y(n)sinθ is added) • Coordinates of the points of intersection of the curves in the parameter space provide the parameters of the lines Straight Line Detection
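
A sketch of the accumulation step for a binary edge image, with assumed quantization levels for ρ and θ; peaks in the returned accumulator give the line parameters:

```python
import numpy as np

def hough_lines(edge_image, n_rho=200, n_theta=180):
    """Vote in the quantized (rho, theta) space for every edge pixel."""
    rows, cols = edge_image.shape
    rho_max = np.hypot(rows, cols)                     # rho limited by the image size
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    accumulator = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_image)                    # pixels with a value of 1
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one sinusoid per edge pixel
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        accumulator[idx, np.arange(n_theta)] += 1
    return accumulator, rhos, thetas
```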

  36. Image: Wikipedia

  37. Detection of circles • Any circle in the (x,y) space is represented by a single point in the 3D (a,b,c) parameter space • Points along the perimeter of a circle describe a circular cone in the (a,b,c) space • The algorithm for the detection of straight lines may be extended to the detection of circles
