This chapter discusses the challenges and techniques in processing and analyzing biomedical images, including image enhancement and restoration, automated segmentation, registration and fusion of multimodality information, classification of image content, and quantitative measurement of image properties.
Computational Medical Imaging Analysis Chapter 5: Processing and Analysis Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington, KY 40506 Chapter 5: CS689
5.1a: Challenges in Comprehending Information in Biomedical Images • Image enhancement and restoration • Automated and accurate segmentation of structures and features of interest • Automated and accurate registration and fusion of multimodality or multispectral information • Classification of image content, namely tissue characterization and typing • Quantitative measurement of image properties and features, including a discussion of the “meaning” of image measurement Chapter 5: CS689
5.2a: Image Enhancement and Restoration • Image enhancement methods attempt to improve the quality of an image • We can make certain features in the image more recognizable or prominent • Examples are amplification of edges or reduction of noise to increase the contrast between regions of an image • It is also possible to increase the visibility of features at a certain scale or with a certain spectral signature Chapter 5: CS689
5.2b: Tradeoffs in Detail and Noise • There is an inherent tradeoff between amplifying detail and reducing noise when applying image enhancement techniques • Procedures that enhance the visibility of detail also amplify noise; conversely, procedures applied to reduce noise also reduce detail (denoising usually blurs edges) • Many techniques, both linear and nonlinear (such as PDE-based denoising), have been developed to achieve image enhancement Chapter 5: CS689
5.2b: Wavelet Denoising Results Left: Original, Right: Noise suppressed result Chapter 5: CS689
5.2b: Difference of the Previous Images Difference image between original image and noise suppressed result image Chapter 5: CS689
5.2c: Histogram Operations • The histogram of an image is a function that relates the number of pixels in an image to the range of brightness values of those pixels • Normally a 2D graph, with the abscissa showing the brightness values and the ordinate showing the number of pixels at each value • Each point on the graph indicates how many pixels in the image share a given brightness level • It usually has one or more peaks and valleys that correspond to the gray levels that are most common and least common throughout the image Chapter 5: CS689
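A minimal sketch of computing such a histogram with NumPy (the random 8-bit image here is only a stand-in for real data):

```python
import numpy as np

# Stand-in 8-bit grayscale image; in practice this would be a loaded medical image.
image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# One bin per gray level: counts[k] = number of pixels with brightness value k.
counts, _ = np.histogram(image, bins=256, range=(0, 256))

# Peaks in `counts` mark the most common gray levels, valleys the least common.
print(counts.argmax(), counts.argmin())
```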
5.2d: Histogram Illustration Chapter 5: CS689
Image Enhancement: Spatial Domain Histogram Modification
5.2e: Use of Histogram • The global statistical manipulation of image gray scale values is based on histogram matching • Evaluation of histograms can reveal that some brightness (gray) levels may be “underutilized” as far as efficient display is concerned • Histogram equalization is a response to such an evaluation and refers to “spreading out” or “stretching” the gray levels so that they are all used as evenly as possible • This manipulation can take full advantage of the display system, but may alter the original data Chapter 5: CS689
5.2f: Histogram of A Mosaic Chapter 5: CS689
5.2g: Histogram of A Scene Chapter 5: CS689
Histogram Chapter 5: CS689
5.2h: Histogram Equalization • Histogram “flattening” or “equalization” uses an ideal “flat” histogram shape as the target for histogram matching, which maximizes contrast in the image • If the flattening step is used to preserve contrast while moving from a high-resolution gray scale to a lower-resolution gray scale, any “wasted” gray scale will tend to cause loss of detail • A slight modification of histogram flattening that effectively limits the maximum slope of the two cumulative functions can be used to transmit maximum information content in the low-resolution image Chapter 5: CS689
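A minimal sketch of histogram equalization via the cumulative histogram, assuming an 8-bit image (the cumulative-distribution remapping below is the standard textbook construction, not code from the slides):

```python
import numpy as np

def equalize_histogram(image):
    """Spread the gray levels of an 8-bit image so they are used as evenly as possible."""
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = counts.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each input gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[image]

# Stand-in image that underutilizes the upper half of the gray scale.
image = np.random.randint(0, 128, size=(256, 256), dtype=np.uint8)
equalized = equalize_histogram(image)
```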
5.2i: Illustration of Histogram Equalization Original histogram Original image Histogram of equalized image Equalized image Chapter 5: CS689
5.3a: Spatial Filtering • Spatial filtering involves the replacement of the image value at each pixel (or voxel) location with some function of that pixel and its neighbors • A linear filter (convolution) uses weighted sums and is reversible • A simple filter replaces each pixel by the computed mean or average of itself and its eight closest neighbors • The size of the neighborhood (the kernel of the convolution) may be varied for different effects Chapter 5: CS689
5.3b: Spatial Filtering (II) • The most common goal of pixel averaging is to reduce noise in the image • An accompanying result is smoothed or blurred edges in the image • Blurring and computation time increase with the size of the neighborhood (kernel) • To highlight differences between pixels in an image, each pixel may be replaced by the difference between itself and the mean of its neighborhood. This is “unsharp masking” for edge enhancement Chapter 5: CS689
Image Averaging Masks: a five-point cross mask covering f(0,0) and its four nearest neighbors f(-1,0), f(0,-1), f(0,1), f(1,0), and a full 3x3 mask covering f(0,0) and all eight neighbors from f(-1,-1) to f(1,1)
Image Averaging: weighted 3x3 mask
1 2 1
2 4 2
1 2 1
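A minimal sketch of applying these averaging masks as convolutions with scipy.ndimage (the random stand-in image and the 1/16 normalization of the weighted mask are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(128, 128)  # stand-in image

# Simple 3x3 mean filter: each pixel becomes the average of itself and its 8 neighbors.
mean_kernel = np.ones((3, 3)) / 9.0

# Weighted averaging mask from the slide, normalized so the weights sum to 1.
weighted_kernel = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]], dtype=float) / 16.0

smoothed_mean = convolve(image, mean_kernel, mode='nearest')
smoothed_weighted = convolve(image, weighted_kernel, mode='nearest')
```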
5.3c: Spatial Filtering (I) Kernel size 5x5 Original Chapter 5: CS689
5.3d: Spatial Filtering (II) Kernel size 9x9 Kernel size 15x15 Chapter 5: CS689
Laplacian: Second Order Gradient for Edge Detection
-1 -1 -1
-1  8 -1
-1 -1 -1
Image Sharpening with Laplacian
-1 -1 -1
-1  9 -1
-1 -1 -1
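A minimal sketch of applying the two Laplacian masks above (the stand-in image and the use of scipy.ndimage are illustrative choices):

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(128, 128)  # stand-in image

# Laplacian (second-order gradient) mask: responds strongly at edges, near zero on flat regions.
laplacian = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

# Sharpening mask (center weight 9): adds the Laplacian response back to the original in one convolution.
sharpen = np.array([[-1, -1, -1],
                    [-1,  9, -1],
                    [-1, -1, -1]], dtype=float)

edges = convolve(image, laplacian, mode='nearest')
sharpened = convolve(image, sharpen, mode='nearest')
```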
5.3e: Unsharp Masking Chapter 5: CS689
5.3f: Unsharp Masking with Scales Chapter 5: CS689
5.3g: Unsharp Masking Illustration (original image, image blurred with a 3x3 mean filter kernel, and the edge-enhanced result) Chapter 5: CS689
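A minimal sketch of unsharp masking with a scale factor, as illustrated above (the 3x3 mean blur and the value of the scale factor k are assumptions for the example):

```python
import numpy as np
from scipy.ndimage import uniform_filter

image = np.random.rand(256, 256)  # stand-in image

# Unsharp masking: subtract a local mean to isolate detail, then add the detail back, scaled.
blurred = uniform_filter(image, size=3)   # 3x3 mean filter
detail = image - blurred                  # high-frequency "mask"
k = 1.5                                   # scale factor: larger k gives stronger edge enhancement
enhanced = image + k * detail
```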
5.4a: Frequency Domain Filtering • Many advanced image enhancement techniques are developed in the Fourier domain • Fourier’s theorem states that any waveform (including the 2D and 3D spatial waveforms that are images) can be expressed as the sum of sinusoidal basis functions at varying frequencies, amplitudes, and relative phases • Reducing image noise, enhancing image contrast and edge definition, and other types of operations can be performed on the Fourier transform of an image Chapter 5: CS689
Frequency Domain • Frequency domain filtering methods process an acquired image in the Fourier domain to emphasize or de-emphasize specified frequency components • Frequency components can be divided into low and high ranges • Low-frequency components represent shapes and blurred structures in the image • High-frequency components correspond to sharp details, edges, and noise Chapter 5: CS689
5.4b: Advantages of Frequency Filtering • Operations in frequency space can often be faster than spatial convolution, especially if the convolution mask (region) is large (speed of filtering) • Frequency filtering permits certain operations that are problematic in the spatial domain, such as enhancing or suppressing specific frequencies in the image • High frequencies in an image can be suppressed using a low-pass frequency filter. This will suppress noise in the image, but also image detail Chapter 5: CS689
Frequency-Domain Methods An acquired image g(x,y) can be expressed as the convolution of the object f(x,y) with the point spread function (PSF) h(x,y) of a linear, spatially invariant imaging system, plus additive noise n(x,y): g(x,y) = f(x,y) * h(x,y) + n(x,y). The Fourier transform turns this into a multiplicative relationship, G(u,v) = F(u,v) H(u,v) + N(u,v), where F(u,v) is the Fourier transform of the object, H(u,v) is the Fourier transform of the PSF, u and v represent frequencies along the x- and y-directions, and G(u,v) and N(u,v) are the Fourier transforms of the acquired image g(x,y) and the noise n(x,y)
Frequency-Domain Methods Neglecting the noise term, the object information in the Fourier domain can be recovered by inverse filtering as F(u,v) = G(u,v) / H(u,v), where F(u,v) is the restored image in the frequency domain. The inverse filtering operation provides a basis for image restoration in the frequency domain; the inverse Fourier transform of F(u,v) provides the restored image in the spatial domain. The PSF of the imaging system can be experimentally determined or statistically estimated
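A minimal sketch of inverse filtering with NumPy's FFT, assuming the PSF h is sampled on the same grid as the image and centered in the array (the eps guard against division by near-zero H(u,v) is an added practical safeguard, not part of the slide):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Restore an image in the Fourier domain by dividing by the PSF's transfer function."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(h))   # move the PSF center to the (0,0) origin
    H = np.where(np.abs(H) < eps, eps, H)  # avoid division by values close to zero
    F_hat = G / H                          # F(u,v) = G(u,v) / H(u,v)
    return np.real(np.fft.ifft2(F_hat))
```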
Low-Pass Filtering An ideal low-pass filter suppresses noise and high-frequency information, providing a smoothing effect on the image. A 2D low-pass filter function H(u,v) is multiplied with the Fourier transform G(u,v) of the image to provide a smoothed image: F(u,v) = H(u,v) G(u,v), where F(u,v) is the Fourier transform of the filtered image f(x,y), which can be obtained by taking an inverse Fourier transform Chapter 5: CS689
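A minimal sketch of ideal low-pass filtering in the Fourier domain (expressing the cut-off as a fraction of the Nyquist frequency, mirroring the 0.3 and 0.5 values on the following slides, is an assumed parameterization):

```python
import numpy as np

def ideal_lowpass(image, cutoff=0.3):
    """Keep frequency components within `cutoff` (fraction of Nyquist); zero out the rest."""
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]      # frequencies along the row direction
    v = np.fft.fftfreq(cols)[None, :]      # frequencies along the column direction
    D = np.sqrt(u**2 + v**2)               # distance from the frequency-domain origin
    H = (D <= cutoff * 0.5).astype(float)  # ideal pass band: 1 inside the cut-off, 0 outside
    return np.real(np.fft.ifft2(H * np.fft.fft2(image)))
```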
5.4c: Low-pass Filter (I) Original image Blurred image with Gaussian noise Chapter 5: CS689
5.4d: Low-Pass Filter (II) Low-pass filter with cut-off frequency 0.3 Low-pass filter with cut-off frequency 0.5 Chapter 5: CS689
Low-Pass Filtering From top left clockwise: A low-pass filter function H(u,v) in the Fourier domain, the low-pass filtered MR brain image, the Fourier transform of the original MR brain image, the Fourier transform of the low-pass filtered MR brain image
5.4e: High-Pass Filtering • A high-pass filter enhances the high frequencies in the image, increasing both detail and noise • Frequency domain filtering may selectively enhance or suppress periodic patterns in the image by judicious selection of frequency filter functions; passing only a selected band of frequencies is called band-pass filtering Chapter 5: CS689
High-Pass Filtering High-pass filtering is used for image sharpening and extraction of high-frequency information such as edges. An ideal high-pass filter has a rectangular window function for the high-frequency passband. An ideal 2D high-pass filter with a cut-off frequency at a distance D0 from the origin in the frequency domain is H(u,v) = 0 for D(u,v) <= D0 and H(u,v) = 1 for D(u,v) > D0, where D(u,v) is the distance of (u,v) from the origin Chapter 5: CS689
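A minimal sketch of that ideal high-pass filter, using the same frequency-distance construction as the low-pass example (treating D0 in cycles per sample is an assumed unit):

```python
import numpy as np

def ideal_highpass(image, D0=0.1):
    """Ideal high-pass filter: H(u,v) = 0 where D(u,v) <= D0, and 1 elsewhere."""
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    D = np.sqrt(u**2 + v**2)               # distance from the frequency-domain origin
    H = (D > D0).astype(float)             # block low frequencies, pass everything beyond D0
    return np.real(np.fft.ifft2(H * np.fft.fft2(image)))
```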
5.4f: High Frequency Filtering High-frequency filter with cut-off at 0.5, applied to a clown image, compared with the image produced by the Sobel operator Chapter 5: CS689
High Pass Filtering From top left clockwise: a high-pass filter function H(u,v) in the Fourier domain, the high-pass filtered MR brain image, and the Fourier transform of the high-pass filtered MR brain image
5.5a: Image Restoration • Linear system theory is based on the supposition of linear relationships among all components of an imaging system, a reasonable assumption over the normal range of modern medical imaging systems • Linear systems can be completely characterized by their response to impulse functions, which represent a finite amount of energy delivered over essentially zero time • The impulse response (point spread function, or PSF) can be used to predict the output of the system to any arbitrary input by the process of convolution, essentially replacing each of the points in an image with its appropriately scaled impulse response Chapter 5: CS689
5.5b: Deconvolution • The highest frequencies or sharpest details of an image are generally degraded or lost • It is possible to use the point spread function to mathematically deblur or sharpen an image (deconvolution) • The process of deblurring an image is different from image enhancement • Enhancement aims to make certain features sharper or more prominent, whereas deconvolution aims to restore the image so that it more exactly represents its original object Chapter 5: CS689
5.5c: Methods of Deconvolution • Knowing the PSF is the key to successful deconvolution • The PSF can be measured empirically, estimated theoretically, or estimated on the fly from the acquired data • The Wiener filter minimizes the mean-squared error between the true object and the restoration of the object • Iterative nonlinear restoration techniques are usually better • Blind deconvolution is used when measurement of the PSF is difficult or tedious Chapter 5: CS689
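A minimal sketch of Wiener-filter deconvolution, assuming a constant noise-to-signal power ratio rather than a full spectral estimate (the nsr parameter and the centered-PSF convention are illustrative assumptions):

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=0.01):
    """Wiener restoration: minimizes mean-squared error between the object and its restoration."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(h))    # PSF assumed centered on the same grid as g
    # Wiener filter: conj(H) / (|H|^2 + NSR); reduces to inverse filtering as NSR -> 0.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```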
5.5d: Deconvolution (Example) Turbulences on the surface of Jupiter: original and restored Chapter 5: CS689
5.6a: Image Segmentation • Segmentation is spatial partitioning of an image into its constituent parts, or isolating specific objects in an image • Segmentation is often confused with and used interchangeably with classification • Image classification means identifying what an object in the image is, or what type of object each pixel belongs to • Segmentation: manual, automatic, semiautomatic (assisted manual) Chapter 5: CS689
5.6b: Manual Segmentation • Manual segmentation involves interactive delineation of the structure boundary in an image by a trained operator • This is often the most accurate approach if an expert is doing the work and is not fatigued or hampered by limiting interface devices • The drawbacks are that it is time consuming, error prone, subjectively biased, and not reproducible. Multiple operators and images from different scanners increase the variability of the defined borders Chapter 5: CS689
5.6c: Manual Segmentation (Example) Chapter 5: CS689