Computational Image Processing in Microscopy
By Adrienne Roeder
Funded by NSF IOS CAREER 1553030
Image processing
• Uses computational algorithms, sometimes with manual intervention, to extract information (features, measurements, patterns, etc.) from digital images.
• Allows extraction of quantitative data from large image datasets.
Figure: a developing Arabidopsis flower expressing the mCitrine-ATML1 fluorescent transcription factor fusion protein, and the processed image detecting each sepal nucleus and quantifying mCitrine-ATML1 fluorescence.
Meyer HM, Teles J, Formosa-Jordan P, et al. (2017) Fluctuations of the transcription factor ATML1 generate the pattern of giant cells in the Arabidopsis sepal. eLife 6: e19131. doi:10.7554/eLife.19131.
Considerations in acquiring a microscopy image for computational processing
• Simple images with the objects in one color and the background in black work best.
• The structure of interest should be one color, and nothing else should be the same color.
• There should be high contrast between the objects of interest and the background.
• Microscope settings should be adjusted to take images optimized for computational processing, not for viewing.
Figure: an image that is difficult for image processing (poor contrast, multiple features in the same color range) compared with one that is good for image processing (colors split into separate channels).
Fluorescence microscopy
• Advantage: great specificity. You only see fluorophores (usually fluorescent proteins or dyes added to the sample) that you have excited with the light you shine on them, which is good for image processing.
• Advantage: the sample can be alive!
• Limitations: bleaching, auto-fluorescence, and bleed-through of one color channel into another.
Figure: fluorescence image (confocal) of an Arabidopsis seedling, and a living, developing Arabidopsis flower bud imaged over 48 h (confocal).
Fluorescence
• Shorter wavelength = more energy.
• A fluorophore (e.g., Green Fluorescent Protein, GFP) absorbs light of a shorter wavelength (e.g., blue), which excites the fluorophore, and then emits light at a longer wavelength (e.g., green).
• Each fluorophore has a characteristic excitation and emission spectrum.
Figure: energy diagram showing absorption of a blue photon (ground state to excited state) and emission of a green photon, alongside the normalized excitation and emission spectra of GFP (300-600 nm), marking the 488 nm laser excitation line and the wavelength window in which GFP emission is detected.
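The relationship between wavelength and energy can be made concrete with the photon energy formula E = hc/λ. Below is a minimal sketch (not from the slides) comparing a 488 nm excitation photon with a 510 nm emission photon, values chosen only to approximate GFP's excitation laser and emission peak.

```python
# Photon energy E = h*c / wavelength: shorter wavelength -> higher energy.
h = 6.626e-34   # Planck constant (J*s)
c = 2.998e8     # speed of light (m/s)

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in electron volts."""
    joules = h * c / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert J -> eV

print(photon_energy_ev(488))  # ~2.54 eV, blue excitation photon
print(photon_energy_ev(510))  # ~2.43 eV, green emission photon (lower energy)
```

The energy lost between absorption and emission is why the emitted light is always at a longer wavelength, which in turn lets the microscope separate excitation from emission light by wavelength.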
Fluorescence microscopy light path
• Excitation light from a light source hits a dichroic mirror and is reflected down through the objective lens to the specimen. It excites the fluorophores in the specimen, which emit longer-wavelength emission light. The emission light enters the objective, passes through the dichroic mirror and other filters, and is recorded by the camera.
• Dichroic mirrors reflect light at some wavelengths (excitation) and let light at other wavelengths (emission) pass through.
Figure: diagram of the fluorescence light path, labeling the camera, eyepiece, dichroic mirror, excitation and emission paths, lenses, focus knobs, stage, and condenser.
Confocal microscopy uses a pinhole to optically section the sample
• Pinholes create optical sections: light originating outside the focal plane is blocked by the pinhole, and only in-focus light rays originating in the focal plane pass through the pinhole to the detector.
Figure: comparison of widefield and confocal light paths (laser, light source, dichroic mirror, excitation and emission paths, pinhole, detector, lenses, objective lens, focal plane, focus knobs, stage, and condenser), illustrated with a sunflower pollen grain. Image source: Olympus, https://www.olympus-lifescience.com/en/microscope-resource/primer/techniques/confocal/confocalintro/
Confocal microscopy images are generated by laser scanning
• The laser excites the fluorophores at one point in the sample.
• The detector (often a photomultiplier tube, or PMT) quantitatively measures the amount of emission light from that point in the sample and records it as a pixel intensity in the image (e.g., intensity = 59 at one point, 129 at the next).
• The laser then moves over to collect data at the next point, scanning across the sample to capture the whole optical section.
Figure: confocal light path and a diagram of a typical scanning pattern of the laser across the sample.
Different fluorophores can be recorded in separate channels
• Samples often have 2 or more fluorophores, which are spectrally distinct (e.g., GFP and chlorophyll).
• Each fluorophore can be recorded in a separate channel using different excitation and/or emission wavelengths.
• Different detectors can be used to simultaneously record each fluorophore, or the same detector can be used sequentially.
• Each channel can be displayed in a different color in the composite image; the colors are assigned by the user.
• Consider color blindness in the choice of colors (i.e., not green/red).
Figure: Channel 1, nuclei (excitation 488 nm, emission 493-556 nm), displayed in green; Channel 2, chlorophyll (excitation 488 nm, emission 593-710 nm), displayed in magenta; and the composite.
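As a concrete illustration of user-assigned display colors, here is a minimal NumPy sketch (not from the slides) that merges two single-channel images into a green/magenta composite; the random arrays are placeholders for real channel data.

```python
import numpy as np

def green_magenta_composite(channel1, channel2):
    """Show channel 1 in green and channel 2 in magenta (red + blue)."""
    rgb = np.zeros(channel1.shape + (3,), dtype=np.uint8)
    rgb[..., 1] = channel1   # green  <- channel 1 (e.g., nuclei)
    rgb[..., 0] = channel2   # red    <- channel 2 (e.g., chlorophyll)
    rgb[..., 2] = channel2   # blue   <- channel 2
    return rgb

# placeholder 8-bit channels standing in for two recorded channels
ch1 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
ch2 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
composite = green_magenta_composite(ch1, ch2)
```

Green/magenta pairs remain distinguishable to red-green color-blind viewers because magenta contains a blue component.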
“Z-stack” captures the 3D image
• A 3D image is composed of a series of 2D optical section images collected at successive focal depths from a single biological sample.
Figure: a z-stack and the microscope control software used to acquire it.
Pixel and voxel
• Pixels and voxels are the basic units of the image.
• Each has an associated intensity value measured by the detector (from 0 for black to 255 for white in an 8-bit image).
• Each represents a defined unit of area or volume in the sample, which is recorded in the microscope metadata.
• A voxel is the 3D analog of the pixel.
Figure: a pixel is 2D (here 1 pixel represents 0.755 µm × 0.755 µm) and a voxel is 3D (here 1 voxel represents 0.755 µm × 0.755 µm × 4.0 µm); each element is labeled with its green intensity value.
Resolution and dots per inch
• Resolution is the ability to resolve, or distinguish, features in the image.
• Resolution is limited by intrinsic properties of the imaging system.
• Printing resolution is the number of dots (pixels) per unit distance, e.g., dots per inch (DPI).
• Journals generally require 300 DPI for images.
• Resampling to decrease the number of pixels is acceptable; resampling to increase the number of pixels is not.
Figure: the same 1024 × 1024 image printed at 600 DPI and at 300 DPI.
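The arithmetic behind the DPI requirement is simple: printed size equals pixel count divided by DPI, so a fixed number of pixels prints smaller at higher DPI. A quick sketch, using the 1024-pixel width from the figure above:

```python
# Printed size (inches) = pixels / DPI.
def print_size_inches(pixels, dpi):
    return pixels / dpi

print(print_size_inches(1024, 300))  # ~3.41 in per side at 300 DPI
print(print_size_inches(1024, 600))  # ~1.71 in per side at 600 DPI
```

Upsampling (e.g., resampling a 1024-pixel image to 2048 pixels) does not add detail, which is why journals do not accept it.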
Maximum intensity projection
• For each (x, y) position in the image, the algorithm selects the brightest pixel value across all slices of the z-stack. These brightest values are combined to make the final 2D image.
• The maximum intensity projection can often be used to produce an overall view of the specimen, particularly the outer surface.
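A minimal sketch of the projection, assuming the z-stack has already been loaded as a NumPy array of shape (z, y, x) (for real data, something like tifffile.imread on a multi-page TIFF would produce such an array; the random array below is only a placeholder):

```python
import numpy as np

# placeholder z-stack with 25 optical sections of 512 x 512 pixels
stack = np.random.poisson(30, size=(25, 512, 512)).astype(np.uint16)

# maximum intensity projection: keep the brightest value along z at each (x, y)
mip = stack.max(axis=0)
```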
Image Compression
• Reduces the size of the image file by filtering out redundant information.
• Lossless compression: the exact image can be recovered, e.g., LZW in TIFF.
• Lossy compression: information is lost, e.g., JPEG.
Figure: the original image compared with a highly compressed lossy version.
Roeder AHK, Cunha A, Burl MC, Meyerowitz EM. (2012) A computational image analysis glossary for biologists. Development 139: 3071-3080. doi:10.1242/dev.076414.
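A hedged sketch with Pillow showing the two options named above: the same 8-bit RGB image written as a lossless LZW-compressed TIFF and as a lossy JPEG (the random array is only a placeholder for a real micrograph, which would compress far better):

```python
import numpy as np
from PIL import Image

# placeholder 8-bit RGB image
img = Image.fromarray(np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8))

img.save("lossless.tif", compression="tiff_lzw")  # exact pixel values recoverable
img.save("lossy.jpg", quality=60)                 # smaller file, but detail is lost
```

Quantitative analysis should be done on uncompressed or losslessly compressed images, since lossy compression alters pixel intensities.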
Step 2: Pre-processing. Denoising filters
• Denoising filters compare a target pixel to the surrounding pixels and change the intensity value of the target pixel based on the neighbors' values.
• Mean filter = the mean of the neighbors; median filter = the median of the neighbors; Gaussian blur = the weighted average of the pixel and its neighbors, with the weights assigned according to distance.
Figure: the original image compared with mean-, median-, and Gaussian blur-filtered versions.
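The three filters just described map directly onto standard SciPy functions; here is a minimal sketch (the noisy array is a synthetic placeholder for a real image channel):

```python
import numpy as np
from scipy import ndimage

# placeholder noisy image (Poisson noise roughly mimics photon-counting noise)
noisy = np.random.poisson(20, size=(256, 256)).astype(float)

mean_filtered   = ndimage.uniform_filter(noisy, size=3)    # mean of each 3x3 neighborhood
median_filtered = ndimage.median_filter(noisy, size=3)     # median of each 3x3 neighborhood
gauss_filtered  = ndimage.gaussian_filter(noisy, sigma=1)  # distance-weighted average
```

The median filter is often preferred for salt-and-pepper noise because a single extreme pixel cannot pull the median the way it pulls the mean.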
Step 3: Segmentation
• Segmentation automatically delineates objects in the image for further analysis.
• Segmentation is the process of partitioning an image into regions of interest, i.e., identifying each nucleus, cell, or tissue type within the image.
Thresholding segmentation
• Thresholding is a simple method of segmentation: the user defines a threshold intensity, everything above it is marked as object, and everything below it is marked as background.
• It is used in COSTANZA as a pre-processing step (Background Extraction).
• It does not work well when different objects have different intensities.
• Fiji: Image>Adjust>Threshold
Roeder AHK, Cunha A, Burl MC, Meyerowitz EM. (2012) A computational image analysis glossary for biologists. Development 139: 3071-3080. doi:10.1242/dev.076414.
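A minimal sketch of thresholding with scikit-image; Otsu's method is used here to pick the threshold automatically, but a manually chosen intensity works the same way (the built-in coins image stands in for a micrograph):

```python
from skimage import data, filters

image = data.coins()                        # stand-in for a single-channel micrograph
threshold = filters.threshold_otsu(image)   # or a user-chosen intensity value
objects = image > threshold                 # True = object, False = background
```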
Gradient ascent/descent segmentation
• A segmentation method for finding local image intensity maxima/minima (e.g., the center of a nucleus).
• An image can be thought of as a landscape with peaks and valleys in intensity.
• The algorithm starts from each pixel in the image, moves to the neighboring pixel with the highest intensity, and repeats the process until no neighbor has a higher intensity; that pixel is a local maximum.
• Good for identifying objects that do not all have the same intensity.
• BOA (basin of attraction) = all of the pixels that converge to the same intensity maximum.
• COSTANZA uses gradient ascent.
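A simplified sketch of the hill-climbing idea (an illustration, not COSTANZA's actual implementation): every pixel climbs to its brightest neighbor until it reaches a local maximum, and all pixels that reach the same maximum get the same label, forming one basin of attraction.

```python
import numpy as np

def gradient_ascent_labels(image):
    """Label each pixel by the local intensity maximum it climbs to."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    maxima = {}                      # maps each local-maximum position to a label
    for y in range(h):
        for x in range(w):
            cy, cx = y, x
            while True:
                # 3x3 neighborhood around the current pixel (clipped at the borders)
                ys = slice(max(cy - 1, 0), min(cy + 2, h))
                xs = slice(max(cx - 1, 0), min(cx + 2, w))
                window = image[ys, xs]
                dy, dx = np.unravel_index(np.argmax(window), window.shape)
                ny, nx = ys.start + dy, xs.start + dx
                if (ny, nx) == (cy, cx):   # current pixel is the brightest in its neighborhood
                    break
                cy, cx = ny, nx            # otherwise climb toward the brightest neighbor
            key = (cy, cx)
            if key not in maxima:
                maxima[key] = len(maxima) + 1  # new basin of attraction
            labels[y, x] = maxima[key]
    return labels

labels = gradient_ascent_labels(np.random.rand(64, 64))  # placeholder image
```

Ties on intensity plateaus are resolved by pixel position (argmax picks the first maximum in the window), so the climb always terminates.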
Watershed segmentation
• Seeds each cell, then fills each cell like water poured into the intensity landscape.
Figure: watershed segmentation of tomato shoot apex cells in the MorphoGraphX software.
Barbier de Reuille P, Routier-Kierzkowska AL, Kierzkowski D, et al. (2015) MorphoGraphX: A platform for quantifying morphogenesis in 4D. eLife 4: e05864. doi:10.7554/eLife.05864.001.
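A minimal sketch of seeded watershed using scikit-image (a generic example, not MorphoGraphX's code): two overlapping disks stand in for touching cells, one seed is placed per cell, and the inverted distance map acts as the intensity landscape that gets flooded from the seeds.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# two overlapping disks as a stand-in for two touching cells
yy, xx = np.mgrid[0:100, 0:100]
binary = (((yy - 40) ** 2 + (xx - 40) ** 2) < 20 ** 2) | \
         (((yy - 60) ** 2 + (xx - 65) ** 2) < 20 ** 2)

# the landscape to flood: cell interiors are deep valleys in the inverted distance map
distance = ndimage.distance_transform_edt(binary)

# one seed per cell (hypothetical hand-placed seeds, roughly at the cell centers)
markers = np.zeros(binary.shape, dtype=int)
markers[40, 40] = 1
markers[60, 65] = 2

labels = watershed(-distance, markers, mask=binary)  # flood each basin from its seed
```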
Edge detection segmentation
• Finds the edges of objects by detecting steep changes in image intensity.
• Fiji: Process>Find edges
• Plugin: Canny edge detection (https://imagej.nih.gov/ij/plugins/canny/index.html)
Figure: the original image and the result of edge detection.
Roeder AHK, Cunha A, Burl MC, Meyerowitz EM. (2012) A computational image analysis glossary for biologists. Development 139: 3071-3080. doi:10.1242/dev.076414.
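Canny edge detection is also available in scikit-image; a minimal sketch (analogous to, but not the same code as, the Fiji plugin linked above):

```python
from skimage import data, feature

image = data.coins()                      # built-in sample image as a stand-in
edges = feature.canny(image, sigma=2.0)   # True wherever a steep intensity change is found
```

Larger sigma values smooth the image more before detecting edges, suppressing edges caused by noise.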
Step 4: Post-processing
Figure: the overall image-processing pipeline (microscopy → original image → pre-processing → segmentation → post-processing → data analysis).
Semi-automated approach
• Automated segmentation programs commonly make errors that are obvious to the scientist.
• Scientists often correct the errors generated by the automated segmentation program by hand.
• Critique: hand correction can add human bias.
Figure: the original image, the automated segmentation (with some errors), the hand correction (regions added and erased), and the final segmentation.
Roeder AHK, Cunha A, Burl MC, Meyerowitz EM. (2012) A computational image analysis glossary for biologists. Development 139: 3071-3080. doi:10.1242/dev.076414.
Tracking and registration
• Tracking: identifying corresponding features (e.g., nuclei, cells) in images from a time series.
• Registration: alignment of two images; often used to compare time points.
Figure: live imaging of nuclei in developing Arabidopsis flowers at 0 hours and 6 hours.
Roeder AHK, Cunha A, Burl MC, Meyerowitz EM. (2012) A computational image analysis glossary for biologists. Development 139: 3071-3080. doi:10.1242/dev.076414.
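A hedged sketch of translational registration between two time points using scikit-image's phase cross-correlation (a shifted copy of a built-in sample image stands in for the 0 h and 6 h images):

```python
from scipy import ndimage
from skimage import data
from skimage.registration import phase_cross_correlation

t0 = data.camera().astype(float)   # stand-in for the 0 h image
t6 = ndimage.shift(t0, (12, -7))   # stand-in for the 6 h image, offset by a known amount

# estimate the translation needed to register t6 onto t0, then apply it
shift, error, diffphase = phase_cross_correlation(t0, t6)
t6_aligned = ndimage.shift(t6, shift)
```

Real time-lapse data often needs more than a pure translation (growth, rotation, deformation), which is where dedicated tools and tracking of individual nuclei come in.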
Validation
• Carefully compare the processed results with the original image.
Figure: the original image next to the processed image.
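One convenient way to do this visually (a sketch, not a prescribed workflow) is to overlay the segmentation boundaries on the original image so that missed or spurious objects stand out; here a quick Otsu-threshold segmentation of a built-in sample image stands in for the real result:

```python
import matplotlib.pyplot as plt
from skimage import data, filters, measure, segmentation

image = data.coins()                                            # stand-in original image
labels = measure.label(image > filters.threshold_otsu(image))   # stand-in segmentation
overlay = segmentation.mark_boundaries(image, labels, color=(1, 0, 0))

plt.imshow(overlay)
plt.title("Segmentation boundaries over the original image")
plt.axis("off")
plt.show()
```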