MTF Correction for Optimizing Softcopy Display of Digital Mammograms: Use of a Vision Model for Predicting Observer Performance
Elizabeth Krupinski, PhD¹, Jeffrey Johnson, PhD², Hans Roehrig, PhD¹, Jeffrey Lubin, PhD², Michael Engstrom, BS¹
¹University of Arizona  ²Sarnoff Corporation
This work was supported by NIH grant R01 CA 87816-01.
Rationale • The MTF (Modulation Transfer Function) of monitors is inferior to that of radiographic film • MTF is degraded (spatial resolution is lost) in both the vertical & horizontal directions, and moreover is non-isotropic • Horizontal by ~ 10 – 20% • Vertical by ~ 30 – 40% • Over half the contrast modulation is lost at the highest spatial frequencies • Images are thus degraded in both spatial & contrast resolution • Perhaps image processing can help!
Rationale • Observer (ROC) trials are ideal for evaluation, but to achieve good statistical power they • Require many images • Require many observers • Often require multiple viewing conditions • Are time-consuming • Predictive models may help decrease the need for extended & multiple ROC trials • Simulate effects of softcopy display parameters on image quality • Predict effects on observer performance
JNDmetrix Model • Developed by the Sarnoff Corporation • Successful in military & industrial tasks • A computational method for predicting human performance in detection, discrimination & image-quality tasks • Based on JND (Just Noticeable Difference) measurement principles & frequency-channel vision-modeling principles • Takes 2 input images & returns accurate, robust estimates of visual discriminability
JNDmetrix Model • Optics: the input images are convolved with a function approximating the point-spread optics of the eye • Image Sampling: by the retinal cone mosaic, simulated by a Gaussian convolution followed by point sampling • Raw Luminance Image: converted to units of local contrast & decomposed into a Laplacian pyramid yielding 7 band-pass frequency levels • Pyramid Levels: convolved with 8 pairs of spatially oriented filters with bandwidths derived from psychophysical data
JNDmetrix Model • Paired Filtered Images: squared & summed, yielding a phase-independent energy response that mimics the transformation in visual cortex from a linear response (simple cells) to an energy response (complex cells) • Transducer Phase: the energy measure at each pyramid level is normalized by a value approximating the square of the frequency-specific contrast detection threshold for that level & local luminance
JNDmetrix Model • Normalized Levels: transformed by a sigmoid non-linearity duplicating the visual contrast discrimination function • Transducer Outputs: convolved with a disk-shaped kernel & averaged to account for foveal sensitivity • Distance Metric: computed from the distance between vectors (m-dimensional, m = # pyramid levels x # orientations) at each spatial position • JND Spatial Map: the result represents the degree of discriminability; reduced to a single value (Q-norm)
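To make the pipeline above concrete, here is a greatly simplified Python (NumPy/SciPy) sketch of a JND-style frequency-channel comparison. It is not the proprietary Sarnoff JNDmetrix implementation: the pyramid construction, contrast-normalization constants, sigmoid-like transducer, and Q-norm exponent are illustrative placeholders, and the oriented-filter stage is omitted entirely.

import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(img, levels=7):
    # Band-pass / low-pass pairs from successive Gaussian blurring & downsampling
    pairs, current = [], img.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=2.0)
        pairs.append((current - blurred, blurred))   # (band-pass, local luminance)
        current = blurred[::2, ::2]                  # next octave
    return pairs

def channel_responses(img, levels=7):
    # Phase-independent "energy" per level, crudely normalized, followed by a
    # saturating transducer standing in for the contrast-discrimination nonlinearity
    responses = []
    for band, lowpass in laplacian_pyramid(img, levels):
        contrast = band / (np.abs(lowpass) + 1e-6)   # local contrast at this scale
        energy = contrast ** 2                        # squaring discards phase
        responses.append(energy / (energy + 0.01))    # placeholder transducer constant
    return responses

def jnd_map(img_a, img_b, levels=7):
    # Per-pixel distance between the two images' channel-response vectors,
    # with each level upsampled (nearest-neighbour) back to full resolution
    out = np.zeros(img_a.shape, dtype=float)
    for a, b in zip(channel_responses(img_a, levels), channel_responses(img_b, levels)):
        diff = np.abs(a - b)
        yy = np.minimum((np.arange(img_a.shape[0]) * a.shape[0]) // img_a.shape[0], a.shape[0] - 1)
        xx = np.minimum((np.arange(img_a.shape[1]) * a.shape[1]) // img_a.shape[1], a.shape[1] - 1)
        out += diff[np.ix_(yy, xx)]
    return out

def q_norm(jnd, q=2.0):
    # Single summary value from the JND map (placeholder exponent)
    return float(np.mean(jnd ** q) ** (1.0 / q))

The structural idea matches the slides: decompose each image into band-pass levels, convert to normalized contrast energy, pass it through a saturating transducer, and measure the distance between the two images' channel responses at every spatial position.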
The Study • Measure the monitor's horizontal & vertical MTF • Apply an MTF correction algorithm • Based on Reiker et al. Proc SPIE 1997;3035:355-368, but using a Wiener-filtering algorithm instead of the Laplacian pyramid filter • Compensates for mid- to high-frequency contrast losses • Run a human observer (ROC) study • Calculate the area under the curve (Az) • Run the JNDmetrix model on the images • Calculate JNDs • Compare human & model performance
Physical Evaluation • Siemens monitor: 2048 x 2560; monochrome; P45 phosphor; Dome MD-5 video board; DICOM calibrated • Luminance range: 0.8 – 500 cd/m² • Input to the model: each stimulus was imaged on the monitor with a CCD camera to capture display effects
Block diagram of the program for automatically finding the CRT MTF from a CCD image of a single CRT line • Step 1: Input image details such as magnification, CRT pixel size & orientation of the line • Step 2: Specify the ROI for the line profiles • Step 3: Perform a Fast Fourier Transform of the profiles & take their average • Step 4: Correct for the finite pixel width • Step 5: Fit a polynomial curve to obtain the normalization factor • Step 6: Divide the average FFT by this normalization factor to obtain the MTF • (Separate sets of CRT line profiles are used to find the vertical & horizontal MTFs)
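A minimal sketch of these steps, assuming the CCD image has already been cropped to an ROI containing a single vertical CRT line, with each row giving one profile across the line. The pixel sizes, magnification, and the simple sinc aperture correction are placeholder assumptions, not the values or the exact correction used in the study.

import numpy as np

def mtf_from_line_roi(roi, ccd_pixel_um=9.0, magnification=1.0, aperture_um=150.0):
    # Steps 2-3: FFT of each profile across the line, then average the spectra
    spectra = np.abs(np.fft.rfft(roi.astype(float), axis=1))
    avg = spectra.mean(axis=0)
    # Spatial-frequency axis in cycles/mm, referred back to the CRT faceplate
    pitch_mm = (ccd_pixel_um / magnification) / 1000.0
    freq = np.fft.rfftfreq(roi.shape[1], d=pitch_mm)
    # Step 4: correct for the finite pixel aperture with a sinc term (assumption)
    sinc = np.sinc(freq * aperture_um / 1000.0)      # np.sinc(x) = sin(pi*x)/(pi*x)
    corrected = avg / np.clip(np.abs(sinc), 1e-3, None)
    # Steps 5-6: low-order polynomial fit of the low-frequency points gives the
    # normalization factor, forcing MTF = 1 at zero frequency
    norm = np.polyval(np.polyfit(freq[:8], corrected[:8], 2), 0.0)
    return freq, corrected / norm

# freq, mtf = mtf_from_line_roi(vertical_line_roi)   # a vertical line yields the
# horizontal MTF; repeat with a horizontal-line ROI for the vertical MTF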
Images • Mammograms from the USF database • 512 x 512 sub-images extracted • 13 malignant & 12 benign mCa++ • The mCa++ are removed using a median filter • mCa++ added to 25 normals at reduced contrast levels • 75%, 50% & 25% mCa++ by weighted superposition of the signal-absent & signal-present versions • 250 total images • Decimated to 256 x 256 (for CCD imaging)
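A minimal sketch of this image-editing step, assuming 12-bit pixel data; the array names, median-filter kernel size, and clipping range are illustrative and not taken from the study (in practice the extracted signal would be restricted to the mCa++ region rather than the whole sub-image).

import numpy as np
from scipy.ndimage import median_filter

def make_contrast_version(with_mcalc, normal, weight, median_size=5):
    # Extract the mCa++ signal as original minus a median-filtered copy,
    # then add it to a normal background at a fraction of full contrast
    with_mcalc = with_mcalc.astype(float)
    signal = with_mcalc - median_filter(with_mcalc, size=median_size)
    out = normal.astype(float) + weight * signal
    return np.clip(out, 0, 4095).astype(np.uint16)   # assume 12-bit pixel data

# e.g. the 75%, 50% & 25% contrast versions for one normal background:
# versions = {w: make_contrast_version(abnormal_roi, normal_roi, w) for w in (0.75, 0.50, 0.25)}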
Edited Images (example panels): Original, 75% mCa++, 50% mCa++, 25% mCa++, 0% mCa++
MTF Restoration • If the MTF is known, the digital data can be processed with essentially the inverse of the display MTF(f) before being displayed: O′(f) = O(f)/MTF(f), where O(f) is the object • Displaying O′(f) on the monitor with MTF(f) then yields an image equivalent to the digital data O(f) • There is no degradation & the image on the CRT display looks just like the digital data: I(f) = O′(f)·MTF(f) = [O(f)/MTF(f)]·MTF(f) = O(f), where I(f) is the displayed image
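A minimal sketch of this compensation in the frequency domain; the small regularization term stands in loosely for the Wiener filtering mentioned earlier, since a pure inverse would amplify noise wherever the MTF is small. The separable 2-D MTF built from the measured 1-D horizontal & vertical curves, and the Gaussian stand-ins in the example, are assumptions for illustration.

import numpy as np

def mtf_compensate(image, mtf_h, mtf_v, k=0.01):
    # mtf_h, mtf_v: callables mapping |frequency| in cycles/pixel (0 .. 0.5)
    # to the measured horizontal / vertical display MTF
    ny, nx = image.shape
    fy = np.abs(np.fft.fftfreq(ny))                  # vertical frequencies
    fx = np.abs(np.fft.fftfreq(nx))                  # horizontal frequencies
    M = np.outer(mtf_v(fy), mtf_h(fx))               # separable 2-D MTF (assumption)
    H = M / (M ** 2 + k)                             # regularized (Wiener-style) inverse
    return np.real(np.fft.ifft2(np.fft.fft2(image.astype(float)) * H))

# Example with crude Gaussian stand-ins for the measured curves:
# corrected = mtf_compensate(img, lambda f: np.exp(-(f / 0.35) ** 2), lambda f: np.exp(-(f / 0.25) ** 2))

With k = 0 and MTF(f) > 0 everywhere, this reduces to the pure inverse O′(f) = O(f)/MTF(f) given above.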
Observer Study • 250 images • 256 x 256 @ 5 contrasts • 6 radiologists • No image processing • Ambient lights off • No time limits • 2 reading sessions ~ 1 month apart • Counter-balanced presentation • Rate confidence (6-point scale)
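The Az values reported in the results come from ROC analysis of these confidence ratings. As an illustration, here is a nonparametric stand-in for that calculation (the trapezoidal / Mann-Whitney area under the ROC curve); the study's Az values were presumably obtained from a binormal ROC fit, and the rating arrays in the example are invented, not study data.

import numpy as np

def empirical_auc(ratings_present, ratings_absent):
    # Probability that a lesion-present image receives a higher confidence
    # rating than a lesion-absent one (ties count half): the Mann-Whitney
    # estimate of the area under the ROC curve
    p = np.asarray(ratings_present, float)[:, None]
    a = np.asarray(ratings_absent, float)[None, :]
    return float(np.mean((p > a) + 0.5 * (p == a)))

# e.g. with 6-point confidence ratings for lesion-present vs. normal images
print(empirical_auc([6, 5, 5, 4, 3, 6], [1, 2, 3, 2, 4, 1]))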
Human ROC Results [figure: Az values; * P < 0.05]
Model Results [figure: JND values; * P < 0.05]
Model Results • The model predicted the same pattern of results as the human observers • MTF processing yields higher performance than no processing • At all lesion contrast levels • The correlation between human Az and model JNDs is quite high
Summary • MTF compensation significantly improves detection performance • The JNDmetrix model predicted human performance well • High correlation between human & model results • Future improvements to the model may include an attention component derived from eye-position data