Image Quality in Digital Pathology (from a pathologist’s perspective) Jonhan Ho, MD, MS
Image quality is good enough if: • It has a resolution of 0.12345 µm/pixel • It is captured in XYZ color space/pixel depth • It has an MTF curve that looks perfect • It has a focus quality score of 123 • It has a high/wide dynamic range
What is “resolution”? • Spatial resolution • Sampling period • Optical resolution • Sensor resolution • Monitor resolution • New Year’s resolution?
Optical resolution • The theoretical maximum resolution of a 0.75 NA lens is 0.41 µm; for a 1.30 NA lens, 0.23 µm. • Has NOTHING to do with magnification! (we will get to that later.)
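These figures are consistent with the Rayleigh criterion at an assumed wavelength near 500 nm (the slide does not state λ, so the wavelength is an inference):

$$ d = \frac{0.61\,\lambda}{\mathrm{NA}}, \qquad \frac{0.61 \times 0.50\ \mu\mathrm{m}}{0.75} \approx 0.41\ \mu\mathrm{m}, \qquad \frac{0.61 \times 0.50\ \mu\mathrm{m}}{1.30} \approx 0.23\ \mu\mathrm{m} $$

Magnification appears nowhere in the formula: only wavelength and NA set the limit, which is exactly the slide’s point.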
Depth of Field • As the aperture (numerical aperture) widens • Resolution improves • Depth of field narrows • Less tissue will be in focus
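A hedged sketch of why the tradeoff is steep: in the usual wave-optical approximation (keeping only the diffraction term and ignoring the geometric, detector-dependent term), depth of field falls with the square of NA, where n is the refractive index of the immersion medium:

$$ \mathrm{DOF} \approx \frac{\lambda\, n}{\mathrm{NA}^{2}} $$

Going from the 0.75 NA lens to the 1.30 NA lens of the previous slide therefore roughly halves the resolvable detail size (0.41 µm to 0.23 µm) while cutting the in-focus depth to about a third (0.75²/1.30² ≈ 0.33).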
Image quality is good enough if: • It has a resolution of 0.12345 µm/pixel • It is captured in XYZ color space/pixel depth • It has an MTF curve that looks perfect • It has a focus quality score of 123 • It has a high/wide dynamic range
Image quality is good enough if it is: • “Sharp” • “Clear” • “Crisp” • “True” • “Easy on the eyes”
Image quality is good enough if it is: • “Sharp” • “Clear” • “True”
Image quality is good enough if: • You can see everything you can see on a glass slide
Image quality is good enough if: • I can make a diagnosis from it
Image quality is good enough if: • I can make as good a diagnosis from it as I can from glass slides. • This is a concordance study • OK, but how do you measure this?!
Concordance validation • Some intraobserver variability • Even more interobserver variability • Order effect • “Great case” effect
Concordance validation • Case selection • Random, from all benches? • Enriched, with difficult cases? • Presented with only the initial H&E? • Allow ordering of levels, IHC, special stains? • If so, how can you compare with the original diagnosis? • Presented with all previously ordered stains? • If so, what about diagnosis bias? • How old a case to allow?
Concordance validation • Subject selection • Subspecialists? Generalists? • Do all observers read all cases, even if they are not accustomed to reading those types of cases? • Multi-institutional study • Do observers read cases from other institutions? • Staining/cutting protocol bias
Concordance validation • Measuring concordance • Force pathologists to report in discrete data elements? • This is not natural! (especially in inflammatory processes!) • What happens if 1 data element is minimally discordant? • Allow pathologists to report as they normally do? • Free text – who decides if they are concordant? How much discordance to allow? What are the criteria?
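Once diagnoses are forced into discrete data elements, agreement can at least be scored with a chance-corrected statistic such as Cohen’s kappa. A minimal sketch, assuming paired glass and digital reads of the same cases (the diagnosis labels are illustrative, not from any study):

```python
from collections import Counter

def cohens_kappa(reads_a, reads_b):
    """Chance-corrected agreement between two paired reads of the same cases."""
    n = len(reads_a)
    # Observed agreement: fraction of cases where the two reads match.
    p_o = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    # Expected chance agreement, from each read's marginal category frequencies.
    freq_a, freq_b = Counter(reads_a), Counter(reads_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Illustrative paired diagnoses for five cases.
glass   = ["melanoma", "nevus", "nevus", "BCC", "melanoma"]
digital = ["melanoma", "nevus", "melanoma", "BCC", "melanoma"]
print(cohens_kappa(glass, digital))  # ~0.69: substantial, but not perfect
```

Note that kappa only scores whatever was coded, which is the slide’s objection: the coding itself throws away the nuance of a free-text report.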
Concordance study bottom line • Very difficult to do, with lots of noise • Will probably conclude that pathologists can make equivalent diagnoses • At the end, we will have identified cases that are discordant, but what does that mean? • What caused the discordances? • Bad images? If so, what made them bad? • Familiarity with digital? • Lack of coffee?! • Still doesn’t feel like we’ve done our due diligence – what exactly are the differences between glass and digital?
PERCEPTION = QUALITY “Sharp, clear, true”
Psychophysics • The study of the relationship between the physical attributes of the stimulus and the psychological response of the observer
Kundel HL. Images, image quality and observer performance: new horizons in radiology lecture. Radiology. 1979 Aug;132(2):265-71.
Kundel on image quality • “The highest quality image is one that enables the observer to most accurately report diagnostically relevant structures and features.”
Conspicuity index formula • K = f(size, contrast, edge gradient / surround complexity) • Probability of detection = g(K)
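The slide leaves both functions unspecified. Purely as an illustration (the multiplicative form, weights, and logistic link below are assumptions, not Kundel’s published fit), a toy version might look like:

```python
import math

def conspicuity(size, contrast, edge_gradient, surround_complexity):
    """Toy conspicuity index K: grows with size, contrast, and edge gradient;
    shrinks as the surround gets busier. Functional form is illustrative only."""
    return size * contrast * edge_gradient / surround_complexity

def detection_probability(k, k_half=1.0):
    """Toy logistic link from conspicuity to detection probability;
    k_half is the conspicuity at which detection is a coin flip."""
    return 1 / (1 + math.exp(-(k - k_half)))

# A large, high-contrast target on a quiet background...
print(detection_probability(conspicuity(2.0, 0.8, 1.5, 1.0)))  # ~0.80
# ...versus a small, faint target in visual clutter.
print(detection_probability(conspicuity(0.5, 0.2, 0.5, 2.0)))  # ~0.27
```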
Kundel, 1979 • “Just as a limited alphabet generates an astonishing variety of words, an equally limited number of features may generate an equally astonishing number of pictures.”
Can this apply to pathology? • What is our alphabet? MORPHOLOGY! • Red blood cells • Identify inflammation by features • Eosinophils • Plasma cells • Hyperchromasia, pleomorphism, N:C ratio • Build features into microstructures and macrostructures • Put features and structures into clinical context and compare to normal context • Formulate an opinion
Advantages of feature-based evaluation • Better alleviates experience bias and context bias • Can better measure interobserver concordance • Connects pathologist-based tasks with measurable output understandable by engineers • Precedent in image interpretability (NIIRS, the National Imagery Interpretability Rating Scale)
NIIRS 1 “Distinguish between major land use classes (agricultural, commercial, residential)”
NIIRS 5 “Identify Christmas tree plantations”
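NIIRS grades interpretability by which real-world features an image supports identifying; the analogous move for pathology would be a morphologic feature checklist scored per modality. A minimal sketch (the feature list and binary coding are illustrative assumptions, not a published scale):

```python
# Visibility ratings for one case, read on glass vs. digital.
# 1 = feature confidently identifiable, 0 = not identifiable.
FEATURES = ["red blood cells", "eosinophils", "plasma cells",
            "hyperchromasia", "pleomorphism", "N:C ratio"]

glass   = {f: 1 for f in FEATURES}       # all features visible on glass
digital = {**glass, "eosinophils": 0}    # say the granules are lost digitally

# Fraction of features whose visibility matches between modalities:
agreement = sum(glass[f] == digital[f] for f in FEATURES) / len(FEATURES)
print(f"feature-level agreement: {agreement:.2f}")  # 0.83
```

A per-feature tally like this is the kind of measurable output an engineer can act on: “eosinophil granules are lost at this resolution” is fixable in a way “the diagnoses disagreed” is not.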
Disadvantages of feature-based evaluation • Doesn’t eliminate the “representative ROI” problem • Still a difficult study to do • How to select features? How many? • How to determine the gold standard? • What about features that are difficult to discretely characterize? (“hyperchromasia”, “pleomorphism”)
Bottom line for validation • All of these methods must be explored as they each have their advantages and disadvantages • Technical • Diagnostic concordance • Feature vocabulary comparison
Image perception - Magnification • Magnification is a ratio • Microscope: lens, oculars • Scanner: lens, sensor resolution, monitor resolution, monitor distance
Magnification at the monitor • Monitor pixel pitch = 270 µm, so 1 pixel = 270 µm at the monitor • 1 pixel = 10 µm at the sensor → 270 / 10 = 27× magnification from sensor to monitor • 1 pixel = 0.25 µm at the sample → 10 / 0.25 = 40× magnification from object to sensor • 27 × 40 = 1080× TOTAL magnification from object to monitor • This is the equivalent of a 108× objective on a microscope (1080× total ÷ 10× oculars)!?
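A minimal sketch of the same arithmetic (pixel pitches taken from the slide; the function and variable names are mine):

```python
def monitor_magnification(monitor_pitch_um, sensor_pitch_um, sample_pitch_um,
                          ocular_power=10):
    """Total object-to-monitor magnification, plus the objective power that
    would give the same total on a microscope with the given oculars."""
    sensor_to_monitor = monitor_pitch_um / sensor_pitch_um  # 270 / 10  = 27x
    object_to_sensor = sensor_pitch_um / sample_pitch_um    # 10 / 0.25 = 40x
    total = sensor_to_monitor * object_to_sensor            # 27 * 40 = 1080x
    return total, total / ocular_power                      # 1080x, 108x

total, equivalent_objective = monitor_magnification(270, 10, 0.25)
print(total, equivalent_objective)  # 1080.0 108.0
```

Monitor distance, the last factor on the earlier magnification slide, would scale perceived size further; the sketch covers only the pixel-pitch chain.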
Near point = 10 inches (25 cm) • What if the sensor were obscenely high resolution?
Other things that cause bad images • Tissue detection (the scanner missing tissue on the slide) • Focus (out-of-focus regions)