Visual Processing in Fingerprint Experts and Novices. Tom Busey, Indiana University, Bloomington. John Vanderkolk, Indiana State Police, Fort Wayne. www.indiana.edu/~busey/
How Do Experts Make Identifications? Easy Match Hard Match
What Perceptual Abilities Support Expertise? • Experts may learn the relevant features or dimensions, supported by naming • Tune detectors to specific characteristics of features (exclude noise) • Integrate information over larger regions of space • Superior visual memory to support matching from one print to the other
Study: a fragment, shown for one second
Mask: shown for either 200 ms or 5200 ms
Test: images, shown until response
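As a rough illustration of the trial sequence above, here is a minimal Python sketch; the stimulus-display calls are stand-in stubs rather than a real presentation package, and only the timings come from the slides.

```python
import time

# Stub standing in for real stimulus presentation (e.g., a graphics toolkit).
def show(label):
    print(f"showing: {label}")

def run_trial(long_delay=False):
    """One sequential-matching trial: study fragment, mask, then test until response."""
    show("study fragment")
    time.sleep(1.0)                          # study fragment for one second
    show("mask")
    time.sleep(5.2 if long_delay else 0.2)   # mask for either 200 ms or 5200 ms
    show("two test images")
    return input("Did one of the test images match the studied print? (y/n): ")
```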
Testing Fingerprint Expertise: X-AB Sequential Matching Task. Example stimulus pairs:
At Study: • The study image is rotated up to 90° in either direction and its brightness is jittered up or down • Reduces reliance on low-level features like the orientation of ridge flow or image brightness
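A minimal sketch of this study-phase manipulation, assuming a grayscale image as a NumPy array in [0, 1]; the rotation range comes from the slide, while the brightness-jitter range (±0.1) and the SciPy-based implementation are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def jitter_study_image(img, max_rotation_deg=90.0, max_brightness_shift=0.1):
    """Rotate up to 90 degrees in either direction and shift brightness up or down."""
    angle = rng.uniform(-max_rotation_deg, max_rotation_deg)
    rotated = rotate(img, angle, reshape=False, mode="nearest")
    shift = rng.uniform(-max_brightness_shift, max_brightness_shift)  # assumed range
    return np.clip(rotated + shift, 0.0, 1.0)

# Example on a synthetic stand-in image (concentric "ridges").
y, x = np.mgrid[-64:64, -64:64]
fingerprint = 0.5 + 0.5 * np.sin(np.hypot(x, y) / 4.0)
study_image = jitter_study_image(fingerprint)
```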
At Test: • Two image manipulations designed to simulate latent prints • Added noise • Partial masking
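For the first manipulation, a sketch of adding noise to roughly simulate a latent print; the noise type (Gaussian) and level are assumptions, since the slides do not specify them. The partial-masking manipulation is sketched after the "Logic of Partial Masking" slide below.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(img, sigma=0.25):
    """Overlay Gaussian pixel noise on an image in [0, 1]; sigma is an assumed value."""
    return np.clip(img + rng.normal(0.0, sigma, size=img.shape), 0.0, 1.0)
```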
Behavioral Data [figure: accuracy for Full Images, Partial Images, Full Images in Noise, and Partial Images in Noise] Experts: no effect of delay, and an interaction between noise and partial masking.
Behavioral Data [same conditions] Experts do remarkably well with Full Images in noise. Configural processing?
Partial Masking [figure: a fingerprint combined with semi-transparent masks, original and inverse, to produce partially masked fingerprints; summation recovers the original fingerprint]
Logic of Partial Masking [figure: one half, the other half, and both halves; linear summation of the two partially masked fingerprints recovers the original fingerprint]
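A small sketch of this logic, assuming the two semi-transparent masks are complementary so that their weights sum to one at every pixel (the exact mask values are assumptions for illustration):

```python
import numpy as np

def complementary_masks(shape, strong=0.75):
    """Two semi-transparent masks whose weights sum to 1 everywhere (values assumed)."""
    mask_a = np.full(shape, 1.0 - strong)
    mask_a[:, : shape[1] // 2] = strong      # left half more visible in mask_a
    mask_b = 1.0 - mask_a                    # the inverse mask
    return mask_a, mask_b

# Synthetic stand-in fingerprint.
y, x = np.mgrid[-64:64, -64:64]
fingerprint = 0.5 + 0.5 * np.sin(np.hypot(x, y) / 4.0)

mask_a, mask_b = complementary_masks(fingerprint.shape)
partial_a = fingerprint * mask_a             # "one half" stimulus
partial_b = fingerprint * mask_b             # the other half

# Linear summation of the two partially masked prints recovers the original.
assert np.allclose(partial_a + partial_b, fingerprint)
```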
Evidence for Configural Processing: Multinomial Modeling To test for configural processing, we can use the accuracy rate in the partial image condition to make a prediction for the full image condition, assuming no configural processing. If performance in the full image condition exceeds the prediction, we have evidence consistent with configural processing. The prediction is based on a Multinomial Processing Tree implementation of probability summation.
Evidence for Configural Processing: Multinomial Modeling Experts in noise: We predict performance in the full image condition to be about 75% correct. Instead it is around 90%. Experts are doing better with the whole image than we predict they would do based on partial-image performance. This is configural processing at work.
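A minimal sketch of the kind of probability-summation prediction involved, assuming a simple high-threshold guessing model rather than the authors' exact multinomial processing tree; the partial-image accuracy used below is illustrative only.

```python
# A half-image is either "recognized" with probability d, or the observer
# guesses correctly with probability g (0.5 assumed for a two-alternative test).

def detect_prob(partial_accuracy, guess=0.5):
    """Invert partial-image accuracy to a per-half recognition probability d."""
    return (partial_accuracy - guess) / (1.0 - guess)

def predicted_full_accuracy(partial_accuracy, guess=0.5):
    """Full-image prediction if the two halves contribute independently."""
    d = detect_prob(partial_accuracy, guess)
    detect_either = 1.0 - (1.0 - d) ** 2          # probability summation
    return detect_either + (1.0 - detect_either) * guess

# Illustrative value only: a partial-image accuracy of 0.65 yields a predicted
# full-image accuracy of about 0.75, well below the ~0.90 experts actually show.
print(predicted_full_accuracy(0.65))   # ~0.755
```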
Configural Processing in Faces: The 'Thatcher Illusion' When the face is inverted, features are perceived individually and the image looks OK. When the face is upright, features are perceived in context and the image looks grotesque. (Thompson, 1980)
EEG Recording Basics • Record from the surface of the scalp • Amplify 20,000 times • Electrical signals are related to neuronal activity, mainly post-synaptic potentials in cortex • Very small signals, very noisy data.
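To make the last point concrete: because single-trial EEG is so noisy, event-related potentials are recovered by averaging over many trials. A synthetic-data sketch of that idea (no real recordings; all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                    # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1.0 / fs)          # epoch from -100 ms to +400 ms

# Synthetic "N170": a small negative deflection peaking near 170 ms (microvolts).
evoked = -2.0 * np.exp(-((t - 0.170) ** 2) / (2 * 0.02 ** 2))

n_trials = 200
noise_sd = 5.0                              # single-trial noise level (assumed)
trials = evoked + rng.normal(0.0, noise_sd, size=(n_trials, t.size))

erp = trials.mean(axis=0)                   # averaging shrinks noise by ~sqrt(n_trials)
print(f"average-ERP minimum near {t[np.argmin(erp)] * 1000:.0f} ms")
```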
EEG and Configural Processing Faces produce a strong component over the right hemisphere at about 170 ms after stimulus onset, which is called the N170. Inverted faces cause a delay of 10-20 ms in the N170. Trained objects (Greebles) show a delay in the N170 component with inversion, but only after training. Data from Rossion, Gauthier, Tarr, Despland, Bruyer, Linotte & Crommelinck (2000) Data from Rossion, Gauthier, Goffaux, Tarr & Crommelinck (2002)
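As a sketch of how such a latency delay can be quantified, one common approach is to take each condition's average ERP and find the most negative sample in a window around 170 ms; the window bounds and the synthetic ERPs below are assumptions for illustration, not the recorded data.

```python
import numpy as np

fs = 500
t = np.arange(-0.1, 0.4, 1.0 / fs)           # epoch time axis in seconds

def n170_latency(erp, t, tmin=0.130, tmax=0.230):
    """Latency of the most negative sample in an assumed N170 window."""
    window = (t >= tmin) & (t <= tmax)
    return t[window][np.argmin(erp[window])]

def synthetic_erp(peak_time):
    """Synthetic stand-in ERP with a negative peak at peak_time seconds."""
    return -2.0 * np.exp(-((t - peak_time) ** 2) / (2 * 0.02 ** 2))

upright = synthetic_erp(0.170)
inverted = synthetic_erp(0.186)              # built with a 16 ms later peak, within
                                             # the 10-20 ms range reported for faces

delay_ms = (n170_latency(inverted, t) - n170_latency(upright, t)) * 1000
print(f"N170 inversion delay: {delay_ms:.0f} ms")
```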
Fingerprints have an orientation • Experts always view fingerprints with the tip pointing upwards.
An Obvious Experiment: Show upright and inverted fingerprints to fingerprint examiners and novices. If experts process fingerprints configurally, we should see a delayed N170 to inverted fingerprints. Also test faces to replicate the face-inversion effect in our subjects. Test both identification and categorization tasks.
The Bottom Line • Experts perform better than Novices in all conditions • Better in noise • Better at longer delays • Really good when both halves are present at test • Attributed to configural processing • Supported by EEG recording • Only Experts show an effect of inversion on the N170 when viewing fingerprints • Places strong constraints on the locus of expertise • Perceptual in nature: the N170 reflects late stages of perceptual processing • Can't be due to demand characteristics • Many plausible perceptual and cognitive models suggest that this kind of perceptual expertise would help in actual fingerprint examinations
Summary of Experiments Fingerprint experts demonstrate strong performance in an X-AB matching task, robustness to noise, and evidence for configural processing when stimuli are presented in noise. This latter finding was confirmed using upright and inverted fingerprints in an EEG experiment: Experts showed a delayed N170 component for inverted fingerprints in the same channel in which they show a delayed N170 for inverted faces. Experts appear to process upright fingerprints in part using configural processing, which stresses relational information and implies dependencies between individual features. In the case of fingerprints, configural processing may arise from idiosyncratic feature elements instead of well-defined features such as eyes and mouths.