
Quantitative Methods for Forensic Footwear Analysis


Presentation Transcript


  1. Quantitative Methods for Forensic Footwear Analysis Presented by: Martin Herman Other Team Members: Gunay Dogan, Eve Fleisig, Janelle Henrich, Sarah Hood, Hari Iyer, Yooyoung Lee, Steven Lund, Gautham Venkatasubramanian Information Technology Laboratory, NIST January 25, 2018

  2. Funding Support • National Institute of Justice • IAA# DJO-NIJ-17-RO-0202 • NIST • Internal funds

  3. MOTIVATION
  • 2009 NAS and 2016 PCAST reports:
    • Footwear identifications are largely subjective
    • Questions about reliability
    • Questions about scientific validity
  • Need for quantitative assessments of footwear evidence
  • Need for increased objectivity of footwear analysis
  • Need to improve the measurement science underpinnings of forensic footwear analysis through quantitative analysis
  • Need for algorithmic approaches for quantitative analysis by the forensic footwear community

  4. GOALS
  • Develop quantitative, objective methods for footwear impression comparisons
    • High degree of repeatability & reproducibility
    • Easier to measure accuracy with objective methods
    • Scientifically defensible
  • Provide software tools for practitioners to use in casework

  5. Quantitative Footwear Impression Comparisons: Approach
  • For use by examiners in evidence evaluation
  • FRStat for fingerprints (DFSC) – currently in use
  • Proof of concept demonstration
  • Diagram (Current Examiner Comparison Process): crime scene impressions and test impressions from the suspect shoe are compared, yielding a conclusion plus report.

  6. Quantitative Comparisons: End-to-End Proof of Concept
  • Casework path: crime scene impression image and test impression → feature extraction → feature-based matching → comparison score (the casework comparison score).
  • Reference path: case-relevant, ground-truth-known mated and non-mated image pairs (crime scene, test impression) are run through the same steps to generate score distribution and ROC charts.
  • The casework score is plotted on the charts, yielding charts, summaries, conclusions, and/or error rates (a minimal orchestration sketch follows).
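The end-to-end flow above can be read as a small amount of orchestration code. The sketch below is illustrative only: compare() and end_to_end() are hypothetical names, and the toy compare() function merely stands in for the real feature extraction and feature-based matching stages (MCM_Dist, DT_MCM_Dist, or CT later in the deck).

# Illustrative sketch of the end-to-end flow; compare() is a placeholder for
# the actual feature extraction + feature-based matching, not the NIST code.
import numpy as np

def compare(scene_features: np.ndarray, test_features: np.ndarray) -> float:
    """Toy matcher: similarity in (0, 1], higher means more similar."""
    return float(np.exp(-np.linalg.norm(scene_features - test_features)))

def end_to_end(scene, test, reference_pairs):
    """reference_pairs: iterable of (scene_features, test_features, is_mated)
    tuples with known ground truth, chosen to be case-relevant."""
    casework_score = compare(scene, test)
    mated = np.array([compare(s, t) for s, t, m in reference_pairs if m])
    nonmated = np.array([compare(s, t) for s, t, m in reference_pairs if not m])
    # The mated / non-mated score arrays feed the score-distribution and ROC
    # charts, and the casework score is then plotted on those charts.
    return casework_score, mated, nonmated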

  7. Proposed Examiner Comparison Process
  • As in the current process, crime scene impressions and test impressions from the suspect shoe are compared, yielding a conclusion plus report.
  • During the comparison, the examiner also considers additional information: score distribution/ROC charts and error rates.

  8. Data: Staged Crime Scene

  9. Data: Augmented Crime Scene

  10. (End-to-end proof-of-concept pipeline diagram from Slide 6, repeated.)

  11. Image Feature Extraction
  • Existing algorithms and literature for automated shoeprint matching are limited to database retrieval, and their performance is not adequate for evidence evaluation.
  • Difficulties in automatically identifying outsole features (design, wear, size, and RACs – randomly acquired characteristics) in crime scene images: partial data, occlusions, smearing, noise, low contrast, cluttered background, multiple impressions, etc.
  • Our approach: hybrid human/computer feature extraction.

  12. GUI for Image Mark-Up – Same Source: Staged Crime Scene and Augmented Test Impression

  13. Auto Adjust

  14. Auto Adjust - Circle

  15. Copy & Paste with Auto Adjust

  16. Find Along Path

  17. Find Along Path - Circles

  18. Find Parallel

  19. Find Parallel – Concentric Circles

  20. (End-to-end proof-of-concept pipeline diagram from Slide 6, repeated.)

  21. Feature-based Matching: Three Preliminary Algorithms
  • MCM_Dist: Maximum Clique Matching based on feature point distance differences between the two impressions (a rough sketch of the maximum-clique idea follows this slide).
  • DT_MCM_Dist: Delaunay Triangulation Maximum Clique Matching based on feature point distance differences. Comparison score: (number of maximum cliques) / (total number of cliques).
  • CT (Comparable Triangles): number of comparable triangles based on triangle area and angle differences.
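As a rough illustration of the maximum-clique idea behind MCM_Dist, the toy sketch below builds a correspondence graph over candidate feature-point pairings and scores a comparison by the size of its largest distance-consistent clique. The tolerance, the clique-size normalization, and the use of networkx are illustrative assumptions, not the NIST implementation; the DT_MCM_Dist clique-count score and the CT triangle test are not reproduced here.

# Toy maximum-clique matching over marked feature points (illustrative only).
import numpy as np
import networkx as nx

def max_clique_score(pts_a: np.ndarray, pts_b: np.ndarray, tol: float = 2.0) -> float:
    """Nodes are candidate pairings (i, j); an edge joins two pairings whose
    within-impression distances agree to within tol (distance consistency)."""
    n_a, n_b = len(pts_a), len(pts_b)
    G = nx.Graph()
    G.add_nodes_from((i, j) for i in range(n_a) for j in range(n_b))
    nodes = list(G.nodes)
    for x, (i1, j1) in enumerate(nodes):
        for i2, j2 in nodes[x + 1:]:
            if i1 != i2 and j1 != j2:
                d_a = np.linalg.norm(pts_a[i1] - pts_a[i2])
                d_b = np.linalg.norm(pts_b[j1] - pts_b[j2])
                if abs(d_a - d_b) <= tol:
                    G.add_edge((i1, j1), (i2, j2))
    clique, _ = nx.max_weight_clique(G, weight=None)  # exact; fine for small marked point sets
    return len(clique) / min(n_a, n_b)

# A mated pair (same points, small perturbation) should score near 1.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(10, 2))
print(max_clique_score(pts, pts + rng.normal(0, 0.3, size=pts.shape)))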

  22. (End-to-end proof-of-concept pipeline diagram from Slide 6, repeated.)

  23. Score Distributions (DT_MCM_Dist case): kernel density estimates of the mated and non-mated comparison scores. Maximum separation between the two distributions is the goal (a small sketch with synthetic scores follows).
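A small sketch of how such charts can be produced once reference scores are available. The scores here are synthetic stand-ins; the real charts use DT_MCM_Dist scores computed on the case-relevant mated and non-mated reference pairs.

# Synthetic stand-in scores only; real inputs are comparison scores for
# case-relevant mated and non-mated reference pairs.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
mated = rng.beta(6, 3, size=500)      # toy mated-pair scores
nonmated = rng.beta(2, 7, size=500)   # toy non-mated-pair scores

grid = np.linspace(0, 1, 200)
density_mated = gaussian_kde(mated)(grid)        # smoothed mated-score distribution
density_nonmated = gaussian_kde(nonmated)(grid)  # smoothed non-mated-score distribution

# ROC: sweep a decision threshold; TPR comes from mated pairs, FPR from non-mated.
thresholds = np.linspace(1.0, 0.0, 101)
tpr = np.array([(mated >= t).mean() for t in thresholds])
fpr = np.array([(nonmated >= t).mean() for t in thresholds])
# The further apart the two density curves (and the closer the ROC hugs the
# top-left corner), the better the score separates mated from non-mated pairs.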

  24. (End-to-end proof-of-concept pipeline diagram from Slide 6, repeated.)

  25. Casework Example: Simulated Crime Scene vs. Test Impression. Comparison score = 0.303 with the DT_MCM_Dist algorithm. Original images courtesy of Ron Mueller.

  26. Kernel Density Estimation: the mated and non-mated density curves evaluated at the casework score of 0.303.

  27.–29. Some Possible Ways to Summarize Results
  • Score distribution and ROC charts are created using a case-relevant reference data set: known pairs of mated and non-mated impressions that are representative of impression pairs obtained under conditions similar to the current crime scene.
  • The casework score is greater than 75% of mated-pair scores, while 2.2% of non-mated pairs have scores higher than the casework score.
  • If the casework pair were considered a match, then all pairs with higher scores would also be matches; all non-mates with higher scores would be false matches, so at least 2.2% of the non-mates would be mislabeled, giving a false positive rate (FPR) ≥ 2.2%.
  • If the casework pair were considered a non-match, then all pairs with lower scores would also be non-matches; all mates with lower scores would be false non-matches, so at least 75% of the mates would be mislabeled, giving a false negative rate (FNR) ≥ 75%.
  • Apply Kernel Density Estimation (or a similar method) to the score histograms and obtain a Score-based Likelihood Ratio (SLR) as the ratio of the density heights at the casework score: SLR = 13.23. Different modeling methods will result in different SLRs (a minimal sketch of these quantities follows).
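A minimal sketch of the summary quantities described above, assuming arrays of reference scores are already in hand; the slide's figures (75%, 2.2%, SLR = 13.23) come from the actual case-relevant reference data and modeling, not from this code.

# Summary quantities for a casework score against reference score arrays.
import numpy as np
from scipy.stats import gaussian_kde

def summarize(casework_score: float, mated: np.ndarray, nonmated: np.ndarray):
    frac_mated_below = (mated < casework_score).mean()        # lower bound on FNR if called a non-match
    frac_nonmated_above = (nonmated > casework_score).mean()  # lower bound on FPR if called a match
    # Score-based likelihood ratio: ratio of the estimated density heights at the score.
    slr = gaussian_kde(mated)(casework_score)[0] / gaussian_kde(nonmated)(casework_score)[0]
    return frac_mated_below, frac_nonmated_above, slr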

  30. Understanding the Results: the comparison score obtained for the casework pair of impressions, together with the score distribution and ROC charts and a careful description of the case-relevant reference dataset, can be used to help make weight-of-evidence assessments.
