This analysis presents SATE 2010 findings, showing how tool-based and manual evaluations improve the assessment of security quality. It examines weaknesses, vulnerability categories, and the role of CVEs, and emphasizes combining automated tools with human analysis to strengthen software security.
SATE 2010 Analysis
Aurélien Delaitre, NIST, aurelien.delaitre@nist.gov
October 1, 2010
The SAMATE Project: http://samate.nist.gov/
Outline • What tools find • What people find • CVEs • Manual analysis
Building on SATE 2009 (diagram: SATE 2009 results feeding into SATE 2010)
Improving categories (diagram: warning classifications Security, Quality, Insignificant, True, compared between SATE 2009 and SATE 2010)
Improving the guidelines • 45 lines → 314 lines • Considering weakness types • Better uniformity in evaluations
Decision process (decision tree: warning Context, Path, and Type lead to a classification of Security, Quality, Insignificant, Unknown, or False)
Sampling • Warnings of each class, severities 1–4
CVEs • Key elements of the path for matching: blocks of code, sink or upflow path elements • But not exhaustive
Example:

/* Dialect Index */
dialect = tvb_get_letohs(tvb, offset);
if (si->sip && si->sip->extra_info_type == SMB_EI_DIALECTS) {
    dialects = si->sip->extra_info;
    if (dialect <= dialects->num) {
        dialect_name = dialects->name[dialect];
    }
}
if (!dialect_name) {
    dialect_name = "unknown";
}
Manual analysis • Dovecot for C • Pebble for Java (used a slightly later version)
Dovecot • Fuzzing • Threat modeling • Code review • No remotely exploitable vulnerability found
Pebble • Penetration testing • Threat modeling • Code review • Several vulnerabilities found
Tools ∩ humans • No human findings for Dovecot • No matches for Chrome and Wireshark
Interpretation • CVEs ∩ tool findings = ∅ (Venn diagram: CVEs and tool findings as disjoint subsets of all weaknesses)
Interpretation • CVE descriptions ∩ tool findings = ∅ (Venn diagram: CVE descriptions within CVEs; tool findings disjoint, all within all weaknesses)