CVE Behind the Scenes: The Complexity of Being Simple
Steve Christey, The MITRE Corporation
July 11, 2001
© 2001 The MITRE Corporation
Common Vulnerabilities and Exposures (CVE) • CVE at a glance • Criteria for a good CVE • CVE submission stage • Content decisions • CVE candidate stage • Reserving candidate numbers • CVE entry stage • Comments on CVE names • Top 10 vulnerability types • Managing diverse perspectives
CVE at a Glance
CVE-1999-0067
Description: CGI phf program allows remote command execution through shell metacharacters.
References: CERT:CA-96.06.cgi_example_code, XF:http-cgi-phf, BID:629
http://cve.mitre.org
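At its core, each CVE item is a tiny record: a name, a one-line description, and a list of references. A minimal sketch in Python (the `CVEEntry` class and its field names are illustrative, not an official schema):

```python
from dataclasses import dataclass, field

@dataclass
class CVEEntry:
    """One CVE item: a standard name, a short description, and references."""
    name: str                              # e.g. "CVE-1999-0067"
    description: str
    references: list = field(default_factory=list)

# The phf example from the slide above.
phf = CVEEntry(
    name="CVE-1999-0067",
    description=("CGI phf program allows remote command execution "
                 "through shell metacharacters."),
    references=["CERT:CA-96.06.cgi_example_code", "XF:http-cgi-phf", "BID:629"],
)
```

The deliberate sparseness is the point: CVE names the problem, and the richer details live in the referenced databases.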
CVE Enables Detailed Product Comparisons • CVE can normalize how vulnerabilities are counted • But how should it normalize them?
Using CVE in the Enterprise
[Diagram: a vulnerability scanner, an IDS, security bulletins, and web sites all report issues by CVE name (CVE-1, CVE-2, CVE-3), letting the security manager correlate them:]
• “I need to fix these vulnerabilities: CVE-1 is on my network”
• “My scanner can’t find CVE-3, and I need patches for CVE-1”
• “My IDS can’t detect attacks on CVE-2”
• CVE-1: that system is not vulnerable, so don’t send an alert
• CVE-2: my scanner must work well
• CVE-3: my IDS must work well
Criteria for a Good CVE • From “Towards a Common Enumeration of Vulnerabilities” (Mann/Christey, 2nd Workshop on Research with Security Vulnerability Databases, Purdue, January 21-22, 1999) 1. Enumerate and discriminate between all known vulnerabilities 2. Assign a standard, unique name to each vulnerability 3. Be publicly "open" and shareable without distribution restrictions 4. Exist independently of the multiple perspectives of what a vulnerability is • Evolving design considerations • Be as simple as possible • Minimize workload and overlap with databases • Support the lookup of CVE names • Involve diverse players across the security community
CVE Editorial Board Members (as of June 4, 2001)
• Response Teams: Ken Armstrong (CanCERT), Bill Fithen (CERT/CC), Shawn Hernan (CERT/CC), Scott Lawler (DOD-CERT), John Rhodes (DOE-CIAC)
• Tool Vendors: Andy Balinsky (Cisco), Scott Blake (BindView), Tim Collins (NFR), Renaud Deraison (Nessus), John Flowers (nCircle), Andre Frech (ISS), Kent Landfield (infoAssure), Jim Magdych (NAI), David Mann (BindView), Craig Ozancin (AXENT), Paul Proctor (CyberSafe), Mike Prosser (Symantec), Marcus Ranum (NFR), Tom Stracener (nCircle), Bill Wall (Harris), Kevin Ziese (Cisco)
• Academic/Educational: Matt Bishop (UC Davis Computer Security Lab), Eric Cole (SANS), Pascal Meunier (Purdue University CERIAS), Alan Paller (SANS Institute), Gene Spafford (Purdue University CERIAS)
• Information Providers: Russ Cooper (NTBugtraq), Elias Levy (Bugtraq, Security Focus), Peter Mell (NIST), Ron Nguyen (Ernst and Young), Ken Williams (eSecurityOnline.com)
• OS Vendors: Troy Bollinger (IBM), Casper Dik (Sun), David LeBlanc (Microsoft)
• Other Security Experts: Kelly Cooper (GTE Internetworking), Steve Northcutt (SANS), Larry Oliver (IBM), Adam Shostack (Zero-Knowledge Systems), Stuart Staniford (Silicon Defense)
• MITRE: Dave Baker, Steve Christey, Bill Hill
http://cve.mitre.org/board/
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Issue: What is a Vulnerability? • CVE was originally called “Common Vulnerability Enumeration” • Security tools included many “non-vulnerabilities” • “Terminological warfare” by Editorial Board in August 1999 • 2 main debates • What is a vulnerability? • Should CVE include things that aren’t vulnerabilities? • Primary example: running finger (CVE-1999-0612) • “Stepping stone” but not directly exploitable • Various alternate terms were debated • “Exposure” wasn’t being used that often back then, and there was a strong need to keep the CVE acronym, so... • See: • http://cve.mitre.org/about/terminology.html • http://cve.mitre.org/board/archives/1999-08/threads.html Vulnerability definitions vary widely! Criterion #1: Enumerate and discriminate between all known vulnerabilities
Issue: What is a Real Vulnerability?
• ~50% of all issues are not publicly acknowledged by the vendor
  • http://cve.mitre.org/board/archives/2000-09/msg00038.html
• Many vulnerabilities are found in obscure software by unknown researchers without independent confirmation
• Resource-intensive to verify every report
• Many sources focus on comprehensiveness and timeliness
• If it’s reported but it may not be real, should it be added to CVE?
  • It will at least be reviewed
  • How much verification is necessary?
• Extreme example: CAN-1999-0205, denial of service in Sendmail 8.6.11 and 8.6.12
  • Could not be replicated by vendor
  • Checked by multiple tools (which may only compare banners)
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Issue: What is a Known Vulnerability? • Only include “publicly known” vulnerabilities • Rely on data sources to provide MITRE with vulnerability lists • 10 sources for legacy issues (1999 and earlier) • 8500 total items provided • 4 periodic vulnerability summaries used for new issues • See http://cve.mitre.org/cve/datasources.html for details • Some vulnerabilities are not announced through normal public sources that allow peer review Criterion #1: Enumerate and discriminate between all known vulnerabilities
Identifying Known Vulnerabilities: The CVE Submission Stage
• Sources provide MITRE with their lists of all known vulnerabilities
• MITRE’s CVE Content Team processes submissions in three stages:
  • Conversion: convert items in each database/tool to submission format; assign temporary ID’s to each submission
  • Matching: find the most similar submissions, candidates, and entries based on keywords
  • Refinement: combine all matched submissions into groups; use each group to create candidates
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Submission Conversion
• Data format is often unique to the source
• Specialized scripts convert each source to a standard submission format
• Extract description, references, and the source’s own unique identifiers
• Example sources:
  • Source A: well-formatted text, uses symbolic ID’s (A:1 iis-dos, A:2 ftp-pasv, A:3 sendmail-headers, A:4 cgi-overflow, A:5 cde-privilege-drop)
  • Source B: Excel spreadsheet, uses numeric ID’s (row numbers) (B:1 17, B:2 179, B:3 524, B:4 29)
  • Source C: web page, uses numeric ID’s (C:1 19, C:2 23, C:3 46, C:4 21)
Criterion #1: Enumerate and discriminate between all known vulnerabilities
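A per-source converter script reduces to a small parser emitting standard submission records. A sketch for one hypothetical source (the tab-separated input format, the dict field names, and the function name are all assumptions for illustration):

```python
def convert_source_a(lines):
    """Source A arrives as well-formatted text, one 'symbolic-id<TAB>description'
    per line; emit standard submissions with temporary ID's A:1, A:2, ..."""
    submissions = []
    for n, line in enumerate(lines, start=1):
        sym_id, desc = line.split("\t", 1)
        submissions.append({"id": f"A:{n}", "name": sym_id, "desc": desc})
    return submissions

subs = convert_source_a(["iis-dos\tDenial of service in IIS",
                         "ftp-pasv\tFTP server crash via PASV"])
# subs[0]["id"] == "A:1"; subs[1]["name"] == "ftp-pasv"
```

Each of the ten legacy sources would get its own such script, since the formats (text, spreadsheet, web page) share nothing but the need for a temporary ID.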
Normalizing Keywords
• Convert description, short name, and references into keywords
  • Periods, hyphens, etc. are NOT treated as word separators
• Standardize keywords using a thesaurus (raw submission → normalizer → normalized submission)
• The normalizer maps similar terms to a single, standard term
  • Reduces the chance of missing a match
  • Example thesaurus groups: buffer overflow/buffer overrun; pfdisplay/pfdispaly; wuftpd/wu-ftp/wu-ftpd/wuftp; IE/MSIE/IE4/internet_explorer
  • Example: “pfdisplay” is a common mistake, since the misspelled word (“pfdispaly”) is the actual name of the program!
Criterion #1: Enumerate and discriminate between all known vulnerabilities
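The normalizer amounts to whitespace-only tokenization plus a lookup table. A toy sketch (the thesaurus entries here are a tiny illustrative subset, and the choice of canonical spellings is an assumption):

```python
# Toy thesaurus: each variant maps to one canonical keyword.
THESAURUS = {
    "pfdisplay": "pfdispaly",      # the misspelling IS the program's real name
    "wu-ftp": "wuftpd", "wu-ftpd": "wuftpd", "wuftp": "wuftpd",
    "msie": "internet_explorer", "ie4": "internet_explorer",
    "overrun": "overflow",
}

def keywords(text):
    # Split only on whitespace: periods and hyphens stay inside tokens,
    # so "wu-ftpd" and "8.6.11" survive as single keywords.
    return [tok.strip('.,;()"\'') for tok in text.lower().split()]

def normalize(text):
    return [THESAURUS.get(tok, tok) for tok in keywords(text)]

normalize("Buffer overrun in wu-ftp 2.6.0")
# ["buffer", "overflow", "in", "wuftpd", "2.6.0"]
```

Keeping hyphens and periods inside tokens is the key design choice: it preserves exactly the application names and version numbers that make a match distinctive.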
Submission Matching
• Each submission is matched against all submissions, candidates, and entries
• Information retrieval techniques provide a list of the 10 closest matches
• Metric favors rare keywords over common ones
  • Tends to favor application names and version numbers
  • Can overly favor “incidentally rare” terms
• A human marks which matches are correct
[Diagram: submissions such as B:1 17, C:1 19, and C:2 23 matched against other submissions (A:1 iis-dos, A:2 ftp-pasv, A:3 sendmail-headers, B:2 179, B:3 524, C:4 21) and against existing names (CAN-1999-1234, CVE-1999-1547)]
Criterion #1: Enumerate and discriminate between all known vulnerabilities
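A back-of-the-envelope version of the rare-keyword metric is an IDF-style weighting; the slides do not specify the real formula, so this is only a sketch of the idea:

```python
import math
from collections import Counter

def idf_weights(corpus):
    """Weight keywords by rarity: a term appearing in few items scores high,
    which is why application names and version numbers dominate matches."""
    df = Counter()
    for kws in corpus.values():
        df.update(set(kws))
    n = len(corpus)
    return {t: math.log(n / df[t]) + 1.0 for t in df}

def closest_matches(query_kws, corpus, k=10):
    """Rank existing submissions/candidates/entries by weighted keyword overlap."""
    w = idf_weights(corpus)
    scores = {
        item: sum(w.get(t, 0.0) for t in set(query_kws) & set(kws))
        for item, kws in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

corpus = {
    "CAN-1999-1234": ["iis", "denial", "of", "service"],
    "A:2":           ["ftp", "pasv", "crash"],
    "C:4":           ["sendmail", "headers"],
}
closest_matches(["iis", "denial"], corpus, k=1)
```

The failure mode the slide warns about falls out directly: a typo or an "incidentally rare" word gets a large weight for the wrong reason, which is why a human still vets the top-10 list.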
Submission Refinement
• Determine submission groups by inferring match relationships
  • B:1 = A:2 and C:1 = B:1 ⇒ (A:2, B:1, C:1)
• Ensure that the submissions all describe the same problem
  • Matching can be inaccurate
  • Deep analysis is a bottleneck
• Verify that submissions are duplicates of the candidate (or entry)
• Collect references
• Write description
• Apply content decisions
Submissions with CVE names could reduce much of this effort!
Criterion #1: Enumerate and discriminate between all known vulnerabilities
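The transitive grouping step (B:1 = A:2 and C:1 = B:1 implies one group of three) is essentially union-find. A sketch:

```python
def group_submissions(pairs):
    """Transitively close pairwise matches: B:1=A:2 and C:1=B:1 => one group."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

group_submissions([("B:1", "A:2"), ("C:1", "B:1"), ("A:1", "B:3")])
```

Grouping is the cheap part; the expensive part, as the slide notes, is the human analysis verifying that every member of a group really describes the same underlying problem.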
Some Challenges in Refinement • Identifying duplicate issues • Can be difficult to trace a problem in multiple software packages back to a common codebase • Differences in reporting between researcher and vendor • Lack of sufficient details in advisories • Even credits to a particular person aren’t always sufficient! • A problem when similar issues are discovered at the same time • Change logs can be vague • Diffs (when available) may not provide sufficient context • Delays between initial discovery and advisory • Rediscoveries of previously reported problems • Understanding unique, application-specific vulnerabilities • Managing the volume of information • Writing a good description (compact but with the right keywords) • Writing and following good consistency rules Criterion #1: Enumerate and discriminate between all known vulnerabilities
Content Decisions • Explicit guidelines for content of CVE entries • Ensure and publicize consistency within CVE • Provide “lessons learned” for researchers • Document differences between vulnerability “views” • Three basic types • Inclusion: What goes into CVE? What doesn’t, and why? • Level of Abstraction: One or many entries for similar issues? • Format: How are CVE entries formatted? • Difficult to document • “[It’s] like trying to grasp wet corn starch” (Board member) Incomplete information is the bane of consistency - and content decisions! Criterion #1: Enumerate and discriminate between all known vulnerabilities
Example Content Decision: SF-LOC (Software Flaws/Lines of Code)
“Create separate entries for problems in the same program that are of different types, or that appear in different software versions.”
• Older versions of this CD distinguished between problems of the same type
  • The “split-by-default” approach generated “too many” candidates
  • Also “unfair” to vendors with source code or detailed reports
  • Once produced 8 candidates where other tools and databases would have created only 1 vulnerability record
• Affected by amount of available information
  • Especially source code and exploit details
• For all candidates affected by SF-LOC, see:
  • http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=CD:SF-LOC
Criterion #1: Enumerate and discriminate between all known vulnerabilities
SF-LOC Examples
• CAN-2000-0971 (2 failure points): Avirt Mail 4.0 and 4.2 allows remote attackers to cause a denial of service and possibly execute arbitrary commands via a long "RCPT TO" or "MAIL FROM" command.
• CAN-2000-0686 and CAN-2000-0687 (2 failure points, split into two candidates): Auction Weaver CGI script 1.03 and earlier allows remote attackers to read arbitrary files via a .. (dot dot) attack in the fromfile parameter (0686) or the catdir parameter (0687).
• CAN-2001-0020: Directory traversal vulnerability in Arrowpoint (aka Cisco Content Services, or CSS) allows local unprivileged users to read arbitrary files via a .. (dot dot) attack.
• CAN-2001-0019 (6 failure points): Arrowpoint (aka Cisco Content Services, or CSS) allows local users to cause a denial of service via a long argument to the “show script,” “clear script,” “show archive,” “clear archive,” “show log,” or “clear log” commands.
• CAN-2001-0019 is clearly different than CAN-2001-0020
  • But a single patch fixes both problems
  • CAN-2001-0019 could be 1, 2, or 6 vulnerabilities
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Why CAN-2001-0019 Could Identify 1, 2, or 6 Vulnerabilities
• 3 different source code scenarios
• Without actual source, can’t be sure which scenario is true
• Even with source, there are different ways of counting
• Multiple format string problems are especially difficult to distinguish

Scenario 1 — one unchecked strcpy per command/argument pair (6 failure points):

```c
if (strcmp(cmd, "show") == 0) {
    if (strcmp(arg1, "script") == 0) {
        strcpy(str, long_input); show_script(str);
    } else if (strcmp(arg1, "archive") == 0) {
        strcpy(str, long_input); show_archive(str);
    } else if (strcmp(arg1, "log") == 0) {
        strcpy(str, long_input); show_log(str);
    }
} else if (strcmp(cmd, "clear") == 0) {
    if (strcmp(arg1, "script") == 0) {
        strcpy(str, long_input); show_script(str);
    } else if (strcmp(arg1, "archive") == 0) {
        strcpy(str, long_input); show_archive(str);
    } else if (strcmp(arg1, "log") == 0) {
        strcpy(str, long_input); show_log(str);
    }
}
```

Scenario 2 — a single unchecked strcpy before dispatch (1 failure point):

```c
strcpy(arg, long_input);
if (strcmp(cmd, "show") == 0) {
    process_show_command(arg);
} else if (strcmp(cmd, "clear") == 0) {
    process_show_command(arg);
}
```

Scenario 3 — one unchecked strcpy per top-level command (2 failure points):

```c
if (strcmp(cmd, "show") == 0) {
    strcpy(str, long_input);
    process_show_command(str);
} else if (strcmp(cmd, "clear") == 0) {
    strcpy(str, long_input);
    process_clear_command(str);
}
```

Criterion #1: Enumerate and discriminate between all known vulnerabilities
Example Content Decision: SF-EXEC (Software Flaws in Multiple Executables)
“If the same flaws appear in multiple executables in the same package at the same time, then combine them into a single entry.”
• Could be a cut-and-paste error, or use of library code
• Example:
  • CVE-1999-0068: CGI PHP mylog script allows an attacker to read any file on the target server
  • CVE-1999-0346: CGI PHP mlog script allows an attacker to read any file on the target server
• Both scripts had a vulnerable “<include>” command that didn’t filter out bad characters
  • Should “include” have filtered the characters in the file name?
  • Or should they have been filtered before the include was called?
• For all candidates affected by SF-EXEC, see:
  • http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=CD:SF-EXEC
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Other Example Abstraction CD’s • CF-PASS • Should there be one large entry for all default passwords? • What about undocumented or back door passwords? • What about guessable passwords? • Other unnamed examples • Should separate entries be created for DDoS tools, Trojan Horses, worms, and viruses? • How to handle insecure auditing settings, access permissions, or privilege assignments? How to define “insecure”? • Is it the software application’s responsibility to restrict memory usage, or is it the responsibility of the admin to use OS-specific parameters? Abstraction content decisions directly impact CVE-based metrics, especially for configuration problems and malicious code. Criterion #1: Enumerate and discriminate between all known vulnerabilities
Example Inclusion CD’s
• EX-BETA: “Exclude problems in beta products, unless the product is in permanent beta, or it has received wide distribution.”
  • CAN-2000-0096: Buffer overflow in qpopper 3.0 beta versions allows local users to gain privileges via a long LIST command.
• EX-DOS-CLIENT: “Exclude denial of service problems in clients unless the DoS extends beyond the client itself.”
  • CAN-2000-0046: Buffer overflow in ICQ 99b 1.1.1.1 client allows remote attackers to execute commands via a malformed URL within an ICQ message.
  • CAN-2000-0756: Microsoft Outlook 2000 does not properly process long or malformed fields in vCard (.vcf) files, which allows attackers to cause a denial of service.
  • CAN-2000-0190: AOL Instant Messenger (AIM) client allows remote attackers to cause a denial of service via a message with a malformed ASCII value.
• But what if a buffer overflow causes the DoS?
  • CAN-2001-0145: Buffer overflow in VCard handler in Outlook 2000 and 98, and Outlook Express 5.x, allows an attacker to execute arbitrary commands via a malformed vCard birthday field.
Criterion #1: Enumerate and discriminate between all known vulnerabilities
Criterion #2: Assign a standard, unique name to each vulnerability
Candidate Stage: Assignment
• Assign a new number: CAN-YYYY-NNNN
  • YYYY is the year in which the number was assigned; NNNN is a counter for that year
• Backmap: internal ID’s are mapped to candidate names and sent back to each provider
  • To Source A: ftp-pasv = CAN-YYYY-NNNN; iis-dos = CAN-1999-1234
  • To Source B: 17 = CAN-YYYY-NNNN; 524 = CAN-1999-1234
  • To Source C: 19 = CAN-YYYY-NNNN
• Submissions are then removed
Criterion #2: Assign a standard, unique name to each vulnerability
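Assignment plus backmap reduces to a per-year counter and a dictionary from source-local ID's to candidate names. A sketch (class and method names are invented for illustration):

```python
import itertools

class CandidateAssigner:
    """Hand out CAN-YYYY-NNNN numbers and remember the backmap to source ID's."""
    def __init__(self, year):
        self.year = year
        self.counter = itertools.count(1)
        self.backmap = {}                  # source-local ID -> candidate name

    def assign(self, source_ids):
        """Give one group of matched submissions a fresh candidate number."""
        name = f"CAN-{self.year}-{next(self.counter):04d}"
        for sid in source_ids:
            self.backmap[sid] = name
        return name

assigner = CandidateAssigner(1999)
assigner.assign(["A:2", "B:1", "C:1"])    # -> "CAN-1999-0001"
```

The backmap is what lets each source replace its private identifier with the shared candidate name, which is the whole point of the exercise.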
Candidate Stage: Reservation • Candidate numbers can be reserved for inclusion in initial public announcements of vulnerabilities • Number is immediately available to all parties • Simplifies correlation and cross-referencing • Available to all researchers, vendors, and third parties • Current participants include well-known researchers, security vendors, and software vendors • ~150 candidates reserved since April 2000 • Outreach being conducted to other vendors and researchers • Candidate Numbering Authorities (CNA’s) are authorized to reserve pools of candidate numbers from MITRE for other parties • Vendors or third parties The candidate reservation process must be designed to minimize the impact on responsible vulnerability disclosure practices. Criterion #2: Assign a standard, unique name to each vulnerability
Candidate Reservation Process
• MITRE (primary CNA, holds the CAN pool):
  • Accessible to researchers via getcans@mitre.org
  • Educate CNA’s about content decisions
  • Update CVE web site when candidate is publicly announced
  • Track potential abuses
• Candidate Numbering Authority (vendor or third party):
  • Obtain pool of candidate numbers from MITRE
  • Define requirements for researchers to obtain a candidate
  • Assign the correct number of candidate numbers
  • Ensure candidate is shared across all parties
  • Do not use candidates in a “competitive” fashion
• Researcher:
  • Request candidate from CNA
  • Provide candidate number to vendor and other parties
  • Include candidate number in initial public announcement
  • Notify MITRE of announcement
  • Perform due diligence to avoid duplicate or incorrect candidates
  • Should work with the affected vendor to increase confidence in the correctness of the candidate
Criterion #2: Assign a standard, unique name to each vulnerability
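A CNA's bookkeeping reduces to a queue of pre-reserved names and a record of who got each one. A toy sketch (the candidate numbers and class name are invented for illustration):

```python
class CNAPool:
    """Toy Candidate Numbering Authority: a pool of candidate names obtained
    from MITRE, handed out one per vetted researcher request."""
    def __init__(self, candidates):
        self.available = list(candidates)
        self.issued = {}                  # candidate name -> requester

    def reserve(self, requester):
        if not self.available:
            raise RuntimeError("pool exhausted; request more numbers from MITRE")
        can = self.available.pop(0)
        self.issued[can] = requester
        return can

cna = CNAPool(["CAN-2001-0500", "CAN-2001-0501"])
cna.reserve("researcher@example.org")     # -> "CAN-2001-0500"
```

The `issued` record matters for the abuse-tracking duty above: a CNA must be able to say which party holds a reserved but not-yet-public number.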
Know something? Gonna tell everyone? Get a CAN! getcans@mitre.org Criterion #2: Assign a standard, unique name to each vulnerability
Candidate Stage: Proposal Through Final Decision
• Proposal:
  • Clustering (date of discovery, OS, service type, etc.)
  • Published on CVE web site
  • Editorial Board members vote on the candidate: ACCEPT, MODIFY, REVIEWING, NOOP (No Opinion), RECAST (change level of abstraction), REJECT
• Modification:
  • Add references, change description
  • Change level of abstraction
  • Significant changes may require another round of voting
• Interim Decision: ACCEPT or REJECT (requires sufficient votes)
  • At least 2 weeks after initial proposal
  • 4 days for last-minute feedback
• Final Decision: ACCEPT or REJECT
  • Convert CAN-YYYY-NNNN to CVE-YYYY-NNNN
  • Report final voting record
  • Create new CVE version
Criterion #2: Assign a standard, unique name to each vulnerability
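The voting stage can be pictured as a tally over the six vote types. The slides say only "requires sufficient votes," so the thresholds below are invented for illustration; this is a toy decision rule, not the Board's actual one:

```python
from collections import Counter

VALID_VOTES = {"ACCEPT", "MODIFY", "REVIEWING", "NOOP", "RECAST", "REJECT"}

def interim_decision(votes, min_in_favor=3):
    """Toy rule (assumed thresholds): accept when enough ACCEPT/MODIFY votes
    arrive and nobody is still REVIEWING; otherwise reject or stay pending."""
    tally = Counter()
    for member, vote in votes.items():
        if vote not in VALID_VOTES:
            raise ValueError(f"unknown vote {vote!r} from {member}")
        tally[vote] += 1
    if tally["REVIEWING"]:
        return "PENDING"
    in_favor = tally["ACCEPT"] + tally["MODIFY"]
    if in_favor >= min_in_favor and in_favor > tally["REJECT"]:
        return "ACCEPT"
    if tally["REJECT"] >= min_in_favor:
        return "REJECT"
    return "PENDING"
```

Note that RECAST is deliberately not counted toward either outcome here: changing the level of abstraction restarts review rather than settling it.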
Entry Stage
• Publication:
  • Publish new CVE version and difference report
• Modification (minor):
  • Add references
  • Change description
• Reassessment:
  • New information may force a re-examination of the entry
  • Level of abstraction may need to be changed
  • May be a duplicate
  • May not be a problem after all
• Deprecation:
  • May need to “delete” an existing entry (e.g. duplicate entries)
  • But some products may still use this number
  • Register the “deletion” but keep the entry available for review
Criterion #2: Assign a standard, unique name to each vulnerability
CVE Growth Status (as of May 28, 2001) • 1510 entries • 1120 candidates Criterion #2: Assign a standard, unique name to each vulnerability
What’s in a Name? • Some benefits with the current naming scheme • Compact • Candidate/entry status encoded within the name • Most CAN-YYYY-NNNN will become CVE-YYYY-NNNN • Removes debate about what a “good” name is • Some issues • Year segment can be misunderstood as year of discovery • Name is not atomic in most search engines, thus difficult to find • Changing a CAN to a CVE can incur maintenance costs • Maximum 10,000 candidates per year (CAN-10K problem) • Once public, names must not disappear without explanation • Deprecated entries, rejected candidates Any change to the CVE naming scheme will impact many users. Criterion #2: Assign a standard, unique name to each vulnerability
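The naming scheme's properties, including the CAN-10K limit, fall out of a simple parser. A sketch of the legacy syntax (the helper's name and return shape are illustrative):

```python
import re

# Legacy syntax: status prefix, 4-digit assignment year, 4-digit counter.
CVE_NAME = re.compile(r"^(CAN|CVE)-(\d{4})-(\d{4})$")

def parse_name(name):
    """Split a CVE-style name into (status, year, number). The fixed 4-digit
    counter is what caps each year at 10,000 candidates (the CAN-10K problem),
    and the year is when the number was ASSIGNED, not when the bug was found."""
    m = CVE_NAME.match(name)
    if not m:
        raise ValueError(f"not a CVE-style name: {name!r}")
    prefix, year, number = m.groups()
    status = "candidate" if prefix == "CAN" else "entry"
    return status, int(year), int(number)

parse_name("CAN-1999-1234")    # -> ('candidate', 1999, 1234)
```

Encoding status in the prefix is exactly the maintenance cost the slide mentions: promoting a CAN to a CVE changes the string every downstream product has stored.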
Criterion #3: Be publicly open and shareable, without distribution restrictions
What’s Open • Editorial Board mailing list archives and meeting summaries • Candidate votes and comments from Board members • Mailing lists available for CVE news and data updates • Various download formats • Reference maps from common sources to CVE names • CERT/CC advisories, *Bugtraq posts, vendor bulletins, etc. • Challenges • Minimize potential “competition” with other information sources • Can’t include confidence levels, OS, risk levels, etc. • Challenges by vendors • Documenting evolving approaches (e.g. content decisions) • Changing reference names or lack of clear advisory names • Examples: Cisco, OpenBSD, SuSE, Debian • Translations into other languages • Managing expectations Criterion #3: Be publicly open and shareable, without distribution restrictions
Top Ten Vulnerability Types in CVE (Issues publicized between Jan 2000 and April 2001) 1540 total CVE entries and candidates analyzed (yes, that’s 100 per month) CVE content decisions directly affect these statistics. Criterion #3: Be publicly open and shareable, without distribution restrictions
Education of a Vulnerability Analyst (In 3 Acts)
• The early days: insufficient details to discriminate between similar vulnerabilities
  • CVE-1999-0077: Predictable TCP sequence numbers allow spoofing.
  • CVE-1999-0032: Buffer overflow in BSD-based lpr package allows local users to gain root privileges.
• The middle days: dealing with emerging terminology
  • CVE-2000-0039: AltaVista search engine allows remote attackers to read files above the document root via a .. (dot dot) in the query.cgi CGI program.
  • CVE-2000-0573: The lreply function in wu-ftpd 2.6.0 and earlier does not properly cleanse an untrusted format string, which allows remote attackers to execute arbitrary commands via the SITE EXEC command.
• Today: include uncertainty, more details, support abstraction
  • CAN-2001-0443: Buffer overflow in QPC QVT/Net Popd 4.20 in QVT/Net 5.0 allows remote attackers to cause a denial of service, and possibly execute arbitrary commands, via (1) a long username, or (2) a long password.
Criterion #3: Be publicly open and shareable, without distribution restrictions
Criterion #4: Exist independently of the multiple perspectives of what a vulnerability is
Some Perspectives
[Diagram: stakeholders surrounding CVE, each pulling in a different direction]
• Personas: Sysadmin, Speed Demon, Perfectionist, Academic Researcher, IDS, Scanner, Security Manager, Traditionalist, Anti-Admin, Miscellaneous, Everyone
• Conflicting demands: one CVE per patch vs. one CVE per bug; fast and furious, and fix things later vs. slow and steady, get it right the first time; new issues first vs. catch up on older problems before focusing on new ones; include fix info, exploits, signatures, attacks, wireless, telecom, viruses, confidence levels and risk levels; only include software flaws vs. include whatever may conflict with security policy vs. exclude insecure configurations; use strong theoretical principles; give me CAN’s for new issues ASAP; give me everything I want, and nothing I don’t; based on my own needs; change the naming scheme; XML format; focus on CVE compatibility; yesterday, or at the latest, now
Criterion #4: Exist independently of the multiple perspectives of what a vulnerability is
Managing Perspectives
[Diagram: practices surrounding CVE at the hub]
• Listen to the users
• Re-prioritize when possible
• Consult the Editorial Board
• Maximize utility
• Grow a thick skin
• Document and educate when you can
• Please everyone some of the time
Criterion #4: Exist independently of the multiple perspectives of what a vulnerability is
For More Information http://cve.mitre.org