
Towards a Taxonomy of Vulnerability Scanning Techniques






Presentation Transcript


  1. Towards a Taxonomy of Vulnerability Scanning Techniques Adam Shostack Bindview Development adam@bindview.com

  2. Overview • Audience • Goals • Taxonomies • Exploit Testing • Inference Methods

  3. Audience • This talk is for users of security scanners • Better understand the tools • Be more effective in using them • Also for designers of scanning tools • Open a dialog between tool creators • Be able to discuss things at an abstract level

  4. Goals • To understand how security scanners find vulnerabilities • Understand the constraints on the tools • Create an engineering discussion • Greg Hoglund will explain why scanners suck (Track B, 4:00), so I won’t bother

  5. Taxonomies • Means of organizing information • Good taxonomies allow you to say interesting things about the groups • Bad taxonomies have poor “borders” • The classification decisions are not clear or reproducible by different classifiers • Even good taxonomies may have anomalies • Duck-billed Platypus

  6. Starting Points • Exploit testing • Banner checking

  7. Finding /cgi-bin/phf • This is the classic, easy scan: GET /cgi-bin/phf?q=;cat%20/etc/passwd • Reliable • Somewhat non-intrusive • Intuitively correct
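The check on this slide fits in a few lines of code. Below is a minimal sketch (not from the talk) of an exploit test for phf: it sends the request from the slide and declares the problem only if password-file contents come back in the same TCP stream. Host, port, and timeout are placeholders.

```python
# Minimal sketch of the classic phf exploit test (illustrative only).
import socket

def test_phf(host, port=80, timeout=5):
    """Return True if /cgi-bin/phf appears to leak /etc/passwd."""
    request = ("GET /cgi-bin/phf?q=;cat%20/etc/passwd HTTP/1.0\r\n"
               "Host: {}\r\n\r\n".format(host))
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk
    # Password-file entries in the main TCP stream are direct evidence,
    # which is what makes this check reliable.
    return b"root:" in response
```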

  8. SATAN and Exploits • Reliance on banners • 250 SMTP cs.berkeley.edu Sendmail 4.1 Ready for exploit at 1/1/70 • Look up Sendmail 4.1 in the database • Less reliable • Less intrusive • Intuitively worrisome
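By contrast, a SATAN-style banner check never attempts the exploit. A rough sketch of the approach, with an illustrative (not authoritative) table of known-bad versions:

```python
# Banner-checking sketch: grab the SMTP greeting and look the advertised
# version up in a database of known-bad versions. Table contents are
# illustrative placeholders.
import re
import socket

KNOWN_BAD_SENDMAIL = {"4.1", "5.58"}   # placeholder entries

def check_sendmail_banner(host, port=25, timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(1024).decode("ascii", "replace").strip()
    m = re.search(r"Sendmail\s+([\w.]+)", banner, re.IGNORECASE)
    if m and m.group(1) in KNOWN_BAD_SENDMAIL:
        # No exploit attempted: this result is only as trustworthy as the
        # banner the administrator chose to show.
        return "possibly vulnerable", banner
    return "no match", banner
```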

  9. Terminology • Vulnerability: A design flaw, defect, or misconfiguration which can be exploited by an attacker • Vulnerability scanners don’t use the term in the academic sense of the word • Problem: synonym for vulnerability, less loaded with semantic baggage • Why are we confident the system has the $DATA vulnerability?

  10. Terminology • Test: Algorithm for finding a problem by exploiting it • We test for PHF • Inference: Algorithm for finding a problem without exploiting it • We infer this sendmail has the debug problem

  11. Exploits (1) • GET /cgi-bin/phf?q=;cat%20/etc/passwd • This exploits the problem • Disproves the Safety Hypothesis • We see the results in the main TCP stream • This makes the check much more reliable • So why not always do this?

  12. Exploits (2) • Sometimes we cannot see the results in-stream • Need an alternate means of observation • Inherently less reliable • Majordomo Reply-To bug • Exploit goes via the mail queue, may take hours
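A sketch of what such an out-of-band check looks like, assuming the Majordomo Reply-To problem on the slide. The exploit payload itself is deliberately omitted; the point is the observation channel, and all addresses below are placeholders.

```python
# Out-of-band exploit-test sketch (payload omitted; addresses are
# placeholders). The probe is delivered through the mail queue, so the
# result cannot be read from the TCP stream we opened.
import smtplib
from email.message import EmailMessage

def send_majordomo_probe(target_host, scanner_addr="scanner@example.com"):
    msg = EmailMessage()
    msg["From"] = scanner_addr
    msg["To"] = "majordomo@" + target_host
    msg["Reply-To"] = scanner_addr   # field exercised by the check
    msg["Subject"] = "lists"
    msg.set_content("lists\n")
    with smtplib.SMTP(target_host, 25, timeout=10) as smtp:
        smtp.send_message(msg)
    # Nothing to return yet: any evidence arrives at scanner_addr later,
    # possibly hours later, which is why the check is less reliable.
```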

  13. Risk Models • Safety Assumed • Many exploit tests work this way • Reduces false positives • Risk Assumed • Can work well with indirect, inference • Disprove Majordomo Reply-To by proving that host does not accept mail • Both are effective tools
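The two defaults can be stated in a couple of lines (function names are illustrative, not from the talk):

```python
# Safety assumed: report a problem only on positive evidence.
def safety_assumed(evidence_of_exploit: bool) -> str:
    return "vulnerable" if evidence_of_exploit else "not vulnerable"

# Risk assumed: report the problem unless it can be disproved, e.g. by
# showing the host does not accept mail at all.
def risk_assumed(proof_of_safety: bool) -> str:
    return "not vulnerable" if proof_of_safety else "vulnerable"
```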

  14. Banners In Exploit Tests • Can reduce false positive rates from misinterpreting results • Can reduce impact of testing by only testing “expected vulnerable” systems • Correctness of the technique depends on the definition of the vulnerability • “A web server gives out source when %20 appended” or “IIS Web server gives out source…?”
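For example, a scanner might gate the intrusive test on the Server header, roughly as sketched below (the URL handling and header match are illustrative assumptions):

```python
# Banner-gated exploit test sketch: only run the intrusive check against
# "expected vulnerable" servers. Whether gating on "IIS" is correct
# depends on how the vulnerability is defined.
import urllib.request

def gated_source_disclosure_check(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        server = resp.headers.get("Server", "")
    if "IIS" not in server:
        return "skipped (banner: {})".format(server)
    # ...only now run the actual source-disclosure exploit test...
    return "run exploit test"
```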

  15. Impact of Testing • Trying to violate the security of the system • Doing things the software author didn’t expect • This has a substantial effect on the system • Leave core files • Fill Logs • Add root accounts • Make copies of /etc/shadow off the host • Stack smashing attacks crash the service

  16. Impact of Testing • We haven’t even started talking about testing for Denial of Service problems • teardrop • land • bonk • killcisco

  17. DOS Testing (daemons) • Connect, attack, reconnect • Indirect observation technique • Fails under inetd • May fail because of other factors • Look carefully at connection • Learn a lot from RST vs. FIN vs. RST+PUSH
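A sketch of the connect/attack/reconnect pattern (the attack payload is intentionally left out; names are illustrative):

```python
# DoS-check sketch: confirm the daemon answers, run the attack, then
# reconnect and infer from the failure mode. The reconnect step is
# indirect observation, and inetd respawning the service can fool it.
import socket

def dos_check(host, port, send_attack, timeout=5):
    with socket.create_connection((host, port), timeout=timeout):
        pass                                  # 1. service answered before
    send_attack(host, port)                   # 2. possibly-crashing attack
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "still answers (or inetd respawned it)"
    except OSError:
        return "no longer answers: possibly killed by the attack"
```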

  18. DOS Testing (daemons) • Hard to test again if you’re not sure what you saw • Some daemons die on connect/close • Strobe found a plethora of these • So did nmap • Systems can fail for reasons unrelated to the check being performed

  19. “Was that a Production Server?” • Most tools try very hard to avoid this problem • It’s a huge drag on sales and support to crash targets (hosts or services) without warning you a dozen times

  20. Less Intrusive Methods • Inference • Versioning • Port Status • Protocol Compliance • Behavior • Presence of /cgi-bin/file • Credentials

  21. Inference • If exploit will crash the target • If output cannot reliably be parsed • If exploit is still secret • Discovered by company, and not disclosed • No full disclosure debate please; companies do this • If exploit violates the rules • More applicable to consultants, custom tools • Distinctions are more clear-cut

  22. Versioning • Very effective when banner information is hard to change • named • ssh • Sendmail’s banner is not hard to change • Usually uses if (banner matches && version < N) sorts of logic
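In code, that pattern looks roughly like this sketch for an SSH banner (the regex and version threshold are illustrative):

```python
# Versioning sketch: "if (banner matches && version < N)" logic.
import re
import socket

def old_openssh(host, port=22, threshold=(2, 0), timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(256).decode("ascii", "replace").strip()
    m = re.match(r"SSH-[\d.]+-OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        return False, banner        # banner did not match: no inference
    version = (int(m.group(1)), int(m.group(2)))
    return version < threshold, banner
```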

  23. Port Status • Declare risk if you can connect • Can be a policy violation in itself • Can be used when additional probing will not reveal more information, e.g. overflows in rpc-mountd • Gets interesting when done through a firewall, or with UDP
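The port-status check itself is as small as it sounds; the example usage below is a placeholder:

```python
# Port-status sketch: declare risk purely from the ability to connect.
import socket

def port_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. flag any host where the service is reachable at all:
# if port_open(host, SOME_RPC_PORT): report(host, "service reachable; flag per policy")
```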

  24. Protocol Compliance • Exercises the server more than a port-status check, thus more reliable but also more intrusive • Declare vulnerability based on results • Useful and correct when policy is “no web servers”
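A minimal protocol-compliance sketch: instead of trusting the port number, speak a little HTTP and see whether something HTTP-shaped answers.

```python
# Protocol-compliance sketch: useful when the policy is "no web servers".
import socket

def speaks_http(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"GET / HTTP/1.0\r\n\r\n")
            reply = s.recv(64)
    except OSError:
        return False
    return reply.startswith(b"HTTP/")
```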

  25. Behavior • Examine edges of protocol for implementation details • Infer software information from results • Demonstrate that software under examination behaves differently from the software which has the vulnerability
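A behavior check might look like the sketch below: send an edge-case request and compare the reply against the known response of the implementation that carries the vulnerability. The probe and signature are placeholders.

```python
# Behavior sketch: a difference at the protocol's edges lets us say
# "this is not the vulnerable software" without exploiting anything.
import socket

EDGE_CASE_PROBE = b"HEAD / HTTP/1.0\r\nX-Odd-Header:\r\n\r\n"   # placeholder
VULNERABLE_SIGNATURE = b"Server: ExampleHTTPd/1.0"               # placeholder

def behaves_like_vulnerable_impl(host, port, timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(EDGE_CASE_PROBE)
        reply = s.recv(512)
    return VULNERABLE_SIGNATURE in reply
```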

  26. Credentials • Things needed to log in • UNIX login/password • NT account name/password • “Login” on NT network means sending credentials with API calls • Does not include public, anonymous, guest

  27. Credentials (2) • Very non-intrusive (except at install time) • Very reliable • Once logged in: • ask for SW version information • MD5 files • Call APIs to gather data
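A credentialed check, sketched for a UNIX target over SSH (paramiko and the specific commands are illustrative assumptions, not from the talk):

```python
# Credentialed-check sketch: once logged in, ask the system directly.
import paramiko   # third-party SSH library, used here for illustration

def credentialed_check(host, username, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        # Ask for software version information...
        _, out, _ = client.exec_command("uname -a")
        version_info = out.read().decode().strip()
        # ...and checksum a file of interest (MD5 on the slide; the
        # utility name varies by platform).
        _, out, _ = client.exec_command("md5sum /usr/sbin/sendmail")
        checksum = out.read().decode().strip()
    finally:
        client.close()
    return version_info, checksum
```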

  28. Conclusions • Overview of techniques based on: • Exploit • Inference • Credentials • Pros and Cons of various techniques • This is a work in progress • Lots of interesting work

  29. Towards a Taxonomy of Vulnerability Scanning Techniques Adam Shostack Bindview Development adam@bindview.com
