
Users Are Not Dependable: How to make security indicators that protect them better


Presentation Transcript


  1. Users Are Not Dependable: How to make security indicators that protect them better. Min Wu, Simson Garfinkel, Robert Miller, MIT Computer Science and Artificial Intelligence Lab

  2. User Is Part Of System • The “weakest link” in operational security systems • If attackers can easily trick users into compromising their own security, they do not have to work hard to attack the system directly. • A typical attack: phishing

  3. Security Indicators • “Look for the lock at the bottom of your browser and ‘https’ in front of the website address.”

  4. Security Indicators • “Look for the lock at the bottom of your browser and ‘https’ in front of the website address.”
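
The advice on the two slides above asks users to verify by eye what software can check directly. As a minimal sketch (not from the talk; the function name and timeout are illustrative), “look for the lock and ‘https’” amounts to this check in Python:

```python
import socket
import ssl
from urllib.parse import urlparse

def has_valid_https(url: str) -> bool:
    """Return True iff the URL uses https and the server presents a
    certificate that chains to a trusted root and matches the hostname."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False  # no "https" in front of the address: no lock icon
    context = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((parts.hostname, parts.port or 443),
                                      timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=parts.hostname):
                return True  # handshake succeeded: this is the "lock"
    except (ssl.SSLError, OSError):
        return False
```

A browser’s lock icon reflects roughly this check having succeeded; the study below asks whether users actually consult it.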

  5. More Security Indicators

  6. More Security Indicators SpoofStick

  7. More Security Indicators Netcraft Toolbar

  8. More Security Indicators TrustBar

  9. More Security Indicators eBay Account Guard

  10. More Security Indicators SpoofGuard

  11. Outline • Introduction of security indicators • Anti-phishing user study • Web authentication using cell phones • Conclusions

  12. Security Toolbar Abstractions • Neutral-Information Toolbar: SpoofStick, Netcraft Toolbar • System-Decision Toolbar: eBay Account Guard, SpoofGuard • Positive-Information Toolbar: TrustBar
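
To make the three abstractions concrete, here is an illustrative sketch, not from the talk: the SiteInfo fields and display strings are assumptions about what each kind of toolbar computes and shows.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SiteInfo:
    hostname: str                     # what SpoofStick emphasizes
    registration: str                 # domain age/country, as on Netcraft
    flagged_as_phish: bool            # a verdict (SpoofGuard, Account Guard)
    verified_identity: Optional[str]  # a validated name/logo, as in TrustBar

def neutral_information(s: SiteInfo) -> str:
    # Shows facts; the user must draw the conclusion.
    return f"You're on {s.hostname} ({s.registration})"

def system_decision(s: SiteInfo) -> str:
    # Shows a verdict the system has already made.
    return "RED: likely phishing" if s.flagged_as_phish else "GREEN: no alert"

def positive_information(s: SiteInfo) -> str:
    # Shows positive proof of identity, present only on verified sites.
    return s.verified_identity or "(no verified identity)"
```

The study that follows assigns ten users to each abstraction and measures how often phishing attacks still succeed.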

  13. Study Scenario • We set up dummy accounts as John Smith at various websites • “You are the personal assistant of John Smith. John is on vacation now. During his vacation, he sometimes sends you emails asking you to do some tasks for him online.” • “Here is John Smith’s profile.”

  14. Study Scenario • Users dealt with 20 emails forwarded by John Smith. • 5 of the emails were phishing emails. • Most of the emails were about managing John’s wish lists at various sites.

  15. Main Frame

  16. Address bar frame: http://tigermail.co.kr/cgi-bin/webscrcmd_login.php

  17. Toolbar frame; status bar frame

  18. Attack Types 1. Similar-name attack: bestbuy.com → www.bestbuy.com.ww2.us 2. IP-address attack: bestbuy.com → 212.85.153.6 3. Hijacked-server attack: bestbuy.com → www.btinternet.com 4. Popup-window attack 5. PayPal attack
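
Two of these attack types leave a signature in the URL itself. As a hedged sketch in the spirit of SpoofGuard-style heuristics (not part of the study; the brand allowlist and labels are assumptions), a toolbar could flag them like this:

```python
import ipaddress
from urllib.parse import urlparse

TRUSTED_BRANDS = {"bestbuy.com", "paypal.com"}  # assumed allowlist

def classify(url: str) -> str:
    host = urlparse(url).hostname or ""
    # IP-address attack: the "domain" is a raw IP literal.
    try:
        ipaddress.ip_address(host)
        return "possible IP-address attack"
    except ValueError:
        pass
    # Similar-name attack: a trusted brand name is embedded in a hostname
    # whose registered domain is something else (bestbuy.com.ww2.us).
    for brand in TRUSTED_BRANDS:
        if brand in host and host != brand and not host.endswith("." + brand):
            return "possible similar-name attack"
    # Hijacked-server attacks cannot be caught by URL inspection alone.
    return "no URL-level signal"

print(classify("http://www.bestbuy.com.ww2.us/login"))  # similar-name attack
print(classify("http://212.85.153.6/login"))            # IP-address attack
```

Note that the hijacked-server attack is invisible to URL inspection alone, which is one reason purely indicator-based defenses struggle.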

  19. Security Toolbar Display: Legitimate Site vs. Phishing Site

  20. Attack Pattern

  21. Recruitment • 30 users • Recruited at MIT, paid $15 for one hour • 10 per toolbar type (Neutral-Information, System-Decision, Positive-Information) • Average age 27 [18–50] • 14 females and 16 males • 20 MIT students, 10 not

  22. Spoof Rates With Different Toolbars

  23. Spoof Rates With Different Attacks (p = 0.052, ANOVA)

  24. Why Did Users Get Fooled? • 20 out of 30 were fooled by at least one attack. Among those 20 users: • 17 (85%) said the web content looked professional or familiar; 7 (35%) relied on security-related page content • 12 (60%) explained away odd behaviors • “I have been to sites that use plain IP addresses.” • “Sometimes I go to a website, and it directs me to another site with a different address.” • “Yahoo may have just opened a branch in Brazil and thus registered there.” • “I must have mistakenly triggered the popup window.”

  25. Results • Users did not rely on the security indicators • They depended on web content instead • They could not distinguish poorly designed legitimate websites from malicious phishing sites

  26. Outline • Introduction of security indicators • Anti-phishing user study • Web authentication using cell phones • Authentication protocol • User study • An improved protocol • Conclusions

  27. Authentication Using Cell Phones • Prevent people’s passwords from being captured by public computers • Use a trusted cell phone to authenticate login sessions from untrusted public computers • Checking the security indicator is part of the authentication protocol

  28. Authentication Protocol

  29. Authentication Protocol • The user makes a login attempt from the untrusted public computer.

  30. Authentication Protocol • The proxy names the pending session and displays the name in the browser: “This login session is named ‘FAITH’.”

  31. Authentication Protocol • The proxy asks the trusted cell phone: “Do you approve login session named ‘FAITH’?”

  32. Authentication Protocol • The user checks that the name on the phone matches the one in the browser and replies: “I approve ‘FAITH’.”

  33. Authentication Protocol • The names match and the approval arrived, so the proxy logs the session in.
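
Read as a message sequence, the protocol is small enough to sketch. The following is a minimal illustration, not the authors’ implementation: the word list, the print-based “transport”, and the function names are all assumptions (a mediating proxy is confirmed by the “bug in the proxy” quote on slide 37).

```python
import secrets

WORDLIST = ["FAITH", "RIVER", "MAPLE", "STONE"]  # assumed human-readable names

def proxy_start_login() -> str:
    """The proxy names the pending session and announces it on both channels."""
    session_name = secrets.choice(WORDLIST)
    print(f"[browser] This login session is named '{session_name}'.")
    print(f"[phone]   Do you approve login session named '{session_name}'?")
    return session_name

def user_approves(name_in_browser: str, name_on_phone: str) -> bool:
    """The user's only job: approve iff the two displayed names match."""
    return name_in_browser == name_on_phone

name = proxy_start_login()
if user_approves(name, name):  # honest run: both channels show the same name
    print("[proxy]   Approved; the proxy submits the password and logs in.")
```

The scheme’s security rests entirely on the user actually comparing the two names, which is precisely the step the user study below probes.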

  34. User Interface

  35. Attack Types • Duplicated attack: the session name shown on the phone does not match the one shown in the browser • Blocking attack: the session name never appears

  36. User Study • Log in to Amazon.com with a personal computer and a cell phone • 6 logins in a row • Attacks were randomly selected and assigned to the 5th or the 6th login • 20 users • Recruited at MIT, paid $10 for one hour • Average age 25 [18 - 43] • 9 females and 11 males • 16 MIT students, 4 not

  37. Results • Duplicated attack: 36% (4 successful out of 11 attacks) • “There must be a bug in the proxy since the session name displayed in the computer does not match the one in the cell phone.” • Blocking attack: 22% (2 successful out of 9 attacks) • “The network connection must be really slow since the session name has not been displayed.” • Users failed to follow the protocol • Cannot distinguish system failures from malicious attacks

  38. An Improved Protocol (thanks to Steve Strassman from Orange™)

  39. Under Attacks: duplicated attack and blocking attack

  40. Results • Login by choosing the correct session name has a zero spoof rate! • 9 duplicated attacks and 11 blocking attacks • There was little chance that the attacker’s list included the session name shown in the user’s browser • Users were forced to attend to the security indicator
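
The zero spoof rate follows from the structure of the improved protocol: instead of answering yes/no, the user must pick, on the phone, the session name displayed in the browser. A hedged sketch, assuming the list appears on the phone and the browser shows a single name; the decoy generation, menu size, and word list are invented for illustration:

```python
import secrets

WORDLIST = ["FAITH", "RIVER", "MAPLE", "STONE", "CEDAR", "PEARL"]

def phone_menu(real_name: str, n_choices: int = 4) -> list:
    """Build the phone's menu: the real session name mixed among others."""
    rng = secrets.SystemRandom()
    decoys = rng.sample([w for w in WORDLIST if w != real_name], n_choices - 1)
    menu = decoys + [real_name]
    rng.shuffle(menu)
    return menu

def login_succeeds(name_in_browser: str, user_choice: str) -> bool:
    """The proxy completes the login only if the user's pick matches the
    name it displayed in the browser; any other pick aborts the session."""
    return user_choice == name_in_browser

menu = phone_menu("FAITH")
print("Phone shows:", menu)  # the user must find and select 'FAITH'
```

Because approving now requires reproducing the browser’s session name rather than tapping “yes”, checking the indicator becomes part of the critical action sequence, which is the lesson the conclusions draw.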

  41. Conclusions • The security-indicator checking scheme fails • Users ignore advice (34% spoof rate) • Users do not follow instructions (30% spoof rate) • Users cannot distinguish “bugs” from “attacks” • The security indicator is not part of the user’s “critical action sequence”

  42. Lesson Learned • Moving the security indicator into the critical action sequence can better protect users

  43. Users Cared About Security • 18 out of 30 unchecked “remember me” • 13 out of 30 logged out (or tried to) after at least one task

  44. Legitimate Site vs. Phishing Site
