Usable Security: Part 2 Dr. Kasia Muldner
Security Usable Security Human-Computer Interaction
Some thoughts… • "To err is human, to really foul things up you need a computer." - Paul Ehrlich • "Don't make me think." - Steve Krug • "Everything should be made as simple as possible, but not simpler." - Albert Einstein
Outline • Review • Security challenges • Usability guidelines • Heuristic evaluation • Part 1. Evaluating deletion & sanitization • Part 2. Phishing: evaluating a solution
What should you learn? • Security challenges • Usability guidelines • How to apply them via heuristic evaluation • All about phishing (ok, maybe not all) Why should you learn this? • Local reason: material will be on an assignment and/or test • Global reason: usable security is a hot topic in industry & academia
Review of security challenges • Secondary task: how do we motivate users? • Wide range of users: how do we support different types of users? • Negative impact of errors: how do we mitigate the damage?
Jakob Nielsen’s Usability Guidelines
U1. Visibility of system status • The system should always keep users informed about what is going on, via appropriate feedback within reasonable time.
U2. Match between system and the real world • The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
U3. Aesthetic and minimalist design • Dialogues should not contain information or feedback which is irrelevant or rarely needed.
U4. Help users recognize, diagnose, and recover from errors • Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
U5. Error prevention • Use a careful design which prevents errors from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
U6. User control and freedom • Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
U7. Recognition rather than recall • Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
U8. Consistency and standards • Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
U9. Flexibility and efficiency of use • Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
U10. Help and documentation • Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Principles for secure systems In addition to Nielsen’s 10, users should: S1. Be reliably made aware of the security tasks they must perform [Nielsen’s #5, maybe #2?] S2. Be able to figure out how to successfully perform those tasks [Nielsen’s #5] S3. Not make dangerous errors [Nielsen’s #5] S4. Be sufficiently comfortable with the interface to continue using it [Nielsen’s #7-#9] S5. Have sufficient feedback to accurately determine the current state of the system [Nielsen’s #1]
Principles for secure systems (2)
• Path of Least Resistance: Match the most comfortable way to do tasks with the least granting of authority.
• Active Authorization: Grant authority to others in accordance with user actions indicating consent.
• Revocability: Offer the user ways to reduce others' authority to access the user's resources.
• Visibility: Maintain accurate awareness of others' authority as relevant to user decisions.
• Self-Awareness: Maintain accurate awareness of the user's own authority to access resources.
• Trusted Path: Protect the user's channels to agents that manipulate authority on the user's behalf.
• Expressiveness: Enable the user to express safe security policies in terms that fit the user's task.
• Relevant Boundaries: Draw distinctions among objects and actions along boundaries relevant to the task.
• Identifiability: Present objects and actions using distinguishable, truthful appearances.
• Foresight: Indicate clearly the consequences of decisions that the user is expected to make.
How do we evaluate a system to see if it follows guidelines? • Heuristic Evaluation • An expert goes through a user interface • Inspects elements (dialogs, feedback, etc.) • Compares with usability principles • Comes up with a list of problems (explains why they are problems) • Decides on the severity of the problems (minor, major, catastrophe), based on frequency, impact, and persistence • Typically, the expert takes several passes… • Can be open-ended (just do it), or the expert can be given a script of what to do
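The evaluation procedure above can be sketched in code. This is a minimal, hypothetical sketch: the `Finding` record and its field names are illustrative, not from the slides, but it shows the shape of what an evaluator produces (element inspected, heuristic violated, explanation, severity rating).

```python
from dataclasses import dataclass

# Severity scale from the slides, ordered least to most severe.
SEVERITIES = ("minor", "major", "catastrophe")

@dataclass
class Finding:
    element: str     # UI element inspected (dialog, feedback, ...)
    heuristic: str   # e.g. "U5. Error prevention"
    problem: str     # why this violates the heuristic
    severity: str    # rated by frequency, impact, persistence

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")

findings = [
    Finding("Empty Trash confirmation", "U6. User control and freedom",
            "No undo once the trash is emptied", "major"),
    Finding("Trash icon", "U1. Visibility of system status",
            "Icon change is easy to miss", "minor"),
]

# Report the most severe problems first.
findings.sort(key=lambda f: SEVERITIES.index(f.severity), reverse=True)
```

Several passes simply mean appending more `Finding`s and re-sorting; the explicit `problem` field enforces the rule that the expert must explain *why* something is a problem.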
Heuristic evaluation √ Pros: • Quick & dirty (no need to design an experiment, recruit users, etc.) • Good for finding obvious usability flaws x Cons: • Experts are not the “typical” user!
Outline • Review • Security challenges • Usability guidelines • Heuristic evaluation • Part 1. Evaluating deletion & sanitization • Part 2. Phishing: Evaluating a solution
Next up: don’t lie to the user! • Focus: deletion & sanitization • Slides for this portion of the lecture are based on “Best Practices for Usable Security in Desktop Software” presentation by Simson L. Garfinkel
Deletion and sanitization • Why study deletion? • Affects everybody: we all have private or security-critical information that needs to be deleted. • Lots of lore, not a lot of good academic research.
Desktop systems & “delete”… 1. Click on icon you want to delete 2. Drag it to the trash 3. Trash icon changes
4. Double click Recycle Bin icon to look at contents 5. Right-click for empty 6. Confirm empty 7. File is gone
Group Activity: Heuristic Evaluation • An Expert (you) goes through a User Interface • Inspect elements (dialogs, feedback, etc) • Compare with usability principles • Evaluate pros and cons of delete functionality according to guidelines • [optional] Decide on the severity of the problems (minor, major, catastrophe)
Recovery after confirmation… Can you get back a file after you empty the trash? Sure!
The Paradox of “Delete”
• “Toss” (Delete): Unlinks the file from the directory, puts its blocks on the free list, and allows the space to be reused. The file can be recovered with “undelete” or forensic efforts; tossed files only get shredded at random, as their blocks happen to be overwritten.
• “Shred”: Overwrites the file blocks. An intentionally overwritten file cannot be recovered from disk. Special utilities also overwrite slack space.
Thanks to Clay Bennett at the Christian Science Monitor
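The toss/shred distinction above can be sketched concretely. This is a simplified illustration, not a real sanitization tool: a production shredder must also handle slack space, journaling filesystems, and SSD wear-leveling, none of which this sketch attempts.

```python
import os

def toss(path):
    """Ordinary delete: unlink the file from its directory.
    The data blocks go on the free list and remain recoverable
    with undelete/forensic tools until they happen to be reused."""
    os.remove(path)

def shred(path, passes=1):
    """Overwrite the file's blocks before unlinking, so the
    contents cannot be recovered from the disk blocks themselves."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # overwrite in place
            f.flush()
            os.fsync(f.fileno())      # force the overwrite to disk
    os.remove(path)                   # only now unlink the name
```

The user-visible difference is exactly the one Apple exposed as “Secure Empty Trash”: `shred` does strictly more I/O than `toss`, which is why it takes minutes instead of seconds.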
Sanitization is a Big Problem • “Remembrance” study: of 200 hard drives purchased, more than 1/3 had data that had been deleted but could be recovered! • Hypothesis: the data was there because of usability failures…
Approach #1: Distinguish “Toss” from “Shred” • Following publication of “Remembrance,” Apple added “Secure Empty Trash” to Mac OS X 10.3. • “Secure Empty Trash” takes much longer than a regular empty: ≈5 min instead of ≈5 sec
Other Problems… … Users may not know what “Secure Empty Trash” means… What is a “Secure Empty Trash” ???
Redesign the interaction • Make “shred” an explicit operation at the interface. (simulation)
What about “whoops?” • “Darn! I didn’t mean to hit shred.” Don’t use a “swat box”: (“this action cannot be undone…”) Instead: (simulation) (simulation)
Best Practices • Distinguish “toss” from “shred.” • Don’t use a “swat box” to confirm an action that can’t be undone! • It’s easier to beg forgiveness than to ask permission: let people change their minds. • “Polite Software Is Self-Confident” (Cooper, p. 167)
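The “let people change their minds” practice can be sketched as code: instead of a modal “this action cannot be undone” box, queue the destructive action and offer Undo until a grace period expires. The class and its names are illustrative assumptions, not any platform's real API.

```python
import time

class UndoableShred:
    """Queue shred requests; make them irreversible only after a
    grace period, so Undo replaces the up-front confirmation box."""

    def __init__(self, grace_seconds=30):
        self.grace = grace_seconds
        self.pending = {}   # path -> time the shred was requested

    def request_shred(self, path):
        # No "Are you sure?" dialog -- just record the request.
        self.pending[path] = time.monotonic()

    def undo(self, path):
        # True if the shred was still pending and is now cancelled.
        return self.pending.pop(path, None) is not None

    def purge_expired(self, really_shred):
        # Called periodically; only now does anything irreversible happen.
        now = time.monotonic()
        for path, t in list(self.pending.items()):
            if now - t >= self.grace:
                really_shred(path)
                del self.pending[path]
```

The design choice mirrors the slides: the error-prone moment (clicking shred by mistake) is decoupled from the irreversible moment (the actual overwrite), so the “whoops” case costs one Undo click instead of lost data.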
What else do you clear? • “Files” can be tossed or shredded… • “History” is cleared… Clear History “Erase my tracks.”
IE: Clearing history 1. Select “Internet Options” 2. Select “Clear History” 3. Confirm (no “undo”)
Safari: Clearing history • Safari makes it easier. • Give the ability to remove personal information where it is displayed… • It’s obvious because you see it!
Interaction puns • One action means two things: “Clear History” clears the history list, but to the user it means “erase my tracks.” • Many actions for one thing: actually erasing your tracks takes Clear History, Clear Cache, and Clear Cookies.
What’s a Cache? • Cache and cookies are not obvious… Where’s the cache? • We’ve had a huge public education campaign to teach people about the “cache…”
Each history item points to its entry in the “cache” (disk blocks)… Clearing the history could automatically clear the cache.
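The coupling suggested above can be sketched as code: one user-visible action clears both stores, removing the interaction pun. The dict-based stores are illustrative stand-ins for a browser's real history and cache storage.

```python
class Browser:
    """Toy model: history entries point at their cache entries,
    so "Clear History" can evict the cache entries too."""

    def __init__(self):
        self.history = []   # visited URLs, in order
        self.cache = {}     # URL -> cached page content

    def visit(self, url, content):
        self.history.append(url)
        self.cache[url] = content

    def clear_history(self):
        # "Erase my tracks": because each history item points at its
        # cache entry, clearing history also clears those entries --
        # one action, one meaning.
        for url in self.history:
            self.cache.pop(url, None)
        self.history.clear()
```

Note this still would not *sanitize* (the freed blocks remain recoverable, as the next slide points out); it only fixes the one-action/two-meanings pun at the interface level.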
But what about “Secure Empty Trash?” • “Clear History,” “Clear Cache” and “Reset Browser” don’t sanitize! • The privacy protecting features give a false sense of security.
Best Practices Follow the guidelines; help users to: S2. Be able to figure out how to successfully perform those tasks (if you mean delete, delete!) S3. Not make dangerous errors S5. Have sufficient feedback to accurately determine the current state of the system
Outline • Review • Security challenges • Usability guidelines • Heuristic evaluation • Part 1. Evaluating deletion & sanitization • Part 2. Phishing: Evaluating a solution
Next up: • A class of security attacks that target end-users rather than computer systems themselves. • Some slides are based on existing ones; credit on the bottom
A Recent Email… Images from the Anti-Phishing Working Group’s Phishing Archive; slide from “Phoolproof Phishing Prevention” by B. Parno, C. Kuo, A. Perrig
The next page requests: • Name • Address • Telephone • Credit Card Number, Expiration Date, Security Code • PIN • Account Number • Personal ID • Password
But wait… WHOIS 210.104.211.21: Location: Korea, Republic of. Even bigger problem: I don’t have an account with US Bank!
Phishing They demand authentication from us… but do we also want authentication from them?
What is phishing? Phishing attacks use both social engineering and technical subterfuge to steal consumers' personal identity data and financial account credentials. (http://www.antiphishing.org)
Characteristics of a phishing attack • Social Engineering. Phishing exploits individuals’ vulnerabilities to dupe victims into acting against their own interests. (Lure) • Automation. Computers are used to carry out phishing attacks on a massive scale. • Electronic Communication. Phishers use electronic communications networks (primarily the Internet). • Impersonation. A phishing attack requires perpetrators to impersonate a legitimate firm or government agency. Slide from “A Leisurely Lunch Time Phishing Trip” by Patrick Cain
Phishing is NOT: • Internet-based worms • Virus e-mail • Relatives stealing your wallet • Spam
Phishing Techniques • The cuckoo's egg: mimic a known institution (relies on graphical similarity) • Or narrow your focus: • Socially-aware mining: the e-mail is from a “known” individual • Context-aware attacks: “Your bid on eBay has won…”
Why is Phishing Successful? • Some users trust too readily • Users cannot parse URLs, domain names or PKI certificates • Users are inundated with requests, warnings and pop-ups Slide based on one in “Phoolproof Phishing Prevention” by B. Parno, C. Kuo, A. Perrig
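The “users cannot parse URLs” point can be made concrete. A classic lure puts a trusted name in the userinfo part of a URL; the browser actually contacts whatever follows the `@`. This sketch (the `lure` URL is a constructed example in the spirit of the US Bank slide, not a real phishing URL from the archive) shows how the true host differs from what a user reads:

```python
from urllib.parse import urlsplit

def actual_host(url):
    """Return the host the browser will really contact."""
    return urlsplit(url).hostname

# Everything before "@" in the authority is userinfo, not the
# destination -- but users read left to right and stop early.
lure = "http://www.usbank.com@210.104.211.21/login"
print(actual_host(lure))   # prints "210.104.211.21", not usbank.com
```

If people had to run this mentally on every link, phishing would be much harder; they don't, which is why the lure works and why modern browsers now strip or flag userinfo in URLs.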
Impact of Phishing • Costs the U.S. economy hundreds of millions of $$$ (e.g., $2.4 billion in bank-related fraud alone) • Affects 1+ million Internet users in the U.S. alone • And what about privacy? • The problem is growing: the number of phishing attacks doubled from 2004 to 2005 (from 16,000 to 32,000) Slide based on one in “iTrustPage: Pretty Good Phishing Protection” by S. Saroiu, T. Ronda, and A. Wolman