Trust Online and the Phishing Problem: why warnings are not enough. M. Angela Sasse (based on work by Iacovos Kirlappos, Katarzyna Krol, Matthew Moroz). Department of Computer Science & SECReT Doctoral Training Centre, UCL. <event name>, 22/09/2011.
Outline • Basics of trust • Two lab studies, on an anti-phishing tool and on security warnings • … which explain why current signals don’t work • What can we do? • Design • Communication to users
What is trust? Trust is only required in the presence of risk and uncertainty “… willingness to be vulnerable, based on positive expectations about the actions of others” M. Bacharach & D. Gambetta 2001. Trust as Type Detection. In: Castelfranchi, C. & Tan, Y. Trust and Deception in Virtual Societies.
Ignore these at your peril … • trust = a split-second assessment, rather than thorough risk analysis and assurance • reliance = after several successful transactions, no perceived vulnerability = an even faster, fraction-of-a-split-second assessment
How do we decide when to trust? • People assess the transaction partner’s ability and motivation [Deutsch, 1956] • We look for cues (trust signals) that indicate these • This assessment can be based on • cognitive elements (rational) • affective reactions (pre-cognitive)
[Diagram: the mechanics of trust. (1) The trustee emits signals to the trustor. The trustor then either (2a) performs the trusting action, accepting the RISK, or (2b) withdraws to an outside option. If the trustor acts, the trustee either (3a) fulfils or (3b) defects.]
Dis-embedding Interaction is stretched over time and space and involves complex socio-technical systems [Giddens, 1990] … pervasive in modern societies (e.g. catalogue shopping) So – what’s so special about trust online? • Increased risk • Privacy (more data required) • Security (open system) • Own ability (errors) • Increased uncertainty • Inexperienced with decoding cues • Fewer surface cues available • Traditional cues no longer useful J. Riegelsberger, M. A. Sasse, & J. D. McCarthy: The Mechanics of Trust. Int. J. of Human-Computer Studies, 2005.
Study 1: phishing • Passive phishing indicators (Spoofstick etc.) have limited effect • Users don’t look at indicators • Users don’t know what indicators mean • Require users to disrupt their main task • Time-consuming and error-prone R. Dhamija et al.: Why Phishing Works. Procs ACM CHI 2006. Schechter et al.: The Emperor’s New Security Indicators. IEEE Security & Privacy 2007.
Are active anti-phishing tools better? • Example: SOLID by First Cyber Security • Traffic Light approach: • Passive indicator when no risk exists • Becomes active when a risk is identified
[Screenshot: for a safe website, the passive indicator simply shows green]
“Extreme Caution” • Shows up only when the website the users attempt to visit is certainly unsafe • Presents three options: • Redirection to the authentic website (Default option) • Close the window • Proceed to the risky site
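The three-option flow above can be sketched in code. This is a minimal illustration of the traffic-light idea, not SOLID's actual implementation; the names (`Verdict`, `handle_navigation`, the callback signatures, the example URLs) are all hypothetical:

```python
from enum import Enum

class Verdict(Enum):
    SAFE = "green"     # passive indicator only, user is not interrupted
    UNSAFE = "red"     # active "Extreme Caution" window appears

def handle_navigation(url, classify, prompt_user, authentic_site):
    """Sketch of the active-warning flow: stay passive when the site is
    judged safe, interrupt with a three-option dialog only on confirmed
    risk, and make redirection to the authentic site the default."""
    if classify(url) is Verdict.SAFE:
        return url  # no interruption of the user's primary task
    # Active warning: "redirect" is the DEFAULT option, so a user who
    # clicks OK without reading still ends up somewhere safe.
    choice = prompt_user(options=("redirect", "close", "proceed"),
                         default="redirect")
    if choice == "proceed":
        return url            # user knowingly overrides the warning
    if choice == "close":
        return None           # window closed, no navigation
    return authentic_site     # safe default: the genuine website

# Stand-in callbacks: a classifier that flags one known phishing URL,
# and a "user" who just accepts whatever the default is.
classify = lambda u: Verdict.UNSAFE if "phish" in u else Verdict.SAFE
accept_default = lambda options, default: default
print(handle_navigation("http://phish.example/bank",
                        classify, accept_default,
                        "https://bank.example/"))
```

The safe default is the design point the study results turn on: even participants who did not read the message landed on the genuine site.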
Results – Active Warning • The “Extreme Caution” window resulted in 17 out of 18 participants visiting the genuine website. • Clear information • Right timing • Context-specific • The safe default is important: users clicked “OK” without fully understanding the message they had been presented with • They were still redirected to the genuine website
Results – did they still take risks? • The tool reduced the number of participants taking risks • But: some still took risks
Why do users ignore the recommendation? • Price = the main factor for ignoring the tool: the “Need and Greed” principle (Stajano & Wilson: Understanding Scam Victims. Comm. ACM, March 2011) • General advice like “If it is too good to be true, it usually is” doesn’t work
“I know better …” • Participants believe they can rely on their own ability to identify scam websites, and ignore the tool • Past experience with high false-positive rates creates a negative attitude towards security indicators • Cormac Herley: security tools/advice offering a poor cost-benefit trade-off will be rejected by users C. Herley: So Long, And No Thanks for all the Externalities. Procs NSPW 2009.
Other trust cues • Perceived familiarity (reliance) • Mentioning other entities – Facebook and Twitter logos • Ads – “Why would anyone pay to advertise on a dog site?”, mention of charities • Lots of info, privacy policies, and good design
Symbols of trust • arbitrarily assigned meaning • specifically created to signify the presence of trust-warranting properties • must be difficult to forge (mimicry) and carry sanctions in the case of misuse • expensive: the trustor has to know about their existence and how to decode them, and trustees need to invest in emitting them and in getting them known
Symptoms of trust • not specifically created to signal trust-warranting properties – rather, by-products of the activities of trustworthy actors • e.g. a trustworthy online retailer has a large customer base and repeat business • exhibiting symptoms of trust incurs no cost for trustworthy actors, whereas untrustworthy actors would have to invest effort to mimic those signals
Study 2: pdf warnings Most common file types in targeted attacks in 2009. Source: F-Secure (2010)
The experiment • Two conditions: between-subjects design • Participant task: reading two articles and evaluating their summaries • Choosing the first article: no warning • Choosing the second article: a warning was shown for each article the participants tried to open
General results • 120 participants (64 female, mean age 25.7) • χ²(1) = 1.391, p = 0.238: no significant overall difference between the warning conditions
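For readers unfamiliar with the reported statistic: a χ² test with one degree of freedom is what you get from a 2×2 contingency table (warning condition × download decision). A minimal stdlib-only sketch of that computation, using made-up counts purely for illustration (these are not the study's data):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test for a 2x2 contingency table.

    table: [[a, b], [c, d]], e.g. rows = condition (generic vs.
    specific warning), columns = decision (downloaded vs. refused).
    Returns (chi2 statistic, p-value for df = 1).
    """
    rows = [sum(r) for r in table]
    cols = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(rows)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts, 60 participants per condition
chi2, p = chi_square_2x2([[45, 15], [38, 22]])
print(round(chi2, 3), round(p, 3))
```

With a statistic this small and p well above 0.05, the conclusion is the same as the slide's: no significant difference between conditions.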
Gender differences • Women were more cautious and less likely to download an article with a warning
Eye-tracking data • Fixation time in seconds • By warning type • 6.13 for generic warnings • 6.33 for specific warnings • By subsequent reaction • 6.94 for those who subsequently refused to download • 5.63 for those who subsequently downloaded the article No significant difference in fixation length: all participants were fairly attentive to the warning regardless of its text, but then took different decisions
[Chart: hypothetical vs. observed behaviour, for generic and specific warnings]
Reasons for ignoring warning • Desensitisation (55 participants): past experience of false positives
Reasons for ignoring warning • Trusting the source (29) “It depends on what the source was, if I was getting it from a dodgy website, I probably wouldn’t download it. But if something was sent to me by a friend or a lecturer or I was downloading it from a library catalogue, I would have opened it anyway.”
Reasons for ignoring warning • Trusting anti-virus (18) “I trusted that the anti-virus on my computer would pick anything up.” • Trusting PDF (15) “I don’t think PDF files can have this kind of harm in them. It says ‘PDF files can harm your computer’ and I know they can’t.”
Why security warnings don’t work • Warnings are unreliable and badly designed • more noise than signal • interrupt users’ primary task • pop-ups are associated with adverts and updates = ANNOYING!!! • Users have misconceptions: • about risks and indicators • about their own competence
Conclusions: What can be done? • Re-design the interaction: eliminate choice, automatically direct users to safe sites • More effective trust signalling: develop symptoms of trust and protect symbols better • Get rid of useless warnings • Better communication about risks, correct misconceptions about trust signals
Good Human Factors – by a security person • The system must be substantially, if not mathematically, undecipherable; • The system must not require secrecy, and it must be able to fall into the enemy’s hands without causing trouble; • It must be easy to communicate and remember the keys without requiring written notes; it must also be easy to change or modify the keys with different participants; • The system ought to be compatible with telegraph communication; • The system must be portable, and its use must not require more than one person; • Finally, regarding the circumstances in which such a system is applied, it must be easy to use and must neither require stress of mind nor the knowledge of a long series of rules. Auguste Kerckhoffs, ‘La cryptographie militaire’, Journal des sciences militaires, vol. IX, pp. 5–38, Jan. 1883; pp. 161–191, Feb. 1883.