Voluntary action against cybercrime: Measuring and increasing its impact
Tyler Moore, Lyle School of Engineering, Southern Methodist University, Dallas, TX, USA
Michel van Eeten, Faculty of Technology, Policy and Management, Delft University of Technology, NL
Talk describes joint work with Johannes M. Bauer (Michigan State), Hadi Asghari (TU Delft), Richard Clayton (Cambridge), Shirin Tabatabaie (TU Delft), and Marie Vasek (SMU)
Contact: tylerm@smu.edu
London Action Plan Meeting, Montreal, 22 October 2013
Motivation and context • Wicked content and actors pervade cyberspace • Websites (distribute malware, host phishing, …) • End-user machines (botnets, …) • Most cleanup is carried out voluntarily by private actors • The incentives of Internet intermediaries to cooperate largely determine the effectiveness of the response • Victim • Requesting party (often the victim or security companies) • Party receiving notice (e.g., ISPs, hosting providers)
Agenda • Empirical investigation of efforts to combat online wickedness • Notice and take-down regimes for cleaning websites • End-user machine infections and ISPs’ response • Mechanisms to improve cleanup • Reputation metrics to encourage ISP action • Notifications to remove malware from webservers • Future opportunities and experiments to improve notification-driven voluntary action
Takeaways from comparing website takedown efforts • The incentive on the party requesting content removal matters most • Banks are highly motivated to remove phishing websites • Banks overcome jurisdictional hurdles and the lack of a clear legal framework to get phishing pages removed • Banks' incentives remain imperfect: they only remove websites directly impersonating their brand, while overlooking mule-recruitment websites • Lack of data sharing substantially hampers cleanup speed • The technology chosen by the attacker has little impact • Full details: http://lyle.smu.edu/~tylerm/weis08takedown.pdf
Agenda • Empirical investigation of efforts to combat online wickedness • Notice and take-down regimes for cleaning websites • End-user machine infections and ISPs’ response • Mechanisms to improve cleanup • Reputation metrics to encourage ISP action • Notifications to remove malware from webservers • Future opportunities and experiments to improve notification-driven voluntary action
Research questions on end-user infections • To what extent are legitimate ISPs critical control points for infected machines? • To what extent do they perform differently relative to each other, in terms of the number of infected machines in their networks? • How do countries perform compared to each other? • Which intermediary incentives work for and against security?
Methodology • Using several longitudinal data sources on infected machines, each covering several hundred million IP addresses • Spam trap data • DShield IDS data • Conficker sinkhole data • For each IP address, look up country and ASN • Map ASNs to ISPs (and non-ISPs) in 40 countries (~200 ISPs cover ~90% market share in the wider OECD) • Connect data on infected machines with economic data (e.g., # subscribers per ISP) • Compensate for known measurement issues
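The aggregation step described above can be sketched as follows. This is a minimal illustration: the lookup tables are hypothetical stand-ins for the GeoIP/ASN databases and ISP market data used in the study.

```python
from collections import Counter

# Hypothetical lookup tables standing in for GeoIP/ASN databases and
# ISP market data; the actual study used commercial and public sources.
ip_to_asn = {"192.0.2.10": 64500, "192.0.2.77": 64500, "198.51.100.5": 64511}
asn_to_isp = {64500: "ISP-A", 64511: "ISP-B"}

def infections_per_isp(infected_ips):
    """Count unique infected IP addresses per ISP network."""
    unique = set(infected_ips)  # de-duplicate repeat sightings of one IP
    return Counter(
        asn_to_isp[ip_to_asn[ip]]
        for ip in unique
        if ip in ip_to_asn and ip_to_asn[ip] in asn_to_isp
    )

counts = infections_per_isp(
    ["192.0.2.10", "192.0.2.77", "192.0.2.77", "198.51.100.5"]
)
print(counts)  # Counter({'ISP-A': 2, 'ISP-B': 1})
```

Real measurements must also compensate for DHCP churn (one machine appearing under many IPs), which is one of the "known measurement issues" mentioned above.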
Research questions on end-user infections • To what extent are legitimate ISPs critical control points for infected machines? • To what extent do they perform differently relative to each other, in terms of the number of infected machines in their networks? • How do countries perform compared to each other? • Which intermediary incentives work for and against security?
Percentage of all infected machines worldwide located in top infected ISP networks (2009)
Percentage of all infected machines worldwide located in top infected ISP networks (2010)
Number and location of infected machines over time (2010, spam data)
Findings (1) – ISPs are control points • Data confirms that ISPs are key intermediaries • Over 80% of infected machines in wider OECD were located within networks of ISPs • Concentrated pattern: just 50 ISPs control ~50% of all infected machines worldwide • In sum: leading, legitimate ISPs have the bulk of infected machines in their networks, not ‘rogue’ providers
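The concentration finding above boils down to a cumulative-share computation over per-ISP infection counts. The sketch below uses a toy heavy-tailed distribution, not the study's data:

```python
def top_n_share(infections_by_isp, n=50):
    """Fraction of all infected machines located in the n most-infected networks."""
    counts = sorted(infections_by_isp.values(), reverse=True)
    return sum(counts[:n]) / sum(counts)

# Toy heavy-tailed distribution over 200 hypothetical ISPs
toy = {f"ISP-{i}": 1000 // (i + 1) for i in range(200)}
print(f"Top-10 share: {top_n_share(toy, n=10):.0%}")
```

With a heavy-tailed distribution like this, a small number of networks accounts for a large fraction of the total, mirroring the "50 ISPs control ~50%" pattern observed in the real data.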
Research questions • To what extent are legitimate ISPs critical control points for infected machines? • To what extent do they perform differently relative to each other, in terms of the number of infected machines in their networks? • How do countries perform compared to each other? • Which intermediary incentives work for and against security?
Findings (2) – ISPs differ significantly • ISPs of similar size vary by as much as two orders of magnitude in number of infected machines • Even ISPs of similar size in the same country can differ by one order of magnitude or more • These differences are quite stable over time and across different data sources
Stability of most infected ISPs over time 30 ISPs are in the top 50 in all four years Overlap of the 50 ISPs with the highest number of infected machines (2008–2011, spam data)
Stability of most infected ISPs over time 24 ISPs are in the top 50 in all four years Overlap of the 50 ISPs with the highest number of infected machines per subscriber (2008–2011, spam data)
Most infected ISPs across all datasets 26 ISPs are in the top 50 most infected networks in all three data sources Overlap of the top 50 ISPs with the highest number of infected machines across datasets (2010, absolute metrics)
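The overlap figures on the preceding slides amount to intersecting the top-50 sets from each yearly (or per-dataset) ranking. A minimal sketch, with hypothetical two-year data and n=2 for brevity:

```python
def stable_top_n(rankings, n=50):
    """ISPs appearing in the top n of every ranking.

    rankings: iterable of {isp: infected_count} dicts (one per year or dataset).
    """
    top_sets = [
        set(sorted(r, key=r.get, reverse=True)[:n]) for r in rankings
    ]
    return set.intersection(*top_sets)

# Hypothetical example: only ISP A stays in the top 2 both years
y2008 = {"A": 90, "B": 80, "C": 10}
y2009 = {"A": 85, "C": 70, "B": 5}
print(stable_top_n([y2008, y2009], n=2))  # {'A'}
```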
Research questions • To what extent are legitimate ISPs critical control points for infected machines? • To what extent do they perform differently relative to each other, in terms of the number of infected machines in their networks? • How do countries perform compared to each other? • Which intermediary incentives work for and against security?
What explains the huge variation in infection rates? • Even good ISPs tackle only a fraction of the bots in their network • Evidence from a recent study of the Dutch market suggests ISPs contact less than 10% of their infected customers at any point in time, even after Dutch ISPs signed the Anti-Botnet Treaty • This discrepancy arises partly because ISPs do not systematically collect data on infected machines in their networks • The situation is similar or worse in many other countries
Contacting/quarantining ~900 customers (~5%); contacting/quarantining ~1,000 customers (~6%)
Impact of telco regulation on security • Engagement of ISPs by telecom regulators and law enforcement improves security • For example, countries where regulators participate in the London Action Plan (LAP) have lower infection rates Notes: statistical significance at 1% (***) and 5% (**); n.a.: not available.
Agenda • Empirical investigation of efforts to combat online wickedness • Notice and take-down regimes for cleaning websites • End-user machine infections and ISPs’ response • Mechanisms to improve cleanup • Reputation metrics to encourage ISP action • Notifications to remove malware from webservers • Future opportunities and experiments to improve notification-driven voluntary action
Reputation metrics as incentives • The market for security is hampered by information asymmetry between intermediaries and customers • We often can’t tell which intermediaries are performing better than their peers/competitors • This weakens the incentives to invest in security • Reliable reputation metrics might change this • Example: Germany’s poor security ranking as a country led to the Botfrei initiative
Reputation metrics as incentives • The NL government commissioned TU Delft to develop reputation metrics on botnet infections for the Dutch market, in collaboration with the ISPs • The NL government also asked us not to make the results public, but to share them only with the group of ISPs working under the anti-botnet treaty • Did the metrics have an impact? • Looking at the worst performer in mid-2010
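A per-subscriber reputation metric of the kind developed for the Dutch market can be sketched as a simple normalized ranking. All figures below are hypothetical, chosen only to show why normalizing by subscriber count matters: the largest ISP is not necessarily the worst performer.

```python
def reputation_ranking(infections, subscribers):
    """Rank ISPs by infected machines per subscriber (lower is better)."""
    rates = {isp: infections[isp] / subscribers[isp] for isp in infections}
    return sorted(rates.items(), key=lambda item: item[1])

# Hypothetical market figures
infections = {"ISP-A": 12_000, "ISP-B": 3_000, "ISP-C": 9_000}
subscribers = {"ISP-A": 2_000_000, "ISP-B": 250_000, "ISP-C": 1_800_000}
ranking = reputation_ranking(infections, subscribers)
for isp, rate in ranking:
    print(f"{isp}: {rate * 1000:.1f} infected machines per 1,000 subscribers")
```

Here ISP-B has the fewest infected machines in absolute terms but the worst per-subscriber rate, which is the kind of comparison a shared scoreboard makes visible to peers.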
Infection rates at main Dutch providers, before and after reputation metrics
More information on TU Delft work • “Economics of Malware” (OECD, 2008) http://goo.gl/6HS4d • “Role of ISPs in Botnet Mitigation” (OECD, 2010) http://goo.gl/4UZQF • “ISPs and Botnet Mitigation: A Fact-Finding Study on the Dutch Market” (Dutch government, 2011) http://goo.gl/etFZj
Agenda • Empirical investigation of efforts to combat online wickedness • Notice and take-down regimes for cleaning websites • End-user machine infections and ISPs’ response • Mechanisms to improve cleanup • Reputation metrics to encourage ISP action • Notifications to remove malware from webservers • Future opportunities and experiments to improve notification-driven voluntary action
Voluntary cleanup of webservers distributing malware Cleanup of hacked websites distributing malware is coordinated and carried out by volunteers: security companies, search engines, non-profit organizations, web hosts and site owners Malware cleanup process • Detect a website distributing malware • Notify the website owner and hosting provider of the infection if the site is compromised, or the hosting provider and registrar if it is purely malicious • Search engines may block results until the malware is removed
Do malware notices work? We designed an experiment to assess the effectiveness of malware notices in remediating malware (“SBW Best Practices For Badware Reporting”) • Investigated malware URLs submitted to StopBadware’s Community Feed, Oct–Dec 2011 • Randomly assigned URLs to 3 groups • Control: no report • Minimal report: URL, IP, short description of malware, date/time detected • Full report: detailed description of malware (specific bad code, special information needed to deliver malware) • Follow up 1, 2, 4, 8, 16 days after the initial report
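The randomized assignment in the design above can be sketched as follows. The URLs and the round-robin dealing scheme are illustrative assumptions, not the study's actual procedure:

```python
import random

GROUPS = ("control", "minimal", "full")
FOLLOW_UP_DAYS = (1, 2, 4, 8, 16)  # re-check schedule after the initial report

def assign_groups(urls, seed=42):
    """Shuffle URLs and deal them round-robin into the three groups."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(urls)
    rng.shuffle(shuffled)
    return {url: GROUPS[i % len(GROUPS)] for i, url in enumerate(shuffled)}

# Hypothetical URLs standing in for the StopBadware Community Feed
urls = [f"http://malware-example-{i}.test/" for i in range(9)]
assignment = assign_groups(urls)
```

Dealing round-robin after a shuffle keeps the three groups the same size, which simplifies comparing cleanup rates at each follow-up day.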
Everything in the minimal notice plus detailed evidence of infection Example detailed notice
Takeaways from malware notification experiment • Reporting works • 40% of sites were cleaned up 1 day after receiving a full report, vs. 18% without notice • Fuller reports work better than concise reports • But only the first report matters • Concise reports are a waste of time • The experimental design could serve as a template for evaluating other notification regimes • Full details: http://lyle.smu.edu/~tylerm/cset12.pdf
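The size of the reported difference (40% vs. 18% cleanup after one day) can be sanity-checked with a pooled two-proportion z-test. The group sizes of 100 below are illustrative only; the actual sample sizes are in the CSET'12 paper linked above.

```python
import math

def two_proportion_z(cleaned1, n1, cleaned2, n2):
    """Pooled z-statistic for the difference between two cleanup rates."""
    p1, p2 = cleaned1 / n1, cleaned2 / n2
    pooled = (cleaned1 + cleaned2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 40% vs. 18% cleanup; groups of 100 are an assumption, not the study's data
z = two_proportion_z(40, 100, 18, 100)
print(f"z = {z:.2f}")  # well above 1.96, i.e. significant at the 5% level
```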
Agenda • Empirical investigation of efforts to combat online wickedness • Notice and take-down regimes for cleaning websites • End-user machine infections and ISPs’ response • Mechanisms to improve cleanup • Reputation metrics to encourage ISP action • Notifications to remove malware from webservers • Future opportunities and experiments to improve notification-driven voluntary action
How can we further improve cleanup of infected end-user machines? • What we’ve learned so far • ISPs are crucial intermediaries, with huge variation in infection rates • The incentive on the requesting party is key • Incident data is a prerequisite for cleanup • Most intermediaries don’t have a strong incentive to look hard for more comprehensive incident data • Many collaborative data-sharing efforts and notification experiments • Pull vs. push mechanisms for notification • Countries (e.g., US, NL, AU, DE) trying different approaches