Privacy-preserving collaborative network anomaly detection Haakon Ringberg
Unwanted network traffic
• Problem
  • Attacks on resources (e.g., DDoS, malware)
  • Lost productivity (e.g., instant messaging)
  • Costs USD billions every year
• Goal: detect & diagnose unwanted traffic
  • Scale to large networks by analyzing summarized data
  • Greater accuracy via collaboration
  • Protect privacy using cryptography
Challenges with detection
• Data volume
  • Some commonly used algorithms analyze IP packet payload info
  • Infeasible at the edge of large networks
Challenges with detection
• Data volume
• Attacks deliberately mimic normal traffic
  • e.g., SQL-injection, application-level DoS¹
¹[Srivatsa TWEB ’08]
Challenges with detection
• Data volume
• Attacks deliberately mimic normal traffic
  • e.g., SQL-injection, application-level DoS¹
• Is it a DDoS attack or a flash crowd?²
  • A single network in isolation may not be able to distinguish
¹[Srivatsa TWEB ’08], ²[Jung WWW ’02]
Collaborative anomaly detection
• “Bad guys tend to be around when bad stuff happens”
Collaborative anomaly detection
• “Bad guys tend to be around when bad stuff happens”
• Targets (victims) could correlate attacks/attackers¹
• “Fool us once, shame on you. Fool us, we can’t get fooled again!”²
¹[Katti IMC ’05], [Allman Hotnets ’06], [Kannan SRUTI ’06], [Moore INFOC ’03]
²George W. Bush
Corporations demand privacy
• Corporations are reluctant to share sensitive data
  • Legal constraints
  • Competitive reasons
• e.g., CNN: “I don’t want FOX to know my customers”
Common practice
• Every network for themselves!
System architecture
• -like system
  • Greater scalability
  • Provide as a service
• Collaboration infrastructure
  • For greater accuracy
  • Protects privacy
N.B. collaboration could also be performed between stub networks
Dissertation Overview
Chapter I: scalable signature-based detection at individual networks
• Work with AT&T Labs:
  • Nick Duffield
  • Patrick Haffner
  • Balachander Krishnamurthy
Background: packet & rule IDSes
• Intrusion Detection Systems (IDSes)
  • Protect the edge of a network
  • Leverage known signatures of traffic
  • e.g., Slammer worm packets contain “MS-SQL” (say) in payload
  • or AOL IM packets use specific TCP ports and application headers
Background: packet and rule IDSes
• A predicate is a boolean function on a packet feature
  • e.g., TCP port = 80
• A signature (or rule) is a set of predicates
Benefits
• Leverage existing community
  • Many rules already exist
  • CERT, SANS Institute, etc.
• Classification “for free”
• Accurate (?)
Background: packet and rule IDSes
• A predicate is a boolean function on a packet feature
  • e.g., TCP port = 80
• A signature (or rule) is a set of predicates
Drawbacks
• Too many packets per second
• Packet inspection at the edge requires deployment at many interfaces
Background: packet and rule IDSes
• A predicate is a boolean function on a packet feature
  • e.g., TCP port = 80
• A signature (or rule) is a set of predicates
Drawbacks
• Too many packets per second
• Packet inspection at the edge requires deployment at many interfaces
• DPI (deep-packet inspection) predicates can be computationally expensive
  • e.g., packet has port number X, Y, or Z
  • Contains pattern “foo” within the first 20 bytes
  • Contains pattern “bar” within the last 40 bytes
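The predicate/signature structure above can be sketched in a few lines. This is a minimal illustration, not the deck's implementation; the helper names and the port number are hypothetical, and the payload check stands in for a DPI-style predicate.

```python
def port_is(port):
    """Predicate: packet's destination port equals `port`."""
    return lambda pkt: pkt["dst_port"] == port

def payload_contains(pattern):
    """Predicate: payload contains `pattern` (a DPI-style check)."""
    return lambda pkt: pattern in pkt["payload"]

def signature(*predicates):
    """A signature (rule) fires only when every predicate holds."""
    return lambda pkt: all(p(pkt) for p in predicates)

# A Slammer-like rule: MS-SQL port plus a payload pattern.
slammer = signature(port_is(1434), payload_contains(b"MS-SQL"))

pkt = {"dst_port": 1434, "payload": b"...MS-SQL..."}
print(slammer(pkt))  # True
```

Note how the payload predicate must scan the packet body, while the port predicate reads a fixed-offset header field — the cost asymmetry the slide's "Drawbacks" list is pointing at.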
Our idea: IDS on IP flows
How well can signature-based IDSes be mimicked on IP flows?
• Efficient
  • Only fixed-offset predicates
  • Flows are more compact
• Flow collection infrastructure is ubiquitous
• IP flows capture the concept of a connection
Idea
• IDSes associate a “label” with every packet
• An IP flow is associated with a set of packets
• Our system associates the labels with flows
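The label-lifting step can be sketched as follows: group packets into flows by their 5-tuple, then attach to each flow every label the IDS raised on any of its packets. A minimal sketch under assumed dict-based packet records; field names are illustrative.

```python
from collections import defaultdict

def label_flows(packets, ids_label):
    """Group packets into flows by 5-tuple; a flow inherits the
    union of the IDS labels raised on its packets."""
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key].append(pkt)
    return {key: {lbl for p in pkts for lbl in ids_label(p)}
            for key, pkts in flows.items()}
```

These (flow features, label) pairs are exactly the training set the machine-learning step below consumes.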
Snort rule taxonomy
• Some rules rely on features that cannot be exactly reproduced in the IP flow realm
Simple translation
• Our system associates the labels with flows
• Simple rule translation would capture only flow predicates
  • Low accuracy or low applicability
Example: Slammer worm
• Snort rule: dst port = MS SQL, contains “Slammer”
• Only flow predicates: dst port = MS SQL
Machine Learning (ML)
• Our system associates the labels with flows
• Leverage ML to learn mapping from “IP flow space” to label
  • e.g., IP flow space = src port × # packets × flow duration
Boosting
• Boosting combines a set of weak learners (h1, h2, h3, …) to create a strong learner: Hfinal(x) = sign(Σi wi hi(x))
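A minimal sketch of the combination step only (the weight-update loop of a full boosting algorithm is omitted, and the example stumps and weights are made up for illustration):

```python
def boosted_classifier(weak_learners, weights):
    """H_final(x) = sign(sum_i w_i * h_i(x)),
    where each weak learner h_i returns +1 or -1."""
    def H_final(x):
        score = sum(w * h(x) for h, w in zip(weak_learners, weights))
        return 1 if score >= 0 else -1
    return H_final

# Weak learners: threshold "stumps" on a single scalar feature.
h1 = lambda x: 1 if x > 2 else -1
h2 = lambda x: 1 if x > 5 else -1
h3 = lambda x: 1 if x < 9 else -1

H = boosted_classifier([h1, h2, h3], [0.4, 0.3, 0.3])
print(H(6))  # 1: all three stumps agree
```

Each stump is itself a predicate on a flow feature, which is why boosting fits this setting: the learned strong classifier is again a weighted rule over flow-level predicates.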
Benefit of Machine Learning (ML)
Example: Slammer worm
• Snort rule: dst port = MS SQL, contains “Slammer”
• Only flow predicates: dst port = MS SQL
• ML-generated rule: dst port = MS SQL, packet size = 404, flow duration
• ML algorithms discover new predicates to capture rule
  • Latent correlations between predicates
  • Capturing same subspace using different dimensions
Evaluation
• Border router on OC-3 link
  • Used Snort rules in place
  • Unsampled NetFlow v5 and packet traces
• Statistics
  • One month, 2 MB/s average, 1 billion flows
  • 400k Snort alarms
Accuracy metrics
• Receiver Operating Characteristic (ROC)
  • Full FP vs TP tradeoff
• But need a single number
  • Area Under Curve (AUC)
  • Average Precision (AP)
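Average Precision summarizes the ranking into one number: the mean of the precision values observed at each true positive when results are ranked by decreasing classifier score. A small self-contained sketch:

```python
def average_precision(scores_and_labels):
    """AP: rank by decreasing score, record precision at each true
    positive, and return the mean of those precision values."""
    ranked = sorted(scores_and_labels, key=lambda t: -t[0])
    tp = fp = 0
    precisions = []
    for score, is_positive in ranked:
        if is_positive:
            tp += 1
            precisions.append(tp / (tp + fp))
        else:
            fp += 1
    return sum(precisions) / len(precisions) if precisions else 0.0

# A perfect ranking scores 1.0; one false positive ranked between
# two true positives pulls AP down to (1 + 2/3) / 2.
print(average_precision([(0.9, True), (0.8, True), (0.1, False)]))
```

Unlike AUC, AP weights the top of the ranking heavily, which suits alarm triage: analysts mostly look at the highest-scoring flows.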
Classifier accuracy
• Training on week 1, testing on week n
  • Minimal drift within a month
• High degree of accuracy for header and meta rule groups (5–43 FP per 100 TP)
Variance within payload group
• Accuracy is a function of correlation between flow and packet-level features
Computational efficiency
Our prototype can support OC-48 (2.5 Gbps) speeds:
• Machine learning (boosting)
  • 33 hours per rule for one week of OC-48 data
• Classification of flows
  • 57k flows/sec on a 1.5 GHz Itanium 2
  • Line-rate classification for OC-48
Chapter II: Evaluating the effectiveness of collaborative anomaly detection
• Work with:
  • Matthew Caesar
  • Jennifer Rexford
  • Augustin Soule
Methodology
1) Identify attacks in IP flow traces
2) Extract attackers
3) Correlate attackers across victims
Identifying anomalous events
• Use existing anomaly detectors¹
  • IP scans, port scans, DoS
  • e.g., an IP scan is more than n IP addresses contacted
• Minimize false positives
  • Correlate with DNS BL
  • IP addresses exhibiting open proxy or spambot behavior
¹[Allan IMC ’07], [Kompella IMC ’04]
Cooperative blocking
• A set ‘S’ of victims agree to participate
• Beasty is blocked following initial attack
• Subsequent attacks by Beasty on members of ‘S’ are deemed ineffective
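The scheme above reduces to a shared blocklist: once any member of S reports an attacker, every member blocks it. A minimal sketch (class and method names are illustrative, not from the dissertation):

```python
class CooperativeBlocklist:
    """Shared blocklist for a set S of participating victims."""
    def __init__(self, members):
        self.members = set(members)
        self.blocked = set()

    def report(self, victim, attacker_ip):
        """A member of S reports an attacker; all members now block it."""
        if victim in self.members:
            self.blocked.add(attacker_ip)

    def allow(self, victim, src_ip):
        """Subsequent traffic from a blocked source to any member
        of S is deemed ineffective (dropped)."""
        return not (victim in self.members and src_ip in self.blocked)

S = CooperativeBlocklist({"cnn.com", "fox.com"})
S.report("cnn.com", "6.6.6.6")        # CNN sees the initial attack
print(S.allow("fox.com", "6.6.6.6"))  # False: FOX already blocks it
```

The payoff measured in this chapter is exactly how much of the later attack traffic the `allow` check would have dropped.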
DHCP lease issues
• Dynamic address allocation
  • IP address (e.g., 10.0.0.1) first owned by Beasty
  • Then owned by innocent Tweety
• Should not block Tweety’s innocuous queries
DHCP lease issues
• Dynamic address allocation
  • IP address first owned by Beasty
  • Then owned by innocent Tweety
• Should not block Tweety’s innocuous queries
• Update DNS BL hourly
• Block IP addresses for a period shorter than most DHCP leases¹
¹[Xie SIGC ’07]
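The lease-aware fix amounts to giving every blocklist entry a time-to-live shorter than typical DHCP leases, so a reassigned address unblocks itself. A minimal sketch (the class is hypothetical; explicit `now` arguments make it testable):

```python
import time

class TtlBlacklist:
    """Blacklist whose entries expire after `ttl` seconds, so an IP
    later reassigned by DHCP to an innocent host is unblocked."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.expiry = {}  # ip -> expiry timestamp

    def block(self, ip, now=None):
        now = time.time() if now is None else now
        self.expiry[ip] = now + self.ttl

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        exp = self.expiry.get(ip)
        return exp is not None and now < exp

bl = TtlBlacklist(ttl=3600)          # block for one hour
bl.block("10.0.0.1", now=0)          # Beasty attacks at t=0
print(bl.is_blocked("10.0.0.1", now=7200))  # False: entry expired
```

The evaluation below turns on exactly this Δ: the blacklist duration must be long enough to catch repeat attacks but shorter than most leases.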
Methodology
• IP flow traces from Géant
• DNS BL to limit FP
• Cooperative blocking of attackers for Δ hours
• Metric is fraction of potentially mitigated flows
Blacklist duration parameter Δ
• Collaboration between all hosts
• Majority of benefit can be had with small Δ
Number of participating victims
• Randomly selecting n victims to collaborate in the scheme
• Reported number is the average of 10 random selections
Number of participating victims
• Collaboration between the most victimized hosts
• Attackers are more likely to continue to engage in bad action “x” than a random other action
Chapter conclusion
• Repeat attacks often occur within one hour
  • Substantially less than average DHCP lease
  • Collaboration can be effective
• Attackers contact a large number of victims
  • 10k random hosts could mitigate 50%
• Some hosts are much more likely victims
  • Subsets of victims can see great improvement
Chapter III: Privacy-preserving collaborative anomaly detection
• Work with:
  • Benny Applebaum
  • Matthew Caesar
  • Michael J. Freedman
  • Jennifer Rexford
Privacy-Preserving Collaboration
• Protect privacy of
  • Participants: do not reveal who suspected whom
  • Suspects: only reveal suspects upon correlation
System sketch
• Trusted third party is a point of failure
  • Single rogue employee
  • Inadvertent data leakage
  • Risk of subpoena
System sketch
• Trusted third party is a point of failure
  • Single rogue employee
  • Inadvertent data leakage
  • Risk of subpoena
• Fully distributed is impractical
  • Poor scalability
  • Liveness issues
Split trust
Recall: participant privacy, suspect privacy
• Proxy and DB managed by separate organizational entities
• Honest-but-curious proxy, DB, participants (clients)
• Secure as long as proxy and DB do not collude
Protocol outline
• Clients send suspect IP addrs (x)
  • e.g., x = 127.0.0.1
• DB releases IPs above threshold
• But this violates suspect privacy!
Protocol outline
• Clients send hashed suspect IP addrs, H(x)
• DB releases IPs above threshold
• Still violates suspect privacy!
Protocol outline
• Clients send suspect IP addrs (x)
• IP addrs blinded with Fs(x)
  • Keyed hash function (PRF)
  • Key s held only by proxy
• DB releases IPs above threshold
• Still violates suspect privacy!
Protocol outline
• Clients send suspect IP addrs (x)
• IP addrs blinded with EDB(Fs(x))
  • Keyed hash function (PRF)
  • Key s held only by proxy
• DB releases IPs above threshold
• But how do clients learn EDB(Fs(x))?
Protocol outline
• Clients send suspect IP addrs (x)
• IP addrs blinded with EDB(Fs(x))
  • Keyed hash function (PRF)
  • Key s held only by proxy
  • EDB(Fs(x)) learned through secure function evaluation with the proxy
• DB releases IPs above threshold
• Possible to reveal IP addresses at the end
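The correlation step on the DB side can be sketched as follows. This is a simplified illustration only: HMAC-SHA256 stands in for the PRF Fs, and the encryption layer EDB(·) and the secure function evaluation by which clients obtain the blinded values are omitted, so the DB here sees keyed hashes directly rather than ciphertexts.

```python
import hmac
import hashlib
from collections import Counter

def Fs(key, ip):
    """Keyed hash (PRF) standing in for the protocol's F_s; only the
    proxy knows `key`, so the DB sees only opaque values."""
    return hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()

def correlate(blinded_reports, threshold):
    """DB side: count how many distinct clients reported each blinded
    suspect and release those at or above the threshold."""
    counts = Counter()
    for client_reports in blinded_reports:
        for v in set(client_reports):   # one vote per client
            counts[v] += 1
    return {v for v, c in counts.items() if c >= threshold}

key = b"proxy-secret"                   # held only by the proxy
reports = [
    [Fs(key, "10.0.0.1"), Fs(key, "10.0.0.2")],  # client 1's suspects
    [Fs(key, "10.0.0.1")],                        # client 2's suspects
]
hot = correlate(reports, threshold=2)
print(Fs(key, "10.0.0.1") in hot)  # True: two clients agree
```

Because the same IP always blinds to the same PRF value, the DB can count matches across clients without learning which IPs, or which clients, are involved; only suspects above the threshold are ever unblinded.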
Protocol summary
1) Clients send suspect IPs
  • Client learns Fs(x) using secure function evaluation
2) Proxy forwards to DB
  • Randomly shuffles suspects
  • Re-randomizes encryptions
3) DB correlates using Fs(x)
  • DB forwards bad IPs to proxy