Eclipse Attacks on Overlay Networks: Threats and Defenses By Atul Singh et al. Presented by Samuel Petreski, March 31, 2009
Outline • Eclipse Attack Description • Existing Defenses • New Defenses • Effectiveness Evaluation • Conclusion • Resources
Eclipse Attack Description • Overlay network • Decentralized graph of nodes at the edge of the network • Each node maintains a neighbor set • Typically limited control over membership • Eclipse Attack • Malicious nodes conspire to hijack and dominate the neighbor sets of correct nodes • Correct nodes are eclipsed from each other • Attackers can then control data traffic through routing
Eclipse Attack Description (cont.) • Unstructured Overlays • Few constraints on neighbor selection • Easy to bias neighbor discovery • Random walks • Learning from other neighbors • Structured Overlays • Routing tables are constrained to bound the number of hops • Typically, entries for long-distance hops are less constrained and therefore more susceptible
Eclipse Attack Description (cont.) • Eclipse Attack • An Eclipse attack can be mounted using a Sybil attack • A Sybil attack is not required for an Eclipse attack, however • In Gnutella, for example, malicious nodes can advertise only other malicious nodes during neighbor discovery
Existing Defenses • Central Authority (e.g., the BitTorrent tracker) • Constrained Routing Tables (CRT) • Certified, random, unique ID for every node • Neighbors are the nodes whose IDs are closest to specified points in the ID space • Lacks proximity optimizations • Proximity Constraints • Select the node with the lowest delay that still satisfies the constraints • An attacker may be able to manipulate delay measurements
New Defenses • Degree Bounds • Nodes mounting an Eclipse attack must acquire an in-degree well above the average in-degree of correct nodes, so bounding node degrees limits the attack (see the sketch below) • Enforcing Degree Bounds • Use a centralized membership service • Distributed auditing of neighboring nodes by checking backpointer lists
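The intuition can be illustrated with a minimal sketch (Python; representing the overlay as a plain dict of neighbor sets is an assumption made purely for illustration):

from collections import Counter

def in_degrees(neighbor_sets: dict[str, set[str]]) -> Counter:
    """Count how many nodes point at each node, i.e., its in-degree."""
    counts = Counter()
    for node, neighbors in neighbor_sets.items():
        for n in neighbors:
            counts[n] += 1
    return counts

def suspects(neighbor_sets: dict[str, set[str]], bound: int) -> list[str]:
    """Nodes whose in-degree exceeds the bound are candidates for mounting an Eclipse attack."""
    return [n for n, deg in in_degrees(neighbor_sets).items() if deg > bound]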
New Defenses (cont.) • Checking backpointer lists • Periodically, a node x challenges each of its neighbors for its backpointer list • If the list is too large or does not contain x, the audit fails and the neighbor is removed • Periodically, node x also checks the nodes on its own backpointer list to make sure each of them has a neighbor set / routing table of the correct size
New Defenses (cont.) • Checking backpointer lists (cont.) • Node x includes a random nonce in the challenge to ensure replies are fresh and authentic • The auditee node sends back the nonce and digitally signs the response • Node x checks the signature and the nonce before accepting the reply
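A minimal sketch of this challenge-response audit (Python; not the paper's implementation). The neighbor object with a respond_to_audit method is hypothetical, and an HMAC stands in for the digital signature described on the slide:

import hashlib
import hmac
import os

IN_DEGREE_BOUND = 16  # assumed per-row bound, matching the evaluation setup

def audit_neighbor(auditor_id: bytes, neighbor, shared_key: bytes) -> bool:
    """Challenge a neighbor for its backpointer list and verify the signed, fresh reply."""
    nonce = os.urandom(16)                    # random nonce ensures the reply is fresh
    reply = neighbor.respond_to_audit(nonce)  # hypothetical: {'backpointers': [...], 'nonce': ..., 'mac': ...}

    # Verify authenticity and freshness (HMAC used here in place of a digital signature).
    expected = hmac.new(shared_key, reply['nonce'] + b"".join(sorted(reply['backpointers'])),
                        hashlib.sha256).digest()
    if reply['nonce'] != nonce or not hmac.compare_digest(reply['mac'], expected):
        return False  # stale or forged reply: audit fails

    # The backpointer list must respect the degree bound and must contain the auditor.
    if len(reply['backpointers']) > IN_DEGREE_BOUND or auditor_id not in reply['backpointers']:
        return False
    return True

A neighbor that fails the audit would be removed from the neighbor set, as described above.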
New Defenses (cont.) • Anonymous Auditing • Use an anonymizer node to relay the audit, so the auditee cannot tell who is auditing it • Ex: Node x picks a random node y, the anonymizer, which relays the challenge to node z • Case 1: z is malicious, y is correct • Case 2: z is malicious, y is malicious • Case 3: z is correct, y is correct • Case 4: z is correct, y is malicious
New Defenses (cont.) • Anonymization Analysis • Assume an anonymizer node y is malicious with probability f • Of interest: the probability that a correct node is falsely detected as malicious • And the probability that a malicious node passes the audit (see the simulation sketch below)
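A rough Monte Carlo sketch of these two quantities (Python). It assumes a malicious anonymizer drops or forges the relayed challenge, a malicious auditee passes exactly when its anonymizer colludes, and a node is flagged only after failing k independent audits; these assumptions simplify the paper's actual analysis.

import random

def simulate_audits(f: float, k: int, trials: int = 100_000) -> tuple[float, float]:
    """Estimate (P[correct node flagged], P[malicious node evades]) under simplified assumptions."""
    correct_flagged = 0
    malicious_evaded = 0
    for _ in range(trials):
        # A correct auditee fails an individual audit only when its anonymizer is malicious.
        if all(random.random() < f for _ in range(k)):
            correct_flagged += 1
        # A malicious auditee passes an individual audit only when its anonymizer colludes.
        if all(random.random() < f for _ in range(k)):
            malicious_evaded += 1
    return correct_flagged / trials, malicious_evaded / trials

# With f = 0.2 and k = 3 independent audits, both rates are roughly f**3 = 0.008.
print(simulate_audits(0.2, 3))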
New Defenses (cont.) • Marking Malicious/Correct Nodes Suspicious • A larger fraction of malicious nodes makes them harder to detect • Correct nodes may also be (falsely) marked as suspicious
New Defenses (cont.) • Discovery of Anonymizer Nodes (for a node x) • a) pick a node at random • b) pick the node whose ID is closest to H(x) • c) pick a random node among the L nodes closest to H(x) (sketch below)
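A sketch of options (b) and (c), assuming a circular 160-bit ID space, SHA-1 as H, and node IDs derived by hashing node names (Python; the real protocol would use the overlay's secure routing to locate these nodes):

import hashlib

ID_SPACE = 2 ** 160  # assumed 160-bit circular identifier space

def node_point(name: str) -> int:
    """Map a node name to a point on the ring (assumption: IDs are SHA-1 hashes of names)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def closest_to_hash(x: str, members: list[str], L: int) -> list[str]:
    """Return the L members whose IDs are closest (with wrap-around) to H(x)."""
    target = node_point(x)

    def ring_distance(member: str) -> int:
        d = abs(node_point(member) - target)
        return min(d, ID_SPACE - d)

    return sorted(members, key=ring_distance)[:L]

# Option (b) is the special case L = 1; option (c) picks one of the L candidates at random.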
Effectiveness Evaluation • Effectiveness Evaluation Questions • How serious are Eclipse attacks on structured overlays? • How effective is the existing defense based on PNS against Eclipse attacks? • Is degree bounding a more effective defense? • What is the impact of degree bounding on the performance of PNS? • Is distributed auditing effective and efficient at bounding node degrees?
Effectiveness Evaluation • Experimental Setup • MSPastry (b = 4 and l = 16) • GT-ITM transit-stub topology of 5050 routers • Pairwise latency values from the King measurements • Set f = 0.2 • Malicious Nodes • Misroute join messages to malicious nodes • Supply only malicious nodes as references • Have only good nodes in routing table (16 per row)
Effectiveness Evaluation • With PNS turned off (GT-ITM latencies) • Fraction of malicious routing-table entries reaches about 70% in a 1000-node overlay and 80% in a 5000-node overlay • For the top row: about 90% at 1000 nodes and 100% at 5000
Effectiveness Evaluation • With PNS turned on (King latencies) • As overlay size increases, PNS becomes less effective • In the real Internet, a large fraction of nodes lie within a small latency band • The top row of the routing table is the easiest to hijack (least restrictive) • It is also the most dangerous, because it tends to provide the first hop when a node routes its own messages
Effectiveness Evaluation • Effectiveness of Degree Bounding • An oracle is used to maintain idealized degree bounds • Effectiveness decreases with larger overlays and looser in-degree bounds • Average delay increases by 25% with degree bounding (8% when the bound is raised to 32), due to tighter constraints on neighbor selection • With f = 0.2 and t = 1: f·t/(1−f) = 0.25 (worked out below)
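Reading the figure on the slide as a worked bound, under the assumption that f·t/(1−f) bounds the expected fraction of malicious routing-table entries, with f the fraction of malicious nodes and t the ratio of the in-degree bound to the average in-degree (this reading of t is an assumption):

\[
\frac{f\,t}{1-f} \;=\; \frac{0.2 \times 1}{1 - 0.2} \;=\; \frac{0.2}{0.8} \;=\; 0.25
\]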
Effectiveness Evaluation • Effectiveness of Auditing • Neighbor nodes are randomly audited every 2 minutes • It takes 24 challenges to audit a node • 2000-node simulation • Churn rates: 0%, 5%, 10%, 15% per hour • Target environment is low to moderately high churn • When malicious nodes reply, they return a random subset of their backpointers that respects the bound
Effectiveness Evaluation • In-Degree Distribution • Before auditing starts, malicious nodes are able to obtain high in-degrees • After 10 hours of operation (in a static overlay), all nodes have an in-degree ≤ 16
Effectiveness Evaluation • Reducing the Fraction of Malicious Nodes • Auditing starts 1.5 hours into the simulation • Correct nodes always enforce an in-degree bound of 16 per row • Analysis of the top row shows that higher churn requires more aggressive auditing • (Figures: entire routing table vs. top row)
Effectiveness Evaluation • Communication Overhead of Auditing • Total overhead (Pastry overlay with PNS, secure routing, and auditing) is about 4.2 msg/node/sec • Auditing at a rate of once every 2 minutes contributes about 2 msg/node/sec • The spike is due to every node searching for the anonymizer nodes it will use
Effectiveness Evaluation • Effectiveness Evaluation Questions • How serious are Eclipse attacks on structured overlays? • How effective is the existing defense based on PNS against Eclipse attacks? • Is degree bounding a more effective defense? • What is the impact of degree bounding on the performance of PNS? • Is distributed auditing effective and efficient at bounding node degrees?
Conclusion • Eclipse attacks are a real threat • Possible even in structured overlays and in overlays using PNS • A Sybil attack is not required for them to be effective • Bounding the degree of nodes in the network is a simple and effective defense • Distributed enforcement via anonymous auditing • Lightweight, and still allows PNS optimization • Limitations • Sensitive to high churn rates • Overhead is high relative to low application traffic • Does not work in all cases • Requires secure routing (CRT) for locating the anonymizer set
Resources • Atul Singh et al. Eclipse Attacks on Overlay Networks: Threats and Defenses. • http://cs.unc.edu/~fabian/courses/CS600.624/slides/Eclipse.pdf • http://www.cs.uiuc.edu/class/sp06/cs591ig/Eclipse.ppt • Baptiste Prêtre. Attacks on Peer-to-Peer Networks. • John R. Douceur. The Sybil Attack.