DDoS Defense by Offense Michael Walfish, Mythili Vutukuru, Hari Balakrishnan, David Karger, and Scott Shenker Presented by Corey White
Outline • DDoS Attack Types • Introduction • Applicability of Speak-Up • Design of Speak-Up • Revisiting Assumptions • Heterogeneous Requests • Implementation • Experimental Evaluation • Objections • Conclusion
DDoS Attack Types • Types of DDoS attacks • ICMP flood • Teardrop attack • Peer-to-peer attacks • Permanent denial-of-service attacks • Application-level floods • Nuke • Distributed attack
DDoS Attack Types continued… • Reflected attack • Degradation-of-service attacks • Unintentional attack
Introduction • Goal of speak-up: • Defend against DDoS (Distributed Denial of Service) attacks that mimic legitimate client behavior by sending proper-looking requests from commandeered, compromised hosts. • Such application-level attacks require far less bandwidth from the attacker (the targeted resource is the server's computation), and because the malicious requests are "in-band" and proper-looking, they are harder to identify, which makes them more dangerous. • Speak-up encourages good clients to send more traffic to an attacked server, allowing them to capture a larger share of it.
Introduction continued… • Taxonomy of defenses • Over-provision massively: purchase enough resources to serve both good and bad clients (not advised). • Detect and block: try to distinguish good clients from bad ones (e.g., profiling by IP address, rate-limiting, or CAPTCHA-based defenses). • Charge all clients in a currency: an attacked server gives a client service only after it pays in some currency (e.g., memory cycles via computational puzzles [1, 6, 7, 11, 12], or money).
Introduction continued… • The thinner is speak-up's central mechanism • It protects the server from overload and performs encouragement. • The virtual auction is speak-up's main form of encouragement • When the server is overloaded, the thinner makes clients automatically send a congestion-controlled stream of dummy bytes on a separate payment channel. • When the server is ready to process a request, the thinner admits the client that has sent the most bytes.
Applicability of Speak-Up • Four questions • How much bandwidth do good clients need for speak-up to work? Speak-up allocates service in proportion to available bandwidth, so good clients are fully served as long as they have enough bandwidth relative to the attackers. • How much bandwidth do good clients need to be unharmed by an attack? It depends on the server's spare capacity when it is not under attack. • Can a small web site using speak-up still be harmed? Yes; speak-up requires vast over-provisioning to fully withstand attacks by large botnets. • Does encouragement damage the network? The increase in total traffic is minimal, and congestion control handles the additional traffic.
Applicability of Speak-Up continued… • Threat model that speak-up defends against • The protected service needs enough link bandwidth (e.g., from its ISP) for the incoming request streams. • Good clients collectively need bandwidth of the same order of magnitude as (or more than) the attackers' to withstand attacks. • Advantages of speak-up • No pre-defined clientele: it permits a variety of clients. • Non-human clientele: it permits bot clients. • Unequal requests and spoofing: speak-up can charge clients more for harder requests when the request load is unequal.
Design of Speak-Up • Design Goals • Goal: allocate the server's resources to competing clients in proportion to their bandwidths. • Notation: c is the server's capacity (requests/s), g is the good clients' aggregate demand (requests/s), and G and B are the good and bad clients' aggregate bandwidths. • The server aims to process good requests at a rate of min(g, (G/(G+B))·c) requests per second. • Good clients are fully satisfied when (G/(G+B))·c ≥ g. • Ideal provisioning requirement: c ≥ g(1 + B/G), which defines c_id (a worked example follows below).
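As a worked illustration of the provisioning condition (the numbers below are hypothetical, not taken from the paper or these slides):

```latex
% Hypothetical numbers: good demand g = 40 requests/s,
% good bandwidth G = 50 Mbits/s, bad bandwidth B = 150 Mbits/s.
\[
  c_{\mathrm{id}} \;=\; g\left(1 + \frac{B}{G}\right)
          \;=\; 40\left(1 + \frac{150}{50}\right)
          \;=\; 160 \ \text{requests/s}.
\]
% With c = c_id = 160, good clients receive (G/(G+B)) * c = (50/200) * 160
% = 40 requests/s, exactly their demand g, so they are fully served.
```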
Design of Speak-Up continued… • Figure: an attacked server without speak-up vs. an attacked server with speak-up. Source: http://nms.lcs.mit.edu/papers/ddos-offense-sigcomm06.pdf
Design of Speak-Up continued… • Required mechanisms (3) • A mechanism to limit the rate of requests reaching the server to c per second. • A mechanism to reveal bandwidth: speak-up performs encouragement, causing a client to send more traffic for a request than it normally would while the server is under attack. • A proportional-allocation mechanism: admit clients at rates proportional to their delivered bandwidth. • The thinner implements these mechanisms: it performs encouragement and controls which requests the server sees.
Design of Speak-Up continued… • Random drops and aggressive retries (a first design) • The thinner drops requests at random to reduce the rate reaching the server to c. • When it drops a request, the thinner asks the client to retry immediately; good clients can retry at a higher rate, while bad clients are already sending as fast as they can, so good clients gain share. • Contrast with silent dropping: a silent drop implicitly says "try again later," whereas the thinner says "try again now." • A minimal sketch of this design follows below.
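The following is a minimal sketch of the random-drop-and-retry idea, assuming a hypothetical per-second admission window and an estimate of the arrival rate (these bookkeeping details are not specified in the slides):

```typescript
// Sketch of "random drops and aggressive retries" (illustrative only;
// the window/estimate bookkeeping here is an assumption).

interface IncomingRequest {
  clientId: string;
  askClientToRetryNow: () => void;   // the thinner's "try again now" signal
}

class RandomDropThinner {
  private admittedThisSecond = 0;

  constructor(private capacity: number) {}   // c, in requests per second

  // Handle one arriving request; `expectedArrivalsPerSecond` is the thinner's
  // current estimate of the total arrival rate.
  handle(req: IncomingRequest, expectedArrivalsPerSecond: number): boolean {
    const admitProbability = Math.min(1, this.capacity / expectedArrivalsPerSecond);
    if (Math.random() < admitProbability && this.admittedThisSecond < this.capacity) {
      this.admittedThisSecond++;
      return true;                            // forward to the server
    }
    req.askClientToRetryNow();                // not "later": retry immediately
    return false;
  }

  // Called once per second to start a new admission window.
  resetWindow(): void { this.admittedThisSecond = 0; }
}
```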
Design of Speak-Up continued… • Explicit payment channel • When the server is overloaded, the thinner asks each requesting client to open a separate payment channel; such clients are called contending clients. • Virtual auction: when the server is ready for a new request, the thinner admits the contending client that has sent the most bytes and terminates that client's payment channel. • Auctions happen every 1/c seconds on average, so the price in bytes per request is roughly (G + B)/c. • A sketch of the auction loop follows below.
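A minimal sketch of the thinner's virtual auction, assuming in-memory bookkeeping of contending clients (the real thinner is an event-driven proxy; these names are assumptions):

```typescript
// Sketch of the virtual auction held by the thinner (illustrative).

interface ContendingClient {
  id: string;
  bytesPaid: number;   // dummy bytes received so far on this client's payment channel
  request: unknown;    // the pending request the thinner is holding
}

class VirtualAuctionThinner {
  private contenders = new Map<string, ContendingClient>();

  // A client starts contending when the server is overloaded.
  enqueue(client: ContendingClient): void {
    this.contenders.set(client.id, client);
  }

  // Credit dummy bytes arriving on a payment channel to their sender.
  onPaymentBytes(clientId: string, nBytes: number): void {
    const c = this.contenders.get(clientId);
    if (c) c.bytesPaid += nBytes;
  }

  // Called whenever the server frees a slot (about every 1/c seconds):
  // admit the highest bidder and terminate its payment channel.
  onServerReady(): ContendingClient | undefined {
    let winner: ContendingClient | undefined;
    for (const c of this.contenders.values()) {
      if (!winner || c.bytesPaid > winner.bytesPaid) winner = c;
    }
    if (winner) this.contenders.delete(winner.id);  // close the payment channel
    return winner;                                  // forward winner.request to the server
  }
}
```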
Design of Speak-Up continued… • Robustness to cheating • Theorem: "In a system with regular service intervals, any client that continuously transmits an ε fraction of the average bandwidth received by the thinner gets at least an ε/2 fraction of the service, regardless of how the bad clients time or divide up their bandwidth." (DDoS Defense by Offense) • Theory vs. practice (weaknesses) • Bad clients can cheaply outbid good clients in intervals in which the good client has not yet paid much. • The payment channel runs over TCP, so a good client's payment rate only grows gradually. • Strength: the guarantee holds even against an adversary that sends few bytes while the good client's bid is low, sends more as that bid grows, and controls exactly when its bytes arrive; such adaptive attacks are still absorbed.
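The bound in the theorem can be stated compactly as follows (the numeric illustration is hypothetical):

```latex
% epsilon = the client's share of all bytes arriving at the thinner.
\[
  \text{fraction of service received} \;\ge\; \frac{\epsilon}{2},
  \qquad
  \epsilon \;=\; \frac{\text{bytes sent by the client}}{\text{total bytes received by the thinner}}.
\]
% Hypothetical illustration: a good client supplying 10 percent of the bytes
% (epsilon = 0.1) is served in at least 5 percent of the service intervals,
% no matter how the bad clients schedule or split their bandwidth.
```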
Revisiting Assumptions • Speak-up's effect on the network • Every flow between a client and the thinner is congestion-controlled, so none of them constitutes anti-social behavior. • Speak-up inflates upload traffic, but most traffic still flows in the download direction, so the overall effect on traffic inflation is minimal. • Without speak-up, if a good and a bad client share a bottleneck link, the bad client can already use more bandwidth and open more parallel connections than the good client. • A thinner can handle 1.5 Gbits/s of traffic and thousands of concurrent clients, which allows it to be provisioned so that it stays uncongested.
Heterogeneous Requests • Clients keep paying on the payment channel while their requests are being served, which lets the thinner implement the following procedure (a sketch follows the list below). Let v be the currently-active request and u the contending request that has paid the most: • If u has paid more than v, then suspend v, admit (or resume) u, and set u's payment to zero. • If v has paid more than u, then let v continue executing but set v's payment to zero (since v has not yet paid for the next quantum). • Time out and abort any request that has been suspended for some period (e.g., 30 seconds).
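A minimal sketch of this scheduling policy, with assumed bookkeeping fields (the timeout value is the slide's example):

```typescript
// Sketch of the heterogeneous-request policy (illustrative; the field
// names and the quantum-boundary hook are assumptions).

interface Req {
  id: string;
  paid: number;              // bytes paid toward the next service quantum
  suspendedSince?: number;   // timestamp (ms) set when the request is suspended
}

const SUSPEND_TIMEOUT_MS = 30_000;   // "e.g., 30 seconds" from the slide

// Called at each quantum boundary with the active request v and the
// top-paying contender u; returns the request that runs next.
function nextQuantum(v: Req, u: Req, now: number): Req {
  if (u.paid > v.paid) {
    v.suspendedSince = now;          // suspend v
    u.paid = 0;                      // u spends its payment on this quantum
    u.suspendedSince = undefined;
    return u;                        // admit (or resume) u
  }
  v.paid = 0;                        // v continues but must pay again for the next quantum
  return v;
}

// Abort any request that has been suspended for too long.
function shouldAbort(r: Req, now: number): boolean {
  return r.suspendedSince !== undefined && now - r.suspendedSince >= SUSPEND_TIMEOUT_MS;
}
```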
Implementation • The emulated server processes each request with a service time drawn uniformly at random; when it responds, the thinner returns the HTML to the client. Any JavaScript-capable Web browser can use the system. If the emulated server is not free, the thinner returns JavaScript that makes the Web client automatically issue two HTTP requests: • The first is the actual request to the server; the second is a 1 MB HTTP POST, constructed by the browser and filled with dummy data (1 MB reflects some browsers' limits on POSTs).
Implementation continued… • The large POST is the payment channel. If the client wins the auction, the thinner terminates the POST and forwards the request to the server. If the POST finishes before the client has received service, the thinner returns JavaScript that makes the browser send another large POST, and this process continues. • The thinner matches a client's payments to its request using an "id" field carried in both HTTP requests. • A browser-side sketch of this behavior follows below.
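A minimal browser-side sketch of that behavior, assuming hypothetical endpoint paths (/request and /payment) and a query-string id; in the real system the thinner injects equivalent JavaScript:

```typescript
// Client-side sketch (illustrative; endpoint paths and parameter names
// are assumptions, not the actual interface).

const ONE_MB = 1 << 20;

function postDummyBytes(id: string): Promise<unknown> {
  // The payment channel: a large POST of dummy data tagged with the same id.
  return fetch(`/payment?id=${id}`, {
    method: "POST",
    body: new Uint8Array(ONE_MB),   // 1 MB reflects some browsers' POST limits
  });
}

async function speakUpRequest(id: string): Promise<Response> {
  // First HTTP request: the actual request to the server, tagged with the id.
  const actual = fetch(`/request?id=${id}`);

  let served = false;
  actual.then(() => { served = true; }, () => { served = true; });

  // Keep sending payment POSTs until the actual request has been answered.
  (async () => {
    while (!served) {
      await postDummyBytes(id).catch(() => undefined);
    }
  })();

  return actual;
}
```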
Experimental Evaluation This figure shows that good clients get a larger fraction of the server with speak-up ("ON") than without it ("OFF"). When the service rate c is 50 or 100 requests/s, the allocation under speak-up is roughly proportional to the aggregate bandwidths; when the capacity is 200, all good requests are served. web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Experimental Evaluation This figure shows the fraction of the server allocated to the good clients as a function of f. Without speak-up, the bad clients capture a larger fraction of the server than the good clients because they make more requests and the overloaded server drops requests at random. With speak-up, the good clients can pay more for each of their requests (because they make fewer) and capture a fraction of the server roughly in proportion to their bandwidth. web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Experimental Evaluation The byte cost is measured as the number of bytes uploaded per served request, "the price," as recorded by the thinner. This figure shows the average price for good and bad clients as c varies, with G = B = 50 Mbits/s. When the server is overloaded (c = 50 or 100), the price is close to the upper bound, (G + B)/c; when the server is not overloaded (c = 200), good clients pay almost nothing. web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Experimental Evaluation This figure shows the heterogeneous client-bandwidth experiment with 50 LAN clients, all of them good. The fraction of the server allocated to the ten clients with bandwidth 0.5·i Mbits/s is close to the ideal proportional allocation. web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Experimental Evaluation This figure shows that good clients with longer RTTs (round-trip times) get a smaller share of the server than bad clients do, for whom RTT matters little. The result may seem unfair, but the effect is limited: in this experiment, no good client gets more than double or less than half its ideal allocation. web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Experimental Evaluation This figure shows the effect on a regular HTTP client of sharing a bottleneck link with speak-up clients. The graph shows the mean and standard deviation of end-to-end HTTP download latency with and without speak-up running, for various HTTP transfer sizes (shown on a log scale). web.cs.wpi.edu/~rek/Adv_Nets/Fall2006/DDOS_Offense.ppt
Objections • Bandwidth envy • High-bandwidth good clients are better off during attacks under speak-up (they can claim more of the server). • ISPs could offer high-bandwidth proxies to low-bandwidth clients to reduce this inequality in server resources. • Variable bandwidth costs • Again, ISPs could offer high-bandwidth proxies, or customers could decide for themselves whether to bid with more bandwidth. • Incentives for ISPs • Society's basic goodness and norms are relied upon to limit harmful conduct and provide the right incentives.
Objections continued… • Solving the wrong problem • Cleaning up botnets is the right long-term fix, but defenses like speak-up are needed for the problems that remain in the meantime. • Flash crowds (overload from good clients only) • Speak-up treats them like DDoS attacks. • Low-bandwidth sites are unaffected, because speak-up would not be deployed for them.
Conclusion • Main benefits • No network elements need to change • Only requires modifying servers and adding thinners • Main issues • Everyone floods, which makes it a lot harder to detect the bad clients • Edge networks can be hurt or misused at times • If the access links to the thinner are saturated, the whole system is useless for the clients