Low Latency via Redundancy
Ashish Vulimiri, P. Brighten Godfrey, Radhika Mittal, Justin Sherry, Sylvia Ratnasamy, Scott Shenker
Presented by Xuzi Zhou
Outline
- Introduction
- System View
  - Queuing Analysis
  - Applications
- Individual View
- Conclusion
CS 685 Fall 2013 Paper Presentation
Introduction
Why do we want low latency?
Introduction: about latency
- People react to small differences in latency
- For a website: higher latency = fewer visits = lower revenue
- Exponential distribution: the tail of the distribution is critical
- Possible causes: server overload, network congestion, packet loss, ...
Introduction: how to reduce latency?
Use redundancy:
- Duplicate an operation across diverse resources
- Use the first result received
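The "duplicate and take the first result" idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the `fetch` function is hypothetical, with latency simulated by a random sleep.

```python
import concurrent.futures as cf
import random
import time

def fetch(replica_id):
    """Hypothetical request to one replica; latency simulated with a sleep."""
    time.sleep(random.uniform(0.01, 0.2))  # simulated variable server latency
    return f"result-from-replica-{replica_id}"

def redundant_fetch(replicas):
    """Issue the same request to every replica; use the first result received."""
    with cf.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(fetch, r) for r in replicas]
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()
```

The latency of `redundant_fetch` tracks the minimum of the replicas' latencies, which is exactly why the tail of the distribution shrinks: a single slow replica no longer delays the answer.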
Introduction: why is redundancy not widely used?
- The "less work is best" mindset
- Redundancy raises system utilization: extra network bandwidth and computation cost
- The effectiveness of redundancy is unclear: when does it improve latency, when does it not, and how large is the gain?
SYSTEM VIEW: Queuing Analysis
Find the system load threshold:
- Below the threshold: redundancy improves latency
- Above the threshold: redundancy worsens latency
SYSTEM VIEW: Queuing Analysis
Service time distributions considered:
- Deterministic
- Variable (Pareto distribution)
SYSTEM VIEW: Queuing Analysis
If the client-side cost of redundancy is negligible:
- Deterministic service time is the worst case; the threshold is around 25% utilization
- With variable service times, the threshold depends on the Pareto tail index but stays above 30%
SYSTEM VIEW: Queuing Analysis
Effect of client-side overhead:
- Client-side overhead must be smaller than the mean request latency for redundancy to improve mean latency
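The load-threshold effect can be reproduced with a toy simulation. This is not the paper's queueing model: it assumes two servers with exponential service times, Poisson arrivals, and no cancellation of the slower copy (the losing duplicate still consumes server time, which is what drives the extra load).

```python
import random

def simulate(total_rate, n=20000, duplicate=False, seed=1):
    """Lindley-recursion simulation of two FIFO servers (service rate 1 each).

    Without duplication, arrivals alternate between the two servers; with
    duplication, every request joins both queues and finishes when the
    faster copy completes. Returns the mean response time.
    """
    rng = random.Random(seed)
    wait = [0.0, 0.0]          # unfinished work (waiting time) at each server
    total_resp = 0.0
    for i in range(n):
        gap = rng.expovariate(total_rate)           # time since last arrival
        wait = [max(0.0, w - gap) for w in wait]    # queues drain meanwhile
        s = [rng.expovariate(1.0), rng.expovariate(1.0)]
        if duplicate:
            resp = min(wait[0] + s[0], wait[1] + s[1])
            wait[0] += s[0]     # both copies occupy their servers
            wait[1] += s[1]
        else:
            k = i % 2           # alternate requests between servers
            resp = wait[k] + s[k]
            wait[k] += s[k]
        total_resp += resp
    return total_resp / n
```

At low offered load (e.g. `total_rate=0.2`, i.e. 10% per server without duplication) the duplicated system has a lower mean response time; at high load (e.g. `total_rate=0.9`) the doubled utilization dominates and duplication loses, mirroring the threshold behavior described above.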
SYSTEM VIEW: Application: disk-backed data store
Request: download a random file from the data store
Base configuration:
- Mean file size: 4 KB
- File size distribution: deterministic
- Memory cache ratio: 0.1
- Servers: 4; clients: 10
- Emulab nodes
SYSTEM VIEW: Application: disk-backed data store (results)
1. Base configuration
2. Mean file size 0.04 KB instead of 4 KB
3. Pareto file size distribution instead of deterministic
4. Cache:disk ratio 0.01 instead of 0.1
5. EC2 nodes instead of Emulab
6. Mean file size 400 KB instead of 4 KB
7. Cache:disk ratio 2 instead of 0.1
SYSTEM VIEW: Application: memcached
Test with the memcached in-memory store:
- Normal version: requests go to the database directly
- Stub version: calls a stub instead of the database, which returns results immediately, to estimate the effect of client-side latency
SYSTEM VIEW: Application: memcached (results)
SYSTEM VIEW: Application: replication in the network
Simulated fat-tree data center with 54 servers
Standard data center workload:
- Flow sizes: 1 KB to 3 MB
- 80% of flows smaller than 10 KB
Method:
- Every switch replicates the first few (eight) packets of each flow along an alternate route
- Replicated packets have lower priority
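The per-switch policy above can be sketched as follows. This is an illustrative sketch, not the paper's simulator: `ReplicatingSwitch`, its `forward` callback, and the route names are all assumptions introduced here.

```python
from collections import defaultdict

REPLICATE_FIRST_N = 8  # the first few (eight) packets of each flow are replicated

class ReplicatingSwitch:
    """Sketch: duplicate the first eight packets of each flow onto an
    alternate route, marking the copies as low priority."""

    def __init__(self, forward):
        self.forward = forward        # callback: forward(packet, route, low_priority=...)
        self.seen = defaultdict(int)  # packets seen so far, per flow id

    def on_packet(self, flow_id, packet, primary_route, alternate_route):
        # Always forward the original packet at normal priority.
        self.forward(packet, primary_route, low_priority=False)
        # Replicate only the start of the flow, at lower priority, so the
        # copies cannot delay other traffic.
        if self.seen[flow_id] < REPLICATE_FIRST_N:
            self.seen[flow_id] += 1
            self.forward(packet, alternate_route, low_priority=True)
```

Limiting replication to the first eight packets targets the short flows (80% of the workload here), which gain the most from dodging a congested path, while keeping the extra bandwidth cost bounded for long flows.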
SYSTEM VIEW: Application: replication in the network (results)
For flows smaller than 10 KB
Individual VIEW: Application: connection establishment
Replicate TCP-handshake packets: send two back-to-back copies of each packet to reduce the probability of packet loss.
In PlanetLab tests:
- Probability of individual packet loss: 0.0048
- Probability of losing a back-to-back packet pair: 0.0007
- 0.0048 >> 0.0007 >> 0.0048^2 (losses are correlated, but replication still helps)
- Reduces the average completion time of the handshake by about 25 ms in an idealized network with a 3-second timeout for SYN/SYN-ACK packets
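The numbers on this slide are internally consistent, which a back-of-envelope calculation shows. This rough check assumes each avoided loss saves exactly one timeout and ignores higher-order retransmissions:

```python
p_single = 0.0048   # measured loss probability of one packet (PlanetLab)
p_pair   = 0.0007   # measured loss probability of both back-to-back copies
timeout_ms = 3000   # idealized SYN/SYN-ACK retransmission timeout

# If losses were independent, the pair-loss probability would be p_single**2
# (~2.3e-5). The observed 0.0007 shows losses are correlated, yet
# replication still cuts the loss rate roughly 7x.
p_independent = p_single ** 2
assert p_independent < p_pair < p_single

# A handshake has two loss-sensitive packets (SYN and SYN-ACK); each
# avoided loss saves one timeout. Expected saving per handshake:
saving_ms = 2 * (p_single - p_pair) * timeout_ms  # ~24.6 ms, i.e. the ~25 ms above
```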
Individual VIEW: Application: DNS
Setup:
- Clients: 15 PlanetLab nodes across the continental US
- DNS servers: 10 (the local DNS server plus well-known public DNS servers)
- Website names: 1 million
Method:
- Every node ranks the DNS servers by response time
- Query the top n DNS servers with a random website name, for n = 1, 2, ..., 10
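The rank-then-race method can be sketched as below. This is an illustrative sketch only: the `history` table, the `resolve` stand-in, and its simulated latencies are all hypothetical, not measurements from the paper.

```python
import concurrent.futures as cf
import time

# Hypothetical measured mean response times (ms) per DNS server
history = {"local": 20, "8.8.8.8": 35, "1.1.1.1": 30, "9.9.9.9": 60}

def resolve(server, name):
    """Stand-in for a real DNS query; latency simulated from the history table."""
    time.sleep(history[server] / 1000.0)
    return (server, "simulated-A-record")

def query_top_n(name, n):
    """Rank servers by past response time, query the fastest n in parallel,
    and keep whichever answer arrives first."""
    ranked = sorted(history, key=history.get)[:n]
    with cf.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(resolve, s, name) for s in ranked]
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()
```

Ranking first means each extra server queried (larger n) adds a progressively slower replica, so the marginal latency benefit shrinks with n while the query cost grows linearly.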
Individual VIEW: Application: DNS (results)
Conclusion
- Redundancy improves latency below a system load threshold (typically between 25% and 50%) when the client-side cost of redundancy is low.
- Redundancy offers a significant benefit in a number of practical applications, both in the Internet and in the data center.
- Redundancy should be used more commonly in networked systems.
QUESTIONS?