
Wide-area Network Acceleration for the Developing World

This presentation addresses poor internet access in developing regions, exploring WAN acceleration strategies that focus on cache misses and content fingerprinting, with emphasis on chunk storage and peering.


Presentation Transcript


  1. Wide-area Network Acceleration for the Developing World Sunghwan Ihm (Princeton), KyoungSoo Park (KAIST), Vivek S. Pai (Princeton)

  2. POOR INTERNET ACCESS IN THE DEVELOPING WORLD • Internet access is a scarce commodity in developing regions: expensive and slow • Zambia example [Johnson et al. NSDR’10]: 300 people share a 1 Mbps satellite link costing $1200 per month Sunghwan Ihm, Princeton University

  3. WEB PROXY CACHING IS NOT ENOUGH (Diagram: users, local proxy cache, slow link, origin Web servers) • Caches whole objects, and only traffic designated cacheable • Zambia: 43% cache hit rate but only a 20% byte hit rate

  4. WAN ACCELERATION: FOCUS ON CACHE MISSES (Diagram: users, WAN accelerator, slow link, WAN accelerator, origin Web servers) • Packet-level (chunk) caching • Mostly deployed in enterprises • Two (or more) coordinated endpoints

  5. CONTENT FINGERPRINTING • Split content into chunks: Rabin fingerprinting over a sliding window • Declare a chunk boundary when the n low-order bits of the fingerprint match a global constant K • Average chunk size: 2^n bytes • Name chunks by content (SHA-1 hash) • Cache chunks and pass references
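A minimal sketch of content-defined chunking in this spirit. The multiplicative running hash, the window handling, and all constants below are simplified stand-ins for Rabin's windowed fingerprint, not the paper's implementation:

```python
import hashlib

N_BITS = 6                 # match n low-order bits -> ~2**6 = 64-byte average chunks
MASK = (1 << N_BITS) - 1
K = 0x15 & MASK            # global constant to compare against (arbitrary here)
MIN_CHUNK = 16             # avoid degenerate tiny chunks (an extra assumption)

def chunk(data: bytes):
    """Split data at content-defined boundaries. A toy multiplicative
    running hash stands in for Rabin's sliding-window fingerprint."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if i - start + 1 >= MIN_CHUNK and (h & MASK) == K:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def name(c: bytes) -> str:
    """Chunks are named by their content (SHA-1 hash)."""
    return hashlib.sha1(c).hexdigest()

data = bytes(range(256)) * 8
parts = chunk(data)
assert b"".join(parts) == data    # chunking is lossless
```

Because boundaries depend only on content, an edit early in a stream shifts only the chunks around it; later chunks keep the same names and remain cache hits.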

  6. WHERE TO STORE CHUNKS • Chunk data: usually stored on disk; can be cached in memory to reduce disk access • Chunk metadata index (name, offset, length, etc.): kept partially or completely in memory to minimize disk accesses
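The split between on-disk chunk data and an in-memory metadata index can be sketched as follows; the class name, field layout, and use of a BytesIO as a stand-in "disk" are illustrative assumptions, not the paper's storage engine:

```python
import hashlib
import io

class ChunkStore:
    """In-memory metadata index over an append-only chunk file.
    A BytesIO stands in for the on-disk chunk data."""
    def __init__(self):
        self.data = io.BytesIO()     # "disk": raw chunk contents
        self.index = {}              # name -> (offset, length), kept in RAM

    def put(self, chunk: bytes) -> str:
        name = hashlib.sha1(chunk).hexdigest()
        if name not in self.index:   # content-addressed: store each chunk once
            offset = self.data.seek(0, io.SEEK_END)
            self.data.write(chunk)
            self.index[name] = (offset, len(chunk))
        return name

    def get(self, name: str) -> bytes:
        offset, length = self.index[name]   # one in-memory index lookup ...
        self.data.seek(offset)              # ... then one seek and one read
        return self.data.read(length)

store = ChunkStore()
n = store.put(b"hello world")
assert store.get(n) == b"hello world"
```

Keeping the index in RAM means a cache hit costs one disk seek, which is why index memory pressure becomes the scaling limit.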

  7. HOW IT WORKS • Sender side (WAN accelerator near the origin Web server): users send requests, the accelerator fetches content from the Web server, generates chunks, and sends cached chunk names plus uncached raw content (compression) • Receiver side (WAN accelerator near the users): fetches chunk contents from local disk (cache hits), requests any cache misses from the sender-side node, and streams the reassembled data to users as it arrives (cut-through)
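A toy end-to-end simulation of this names-plus-raw-bytes exchange; the message tuples, cache dictionary, and `fetch_miss` hook are invented for illustration and are not Wanax's wire format:

```python
import hashlib

def sha1(b: bytes) -> str:
    return hashlib.sha1(b).hexdigest()

def sender_encode(chunks, receiver_has):
    """Send names for chunks the receiver caches, raw bytes otherwise."""
    msgs = []
    for c in chunks:
        n = sha1(c)
        msgs.append(("ref", n) if n in receiver_has else ("raw", c))
    return msgs

def receiver_decode(msgs, cache, fetch_miss):
    """Rebuild the stream; fetch_miss models asking the sender-side
    node for a referenced chunk that is missing locally."""
    out = []
    for kind, payload in msgs:
        if kind == "raw":
            cache[sha1(payload)] = payload   # learn new chunks as they arrive
            out.append(payload)
        else:
            out.append(cache.get(payload) or fetch_miss(payload))
    return b"".join(out)

chunks = [b"GET /", b"index", b".html"]
cache = {sha1(b"index"): b"index"}           # receiver already holds one chunk
msgs = sender_encode(chunks, receiver_has=set(cache))
rebuilt = receiver_decode(msgs, cache, fetch_miss=lambda n: b"")
assert rebuilt == b"GET /index.html"
```

Only the middle chunk travels as a short reference; the two uncached chunks cross the slow link once and become future cache hits.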

  8. WAN ACCELERATOR PERFORMANCE • Effective bandwidth (throughput) = original data size / total time • Transfer (send data + refs): needs a high compression rate • Reconstruction (hits from cache): needs high disk performance and low memory pressure
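The effective-bandwidth definition (original data size divided by total time) can be made concrete with a small worked example; all numbers are made up, and treating transfer and reconstruction time as purely additive is a simplification:

```python
original_size = 10 * 1024 * 1024   # 10 MiB of logical data delivered to users
transfer_time = 4.0                # seconds to send refs + uncached raw data
reconstruct_time = 1.0             # seconds of disk reads for cache hits

# simplification: assume the two phases do not overlap at all
effective_bw = original_size / (transfer_time + reconstruct_time)
assert effective_bw == 2 * 1024 * 1024   # 2 MiB/s effective bandwidth
```

Even over a slow physical link, the effective bandwidth can exceed the link rate because cached bytes never cross the link, which is the point of the metric.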

  9. DEVELOPING WORLD CHALLENGES/OPPORTUNITIES • Enterprise: dedicated machine with ample RAM, high-RPM SCSI disk, inter-office content only, star topology • Developing world: shared machine with limited RAM, slow disk, all content, mesh topology • Result: enterprise designs yield poor performance in the developing world

  10. WANAX: HIGH-PERFORMANCE WAN ACCELERATOR • Design goals: maximize compression rate, minimize memory pressure, maximize disk performance, exploit local resources • Contributions: Multi-Resolution Chunking (MRC), Peering, Intelligent Load Shedding (ILS)

  11. SINGLE-RESOLUTION CHUNKING (SRC) TRADEOFFS • Small chunks: high compression rate but high memory pressure and low disk performance (example: 93.75% savings, 15 disk reads, 15 index entries) • Large chunks: low compression rate but low memory pressure and high disk performance (example: 75% savings, 3 disk reads, 3 index entries)

  12. MULTI-RESOLUTION CHUNKING (MRC) • Use multiple chunk sizes simultaneously • Large chunks for low memory pressure and fewer disk seeks • Small chunks for a high compression rate • Example: 93.75% savings with only 6 disk reads and 6 index entries

  13. GENERATING MRC CHUNKS • Detect the smallest chunk boundaries first • Generate larger chunks by requiring more matching bits at the detected boundaries • Result: clean chunk alignment and less CPU work
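A sketch of deriving coarser-resolution boundaries from the finest ones by requiring more matching fingerprint bits. The running hash and all constants are illustrative stand-ins; the property being demonstrated is that, with a shared constant K, every large-chunk boundary is automatically also a small-chunk boundary, so the resolutions align cleanly:

```python
def fingerprints(data: bytes):
    """Toy running hash standing in for Rabin fingerprints (one per byte)."""
    h, out = 0, []
    for b in data:
        h = (h * 31 + b) & 0xFFFFFFFF
        out.append(h)
    return out

def boundaries(fps, n_bits, k):
    """Positions where the n low-order bits of the fingerprint match k."""
    mask = (1 << n_bits) - 1
    return {i for i, h in enumerate(fps) if (h & mask) == (k & mask)}

data = bytes(range(256)) * 16
fps = fingerprints(data)          # computed once, shared by all resolutions
K = 0x3A5                         # one global constant across resolutions
small = boundaries(fps, 6, K)     # finest resolution: 6 bits must match
large = boundaries(fps, 10, K)    # coarser resolution: 4 extra bits must match

# matching 10 low-order bits implies matching the low 6, so large-chunk
# boundaries are a subset of small-chunk boundaries: clean alignment
assert large <= small
```

This is also why generating all resolutions is cheap: one fingerprint pass over the data serves every chunk size.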

  14. STORING MRC CHUNKS • Store every chunk regardless of content overlap; no association among chunks • One index entry load + one disk seek per chunk • Reduces memory pressure and disk seeks • Disk space is cheap; disk seeks are the limited resource (*alternative storage options in the paper)

  15. PEERING • Cheap or free local networks (e.g., wireless mesh, WiLDNet) • Multiple caches in the same region pool extra memory and disk • Use Highest Random Weight (HRW) hashing: robust to node churn, scalable (no broadcasts or digest exchanges), trades CPU cycles for a low memory footprint
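A minimal HRW (rendezvous hashing) sketch; peer names, chunk names, and the SHA-1 scoring scheme are illustrative. The churn-robustness claim is checkable directly: removing a peer only remaps the chunks that peer owned:

```python
import hashlib

def hrw_owner(peers, chunk_name):
    """Highest Random Weight: every peer scores the chunk; the highest
    score wins. No shared state or digest exchange is needed, since
    every node computes the same answer independently."""
    def score(peer):
        return hashlib.sha1((peer + chunk_name).encode()).digest()
    return max(peers, key=score)

peers = ["cache-a", "cache-b", "cache-c"]
chunks = ["chunk%d" % i for i in range(200)]

before = {c: hrw_owner(peers, c) for c in chunks}
# cache-b departs (node churn)
after = {c: hrw_owner([p for p in peers if p != "cache-b"], c) for c in chunks}

# only chunks owned by the departed peer change owners
for c in chunks:
    if before[c] != "cache-b":
        assert after[c] == before[c]
```

The cost is recomputing one hash per peer per lookup, which is the "trade CPU cycles for a low memory footprint" point: no routing table or digest needs to be stored or synchronized.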

  16. INTELLIGENT LOAD SHEDDING (ILS) • Two sources for fetching chunks: cache hits from disk, cache misses from the network • Fetching chunks from disk is not always desirable: the disk may be heavily loaded (shared/slow) while the network is underutilized • Solution: dynamically adjust network and disk usage to maximize throughput

  17. SHEDDING SMALLEST CHUNKS • Move the smallest chunks from disk to network until the network becomes the bottleneck • Disk cost: one seek regardless of chunk size • Network cost: proportional to chunk size • Total latency: max(disk, network)
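A greedy sketch of this policy under the slide's cost model (one fixed seek per chunk on disk, network time proportional to bytes): shed the smallest cached chunks to the network while doing so lowers max(disk, network). The function name, constants, and workload are made up for illustration:

```python
def shed_smallest(hit_sizes, miss_bytes, seek_ms, bw_bytes_per_ms):
    """Return (finish_time_ms, shed_sizes): move the smallest cache hits
    from disk to network while that reduces max(disk, network)."""
    hits = sorted(hit_sizes)               # smallest chunks are shed first
    disk = len(hits) * seek_ms             # one seek per chunk, any size
    net = miss_bytes / bw_bytes_per_ms     # misses must use the network anyway
    shed = []
    while hits:
        s = hits[0]
        new_disk = disk - seek_ms
        new_net = net + s / bw_bytes_per_ms
        if max(new_disk, new_net) >= max(disk, net):
            break                          # network is about to become the bottleneck
        hits.pop(0)
        shed.append(s)
        disk, net = new_disk, new_net
    return max(disk, net), shed

# 20 cached chunks of growing size, 1 KB of misses, 10 ms/seek, 64 B/ms link
t, shed = shed_smallest([1024 * i for i in range(1, 21)], 1024, 10.0, 64.0)
baseline = max(20 * 10.0, 1024 / 64.0)     # serve every hit from disk
assert t < baseline                        # shedding shortened the schedule
```

Shedding smallest-first is the right greedy order under this model: each shed saves the same fixed seek time, so the chunk that adds the least network time goes first.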

  18. SIMULATION ANALYSIS • News sites: Web browsing of dynamically generated sites (1 GB); the front pages of 9 popular sites (CNN, Google News, NYTimes, Slashdot, etc.) refreshed every five minutes • Linux kernel: large file downloads (276 MB); two different versions of the Linux kernel source tarball • Metrics: bandwidth savings and number of disk reads

  19. BANDWIDTH SAVINGS • SRC: high metadata overhead • MRC: close to ideal bandwidth savings

  20. DISK FETCH COST • MRC greatly reduces the number of disk seeks: 22.7x fewer at 32-byte chunks

  21. CHUNK SIZE BREAKDOWN • SRC: high metadata overhead • MRC: far fewer disk seeks and index entries (40% of content handled by large chunks)

  22. EVALUATION • Implementation: single-process event-driven architecture, 18,000 lines of C, HashCache (NSDI’09) as chunk storage • Intra-region bandwidth: 100 Mbps • In-memory content cache disabled, since large working sets do not fit in memory • Machines: AMD Athlon 1 GHz / 1 GB RAM / SATA; Emulab pc850 (P3-850, 512 MB RAM, ATA)

  23. MICROBENCHMARK • Workload: 90% similar 1 MB files over a 512 kbps / 200 ms RTT link • MRC, ILS, and peering each improve throughput (higher is better), with no tuning required

  24. REALISTIC TRAFFIC: YOUTUBE • Classroom scenario: 100 clients download an 18 MB clip over a 1 Mbps / 1000 ms RTT link • Tested with both high-quality (490) and low-quality (320) versions (higher throughput is better)

  25. IN THE PAPER • Enterprise test: MRC scales to high-performance servers • Other storage options: MRC has the lowest memory pressure; saving disk space increases memory pressure • More evaluations: overhead measurements, Alexa traffic

  26. CONCLUSIONS • Wanax: a scalable, flexible WAN accelerator for developing regions • MRC: high compression, high disk performance, low memory pressure • ILS: dynamically adjusts disk and network usage for better performance • Peering: aggregates disk space, enables parallel disk access, and makes efficient use of local bandwidth

  27. Contact: sihm@cs.princeton.edu • http://www.cs.princeton.edu/~sihm/
