
Ditto - A System for Opportunistic Caching in Multi-hop Mesh Networks



  1. Ditto - A System for Opportunistic Caching in Multi-hop Mesh Networks Fahad Rafique Dogar Joint work with: Amar Phanishayee, Himabindu Pucha, Olatunji Ruwase, and Dave Andersen Carnegie Mellon University

  2. Wireless Mesh Networks (WMNs) • Cost Effective • Greater Coverage • Testbeds: RoofNet@MIT, MAP@Purdue, … • Commercial: Meraki • 100,000 users of San Francisco ‘Free the Net’ service

  3. Throughput Problem in WMNs [figure: mesh topology with nodes P1 and P3 transferring through the gateway] • Interference • The gateway (GW) becomes a bottleneck

  4. Exploiting Locality through Caching Path of the transfer: Alice, P1, P3, GW • P1 and P3 perform on-path caching • P2 can perform opportunistic caching • On-Path + Opportunistic Caching -> Ditto

  5. Ditto: Key Contributions • Built an opportunistic caching system for WMNs • Insights on opportunistic caching: is it feasible? What are the key factors? • Ditto's throughput compared with on-path and no-caching scenarios, evaluated on two testbeds • Up to 7x improvement over on-path caching • Up to 10x improvement over no caching

  6. Outline • Challenge and Opportunity • Ditto Design • Evaluation • Related Work

  7. Challenge for Opportunistic Caching • Wireless networks experience high loss rates, usually dealt with through link-layer retransmissions • An overhearing node also experiences losses, but unlike P1, P2 cannot ask for retransmissions • Successful overhearing of a large file is unlikely • Main challenge: lossy overhearing

  8. More Overhearing Opportunities Path of the transfer: Alice, P1, P3, … • P2 may benefit from multi-hop transfers: each hop is another chance to overhear the same data • Reduces the problem of lossy overhearing

  9. Outline • Challenge and Opportunity • Lossy Overhearing • Multiple Opportunities to Overhear • Ditto Design • Chunk Based Transfers • Ditto Proxy • Sniffer • Evaluation • Related Work

  10. Chunk Based Transfers • Motivation: lossy overhearing -> smaller caching granularity • Idea: divide the file into smaller chunks (8 – 32 KB) and use the chunk as the unit of transfer • Ditto uses the Data Oriented Transfer (DOT)1 system for chunk-based transfers 1 Tolia et al., An Architecture for Internet Data Transfer. NSDI 2006.

  11. Data Oriented Transfer (DOT) [figure: Foo.txt is split by chunking, and a cryptographic hash of each chunk yields its ID: chunkID1, chunkID2, chunkID3] DOT transfer: the receiver app requests foo.txt; the sender app responds with the chunk IDs {A, B, C}; the receiver's DOT layer then issues chunk requests, answered by chunk responses from the sender's DOT layer.
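
As a concrete illustration of the chunking step, here is a minimal sketch of how a DOT-style sender might derive chunk IDs; the 16 KB chunk size and the choice of SHA-256 are assumptions for illustration, not details taken from the slides.

```python
import hashlib

CHUNK_SIZE = 16 * 1024  # within the 8-32 KB range the slides mention

def chunk_file(path, chunk_size=CHUNK_SIZE):
    """Split a file into fixed-size chunks and name each by its hash."""
    chunks = {}      # chunk ID -> chunk bytes
    chunk_ids = []   # ordered list of IDs describing the file
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            cid = hashlib.sha256(data).hexdigest()  # self-verifying chunk ID
            chunks[cid] = data
            chunk_ids.append(cid)
    return chunk_ids, chunks

# The sender answers "Request - foo.txt" with the ordered chunk ID list;
# the receiver then fetches each chunk by ID and verifies it by re-hashing.
```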

  12. An Example Ditto Transfer [figure: same as the DOT exchange, except a Ditto proxy on each node forwards the chunk requests and responses between receiver and sender]

  13. Ditto Proxy • Separate TCP connection on each hop • Next hop chosen from routing-table information • Performs on-path caching as well as opportunistic caching
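
The per-hop proxy logic the slide describes can be sketched as follows; this is not the authors' implementation, and the names (DittoProxy, fetch_from, the dict-based cache and routing table) are hypothetical.

```python
class DittoProxy:
    """Sketch of per-hop proxy logic: hop-by-hop TCP plus on-path caching."""

    def __init__(self, cache, routing_table):
        self.cache = cache                  # chunk ID -> chunk bytes
        self.routing_table = routing_table  # destination -> next-hop address

    def handle_chunk_request(self, chunk_id, destination):
        # Serve locally if an earlier transfer (or overhearing) cached it.
        data = self.cache.get(chunk_id)
        if data is not None:
            return data
        # Otherwise forward the request over a separate TCP connection to
        # the next hop, chosen from routing-table information.
        next_hop = self.routing_table[destination]
        data = self.fetch_from(next_hop, chunk_id)
        self.cache[chunk_id] = data         # on-path caching
        return data

    def fetch_from(self, next_hop, chunk_id):
        # Placeholder: a real proxy would open a TCP connection to next_hop
        # and issue a DOT chunk request there.
        raise NotImplementedError
```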

  14. Sniffer Path of the transfer: Alice, P1, P3, … (P2 overhears) • TCP stream identification through (Src IP, Src Port, Dst IP, Dst Port) • Placement within the stream based on TCP sequence number • Next step: inter-stream chunk reassembly
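
A minimal sketch of the sniffer bookkeeping the slide describes: streams keyed by the TCP four-tuple, overheard payloads positioned by sequence number, and contiguous byte runs handed on for chunk reassembly. The class and method names are hypothetical.

```python
from collections import defaultdict

class Sniffer:
    """Reassemble overheard TCP streams (sketch)."""

    def __init__(self):
        # (src IP, src port, dst IP, dst port) -> {seq number: payload}
        self.streams = defaultdict(dict)

    def on_segment(self, src_ip, src_port, dst_ip, dst_port, seq, payload):
        key = (src_ip, src_port, dst_ip, dst_port)
        self.streams[key][seq] = payload  # position by sequence number

    def contiguous_runs(self, key):
        """Yield (start_seq, bytes) runs with no holes."""
        run_start, run = None, b""
        for seq, payload in sorted(self.streams[key].items()):
            if run_start is None:
                run_start, run = seq, payload
            elif seq <= run_start + len(run):
                # Exact continuation or overlap (e.g., a retransmission):
                # append only the genuinely new bytes.
                run += payload[run_start + len(run) - seq:]
            else:
                yield run_start, run  # hole: a segment was not overheard
                run_start, run = seq, payload
        if run_start is not None:
            yield run_start, run
```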

  15. Inter-Stream Chunk Reassembly • Look for the Ditto header to find chunk boundaries • Exploits multiple overhearing opportunities
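
The slides say only that the sniffer looks for the Ditto header to locate chunk boundaries; the sketch below assumes a hypothetical header format (magic bytes plus a length field) to show the idea. Because chunks are content-addressed, a chunk fully recovered from any one overheard stream is usable even when every individual stream has holes, which is how reassembly exploits multiple overhearing opportunities.

```python
import hashlib

MAGIC = b"DITTO"   # assumed marker; the real header format is not on the
LEN_BYTES = 4      # slide, only that it delimits chunk boundaries

def extract_chunks(data, cache):
    """Scan one overheard byte run for chunk boundaries and cache every
    chunk whose body was fully overheard."""
    i = data.find(MAGIC)
    while i != -1:
        start = i + len(MAGIC)
        if start + LEN_BYTES > len(data):
            break
        length = int.from_bytes(data[start:start + LEN_BYTES], "big")
        body = data[start + LEN_BYTES : start + LEN_BYTES + length]
        if len(body) == length:                 # chunk fully overheard
            cid = hashlib.sha256(body).hexdigest()
            cache[cid] = body  # content-addressed: any stream can contribute
        i = data.find(MAGIC, i + 1)
    return cache
```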

  16. Outline • Challenges and Opportunities • Ditto Design • Evaluation • Testbeds • Experimental Setup • Key Results • Summary • Related Work

  17. Emulab Wireless Testbed

  18. MAP Campus Testbed (Purdue Univ.) [figure: testbed map marking the gateway]

  19. Experimental Setup

  20. Evaluation Scenarios: Measuring Overhearing Effectiveness [figure: an example transfer from the gateway through P1 and P3, with the remaining nodes acting as observers] • Each observer reports the number of chunks successfully reconstructed • Each node in turn becomes a receiver

  21. Reconstruction Efficiency • Around 30% of the observers reconstruct at least 50% of the chunks • Around 60% of the observers don't reconstruct anything

  22. Reconstruction Efficiency • Around 50% of the observers are able to reconstruct at least 50% of the chunks

  23. Zooming In --- Campus Testbed

  24. Zooming In --- Campus Testbed

  25. Shield the gateway from becoming a bottleneck

  26. Throughput Evaluation • Leaf nodes request the same file from the gateway (e.g., a software update on all nodes) • Different request patterns: sequential, staggered; random order of receivers • Schemes: Ditto compared with On-Path and E2E (end-to-end, no caching)

  27. Throughput Improvement in Ditto (Campus Testbed) [CDF figure] E2E: median = 540 Kbps

  28. Throughput Improvement in Ditto (Campus Testbed) [CDF figure] E2E: median = 540 Kbps; On-Path: median = 1380 Kbps

  29. Throughput Improvement in Ditto (Campus Testbed) [CDF figure] E2E: median = 540 Kbps; On-Path: median = 1380 Kbps; Ditto: median = 5370 Kbps

  30. Evaluation Summary

  31. Related Work • Hierarchical Caching [Fan98, Das07, …]: caching is more effective on lossy wireless links; Ditto's overhearing feature is unique • Packet Level Caching [Spring00, Afanasyev08]: Ditto is purely opportunistic and exploits similarity at inter-request timescales • Making the best of broadcast [MORE, ExOR, …]: largely orthogonal

  32. Conclusion • Opportunistic caching works! • Key ideas: chunk-based transfer, inter-stream chunk reconstruction • Feasibility established on two testbeds • Nodes closer to the gateway can shield it from becoming a bottleneck • Significant benefit to end users • Up to 7x throughput improvement over on-path caching • Up to 10x throughput improvement over no caching

  33. Thank you!
