Ditto - A System for Opportunistic Caching in Multi-hop Mesh Networks
Fahad Rafique Dogar
Joint work with: Amar Phanishayee, Himabindu Pucha, Olatunji Ruwase, and Dave Andersen
Carnegie Mellon University
Wireless Mesh Networks (WMNs)
• Cost effective
• Greater coverage
• Testbeds: RoofNet@MIT, MAP@Purdue, …
• Commercial: Meraki
  • 100,000 users of the San Francisco ‘Free the Net’ service
Throughput Problem in WMNs
• Interference
• The gateway (GW) becomes a bottleneck
Exploiting Locality through Caching
• Path of the transfer: Alice, P1, P3, GW
• P1 and P3 perform on-path caching
• P2, which overhears the transfer, can perform opportunistic caching
• On-path + opportunistic caching -> Ditto
Ditto: Key Contributions
• Built an opportunistic caching system for WMNs
• Insights on opportunistic caching
  • Is it feasible?
  • Key factors
• Ditto's throughput compared with on-path caching and no caching
  • Up to 7x improvement over on-path caching
  • Up to 10x improvement over no caching
• Evaluation on two testbeds
Outline
• Challenge and Opportunity
• Ditto Design
• Evaluation
• Related Work
Challenge for Opportunistic Caching
• Wireless networks experience high loss rates
  • Usually dealt with through link-layer retransmissions
• An overhearing node also experiences losses
  • Unlike P1, P2 cannot ask for retransmissions
  • Successful overhearing of a large file is unlikely
• Main challenge: lossy overhearing
More Overhearing Opportunities
• Path of the transfer: Alice, P1, P3, …
• P2 may benefit from multi-hop transfers: it gets more than one chance to overhear the same data
• Reduces the problem of lossy overhearing
Outline
• Challenge and Opportunity
  • Lossy Overhearing
  • Multiple Opportunities to Overhear
• Ditto Design
  • Chunk Based Transfers
  • Ditto Proxy
  • Sniffer
• Evaluation
• Related Work
Chunk Based Transfers
• Motivation
  • Lossy overhearing -> smaller caching granularity
• Idea
  • Divide the file into smaller chunks (8–32 KB)
  • Use the chunk as the unit of transfer
• Ditto uses the Data Oriented Transfer (DOT) [1] system for chunk based transfers

[1] Tolia et al., An Architecture for Internet Data Transfer. NSDI 2006.
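To make the chunking idea concrete, below is a minimal sketch of splitting a file into fixed-size chunks and naming each chunk by a cryptographic hash of its contents, in the spirit of DOT. The 32 KB chunk size and the choice of SHA-1 are illustrative assumptions, not Ditto's exact parameters.

```python
import hashlib

CHUNK_SIZE = 32 * 1024  # Ditto uses 8-32 KB chunks; 32 KB is picked here for illustration

def chunk_file(path):
    """Split a file into fixed-size chunks and name each by its content hash.

    Naming chunks by their content hash is what lets any cache (on-path or
    overhearing) serve a chunk later, regardless of which transfer it came from.
    """
    chunk_ids = []          # ordered list of chunk IDs, enough to rebuild the file
    chunks = {}             # chunk ID -> chunk data
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            cid = hashlib.sha1(data).hexdigest()  # cryptographic hash -> chunk ID
            chunk_ids.append(cid)
            chunks[cid] = data
    return chunk_ids, chunks
```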
Data Oriented Transfer (DOT)
• Chunking: Foo.txt is divided into chunks, and a cryptographic hash of each chunk gives its chunk ID (chunkID1, chunkID2, chunkID3)
• DOT transfer: the receiver's app requests foo.txt, the sender's app responds with the chunk IDs {A, B, C}, and the DOT layers then exchange chunk requests and chunk responses
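The two-step structure of a DOT transfer (the application exchanges chunk IDs, then DOT fetches the chunks) can be sketched as follows; `request_file_metadata` and `fetch_chunk` are hypothetical stand-ins for the application-level response and DOT's chunk request/response.

```python
def dot_receive(sender, filename, request_file_metadata, fetch_chunk):
    """Receiver side of a DOT-style transfer (sketch).

    Step 1: the application asks the sender for the file and gets back only
            the ordered list of chunk IDs, not the data itself.
    Step 2: the DOT layer requests each chunk by ID and reassembles the file.
    """
    chunk_ids = request_file_metadata(sender, filename)       # response: chunk ids {A, B, C}
    chunks = [fetch_chunk(sender, cid) for cid in chunk_ids]  # chunk request / chunk response
    return b"".join(chunks)
```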
An Example Ditto Transfer
• Same as a DOT transfer, except that a Ditto proxy at each node relays the chunk requests and chunk responses hop by hop between sender and receiver
Ditto Proxy
• Separate TCP connection on each hop
• Next hop based on routing table information
• Performs both on-path and opportunistic caching
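A minimal sketch of how such a proxy could handle a chunk request, assuming an in-memory cache and a routing table that maps a destination to its next hop; the port number and wire format below are illustrative, not Ditto's actual protocol.

```python
import socket

class DittoProxy:
    """Per-hop chunk proxy (sketch).

    Each hop terminates the TCP connection, so a chunk cached here can be
    served without the request travelling any further toward the gateway
    (on-path caching). Chunks reconstructed by the sniffer are inserted into
    the same cache (opportunistic caching).
    """

    def __init__(self, routing_table):
        self.cache = {}                     # chunk ID -> chunk data
        self.routing_table = routing_table  # destination -> next hop (from the mesh routing protocol)

    def handle_chunk_request(self, chunk_id, destination):
        # 1. Serve locally if we have the chunk (on-path or overheard copy).
        if chunk_id in self.cache:
            return self.cache[chunk_id]
        # 2. Otherwise open a separate TCP connection to the next hop toward
        #    the destination and relay the request.
        next_hop = self.routing_table[destination]
        data = self._fetch_from(next_hop, chunk_id)
        # 3. Cache the response so later requests for this chunk stop here.
        self.cache[chunk_id] = data
        return data

    def _fetch_from(self, next_hop, chunk_id):
        # Placeholder per-hop request: send the chunk ID, read the chunk back.
        with socket.create_connection((next_hop, 8000)) as s:  # port is an assumption
            s.sendall(chunk_id.encode() + b"\n")
            return s.makefile("rb").read()
```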
Sniffer
• Path of the transfer: Alice, P1, P3, …; P2 overhears
• TCP stream identification through (Src IP, Src Port, Dst IP, Dst Port)
• Placement within the stream based on TCP sequence number
• Next step: inter-stream chunk reassembly
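A sketch of the bookkeeping this implies: overheard TCP segments are grouped by their (src IP, src port, dst IP, dst port) tuple, and payload bytes are placed at their sequence-number offset, so packets missed on the air simply leave gaps. Packet capture itself (e.g., via a pcap library) is omitted.

```python
from collections import defaultdict

class Sniffer:
    """Track overheard TCP payload per stream (sketch)."""

    def __init__(self):
        # (src IP, src port, dst IP, dst port) -> {sequence number: payload}
        self.streams = defaultdict(dict)

    def on_segment(self, src_ip, src_port, dst_ip, dst_port, seq, payload):
        """Record one overheard TCP segment."""
        if payload:
            self.streams[(src_ip, src_port, dst_ip, dst_port)][seq] = payload

    def contiguous_ranges(self, key):
        """Yield (start_seq, data) for each contiguous overheard byte range."""
        start, buf = None, b""
        for seq, payload in sorted(self.streams[key].items()):
            if start is None:
                start, buf = seq, payload
            elif seq == start + len(buf):
                buf += payload                 # extends the current range
            elif seq < start + len(buf):
                continue                       # retransmission/overlap; keep the first copy
            else:
                yield start, buf               # a gap from a lost packet ends the range
                start, buf = seq, payload
        if start is not None:
            yield start, buf
```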
Inter-Stream Chunk Reassembly
• Look for the Ditto header to identify chunk boundaries
• Exploits multiple overhearing opportunities: bytes of a chunk missed on one stream may be recovered from another
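A sketch of the reassembly step under an assumed wire format: a `DITTO` marker followed by a 40-character hex chunk ID and an 8-byte big-endian length (the real Ditto header layout is not given in the slides). Any overheard byte range, from any stream, that contains a complete header and body yields a chunk, which is verified against its content hash before it enters the cache.

```python
import hashlib

DITTO_MAGIC = b"DITTO"  # hypothetical header marker; Ditto's real framing differs

def reassemble_chunks(byte_ranges, cache):
    """Scan overheard byte ranges (possibly from different TCP streams) for
    complete, hash-verified chunks and add them to the proxy's cache."""
    for data in byte_ranges:
        pos = data.find(DITTO_MAGIC)
        while pos != -1:
            id_start = pos + len(DITTO_MAGIC)
            header_end = id_start + 40 + 8
            if header_end > len(data):
                break  # header truncated by a gap in overhearing
            chunk_id = data[id_start:id_start + 40].decode("ascii", errors="replace")
            length = int.from_bytes(data[header_end - 8:header_end], "big")
            body = data[header_end:header_end + length]
            # The chunk counts only if every byte was overheard and the
            # content hash matches the advertised chunk ID.
            if len(body) == length and hashlib.sha1(body).hexdigest() == chunk_id:
                cache[chunk_id] = body
            pos = data.find(DITTO_MAGIC, pos + 1)
    return cache
```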
Outline
• Challenges and Opportunities
• Ditto Design
• Evaluation
  • Testbeds
  • Experimental Setup
  • Key Results
  • Summary
• Related Work
Evaluation Scenarios
• Measuring overhearing effectiveness: the gateway transfers a file to one receiver while the remaining nodes act as observers
• Each observer reports the number of chunks successfully reconstructed
• Each node takes a turn as the receiver
Reconstruction Efficiency
• Around 30% of the observers reconstruct at least 50% of the chunks
• Around 60% of the observers don't reconstruct anything
Reconstruction Efficiency
• Around 50% of the observers are able to reconstruct at least 50% of the chunks
Throughput Evaluation
• Leaf nodes request the same file from the gateway
  • e.g., a software update on all nodes
• Different request patterns
  • Sequential, staggered
  • Random order of receivers
• Schemes
  • Ditto compared with on-path caching and end-to-end (E2E) transfers with no caching
Throughput Improvement in Ditto (Campus Testbed)
• Median throughputs for the three schemes compared: 540 Kbps, 1380 Kbps, and 5370 Kbps
Related Work
• Hierarchical caching [Fan98, Das07, …]
  • Caching is more effective on lossy wireless links
  • Ditto's overhearing feature is unique
• Packet-level caching [Spring00, Afanasyev08]
  • Ditto is purely opportunistic
  • Ditto exploits similarity at inter-request timescales
• Making the best of broadcast [MORE, ExOR, …]
  • Largely orthogonal
Conclusion
• Opportunistic caching works!
  • Key ideas: chunk-based transfer, inter-stream chunk reconstruction
  • Feasibility established on two testbeds
• Nodes closer to the gateway can shield it from becoming a bottleneck
• Significant benefit to end users
  • Up to 7x throughput improvement over on-path caching
  • Up to 10x throughput improvement over no caching