Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Mohammad Ali Maddah-Ali, Bell Labs, Alcatel-Lucent
Joint work with Urs Niesen
Allerton, October 2013
Video on Demand • High temporal traffic variability • Caching (prefetching) can help to smooth traffic
Caching (Prefetching) • Placement phase: populate the caches; demands are not known yet • Delivery phase: requests are revealed, content is delivered
Problem Setting N files, a server, a shared link, K users, caches of size M • Placement phase: each user caches an arbitrary function of the files (linear, nonlinear, …) • Delivery phase: requests are revealed to the server; the server sends a function of the files over the shared link • Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How should we choose (1) the caching functions and (2) the delivery functions?
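The slides state this question only in words; a minimal LaTeX formalization, with the demand notation $d_1,\dots,d_K$ chosen here for concreteness rather than taken from the talk:

```latex
% Worst-case delivery rate: each user k requests file d_k, and R(d_1,...,d_K)
% is the normalized number of bits sent over the shared link to satisfy
% that demand vector given the cache contents fixed in the placement phase.
\[
  R^*(M) \;=\; \min_{\substack{\text{caching fns} \\ \text{delivery fns}}}
  \;\max_{(d_1,\dots,d_K)\,\in\,\{1,\dots,N\}^K} R\bigl(d_1,\dots,d_K\bigr)
\]
```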
Coded Caching (N files, K users, cache size M) • Uncoded caching: caches are used to deliver content locally; the local cache size matters • Coded caching [Maddah-Ali, Niesen 2012]: the main gain in caching is global; the global cache size matters (even though the caches are isolated)
Centralized Coded Caching [Maddah-Ali, Niesen, 2012] N=3 files (A, B, C), K=3 users, cache size M=2; approximately optimum • Split each file into 3 subfiles indexed by 2-user subsets: A = (A12, A13, A23), and similarly for B and C • User k caches every subfile whose index contains k (e.g., user 1 caches A12, A13, B12, B13, C12, C13) • If users 1, 2, 3 request A, B, C, the server sends the single coded message A23 ⊕ B13 ⊕ C12 of rate 1/3 • Multicasting opportunity between three users with different demands [Figure: the three caches and the coded transmission over the shared link]
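A minimal Python sketch of this example, following the standard subfile indexing of the scheme; the names `cache` and `delivery` are illustrative, not from the talk:

```python
from itertools import combinations

USERS = (1, 2, 3)   # K = 3 users
N, K, M = 3, 3, 2
t = K * M // N      # t = 2: subfiles are indexed by t-subsets of users

# Placement: split each file into C(K, t) = 3 subfiles, indexed by the
# 2-user subsets; user k caches every subfile whose index contains k.
# Cache load: 3 files x 2 subfiles x (1/3 file each) = 2 files = M.
t_subsets = list(combinations(USERS, t))        # (1,2), (1,3), (2,3)
cache = {k: {(f, S) for f in "ABC" for S in t_subsets if k in S}
         for k in USERS}

def delivery(demands):
    """demands[k] = file requested by user k, e.g. {1:'A', 2:'B', 3:'C'}.

    For every (t+1)-subset S of users, XOR together, over k in S, the
    subfile of user k's demand indexed by S \\ {k}.  Every user in S has
    cached all terms except its own, so one transmission serves |S| users.
    """
    messages = []
    for S in combinations(USERS, t + 1):
        messages.append([(demands[k], tuple(u for u in S if u != k))
                         for k in S])
    return messages

# With demands (A, B, C) this yields the single message A23 (+) B13 (+) C12
# of size 1/3 of a file -- the rate-1/3 transmission on the slide.
print(delivery({1: "A", 2: "B", 3: "C"}))
```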
Centralized Coded Caching: Limitations N=3 files, K=3 users, cache size M=2 • Centralized caching needs the number and identity of the users in advance • In practice, this is not the case: • Users may turn off • Users may be asynchronous • The topology may be time-varying (wireless) • Question: Can we achieve a similar gain without such knowledge?
Decentralized Proposed Scheme N=3 files, K=3 users, cache size M=2 • Prefetching: each user caches M/N = 2/3 of the bits of each file, chosen randomly, uniformly, and independently • This partitions each file into subfiles indexed by the subset of users that cached them (for file A: A∅, A1, A2, A3, A12, A13, A23, A123) • Delivery: greedy linear encoding; for each subset of users, XOR the subfiles each user needs that exactly the others in the subset have cached (e.g., A23 ⊕ B13 ⊕ C12, then the pairwise XORs, then the uncoded leftovers) [Figure: random cache contents of the three users and the resulting coded transmissions]
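A toy Python sketch of decentralized prefetching and greedy XOR delivery, assuming a small finite file; the bit counts and helper names such as `who_cached` are illustrative, not from the talk:

```python
import random
from itertools import combinations

USERS = (1, 2, 3)
FILES = "ABC"
BITS_PER_FILE = 30       # toy file size
CACHE_FRACTION = 2 / 3   # M/N = 2/3 of every file is cached

# Decentralized prefetching: each user independently caches a uniformly
# random 2/3 of the bits of every file -- no coordination needed.
random.seed(0)
cached = {(k, f): set(random.sample(range(BITS_PER_FILE),
                                    int(CACHE_FRACTION * BITS_PER_FILE)))
          for k in USERS for f in FILES}

def who_cached(f, bit):
    """Subset of users that happen to hold this bit of file f."""
    return frozenset(k for k in USERS if bit in cached[(k, f)])

def delivery(demands):
    """Greedy linear (XOR) delivery, largest user subsets first.

    For each subset S of users, XOR together bits that user k in S still
    needs (bits of demands[k] it did not cache) and that exactly the
    users S \\ {k} have cached, so everyone in S can decode its term.
    Returns the total number of transmitted bits.
    """
    transmissions = 0
    for size in range(len(USERS), 0, -1):
        for S in combinations(USERS, size):
            pending = {k: [b for b in range(BITS_PER_FILE)
                           if b not in cached[(k, demands[k])]
                           and who_cached(demands[k], b) == frozenset(S) - {k}]
                       for k in S}
            # One zero-padded XOR transmission per "row" across users in S.
            transmissions += max(len(v) for v in pending.values())
    return transmissions

rate = delivery({1: "A", 2: "B", 3: "C"}) / BITS_PER_FILE
print(f"delivery rate ~ {rate:.2f} files")  # concentrates as files grow
```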
Decentralized vs. Centralized Prefetching • Centralized prefetching: each file is deterministically partitioned into equal-size subfiles indexed 12, 13, 23 • Decentralized prefetching: the random, independent caching induces a finer partition into subfiles indexed ∅, 1, 2, 3, 12, 13, 23, 123, of random sizes [Figure: the two partitions of a file side by side]
Comparison (N files, K users, cache size M) • Uncoded: local cache gain; proportional to the local cache size; offers only a minor gain • Coded (centralized) [Maddah-Ali, Niesen, 2012]: global cache gain; proportional to the global cache size; offers a gain on the order of the number of users • Coded (decentralized): attains essentially the same global cache gain, without coordination in the placement phase (see the rate expressions below)
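The rate expressions appear only as images in the deck; reproduced here from the two cited papers (the centralized expression holds at the points where KM/N is an integer, with memory sharing in between):

```latex
% Delivery rates for N files, K users, cache size M (0 <= M <= N):
\begin{align*}
  R_{\mathrm{uncoded}}(M)       &= K\Bigl(1-\frac{M}{N}\Bigr) \\
  R_{\mathrm{centralized}}(M)   &= K\Bigl(1-\frac{M}{N}\Bigr)\cdot\frac{1}{1+KM/N} \\
  R_{\mathrm{decentralized}}(M) &= K\Bigl(1-\frac{M}{N}\Bigr)\cdot\frac{N}{KM}
                                   \Bigl(1-\bigl(1-\tfrac{M}{N}\bigr)^{K}\Bigr)
\end{align*}
% K(1 - M/N) is the local cache gain; the trailing factor is the global
% cache gain, which scales with the number of users K.
```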
Can We Do Better? Theorem: The proposed scheme is optimal in rate to within a constant factor. • Information-theoretic (cut-set) bound • The constant gap is uniform in the problem parameters • No significant gains beyond the local and global cache gains
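The lower bound behind this theorem is the cut-set bound; a sketch of its form, as stated in "Fundamental Limits of Caching":

```latex
% Cut-set bound: any s <= min(N,K) users, requesting distinct files over
% floor(N/s) rounds, must recover s*floor(N/s) files from the shared
% transmissions plus their s caches of size M.
\[
  R^*(M) \;\ge\; \max_{s \in \{1,\dots,\min(N,K)\}}
  \Bigl( s - \frac{s}{\lfloor N/s \rfloor}\, M \Bigr)
\]
```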
Asynchronous Delivery [Figure: three users, each requesting Segment 1, Segment 2, Segment 3, staggered in time; delivery is applied segment by segment]
Conclusion • We can achieve within a constant factor of the optimum caching performance through • Decentralized and uncoded prefetching • Greedy and linearly coded delivery • Significant improvement over uncoded caching schemes: reduction in rate up to the order of the number of users • Papers available on arXiv: • Maddah-Ali and Niesen, "Fundamental Limits of Caching" (Sept. 2012) • Maddah-Ali and Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff" (Jan. 2013) • Niesen and Maddah-Ali, "Coded Caching with Nonuniform Demands" (Aug. 2013)