Coded Caching with Non-Uniform Demands
Mohammad Ali Maddah-Ali, Urs Niesen
Bell Labs, Alcatel-Lucent
Least Frequently Used
N=2 Files, K=1 User, Cache Size M=1
• Populate the cache in low-traffic time
• Cache the most popular file(s): with request probabilities P_A = 2/3 and P_B = 1/3, the user's size-one cache holds file A
• E[R] = P_B = 1/3: the average rate is the same as the miss rate
LFU is optimum when there is a single cache memory in the system, because LFU minimizes the miss rate.
Least Frequently Used
N=2 Files, K=2 Users, Cache Size M=1
P_A = 2/3, P_B = 1/3
Both caches hold file A, so the server must send file B (rate 1) whenever at least one of the two users requests it:
E[R] = 1 - (2/3)² = 5/9
Is this optimum?
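A quick sanity check of this expression (a minimal Monte Carlo sketch in Python; the parameters mirror the slide, while the seed and trial count are arbitrary assumptions):

```python
import random

# Monte Carlo check of the LFU average rate for N=2 files, K=2 users, M=1.
# Both caches hold file A, so the server must send all of B (rate 1)
# whenever at least one of the K users requests B.
P_B, K, trials = 1 / 3, 2, 200_000
random.seed(0)

misses = sum(any(random.random() < P_B for _ in range(K)) for _ in range(trials))
print(misses / trials)   # close to 1 - (2/3)**2 = 5/9 ≈ 0.556
```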
Proposed Coded Scheme
N=2 Files, K=2 Users, Cache Size M=1
Split each file into two halves, A = (A1, A2) and B = (B1, B2). User 1 caches (A1, B1); user 2 caches (A2, B2). Suppose user 1 requests A and user 2 requests B.
• Multicasting opportunity for users with different demands: the server sends the single coded packet A2 ⊕ B1
• Simultaneous multicasting opportunity: the same transmission is useful to both users at once
• User 1 XORs out its cached B1 to recover A2; user 2 XORs out its cached A2 to recover B1
• The same idea works for every demand pair, so E[R] = 1/2 < 5/9: LFU is not optimum for cache networks!
• Minimizing the miss rate is not the target; providing multicasting opportunities is more important.
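A minimal sketch of this two-user scheme in Python (the file contents and variable names are illustrative; a real system would split each file into two equal-size halves of bits):

```python
# Two-user coded caching: one XOR packet serves both users at once.
def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

A1, A2 = b"A-half-1", b"A-half-2"
B1, B2 = b"B-half-1", b"B-half-2"

cache1 = {"A1": A1, "B1": B1}   # placement, done before demands are known
cache2 = {"A2": A2, "B2": B2}

# Demands: user 1 wants A, user 2 wants B. Send one coded half-file.
tx = xor(A2, B1)                 # rate 1/2 instead of 1

assert xor(tx, cache1["B1"]) == A2   # user 1 cancels B1, recovers A2
assert xor(tx, cache2["A2"]) == B1   # user 2 cancels A2, recovers B1
print("both users served with a single half-file transmission")
```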
Problem Setting
N files at a server, connected through a shared link to K users, each with a cache of size M.
Placement: each cache stores an arbitrary function of the files (linear, nonlinear, …), before the demands are known.
Delivery: the requests are revealed to the server, and the server sends a function of the files over the shared link.
Question: what is the minimum average rate R(M) needed in the delivery phase? How should we choose (1) the caching functions and (2) the delivery functions?
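In symbols, a standard formalization of the two phases (the notation is assumed here, not on the slide: W_1, …, W_N are the F-bit files, Z_k the cache contents, d = (d_1, …, d_K) the demand vector):

```latex
\begin{align*}
\text{Placement:} \quad & Z_k = \phi_k(W_1,\ldots,W_N), & H(Z_k) &\le MF,\\
\text{Delivery:}  \quad & X_d = \psi_d(W_1,\ldots,W_N), & H(X_d) &\le R_d F,\\
\text{Decoding:}  \quad & \hat{W}_{d_k} = \mu_{k,d}(X_d, Z_k) = W_{d_k} \quad \text{for all } k,\\
\text{Objective:} \quad & R(M) = \min_{\phi,\,\psi,\,\mu}\ \mathbb{E}_d\!\left[R_d\right]. &&
\end{align*}
```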
Decentralized Proposed Scheme
N=3 Files, K=3 Users, Cache Size M=2
Prefetching: each user caches 2/3 of the bits of each file
- randomly,
- uniformly,
- independently.
This splits each file into subfiles indexed by the subset of users that cached them (∅, 1, 2, 3, 12, 13, 23, 123).
Delivery: greedy linear encoding. For each subset of users, the server XORs together the subfiles needed by each user in the subset and cached by all the others; e.g., for the user set {1,2,3}, it sends the XOR of the subfile needed by user 1 and cached by {2,3}, the one needed by user 2 and cached by {1,3}, and the one needed by user 3 and cached by {1,2}.
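A toy Python sketch of this scheme (the subset bookkeeping follows the subfile labels ∅, 1, …, 123 above; the file length F, the seed, and the all-distinct demand vector are illustrative assumptions):

```python
import random
from itertools import combinations

# Decentralized coded caching for N=3 files, K=3 users, M=2, F bits per file.
F, N, K, M = 3000, 3, 3, 2
random.seed(1)

# Prefetching: each user independently caches a uniformly random
# M*F/N = 2F/3 subset of the bit positions of every file.
cached = {k: {n: set(random.sample(range(F), M * F // N)) for n in range(N)}
          for k in range(K)}
demand = [0, 1, 2]  # user k requests file k (all demands distinct)

# Delivery (greedy linear encoding): for each user subset S, XOR together,
# for every k in S, the bits of k's file cached by exactly the users S\{k}.
tx_bits = 0
for s in range(K, 0, -1):                       # largest subsets first
    for S in combinations(range(K), s):
        piece_lens = []
        for k in S:
            bits = set(range(F)) - cached[k][demand[k]]   # not cached by k
            for j in range(K):
                if j == k:
                    continue
                other = cached[j][demand[k]]
                bits = bits & other if j in S else bits - other
            piece_lens.append(len(bits))
        tx_bits += max(piece_lens)              # XOR of zero-padded pieces

print(tx_bits / F)   # rate; about (N/M - 1) * (1 - (1 - M/N)**K) ≈ 0.48
```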
Observations
• Gain proportional to the aggregate cache memory KM (even though the caches are isolated)!
• Coding can improve the rate significantly (by a factor on the order of the number of users, for uniform demands)
• This scheme approximately achieves the optimum average rate for uniform popularities.
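For reference, the average rate the decentralized scheme achieves under uniform demands, as derived in the decentralized-caching paper listed under "Read More" (restated here in the slides' notation, as a pointer to that result rather than a quotation):

```latex
R(M) \;=\; K\Bigl(1-\frac{M}{N}\Bigr)\cdot\frac{N}{KM}\Bigl(1-\Bigl(1-\frac{M}{N}\Bigr)^{K}\Bigr),
\qquad 0 < M \le N.
```

The first factor K(1 - M/N) is the rate without coding (local caching gain only); the second factor is the multicasting gain from coding, which for large K behaves like N/(KM), i.e., a speedup proportional to the aggregate memory KM.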
Non-Uniform Demands
Two conflicting intuitions:
• More popular file → more caching memory
• Symmetry of the prefetching → tractable analysis
Idea of Grouping
Group the files with approximately similar popularities, and dedicate memory M_i to group i (e.g., with four groups, M_1 + M_2 + M_3 + M_4 = M).
Prefetching: apply decentralized prefetching within each group i, with a memory budget of M_i.
Delivery: apply coded delivery among the users demanding files from the same group.
Observations
• Within each group → the same cache allocation
• Files in different groups → different cache allocations
• Symmetry within each group → analytically tractable
• Cost: coding opportunities between groups are lost
Netflix Data
• K=300 users
• A simplified grouping rule: within each group, the popularities of the first and last files are within a factor of 2
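One way to read this rule as an algorithm (a hypothetical greedy implementation, not necessarily the one used on the Netflix data): scan files in decreasing popularity and open a new group whenever a file's popularity falls below half that of the current group's most popular file.

```python
def group_by_factor_two(popularities):
    """Greedy grouping: each group's first and last popularities stay within 2x."""
    groups, current = [], []
    for p in sorted(popularities, reverse=True):
        if current and p < current[0] / 2:   # next file breaks the factor-2 rule
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    return groups

# Illustrative Zipf-like popularities, not the actual Netflix statistics.
print(group_by_factor_two([1 / r for r in range(1, 13)]))
# -> three groups: [1, 1/2], [1/3 .. 1/6], [1/7 .. 1/12]
```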
Can We Do Better?
Theorem: the proposed coded scheme is approximately optimum.
The converse is challenging; it is based on:
• Genie-aided uniformization and symmetrization
• Cut-set bounds
• Reducing the size of the problem to users with distinct demands
Conclusion
• For cache networks, LFU is not optimum
• Miss rate is not the most relevant metric for cache networks
• Coded caching achieves approximately optimum results
• The gain of coding can be significant
Read More
• Maddah-Ali and Niesen, "Fundamental Limits of Caching," Sept. 2012 (accepted to IEEE Trans. on Information Theory).
• Maddah-Ali and Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff," Jan. 2013 (accepted to IEEE/ACM Trans. on Networking).
• Niesen and Maddah-Ali, "Coded Caching with Non-Uniform Demands," Jun. 2013 (submitted to IEEE Trans. on Information Theory).
• Pedarsani, Maddah-Ali, and Niesen, "Online Coded Caching," Nov. 2013 (submitted to IEEE/ACM Trans. on Networking).
• Karamchandani, Niesen, Maddah-Ali, and Diggavi, "Hierarchical Coded Caching," Jan. 2014.