The k-server Problem
• k servers lie in an n-point metric space.
• Requests arrive at metric points.
• To serve a request: need to move some server there.
• Goal: Minimize total distance traveled.
• Objective: Competitive ratio.
[Figure: servers 1, 2, 3 serving requests under the Move Nearest algorithm]
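The figure on this slide names a greedy Move Nearest rule; as a point of reference, here is a minimal sketch of the k-server setting with that rule (an illustration of the problem, not the algorithm of this talk), assuming the metric is given as a distance matrix.

```python
# A minimal sketch of the k-server setting with the greedy "move nearest"
# rule named in the figure (illustrative only; not the paper's algorithm).

def move_nearest(dist, servers, requests):
    """Serve each request with the closest server; return total distance."""
    servers = list(servers)                      # current server positions
    total = 0.0
    for r in requests:
        s = min(range(len(servers)), key=lambda i: dist[servers[i]][r])
        total += dist[servers[s]][r]
        servers[s] = r                           # move that server to the request
    return total

# Tiny example: a 3-point metric with k = 2 servers starting at points 0 and 2.
dist = [[0, 1, 3],
        [1, 0, 2],
        [3, 2, 0]]
print(move_nearest(dist, servers=[0, 2], requests=[1, 0, 2, 1]))
```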
The Paging/Caching Problem
• Set of pages {1, 2, …, n}, cache of size k < n.
• Request sequence of pages: 1, 6, 4, 1, 4, 7, 6, 1, 3, …
a) If the requested page is already in the cache, no penalty.
b) Else, cache miss: need to fetch the page into the cache, (possibly) evicting some other page.
• Goal: Minimize the number of cache misses.
Paging = k-server on the uniform metric: a server on location p = page p in the cache.
[Figure: pages 1, …, n and a cache of size k]
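For concreteness, a minimal sketch of the paging problem: counting cache misses under LRU (the policy discussed on the next slide), run on the request sequence above; the cache size k = 3 is just an example.

```python
from collections import OrderedDict

def lru_misses(requests, k):
    """Count cache misses for LRU eviction with a cache of size k."""
    cache = OrderedDict()                  # pages in cache, ordered by recency
    misses = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # hit: mark p most recently used
        else:
            misses += 1                    # miss: fetch p ...
            if len(cache) == k:
                cache.popitem(last=False)  # ... evicting the least recently used
            cache[p] = None
    return misses

print(lru_misses([1, 6, 4, 1, 4, 7, 6, 1, 3], k=3))
```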
Previous Results: Paging
Deterministic [Sleator-Tarjan 85]:
• Any deterministic algorithm is at least k-competitive.
• LRU is k-competitive (as are other algorithms).
Randomized:
• Randomized Marking is O(log k)-competitive [Fiat, Karp, Luby, McGeoch, Sleator, Young 91].
• Lower bound of H_k [Fiat et al. 91]; tight results are known.
The k-server Conjecture
[Manasse-McGeoch-Sleator '88]: There exists a k-competitive algorithm on any metric space.
• Initially no f(k) guarantee.
• Fiat-Rabani-Ravid '90: exp(k log k)
• Koutsoupias-Papadimitriou '95: 2k − 1
• Chrobak-Larmore '91: k for trees.
Randomized k-server Conjecture
There is an O(log k)-competitive algorithm for any metric.
• Uniform metric: log k.
• Polylog for very special cases (uniform-like).
• Line: n^{2/3} [Csaba-Lodha '06]; exp(O((log n)^{1/2})) [Bansal-Buchbinder-Naor '10].
• Depth-2 tree: no o(k) guarantee.
Our Result
Thm: There is an O(log^2 k · log^3 n)-competitive* algorithm for k-server on any metric with n points.
Key idea: Multiplicative updates.
* Hiding some log log n terms.
Our Approach
Any metric → k-server on an HST (Hierarchically Separated Trees [Bartal 96]) → O(log n) allocation instances.
The allocation problem (on uniform metrics) [Cote-Meyerson-Poplawski '08] decides how to distribute servers among the children of a node.
Outline • Introduction • Allocation Problem • Fractional view of Randomized Algorithms • Fractional Caching Algorithm • What makes Allocation Problem harder? • The Fix
Allocation Problem (Uniform Metric)
• At each time t, a request arrives at some location i.
• Request = (h_t(0), …, h_t(k))   [monotone: h(0) ≥ h(1) ≥ … ≥ h(k)]
• Upon seeing the request, the algorithm can reallocate servers.
• Hit cost = h_t(k_i)   [k_i: number of servers at i]
• Total cost = Hit cost + Move cost.
• E.g., Paging = cost vectors (1, 0, 0, …, 0).
* The total number of servers k(t) can also change (let's ignore this).
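A minimal sketch of the cost accounting in the allocation problem on a uniform metric, under the slide's simplification that the total number of servers stays k; the `policy` argument is a hypothetical stand-in for whatever online rule decides the reallocation.

```python
def run_allocation(alloc, requests, policy):
    """Return (hit_cost, move_cost) of an allocation policy on a uniform metric.

    alloc[i]  : number of servers currently at location i (sums to k)
    requests  : list of (i, h) with h monotone, h[0] >= h[1] >= ... >= h[k]
    policy    : hypothetical function (alloc, i, h) -> new allocation, same total
    """
    hit = move = 0
    for i, h in requests:
        new = policy(list(alloc), i, h)
        move += sum(abs(a - b) for a, b in zip(alloc, new)) // 2   # servers moved
        alloc = new
        hit += h[alloc[i]]                 # hit cost h_t(k_i) after reallocating
    return hit, move

# Example with a trivial policy that never moves servers.
stay = lambda alloc, i, h: alloc
print(run_allocation([2, 1, 0], [(2, [1, 0, 0, 0]), (0, [1, 1, 0, 0])], stay))
```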
Allocation to k-server
Thm [Cote-Meyerson-Poplawski]: An online allocation algorithm such that, for every ε > 0,
i) Hit Cost (Alg) ≤ (1+ε) OPT
ii) Move Cost (Alg) ≤ β(ε) OPT
gives a roughly O(d · β(1/d))-competitive k-server algorithm on depth-d HSTs, where d = log(aspect ratio).
So β(ε) = poly(1/ε) · polylog(k, n) suffices.
* The HSTs need some well-separatedness.
* Later, we use tricks to remove the dependence on the aspect ratio.
We do not know how to obtain such an algorithm.
Outline • Introduction • Allocation Problem • Fractional view of Randomized Algorithms • Fractional Caching Algorithm • What makes Allocation Problem harder? • The Fix
Fractional View of Randomized Algorithms
To specify a randomized algorithm:
i) The probability distribution on states at time t.
ii) How it changes at time t+1.
Fractional view: just specify some marginals.
E.g., Paging: the actual algorithm is a distribution over k-tuples of pages, but the fractional view is just p_1, …, p_n with p_1 + … + p_n = k.
Cost: if p_1, …, p_n changes to q_1, …, q_n, pay (1/2) Σ_i |p_i − q_i|.
Not too weak: Fractional Paging → Randomized Paging (2x loss).
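In this fractional view the state is just the vector of marginals; a two-line sketch of the movement cost formula above:

```python
def frac_move_cost(p, q):
    """Movement cost between fractional paging states: (1/2) * sum |p_i - q_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# k = 2 servers' worth of mass over 3 pages
print(frac_move_cost([1.0, 0.5, 0.5], [1.0, 1.0, 0.0]))   # 0.5
```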
Fractional Allocation Problem
x_{i,j}: probability of having j servers at location i (at time t).
• Σ_j x_{i,j} = 1 for each i (probability distribution).
• Σ_i Σ_j j · x_{i,j} ≤ k (global server bound).
Cost:
• Hit cost for request h(0), …, h(k) at location i: Σ_j x_{i,j} h(j).
• Moving mass from (i, j) to (i, j') costs |j − j'| per unit.
Surprisingly, fractional allocation does not give a good randomized algorithm for the allocation problem.
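A minimal sketch of the fractional allocation state and its two cost terms as defined above; the |j − j'| transport cost at a single location is evaluated via cumulative differences (one standard way to compute a 1-d movement cost).

```python
def hit_cost(x_i, h):
    """Expected hit cost at location i: sum_j x_{i,j} * h(j)."""
    return sum(x * hj for x, hj in zip(x_i, h))

def reshape_cost(x_old, x_new):
    """Cost of moving mass within one location, |j - j'| per unit of mass.

    Equals the sum over cuts of |F_old(j) - F_new(j)| (cumulative differences).
    """
    cost = carry = 0.0
    for old, new in zip(x_old, x_new):
        carry += old - new                 # net mass that must cross this cut
        cost += abs(carry)
    return cost

x_old = [0.5, 0.0, 0.5]                    # 0 or 2 servers, each w.p. 1/2
x_new = [0.0, 1.0, 0.0]                    # exactly 1 server
print(hit_cost(x_old, [1, 1, 0]), reshape_cost(x_old, x_new))   # 0.5, 1.0
```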
A Gap Example
Allocation problem on 2 points, Left and Right. Requests alternate between the two locations.
• Left: (1, 1, …, 1, 0)
• Right: (1, 0, …, 0, 0)
Any integral solution must pay Ω(T) in T steps.
Claim: A fractional algorithm pays only T/(2k).
• x_{L,0} = 1/k, x_{L,k} = 1 − 1/k, x_{R,1} = 1.
• No move cost; hit cost of 1/k on each left request.
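A small numeric check of the claim on this slide (a verification sketch, not the paper's code): with the stated fractional state held fixed, T alternating requests cost only about T/(2k).

```python
k, T = 10, 1000

# Fixed fractional state from the slide:
#   location L: 0 servers w.p. 1/k, k servers w.p. 1 - 1/k
#   location R: 1 server  w.p. 1
xL = {0: 1.0 / k, k: 1.0 - 1.0 / k}
xR = {1: 1.0}

def hit(x, h):
    return sum(p * h(j) for j, p in x.items())

hL = lambda j: 1.0 if j < k else 0.0     # left request:  (1,1,...,1,0)
hR = lambda j: 1.0 if j < 1 else 0.0     # right request: (1,0,...,0)

total = sum(hit(xL, hL) if t % 2 == 0 else hit(xR, hR) for t in range(T))
print(total, T / (2 * k))                # both ~ T/(2k); no movement cost
```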
Fractional Algorithm Suffices
Thm (analog of Cote et al.): It suffices to have a fractional allocation algorithm with a (1+ε, β(ε)) guarantee. This gives a fractional k-server algorithm on HSTs.
Thm (Rounding): A fractional k-server algorithm on HSTs → a randomized algorithm with O(1) loss.
Thm (Fractional Allocation): We design a fractional allocation algorithm with β(ε) = O(log(k/ε)).
Outline • Introduction • Allocation Problem • Fractional view of Randomized Algorithms • Fractional Caching Algorithm • What makes Allocation Problem harder? • The Fix
Fractional Paging Algorithm
[Figure: each page i shown as a 0/1 variable, with mass p_i in the cache and 1 − p_i outside]
Current state: p_1, …, p_n with Σ_i p_i = k. Say a request at page 1 arrives.
Algorithm: need to bring 1 − p_1 mass into p_1.
Rule: decrease each other page i at rate ∝ (1 − p_i + δ), with δ = 1/k.
Intuition: if p_i is close to 1, be more conservative in evicting it.
Multiplicative update: d(1 − p_i) ∝ (1 − p_i).
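A minimal sketch of this update rule as I read it from the slide (a crude discretization of a continuous-time process, not the authors' code): when page r is requested, drain the missing 1 − p_r mass from the other pages at rate proportional to (1 − p_i + δ), with δ = 1/k, then set p_r = 1.

```python
def serve_request(p, r, k, steps=10_000):
    """Fractional paging update on a request to page r (sketch).

    p : dict page -> probability of being in the cache (the p_i sum to k).
    Drains the missing mass 1 - p_r from the other in-cache pages, each at
    rate proportional to (1 - p_i + delta) with delta = 1/k, then sets p_r = 1.
    """
    delta = 1.0 / k
    need = 1.0 - p.get(r, 0.0)
    dt = need / steps
    for _ in range(steps):                       # crude continuous-time simulation
        live = [i for i in p if i != r and p[i] > 0]
        if not live:
            break
        rate = sum(1 - p[i] + delta for i in live)
        for i in live:
            p[i] = max(0.0, p[i] - dt * (1 - p[i] + delta) / rate)
    p[r] = 1.0
    return need                                  # mass brought in = movement cost

p = {1: 1.0, 6: 1.0, 4: 0.5, 7: 0.5}             # k = 3 servers' worth of mass
serve_request(p, 3, k=3)
print(p)                                         # page 3 now has p = 1
```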
Potential Function Analysis
Will show: On(t) + Φ(t) − Φ(t−1) ≤ O(log k) · Off(t).
Potential: each page i outside the offline cache contributes to Φ, with contribution 0 if p_i = 0 and log(k+1) if p_i = 1; so Φ = 0 if the online and offline states coincide. (Recall δ = 1/k.)
Suppose page 1 is requested. Analyze in two steps: first the offline moves, then the online.
If offline moves a server, Φ increases by ≤ log(k+1), so the claim holds easily for the offline step.
Analysis
[Figure: the online support On = {i : p_i > 0} vs. the offline cache Off]
Will show: On(t) + Φ(t) − Φ(t−1) ≤ O(log k) · Off(t).
Suppose mass enters p_1. Per unit of mass moved, page i decreases by dp_i = (1 − p_i + δ)/N.
Key observation: for each page i in On\Off, Φ decreases by dp_i · dΦ/dp_i = dp_i · 1/(1 − p_i + δ) = 1/N.
There are at least |On| − k such pages, so the potential drop is ≥ (|On| − k)/N.
Claim: the potential drop is ≈ 1, which pays for the online movement.
Pf: N = Σ_{i ∈ On} (1 − p_i + δ) = |On| − k + δ|On| ≈ |On| − k.
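A small numeric sketch of this step, assuming the per-page potential φ(p) = ln((1+δ)/(1 − p + δ)), which matches the values 0 at p = 0 and log(k+1) at p = 1 and the derivative used above.

```python
import math

k, delta = 10, 0.1                     # delta = 1/k

def phi(p):
    """Per-page potential: 0 at p = 0, log(k+1) at p = 1, d(phi)/dp = 1/(1-p+delta)."""
    return math.log((1 + delta) / (1 - p + delta))

print(phi(0.0), round(phi(1.0), 4), round(math.log(k + 1), 4))   # 0, ln(k+1), ln(k+1)

# A fractional state with |On| = 30 nonzero pages carrying total mass k = 10.
on = [1.0 / 3] * 30
N = sum(1 - p + delta for p in on)

# Draining a unit of mass costs page i exactly dp_i / (1 - p_i + delta) = 1/N in
# potential; at least |On| - k of these pages lie in On \ Off, hence:
print(round((len(on) - k) / N, 3))     # lower bound on the potential drop, ~ 1
```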
What Makes Allocation Harder?
Paging: if location 1 is requested, we know that OPT also has a server on 1.
Allocation problem: not so clear. Say there are already 10.5 (fractional) servers at location 1. Should we add even more? Maybe OPT has just 1 server there.
An Instructive Example
Case 1: k servers, allocation problem on 2k locations. Cost vectors (1, 0, …, 0) arrive at locations 1, 2, …, 2k, then again at 1, 2, …, 2k, and so on.
Case 2: k servers in total; currently 1/2 server on each of locations 1, …, k−1 and the rest on location k. Vector (1, 0, …, 0) arrives on locations 1, 2, …, k−1 and vector (1, 1, …, 1, 0, …, 0) on location k.
The right thing to do is to release the servers on location k: a much better solution has 1 server on each location.
The Fix
Suppose the cost vector at location 1 is (ε, ε, …, ε, 0, …, 0), i.e. cost ε if ≤ j servers, 0 otherwise.
• Hit cost Y = ε(x_{1,0} + … + x_{1,j}).
• Increase the servers at location 1 by ≈ Y.
• To fix the total number of servers: for each location i (including 1), rebalance the probability mass by a multiplicative update; each x_{i,j'} (except the last coordinate) increases ∝ x_{i,j'}.
[Figure: the distribution x_{1,0}, …, x_{1,k} at location 1, with the cut between j and j+1; recall Σ_j x_{i,j} = 1 for all i.]
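A schematic sketch of the rebalancing idea, written in the "cut" language of the next slide (my reconstruction, with an arbitrary step size eps; the paper's actual update involves additive terms and carefully chosen rates that are omitted here): growing every cut x_{i,≤j} multiplicatively shifts mass toward fewer servers, i.e. releases fractional servers from a location.

```python
def cuts(x):
    """Prefix sums c[j] = x[0] + ... + x[j], i.e. Prob(<= j servers here)."""
    c, s = [], 0.0
    for v in x:
        s += v
        c.append(s)
    return c

def from_cuts(c):
    """Recover the distribution from its cut values."""
    return [c[0]] + [c[j] - c[j - 1] for j in range(1, len(c))]

def release_servers(x, eps):
    """Grow every cut (except the last, which stays 1) multiplicatively by (1 + eps)."""
    c = cuts(x)
    c = [min(1.0, v * (1 + eps)) for v in c[:-1]] + [1.0]
    return from_cuts(c)

x = [0.1, 0.2, 0.7]                      # 0, 1 or 2 servers at this location
print(release_servers(x, 0.1))           # mass shifts toward fewer servers
```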
Proof Idea
[Figure: location i, the online distribution (On) vs. OPT; e.g., location i contributes 3 log(1+k) to Φ.]
Key observation: for every cut ≤ j, the cut value x_{i,≤j} increases ∝ x_{i,≤j}.
Lemma: For any offline state, Φ pays for the online movement.
Pf: Each cut with j > k*_i (OPT's number of servers at i) decreases the potential by ≈ 1/|On − k|, and there are |On − k| such cuts.
Concluding Remarks
Removing the dependence on the aspect ratio:
• HST → weighted HST with O(log n) depth.
• Extend allocation to the weighted star.
Main question: Can we remove the dependence on n? It enters in two places:
1. Metric → HST.
2. But even on the HST itself (we lose the depth of the HST).