Cache Replacement Algorithm. Yuan Ze University, Institute of Computer Science and Engineering, Systems Laboratory. 陳桂慧, 1999.05.04
Outline • Existing document replacement algorithms • Squid’s cache replacement algorithm • Ideal • Problem
Existing Document Replacement Algorithms • Least-Recently-Used (LRU) • evicts the document that was requested least recently. • Least-Frequently-Used (LFU) • evicts the document that is accessed least frequently. • Size [WASAF96] • evicts the largest document.
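A minimal sketch of these three basic policies, assuming each cached document exposes hypothetical last_access, hits, and size attributes:

```python
# Hedged sketch of the three basic policies listed above; the attribute
# names (last_access, hits, size) are hypothetical.
def lru_victim(docs):
    return min(docs, key=lambda d: d.last_access)   # requested least recently

def lfu_victim(docs):
    return min(docs, key=lambda d: d.hits)          # accessed least frequently

def size_victim(docs):
    return max(docs, key=lambda d: d.size)          # largest document
```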
LRU-Threshold [ASAWF95] • is the same as LRU, except that documents larger than a certain threshold size are never cached. • Log(Size)+LRU [ASAWF95] • evicts the document with the largest log(size); among documents with the same log(size), the least recently used one is evicted. • Hyper-G [WASAF96] • is a refinement of LFU that also takes last access time and size into consideration.
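A sketch of how Log(Size)+LRU could pick a victim; base-2 size buckets are an assumption (the slide does not give the log base), and last_access is a hypothetical attribute:

```python
import math

# Hedged sketch of Log(Size)+LRU: evict from the largest log(size) bucket,
# breaking ties by least recent access. Base-2 bucketing is an assumption.
def logsize_lru_victim(docs):
    return max(docs, key=lambda d: (math.floor(math.log2(d.size)), -d.last_access))
```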
Pitkow/Recker [WASAF96] • removes the least-recently-used document, except when all documents have been accessed today, in which case the largest one is removed. • Lowest-Latency-First [WA97] • tries to minimize average latency by removing the document with the lowest download latency first.
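A sketch of these two policies, with hypothetical last_access_date, last_access, size, and download_latency attributes:

```python
# Hedged sketches of Pitkow/Recker and Lowest-Latency-First; the attribute
# names are hypothetical.
def pitkow_recker_victim(docs, today):
    # If every document has already been accessed today, fall back to Size;
    # otherwise behave like LRU.
    if all(d.last_access_date == today for d in docs):
        return max(docs, key=lambda d: d.size)
    return min(docs, key=lambda d: d.last_access)

def lowest_latency_victim(docs):
    # The document that is cheapest to fetch again is evicted first.
    return min(docs, key=lambda d: d.download_latency)
```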
Hybrid, introduced in [WA97], • aims to reduce the total latency. • A function value estimates the utility of retaining a given document in the cache; the document with the smallest function value is evicted. • For a document p located at server s: • cs - the time to connect with server s • bs - the bandwidth to server s • np - the number of times p has been requested since it was brought into the cache • zp - the size (in bytes) of document p • Wb and Wn are constants. Estimates for cs and bs are based on the times needed to fetch documents from server s in the recent past.
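The slide lists the parameters but not the function itself; below is a sketch assuming the commonly cited form of the Hybrid utility, (cs + Wb/bs) * np^Wn / zp. The exact form and the tuned values of Wb and Wn should be checked against [WA97].

```python
# Hedged sketch of the Hybrid utility; the functional form
# (cs + Wb/bs) * np**Wn / zp is an assumption based on descriptions of [WA97].
def hybrid_utility(cs, bs, np_, zp, Wb, Wn):
    """Utility of retaining document p from server s; higher = more valuable.
    cs: connect time to s, bs: bandwidth to s, np_: requests for p since it
    was cached, zp: size of p in bytes, Wb/Wn: tuning constants."""
    return (cs + Wb / bs) * (np_ ** Wn) / zp

def hybrid_victim(docs, Wb, Wn):
    # Evict the document with the smallest utility value.
    return min(docs, key=lambda d: hybrid_utility(d.cs, d.bs, d.np, d.zp, Wb, Wn))
```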
Lowest Relative Value (LRV) [LRV97] • LRV takes into account the locality, cost, and size of a document. • A function value estimates the utility of keeping a document in the cache; the document with the lowest value is evicted. • The value function is based on extensive empirical analysis of trace data. • Pi - the probability that a document is requested i+1 times, given that it has been requested i times. • Di - the total number of documents seen so far that have been requested at least i times in the trace. • Pi is estimated in an online manner by taking the ratio Di+1/Di. • Pi(s) - the same as Pi, except that the count is restricted to documents of size s.
1-D(t) - the probability that a page is requested again, as a function of the time t (in seconds) since its last request; D(t) = 0.035 log(t + 1) + 0.45(1 - e^(-t/2e6)) • For a document d of size s and cost c: • i - the last request to d is the i-th request to it • t - the last request to d was made t seconds ago • d’s value in LRV: V(i,t,s) = P1(s)(1-D(t))*c/s if i = 1; V(i,t,s) = Pi(1-D(t))*c/s otherwise
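A sketch of the LRV value function as written above; a natural logarithm is assumed in D(t), and the Pi and P1(s) estimates would come from the online Di+1/Di counting described on the previous slide.

```python
import math

# Hedged sketch of the LRV value function from the slide above.
def D(t):
    """Probability that a document is NOT re-requested, as a function of the
    time t (seconds) since its last request. Natural log is assumed."""
    return 0.035 * math.log(t + 1) + 0.45 * (1 - math.exp(-t / 2e6))

def lrv_value(i, t, s, c, Pi, P1_of_s):
    """Value of keeping document d: i-th request, t seconds since the last
    request, size s, cost c. Pi and P1_of_s are the empirical re-reference
    probabilities estimated online from the Di+1/Di ratios."""
    p = P1_of_s if i == 1 else Pi
    return p * (1 - D(t)) * c / s
```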
Squid’s Cache Replacement Algorithm • LRU • When selecting objects for removal, Squid • examines some number of objects and • determines which can be removed and which cannot. • If an object is currently being requested, or is being retrieved from an upstream site, it will not be removed. • If the object is "negatively cached", it will be removed. • If the object has a private cache key, it will be removed. • Finally, if the time since last access is greater than the LRU threshold, the object is removed.
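A sketch (not actual Squid source) of the per-object eligibility checks listed above; the object field names are hypothetical:

```python
# Hedged sketch of Squid's per-object removal checks; field names are
# hypothetical and do not match the real Squid source.
def may_remove(obj, now, lru_threshold):
    if obj.in_use:             # currently requested or being fetched upstream
        return False
    if obj.negative_cached:    # negatively cached objects are removed
        return True
    if obj.private_key:        # objects with a private cache key are removed
        return True
    # Otherwise, remove only if it has aged past the LRU threshold.
    return (now - obj.last_access) > lru_threshold
```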
The LRU threshold value is dynamically calculated based on the current cache size and the low and high water marks (90% and 95%). • The LRU threshold scales exponentially between the low and high water marks: when the store swap size is near the low water mark the threshold is large, and it shrinks as the swap size approaches the high water mark. • The LRU threshold represents roughly how long it takes to fill (or fully replace) the cache at the current request rate (ideally 1~10 days). • Squid 1.1 vs. Squid-2 • Squid 1.1 cache storage is implemented as a hash table with some number of "hash buckets"; • it scans one bucket at a time and sorts all the objects in the bucket by their LRU age. • Squid-2 eliminates the need for qsort() by indexing cached objects into an automatically sorted linked list: • every time an object is accessed, it is moved to the top of the list.
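A sketch of how an age threshold could scale exponentially between the water marks; the real Squid formula differs, this only illustrates the behaviour described above (large threshold near the low mark, small near the high mark):

```python
# Hedged sketch of a dynamic LRU threshold: geometric interpolation between
# max_age (at the low water mark) and min_age (at the high water mark).
def lru_threshold(swap_size, low_mark, high_mark, max_age, min_age=1.0):
    x = (swap_size - low_mark) / float(high_mark - low_mark)
    x = min(max(x, 0.0), 1.0)                 # clamp to [0, 1]
    return (max_age ** (1.0 - x)) * (min_age ** x)
```

Squid-2's automatically sorted list is essentially a move-to-front structure: touching an object relinks it at the head, so the tail always holds the LRU candidates without any sorting pass.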
Ideal • For documents of the same size, remove the one with the lowest download latency first. • For documents with the same download latency, remove the largest one first. • The higher the rate R, the sooner the document should be removed: R = Zp / Ttot, where Zp is the size of document p and Ttot is its total latency.
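A sketch of ordering documents for removal by the rate R = Zp / Ttot, with hypothetical size and total_latency attributes:

```python
# Hedged sketch: documents with the highest R = size / total latency are the
# cheapest to re-fetch per byte, so they are evicted first.
def removal_order(docs):
    return sorted(docs, key=lambda d: d.size / d.total_latency, reverse=True)
```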
Problem • Hybrid Algorithm • The contents of Squid’s access log • Elapsed time