Caching in Web Memory Hierarchies
Dimitrios Katsaros, Yannis Manolopoulos
Data Engineering Lab, Department of Informatics, Aristotle Univ. of Thessaloniki, Greece
http://delab.csd.auth.gr
Web performance: the ubiquitous content cache
[Figure: request path from clients through cooperating/hierarchical proxy caches and a reverse-proxy cache to the origin server]
Web caching benefits
Caching is important because, by reducing the number of requests:
• the network bandwidth consumption is reduced
• the user-perceived delay is reduced (popular objects are moved closer to clients)
• the load on the origin servers is reduced (servers handle fewer requests)
Content caching is still strategic
Is the fine tuning of cache replacement a “moot point” given the ever decreasing prices of memory? Such a conclusion is ill guided for several reasons:
• First, studies have shown that the cache HR and BHR grow in a log-like fashion as a function of cache size [3]. Thus, a better algorithm that increases HR by only a few percentage points is equivalent to a several-fold increase in cache size
• Second, the growth rate of Web content is much higher than the rate at which memory sizes for Web caches are likely to grow
• Finally, even a slight improvement in cache performance may have an appreciable effect on network traffic, especially when such gains are compounded through a hierarchy of caches
Web cache performance metrics
Replacement policies aim at improving cache effectiveness by optimising two performance measures:
• the hit ratio: HR = (Σ_i h_i) / (Σ_i r_i)
• the cost savings ratio: CSR = (Σ_i c_i h_i) / (Σ_i c_i r_i)
where
• h_i is the number of references to object i satisfied by the cache,
• r_i is the total number of references to i, and
• c_i is the cost of fetching object i into the cache. The cost can be defined as:
• the object size s_i; then CSR coincides with BHR (byte hit ratio)
• the downloading latency c_i; then CSR coincides with DSR (delay savings ratio)
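As a concrete illustration, here is a minimal Python sketch (our own, not from the paper) that computes the two metrics over a request trace; the trace format, names and example values are assumptions. With c_i = s_i the cost savings ratio is exactly the byte hit ratio.

```python
# Hedged sketch: computing HR and CSR from a request trace.
# `trace` is a list of (object_id, was_hit) pairs; `cost` maps each
# object to its fetching cost c_i (its size s_i when measuring BHR).

def hit_ratio(trace):
    hits = sum(1 for _, was_hit in trace if was_hit)
    return hits / len(trace)

def cost_savings_ratio(trace, cost):
    saved = sum(cost[obj] for obj, was_hit in trace if was_hit)
    total = sum(cost[obj] for obj, _ in trace)
    return saved / total

# Hypothetical example: sizes as costs, so CSR here is the BHR.
trace = [("a", False), ("a", True), ("b", False), ("a", True), ("c", False)]
size = {"a": 10, "b": 500, "c": 40}
print(hit_ratio(trace))                 # 2/5 = 0.40
print(cost_savings_ratio(trace, size))  # 20/570 ≈ 0.035
```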
Challenges for a caching strategy
Several factors distinguish Web caching from caching in traditional computer architectures:
• the heterogeneity in objects' sizes,
• the heterogeneity in objects' fetching costs,
• the depth of the Web caching hierarchy, and
• the access patterns, which are not generated by a few programmed processes, but mainly originate from large human populations with diverse and varying interests
What has been done to address them? (1)
The majority of the replacement policies proposed so far fail to achieve a balance between (or to optimize both) HR and CSR:
• The recency-based policies favour the HR, e.g., the family of GreedyDualSize algorithms [3, 7]
• The frequency-based policies favour the CSR (BHR or DSR), e.g., LFUDA [5]
Exceptions are LUV [2] and GD* [7], which combine recency and frequency:
• The drawback of LUV is a manually tunable parameter λ, used to “select” between the recency-based and frequency-based behaviour of the algorithm
• GD* has a similar drawback, since it requires manual tuning of the parameter β
What has been done to address them? (2)
Regarding the depth of the caching hierarchy, Carey Williamson [15]:
• Showed that caching alters the access pattern seen deeper in the hierarchy, which is characterized by weaker temporal locality
• Proposed the use of different replacement policies (LRU, LFU, GD-Size) at different levels of the caching hierarchy
This solution, though, is not feasible and/or acceptable:
• the caches are administratively independent
• the adoption of a replacement policy (e.g., LFU) at any level of the hierarchy favours one performance metric (CSR) over the other (HR)
What has been done to address them? (3)
The origin of the request streams has received little attention:
• It is (in combination with the caching hierarchy depth) responsible for the large number of one-timers, objects requested only once
• Only SLRU [1] deals explicitly with this factor: it proposed the use of a small auxiliary cache to maintain metadata for past evicted objects
• This approach:
  • needs to heuristically determine the size of the auxiliary cache
  • precludes some objects from entering the cache; thus, it may result in slow adaptation of the cache to a changing request pattern
Why do we need a new caching policy?
• Need to optimize both performance metrics, not only one, in a heterogeneous environment like the Web. We would like a balance between HR and CSR (i.e., between the average latency the user sees and the network traffic)
• Need to deal with the weak temporal locality in Web request streams
• Need to eliminate any “administratively” tunable parameters. Parameters whose value is derived from statistical information extracted from Web traces (e.g., LNC-R-W3 [14] or LRV [12]) are not desirable, due to the difficulty of tuning them
• Our contribution: CRF, a new caching policy dealing with all these particularities of the Web environment
CRF's design principles: BHR vs. DSR
• The delay savings ratio is strongly affected by transient network and Web server conditions
• Two more factors bring about significant variation in the connection time for identical connections:
  • persistent HTTP connections, which avoid reconnection costs, and
  • connection caching [4], which reduces connection costs
• We therefore favour the size (BHR) instead of the latency (DSR) of fetching an object as the measure of cost
CRF's design principles: One-timers
• We partition the cache space
• Cache partitioning has been used by prior algorithms, e.g., FBR [13], but not for the purpose of isolating one-timers
• Only Segmented LRU [8] adopted partitioning for isolating one-timers; experiments showed that (in the Web) it suffers from cache pollution
• The cache has two segments: the R-segment and the I-segment
• The cache segments are allowed to grow and shrink dynamically, depending on the characteristics of the request stream (see the sketch after this list)
• The one-timers are accommodated in the R-segment. We do not further partition the I-segment, since that would make it very difficult to decide the segment from which the victim will be selected, and it would incur maintenance cost for moving objects from one segment to the other
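Below is a minimal Python sketch of this partitioning, under two assumptions the slides imply but do not spell out: a first-time object enters the R-segment, and an object is promoted to the I-segment on its second reference. All names are ours; eviction is deferred to the victim-selection sketch later in the section.

```python
import itertools

class TwoSegmentCache:
    """Sketch of CRF-style partitioning: no fixed boundary between
    the two segments; both draw on one shared byte budget."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.clock = itertools.count()   # virtual time: one tick per access
        self.r_segment = {}  # obj_id -> metadata, objects seen once so far
        self.i_segment = {}  # obj_id -> metadata, re-referenced objects

    def access(self, obj_id, size):
        now = next(self.clock)
        if obj_id in self.i_segment:            # hit in the I-segment
            meta = self.i_segment[obj_id]
            meta["prev_ref"], meta["last_ref"] = meta["last_ref"], now
        elif obj_id in self.r_segment:          # second reference: promote
            meta = self.r_segment.pop(obj_id)   # out of the one-timer segment
            meta["prev_ref"], meta["last_ref"] = meta["last_ref"], now
            self.i_segment[obj_id] = meta
        else:                                   # miss: admit into R-segment
            # (evicting victims until `size` fits is omitted here;
            #  see the victim-selection sketch later in the section)
            self.r_segment[obj_id] = {"size": size, "entry": now,
                                      "last_ref": now, "prev_ref": None}
            self.used += size
```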
CRF's design principles: Ranking (1)
A couple of decisions must be made, regarding:
• the ranking of objects within each segment, and
• the selection of replacement victims
These decisions must meet 3 constraints/targets:
• balance between hit and byte hit ratio,
• protect the cache from one-timers, but without preventing the cache from adapting to a changing access pattern, and
• because of the weak temporal locality, exploit frequency-based replacement criteria
CRF's design principles: Ranking (2)
• Aims for the R-segment (one-timers):
  • accommodate as many objects as possible
  • exploit any short-term temporal locality of the request stream
• The ranking function for the R-segment: the ratio of an object's entry time over its size
CRF's design principles: Ranking (3)
• Aims for the I-segment (the heart of the cache):
  • provide a balance between HR and BHR
  • deal with the weak temporal locality
• The ranking function for the I-segment: the product of the last inter-reference time of an object times the recency of the object (both functions are sketched below)
  • the inter-reference time stands for the steady-state popularity (frequency of reference) of an object
  • the recency stands for a transient preference for an object
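The two ranking functions translate directly into code. One caveat: the slides state the functions but not their orientation, so we assume the R-victim is the object with the smallest entry-time/size value (the oldest and largest one-timer), and the I-victim is the object with the largest inter-reference-time × recency product (the least popular, least recently referenced object).

```python
def r_rank(meta):
    """R-segment rank: entry time over size. Smallest value = oldest
    and largest object, taken here as the R-victim (assumption)."""
    return meta["entry"] / meta["size"]

def i_rank(meta, now):
    """I-segment rank: last inter-reference time times recency.
    Largest value = unpopular object not referenced lately, taken
    here as the I-victim (assumption)."""
    inter_ref = meta["last_ref"] - meta["prev_ref"]  # delta3 on the next slide
    recency = now - meta["last_ref"]                 # delta1 on the next slide
    return inter_ref * recency

# Candidate victims from each segment, under the assumed orientation:
# r_victim = min(r_segment.values(), key=r_rank)
# i_victim = max(i_segment.values(), key=lambda m: i_rank(m, now))
```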
CRF's design principles: Replacement victim (1)
• R-victim: the candidate victim from the R-segment
• I-victim: the candidate victim from the I-segment
• tc: the current time
• R1: the reference time of the R-victim
• I1: the time of the penultimate reference to the I-victim
• I2: the time of the last reference to it
• δ1 (= tc − I2): the reference recency of the I-victim
• δ2 (= tc − R1): the reference recency of the R-victim
• δ3 (= I2 − I1): the last inter-reference time of the I-victim
CRF estimates whether the I-victim is losing its popularity, and also the potential of the R-victim to receive a second reference
CRF's design principles: Replacement victim (2)
[Decision diagram: the victim is chosen between the R-victim and the I-victim by comparing δ1, δ2 and δ3; the original slide shows the outcome for each case]
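The exact decision table lives in the figure above and is not recoverable from this transcript, so the following Python sketch encodes only one plausible reading of the stated goal: evict the I-victim when it appears to have lost its popularity, otherwise sacrifice the R-victim. Treat it as an illustration of how δ1, δ2 and δ3 are used, not as CRF's actual rule.

```python
def choose_victim(r_victim, i_victim, now):
    """Plausible (not original) victim choice using the deltas of the
    previous slide. delta1 > delta3 means the I-victim's current idle
    time already exceeds its last inter-reference gap, i.e. it seems
    to be losing its popularity, so it is evicted; otherwise we keep
    it and evict the R-victim. The full CRF table also weighs delta2,
    the R-victim's chance of receiving a second reference."""
    delta1 = now - i_victim["last_ref"]                   # I-victim recency
    delta2 = now - r_victim["last_ref"]                   # R-victim recency (shown
                                                          # for completeness)
    delta3 = i_victim["last_ref"] - i_victim["prev_ref"]  # last inter-ref time
    return i_victim if delta1 > delta3 else r_victim
```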
CRF's performance evaluation
• Examined CRF against LRU, LFU, Size, LFUDA, GDS, SLRU, LUV, HLRU, LNC-R-W3
• GDS is the representative of the family that includes GDS and GDSF
• HLRU(6) is the representative of the HLRU family
• LNC-R-W3 was implemented so as to optimise the BHR instead of the DSR
• LUV tuning: we tried several values for the λ parameter and selected the value 0.01, because it gave the best performance for small caches and the best performance in most cases
• Generated synthetic Web request streams with the ProWGen tool [15]
CRF's performance evaluation
[Table: input parameters to the ProWGen tool]
Sensitivity to one-timers (aggregate)
[Figure: CRF's gain/loss with respect to the percentage of one-timers]
Sensitivity to Zipfian slope (aggregate)
[Figure: CRF's gain/loss with respect to the Zipfian slope]
Conclusions
• We proposed a new replacement policy for Web caches, the CRF policy
• CRF was designed to address all the particularities of the Web environment
• The performance evaluation confirmed that CRF is a hybrid between recency-based and frequency-based policies
• CRF exhibits stable and overall improved performance
References (1)
• C. Aggarwal, J. Wolf and P.S. Yu. Caching on the World Wide Web. IEEE Transactions on Knowledge and Data Engineering, 11(1):94–107, 1999.
• H. Bahn, K. Koh, S.H. Noh and S.L. Min. Efficient replacement of nonuniform objects in Web caches. IEEE Computer, 35(6):65–73, 2002.
• L. Breslau, P. Cao, L. Fan, G. Phillips and S. Shenker. Web caching and Zipf-like distributions: Evidence and implications. Proceedings IEEE INFOCOM Conf., pp. 126–134, 1999.
• P. Cao and S. Irani. Cost-aware WWW proxy caching algorithms. Proceedings USITS Conf., pp. 193–206, 1997.
• E. Cohen, H. Kaplan and U. Zwick. Connection caching: model and algorithms. Journal of Computer and System Sciences, 67(1):92–126, 2003.
• J. Dilley and M. Arlitt. Improving proxy cache performance: analysis of three replacement policies. IEEE Internet Computing, 3(6):44–50, 1999.
• S. Jiang and X. Zhang. LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance. Proceedings ACM SIGMETRICS Conf., pp. 31–42, 2002.
• S. Jin and A. Bestavros. GreedyDual* Web caching algorithm: exploiting the two sources of temporal locality in Web request streams. Computer Communications, 24(2):174–183, 2001.
References (2)
• R. Karedla, J.S. Love and B.G. Wherry. Caching strategies to improve disk system performance. IEEE Computer, 27(3):38–46, 1994.
• N. Megiddo and D.S. Modha. ARC: a self-tuning, low overhead replacement cache. Proceedings USENIX FAST Conf., 2003.
• A. Nanopoulos, D. Katsaros and Y. Manolopoulos. A data mining algorithm for generalized Web prefetching. IEEE Transactions on Knowledge and Data Engineering, 15(5):1155–1169, 2003.
• L. Rizzo and L. Vicisano. Replacement policies for a proxy cache. IEEE/ACM Transactions on Networking, 8(2):158–170, 2000.
• J. Shim, P. Scheuermann and R. Vingralek. Proxy cache algorithms: design, implementation and performance. IEEE Transactions on Knowledge and Data Engineering, 11(4):549–562, 1999.
• A. Vakali. Proxy cache replacement algorithms: a history-based approach. World Wide Web Journal, 4(4):277–297, 2001.
• C. Williamson. On filter effects in Web caching hierarchies. ACM Transactions on Internet Technology, 2(1):47–77, 2002.