This web caching simulator facilitates comparison of cooperative caching algorithms by evaluating metrics such as hit rates, latency, and bandwidth consumption. It simulates a system of proxy caches with interchangeable modules for caching algorithms, traffic workloads, and network representations. Both real traces and synthetic workloads can be used; network congestion is not simulated. The goal is to assess algorithm effectiveness and identify performance limitations.
Web Caching Simulator
Presented by Tashana Landray
Molly Brown, Anna Karlin, Tashana Landray, Hank Levy, Felix Livni, Denise Pinnel, Nitin Sharma, Emin Gun Sirer, Geoff Voelker, Alec Wolman
Outline
• Why we’re building a simulator
• What it simulates
• What results we want
Motivation
• Several cooperative web caching algorithms have been proposed
  • Harvest/Squid
  • Tewari, Dahlin et al.
  • Zhang et al.
  • Us
• Difficult to compare their benefits and drawbacks
Goal
• Primary goal: evaluate and compare the effectiveness of cooperative web caching algorithms
• Allow comparison on a variety of metrics
  • hit rates, sharing, latency of requests, bandwidth consumed, etc.
• Maintain a high level of abstraction
Features
• Simulates a system of proxy caches
• Implemented as modules that can be plugged in and interchanged (see the sketch below)
  • Caching algorithms
  • Traffic workloads
  • Network representation
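As an illustration of this plug-in structure, here is a minimal C++ sketch. All interface and type names are hypothetical, not the simulator's actual API: each component is an abstract base class, and the driver only depends on those abstractions, so a caching algorithm, a workload, or a network model can be swapped without changing the core loop.

    // Minimal sketch of the plug-in architecture (hypothetical names).
    #include <cstdint>
    #include <string>

    struct Request {              // one client request from the workload
        uint64_t timestamp;       // simulated request time
        uint32_t client_id;       // issuing client / proxy
        std::string url;          // requested object
        uint32_t size;            // object size in bytes
    };

    class CacheAlgorithm {        // e.g. a Harvest/Squid-style hierarchy
    public:
        virtual ~CacheAlgorithm() = default;
        virtual bool lookup(const Request& r) = 0;  // true on a (cooperative) hit
        virtual void insert(const Request& r) = 0;  // cache the fetched object
    };

    class Workload {              // real trace reader or synthetic generator
    public:
        virtual ~Workload() = default;
        virtual bool next(Request& r) = 0;          // false when exhausted
    };

    class Network {               // topology + precomputed routes, no congestion
    public:
        virtual ~Network() = default;
        virtual double latency(uint32_t from, uint32_t to) const = 0;
    };

    // The driver sees only the abstract interfaces, so algorithms, workloads,
    // and topologies can be interchanged without touching this loop.
    void run(Workload& w, CacheAlgorithm& alg, Network& net) {
        Request r;
        while (w.next(r)) {
            if (!alg.lookup(r)) alg.insert(r);
            (void)net;            // latency/bandwidth accounting would go here
        }
    }

    int main() { return 0; }      // concrete modules would be instantiated here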
Traffic Workloads
• Real traces
  • Existing (e.g., DEC traces)
  • UW trace (Alec’s talk)
• Synthetic workloads
  • Nitin’s talk
Network Topology
• Use the Transit-Stub tool developed at Georgia Tech to create graphs representing arbitrary Internet-like topologies*
• Precompute routing tables from the graphs (see the sketch below)
• We do not assign bandwidths or simulate network congestion

* Ken Calvert, Matt Doar, and Ellen W. Zegura. "Modeling Internet Topology." IEEE Communications Magazine, June 1997.
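The sketch below illustrates one way such route precomputation could be done, assuming the Transit-Stub output has already been parsed into an adjacency list (parsing omitted). It builds a next-hop table with a BFS from every node, so per-request routing becomes a table lookup; this is a simplified, unweighted illustration, not the simulator's actual code.

    // Sketch: build next-hop routing tables from a parsed topology graph.
    #include <queue>
    #include <vector>

    using Graph = std::vector<std::vector<int>>;   // adjacency list, nodes 0..n-1

    // next_hop[src][dst] = neighbor of src on a shortest path to dst (-1 if none)
    std::vector<std::vector<int>> precompute_routes(const Graph& g) {
        int n = static_cast<int>(g.size());
        std::vector<std::vector<int>> next_hop(n, std::vector<int>(n, -1));
        for (int src = 0; src < n; ++src) {
            std::vector<int> parent(n, -1);
            std::vector<bool> seen(n, false);
            std::queue<int> q;
            seen[src] = true;
            q.push(src);
            while (!q.empty()) {                    // BFS over hop counts
                int u = q.front(); q.pop();
                for (int v : g[u])
                    if (!seen[v]) { seen[v] = true; parent[v] = u; q.push(v); }
            }
            for (int dst = 0; dst < n; ++dst) {     // walk back to find the first hop
                if (dst == src || !seen[dst]) continue;
                int cur = dst;
                while (parent[cur] != src) cur = parent[cur];
                next_hop[src][dst] = cur;
            }
        }
        return next_hop;
    }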
What we simulate
• We do (or will) simulate:
  • Proxy load and server load at highly loaded servers
  • Bandwidth consumed over network links (see the sketch below)
• We do NOT simulate:
  • Queuing on network links
  • Underlying network protocols
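Continuing the route-table sketch above, one possible way to account for link bandwidth and proxy/server load without modeling queuing is to walk the precomputed route for each transfer and charge its size to every traversed link. The function below is a hypothetical illustration of that bookkeeping, not the simulator's actual code.

    // Sketch: charge a transfer along its precomputed route (no queuing model).
    #include <algorithm>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    using NextHop   = std::vector<std::vector<int>>;            // from precompute_routes
    using LinkBytes = std::map<std::pair<int, int>, uint64_t>;  // (u,v) -> bytes carried
    using NodeLoad  = std::vector<uint64_t>;                    // requests handled per node

    void charge_transfer(const NextHop& next_hop, int src, int dst, uint32_t bytes,
                         LinkBytes& link_bytes, NodeLoad& load) {
        ++load[src];                                // proxy or server serving the request
        int cur = src;
        while (cur != dst) {                        // follow the route hop by hop
            int nxt = next_hop[cur][dst];
            if (nxt < 0) break;                     // unreachable: nothing to charge
            link_bytes[{std::min(cur, nxt), std::max(cur, nxt)}] += bytes;
            cur = nxt;
        }
    }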
Algorithm Evaluation Results
• Metrics to compare (see the sketch below):
  • Hit rates (sharing)
  • Average and worst-case latency of requests
  • Breakdown of what contributes to latency
  • Bandwidth consumed on links
  • Overhead of algorithms
    • Extra hops
    • Number of messages sent
  • Proxy and server load
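A hedged sketch of how such per-run statistics might be accumulated is shown below; the counter names are illustrative rather than the simulator's actual ones, but they show how hit rate, sharing, and latency figures fall directly out of running sums kept while the trace is replayed.

    // Sketch: per-run statistics (illustrative counter names only).
    #include <cstdint>
    #include <cstdio>

    struct Stats {
        uint64_t requests = 0, local_hits = 0, remote_hits = 0;  // remote hits = sharing
        double   total_latency_ms = 0, worst_latency_ms = 0;
        uint64_t messages = 0, extra_hops = 0;                   // algorithm overhead

        void record(bool local_hit, bool remote_hit, double latency_ms) {
            ++requests;
            if (local_hit)  ++local_hits;
            if (remote_hit) ++remote_hits;
            total_latency_ms += latency_ms;
            if (latency_ms > worst_latency_ms) worst_latency_ms = latency_ms;
        }
        void report() const {
            if (requests == 0) return;
            double hits = double(local_hits + remote_hits);
            std::printf("hit rate: %.3f (%.3f served by peers)\n",
                        hits / requests, double(remote_hits) / requests);
            std::printf("latency:  avg %.1f ms, worst %.1f ms\n",
                        total_latency_ms / requests, worst_latency_ms);
        }
    };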
Performance Limitations
• Scalability
  • Memory requirements are the biggest bottleneck
  • Can currently run one day of the DEC trace through a hierarchy of 68 proxies in about 120 MB
• Representation of cached documents is the main memory bottleneck (see the sketch below)
  • Can currently store 2 million documents in 48 MB
  • Can be optimized
• May run with GMS
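For context, 48 MB for 2 million documents works out to roughly 24 bytes per cached object. The record below is a hypothetical layout that fits such a budget by keeping a URL fingerprint rather than the URL string; it is only an illustration of why the per-document representation dominates memory use, not the simulator's actual data structure.

    // Sketch: a fixed-size ~24-byte record per cached document (hypothetical layout).
    #include <cstdint>

    struct CachedDoc {
        uint64_t url_hash;     // 8 bytes: fingerprint of the URL, not the string itself
        uint32_t size;         // 4 bytes: object size
        uint32_t last_access;  // 4 bytes: timestamp for LRU-style replacement
        uint16_t proxy_id;     // 2 bytes: which proxy holds this copy
        uint16_t flags;        // 2 bytes: cacheability bits, etc.
        uint32_t next;         // 4 bytes: index link for hash-bucket chaining
    };                         // 24 bytes total; 2 million entries ~ 48 MB

    static_assert(sizeof(CachedDoc) == 24, "record stays within the 24-byte budget");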