Overhaul: Extending HTTP to Combat Flash Crowds
Jay A. Patel & Indranil Gupta
Distributed Protocols Research Group, Department of Computer Science, University of Illinois at Urbana-Champaign (UIUC), Urbana, Illinois, USA
Introduction
• Flash crowd: a stampede of unexpected visitors
• Occurs regularly due to links from popular news feeds, weblogs, etc.
• Popularly termed the "Slashdot effect"
• Victim sites become unresponsive, creating a perception of dysfunction
Example: MSNBC
MSNBC home page, December 14, 2003
Motivation
• Problem
  • Unpredictable, yet frequent
  • Lasts only a brief period of time
  • Thousand-fold increase in traffic
• Two naïve solutions
  • Over-provision resources
  • Shut down the web site
Current Solutions
• Architectural changes: SEDA, Capriccio, ESI
• Protocol modifications: DHTTP, Web Booster
• Cooperative sharing: Squirrel, Kache, Backslash, BitTorrent
Overhaul: Overview
• Protocol change
  • HTTP extension, not a modification
  • 5 new tags added, 1 slightly modified
  • Backwards compatible
• Key concept: chunking
  • A characteristic of the web applied to individual documents
  • m chunks per document
• P2P distribution framework
  • Voluntary
  • Ad hoc, not DHT-based
  • Key benefit: parallel resource discovery
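The chunking idea above can be sketched in a few lines. This is a minimal illustration, not the mod_overhaul implementation: the slides say each response carries signatures of the other chunks for verification, but do not name the digest, so SHA-1 here is an assumption.

```python
import hashlib


def chunk_document(body: bytes, m: int) -> list[bytes]:
    """Split a document into m roughly equal chunks (Overhaul's key concept)."""
    size = -(-len(body) // m)  # ceiling division
    return [body[i:i + size] for i in range(0, len(body), size)]


def chunk_signatures(chunks: list[bytes]) -> list[str]:
    """One digest per chunk so a client can verify data fetched from peers.
    SHA-1 is an assumed choice; the slides only say "signatures"."""
    return [hashlib.sha1(c).hexdigest() for c in chunks]


doc = b"x" * 10_240          # a 10 KB document, as in the chunking experiments
chunks = chunk_document(doc, 12)
sigs = chunk_signatures(chunks)
```

In Overhaul mode the server ships one chunk per request; the remaining chunks (checked against these signatures) arrive from peers.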
Overhaul: Design
• Client sends an HTTP request with the Overhaul support tag
• Server replies with a chunked response carrying Overhaul headers
• Clients form an ad hoc peer network
• Peers exchange chunks to fetch the complete document
Details: Client/Server Interaction
• Initial request by client
  • Supports: Overhaul $port $speed
• Response by server in Overhaul mode
  • The ith chunk, transmitted in sequential order
  • Signatures of the other m-1 chunks, for verification
  • Initial Overhaul network membership list
    • The n most-recent Overhaul clients
    • List maintained at the server (updated with every request)
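The initial exchange can be sketched as below. Only the "Supports: Overhaul $port $speed" tag comes from the slides; the exact request layout and the server-side parsing are assumptions for illustration.

```python
# Hypothetical sketch of the initial Overhaul handshake; the header layout
# beyond the "Supports: Overhaul $port $speed" tag is assumed.

def overhaul_request(host: str, path: str, port: int, speed: int) -> str:
    """Initial client request advertising Overhaul support:
    the peer port it listens on and its transfer speed."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Supports: Overhaul {port} {speed}\r\n"
        "\r\n"
    )


def supports_overhaul(request: str):
    """Server-side check: return (port, speed) if the client is
    Overhaul-aware, else None (fall back to regular HTTP)."""
    for line in request.split("\r\n"):
        if line.startswith("Supports: Overhaul"):
            _, _, port, speed = line.split()
            return int(port), int(speed)
    return None


req = overhaul_request("example.com", "/index.html", 8080, 512)
```

A server that sees the tag would reply with a single chunk, the other chunk signatures, and an initial membership list; one that does not would simply serve the document as usual, which is what makes the extension backwards compatible.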
Details: Peer Clients' Interaction
• Clients contact other peer members
  • To fetch remaining chunks
  • To discover new peers
• Aggregate membership lists by swapping information
  • 1-hop random-walk discovery process
• Resource discovery
  • Look up documents on a busy Overhaul server
  • Contact peers randomly from the membership list
  • INFO $host.tld
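The 1-hop random-walk discovery step above can be sketched as follows. The membership-list cap and the peer-address format are assumptions; the slides specify only that peers are contacted at random and that lists are aggregated by swapping.

```python
import random

def pick_peer(membership: list[str]) -> str:
    """Contact a peer chosen uniformly at random from the membership list."""
    return random.choice(membership)


def swap_membership(mine: set[str], theirs: set[str], cap: int = 50) -> set[str]:
    """After contacting one peer (a 1-hop random walk), merge the two
    membership lists; the cap on list size is an assumed bound."""
    merged = mine | theirs
    if len(merged) <= cap:
        return merged
    return set(random.sample(sorted(merged), cap))


view = swap_membership({"peer-a:4000", "peer-b:4000"}, {"peer-c:4000"})
```

Repeating this step lets each client's view of the swarm grow quickly without any central coordination, which is why resource discovery proceeds in parallel with chunk fetching.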
Implementation
• Server
  • Apache/2.0 HTTP server
  • Module: mod_overhaul
• Client
  • Java HTTP proxy
  • Cross-platform
  • Universal client support
Testing Methodology: Server
• Server machine: 2.5 GHz AMD Athlon XP+, 1 GB RAM
• Client machine: 650 MHz Pentium III, 320 MB RAM
• Same network equipment
• 25 concurrent fetches
• ApacheBench utility
Results: Chunking (Fixed Size)
• Document: 10 KB; concurrency: 25
• Compared: regular HTTP, 512-byte chunks, 2048-byte chunks
• Overhaul mode requires the server to send only a single chunk
Results: Chunking (Maximum Count)
• Document: 50 KB; concurrency: 25
• Compared: regular HTTP, 6 chunks, 12 chunks, 24 chunks
Results: Overhaul vs. Regular
• Concurrency: 25; minimum chunk size: 512 bytes
• Compared: regular HTTP, 6 chunks, 12 chunks
Testing Methodology: Client
• Cluster of 25 homogeneous workstations
  • 2.8 GHz Intel Pentium 4, 1 GB RAM
  • Same network equipment
• Two experiments
  • Concurrent: single document
  • Staggered: multiple documents
Results: Single Document
• Large document: 50 KB (12 chunks)
• Server condition: 150-250 concurrent fetches, plus competing traffic
• Overhaul requests issued concurrently, using only 24 Overhaul-aware clients
• Server bandwidth usage in Overhaul mode: 1/12th of regular requests
Results: Multiple Documents
• 8 documents: 110 KB total (12 chunks)
• Server condition: 150-250 concurrent fetches, plus competing traffic
• Overhaul requests staggered
  • 1st stage: 12 concurrent fetches, fetching all documents
  • 2nd stage: 12 concurrent fetches, fetching the index document only
• Server bandwidth usage in Overhaul mode: 1/18th of regular requests
Limitations
• Both client and server must be Overhaul-aware
• Requires a critical mass of clients to remain effective (n clients > m chunks)
• More responsibilities for the client
• Possible security implications
Conclusion
• Saves resources
  • Bandwidth: the bigger the crowd, the lower the per-capita usage
  • Response time: faster turnaround for both server and client
• Gaining widespread acceptance
  • Marginal cost to deploy
  • Protocol extension requires an industry and standards push