
Traffic delivery evolution in the Internet – ENOG 4 – Moscow – 23rd October 2012


Presentation Transcript


  1. Traffic delivery evolution in the Internet – ENOG 4 – Moscow – 23rd October 2012. Christian Kaufmann, Director Network Architecture, Akamai Technologies, Inc.

  2. way-back machine

  3. way-back machine – Web 1998

  4. Web 1998

  5. Web 2012

  6. Web and Cloud 2012

  7. So what’s the difference? • Lots of “high-definition” content being pushed to end users • Read: “large files” • Can the Internet scale to support this? Short answer: no

  8. Key Issues • Problems with a centralized approach, especially for large media files • Problems with peering • Problems with routing protocols • Inter-AS Multicast is essentially non-existent • QoS is neither widely deployed nor consistent

  9. Trends 2012: Immersive, Online Banking, Social Networking, Blogging, Media, Mobile, Gaming, eCommerce

  10. Simple on the outside … (diagram: the same services – Immersive, Online Banking, Social Networking, Blogging, Media, Mobile, Gaming, eCommerce)

  11. … complicated on the inside (diagram: those services interconnected through networks such as DT, Verizon, Level 3 and Telia, and through internet exchanges)

  12. The Centralization Bottleneck • Centralized sites create an inherent bottleneck and a target for attackers • A worldwide user population means a huge infrastructure problem • Not scalable • Long latency between server and end user

  13. The Centralization Bottleneck • Content Distribution Networks solve the centralization problem by distributing content • Greater distribution means greater performance and reliability

  14. The Edge is Highly Distributed • No one Autonomous System has more than 4.4% of the access traffic • The top 50 ASes add up to only 48% (chart: share of access traffic per AS across 29,000+ ASes – Network A 4.4%, Network B 3.6%, Network C 1.8%, Network D 1.7%, Network E 1.7%, Network F 1.7%, Network G 1.6%, Network H 1.4%)

  15. Edge Proximity From 1 Location (chart: average miles to edge ≈ 3,000 with a single location)

  16. Edge Proximity From 30 Locations (chart: average miles to edge falls from ≈ 3,000 with 1 location to ≈ 1,500 with 30)

  17. Edge Proximity From 1,000 Locations (chart: average miles to edge falls to ≈ 250 with 1,000 locations)

  18. Does Latency Matter for Large File Downloads? … who cares if the latency to download a 2-hour DVD is 10 ms or 100 ms?

  19. The Fat-file Paradox: Latency Limits Throughput … and throughput limits the time to download large files (e.g., 4 GB DVDs)

  Distance from Server to User | Network Latency | Packet Loss | Throughput | Download Time (4 GB)
  Local, <100 mi. (Akamai) | 1.6 ms | 0.6% | 44 Mbps | 12.2 min.
  Regional, 500–1,000 mi. | 16 ms | 0.7% | 4 Mbps | 2.2 hrs.
  Cross-continent, <3,000 mi. | 48 ms | 1.0% | 1 Mbps | 8.16 hrs.
  Different continent, <6,000 mi. | 96 ms | 1.4% | 0.4 Mbps | 20 hrs.
  (The regional, cross-continent and different-continent rows correspond to the big-data-center approach.)
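  The table is the slide's argument in numbers. As a minimal sketch (not from the slides), the well-known Mathis approximation for steady-state TCP throughput, throughput ≈ (MSS / RTT) · (C / √loss), reproduces the trend: as round-trip time and loss grow, a single connection's throughput collapses and the 4 GB download time balloons. The segment size and constant C below are assumptions, so only the trend matches the slide, not the exact figures.

```python
# Minimal sketch of the fat-file paradox using the Mathis approximation:
#   throughput ~ (MSS / RTT) * (C / sqrt(loss))
# MSS_BITS and C are assumptions; only the trend matters here.
from math import sqrt

MSS_BITS = 1460 * 8   # typical TCP segment size, in bits (assumption)
C = 1.0               # model-dependent constant of order 1 (assumption)

def tcp_throughput_bps(rtt_s: float, loss: float) -> float:
    """Approximate achievable single-connection TCP throughput in bits/second."""
    return (MSS_BITS / rtt_s) * (C / sqrt(loss))

def download_hours(file_gb: float, throughput_bps: float) -> float:
    """Hours needed to move file_gb gigabytes at the given throughput."""
    return (file_gb * 8e9) / throughput_bps / 3600

scenarios = [  # (label, RTT in ms, packet loss), roughly following the table above
    ("local",                1.6, 0.006),
    ("regional",            16.0, 0.007),
    ("cross-continent",     48.0, 0.010),
    ("different continent", 96.0, 0.014),
]
for label, rtt_ms, loss in scenarios:
    bps = tcp_throughput_bps(rtt_ms / 1000, loss)
    print(f"{label:20s} {bps / 1e6:6.1f} Mbps, 4 GB in {download_hours(4, bps):6.2f} h")
```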

  20. Centralized Approach with “Big Data Centers” • Clusters of Web servers in large data centers at the core of the Internet • Tens of data centers • Transit provided by large backbone ISPs

  21. The Problems with Peering • Economic considerations limit peering capacity, resulting in congestion and poor performance • Routing algorithms (BGP) ignore congestion • BGP ignores latency • Data used to determine routes is subject to intentional inaccuracies and human error

  22. Normal traffic flow: BGP picks the “best” route and all packets flow over that path (diagram: Web Server reaching the End User across Sprint, Level 3 and CW)

  23. Normal traffic flow: if there is congestion on a link, BGP will continue to send packets down that link (same diagram)
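  Slides 21–23 argue that BGP's best-path choice is blind to latency and congestion. A toy sketch (my illustration, not BGP code; the paths, RTTs and congestion flags are invented) contrasts selection by AS-path length with the latency-aware choice an overlay such as a CDN can make from its own measurements:

```python
# Toy illustration (not a BGP implementation): best-path selection here uses
# only AS-path length, mirroring the fact that BGP knows nothing about
# latency or congestion. Paths, RTTs and congestion flags are invented.
paths = [
    {"as_path": ["Sprint", "CW"],           "rtt_ms": 180, "congested": True},
    {"as_path": ["Sprint", "Level3", "CW"], "rtt_ms": 40,  "congested": False},
]

# BGP-style choice: shortest AS path wins, even if that path is congested.
bgp_choice = min(paths, key=lambda p: len(p["as_path"]))

# What an overlay doing its own latency measurements could choose instead.
overlay_choice = min(paths, key=lambda p: p["rtt_ms"])

print("BGP picks:    ", " -> ".join(bgp_choice["as_path"]), f"({bgp_choice['rtt_ms']} ms)")
print("Overlay picks:", " -> ".join(overlay_choice["as_path"]), f"({overlay_choice['rtt_ms']} ms)")
```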

  24. Economics Bottleneck (diagram: infrastructure and dollar flows from Web Content through the First Mile/Hosting, the Middle Mile, and the Last Mile/Broadband to Web Users, with growth multipliers of 6X, 20X and 50X shown along the path)

  25. The Big Data Center Approach to Content Delivery (diagram: tens of data centers connect across the first mile to thousands of local and edge ISPs)

  26. Problem 1: Data centers are on the wrong side of the bottleneck (same diagram, with the bottleneck sitting between the data centers’ first mile and the edge ISPs that serve users)

  27. Problem 1: Data centers are on the wrong side of the bottleneck. Solution: Cache servers are on the right side of the bottleneck (diagram: cache servers placed within the edge ISPs, past the bottleneck and next to users, while the data centers remain behind the Tier 1 ISPs)

  28. Problem 2: Data centers are far from end users (same diagram: the tens of data centers sit 500–5,000 miles from the users behind the edge ISPs)

  29. Problem 2: Data centers are far from end users. Solution: Cache servers are close to end users (diagram: cache servers in the edge ISPs are 10–100 miles from users, versus 500–5,000 miles for the data centers)
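  As a rough sketch of what “cache servers close to end users” implies operationally (my illustration, not Akamai’s actual mapping system), a CDN’s request-routing step can pick the edge cluster nearest to the requesting user, so content travels 10–100 miles instead of thousands. The cluster names and coordinates below are hypothetical:

```python
# Hypothetical edge-selection sketch: route each user to the nearest edge
# cluster by great-circle distance. Cluster names and coordinates are made up.
from math import radians, sin, cos, asin, sqrt

EDGE_CLUSTERS = {            # hypothetical edge locations: (lat, lon)
    "moscow-ix": (55.76, 37.62),
    "frankfurt": (50.11, 8.68),
    "stockholm": (59.33, 18.07),
}

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))

def pick_edge(user_location):
    """Return the name of the edge cluster nearest to the user."""
    return min(EDGE_CLUSTERS, key=lambda name: haversine_miles(user_location, EDGE_CLUSTERS[name]))

print(pick_edge((55.75, 37.61)))   # a user in Moscow -> "moscow-ix"
```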

  30. The Akamai EdgePlatform: 120,000+ servers, 1,928 POPs, 1,069 networks, 660+ cities, 83 countries. The Akamai System: the world’s largest on-demand, distributed computing platform delivers all forms of Web content and applications for over 130,000 domains. • Resulting in traffic of: • 13 Tbps peak traffic • 100+ petabytes / day • 1,468+ billion hits / day • 560+ million unique client IPs / day

  31. Conclusion: How to deliver HD traffic in the future? • Overlay CDN networks are the state of the art • Akamai, Google and Netflix use on-net servers • Protocols like Multicast and QoS are not the solution • CDN interconnects and federations are not the solution either • Peer-to-peer networks: unclear so far • Waiting for the next big idea

  32. Questions? • ck@akamai.com
