
Content Distribution Network


Presentation Transcript


  1. Content Distribution Network Jesse Szwedko Callen Shaw Heather Friedberg

  2. Motivation • Deliver streams to as many clients as possible while minimizing cost • Cost is defined as the tradeoff between client QoS and the resource usage of the CDN • Media streaming is a domain in which we can predict exactly what users will need next

  3. Network Topology • A set of front ends, each controlling its own cluster of workers
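A minimal sketch of this topology, assuming Python-style classes; the names FrontEnd, Worker, files, and location are illustrative and not taken from the slides.

```python
from dataclasses import dataclass, field


@dataclass
class Worker:
    worker_id: int
    files: set = field(default_factory=set)        # file replicas held by this worker


@dataclass
class FrontEnd:
    front_end_id: int
    location: tuple                                 # geographic position of the cluster
    workers: list = field(default_factory=list)     # the cluster this front end controls


# The CDN is just the set of front ends, each with its own cluster of workers.
cdn = [
    FrontEnd(0, (40.4, -80.0), [Worker(0, {"movie_a"}), Worker(1, {"movie_b"})]),
    FrontEnd(1, (37.8, -122.4), [Worker(2, {"movie_a"})]),
]
```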

  4. Model - Client • Input parameters define the clients and their requests • A client can have multiple streams open at once • Each stream is buffered on the client side, so the client does not experience lag until this buffer is empty • Lag makes customers unhappy, so we assign lag a cost
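A rough sketch of this client model, assuming one block is consumed per clock tick and a fixed lag cost per tick; the class and field names (Stream, Client, buffered_blocks, lag_cost) are assumptions for illustration.

```python
class Stream:
    def __init__(self, file_id, lag_cost_per_tick=1.0):
        self.file_id = file_id
        self.buffered_blocks = 0              # blocks delivered but not yet consumed
        self.lag_cost_per_tick = lag_cost_per_tick
        self.lag_cost = 0.0

    def receive_block(self):
        self.buffered_blocks += 1

    def tick(self):
        """Consume one block; if the buffer is empty, the client lags and pays for it."""
        if self.buffered_blocks > 0:
            self.buffered_blocks -= 1
        else:
            self.lag_cost += self.lag_cost_per_tick


class Client:
    def __init__(self):
        self.streams = []                     # a client can have several streams open at once

    def tick(self):
        for stream in self.streams:
            stream.tick()
```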

  5. Model - CDN • A defined number of geographically distributed front ends, each managing a cluster of local workers • Clients send their requests to the closest front end that has a worker holding the requested file • We pay a fixed cost for every clock tick that a server is online
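A hedged sketch of the request routing and per-tick server cost just described, reusing the FrontEnd/Worker classes from the topology sketch above; the SERVER_COST_PER_TICK value and the use of straight-line distance are assumptions, not taken from the slides.

```python
import math

SERVER_COST_PER_TICK = 0.1     # assumed value: fixed cost per tick a worker is online


def route_request(cdn, client_location, file_id):
    """Send the request to the closest front end whose cluster holds the file."""
    candidates = [fe for fe in cdn
                  if any(file_id in w.files for w in fe.workers)]
    if not candidates:
        return None                              # no replica anywhere in the CDN
    return min(candidates,
               key=lambda fe: math.dist(fe.location, client_location))


def resource_cost(ticks_online_by_worker):
    """Resource-usage side of the cost: every online worker is charged per clock tick."""
    return SERVER_COST_PER_TICK * sum(ticks_online_by_worker.values())
```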

  6. System Intelligence • Replication • Replicate “hot” files to other clusters and to workers under the same front end • Deletion • Clear space by removing infrequently accessed replicas • Load Balancing • Relieve workers that cannot keep client streams from lagging

  7. Replication • Replicas are made within a cluster whenever possible • Files with hotness above a certain threshold are replicated to other clusters as well • While streams are consumed from first block to last, replication happens in reverse • This allows load balancing to offload a request mid-stream without the receiving worker needing the entire file
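A sketch of the reverse-order replication idea, under assumed names: blocks are copied last-to-first so a partial replica can already serve the tail of an in-progress stream, and a hotness threshold decides when to replicate across clusters. The threshold value and function names are illustrative.

```python
HOTNESS_THRESHOLD = 5.0        # assumed value; the slides only say "a certain threshold"


def should_replicate_across_clusters(hotness):
    """Files hotter than the threshold are also replicated to other clusters."""
    return hotness > HOTNESS_THRESHOLD


def replicate_in_reverse(source_blocks):
    """Copy blocks last-to-first, yielding the growing partial replica after each block."""
    replica = {}
    for index in range(len(source_blocks) - 1, -1, -1):
        replica[index] = source_blocks[index]
        yield dict(replica)


def can_take_over_stream(replica, next_block, total_blocks):
    """A partial replica can accept an offloaded stream if it already holds
    every block the client still needs (next_block .. total_blocks - 1)."""
    return all(i in replica for i in range(next_block, total_blocks))
```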

  8. Load Balancing • Load balancing is initiated when a worker’s load is “too high” • A worker is overloaded when its clients cannot keep data in their buffers; the load metric is request_queue_length / time_to_consume_block • Look at all of the worker’s requests and move any that can be serviced by a less loaded worker • The receiving worker must have the associated file, but only needs the blocks remaining in the stream
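A sketch of the overload test and offloading step. The load metric is the one given on the slide; the OVERLOAD_THRESHOLD value and the dictionary layout of workers and requests are assumptions for illustration.

```python
OVERLOAD_THRESHOLD = 1.0       # assumed value; the slides leave the exact cutoff open


def load(request_queue_length, time_to_consume_block):
    """Load metric from the slide: request_queue_length / time_to_consume_block."""
    return request_queue_length / time_to_consume_block


def rebalance(workers, time_to_consume_block):
    """Move requests off overloaded workers to less loaded peers that already
    hold the blocks remaining in each stream.

    Each worker is a dict {"requests": [...], "blocks": {file_id: set(indices)}};
    each request is a dict {"file": str, "next_block": int, "total_blocks": int}.
    """
    def worker_load(w):
        return load(len(w["requests"]), time_to_consume_block)

    for name, w in workers.items():
        if worker_load(w) <= OVERLOAD_THRESHOLD:
            continue
        for req in list(w["requests"]):
            remaining = set(range(req["next_block"], req["total_blocks"]))
            candidates = [
                peer for peer_name, peer in workers.items()
                if peer_name != name
                and worker_load(peer) < worker_load(w)
                and remaining <= peer["blocks"].get(req["file"], set())
            ]
            if candidates:
                target = min(candidates, key=worker_load)
                w["requests"].remove(req)
                target["requests"].append(req)
```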

  9. Front End Types • Static Front End: each front end starts with a set number of workers; the front end may move files and requests among these workers, but no new workers can be added • Dynamic Front End: each front end starts with enough servers to hold its initial files; the front end may spin up or down new worker instances (for a cost) to decrease lag
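A minimal sketch of the dynamic front end's per-tick scaling decision, reusing the Worker/FrontEnd classes from the topology sketch above; SPIN_UP_COST and the lag-based heuristic are assumptions, since the slides only say workers can be spun up or down for a cost. A static front end simply never takes this step.

```python
SPIN_UP_COST = 5.0             # assumed one-time cost of starting a new worker


def dynamic_scaling_step(front_end, recent_lag_cost):
    """One scaling decision per clock tick for a dynamic front end."""
    if recent_lag_cost > SPIN_UP_COST:
        # Lag is costing more than a new worker would: add capacity.
        front_end.workers.append(Worker(worker_id=len(front_end.workers)))
        return "spin up"
    if recent_lag_cost == 0.0 and len(front_end.workers) > 1:
        # No lag at all: drop a worker to stop paying its per-tick cost.
        front_end.workers.pop()
        return "spin down"
    return "no change"
```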

  10. Experiments • Compared static and dynamic front ends while varying parameters • Default Parameters:

  11. Cost v. Request Density

  12. Cost v. Replication

  13. Cost v. Overload Threshold

  14. Cost v. Deletion

  15. Q & A
