
Congestion Control in Distributed Media Streaming



Presentation Transcript


  1. Congestion Control in Distributed Media Streaming Lin Ma Wei Tsang Ooi School of Computing National University of Singapore IEEE INFOCOM 2007

  2. Outline • Introduction • Distributed Media Streaming (DMS) • Task-level TCP Friendliness • Framework of DMSCC • Throughput Control • Congestion Control • Simulation and Conclusions

  3. Introduction • Distributed Media Streaming (DMS) • coined by Nguyen and Zakhor (IEEE Transactions on Multimedia, 2004) • a client receives a media stream from multiple servers simultaneously • multiple media flows may or may not pass through the same bottleneck • aggregate congestion control is treated as task-level congestion control

  4. DMS • Distributed Media Streaming • a client receives a media stream from multiple servers simultaneously • improves robustness • allows aggregation of bandwidth among peers [Figure: Sender 1 and Sender 2 stream to a receiver across the Internet; link capacities 1.2 Mbps, 0.8 Mbps, and 0.4 Mbps]

  5. Scenario • Receiver-driven • The receiver estimates each sender's round-trip time and loss rate, and decides each sender's sending rate (RAA) • Each sender determines the next packet to be sent (PPA) • Control packets from the receiver carry this information back to Sender 1 and Sender 2 [Figure: timeline of data and control packets between the receiver and the two senders, showing each sender's RTT]

  6. DMS Protocol • Rate allocation algorithm (RAA) • runs at the receiver • minimizes the probability of packet loss • splits the sending rates appropriately across the senders • Packet partition algorithm (PPA) • runs at the individual senders • ensures every packet is sent by exactly one sender • minimizes the probability of a packet arriving late

  7. Bandwidth Estimation • TCP-Friendly Rate Control (TFRC) throughput equation: B = S / ( R·√(2p/3) + Trto·3·√(3p/8)·p·(1 + 32p²) ) • B : current available TCP-friendly bandwidth between each sender and the receiver • Trto : TCP retransmission timeout • R : estimated round-trip time in seconds • p : estimated loss rate • S : TCP segment size in bytes
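The TFRC equation on this slide can be sketched in Python. This is a hedged illustration: the function name and the example parameter values are mine, not from the paper; only the formula and the variable meanings come from the slide.

```python
from math import sqrt

def tfrc_bandwidth(S, R, p, t_rto):
    """TCP-friendly bandwidth estimate (bytes/s) from the TFRC
    throughput equation.
    S: TCP segment size (bytes), R: round-trip time (s),
    p: estimated loss rate, t_rto: TCP retransmission timeout (s)."""
    if p <= 0:
        return float("inf")  # no observed loss: the equation does not bound the rate
    denom = R * sqrt(2 * p / 3) + t_rto * 3 * sqrt(3 * p / 8) * p * (1 + 32 * p ** 2)
    return S / denom

# e.g. 1460-byte segments, 100 ms RTT, 1% loss, RTO = 4 * RTT (illustrative values)
rate = tfrc_bandwidth(1460, 0.1, 0.01, 0.4)
```

As expected from the equation, the estimate falls as the loss rate or RTT grows, which is what makes the senders' rates TCP-friendly.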

  8. Rate Allocation Algorithm • The receiver computes the optimal sending rate for each sender based on loss rate and bandwidth: minimize F(t) (proportional to Σi L(i, t)·S(i, t)) over the interval (t, t + ∆), subject to Σi S(i, t) = Sreq(t) and S(i, t) ≤ B(i, t) • F(t) : total number of lost packets • L(i, t) : estimated loss rate of sender i • S(i, t) : allocated sending rate of sender i • Sreq(t) : required bit rate for the encoded video • B(i, t) : TCP-friendly estimated bandwidth of sender i

  9. Rate Allocation Algorithm • Sort the senders according to their estimated loss rates, from lowest to highest • Assign each sender its available bandwidth in that order, until the sum of the assigned rates reaches the bit rate of the encoded video (the last sender used is assigned only the remainder)
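The greedy allocation on this slide can be sketched as follows. A hedged illustration: the function signature and the tuple representation of a sender are my own choices, not the paper's.

```python
def rate_allocation(senders, s_req):
    """Greedy rate allocation sketch.
    senders: list of (loss_rate, bandwidth) tuples;
    s_req: required bit rate of the encoded video.
    Returns the allocated rate for each sender (same order)."""
    # visit senders from lowest to highest estimated loss rate
    order = sorted(range(len(senders)), key=lambda i: senders[i][0])
    rates = [0.0] * len(senders)
    remaining = s_req
    for i in order:
        if remaining <= 0:
            break
        give = min(senders[i][1], remaining)  # never exceed available bandwidth
        rates[i] = give
        remaining -= give
    return rates

# two senders as (loss, bandwidth) in Mbps; required rate 1.0 Mbps
alloc = rate_allocation([(0.05, 0.8), (0.01, 0.6)], 1.0)
# the lowest-loss sender (index 1) is filled first; index 0 gets the remainder
```

This matches slide 10's optimality argument: every sender except possibly the last one used runs at its full TCP-friendly bandwidth.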

  10. Rate Allocation Algorithm • Optimality: suppose there exists a rate allocation over M' senders in which F(t) is minimal, say F'(t), but for some 1 ≤ i' < M', S(i', t) ≠ B(i', t), which implies S(i', t) < B(i', t) • Proof: by reallocating Ω = min[ S(M', t), B(i', t) − S(i', t) ] of the bit rate from sender M' (highest loss) to sender i' (lower loss), we achieve F*(t) < F'(t), a contradiction

  11. Packet Partition Algorithm • Each sender receives a control packet from the receiver through a reliable protocol, in the format [ D1 | D2 | S1 | S2 | Sync ] • Di : estimated delay from sender i to the receiver • Si : sending rate for sender i • Sync : synchronization sequence number

  12. Packet Partition Algorithm • Each sender computes Ak'(j, k) for each packet k, for itself and all other senders: Ak'(j, k) = Tk'(k) − [ nj,k·σ(j) + 2·D(j) ] • D(j) : estimated delay of sender j • nj,k : number of packets sender j has already sent since packet k' up to packet k • σ(j) = P / S(j) : interval between packets for sender j (P : packet size) • Tk'(k) : time difference between the arrival and playback times of the kth packet (identical for all senders, so it does not affect the comparison) • The bracketed term estimates the arrival time of the kth packet if sent by sender j • The sender that maximizes Ak'(j, k) is assigned to send the kth packet

  13. Packet Partition Algorithm • Among all senders j = 1, …, N, the one that maximizes Ak'(j, k) is assigned to send the kth packet • Each sender keeps track of the values of Ak'(j, k) for all N senders, and updates them every time a packet is sent
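The partition rule above can be sketched in Python. Since Tk'(k) is identical for all senders (as slide 12 notes), maximizing Ak'(j, k) reduces to minimizing the estimated arrival lag nj,k·σ(j) + 2·D(j). The function shape and the example numbers are my own illustration, not the paper's code.

```python
def pick_sender(n_sent, rates, delays, pkt_size):
    """Packet partition sketch. n_sent[j]: packets sender j has sent
    since the last sync point; rates[j]: sending rate (bytes/s);
    delays[j]: estimated one-way delay (s); pkt_size: bytes.
    T(k) is the same for every sender, so max A(j, k) == min lag(j)."""
    def lag(j):
        sigma = pkt_size / rates[j]          # inter-packet interval of sender j
        return n_sent[j] * sigma + 2 * delays[j]
    j = min(range(len(rates)), key=lag)
    n_sent[j] += 1                           # sender j now has one more packet queued
    return j

# Every sender runs the same deterministic computation, so all of them
# agree on who sends each packet without extra coordination.
n = [0, 0]
schedule = [pick_sender(n, [100_000, 50_000], [0.02, 0.01], 1000) for _ in range(4)]
```

With these illustrative numbers the faster, lower-delay sender takes the first packet and the two senders then alternate.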

  14. Task-level TCP Friendliness • Congestion control in single-flow streaming • flow f is TCP-friendly if B = BTCP • under a comparable network environment • same loss rate, RTT, and packet size [Figure: flow f and a TCP flow sharing link A-B in a four-node topology]

  15. Task-level TCP Friendliness • Congestion control on an aggregate • O : set of flows in the flow aggregate • bi : throughput of flow fi • rtti : round-trip time of flow fi [Figure: flows f1, f2, and f3 and a TCP flow sharing link A-B]

  16. Task-level TCP Friendliness • Congestion control in DMS • the set of DMS flows to be controlled depends on where congestion appears [Figure: receiver R served by senders 0-3 through routers A, B, and C, with competing TCP flows]

  17. Framework of DMSCC • Congestion location: determine where congestion appears • Throughput control: update the increasing factor of AIMD • The congestion control algorithm runs at the receiver side and regulates the DMS flows at the sender side

  18. Congestion Location • An ideal solution to locating congestion: • when congestion causes a packet loss on a DMS flow, it should be able to tell which link is congested • when congestion subsides, it should sense it, so that the regulation previously imposed on the DMS flows can be lifted • Such an ideal solution is difficult: • simultaneous congestion can occur on different links in the tree • the same flow might experience congestion on different links

  19. Congestion Location • One-link congestion (Rubenstein et al. [15]) • compares the cross-correlation of two flows with the auto-correlation of one of them • CorrTest(i, j) denotes Rubenstein's correlation test applied to flow i and flow j • returns 1 if the two flows share a bottleneck • returns 0 if no shared bottleneck is detected

  20. Congestion Location • Idea: packets passing through the same point of congestion (POC) close in time experience correlated loss and delay • Using either loss or delay statistics, compute two measures of correlation: • Mc : cross-measure (between flows), e.g. Mc = C(Delay(i), Delay(i−1)) over interleaved packets of the two flows • Ma : auto-measure (within a flow), e.g. Ma = C(Delay(i), Delay(prev(i))) • if Mc < Ma, infer the POCs are separate; else (Mc > Ma) infer the POC is shared [Figure: interleaved packet timelines of flow 1 and flow 2, packets i−4 … i+1]
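The Mc-versus-Ma decision can be sketched with delay samples. A hedged illustration only: real Rubenstein tests use carefully interleaved probes and loss statistics as well; here I use plain Pearson correlation on per-packet delays, and all names are mine.

```python
def corr(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def corr_test(delays1, delays2):
    """Sketch of the shared-bottleneck test: the cross-measure M_c
    correlates flow 1's delays with flow 2's; the auto-measure M_a
    correlates flow 1 with a time-shifted copy of itself.
    Returns 1 (shared bottleneck) when M_c > M_a, else 0."""
    m_c = corr(delays1, delays2)
    m_a = corr(delays1[1:], delays1[:-1])
    return 1 if m_c > m_a else 0
```

Two flows queuing at the same congested link see the same delay bursts, pushing Mc above Ma; flows with separate bottlenecks do not.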

  21. Congestion Location • Algorithm 1: find the shared bottleneck • Output = { A-B } [Figure: senders 0, 1, 2 reach receiver R through routers A and B; link A-B is shared]

  22. Congestion Location • Algorithm 2: on every packet loss, attribute the loss to a link and update the history • H = { A-B, 0-A, 0-A, A-B, A-B, 0-A, A-B, A-B, … } • C = { A-B, 0-A } [Figure: senders 0, 1, 2, receiver R, routers A and B]
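The bookkeeping on this slide can be sketched as follows. This is a hypothetical reading: the slide does not give Algorithm 2's internals, so the bounded-window history and the "most frequent link" rule are my assumptions, chosen to reproduce the slide's example (H and C as shown, with A-B dominant for slide 25).

```python
from collections import deque, Counter

class CongestionLocator:
    """Hypothetical sketch: each packet loss is attributed to a link
    (e.g. via the shared-bottleneck test), appended to a bounded
    history H; the congested set C is the set of links seen in the
    recent window. The window length of 8 is an assumption."""
    def __init__(self, window=8):
        self.H = deque(maxlen=window)

    def on_loss(self, link):
        self.H.append(link)

    def congested_set(self):
        return set(self.H)

    def dominant(self):
        """Most frequently congested link in the window (a plausible
        input for the throughput controller's dominant bottleneck)."""
        return Counter(self.H).most_common(1)[0][0] if self.H else None

loc = CongestionLocator()
for link in ["A-B", "0-A", "0-A", "A-B", "A-B", "0-A", "A-B", "A-B"]:
    loc.on_loss(link)
# C = {"A-B", "0-A"}; "A-B" dominates
```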

  23. Throughput Control • W : size of the congestion window • α : increasing factor • L : period (in RTTs) between every two packet losses • p : packet loss rate • Mathis equation [14]: if the loss rate is p, then for every 1/p packets, one packet is lost • so the total number of packets received during that period is 1/p

  24. Throughput Control • For the throughput of a DMS flow to be β times that of a conformant TCP flow, we need to set its increasing factor α to β²
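The α = β² rule follows from the Mathis-style loss-cycle argument on slide 23. A reconstructed derivation, assuming the usual AIMD model (additive increase α per RTT, multiplicative decrease 1/2):

```latex
% Window oscillates between W/2 and W, growing by \alpha per RTT:
L = \frac{W}{2\alpha} \quad \text{(RTTs between two losses)}
% One loss per cycle, 1/p packets per cycle (Mathis):
\frac{1}{p} = L \cdot \frac{3W}{4} = \frac{3W^2}{8\alpha}
\;\Rightarrow\; W = \sqrt{\frac{8\alpha}{3p}}
% Average throughput with segment size S and RTT R:
T(\alpha) = \frac{S}{R}\cdot\frac{3W}{4} = \frac{S}{R}\sqrt{\frac{3\alpha}{2p}},
\qquad
\frac{T(\alpha)}{T(1)} = \sqrt{\alpha} = \beta
\;\Rightarrow\; \alpha = \beta^2
```

For α = 1 this reduces to the standard Mathis throughput S/R · √(3/(2p)), so the scaling is consistent with a conformant TCP flow.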

  25. Throughput Control • Update α according to the dominant bottleneck • Algorithm 3: update increasing factors • C = { A-B, 0-A }, C' = { A-B } : A-B dominates the other link [Figure: senders 0, 1, 2, receiver R, routers A and B]

  26. Throughput Control • Bottleneck recovery: when congestion subsides and there are no more packet losses, we need to reset each αi to 1 • this lets the DMS flows fully utilize the available bandwidth • A timer is refreshed whenever a packet loss is detected • If no packet loss is detected within t seconds, the increasing factors of all DMS flows are reset to 1
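The recovery timer above can be sketched as follows. A hedged illustration: the class shape, the injectable clock, and the polling style are my choices; only the refresh-on-loss / reset-after-t-seconds behavior comes from the slide.

```python
import time

class RecoveryTimer:
    """Sketch of bottleneck recovery: the timer is refreshed on every
    packet loss; if no loss is seen for `timeout` seconds, all
    increasing factors are reset to 1 so the DMS flows can probe for
    bandwidth again. The clock is injectable for testing."""
    def __init__(self, alphas, timeout, clock=time.monotonic):
        self.alphas = alphas          # per-flow AIMD increasing factors
        self.timeout = timeout        # the paper's `t` seconds
        self.clock = clock
        self.last_loss = clock()

    def on_loss(self):
        self.last_loss = self.clock()  # refresh the timer

    def poll(self):
        """Call periodically; resets all factors after a quiet period."""
        if self.clock() - self.last_loss >= self.timeout:
            for i in self.alphas:
                self.alphas[i] = 1.0

# fake clock so the behavior is deterministic
t = [0.0]
rt = RecoveryTimer({1: 4.0, 2: 4.0}, timeout=5.0, clock=lambda: t[0])
t[0] = 3.0
rt.poll()   # only 3 s of quiet: factors unchanged
t[0] = 6.0
rt.poll()   # 6 s >= timeout: factors reset to 1
```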

  27. Simulation [Figure: simulation topology with senders 0-3, receiver R, and routers A, B, and C]

  28. Conclusions • Congestion control in a DMS system: task-level TCP-friendliness • Locates congestion in a reverse-tree topology, relying on Rubenstein's method • Controls the throughput of a DMS flow using an AIMD loop, so that the combined throughput on the bottleneck is TCP-friendly • based on the Mathis equation
