
Estimating Shared Congestion Among Internet Paths



Presentation Transcript


  1. Estimating Shared Congestion Among Internet Paths
  Weidong Cui, Sridhar Machiraju, Randy H. Katz, Ion Stoica
  Electrical Engineering and Computer Science Department, University of California, Berkeley
  {wdc, machi, randy, istoica}@EECS.Berkeley.EDU
  Sahara Retreat, Summer 2003

  2. Motivation
  • Applications using path diversity for better performance
    • multimedia streaming - independent losses
    • parallel downloads - better throughput
    • overlay routing networks - backup paths for robustness
  • Why traceroute will not work
    • ICMP may be filtered
    • False positives
    • Conservative
  [Figure: example topology with nodes N1-N7 and congested links]

  3. Problem Formulation
  • Problem: Given two paths in the Internet, estimate the fraction of packet drops at shared points of congestion (PoCs) using probe flows along the paths
  • Limitations of existing solutions
    • Work only with Y and Inverted-Y topologies
    • Return only a "Yes/No" decision on shared PoCs

  4. Our Approach
  • Assumptions
    • Most routers still use the drop-tail queuing discipline
    • Most traffic is TCP-based
  • Basic idea
    • Count correlated (simultaneous) packet drops of two probe flows (UDP or TCP)
    • Drop-tail queues + TCP => bursty drops
    • Packets traversing a PoC at around the same time are likely to be dropped (or not dropped) together
    • Why not delay/jitter?
  • Algorithm
    • Determine the synchronization lag
    • Calculate the fraction of correlated packet drops
    • "Inflate" the fraction using delay-jitter correlation
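As a rough illustration of the basic idea, the fraction of correlated drops could be computed from two binary drop sequences once they have been aligned by the synchronization lag. This is a minimal sketch, not the authors' implementation; the function name and the exact definition of the fraction are assumptions:

```python
def correlated_drop_fraction(drops1, drops2):
    """Fraction of flow 1's drops that coincide with a drop in flow 2.

    drops1, drops2: equal-length lists of 0/1 samples (1 = packet dropped),
    assumed to be already shifted by the estimated synchronization lag.
    """
    total = sum(drops1)
    if total == 0:
        return 0.0  # flow 1 saw no drops at all
    shared = sum(1 for a, b in zip(drops1, drops2) if a == 1 and b == 1)
    return shared / total

# Example: flow 1 drops 3 packets; 2 of those drops are simultaneous
# with drops in flow 2.
fraction = correlated_drop_fraction([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

Per the slide, this raw fraction would then be "inflated" using delay-jitter correlation, which is not modeled here.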

  5. Synchronization Lag
  • We need to know which two packets traverse the queue at around the same time
    • No knowledge of traversal times at the shared PoCs (if any)
    • Senders may not be synchronized
    • The delay from each sender to a shared PoC is unknown
  [Figure: CBR probe flows from Sender 1 and Sender 2 arriving at a shared PoC; numbered packet timelines (slot width T, sender-to-PoC delays d1, d2) illustrate a synchronization lag of 3T. Slide note: the lag is bounded by RTTmax/2]

  6. Determining the Synclag
  • Assuming UDP-based CBR probe flows, construct two sequences of 1s (drops) and 0s
  • The synclag is loosely bounded by 2*RTTmax
  • For a given synclag, the cross-correlation coefficient (CCC) of the two (synclag-shifted) sequences can be calculated
  • Try various values of the synclag and calculate the CCCs
  • Use the synclag that maximizes the CCC of the (synclag-shifted) packet-drop sequences
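The synclag search on this slide can be sketched as a brute-force maximization of the cross-correlation coefficient over candidate lags. This is a hedged sketch, assuming drop sequences sampled at the CBR probe rate; the lag granularity (one probe slot), the search range handling, and all names are assumptions, not the authors' code:

```python
def ccc(x, y):
    """Pearson cross-correlation coefficient of two equal-length 0/1 sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    if vx == 0 or vy == 0:
        return 0.0  # one sequence is constant; correlation is undefined
    return cov / (vx * vy) ** 0.5

def best_synclag(drops1, drops2, max_lag):
    """Try every lag in [-max_lag, max_lag] slots; keep the CCC-maximizing one."""
    best_lag, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Shift drops1 forward by `lag` slots relative to drops2.
        if lag >= 0:
            x, y = drops1[lag:], drops2[:len(drops2) - lag]
        else:
            x, y = drops1[:len(drops1) + lag], drops2[-lag:]
        n = min(len(x), len(y))
        c = ccc(x[:n], y[:n])
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag

# Example: flow 2 sees the same drop pattern two slots later than flow 1.
d1 = [0, 0, 1, 1, 0, 0, 1, 0, 0, 0]
d2 = [0, 0, 0, 0, 1, 1, 0, 0, 1, 0]
lag = best_synclag(d1, d2, 4)  # aligning d1 two slots earlier matches d2
```

In practice the search bound would come from the slide's 2*RTTmax limit divided by the probe interval.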

  7. Correlate Bursty Packet Drops
  • Not all packets during a congested period at a PoC may be dropped
  • Correlate bursts of packet drops to avoid false negatives
  [Figure: synclag-shifted timelines of flows 1 and 2, showing transmitted packets and packet drops grouped into bursts]

  8. Correlate Bursts with Overlap
  • Bursts at different PoCs may have a small overlap
  • Consider only bursts with a minimum degree of overlap to prevent false positives
  [Figure: synclag-shifted timelines showing partially overlapping bursts of flows 1 and 2]
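The burst grouping and minimum-overlap test on these two slides might look like the following sketch. The 15 ms burst interval and 50% overlap threshold come from the evaluation parameters on slide 9; the data representation (drop timestamps in milliseconds, bursts as (start, end) pairs) and the overlap definition (relative to the shorter burst) are my assumptions:

```python
def bursts(drop_times_ms, burst_interval_ms=15):
    """Group drop timestamps into bursts: consecutive drops no more than
    burst_interval_ms apart belong to the same burst. Returns (start, end) pairs."""
    out = []
    for t in sorted(drop_times_ms):
        if out and t - out[-1][1] <= burst_interval_ms:
            out[-1] = (out[-1][0], t)  # extend the current burst
        else:
            out.append((t, t))         # start a new burst
    return out

def overlap_fraction(b1, b2):
    """Overlap of two bursts as a fraction of the shorter burst's duration."""
    inter = min(b1[1], b2[1]) - max(b1[0], b2[0])
    shorter = min(b1[1] - b1[0], b2[1] - b2[0])
    if shorter <= 0:  # a single-drop burst: shared iff it falls inside the other
        return 1.0 if inter >= 0 else 0.0
    return max(inter, 0) / shorter

# Two bursts count as correlated only if they overlap by at least 50%,
# which filters out small accidental overlaps (false positives).
shared = overlap_fraction((0, 12), (6, 20)) >= 0.5
```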

  9. Evaluation Methodology
  • Challenges
    • Hard to verify our results because congestion information about the links is not available
    • Hard to simulate real network traffic in ns simulations
  • Methodology
    • Create overlay topologies on PlanetLab
    • Each overlay node records packet arrivals
    • Drops on "overlay links" can be inferred
  • Probe flows
    • UDP (active): CBR traffic
    • TCP (passive): UDP-encapsulated
  • Application: MPEG streaming over two paths
  • Parameters
    • UDP probing rate = 100 Hz
    • Burst interval = 15 ms
    • Burst overlap = 50%

  10. 4-I and 4-II Topologies (UDP)
  • 80% of the estimates > 0.8
  [Figures: results for the 4-I and 4-II topologies]

  11. Evaluation Metrics
  • Cannot infer whether drops are not shared
    • Drops between N1 and M1 can still be at a shared PoC
  • Bounds on the fraction of drops at shared PoCs
    • Lower bound: d3/(d1+d2+d3+d4)
    • Upper bound: (d2+d3+d4)/(d1+d2+d3+d4)
  [Figure: topology with senders S1, S2, receivers R1, R2, and intermediate nodes N1, N2, M1, M2; drop counts d1-d4 labeled on path segments]
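The two bounds above are simple ratios over per-segment drop counts. A sketch, where the interpretation of d1-d4 (d3 counts the drops known to be at a shared PoC, d2 and d4 the ambiguous ones, d1 the rest) follows my reading of the slide's figure:

```python
def shared_drop_bounds(d1, d2, d3, d4):
    """Lower and upper bounds on the fraction of drops at shared PoCs.

    d3: drops known to be at a shared PoC; d2, d4: drops that may or may
    not be shared; d1: remaining drops (labeling assumed from the figure).
    """
    total = d1 + d2 + d3 + d4
    lower = d3 / total               # only the certainly-shared drops
    upper = (d2 + d3 + d4) / total   # everything that could be shared
    return lower, upper

low, high = shared_drop_bounds(1, 2, 3, 4)  # (0.3, 0.9)
```

An estimator is then judged accurate when its estimate falls between these two bounds.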

  12. 4-YV Topology (UDP)
  • 80% of the paths show at least 0.8 times the actual value
  • Is there a better way to verify the accuracy?
  [Figure: results for the 4-YV topology]

  13. 2-I Topology (TCP) - Base Case
  • TCP: only ~80% of the estimates reach 0.6; possibly due to bursty sending and fewer drops
  • How can the performance of TCP-based estimation be improved?
  [Figure: results for the 2-I topology]

  14. Conclusions
  • Problem
    • Estimate the fraction of packet drops at shared PoCs
  • Challenges
    • Synchronization lag
    • False positives
    • False negatives
  • Results
    • Can estimate the actual fraction of shared drops within a factor of 0.8 in 80-90% of the UDP experiments
    • Works with any general topology

  15. Open Questions
  • A better way to verify the accuracy of the estimated fraction?
  • How to improve the performance of TCP-based estimation?
  • How to handle RED queues?
  • Correlate delay?
  • Correlate packet-loss probability?
  • Applications exploiting our technique?
    • Media streaming?
    • Application-level multicast?
    • Parallel downloads?
    • Backup-path routing?
