Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes • Zhili Zhao, A. L. Narasimha Reddy • Department of Electrical Engineering, Texas A&M University • reddy@ee.tamu.edu • June 23, 2004, ICC
Agenda • Motivation • Performance Evaluation • Results & Analysis • Discussion
Current Network Workload • Traffic composition in the current network • ~60% Long-term TCP (LTRFs), ~30% Short-term TCP (STFs), ~10% Long-term UDP (LTNRFs) • Non-responsive traffic (STF + LTNRF) is increasing • Link capacities are increasing • What is the consequence?
The Trends • Long-term UDP traffic increases • Multimedia applications • Impact on TCP applications from the non-responsive UDP traffic • [Figure: UDP arrival rate, UDP goodput, and TCP goodput]
The Trends (cont’d) • Link capacity increases • Larger buffer memory required if current rules are followed (buffer = BW * delay product; see the sketch below) • Increasing queuing delay • Larger memories constrain router speeds • What if smaller buffers are used in the future?
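As a rough illustration of the buffer-sizing rule above, here is a minimal back-of-the-envelope sketch (not from the paper) that computes the bandwidth-delay product in packets for the three link speeds studied, assuming the 120 ms round-trip propagation delay and 1000-byte packets used later in the evaluation setup; the function name is illustrative.

```python
# Back-of-the-envelope: buffer size from the "buffer = BW * delay" rule,
# assuming the 120 ms RTT and 1000-byte packets used in the evaluation.

def bwdp_packets(link_mbps, rtt_s=0.120, pkt_bytes=1000):
    """Bandwidth-delay product of the bottleneck link, in packets."""
    bwdp_bytes = (link_mbps * 1e6 / 8) * rtt_s
    return bwdp_bytes / pkt_bytes

for mbps in (5, 35, 100):
    bwdp = bwdp_packets(mbps)
    # The evaluation also uses 1/3 and 3x this value as alternative buffer sizes.
    print(f"{mbps:>3} Mb link: BWDP ~ {bwdp:.0f} packets "
          f"(1/3 BWDP ~ {bwdp / 3:.0f}, 3 BWDP ~ {bwdp * 3:.0f})")
```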
Overview of Paper • Study buffer management policies in light of • Increasing non-responsive loads • Increasing link speeds • Policies studied • Droptail • RED • RED with ECN
Queue Management Schemes • RED • RED-ECN (RED w/ ECN enabled) • Droptail • [Figure: RED drop/marking probability vs. average queue length (Minth, Maxth, Pmax)]
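The RED curve sketched in the figure maps the average queue length to a drop probability (or, with RED-ECN, a marking probability). Below is a minimal sketch of that mapping in the classic RED form, using the parameter names from the figure (Minth, Maxth, Pmax); gentle RED and the count-based correction between drops are omitted.

```python
def red_probability(avg_qlen, min_th, max_th, p_max):
    """Basic RED drop/marking probability as a function of the average
    queue length (gentle RED and the count-based correction between
    drops are omitted)."""
    if avg_qlen < min_th:
        return 0.0          # below Minth: never drop/mark
    if avg_qlen >= max_th:
        return 1.0          # at or above Maxth: always drop/mark
    # Between Minth and Maxth the probability rises linearly from 0 to Pmax.
    return p_max * (avg_qlen - min_th) / (max_th - min_th)

# RED-ECN uses the same probability to mark ECN-capable packets instead
# of dropping them; Droptail simply drops arrivals when the queue is full.
```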
Agenda • Motivation • Performance Evaluation • Results & Analysis • Discussion
Performance Evaluation • Different workloads w/ higher non-responsive loads: 60% • Different link capacities: 5Mb, 35Mb, 100Mb • Different buffer sizes: 1/3, 1, or 3 × BWDP* • * Buffer size is in units of packets (1 packet = 1000 bytes)
Workload Characteristics • TCP (FTP): LTRFs • UDP (CBR): LTNRFs • 60%, 55%, or 30% of the load • 1Mbps or 0.5Mbps per flow • Short-term TCP: STFs • 0%, 5%, or 30% of the load • 10 packets/10s on average
Workload Characteristics (cont’d) • Number of flows under the 35Mb link contributing to 60% non-responsive load • * Each LTNRF sends at 1Mbps • * Numbers of flows under the 5Mb and 100Mb links are scaled accordingly (see the sketch below)
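The exact flow counts from this slide's table are not preserved in the extracted text, but the scaling they follow can be sketched: since each LTNRF sends at 1 Mbps, the number of LTNRFs needed for a target UDP share of a link is roughly the product of the two. A hedged illustration (names are illustrative, not the paper's code):

```python
def num_ltnrfs(link_mbps, udp_load_fraction, flow_rate_mbps=1.0):
    """Approximate number of constant-rate UDP flows (LTNRFs) needed to
    offer the given fraction of the link capacity."""
    return round(link_mbps * udp_load_fraction / flow_rate_mbps)

# Example: LTNRF counts on the 35Mb link for the three UDP load levels;
# counts for the 5Mb and 100Mb links scale with capacity in the same way.
for udp_share in (0.60, 0.55, 0.30):
    print(f"UDP load {udp_share:.0%}: ~{num_ltnrfs(35, udp_share)} LTNRFs at 1 Mbps")
```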
Performance Metrics • Realized TCP throughput • Average queuing delay • Link utilization • Standard deviation of queuing delay
Simulation Setup • Simulation topology: [Figure: dumbbell topology with TCP and CBR sources feeding router R1, bottleneck link R1-R2 (RED/DT, Tp=50ms), and TCP and CBR sinks behind R2]
Link Characteristics • Capacities between R1 and R2: 5Mb, 35Mb, 100Mb • Total round-trip propagation delay: 120ms • Queue management schemes deployed between R1 and R2: RED / RED-ECN / Droptail
Agenda • Motivation • Performance Evaluation • Simulation Setup • Results & Analysis • Discussion
Sets of Simulations • Changing buffer sizes • Changing link capacities • Changing STF loads
Set 1: Changing Buffer Sizes • Correlation between average queuing delay & BWDP • [Figure: DropTail vs. RED/RED-ECN]
Realized TCP Throughput • 30% STF load • Changing buffer size from 1/3 to 3 BWDPs • [Figure: 5Mb link and 100Mb link]
Realized TCP Throughput (cont’d) • TCP throughput higher with DropTail • Difference decreases with larger buffer sizes • Avg. Qdelay from REDs much smaller than that from Droptail • RED-ECN marginally improves throughput over RED
Link Utilization • 30% STF load • Droptail has higher utilization with smaller buffers • Difference decreases with larger buffers
Std. Dev. of Queuing Delay • 30% STF + 30% ON/OFF LTNRF load • [Figure: 5Mb link and 100Mb link]
Std. Dev. of Queuing Delay (cont’d) • Droptail has comparable deviation at 5Mb link capacity • REDs have less deviation at higher buffer sizes and higher bandwidths • REDs are more suitable for jitter-sensitive applications
Set 2: Changing Link Capacities • 30% STF load • Relative Avg Queuing Delay = Avg Queuing Delay / RT Propagation Delay • [Figure: ECN disabled and ECN enabled]
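For clarity, a one-line sketch of the metric defined above, assuming the 120 ms round-trip propagation delay of the simulated topology (the function name is illustrative):

```python
def relative_avg_queuing_delay(avg_queuing_delay_s, rt_prop_delay_s=0.120):
    """Average queuing delay normalized by the round-trip propagation
    delay (120 ms in this setup)."""
    return avg_queuing_delay_s / rt_prop_delay_s
```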
Relative Avg Queuing Delay • Droptail has Relative Avg Queuing Delay close to the buffer size (x * BWDP) • REDs have significantly smaller Avg Queuing Delay (~1/3 of DropTail) • Changing link capacities has almost no impact
Drop/Marking Rate • 30% STF load, 1 BWDP • Format 1: Drop Rate • Format 2: Drop Rate / Marking Rate
Set 3: Changing STF Loads • 1 BWDP • Normalized TCP throughput = TCP throughput / (UDP + TCP) throughput • [Figure: ECN disabled and ECN enabled]
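Similarly, a one-line sketch of the normalized throughput metric used in Set 3, computed from the realized TCP and UDP throughputs on the bottleneck link (the function name is illustrative):

```python
def normalized_tcp_throughput(tcp_throughput_bps, udp_throughput_bps):
    """TCP throughput as a fraction of the total realized (TCP + UDP)
    throughput."""
    return tcp_throughput_bps / (tcp_throughput_bps + udp_throughput_bps)
```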
Comparison of Throughputs • STF throughputs are almost constant across the 3 queue management schemes • The difference in TCP throughputs decreases as the STF load increases
Agenda • Motivation • Performance Evaluation • Simulation Setup • Results & Analysis • Discussion
Discussion • Performance metrics of REDs are comparable to or better than DT in the presence of STF load and in high-BWDP cases • Marginal improvement in long-term TCP throughput from RED-ECN with TCP-Sack compared to RED
Discussion (cont’d) • Minor impact on Avg Queuing Delay or TCP throughput from changing either link capacities or STF loads • In the presence of STFs:
Thank you • June 2004
Related Work • S. Floyd et al., “Internet needs better models” • C. Diot et al., “Aggregated Traffic Performance with Active Queue Management and Drop from Tail” & “Reasons not to deploy RED” • K. Jeffay et al., “Tuning RED for Web Traffic”