This lecture explores the nuances of high-speed TCP connections, covering sequence-number wraparound, keeping the transmission pipeline full, and congestion control mechanisms. Topics include estimating round-trip time, fairness of congestion control, and Internet resource allocation with QoS considerations. Algorithms such as Karn/Partridge and Jacobson/Karels are discussed as enhancements to adaptive retransmission. The lecture also covers delay and throughput trade-offs, buffer acceptance algorithms such as Tail Drop and RED, and Explicit Congestion Notification (ECN) for optimizing network resources. Key aspects such as bandwidth allocation fairness and flow measurements for quality of service (QoS) in datagram networks are highlighted.
Lecture 14 • High-speed TCP connections • Wraparound • Keeping the pipeline full • Estimating RTT • Fairness of TCP congestion control • Internet resource allocation and QoS
Protection against wraparound • What is wraparound: a byte with sequence number x may be sent at one time, and later on the same connection another byte with the same sequence number x may be sent. • Wraparound is controlled by the 32-bit SequenceNum field. • The maximum lifetime of an IP datagram is 120 sec, so the wraparound time must be at least 120 sec. • For slow links this is fine, but it is no longer sufficient for optical networks. • Time until wraparound: T1 (1.5 Mbps): 6.4 hours; Ethernet (10 Mbps): 57 minutes; T3 (45 Mbps): 13 minutes; FDDI (100 Mbps): 6 minutes; STS-3 (155 Mbps): 4 minutes; STS-12 (622 Mbps): 55 seconds; STS-24 (1.2 Gbps): 28 seconds.
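The wraparound times above follow directly from dividing the 32-bit sequence space (in bits) by the link rate; a minimal sketch, with link names and rates taken from the slide:

```python
# Sketch: time until the 32-bit sequence number space wraps, for the
# link speeds listed on the slide.
SEQ_SPACE_BITS = 2**32 * 8  # 2^32 bytes of sequence space, expressed in bits

links_bps = {
    "T1": 1.5e6,
    "Ethernet": 10e6,
    "T3": 45e6,
    "FDDI": 100e6,
    "STS-3": 155e6,
    "STS-12": 622e6,
    "STS-24": 1.2e9,
}

for name, rate in links_bps.items():
    seconds = SEQ_SPACE_BITS / rate
    print(f"{name}: wraps after {seconds:.0f} s (~{seconds / 60:.1f} min)")
```

At T1 speed this gives about 6.4 hours, comfortably above the 120 sec datagram lifetime; at 1.2 Gbps it is under half a minute, which is why the sequence space alone no longer protects high-speed links.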
Keeping the pipe full • The sequence number space (32-bit SequenceNum) should be at least twice as large as the window size (16 bits). It is. • The window size (the number of bytes in transit) is given by the AdvertisedWindow field (16 bits). • The higher the bandwidth, the larger the window size needed to keep the pipe full. • Essentially we regard the network as a storage system, and the amount of data it holds equals bandwidth x delay.
Required window size for a 100 msec RTT (delay x bandwidth product): T1 (1.5 Mbps): 18 KB; Ethernet (10 Mbps): 122 KB; T3 (45 Mbps): 549 KB; FDDI (100 Mbps): 1.2 MB; STS-3 (155 Mbps): 1.8 MB; STS-12 (622 Mbps): 7.4 MB; STS-24 (1.2 Gbps): 14.8 MB.
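The window sizes above are just bandwidth times RTT converted to bytes; a minimal sketch, with link names and rates taken from the slide:

```python
# Sketch: delay x bandwidth product for a 100 ms RTT, reproducing the
# window sizes on the slide.
RTT = 0.100  # seconds

bdp_kb = {}  # required window size in KB, per link type
for name, rate_bps in [("T1", 1.5e6), ("Ethernet", 10e6), ("T3", 45e6),
                       ("FDDI", 100e6), ("STS-3", 155e6),
                       ("STS-12", 622e6), ("STS-24", 1.2e9)]:
    bdp_kb[name] = rate_bps * RTT / 8 / 1024  # bits/s * s -> bytes -> KB
    print(f"{name}: {bdp_kb[name]:.0f} KB")
```

Note that already at FDDI speed the product exceeds what a 16-bit AdvertisedWindow (64 KB) can express, which motivates window scaling on high-speed paths.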
Original Algorithm for Adaptive Retransmission • Measure SampleRTT for each segment/ACK pair • Compute a weighted average of the RTT: EstimatedRTT = a x EstimatedRTT + (1 - a) x SampleRTT, where 0.8 < a < 0.9 • Set the timeout based on EstimatedRTT: TimeOut = 2 x EstimatedRTT
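The weighted-average update above can be sketched as follows; a = 0.85 is an illustrative choice inside the slide's 0.8-0.9 range, and the initial estimate and samples are assumed values:

```python
# Sketch of the original adaptive-retransmission estimator.
def update_rtt(estimated_rtt, sample_rtt, alpha=0.85):
    """One update: weighted moving average of the RTT; timeout is twice it."""
    estimated_rtt = alpha * estimated_rtt + (1 - alpha) * sample_rtt
    timeout = 2 * estimated_rtt
    return estimated_rtt, timeout

est = 100.0  # ms, assumed initial estimate
for sample in (110.0, 120.0, 90.0):
    est, timeout = update_rtt(est, sample)
```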
Karn/Partridge Algorithm • Do not sample RTT when re-transmitting • Double timeout after each retransmission
Jacobson/Karels Algorithm • New calculation for the average RTT: Diff = SampleRTT - EstimatedRTT; EstimatedRTT = EstimatedRTT + (d x Diff); Deviation = Deviation + d x (|Diff| - Deviation), where d is a fraction between 0 and 1 • Consider the variance when setting the timeout value: TimeOut = m x EstimatedRTT + f x Deviation, where m = 1 and f = 4 • Notes • the algorithm is only as good as the granularity of the clock (500 ms on Unix) • an accurate timeout mechanism is important to congestion control (discussed later)
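The three update rules above can be sketched as follows; d = 0.125 is an assumed fraction (the slide only says 0 < d < 1), while m = 1 and f = 4 are taken from the slide:

```python
# Sketch of one Jacobson/Karels update; D is an assumed fraction,
# M and F are the slide's m = 1 and f = 4.
D, M, F = 0.125, 1, 4

def jk_update(estimated_rtt, deviation, sample_rtt):
    """Return the new RTT estimate, mean deviation, and timeout."""
    diff = sample_rtt - estimated_rtt
    estimated_rtt += D * diff
    deviation += D * (abs(diff) - deviation)
    timeout = M * estimated_rtt + F * deviation
    return estimated_rtt, deviation, timeout
```

Weighting the deviation into the timeout makes the algorithm conservative when RTT samples are noisy, instead of relying on a fixed multiple of the mean.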
Congestion Control Mechanisms • The sender must perform retransmissions to compensate for packets lost due to buffer overflow. • Unneeded retransmissions by the sender, triggered by large delays, cause routers to use link bandwidth to forward unneeded copies of a packet. • When a packet is dropped along a path, the capacity used at each upstream router to forward the packet to the point where it was dropped is wasted.
Flows and resource allocation • Flow: a sequence of packets with a common characteristic. • In a layer-N flow, the common attribute is a layer-N attribute. • All packets exchanged between two hosts form a network-layer flow. • All packets exchanged between two processes form a transport-layer flow.
Max-min fair bandwidth allocation • Goal: fairness in a best-effort network. • Consider: • unidirectional flows • routers with infinite buffer space • link capacity as the only limiting factor.
Algorithm • Start with an allocation of zero Mbps for each flow. • Increment equally the allocation for each flow until one of the links of the network becomes saturated. Now all the flows passing through the saturated link get an equal fraction of the link capacity. • Increment equally the allocation for each flow that does not pass through the first saturated link until a second link becomes saturated. Now all the flows passing through the saturated link get an equal fraction of the link capacity. • Continue by incrementing equally the allocations of all flows that do not use a saturated link until all flows use at least one saturated link.
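The progressive-filling steps above can be sketched as follows; the function name and the toy topology in the usage note are invented for illustration:

```python
# Sketch of max-min fair allocation by progressive filling: raise all
# active allocations equally until a link saturates, freeze the flows
# crossing it, and repeat until every flow is bottlenecked somewhere.
def max_min_fair(capacities, flow_links):
    """capacities: link -> Mbps; flow_links: flow -> set of links it uses."""
    alloc = {f: 0.0 for f in flow_links}
    active = set(flow_links)        # flows not yet bottlenecked
    cap = dict(capacities)          # remaining capacity per unsaturated link
    while active:
        # Smallest equal increment that saturates some remaining link.
        incr = min(cap[l] / sum(1 for f in active if l in flow_links[f])
                   for l in cap
                   if any(l in flow_links[f] for f in active))
        saturated = set()
        for l in list(cap):
            users = [f for f in active if l in flow_links[f]]
            if users:
                cap[l] -= incr * len(users)
                if cap[l] <= 1e-9:
                    saturated.add(l)
        for f in list(active):
            alloc[f] += incr
            if flow_links[f] & saturated:
                active.discard(f)   # flow now crosses a saturated link
        for l in saturated:
            del cap[l]
    return alloc
```

For example, with link L1 (10 Mbps, carrying f1 and f2) and link L2 (4 Mbps, carrying f2 and f3), L2 saturates first, giving f2 and f3 2 Mbps each; f1 then takes the remaining 8 Mbps of L1.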
QoS in a datagram network? • Buffer acceptance algorithms. • Explicit Congestion Notification. • Packet Classification. • Flow measurements
Buffer acceptance algorithms • Tail Drop. • RED – Random Early Detection • RIO – Random Early Detection with In and Out packet dropping strategies.
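As a rough sketch of the RED idea (not the full algorithm, which also maintains an exponentially weighted average of the queue length and adjusts the probability with a packet count): never drop below a minimum threshold, always drop above a maximum threshold, and drop with linearly increasing probability in between. All threshold values here are assumptions:

```python
# Minimal RED drop decision; MIN_TH, MAX_TH, MAX_P are illustrative.
import random

MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1  # packets, packets, max drop probability

def red_drop(avg_queue_len):
    """Return True if the arriving packet should be dropped."""
    if avg_queue_len < MIN_TH:
        return False                # queue short: always accept
    if avg_queue_len >= MAX_TH:
        return True                 # queue long: always drop
    # In between: probability grows linearly from 0 to MAX_P.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

Dropping (or, with ECN, marking) a few packets early signals TCP senders to slow down before the buffer actually overflows, which is what distinguishes RED from Tail Drop.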
Explicit Congestion Notification (ECN) • The TCP congestion control mechanism discussed earlier has a major flaw: it detects congestion only after routers have already started dropping packets. Network resources are wasted because packets are dropped at some point along their path, after using link bandwidth as well as router buffers and CPU cycles up to the point where they are discarded. • The question that comes to mind is: could routers prevent congestion by informing the source of the packets when they become lightly congested, but before they start dropping packets? One such strategy is called source quench.
Source quench • Send explicit notifications to the source, e.g., using ICMP. Yet sending more packets into a network that shows signs of congestion may not be the best idea. • Alternatively (the ECN approach), set a congestion notification flag in the IP header to inform the destination; then have the destination inform the source by setting a flag in the TCP header of segments carrying acknowledgments.
Problems with ECN (1) TCP must be modified to support the new flag. (2) Routers must be modified to distinguish between ECN-capable flows and those that do not support ECN. (3) IP must be modified to support the congestion notification flag. (4) TCP should allow the sender to confirm the congestion notification to the receiver, because acknowledgments could be lost.
Maximum and minimum bandwidth guarantees • A. Packet classification. • Identify the flow a packet belongs to. • At what layer should this be done? Network layer? • At every router it is too expensive; the edge routers may be able to do it. • At the application layer? Difficult. • MPLS – Multiprotocol Label Switching: add an extra header in front of the IP header; a router then decides the output link based upon the input link and the MPLS header.
Maximum and minimum bandwidth guarantees • B. Flow measurements • How to choose the measurement interval to accommodate bursty traffic? • Token bucket
The token bucket filter • Characterized by: (1) a token rate R, and (2) the depth of the bucket, B. • Basic idea: the sender is allocated tokens at a given rate and can accumulate tokens in the bucket until the bucket is filled. To send a byte the sender must have a token. The maximum burst can be of size B because at most B tokens can be accumulated.
Example • Flow A: generates data at a constant rate of 1 Mbps. Its filter will support a rate of 1 Mbps and a bucket depth of 1 byte. • Flow B: alternates between 0.5 and 2.0 Mbps. Its filter will support a rate of 1 Mbps and a bucket depth of 1 Mb (the depth is an amount of data, not a rate). • Note: a single flow can be described by many token buckets.
Token bucket L = packet length C = # of tokens in the bucket --------------------------------------------------- if ( L <= C ) { accept the packet; C = C - L; } else drop the packet;
A shaping buffer delays packets that do not conform to the traffic shape if ( L <= C ) { accept the packet; C = C - L;} else { /* the packet arrived early, delay it */ while ( C < L ) { wait; } transmit the packet; C = C - L;}
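Both policies above can be combined into one runnable sketch; the class name, rates, and depths are illustrative, and token refill is driven by an explicit clock argument to keep the example deterministic:

```python
# Sketch of a token bucket supporting both the drop (policing) policy
# and the shaping policy from the pseudocode above.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate        # tokens (bytes) added per second
        self.depth = depth      # maximum tokens the bucket can hold (B)
        self.tokens = depth     # start with a full bucket
        self.last = 0.0         # time of the last refill

    def _refill(self, now):
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last))
        self.last = now

    def police(self, length, now):
        """Drop policy: accept the packet only if enough tokens exist."""
        self._refill(now)
        if length <= self.tokens:
            self.tokens -= length
            return True         # accept
        return False            # drop

    def shape_delay(self, length, now):
        """Shaping policy: seconds the packet must wait before sending."""
        self._refill(now)
        if length <= self.tokens:
            self.tokens -= length
            return 0.0          # conforming packet: send immediately
        wait = (length - self.tokens) / self.rate
        self.tokens = 0.0       # after waiting, exactly `length` tokens spent
        self.last = now + wait
        return wait
```

With rate 100 bytes/s and depth 200 bytes, a 150-byte packet at t=0 is accepted, an immediate second 100-byte packet is dropped by policing, while the shaper would instead hold it for 0.5 s until enough tokens accumulate.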