Transport services and protocols
• provide logical communication between app processes running on different hosts
• transport protocols run in end systems
  • send side: breaks app messages into segments, passes to network layer
  • rcv side: reassembles segments into messages, passes to app layer
• more than one transport protocol available to apps
  • Internet: TCP and UDP
[Figure: protocol stacks (application / transport / network / data link / physical) at the two end systems, network-layer-only stacks at intermediate routers; logical end-end transport between the end systems]
Transport vs. network layer
• network layer: logical communication between hosts
• transport layer: logical communication between processes
  • relies on and enhances network layer services
• Household analogy: 12 kids sending letters to 12 kids
  • processes = kids
  • app messages = letters in envelopes
  • hosts = houses
  • transport protocol = Ann and Bill
  • network-layer protocol = postal service
Internet transport-layer protocols
• unreliable, unordered delivery: UDP
  • no-frills extension of "best-effort" IP
• reliable, in-order delivery: TCP
  • congestion control
  • flow control
  • connection setup
• services not available:
  • delay guarantees
  • bandwidth guarantees
[Figure: protocol stacks at the end systems and routers; logical end-end transport]
UDP: User Datagram Protocol [RFC 768]
• "no frills," "bare bones" Internet transport protocol
• "best effort" service; UDP segments may be:
  • lost
  • delivered out of order to app
• connectionless:
  • no handshaking between UDP sender, receiver
  • each UDP segment handled independently of others
• Why is there a UDP?
  • no connection establishment (which can add delay)
  • simple: no connection state at sender, receiver
  • small segment header
  • no congestion control: UDP can blast away as fast as desired
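To make the "no handshaking, no connection state" point concrete, here is a minimal sketch of a UDP sender and receiver using Python's standard socket API. The host, port, and message are illustrative placeholders, not part of the slides.

```python
import socket

HOST, PORT = "127.0.0.1", 9999   # illustrative address; any reachable host works

# --- receiver: bind and read datagrams; no accept(), no per-connection state ---
def run_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    data, addr = sock.recvfrom(2048)        # each datagram is handled independently
    print(f"got {len(data)} bytes from {addr}")

# --- sender: no connect()/handshake; the segment may be lost or reordered ---
def run_sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello over UDP", (HOST, PORT))   # fire and forget
```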
Reliable Data Transfer
• Reliable data transfer over a reliable channel
• over a channel with errors
  • Checksum + NACK or ACK
• over a channel with errors and loss
  • Checksum + ACK + sequence no. + timeout
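The last case above (checksum + ACK + sequence number + timeout) is a stop-and-wait, alternating-bit protocol. The sketch below is a simplified sender over an assumed unreliable channel object; the channel interface and CRC framing are illustrative choices, not part of the slides.

```python
import zlib

def make_pkt(seq, payload: bytes) -> bytes:
    """Prepend a 1-byte sequence number and a 4-byte CRC-32 checksum."""
    body = bytes([seq]) + payload
    return zlib.crc32(body).to_bytes(4, "big") + body

def is_corrupt(pkt: bytes) -> bool:
    return zlib.crc32(pkt[4:]) != int.from_bytes(pkt[:4], "big")

class StopAndWaitSender:
    """Alternating-bit sender: checksum + ACK + sequence no. + timeout."""
    def __init__(self, channel, timeout=1.0):
        self.channel = channel          # assumed object with send() and recv(timeout)
        self.timeout = timeout
        self.seq = 0

    def send(self, payload: bytes):
        pkt = make_pkt(self.seq, payload)
        while True:
            self.channel.send(pkt)                   # (re)transmit current packet
            ack = self.channel.recv(self.timeout)    # None on timeout
            if ack is None or is_corrupt(ack) or ack[4] != self.seq:
                continue                             # timeout or bad/old ACK: retransmit
            self.seq ^= 1                            # flip sequence number for next packet
            return
```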
TCP: Overview (RFCs 793, 1122, 1323, 2018, 2581)
• point-to-point: one sender, one receiver
• reliable, in-order byte stream: no "message boundaries"
• pipelined: TCP congestion and flow control set window size
• send & receive buffers
• full duplex data:
  • bi-directional data flow in same connection
  • MSS: maximum segment size
• connection-oriented:
  • handshaking (exchange of control msgs) init's sender, receiver state before data exchange
• flow controlled:
  • sender will not overwhelm receiver
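A short sketch of what "connection-oriented byte stream" means at the application level, using Python's socket API; the address and payloads are illustrative.

```python
import socket

HOST, PORT = "127.0.0.1", 8888   # illustrative address

# connect() performs the handshake and initializes sender/receiver
# state before any application data is exchanged.
with socket.create_connection((HOST, PORT)) as conn:
    conn.sendall(b"request line 1\n")
    conn.sendall(b"request line 2\n")
    # TCP delivers a byte stream: the two sendall() calls above may arrive
    # at the peer as one read, or split across several reads;
    # there are no message boundaries.
    reply = conn.recv(4096)
```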
TCP Flow Control
• flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
• receive side of TCP connection has a receive buffer
  • app process may be slow at reading from buffer
• speed-matching service: matching the sending rate to the receiving app's drain rate
• Rcvr advertises spare room by including value of RcvWindow in segments
• Sender limits unACKed data to RcvWindow
  • guarantees receive buffer doesn't overflow
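A small sketch of the bookkeeping implied above. The variable names follow the slide (RcvWindow, LastByteSent, LastByteAcked); the helper functions themselves are illustrative.

```python
# Receiver side: advertise the spare room left in the receive buffer.
def advertised_rcv_window(rcv_buffer_size, last_byte_rcvd, last_byte_read):
    return rcv_buffer_size - (last_byte_rcvd - last_byte_read)

# Sender side: never have more unACKed bytes in flight than RcvWindow allows.
def bytes_sender_may_send(last_byte_sent, last_byte_acked, rcv_window):
    in_flight = last_byte_sent - last_byte_acked
    return max(0, rcv_window - in_flight)

# Example: 64 KB buffer, 20 KB received but only 5 KB read by the app
# -> receiver advertises 64 - 15 = 49 KB of spare room.
```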
Principles of Congestion Control
• Congestion, informally: "too many sources sending too much data too fast for network to handle"
• different from flow control
• manifestations:
  • lost packets (buffer overflow at routers)
  • long delays (queueing in router buffers)
• a top-10 problem!
Approaches towards congestion control
Two broad approaches:
• End-end congestion control:
  • no explicit feedback from network
  • congestion inferred from end-system observed loss, delay
  • approach taken by TCP
• Network-assisted congestion control:
  • routers provide feedback to end systems
  • single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  • explicit rate sender should send at (XCP)
TCP congestion control: additive increase, multiplicative decrease
• Approach: increase transmission rate (window size), probing for usable bandwidth, until loss occurs
  • additive increase: increase CongWin by 1 MSS every RTT until loss detected
  • multiplicative decrease: cut CongWin in half after loss
• Sawtooth behavior: probing for bandwidth
[Figure: congestion window size vs. time, showing the sawtooth pattern]
TCP Congestion Control: details
• sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin
• Roughly, rate ≈ CongWin / RTT bytes/sec
• Both CongWin and RTT are time-varying
• How does sender perceive congestion?
  • loss event = timeout or 3 duplicate acks
  • TCP sender reduces rate (CongWin) after loss event
• three mechanisms:
  • slow start
  • AIMD
  • conservative after timeout events
Summary: TCP Congestion Control • When CongWin is below Threshold, sender in slow-start phase, window grows exponentially. • When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly. • When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold. • When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
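A compact sketch of the window update rules summarized above. It models only CongWin and Threshold (ssthresh) in MSS units and ignores pacing, RTT estimation, and fast-recovery details; the class and method names are illustrative.

```python
class TcpCongestionState:
    """Toy model of the CongWin / Threshold rules from the summary slide (1 unit = 1 MSS)."""
    def __init__(self, initial_threshold=64):
        self.cong_win = 1                # start with one MSS
        self.threshold = initial_threshold

    def on_ack_per_rtt(self):
        if self.cong_win < self.threshold:
            self.cong_win *= 2           # slow start: exponential growth per RTT
        else:
            self.cong_win += 1           # congestion avoidance: additive increase

    def on_triple_dupack(self):
        self.threshold = max(self.cong_win // 2, 1)
        self.cong_win = self.threshold   # multiplicative decrease, stay out of slow start

    def on_timeout(self):
        self.threshold = max(self.cong_win // 2, 1)
        self.cong_win = 1                # back to 1 MSS, re-enter slow start

# Rough rate from the previous slide: rate ≈ CongWin * MSS / RTT bytes/sec,
# e.g. CongWin = 10 segments, MSS = 1460 B, RTT = 100 ms  ->  about 146 KB/s.
```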
TCP in Wireless Networks • Transmission errors • Random errors • Burst errors • Mobility • Infrastructure wireless networks • Wireless ad hoc networks
Impacts of Random Errors • Random errors may cause fast retransmit • Fast retransmit results in • retransmission of lost packet • reduction in congestion window • Reducing congestion window in response to errors is unnecessary and reduces the throughput • Random errors may cause timeout • Multiple packet losses in a window can result in timeout when using TCP-Reno (and to a lesser extent when using SACK)
Burst Errors May Cause Timeouts
• If wireless link remains unavailable for an extended duration, a window's worth of data may be lost
  • passing a truck
  • driving through a tunnel
• Timeout results in
  • possibly long idle time
  • slow start: congestion window reduced to 1 MSS, ssthresh cut to half the previous window
• Reduction in window and ssthresh in response to errors is unnecessary
Various Schemes • Link level mechanisms • Split connection approach • TCP-Aware link layer • TCP-Unaware approximation of TCP-aware link layer • Explicit notification • Receiver-based discrimination • Sender-based discrimination
Link Level Schemes
[Figure: protocol stacks at the fixed host, base station, and mobile host; link-layer retransmission (rxmt) state kept at the node adjacent to the wireless link; a single end-to-end TCP connection on top]
Link Layer Schemes (Cont.) Ideas • Recover wireless losses using FEC code, retransmission, and/or adapting frame size Characteristics • Hide wireless losses from TCP sender • Link layer modifications needed at both ends of wireless link • TCP need not be modified When is a reliable link layer beneficial to TCP performance? • If it provides almost in-order delivery and • TCP retransmission timeout large enough to tolerate additional delays due to link level retransmits
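A minimal sketch of the local retransmission (ARQ) loop such link-level schemes rely on. The frame/ack interface of the `link` object, the retry count, and the timeout value are assumptions for illustration.

```python
MAX_LINK_RETRIES = 4        # illustrative: a few local retries hide most wireless losses
LINK_ACK_TIMEOUT = 0.05     # chosen to stay well below the TCP retransmission timeout

def link_send_reliable(link, frame: bytes) -> bool:
    """Retransmit locally over the wireless hop until acked or retries are exhausted.

    `link` is an assumed object with send_frame() and wait_ack(timeout).
    TCP above is unaware of these retries; it only sees some extra delay.
    """
    for _ in range(MAX_LINK_RETRIES + 1):
        link.send_frame(frame)
        if link.wait_ack(LINK_ACK_TIMEOUT):
            return True
    return False            # give up; the loss becomes visible to TCP
```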
Split Connection Approach
[Figure: protocol stacks at the fixed host, base station, and mobile host; per-TCP-connection state and transport-layer retransmission (rxmt) at the base station; two separate TCP connections, one on the wired side and one on the wireless side]
Split Connection Approach (Cont.) • Idea • End-to-end TCP connection is broken into one connection on the wired part of route and one over wireless part of the route • Characteristics • Hides transmission errors from sender • Primary responsibility at base station • If specialized transport protocol used on wireless, then wireless host also needs modification • Pros and cons?
Split Connection Approach : Advantages
• Local recovery of errors
  • Faster recovery due to relatively shorter RTT on wireless link
• BS-MH connection can be optimized independent of FH-BS connection
  • Different flow / error control on the two connections
• Good performance achievable using appropriate BS-MH protocol
  • Standard TCP on BS-MH performs poorly when multiple packet losses occur per window (timeouts can occur on the BS-MH connection, stalling during the timeout interval)
  • Selective acks improve performance for such cases
Split Connection Approach : Disadvantages • End-to-end semantics violated • ack may be delivered to sender before data delivered to the receiver • May not be a problem for applications that do not rely on TCP for the end-to-end semantics • May not be useful if data and acks traverse different paths (both do not go through the base station) • Extra copy and storage required at base station
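A minimal sketch of a base-station proxy that terminates the FH-BS connection and opens a separate BS-MH connection, relaying bytes between them. The addresses, port, and threading choice are illustrative; a real implementation would also manage buffering limits and connection teardown.

```python
import socket
import threading

MH_ADDR = ("10.0.0.2", 7000)      # illustrative mobile-host address
LISTEN_ADDR = ("0.0.0.0", 7000)   # base station listens for the fixed host here

def relay(src: socket.socket, dst: socket.socket):
    """Copy bytes from one TCP connection to the other (one direction)."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)        # note: ACKs on the wired side are generated here,
                                  # possibly before the data reaches the mobile host

def base_station_proxy():
    srv = socket.socket()
    srv.bind(LISTEN_ADDR)
    srv.listen(1)
    wired, _ = srv.accept()                        # FH <-> BS connection
    wireless = socket.create_connection(MH_ADDR)   # BS <-> MH connection
    threading.Thread(target=relay, args=(wired, wireless), daemon=True).start()
    relay(wireless, wired)
```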
Snoop Protocol [Balakrishnan95acm] • Retains local recovery of Split Connection approach and link level retransmission schemes • Improves on split connection • end-to-end semantics retained • soft state at base station, instead of hard state
Snoop Protocol
[Figure: protocol stacks at FH, BS, and MH; per-TCP-connection soft state and retransmission (rxmt) kept at the link layer of the BS for the wireless hop; a single end-to-end TCP connection from FH to MH]
Snoop Protocol
• Buffers data packets at the base station BS
  • to allow link layer retransmission
• When dupacks received by BS from MH, retransmit on wireless link if packet present in buffer
• Prevents fast retransmit at TCP sender FH by dropping the dupacks at BS
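A minimal sketch of the per-connection Snoop logic at the BS described above. The packet representation and the link-send hook are assumptions; real Snoop also maintains local retransmission timers and smarter cache cleanup.

```python
class SnoopAgent:
    """Per-TCP-connection soft state kept at the base station (simplified sketch)."""
    def __init__(self, wireless_link):
        self.link = wireless_link      # assumed object with a send(pkt) method
        self.cache = {}                # seq -> unacked data packet heading to the MH
        self.last_ack = -1
        self.dupacks = 0

    def on_data_from_fh(self, seq, pkt):
        self.cache[seq] = pkt          # buffer for possible local retransmission
        self.link.send(pkt)

    def on_ack_from_mh(self, ack):
        if ack > self.last_ack:        # new ack: clear acked packets, forward to FH
            self.last_ack, self.dupacks = ack, 0
            self.cache = {s: p for s, p in self.cache.items() if s > ack}
            return "forward"
        self.dupacks += 1              # duplicate ack: retransmit locally ...
        if self.last_ack + 1 in self.cache:
            self.link.send(self.cache[self.last_ack + 1])
        return "drop"                  # ... and suppress it so FH never fast-retransmits
```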
Snoop : Example
[Animation: FH sends segments (roughly 34-48) toward the MH via the BS, with TCP state maintained at the link layer of the BS; one segment (37) is lost on the wireless link. The example assumes delayed acks (every other packet ack'd); duplicate acks are not delayed. The dupacks for 36 arriving at the BS trigger retransmission of packet 37 from the base station and are then discarded, so the TCP sender does not fast retransmit. The BS needs to be TCP-aware to be able to interpret TCP headers.]
Snoop Protocol • Snoop prevents fast retransmit from sender despite transmission errors and out-of-order delivery on the wireless link • If wireless link level delay-bandwidth product is less than 4 packets, a simple (TCP-unaware) link level retransmission scheme can suffice • Since delay-bandwidth product is small, the retransmission scheme can deliver the lost packet without resulting in 3 dupacks from the TCP receiver
Snoop Protocol Characteristics • Hides wireless losses from the sender • Requires modification to only BS (network-centric approach) • Pros and cons?
Snoop Protocol : Advantages • High throughput can be achieved • performance further improved using selective acks • Local recovery from wireless losses • Fast retransmit not triggered at sender despite out-of-order link layer delivery • End-to-end semantics retained • Soft state at base station • loss of the soft state affects performance, but not correctness
Snoop Protocol : Disadvantages • Link layer at base station needs to be TCP-aware • Not useful if TCP headers are encrypted (IPsec) • Cannot be used if TCP data and TCP acks traverse different paths (both do not go through the base station)
Delayed Dupacks Protocol [Mehta98,Vaidya99] • Attempts to imitate Snoop without making the base station TCP-aware • Snoop implements two features at the base station • link layer retransmission • reducing interference between TCP and link layer retransmissions (by dropping dupacks) • Delayed Dupacks implements the same two features • at BS : link layer retransmission • at MH : reducing interference between TCP and link layer retransmissions (by delaying third and subsequent dupacks)
Delayed Dupacks Protocol
• TCP receiver delays dupacks (third and subsequent) for interval D, when out-of-order packets received
  • Dupack delay intended to give link level time to retransmit
• Pros
  • Delayed dupacks can result in recovery from a transmission loss without triggering a response from the TCP sender
• Cons
  • Recovery from congestion losses delayed
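A minimal sketch of the receiver-side rule above. The value of D, the ack-sending callback, and the timer handling are illustrative; a real receiver would track multiple pending dupacks and the usual cumulative-ack state.

```python
import threading

D = 0.1   # illustrative delay interval (seconds), chosen to cover a link-level retransmit

class DelayedDupackReceiver:
    """Receiver-side rule: delay the 3rd and later dupacks for interval D (sketch)."""
    def __init__(self, send_ack):
        self.send_ack = send_ack      # assumed callback that emits an ACK on the wire
        self.dupack_count = 0
        self.pending = None

    def on_out_of_order_segment(self, ack_no):
        self.dupack_count += 1
        if self.dupack_count < 3:
            self.send_ack(ack_no)     # first two dupacks go out immediately
        else:
            # hold the dupack; if the link layer repairs the loss within D,
            # an in-order segment will arrive and cancel it
            self.pending = threading.Timer(D, self.send_ack, args=(ack_no,))
            self.pending.start()

    def on_in_order_segment(self, ack_no):
        if self.pending is not None:
            self.pending.cancel()     # loss repaired locally; suppress delayed dupack
            self.pending = None
        self.dupack_count = 0
        self.send_ack(ack_no)
```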
Various Schemes
• Link-layer retransmissions
• Split connection approach
• TCP-aware link layer
• TCP-unaware approximation of TCP-aware link layer
• Explicit notification
  • ELN: base station tags a dupack with ELN if the loss is wireless-related
  • ECN: router tags a packet if it experiences congestion
• Receiver-based discrimination (see the sketch after this list)
  • Receiver attempts to guess the cause of a packet loss
  • When the receiver believes a packet loss is due to errors, it sends a notification to the TCP sender
  • TCP sender, on receiving the notification, retransmits the lost packet without reducing its congestion window
• Sender-based discrimination
  • Sender can attempt to determine the cause of a packet loss
  • If the loss is determined to be due to errors, do not reduce the congestion window
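A toy sketch of one possible receiver-based discrimination heuristic. The slides do not specify how the receiver guesses the cause of a loss; the inter-arrival-timing signal and the thresholds below are assumptions for illustration only.

```python
EXPECTED_GAP = 0.01   # illustrative expected inter-arrival time between segments (seconds)

def classify_loss(recent_arrival_times, hole_detected):
    """Guess 'error' vs. 'congestion' when a hole appears in the sequence space.

    Heuristic: if later segments keep arriving at roughly the expected spacing,
    queues are probably not building up, so the missing segment is more likely
    a wireless error loss than a congestion drop.
    """
    if not hole_detected or len(recent_arrival_times) < 4:
        return None
    t = recent_arrival_times[-4:]
    gaps = [t[i + 1] - t[i] for i in range(3)]
    if max(gaps) < 2 * EXPECTED_GAP:
        return "error"        # timing looks normal -> notify sender, no window reduction
    return "congestion"       # arrivals slowed down -> let TCP react normally
```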