
Lecture 6 Data Communication And Networks


Presentation Transcript


  1. Lecture 6 Data Communication And Networks Transport Layer References: Kurose & Ross – Computer Networking: A top-down approach Andrew Tanenbaum – Computer Networks www.google.com Abdullah Tayyab abdullah.tayyab@gmail.com

  2. TRANSPORT LAYER • The transport layer is in the core of the layered network architecture • Logical communication between application processes running on different hosts. • Implemented in the end systems but not in network routers. • transport protocols run in end systems • send side: breaks app messages into segments, passes to network layer • receive side: reassembles segments into messages, passes to app layer • more than one transport protocol available to applications • Internet: TCP and UDP

  3. A computer network can make more than one transport-layer protocol available to network applications • transport-layer protocols provide an application multiplexing/demultiplexing service.

  4. [Figure: end hosts with full five-layer stacks (application, transport, network, data link, physical) connected through routers that implement only the network, data link, and physical layers; the transport layer provides logical end-to-end transport between the end hosts]

  5. Relationship between Transport and Network Layers • A transport layer protocol provides logical communication between processes • The network layer protocol provides logical communication between hosts • - Difference between ports and sockets? • - Can one port serve many clients in client/server architecture?

  6. Household analogy: 12 kids sending letters to 12 kids • processes = kids • app messages = letters in envelopes • hosts = houses • transport protocol = Ann and Bill • network-layer protocol = postal service

  7. Internet transport-layer protocols • reliable, in-order delivery (TCP) • congestion control • flow control • connection setup • unreliable, unordered delivery: UDP • no-frills extension of “best-effort” IP • services not available: • delay guarantees • bandwidth guarantees

  8. Multiplexing/Demultiplexing • Demultiplexing at receiving host: delivering received segments to the correct socket • Multiplexing at sending host: gathering data from multiple sockets, enveloping data with headers (later used for demultiplexing) • [Figure: three hosts with processes P1–P4 bound to sockets; segments are multiplexed at the sending host and demultiplexed at the receiving host]

  9. How demultiplexing works • host receives IP datagrams • each datagram has source IP address, destination IP address • each datagram carries 1 transport-layer segment • each segment has source, destination port number (recall: well-known port numbers for specific applications) • host uses IP addresses & port numbers to direct segment to appropriate socket • [Figure: TCP/UDP segment format – 32 bits wide: source port #, dest port #, other header fields, application data (message)]

  10. Connectionless demultiplexing • Create sockets with port numbers: DatagramSocket mySocket1 = new DatagramSocket(9111); DatagramSocket mySocket2 = new DatagramSocket(9222); • UDP socket identified by two-tuple: (dest IP address, dest port number) • When host receives UDP segment: • checks destination port number in segment • directs UDP segment to socket with that port number • IP datagrams with different source IP addresses and/or source port numbers are directed to the same socket
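As a sketch of what these calls look like in practice (the port numbers are arbitrary examples and the printing logic is purely illustrative), the Java fragment below binds two UDP sockets and receives a datagram on the first one; any datagram addressed to that destination port is delivered to that socket, regardless of its source host or source port:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    // Minimal sketch of connectionless demultiplexing: a UDP socket is identified
    // only by (dest IP, dest port), so every datagram sent to port 9111 lands in
    // mySocket1 no matter which source host or source port it came from.
    public class UdpDemuxSketch {
        public static void main(String[] args) throws Exception {
            DatagramSocket mySocket1 = new DatagramSocket(9111);
            DatagramSocket mySocket2 = new DatagramSocket(9222);

            byte[] buf = new byte[2048];
            DatagramPacket pkt = new DatagramPacket(buf, buf.length);
            mySocket1.receive(pkt);   // blocks until a datagram arrives for port 9111
            System.out.println("from " + pkt.getAddress() + ":" + pkt.getPort()
                    + " -> " + new String(pkt.getData(), 0, pkt.getLength()));
            mySocket1.close();
            mySocket2.close();
        }
    }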

  11. Connectionless demux (cont) • DatagramSocket serverSocket = new DatagramSocket(6428); • the source port provides a “return address” • [Figure: client A (source port 9157) and client B (source port 5775) both send to server C on destination port 6428; the server replies by swapping source and destination ports (SP 6428, DP 9157 and SP 6428, DP 5775)]

  12. Connection-oriented demux • TCP socket identified by 4-tuple: • source IP address • source port number • dest IP address • dest port number • receiver host uses all four values to direct segment to appropriate socket • Server host may support many simultaneous TCP sockets: • each socket identified by its own 4-tuple • Web servers have different sockets for each connecting client • non-persistent HTTP will have different socket for each request
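As a sketch of that 4-tuple demultiplexing in Java (the port number and printed message are arbitrary examples, and a real server would hand each connection to a worker thread), note how accept() returns a new Socket per connecting client; the OS then steers arriving segments to the right socket by matching all four values:

    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal sketch of connection-oriented demultiplexing: one welcoming socket,
    // plus one connection socket per accepted client, each identified by its own
    // (source IP, source port, dest IP, dest port) 4-tuple.
    public class TcpDemuxSketch {
        public static void main(String[] args) throws Exception {
            ServerSocket welcomeSocket = new ServerSocket(6789);
            while (true) {
                Socket connectionSocket = welcomeSocket.accept();  // new socket per 4-tuple
                System.out.println("new connection from "
                        + connectionSocket.getInetAddress() + ":" + connectionSocket.getPort());
                // ... hand connectionSocket to a worker thread and keep accepting ...
            }
        }
    }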

  13. Connection-oriented demux (cont) • [Figure: clients A and B connect to web server C, all with destination port 80; the connections (S-IP A, SP 9157), (S-IP B, SP 9157) and (S-IP B, SP 5775) each map to a distinct server-side socket/process because each has a different 4-tuple]

  14. Connectionless Transport: UDP • “no frills,” “bare bones” Internet transport protocol • “best effort” service, UDP segments may be: • lost • delivered out of order to app • connectionless: • no handshaking between UDP sender, receiver • each UDP segment handled independently of others

  15. Why is there a UDP? • no connection establishment (which can add delay) • simple: no connection state at sender, receiver • small segment header • Unregulated send rate, no congestion control: UDP can blast away as fast as desired

  16. UDP Cont… • often used for streaming multimedia apps • loss tolerant • rate sensitive • other UDP uses • DNS • SNMP • reliable transfer over UDP: add reliability at application layer • application-specific error recovery! • (Bonus – Why does DNS use UDP?) • [Figure: UDP segment format – 32 bits wide: source port #, dest port #, length (in bytes of UDP segment, including header), checksum, application data (message)]

  17. Assignment – WHY does each of these applications use these protocols? Short answers ONLY. Due Wednesday 11:59 PM

  18. UDP checksum The UDP checksum provides for error detection (RFC 1071). Sender: • treat segment contents as a sequence of 16-bit words • checksum: addition (1’s complement sum) of segment contents • sender puts checksum value into UDP checksum field

  19. Receiver: • compute checksum of received segment • check if computed checksum equals checksum field value: • NO - error detected • YES - no error detected. But maybe errors nonetheless? More later

  20. Checksum Example • Note: when adding numbers, a carryout from the most significant bit needs to be added back into the result (wraparound) • Example: add two 16-bit integers • (Why do we need error checking at layer 4 by UDP when it is also done at layer 2 by hardware devices/switches?)

                 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
               + 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
               ----------------------------------
    wraparound 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1   (carryout added back in)
    sum          1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
    checksum     0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1
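The same arithmetic can be sketched in Java (the class and method names are made up for illustration, and a real UDP checksum also covers a pseudo-header with the IP addresses): 16-bit words are summed with end-around carry and the result is complemented.

    // Sketch of a 16-bit one's-complement (Internet-style) checksum as used by UDP.
    // Names are illustrative; a real implementation also includes the pseudo-header.
    public class ChecksumSketch {
        static int onesComplementChecksum(byte[] data) {
            int sum = 0;
            for (int i = 0; i < data.length; i += 2) {
                int hi = data[i] & 0xFF;
                int lo = (i + 1 < data.length) ? data[i + 1] & 0xFF : 0; // pad odd length
                sum += (hi << 8) | lo;
                if ((sum & 0x10000) != 0) {        // carry out of bit 16?
                    sum = (sum & 0xFFFF) + 1;      // wrap it around
                }
            }
            return (~sum) & 0xFFFF;                // one's complement of the sum
        }

        public static void main(String[] args) {
            byte[] words = { (byte) 0xE6, 0x66, (byte) 0xD5, 0x55 }; // the two words above
            System.out.printf("checksum = 0x%04X%n", onesComplementChecksum(words));
            // prints 0x4443, i.e. 0100 0100 0100 0011 as in the worked example
        }
    }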

  21. Connection-Oriented Transport: TCP • point-to-point: • one sender, one receiver • reliable, in-order byte stream: • no “message boundaries” • pipelined: • TCP congestion and flow control set window size • full duplex data: • bi-directional data flow in same connection • MSS: maximum segment size

  22. Cont.. • connection-oriented: • handshaking (exchange of control msgs) initializes sender and receiver state before data exchange • flow controlled: • sender will not overwhelm receiver

  23. TCP segment structure

  24. Sequence Numbers and Acknowledgment Numbers • Seq. no.: byte-stream “number” of the first byte in the segment’s data • ACK no.: seq # of the next byte expected from the other side • cumulative ACK (e.g., if bytes 0–535 have been received in order, the ACK carries 536) • Q: how does the receiver handle out-of-order segments? • A: the TCP spec doesn’t say – it’s up to the implementer

  25. Sequence and acknowledgment numbers for a simple Telnet application over TCP

  26. Reliable Data Transfer • Simplified sender, assuming: • one-way data transfer • no flow control or congestion control • Many popular application protocols – including FTP, SMTP, NNTP, HTTP and Telnet – rely on TCP’s reliable data transfer

  27. Retransmission due to a lost acknowledgment

  28. Segment is not retransmitted because its acknowledgment arrives before the timeout

  29. A cumulative acknowledgment avoids retransmission of first segment

  30. Without pipelining

  31. Problem with stop-and-wait • Instead of stop-and-wait, send back collective ACKs for packets in the same window. • example: 1 Gbps link, 15 ms prop. delay, 1 KB packet:

    T_transmit = L (packet length in bits) / R (transmission rate, bps) = 8000 bits / 10^9 bits/sec = 8 microsec

  • U_sender: utilization – fraction of time the sender is busy sending = (L/R) / (RTT + L/R) ≈ 0.008 / 30.008 ≈ 0.00027 • 1 KB pkt every 30 msec -> 33 kB/sec throughput over a 1 Gbps link • network protocol limits use of physical resources!
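The numbers above can be reproduced with a small Java calculation (a sketch only; the 30 ms figure approximates the round-trip time implied by the 15 ms one-way propagation delay on the slide, and the window of 3 anticipates the pipelining example that follows):

    // Small numeric sketch of the stop-and-wait utilization example above.
    // Link rate, packet size, and RTT match the slide; variable names are illustrative.
    public class StopAndWaitUtilization {
        public static void main(String[] args) {
            double R = 1e9;            // link rate: 1 Gbps
            double L = 8000;           // packet size: 1 KB = 8000 bits
            double rttSec = 0.030;     // ~30 ms round trip (2 x 15 ms propagation)

            double tTransmit = L / R;  // 8 microseconds
            double uStopAndWait = tTransmit / (rttSec + tTransmit);     // ~0.00027
            double uPipelined3 = 3 * tTransmit / (rttSec + tTransmit);  // 3 packets in flight

            System.out.printf("T_transmit = %.1f us%n", tTransmit * 1e6);
            System.out.printf("U (stop-and-wait) = %.5f%n", uStopAndWait);
            System.out.printf("U (window of 3)   = %.5f%n", uPipelined3);
        }
    }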

  32. Pipelining • Pipelining: sender allows multiple “in-flight”, yet-to-be-acknowledged pkts • range of sequence numbers must be increased • buffering at sender and/or receiver • two generic forms of pipelined protocols: Go-Back-N, Selective Repeat

  33. Pipelining: increased utilization • [Figure: sender/receiver timing diagram – first packet bit transmitted at t = 0, last bit at t = L/R; first packet bit arrives after one propagation delay; the ACK arrives and the next packet is sent at t = RTT + L/R; with 3 packets in flight, the ACKs for all three arrive within the same RTT] • Increase utilization by a factor of 3!

  34. Go-Back-N • Sender: k-bit seq # in pkt header • “window” of up to N consecutive unACKed pkts allowed • ACK(n): ACKs all pkts up to, including seq # n – “cumulative ACK” • may receive duplicate ACKs (see receiver) • timer for each in-flight pkt • timeout(n): retransmit pkt n and all higher seq # pkts in window

  35. GBN: receiver extended FSM • ACK-only: always send an ACK for a correctly received pkt with the highest in-order seq # • may generate duplicate ACKs • need only remember expectedseqnum • out-of-order pkt: discard (don’t buffer) -> no receiver buffering! • re-ACK pkt with highest in-order seq # • FSM (single Wait state): • initially: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum) • on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum): extract(rcvpkt, data); deliver_data(data); sndpkt = make_pkt(expectedseqnum, ACK, chksum); udt_send(sndpkt); expectedseqnum++ • default (anything else): udt_send(sndpkt)
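A minimal Java sketch of this receiver logic follows (the Packet record, its fields, and the helper methods are illustrative assumptions, not part of any real API): in-order packets are delivered and ACKed, and everything else is discarded while the last in-order ACK is repeated.

    // Sketch of the GBN receiver described above; types and names are invented.
    public class GbnReceiverSketch {
        record Packet(int seqNum, boolean corrupt, byte[] data) {
            static Packet makeAck(int seqNum) { return new Packet(seqNum, false, new byte[0]); }
        }

        private int expectedSeqNum = 1;
        private Packet lastAck = Packet.makeAck(0);   // ACK to re-send for out-of-order pkts

        // Called for every packet handed up by the unreliable channel; returns the ACK to send.
        Packet onReceive(Packet pkt) {
            if (!pkt.corrupt() && pkt.seqNum() == expectedSeqNum) {
                deliverData(pkt.data());              // in order: deliver to the application
                lastAck = Packet.makeAck(expectedSeqNum);
                expectedSeqNum++;
            }
            // corrupt or out of order: discard the packet, re-ACK highest in-order seq #
            return lastAck;
        }

        private void deliverData(byte[] data) { /* hand data up to the application layer */ }
    }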

  36. GBN in action

  37. Selective Repeat • receiver individually acknowledges all correctly received pkts • buffers pkts, as needed, for eventual in-order delivery to upper layer • sender only resends pkts for which an ACK has not been received • sender keeps a timer for each unACKed pkt • sender window: N consecutive seq #’s, again limits seq #s of sent, unACKed pkts
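The buffering behaviour of the Selective Repeat receiver can be sketched as follows (a simplified illustration with unbounded integer sequence numbers rather than a modulo seq # space; class and method names are assumptions): each correctly received in-window packet is buffered and ACKed individually, and any in-order run starting at the window base is delivered upward.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of Selective Repeat receiver buffering; names are illustrative.
    public class SrReceiverSketch {
        private final int windowSize;
        private int rcvBase = 0;                         // lowest seq # not yet delivered
        private final Map<Integer, byte[]> buffer = new HashMap<>();

        SrReceiverSketch(int windowSize) { this.windowSize = windowSize; }

        // Returns the seq # to ACK (every correctly received packet is ACKed individually).
        int onReceive(int seqNum, byte[] data) {
            if (seqNum >= rcvBase && seqNum < rcvBase + windowSize) {
                buffer.put(seqNum, data);                // buffer the in-window packet
                while (buffer.containsKey(rcvBase)) {    // deliver any in-order run
                    deliverData(buffer.remove(rcvBase));
                    rcvBase++;
                }
            }
            return seqNum;                               // individual (not cumulative) ACK
        }

        private void deliverData(byte[] data) { /* hand data up to the application layer */ }
    }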

  38. Selective repeat: sender, receiver windows

  39. Selective repeat in action

  40. Selective repeat: dilemma • Example: seq #’s 0, 1, 2, 3; window size = 3 • receiver sees no difference in the two scenarios! • incorrectly passes duplicate data as new in (a) • Q: what relationship between seq # space size and window size?

  41. Flow Control • Flow control is a speed matching service - matching the rate at which the sender is sending to the rate at which the receiving application is reading. • TCP provides flow control by having the sender maintain a variable called the receive window. • The receive window, denoted RcvWindow, is set to the amount of spare room in the buffer
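In the textbook’s notation the spare room is RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead); a tiny Java sketch of that bookkeeping (variable names follow the textbook, everything else is illustrative rather than an actual TCP implementation):

    // Sketch of the receive-window calculation described above.
    public class ReceiveWindowSketch {
        long rcvBuffer;        // total receive buffer size allocated to the connection
        long lastByteRcvd;     // last byte placed into the buffer from the network
        long lastByteRead;     // last byte the application has read out of the buffer

        // Spare room in the buffer, advertised to the sender as RcvWindow.
        long rcvWindow() {
            return rcvBuffer - (lastByteRcvd - lastByteRead);
        }
    }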

  42. TCP Round Trip Time and Timeout • how to set TCP timeout value? • longer than RTT • note: RTT will vary • too short: premature timeout • unnecessary retransmissions • too long: slow reaction to segment loss

  43. Cont.. • how to estimate RTT? • SampleRTT: measured time from segment transmission until ACK receipt • ignore retransmissions and cumulatively ACKed segments • SampleRTT will vary; we want the estimated RTT to be “smoother” • use several recent measurements, not just the current SampleRTT

  44. Cont.. • EstimatedRTT = (1 - x)*EstimatedRTT + x*SampleRTT • exponentially weighted moving average • influence of a given sample decreases exponentially fast • typical value of x: 0.1 • Setting the timeout: • EstimatedRTT plus a “safety margin” • large variation in EstimatedRTT -> larger safety margin • Timeout = EstimatedRTT + 4*Deviation • Deviation = (1 - x)*Deviation + x*|SampleRTT - EstimatedRTT|
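Putting those formulas together, a minimal Java sketch of the estimator and timeout rule might look like this (class and method names are invented for illustration; real TCP implementations differ in details such as initialization of the first sample):

    // Sketch of the EWMA RTT estimator and timeout rule above, with x = 0.1.
    public class RttEstimatorSketch {
        private static final double X = 0.1;     // EWMA weight from the slide

        private double estimatedRtt = 0.0;       // seconds
        private double deviation = 0.0;          // seconds

        // Feed in one SampleRTT (time from segment transmission until ACK receipt).
        void onSample(double sampleRtt) {
            if (estimatedRtt == 0.0) {            // first sample initializes the estimate
                estimatedRtt = sampleRtt;
            } else {
                estimatedRtt = (1 - X) * estimatedRtt + X * sampleRtt;
            }
            deviation = (1 - X) * deviation + X * Math.abs(sampleRtt - estimatedRtt);
        }

        double timeout() {
            return estimatedRtt + 4 * deviation;  // EstimatedRTT plus a safety margin
        }
    }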

  45. Next Lecture • TCP Connection Management – Three-way handshake model • Congestion control mechanisms and algorithm • Performance assessment basics
