
CS 5565 Network Architecture and Protocols

This lecture covers TCP network architecture & congestion control principles, sequence number management, congestion types & causes, congestion control approaches, and end-to-end vs. network-assisted control. Key topics include 3-way handshake, sequence number reuse, congestion collapse, and TCP's response to packet loss. Understanding these concepts is crucial for designing efficient and reliable network protocols.


Presentation Transcript


  1. CS 5565 Network Architecture and Protocols, Godmar Back, Lecture 13

  2. Announcements • Problem Set 2 due Mar 18 • Project 1B due Mar 20 • Reminder: can be done as a team, can switch teams between projects, use the forum if you’re looking for team members • Midterm April 1 (no joke) • Required Reading: DCCP by Kohler et al., SIGCOMM 2006

  3. Study of TCP: Outline • segment structure • reliable data transfer • delayed ACKs • Nagle’s algorithm • timeout management, fast retransmit • flow control + silly window syndrome • connection management • [ Network Address Translation ] • [ Principles of congestion control ] • TCP congestion control

  4. TCP Connection Management

  5. TCP 3-way handshake • TCP connection establishment: • Q1: why 3-way and not 2-way handshake? • Q2: how do sender & receiver determine initial seqnums?

  6. Sequence Number Reuse • Idea: Tie initial TCP seq numbers to a clock • Increment every 4 µs; guards against previous incarnations of a connection with identical sequence numbers • Must also guard against sequence number prediction attacks • Use a PRNG; see [RFC 1948], [CERT CA-2001-09] • RFC 1948: ISN = 4 µs clock value + F(src, dst, sport, dport, secret)
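
A minimal sketch of the RFC 1948-style ISN computation described above (Python; the SHA-256 choice, secret size, and tick arithmetic are illustrative assumptions, not the exact scheme any particular stack uses):

```python
import hashlib
import os
import time

# Per-boot secret; real stacks keep an equivalent value in kernel memory (assumption).
SECRET = os.urandom(16)

def isn(src_ip: str, dst_ip: str, sport: int, dport: int) -> int:
    """ISN = 4-microsecond clock value + F(connection 4-tuple, secret)."""
    # 32-bit counter that ticks every 4 microseconds (wraps about every 4.77 hours).
    clock = int(time.time() * 1_000_000 / 4) & 0xFFFFFFFF
    # F(): keyed hash over the connection identifiers, truncated to 32 bits.
    material = f"{src_ip}|{dst_ip}|{sport}|{dport}".encode()
    offset = int.from_bytes(hashlib.sha256(SECRET + material).digest()[:4], "big")
    return (clock + offset) & 0xFFFFFFFF

print(hex(isn("10.0.0.1", "192.0.2.7", 4321, 80)))
```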

  7. Sequence Number Summary • Goals Set 1: • Guard against old duplicates in one connection -> don’t reuse sequence numbers until it’s reasonably certain that old duplicates have disappeared • Even after a crash restart -> hence tie to time -> but don’t use a global clock -> hence need a 3-way handshake where each side verifies the sequence number chosen by the other side before a successful connection occurs • Goals Set 2: • Don’t allow hijacking of connections unless the attacker can eavesdrop – use a PRNG for the initial seq number choice • Don’t allow SYN attacks – compute, but don’t store, the initial sequence number (SYN cookies)

  8. Principles of Congestion Control • Congestion, informally: “too many sources sending too much data too fast for the network to handle” • different from flow control! • manifestations: long delays (queueing in router buffers), lost packets (buffer overflow at routers) • a top-10 problem!

  9. Causes/costs of congestion: scenario 1 • two senders, two receivers • one router, infinite buffers • no retransmission • large delays when congested (but no reduction in throughput here!) • [Figure: Hosts A and B inject original data at rate λin into one router with unlimited shared output link buffers; λout is the delivered rate]

  10. Causes/costs of congestion: scenario 2 • one router, finite buffers • sender retransmission of lost packet after timeout • [Figure: Hosts A and B send original data at rate λin plus retransmitted data, for a total offered load λ'in, into a router with finite shared output link buffers; λout is the delivered rate]

  11. Always: in = out (goodput) a) if no loss ’in = in b) assume clairvoyant sender: retransmission only when loss certain. c) retransmission of both delayed and lost packets makes ’in larger for same out (every packet transmitted twice) R/2 R/2 R/2 lin lin lin R/3 lout lout lout R/4 R/2 R/2 R/2 Causes/costs of congestion: scenario 2 c. a. b. • “costs” of congestion: • more work (retrans) for given “goodput” • unneeded retransmissions: link carries multiple copies of pkt CS 5565 Spring 2009

  12. Causes/costs of congestion: scenario 3 • four senders • multihop paths • timeout/retransmit • Q: what happens as λin and λ'in increase? • [Figure: four senders with multihop paths through routers with finite shared output link buffers; λin is original data, λ'in is original plus retransmitted data, λout is the delivered rate]

  13. Causes/costs of congestion: scenario 3 (cont.) • Another “cost” of congestion: • when a packet is dropped, any upstream transmission capacity used for that packet was wasted! • ultimately leads to congestion collapse

  14. Reasons for Congestion Control • Congested networks increase delay even if no packet loss occurs • If packet loss occurs, the needed retransmissions require the offered load to be greater than the goodput • Downstream losses waste upstream transmission capacity, leading to congestion collapse in the worst case

  15. Congestion Control Approaches • Two broad classes: • End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP • Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at

  16. TCP Congestion Control • end-end control (no network assistance) • sender limits transmission: LastByteSent - LastByteAcked ≤ min(CongWin, RcvWindow) • CongWin and RTT influence throughput: rate ≈ CongWin / RTT bytes/sec • CongWin is dynamic, a function of perceived network congestion • How does the sender notice congestion? loss event = timeout or 3 duplicate ACKs • TCP sender reduces rate (CongWin) after a loss event (assumes congestion is the primary cause of loss!) • Three mechanisms: AIMD, slow start, fast recovery
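
A tiny sketch of the sender-side limit and the rate it implies (Python; the variable names mirror the slide, the rest is illustrative):

```python
def can_send(last_byte_sent: int, last_byte_acked: int,
             cong_win: int, rcv_window: int) -> bool:
    """The sender keeps unacknowledged bytes within both windows."""
    in_flight = last_byte_sent - last_byte_acked
    return in_flight < min(cong_win, rcv_window)

def approx_rate_bps(cong_win_bytes: int, rtt_sec: float) -> float:
    """Rough sending rate implied by the window: CongWin / RTT."""
    return cong_win_bytes * 8 / rtt_sec

# Example: a 64 KB congestion window over a 100 ms RTT is roughly 5.2 Mbps.
print(approx_rate_bps(64 * 1024, 0.1))
```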

  17. TCP AIMD • additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing • multiplicative decrease: cut CongWin in half after a loss event • [Figure: sawtooth evolution of the congestion window over a long-lived TCP connection]
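
A small sketch of the resulting sawtooth, one step per RTT (Python; the loss rounds are synthetic, purely to show the shape):

```python
MSS = 1

def aimd(rounds, loss_rounds, cwnd=10.0):
    """Additive increase of 1 MSS per RTT; multiplicative decrease (halving) on loss."""
    trace = []
    for rtt in range(rounds):
        if rtt in loss_rounds:
            cwnd = max(cwnd / 2, 1)   # multiplicative decrease
        else:
            cwnd += MSS               # additive increase (probing for bandwidth)
        trace.append(cwnd)
    return trace

# Synthetic loss events at RTTs 8 and 16 produce the familiar sawtooth.
print(aimd(24, loss_rounds={8, 16}))
```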

  18. TCP Slow Start • When connection begins, CongWin = 1 MSS • Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 500 × 8 bits / 0.2 s = 20 kbps • available bandwidth may be >> MSS/RTT -> desirable to quickly ramp up to a respectable rate • When connection begins, increase rate exponentially fast until first loss event

  19. TCP Slow Start (more) • When connection begins, increase rate exponentially until first loss event: double CongWin every RTT, done by incrementing CongWin for every ACK received • Summary: initial rate is slow but ramps up exponentially fast • [Figure: Host A sends one segment, then two, then four segments in successive RTTs]

  20. TCP Tahoe vs. Reno • Q: When should the exponential increase switch to linear? • A: When CongWin gets to 1/2 of its value before timeout. • Implementation: variable Threshold • At a loss event, Threshold is set to 1/2 of CongWin just before the loss event • If timeout event: set CongWin to 1 MSS • If triple-duplicate-ACK event: set CongWin to Threshold and increase linearly (this is called fast recovery and was added in TCP Reno)

  21. Timeouts vs. 3 dup ACKs • After 3 dup ACKs: CongWin is cut in half; window then grows linearly • But after a timeout event: CongWin instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly • Rationale: • 3 dup ACKs indicate the network is capable of delivering some segments • a timeout before 3 dup ACKs are received is a stronger, “more alarming” indicator of congestion

  22. Summary: TCP Congestion Control • When CongWin is below Threshold, sender is in slow-start phase, window grows exponentially. • When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly. • When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold. • When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
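
A minimal event-driven sketch of these rules (Python; simplified Reno-style behavior, with the initial threshold chosen arbitrarily and no modelling of fast recovery’s window inflation):

```python
class RenoSender:
    """Toy congestion controller implementing the four summary rules above."""

    def __init__(self, mss=1460):
        self.mss = mss
        self.cwnd = mss             # slow start begins at 1 MSS
        self.ssthresh = 64 * 1024   # initial Threshold: an arbitrary assumption

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += self.mss                         # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            self.cwnd += self.mss * self.mss / self.cwnd  # congestion avoidance: ~ +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.ssthresh = max(self.cwnd / 2, self.mss)
        self.cwnd = self.ssthresh                         # fast recovery: resume linear growth

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2, self.mss)
        self.cwnd = self.mss                              # back to slow start
```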

  23. TCP Sender Congestion Control

  24. TCP Throughput: Idealized • What’s the average throughput of TCP as a function of window size and RTT? • Long-lived connection: ignore slow start • When the window is W, throughput is W/RTT • Just after a loss, the window drops to W/2, throughput to W/(2·RTT) • Average steady-state throughput: 0.75 W/RTT • [Figure: window oscillates between W/2 and W]
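
Where the 0.75 comes from, assuming an idealized sawtooth that ramps linearly from W/2 back up to W between losses:

```latex
\overline{W} \;=\; \frac{1}{2}\left(\frac{W}{2} + W\right) \;=\; \frac{3W}{4}
\qquad\Longrightarrow\qquad
\text{average throughput} \;\approx\; \frac{\overline{W}}{\mathit{RTT}} \;=\; \frac{0.75\,W}{\mathit{RTT}}
```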

  25. TCP Throughput & Loss • Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput • Requires window size W = 83,333 in-flight segments • Throughput in terms of loss rate: throughput ≈ 1.22 · MSS / (RTT · √L) • -> L = 2·10^-10, i.e., very low • Requires an almost perfect link!
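
A quick check of the slide’s numbers using that approximation (Python; the 1.22 constant comes from the idealized loss model, and the loss rate is obtained by inverting the formula):

```python
from math import sqrt

MSS_BITS = 1500 * 8   # segment size in bits
RTT = 0.100           # seconds
TARGET = 10e9         # 10 Gbps

# Window needed to keep the pipe full: bandwidth-delay product / segment size.
W = TARGET * RTT / MSS_BITS
print(round(W))       # ~83333 segments in flight

# Solve 1.22 * MSS / (RTT * sqrt(L)) = TARGET for the tolerable loss rate L.
L = (1.22 * MSS_BITS / (RTT * TARGET)) ** 2
print(L)              # ~2e-10, i.e. roughly one loss per five billion segments
```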

  26. Equation-Based Control • Note: TCP congestion control forms a control loop: • Inputs: round-trip time, “loss events” (which are samples of timeout events + triple-dup-ACK events) • Instead, equation-based control uses an equation to compute the sending rate based on these inputs • See RFC 5348 for more info
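
A sketch of that computation in the style of TFRC (Python; the throughput equation is quoted from memory of RFC 5348, and t_RTO = 4·RTT is a common shortcut, so treat this as illustrative rather than authoritative):

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """Allowed sending rate in bytes/sec for segment size s (bytes), round-trip
    time rtt (sec), loss event rate p, and retransmission timeout t_rto (sec)."""
    if t_rto is None:
        t_rto = 4 * rtt   # common simplification for the timeout term
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate.
print(tfrc_rate(1460, 0.1, 0.01))   # ~164,000 bytes/sec (~1.3 Mbps)
```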

  27. TCP Fairness

  28. TCP Fairness • Fairness goal: if k TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/k • [Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]

  29. Why is TCP fair? • Consider two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease reduces throughput proportionally • [Figure: Connection 1 throughput vs. Connection 2 throughput, both axes up to R; the trajectory alternates between additive increase (congestion avoidance) and halving the window on loss, converging toward the equal-bandwidth-share line]
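
A small sketch that mimics the diagram: two AIMD flows share a link of capacity R and both halve when the link is overrun (Python; synchronized loss events and unit increases are simplifying assumptions):

```python
R = 100.0               # bottleneck capacity (arbitrary units)
x1, x2 = 80.0, 10.0     # start far from the fair share

for _ in range(200):
    if x1 + x2 > R:                   # overload: both flows see a loss event
        x1, x2 = x1 / 2, x2 / 2       # multiplicative decrease
    else:
        x1, x2 = x1 + 1, x2 + 1       # additive increase, 1 unit per RTT

print(round(x1, 2), round(x2, 2))     # nearly equal: the flows converge toward x1 = x2
```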

  30. Fairness (more) • Fairness and UDP: multimedia apps often do not use TCP; they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss -> raises the question of “TCP friendliness” • Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; asking for 9 TCPs gets R/2! That’s what “download accelerators” do
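
The arithmetic behind that example, assuming each of the n connections through the bottleneck gets an equal 1/n share (Python; hypothetical helper just for illustration):

```python
def new_app_share(existing_conns: int, new_conns: int, r: float = 1.0) -> float:
    """Fraction of link rate r the new app gets if every connection receives an equal share."""
    return new_conns * r / (existing_conns + new_conns)

print(new_app_share(9, 1))   # 0.1 -> R/10 with one connection
print(new_app_share(9, 9))   # 0.5 -> R/2 with nine parallel connections
```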
