EE 627 Lecture 11
• Review of Last Lecture
• UDP & Multimedia
• TCP & UDP Interaction
UDP
• Provides multiplexing and demultiplexing of sources (via port numbers).
• No reliability, flow control, or congestion control.
• Sends data in a burst, at whatever rate the application generates it.
• Most multimedia applications use UDP (a minimal sketch follows).
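As a minimal illustration of UDP's port-based multiplexing and its lack of connection setup, acks, or congestion control, a sender and receiver might look like the sketch below (the port number and buffer size are arbitrary choices, not from the lecture):

```python
import socket

# Receiver: demultiplexing happens by the port the socket is bound to.
def receive_one(port=5004, bufsize=2048):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    data, addr = sock.recvfrom(bufsize)    # blocks until a datagram arrives
    sock.close()
    return data, addr

# Sender: no connection setup, no acks, no congestion control.
def send_one(payload, host="127.0.0.1", port=5004):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))     # fire and forget
    sock.close()
```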
UDP & Multimedia
• Put flow control and congestion control into the application.
• Retransmit a packet if its playback deadline has not yet passed (see the sketch after this list).
• Move on if the deadline has passed.
• Typically don’t respond to congestion.
• Not a “nice” citizen.
• Possible to cause congestion collapse.
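A minimal sketch of the deadline test described above; the function name and the one-RTT retransmission margin are assumptions for illustration, not from the lecture:

```python
import time

def should_retransmit(deadline, rtt_estimate, now=None):
    """Retransmit only if the repair can still arrive before the packet's
    playback deadline; otherwise skip the packet and move on."""
    now = time.monotonic() if now is None else now
    return now + rtt_estimate < deadline

# Example: a packet due for playback in 120 ms with a 40 ms RTT estimate
# is still worth retransmitting; one due in 20 ms is not.
now = time.monotonic()
print(should_retransmit(now + 0.120, 0.040, now))   # True
print(should_retransmit(now + 0.020, 0.040, now))   # False
```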
TCP/UDP Summary
• TCP is not well suited to multimedia.
• TCP is a well-understood, “nice” protocol.
• Additive increase/multiplicative decrease (AIMD) allows fair sharing of bandwidth and avoids congestion collapse.
• UDP is being used by multimedia developers.
UDP Consequences
• Most applications today use TCP.
• Network stability relies on applications responding to congestion.
• Large-scale use of UDP could lead to problems, since UDP has no congestion response.
• A large number of multimedia applications is expected, and they move larger amounts of data.
Unfairness
• When UDP and TCP compete, UDP wins: the congestion it creates makes TCP back off while UDP keeps sending.
Loss of Goodput (FIFO)
• With FIFO queues, packets may be dropped late in the network, wasting the capacity they already consumed on earlier hops and reducing goodput.
Multimedia Delivery
• Even when using UDP, applications should respond to congestion end-to-end.
• Need to promote “nice” or “TCP-friendly” behavior.
• Emerging applications shouldn’t kill the performance of “nice” applications.
TCP-Friendly
• Idea: limit a flow to the throughput a TCP connection would achieve on the same path (same RTT and loss rate); see the equation below.
• The sender doesn’t know the RTT exactly.
• Why should everyone follow this exactly?
• Monitoring individual flows is difficult.
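The exact form of the throughput equation is not in the extracted slide text; the commonly used simplified approximation, for segment size MSS, round-trip time RTT, and loss probability p, is

T ≈ (MSS / RTT) · sqrt(3 / (2p)) ≈ 1.22 · MSS / (RTT · sqrt(p))

so a "TCP-friendly" flow should send at no more than roughly this rate for its measured RTT and loss rate.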
Equation-based Control
• A flow doesn’t have to respond to congestion exactly like TCP,
• as long as its steady-state bandwidth is about the same.
• Design a protocol that, on average, claims the same bandwidth as TCP.
• RTT is available at the end host; try to estimate the drop probability.
• Adjust the sending rate to the bandwidth given by the equation (a sketch follows).
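A minimal sketch of this control loop, using the simplified equation above; the function names, the doubling cap per update, and the example parameters are assumptions (real equation-based protocols such as TFRC use a more detailed throughput formula):

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_prob):
    """Allowed sending rate (bytes/s) from the simplified TCP equation."""
    if loss_prob <= 0:
        return float("inf")            # no measured loss: unconstrained by the equation
    return (mss_bytes / rtt_s) * math.sqrt(3.0 / (2.0 * loss_prob))

def adjust_rate(current_rate, mss_bytes, rtt_s, loss_prob):
    """Move the sending rate toward the equation's value once per update,
    never more than doubling in a single step."""
    target = tcp_friendly_rate(mss_bytes, rtt_s, loss_prob)
    return min(current_rate * 2, target)

# Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 179 KB/s.
print(tcp_friendly_rate(1460, 0.100, 0.01))
```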
Drop Probability
• Don’t want to use the instantaneous drop probability: it varies too much and is noisy.
• Use some kind of averaging,
• but averaging tends to dampen the response to congestion,
• and it is important to respond quickly in times of heavy congestion.
• Use a limited history: remember the last 8 loss events,
• and weigh the more recent ones higher (a sketch follows).
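A minimal sketch of such a weighted average over the last 8 samples, most recent first; the specific decaying weights are an assumption here, chosen so recent samples dominate while old noise is damped:

```python
def average_loss_rate(recent_loss_rates,
                      weights=(1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2)):
    """Weighted average of the last 8 loss-rate samples (most recent first).
    Recent samples get higher weight, so the estimate still moves quickly
    when congestion becomes heavy."""
    samples = list(recent_loss_rates)[:len(weights)]
    if not samples:
        return 0.0
    used = weights[:len(samples)]
    return sum(w * s for w, s in zip(used, samples)) / sum(used)

# Example: mostly light loss, but the two most recent samples are heavy.
print(average_loss_rate([0.05, 0.04, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]))
```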
Equation-based Control
• Shown to work well when competing with TCP.
• Lower variance in a flow’s bandwidth than TCP.
• Fairer distribution than TCP.
• A little complicated.
• Spurred a lot of interest in new protocols.
Binomial Congestion Control
• Generalizes the congestion response: increase the window by α/wᵏ when there is no loss, decrease it by β·wˡ on a loss.
• Analysis showed that these protocols are TCP-friendly if k + l = 1.
• Varying k and l while keeping k + l = 1 yields a family of protocols (a sketch follows).
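A minimal sketch of the binomial window update; α, β, and the example windows are illustrative values, and k = 0, l = 1 recovers TCP-style AIMD:

```python
def binomial_update(w, loss, k, l, alpha=1.0, beta=0.5):
    """One binomial congestion-control step on the window w.
    No loss:  w <- w + alpha / w**k   (k = 0 gives TCP's additive increase)
    Loss:     w <- w - beta  * w**l   (l = 1 gives TCP's multiplicative decrease)
    The family is TCP-friendly when k + l = 1."""
    if loss:
        w = w - beta * (w ** l)
    else:
        w = w + alpha / (w ** k)
    return max(w, 1.0)                 # keep at least one segment in flight

# Example: SQRT control (k = l = 0.5) reacts more gently to a loss than TCP.
w = 20.0
print(binomial_update(w, loss=True, k=0.5, l=0.5))   # ~17.76
print(binomial_update(w, loss=True, k=0.0, l=1.0))   # 10.0 (TCP-style halving)
```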
Binomial Congestion Control
• Showed that steady-state analysis did not tell the complete picture.
• Depending on the congestion response, the drop rates could differ across protocols.
• A flow could respond to congestion in a TCP-friendly way, yet force TCP to see more drops.
• Conjecture: RED is better than drop-tail at making drop probabilities equal across flows.
Open Issues
• Much interest in this area of research.
• Not clear at what time scales a flow needs to be TCP-friendly:
• clearly steady-state analysis is not sufficient,
• and an instantaneous TCP-like response is not needed.
• Are other possible mechanisms simpler?
Other Mechanisms
• Multiple connections: each connection responds to congestion, but together they claim a larger share of bandwidth.
• Used in web browsers.
• Pricing: make the user pay more when sending more bits; adjust pricing based on congestion.
Rate-based Adaptation
• Have a notion of an allowed rate; adjust it to avoid congestion, reducing the rate before packet loss occurs.
• Packet-pair: send a pair of packets back-to-back and watch the time separation of the acks.
• The delay between the acks gives an indication of the bottleneck bandwidth.
Packet-pair Technique
• Ack compression leads to incorrect bandwidth estimation.
• Timestamp the packets on receipt: t1, t2.
• Inform the sender of d = t2 − t1; bottleneck BW ≈ P/d, where P is the packet size (the pair is usually equal-sized).
• Need to send pairs multiple times and use the minimum d.
• Hard to get an estimate of the available bandwidth (a sketch follows).
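A minimal sketch of the estimate; timestamps in seconds, the function names, and the discard rule for non-positive gaps are assumptions for illustration:

```python
def bottleneck_bw(packet_size_bytes, t1, t2):
    """Capacity estimate from one packet pair: the bottleneck spaces the
    back-to-back packets out by one packet transmission time, so C ~= P / d."""
    d = t2 - t1
    if d <= 0:
        return None                    # ack compression / reordering: discard sample
    return packet_size_bytes / d

def bottleneck_bw_filtered(packet_size_bytes, timestamp_pairs):
    """Send many pairs and keep the smallest positive spacing (per the slide),
    since cross traffic can only add to the spacing seen at the receiver."""
    gaps = [t2 - t1 for (t1, t2) in timestamp_pairs if t2 > t1]
    return packet_size_bytes / min(gaps) if gaps else None

# Example: 1500-byte packets spaced 1.2 ms apart -> 1.25 MB/s (~10 Mbit/s).
print(bottleneck_bw(1500, 0.0000, 0.0012))
```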
Packet-pair
• With parallel transfers, both packets may arrive (nearly) simultaneously at the receiver, inflating the bandwidth estimate.
• Can be improved by sending more packets.
• Possible to decouple rate adaptation from reliable delivery.
Hop-by-Hop
• Possible to do flow control hop-by-hop.
• Send backpressure upstream to reduce the rate when queues are building up (a sketch follows).
• Tough to control individual flows.
• Every network element needs to implement it, not just the endpoints.
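A minimal sketch of the queue-threshold backpressure idea; the class names, the on/off signalling, and the specific thresholds are assumptions for illustration:

```python
class Upstream:
    """Stand-in for the upstream hop: just records whether it is throttled."""
    def __init__(self):
        self.sending = True
    def pause(self):
        self.sending = False
    def resume(self):
        self.sending = True

class HopQueue:
    """Per-hop queue that tells its upstream neighbor to slow down or resume
    based on occupancy (simple on/off backpressure)."""
    def __init__(self, high=80, low=20):
        self.packets = []
        self.high, self.low = high, low     # occupancy thresholds (packets)
        self.paused_upstream = False

    def enqueue(self, pkt, upstream):
        self.packets.append(pkt)
        if len(self.packets) >= self.high and not self.paused_upstream:
            upstream.pause()                # queue building up: push back
            self.paused_upstream = True

    def dequeue(self, upstream):
        pkt = self.packets.pop(0) if self.packets else None
        if len(self.packets) <= self.low and self.paused_upstream:
            upstream.resume()               # drained enough: let traffic flow again
            self.paused_upstream = False
        return pkt
```

Note that this pressure applies per hop, not per flow, which is why controlling individual flows this way is hard.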