The Role of “Awareness” in Internet Protocol Performance Carey Williamson, Professor/iCORE Senior Research Fellow, Department of Computer Science, University of Calgary
Introduction • It is an exciting time to be an Internet researcher (or even a user!) • The last 10 years of Internet development have brought many advances: • World Wide Web (WWW) • Media streaming applications • “Wi-Fi” wireless LANs • Mobile computing • E-Commerce, mobile commerce • Pervasive/ubiquitous computing
The Wireless Web • The emergence and convergence of these technologies enable the “wireless Web” • the wireless classroom (CRAN) • the wireless workplace • the wireless home • My iCORE mandate: design, build, test, and evaluate wireless Web infrastructures • Holy grail: “anything, anytime, anywhere” access to information (when we want it, of course!)
Some Challenges • Portable computing devices: no problem (cell phones, PDAs, notebooks, laptops…) • Wireless access: not much of a problem (Wi-Fi, 802.11b, Bluetooth, Pringles-can antennas…) • Security: still an issue, but being addressed • Services: the next big growth area??? • Performance transparency: providing an end-user experience that is hopefully no worse than that in traditional wired Internet desktop environments (my focus!)
Research Theme • The existing layered Internet protocol stack does not lend itself well to providing optimal performance across today’s diverse service demands and environments • Who should bend: users or protocols? • Explore the role of “awareness” in Internet protocol performance • Identify tradeoffs, evaluate performance
Talk Overview • Introduction • Background • Internet Protocol Stack • TCP 101 • Motivating Examples • Our Work on CATNIP • Concluding Remarks
Internet Protocol Stack • Application: supporting network applications and end-user services (FTP, SMTP, HTTP, DNS, NTP) • Transport: end-to-end data transfer (TCP, UDP) • Network: routing of datagrams from source to destination (IPv4, IPv6, BGP, RIP, routing protocols) • Data Link: hop-by-hop frames, channel access, flow/error control (PPP, Ethernet, IEEE 802.11b) • Physical: raw transmission of bits
Viewpoint • “Layered design is good; layered implementation is bad” -Anon. • Good: • unifying framework for describing protocols • modularity, black-boxes, “plug and play” functionality, well-defined interfaces (good SE) • Bad: • increases overhead (interface boundaries) • compromises performance (ignorance)
Tutorial: TCP 101 • The Transmission Control Protocol (TCP) is the protocol that gets your data reliably • Used for email, Web, ftp, telnet, … • Makes sure that data is received correctly: right data, right order, exactly once • Detects and recovers from any problems that occur at the IP network layer • Mechanisms for reliable data transfer: sequence numbers, acknowledgements, timers, retransmissions, flow control...
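To make these mechanisms concrete, here is a minimal stop-and-wait sender sketch in Python over UDP. It is not TCP (the function name, port, and packet format are invented for illustration), but it uses the same basic ingredients in their simplest form: a sequence number on each segment, an acknowledgement from the receiver, and a timer that forces a retransmission when the acknowledgement does not arrive.

```python
import socket

# Minimal stop-and-wait sender sketch (hypothetical; real TCP is far more
# sophisticated), illustrating sequence numbers, ACKs, timers, retransmission.
def send_reliably(data_chunks, dest=("127.0.0.1", 9999), timeout=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)                  # retransmission timer
    for seq, chunk in enumerate(data_chunks):
        packet = seq.to_bytes(4, "big") + chunk
        while True:
            sock.sendto(packet, dest)         # (re)transmit current segment
            try:
                ack, _ = sock.recvfrom(4)
                if int.from_bytes(ack, "big") == seq:
                    break                     # right ACK: move to next segment
            except socket.timeout:
                pass                          # timer expired: retransmit
    sock.close()
```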
TCP 101 (Cont’d) • TCP is a connection-oriented protocol • [Timeline: three-way handshake (SYN, SYN/ACK, ACK), then the request and response (“GET URL”, “YOUR DATA HERE”), then connection teardown (FIN, FIN/ACK, ACK)]
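From an application’s point of view, that whole timeline hides behind three socket calls. The sketch below (hypothetical host and path; assumes plain HTTP/1.0 on port 80) shows roughly where each phase happens: connect() triggers the three-way handshake, sendall() carries the GET, and close() starts the FIN exchange.

```python
import socket

# The three phases of the timeline above, as seen from an application:
# connect() triggers the SYN / SYN-ACK / ACK handshake, sendall() carries
# the GET (and the reply carries "your data"), close() starts the FIN exchange.
def fetch(host="example.com", path="/"):
    sock = socket.create_connection((host, 80))      # three-way handshake
    request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
    sock.sendall(request.encode())                    # GET URL
    chunks = []
    while chunk := sock.recv(4096):                   # data packets + ACKs
        chunks.append(chunk)
    sock.close()                                      # FIN / FIN-ACK / ACK
    return b"".join(chunks)
```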
TCP 101 (Cont’d) • TCP slow start and congestion avoidance • [Animation: the sender transmits a burst of data packets, and each returning ACK lets it enlarge the next burst]
TCP 101 (Cont’d) • This (exponential growth) “slow start” process continues until either of the following happens: • packet loss: after a brief recovery phase, you enter a (linear growth) “congestion avoidance” phase based on slow-start threshold found • all done: terminate connection and go home
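The window growth just described can be captured with a toy per-RTT model (the ssthresh and initial window values below are illustrative, not measured): the congestion window doubles each round trip during slow start, then grows by one segment per round trip during congestion avoidance.

```python
# Toy model of TCP window growth per round trip: exponential ("slow start")
# until ssthresh, then linear ("congestion avoidance"). Illustrative only;
# real TCP grows per-ACK and reacts to losses.
def window_trace(rtts, ssthresh=16, initial_cwnd=1):
    cwnd = initial_cwnd
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # slow start: double each RTT
        else:
            cwnd += 1        # congestion avoidance: +1 segment each RTT
    return trace

print(window_trace(10))      # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```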
Simple Observation • Consider a big file transfer download: • brief startup period to estimate network bandwidth; most time spent sending data at the “right rate”; small added penalty for lost packet(s) • Consider a typical Web document transfer: • median size about 6 KB, mean about 10 KB • most time is spent in startup period; as soon as you find out the network capacity, you’re done! • if you lose a packet or two, it hurts a lot!!!
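A back-of-envelope calculation makes the point. Assuming 1460-byte segments, an initial window of one segment, no losses, and ignoring the handshake, the sketch below counts the round trips needed to deliver a document while the window is still doubling:

```python
import math

# Back-of-envelope: round trips needed to deliver a small document when
# the window doubles each RTT (1 segment, then 2, 4, ...). Assumes
# 1460-byte segments, no loss, and ignores the handshake RTT.
def rtts_for(doc_bytes, mss=1460, initial_cwnd=1):
    segments = math.ceil(doc_bytes / mss)
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

for size in (6_000, 10_000, 1_000_000):
    print(f"{size:>9} bytes: ~{rtts_for(size)} RTTs")
# ~6 KB and ~10 KB both need about 3 RTTs; a 1 MB file needs about 10 --
# the small transfers finish before TCP ever learns the available capacity.
```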
The Problem (Restated) • TCP doesn’t recognize this dichotomy between optimizing throughput (the classic file transfer model) and optimizing transfer time (the Web document download model) • Wouldn’t it be nice if it did? (i.e., if it knew how much data it was sending, and over what type of network) • Some research is starting to explore this...
Motivating Example #1 • Wireless TCP Performance Problems • [Diagram: a wired Internet path (high capacity, low error rate) connected to a wireless access link (low capacity, high error rate)]
Motivating Example #1 • Solution: “wireless-aware TCP” (I-TCP, ProxyTCP, Snoop-TCP, ...)
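The flavour of these approaches can be sketched in a few lines. The class below is a conceptual, hypothetical rendering of the Snoop-TCP idea (names and structure are mine, not from any real implementation): a base-station agent caches segments bound for the wireless host, retransmits locally when a duplicate ACK suggests a wireless loss, and suppresses that duplicate ACK so the wired sender does not mistake the loss for congestion.

```python
# Conceptual sketch of the Snoop-TCP idea (hypothetical class/method names):
# a base-station agent caches segments heading to the wireless host and
# retransmits locally on a duplicate ACK, hiding wireless losses from the
# wired sender so it does not back off its congestion window.
class SnoopAgent:
    def __init__(self):
        self.cache = {}          # seq -> segment not yet ACKed by the mobile
        self.last_ack = -1

    def on_data_from_wired(self, seq, segment):
        self.cache[seq] = segment
        return segment           # forward over the wireless link

    def on_ack_from_mobile(self, ack):
        if ack > self.last_ack:                      # new ACK: clean the cache
            for seq in [s for s in self.cache if s <= ack]:
                del self.cache[seq]
            self.last_ack = ack
            return ("forward_ack", None)
        # duplicate ACK: likely a wireless loss -- retransmit locally and
        # suppress the ACK so the wired sender never sees the loss
        missing = ack + 1
        return ("retransmit", self.cache.get(missing))
```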
Motivating Example #2 • Multi-hop “ad hoc” networking • [Animation: packets forwarded hop by hop across intermediate mobile nodes between “Haruna” and “Carey” as the nodes move]
Motivating Example #2 • Two interesting subproblems: • Dynamic ad hoc routing: node movement can disrupt the IP routing path at any time, breaking the TCP connection (yet another way to lose packets!); possible solution: Explicit Loss Notification (ELN) • TCP flow control: the bursty nature of TCP packet transmissions can create contention for the shared wireless channel among forwarding nodes; possible solution: rate-based flow control (see the pacing sketch below)
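As a rough illustration of rate-based flow control, the sketch below paces packet departures with a token bucket instead of letting TCP release them in bursts; the rate and burst values are purely illustrative assumptions, not figures from the talk.

```python
import time

# Minimal token-bucket pacer sketch for rate-based flow control over a
# shared ad hoc channel: packets are released at a configured rate instead
# of in TCP's bursts. Rate/burst values here are purely illustrative.
class TokenBucketPacer:
    def __init__(self, rate_pkts_per_s=50, burst=4):
        self.rate, self.burst = rate_pkts_per_s, burst
        self.tokens, self.last = burst, time.monotonic()

    def wait_for_token(self):
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return                      # OK to send one packet now
            time.sleep((1 - self.tokens) / self.rate)
```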
Our Work • Context-Aware Transport/Network Internet Protocol (CATNIP) • Motivation: “Like kittens, TCP connections are born with their eyes shut” - CLW • Research Question: How much better could TCP perform if it knew what it was trying to accomplish (e.g., Web document transfer)?
Some Key Observations (I think) • Not all packet losses are created equal • TCP sources have relatively little control • IP routers have all the power!!!
Tutorial: TCP 201 • There is a beautiful way to plot and visualize the dynamics of TCP behaviour • Called a “TCP Sequence Number Plot” • Plot packet events (data and acks) as points in 2-D space, with time on the horizontal axis, and sequence number on the vertical axis
[TCP sequence number plot. Key: X = data packet, + = ACK packet. Data packets and their ACKs climb up and to the right in ever-larger bursts, with time on the horizontal axis and sequence number (SeqNum) on the vertical axis]
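Such a plot is easy to generate from a packet trace. The sketch below assumes an already-parsed list of (time, sequence number, kind) events (in practice these would come from a tool such as tcpdump) and simply scatters data packets as “x” and ACKs as “+”:

```python
import matplotlib.pyplot as plt

# Sketch of a TCP sequence-number plot from a (time, seq, kind) trace,
# where kind is "data" or "ack". The trace format is assumed here;
# in practice the events would come from tcpdump/tcptrace output.
def seqnum_plot(events):
    for kind, marker in (("data", "x"), ("ack", "+")):
        pts = [(t, s) for t, s, k in events if k == kind]
        if pts:
            plt.scatter(*zip(*pts), marker=marker, label=kind)
    plt.xlabel("Time")
    plt.ylabel("Sequence number")
    plt.legend()
    plt.show()

# Example: a tiny hand-made trace -- each data packet at time t,
# its ACK half a time unit later.
trace = [(t, s, "data") for t, s in enumerate(range(14))]
trace += [(t + 0.5, s, "ack") for t, s in enumerate(range(14))]
seqnum_plot(trace)
```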
TCP 201 (Cont’d) • What happens when a packet loss occurs? • Quiz Time... • Consider a 14-packet Web document • For simplicity, consider only a single packet loss
[Quiz animation: the same 14-packet sequence number plot, replayed with a single data packet lost at different points in the transfer (near the start, in the middle, and at the very end), showing how the connection recovers in each case]
TCP 201 (Cont’d) • Main observation: • “Not all packet losses are created equal” • Losses early in the transfer have a huge adverse impact on the transfer latency • Losses near the end of the transfer always cost at least a retransmit timeout • Losses in the middle may or may not hurt, depending on congestion window size at the time of the loss
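A crude model captures why the position of the loss matters. The assumption below (illustrative, not from the talk) is that a loss recovered by fast retransmit costs roughly one extra round-trip time, which requires at least three later packets in flight to generate duplicate ACKs, while any other loss costs a full retransmission timeout:

```python
# Rough model of the extra delay caused by a single lost data packet,
# depending on when it happens. Assumptions (for illustration only):
# a loss recovered by fast retransmit costs about one extra RTT, while a
# loss with too small a window for three duplicate ACKs costs a full RTO.
def loss_penalty(cwnd_at_loss, packets_left_after_loss,
                 rtt=0.1, rto=1.0, dupack_threshold=3):
    # Need at least 3 later packets in flight to trigger duplicate ACKs.
    in_flight_after = min(cwnd_at_loss - 1, packets_left_after_loss)
    if in_flight_after >= dupack_threshold:
        return rtt        # fast retransmit: roughly one extra round trip
    return rto            # timeout: the dominant cost for early/late losses

print(loss_penalty(cwnd_at_loss=1, packets_left_after_loss=13))  # early loss: 1.0 s
print(loss_penalty(cwnd_at_loss=8, packets_left_after_loss=6))   # middle loss: 0.1 s
print(loss_penalty(cwnd_at_loss=8, packets_left_after_loss=0))   # last packet: 1.0 s
```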