CPEG 419 Introduction to Data Networking Review of Lecture 1 and continuation of chapter 1
Announcements • Homework 1 due next week • Project 1 due next week
Today • Review and complete Chapter 1 • Start Chapter 2
Packet Switching Case • What is the probability of more than 100 users being active? It is the probability of 101 users being active, plus 102 users being active, plus …, plus 200 users being active, which is the binomial complementary cumulative distribution. • For 200 users this probability is tiny: still pretty good. We conclude that if there are 200 users, then "pretty much always" things will work fine. • Suppose that there are 300 users: might be acceptable performance. • Suppose that there are 400 users: the same calculation applies (see the sketch below). • Therefore: circuit switching could support 100 users, while packet switching can support 400 users. A factor of 4 more!!!
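A minimal sketch of how such tail probabilities can be computed; the per-user activity probability p = 0.1 is an assumed illustrative value, not a number taken from the slide.

# Sketch: binomial complementary CDF, P(more than `threshold` of n users active).
# The activity probability p = 0.1 below is an assumed illustrative value.
from math import comb

def prob_more_than(threshold, n, p):
    """P(X > threshold) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold + 1, n + 1))

for n in (200, 300, 400):
    print(n, prob_more_than(100, n, p=0.1))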
Losses and delay in packet switched networks [Figure: queue at a router between hosts A and B: a packet being transmitted (delay), packets queueing (delay), free (available) buffers; arriving packets are dropped (loss) if there are no free buffers] • Losses • Transmission losses • In fiber links, the bit-error rate is 10^-12 or better (i.e., less). • What is the probability of packet error when there are 1400 bytes in a packet? (A worked example follows below.) • In wireless links, the bit-error rate can be very high. • Congestion losses • If too many packets arrive at the same time, then the buffers will fill up and packets are lost. • Increasing the link speeds or reducing the number of users can reduce the probability of loss. • Increasing the size of the buffer reduces losses, but also increases delay. • Delay • Queuing delay • Transmission delay • Propagation delay • Processing delay
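A hedged worked answer to the fiber question above, assuming independent bit errors at a rate of 10^-12: a 1400-byte packet contains 1400 × 8 = 11,200 bits, so P(packet error) = 1 − (1 − 10^-12)^11200 ≈ 11,200 × 10^-12 ≈ 1.1 × 10^-8.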
In the news News sources www.lightreading.com (general networks) www.unstrung.com (wireless and mobile) www.darkreading.com (network security) www.alleyinsider.com (general tech business news) arstechnica.com (general tech news)
The Protocol Stack The application layer includes network applications and network application protocols e.g. of applications: web, IM, email e.g., application protocols: OSCAR, http, smtp, ftp, DNS. Provide a service to a user or another application. Require service from the lower layers, but typically only interact with the transport layer. application transport network link physical
The Protocol Stack The transport layer (typically) transports messages from and to applications. Different transport layer protocols provide different types of services. Types of services MAY include: Reliability: the sender application can be assured that the data is correctly received, or receives an error message. Congestion and flow control: attempt to send data quickly, but not so quickly as to cause congestion in the network or at the receiving host. Error detection / correction. In-order delivery. Breaking long messages into small chunks suitable for transmission over the network. Multiplexing so that multiple transport layer connections can occur simultaneously. Note that when a transport protocol provides these services, the application does not have to. This makes implementation of applications easier. This allows careful design of transport protocols, following the divide-and-conquer approach. The transport layer uses the network layer to deliver packets, but does not require any type of service guarantees from the network layer. In practice, the transport layer hopes for in-order delivery. application transport network link physical
Transport layer protocols: TCP and UDP TCP and UDP are the most widely used transport protocols. Other protocols include SCTP (UD and Cisco are active in developing SCTP), RTP (for multimedia such as VoIP) TCP and UDP will be covered in great detail later. But for now: TCP provides many services Congestion control Flow control Reliability Multiplexing Error detection UDP provides few services Error detection Multiplexing The application must implement any other services that it requires. TCP requires a connection to be established, UDP does not application transport network link physical
Transport Multiplexing Transport layers use ports to provide multiplexing. Two hosts can have multiple simultaneous connections by using ports. Well-known ports can be used to specify a particular application. E.g., web servers will accept TCP connections on port 80. A host can have two connections with a web server by using different ports. [Figure: each host has a UDP and a TCP port space ranging from 0 to 2^16-1; the web server listens on TCP port 80 while the other host uses ports 4567 and 4568]
Sockets – gateway between the app layer and the transport layer • A process sends/receives messages to/from its socket • A socket is analogous to a door: the sending process shoves the message out the door; the sending process relies on the transport infrastructure on the other side of the door, which brings the message to the socket at the receiving process [Figure: on each host or server, the process and its socket are controlled by the app developer, while TCP with its buffers and variables is controlled by the OS; the two sides communicate across the Internet]
TCP Sockets • An application accesses TCP and UDP through sockets. • TCP is connection based, so one host must be listening and the other must be connecting (calling). • The basic steps for a TCP listener (see the sketch below) • Define socket variable as a TCP socket • Bind the socket to a port (the bind function) • If some other application is, or was recently (120 sec), listening on this port, this function will fail. • The application must check that this command succeeds. • Listen on this port (the listen function) and accept the connection • When the other host connects, the accept completes and data can be sent or received. • Close socket • Basic steps for a TCP caller • Define socket variable as a TCP socket • No port is given; the OS will assign whichever port is available. The application has no control over the port. • Connect • Send data • Close socket
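A minimal sketch of these steps using Python's socket module; the port number (4567) and message contents are illustrative assumptions, not part of the project.

# Sketch of the TCP listener and caller steps (illustrative port and messages).
import socket

def tcp_listener(port=4567):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # define a TCP socket
    srv.bind(("", port))        # bind to a port; raises OSError if the port is busy
    srv.listen(1)               # listen for an incoming connection
    conn, addr = srv.accept()   # completes when the caller connects
    data = conn.recv(1024)      # data can now be sent or received
    conn.sendall(b"hello from listener")
    conn.close()
    srv.close()

def tcp_caller(host="localhost", port=4567):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # no bind: the OS picks the local port
    s.connect((host, port))
    s.sendall(b"hello from caller")
    print(s.recv(1024))
    s.close()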
UDP Sockets • UDP is connectionless. • A host sends a packet when it wants. • There is no concept of one host connecting to another. • There is only the concept of one host sending a packet and the other host receiving the packet, and either host can send or receive. • Steps to send and then receive a UDP message (see the sketch below) • Define socket as a UDP socket • Bind socket to a port • If this port is in use, bind will fail • Send message • Wait for message • There are two ways to wait for messages: blocking or non-blocking • A blocking function will wait for a message to arrive. It might wait forever. • A non-blocking function will return immediately, but if no message was waiting in the transport layer, then no message is returned • The select function allows a timeout to be set, so the function will wait until a message arrives or the timeout elapses • Close socket • Steps to receive and then send a UDP message • Define socket as a UDP socket • Bind socket to a port • If this port is in use, bind will fail • Wait for message • Send response • Close socket
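A minimal sketch of a UDP exchange with a timed wait using select; the ports and addresses are illustrative assumptions.

# Sketch: send a UDP packet, then wait for a reply with a timeout (select).
import socket, select

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # define a UDP socket
sock.bind(("", 4568))                                     # bind fails if the port is in use

# Send a message: no connection, just a destination address on each packet.
sock.sendto(b"hello", ("localhost", 4567))

# Wait for a message, but give up after 5 seconds.
ready, _, _ = select.select([sock], [], [], 5.0)
if ready:
    data, addr = sock.recvfrom(2048)
    print("got", data, "from", addr)
else:
    print("timed out: no message waiting")

sock.close()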
Project 1 Due 9/16 • In this project, messages will be sent over TCP and UDP. • The project description is currently at • http://www.eecis.udel.edu/~bohacek/Classes/CPEG419_2005/Proj1/project1_part1.htm • All the required information should be online. • This project can be completed by cutting and pasting from the web site, but try to understand the steps. • Let me know if there are typos.
The Protocol Stack The network layer routes packets (datagrams) through the network The network layer gets packets from the transport layer or from the link layer. Depending on the destination address, the network layer will give the packet to the transport protocol or to a specific link layer to send on a specific link The network layer also provides fragmenting of a large packet into chunks suitable for the link layer application transport network link physical
The Protocol Stack The link layer moves packets (frames) between two hosts However, the link layer may provide a wide range of services including Media access control Error detection / correction Routing over layer 2 networks Reliability (where the network layer is informed if the transmission fails) application transport network link physical
The Protocol Stack The physical layer moves packets (frames) between two connected hosts This requires putting the bits onto a physical medium and decoding them from the medium. In this course we mostly neglect the physical layer and assume that it works correctly (each layer always assumes that the other layers work correctly) But the performance of a protocol at a layer is often dependent on the other layers. One approach is cross-layer design application transport network link physical
Encapsulation [Figure: at the source, the application message (M) is passed down the stack; the transport layer adds header Ht to form a segment (Ht | M), the network layer adds Hn to form a datagram (Hn | Ht | M), and the link layer adds Hl to form a frame (Hl | Hn | Ht | M). A switch processes frames at the link and physical layers, a router processes datagrams up to the network layer, and the destination strips the headers back off layer by layer.]
Chapter 2 The Application Layer
Goals of this Chapter • To understand how common application protocols work • Web (http) • Email (smtp) • FTP • DNS • P2P • IM • To understand the design alternatives for network applications • A network application runs on many hosts, so it is a distributed application • This chapter discusses several designs of distributed applications
Road Map • Application basics • Web • Email • FTP • DNS • P2P • Graph theory • State diagrams • P2P design • IM
Creating a network app write programs that run on (different) end systems and communicate over the network e.g., web server software communicates with browser software No need to write software for network-core devices Network-core devices do not run user applications keeping applications on end systems allows for rapid app development and propagation [Figure: protocol stacks (application, transport, network, data link, physical) on the end systems at the network edge]
An App-layer protocol defines Types of messages exchanged, e.g., request, response Message syntax: what fields in messages & how fields are delineated Message semantics meaning of information in fields Rules for when and how processes send & respond to messages Public-domain protocols: defined in RFCs allows for interoperability e.g., HTTP, SMTP Proprietary protocols: e.g., Skype
Ports An application is identified by the host's IP address, transport protocol, and port E.g., a web server has a particular IP address and listens with TCP on port 80. A web browser on a host will connect and request a file from the web server. The browser is identified by the host's IP address and a TCP port. [Figure: each host has a UDP and a TCP port space from 0 to 2^16-1; the web server listens on TCP port 80 while the client host uses ports 4567 and 4568]
What transport service does an app need? Data reliability some apps (e.g., audio) can tolerate some loss other apps (e.g., file transfer, telnet) require 100% reliable data transfer Timing some apps (e.g., Internet telephony, interactive games) require low delay to be “effective” Throughput • some apps (e.g., multimedia) require minimum amount of throughput to be “useful” (i.e., in order for the user to gain utility) • other apps (“elastic apps”) make use of whatever throughput they get Security • Encryption, data integrity, …
Transport service requirements of common apps
Application / Data loss / Throughput / Time sensitive
file transfer / no loss / elastic / no
e-mail / no loss / elastic / no
Web documents / no loss / somewhat elastic / not really
real-time audio/video / loss-tolerant / audio: 5 kbps-1 Mbps, video: 10 kbps-5 Mbps / yes, 100's msec
stored audio/video / loss-tolerant / same as above / yes, few secs
interactive games / loss-tolerant / few kbps up / yes, 100's msec
instant messaging / no loss / elastic / yes and no
Internet transport protocols services TCP service: connection-oriented: setup required between client and server processes reliable transport between sending and receiving process flow control: sender won't overwhelm receiver congestion control: throttle sender when network overloaded does not provide: timing, minimum throughput guarantees, security UDP service: unreliable data transfer between sending and receiving process does not provide: reliability, flow control, congestion control, timing, throughput guarantee, or security Does not require connection set-up Packets can be sent at any rate desired (but this might cause considerable congestion)
Internet apps: application, transport protocols
Application / Application layer protocol / Underlying transport protocol
e-mail / SMTP [RFC 2821] / TCP
remote terminal access / Telnet [RFC 854] / TCP
Web / HTTP [RFC 2616] / TCP
file transfer / FTP [RFC 959] / TCP
streaming multimedia / HTTP (e.g., YouTube), RTP [RFC 1889] / TCP or UDP
Internet telephony / SIP, RTP, proprietary (e.g., Skype) / typically UDP
Road Map • Application basics • Web • Email • FTP • DNS • P2P • Graph theory • State diagrams • P2P design • IM
Web and HTTP • Web page consists of objects • Object can be HTML file, JPEG image, Java applet, audio file,… • Web page consists of a base HTML file which includes several referenced objects • The browser first requests the base file • The base file specifies text and URLs of objects • The browser requests these objects, wherever they are (not always on the same server) • HTTP is used to request the base file and all the other files • Note that HTTP can be used for other applications besides the web • Each object is addressable by a URL • Example URL: www.someschool.edu/someDept/pic.gif, where www.someschool.edu is the host name and /someDept/pic.gif is the path name (a small parsing sketch follows below)
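A small sketch of splitting the example URL into host name and path name; the http:// scheme is added only so the standard parser can be used.

# Sketch: splitting the example URL into its host name and path name.
from urllib.parse import urlparse

parts = urlparse("http://www.someschool.edu/someDept/pic.gif")
print(parts.netloc)   # www.someschool.edu  -> host name
print(parts.path)     # /someDept/pic.gif   -> path name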
HTTP overview HTTP: hypertext transfer protocol Web's application layer protocol client/server model client: browser that requests, receives, "displays" Web objects server: Web server sends objects in response to requests [Figure: a PC running Explorer and a Mac running Navigator exchange HTTP requests and responses with a server running the Apache Web server]
HTTP overview (continued) Uses TCP: client initiates TCP connection (creates socket) to server, port 80 server accepts TCP connection from client HTTP messages (application-layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server) TCP connection closed HTTP is “stateless” server maintains no information about past client requests aside Protocols that maintain “state” are complex! • past history (state) must be maintained • if server/client crashes, their views of “state” may be inconsistent, must be reconciled
HTTP connections Nonpersistent HTTP At most one object is sent over a TCP connection. Persistent HTTP Multiple objects can be sent over single TCP connection between client and server.
Nonpersistent HTTP Suppose user enters URL www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images) 1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80 1b. HTTP server at host www.someSchool.edu waiting for TCP connection at port 80 "accepts" connection, notifying client 2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index 3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket 4. HTTP server closes TCP connection. 5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects 6. Steps 1-5 repeated for each of the 10 jpeg objects
Non-Persistent HTTP: Response time Definition of RTT: time for a small packet to travel from client to server and back. Response time: • one RTT to initiate TCP connection • one RTT for HTTP request and first few bytes of HTTP response to return • file transmission time total = 2 RTT + transmit time [Figure: timeline showing initiate TCP connection (RTT), request file (RTT), time to transmit file, file received]
Persistent HTTP Nonpersistent HTTP issues: requires 2 RTTs per object OS overhead for each TCP connection browsers often open parallel TCP connections to fetch referenced objects Persistent HTTP server leaves connection open after sending response subsequent HTTP messages between same client/server sent over open connection client sends requests as soon as it encounters a referenced object as little as one RTT for all the referenced objects
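A rough sketch comparing the two cases for a base file plus 10 referenced objects; the RTT and per-object transmission times are assumed illustrative values, not numbers from the slides.

# Sketch: response time for a base HTML file plus 10 referenced objects.
RTT = 0.1        # seconds, assumed
TRANSMIT = 0.01  # seconds per object, assumed
N = 10           # referenced objects

# Nonpersistent: every object costs a new connection -> 2 RTT + transmit each.
nonpersistent = (1 + N) * (2 * RTT + TRANSMIT)

# Persistent with pipelining: fetch the base file, then as little as one more
# RTT (plus transmission) for all the referenced objects.
persistent = (2 * RTT + TRANSMIT) + (RTT + N * TRANSMIT)

print(f"nonpersistent: {nonpersistent:.2f} s")
print(f"persistent (pipelined): {persistent:.2f} s")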
HTTP request message • two types of HTTP messages: request, response • HTTP request message: • ASCII (human-readable format)
GET /somedir/page.html HTTP/1.1          (request line: GET, POST, HEAD commands)
Host: www.someschool.edu                 (header lines)
User-agent: Mozilla/4.0
Connection: close
Accept-language: fr
(extra carriage return, line feed: a blank line indicates the end of the message)
HTTP response message
HTTP/1.1 200 OK                          (status line: protocol, status code, status phrase)
Connection: close                        (header lines)
Date: Thu, 06 Aug 1998 12:00:15 GMT
Server: Apache/1.3.0 (Unix)
Last-Modified: Mon, 22 Jun 1998 …...
Content-Length: 6821
Content-Type: text/html
data data data data data ...             (data, e.g., requested HTML file)
HTTP response status codes The status code appears in the first line of the server->client response message. A few sample codes: 200 OK request succeeded, requested object later in this message 301 Moved Permanently requested object moved, new location specified later in this message (Location:) 400 Bad Request request message not understood by server 404 Not Found requested document not found on this server 505 HTTP Version Not Supported
Trying out HTTP (client side) for yourself 1. Telnet to your favorite Web server: telnet cis.poly.edu 80 This opens a TCP connection to port 80 (default HTTP server port) at cis.poly.edu. Anything typed in is sent to port 80 at cis.poly.edu 2. Type in a GET HTTP request: GET /~ross/ HTTP/1.1 Host: cis.poly.edu By typing this in (hit carriage return twice), you send this minimal (but complete) GET request to the HTTP server 3. Look at the response message sent by the HTTP server! (A scripted version of the same exercise is sketched below.)
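The same exercise can be scripted; a minimal sketch with Python's socket module (the host is the one named above and may no longer answer).

# Sketch: the telnet exercise done programmatically: open TCP port 80 and
# send a minimal GET request by hand, then print the raw HTTP response.
import socket

with socket.create_connection(("cis.poly.edu", 80)) as s:
    s.sendall(b"GET /~ross/ HTTP/1.1\r\n"
              b"Host: cis.poly.edu\r\n"
              b"Connection: close\r\n\r\n")
    while True:                       # read until the server closes the connection
        chunk = s.recv(4096)
        if not chunk:
            break
        print(chunk.decode("latin-1"), end="")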
Wireshark (Ethereal) • Wireshark captures all packets that pass through the host's interface • To run Wireshark, libpcap (Linux) or WinPcap (Windows) must be installed; it comes with the Wireshark package • Then, run Wireshark • Select Capture • Find the active interface • E.g., not generic dialup, nor VPN, nor packet scheduler, but the wireless interface …. with an IP address • Then select Prepare • Let's watch TCP packets on port 80 • Next to capture filter, enter tcp port 80 • Select update in real time and autoscroll • Might need to enable or disable "capture in promiscuous mode" • Press start • Press close • Load the www.eecis.udel.edu page in a browser • Press stop in Wireshark • Find the HTTP request to 128.4.40.10 • Right click and select Follow TCP Stream
Web caches (proxy server) Goal: reduce network utilization by satisfying client request without involving origin server user sets browser: Web accesses via cache browser sends all HTTP requests to cache object in cache: cache returns object else cache requests object from origin server, then returns object to client [Figure: clients send HTTP requests to the proxy server, which forwards cache misses to the origin servers and relays the HTTP responses back]
More about Web caching cache acts as both client and server typically cache is installed by ISP (university, company, residential ISP) Why Web caching? reduce response time for client request reduce traffic on an institution’s access link. Internet dense with caches: enables “poor” content providers to effectively deliver content (but so does P2P file sharing)
Caching example Assumptions average object size = 100,000 bits avg. request rate from institution's browsers to origin servers = 15/sec delay from institutional router to any origin server and back to router = 2 sec Consequences request traffic = 15/sec × 100,000 bits = 1.5 Mbps, so utilization on the 10 Mbps LAN = 15% and utilization on the 1.5 Mbps access link = 100% total delay = Internet delay + access delay + LAN delay = 2 sec + minutes + milliseconds origin servers public Internet 1.5 Mbps access link institutional network 10 Mbps LAN institutional cache
Caching example (cont) possible solution increase bandwidth of access link to, say, 10 Mbps consequence utilization on LAN = 15% utilization on access link = 15% Total delay = Internet delay + access delay + LAN delay = 2 sec + msecs + msecs often a costly upgrade origin servers public Internet 10 Mbps access link institutional network 10 Mbps LAN institutional cache
Caching example (cont) possible solution: install cache suppose hit rate is 0.4 consequence 40% requests will be satisfied almost immediately 60% requests satisfied by origin server utilization of access link reduced to 60%, resulting in negligible delays (say 10 msec) total avg delay = Internet delay + access delay + LAN delay = .6*(2.01) secs + .4*milliseconds < 1.4 secs origin servers public Internet 1.5 Mbps access link institutional network 10 Mbps LAN institutional cache
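A small sketch reproducing the arithmetic of this example; the 10 ms access delay and millisecond LAN delay are the "negligible" values assumed on the slides.

# Sketch: caching-example arithmetic from the slides.
object_bits    = 100_000
requests_per_s = 15
access_link_bps = 1.5e6
hit_rate = 0.4

traffic_bps = object_bits * requests_per_s                   # 1.5 Mbps of request traffic
print("access link utilization, no cache  :", traffic_bps / access_link_bps)                   # 1.0
print("access link utilization, with cache:", (1 - hit_rate) * traffic_bps / access_link_bps)  # 0.6

internet_delay, access_delay, lan_delay = 2.0, 0.010, 0.001  # seconds (assumed values)
avg_delay = (1 - hit_rate) * (internet_delay + access_delay) + hit_rate * lan_delay
print("average delay with cache (s)       :", avg_delay)     # about 1.21 s, i.e. < 1.4 secs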
Conditional GET Goal: don't send object if cache has up-to-date cached version cache: specify date of cached copy in HTTP request If-modified-since: <date> server: response contains no object if cached copy is up-to-date: HTTP/1.0 304 Not Modified [Figure: two cases between cache and server. Object not modified: request with If-modified-since: <date>, response HTTP/1.0 304 Not Modified. Object modified: request with If-modified-since: <date>, response HTTP/1.0 200 OK followed by <data>]
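A minimal sketch of issuing a conditional GET from Python; the host, path, and date are placeholder assumptions.

# Sketch: conditional GET with If-Modified-Since using the standard library.
import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html",
             headers={"If-Modified-Since": "Sat, 29 Oct 1994 19:43:31 GMT"})
resp = conn.getresponse()
print(resp.status, resp.reason)   # 304 Not Modified if the copy is current, else 200 OK
conn.close()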
Road Map • Application basics • Web • FTP • Email • DNS • P2P • Graph theory • State diagrams • P2P design • IM