20-755: The Internet
Lecture 12: Scalable services
David O'Hallaron
School of Computer Science and Department of Electrical and Computer Engineering
Carnegie Mellon University
Institute for eCommerce, Summer 1999
Today's lecture
• Speeding up servers (30 min)
• Break (10 min)
• Caching (50 min)
Scalable servers
• Question: How do we provide services that scale well with the number of requests?
• Goals for high-volume sites:
  • Minimize request latency (response time) for our clients.
    • We want to avoid the dreaded hourglass.
  • Minimize the amount of traffic over our high-speed Internet connection.
    • Many ISPs charge monthly rates based on actual bandwidth usage.
    • Recall MCI T1 and T3 pricing from Lecture 6 (programming the Internet).
Scalability approaches
• Speed up the servers.
• Use multiple processes to handle requests:
  • concurrent servers
  • pre-forking servers (not covered here)
• Use multiple computers to process requests:
  • clustering (not covered here), e.g., the Microsoft cluster, the HotBot cluster
  • distributed servers (not covered here): use DNS to send requests to geographically distributed mirror sites
• Move the content closer to the clients:
  • caching, a crucial concept (and big business)
Iterative servers
• An iterative server processes one connection at a time.

  # simple iterative server
  while (1) {
      connfd = accept();
      <process request using socket connfd>
  }
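As a concrete illustration, here is a minimal runnable sketch of an iterative server in Python (the port number and echo-style request handling are assumptions for illustration, not from the slides):

  # minimal iterative server sketch (Python)
  import socket

  listenfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  listenfd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  listenfd.bind(("", 8080))             # illustrative port
  listenfd.listen(5)

  while True:
      connfd, addr = listenfd.accept()  # blocks until a client connects
      data = connfd.recv(4096)          # handle one request at a time;
      connfd.sendall(data)              # other clients wait in the backlog
      connfd.close()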
Iterative servers
• Step 1: Server accepts a connection request from client A.
[Figure: clients A and B; server with listen socket; A's connection request arrives.]

Iterative servers
• Step 2: Server processes the request from client A (using A's connection socket).
• Client B initiates a connection request and waits for the server to accept it.
[Figure: server processing A's request on A's connection socket; B's connection request pending at the listen socket.]

Iterative servers
• Step 3: Server finishes processing client A's request, then accepts the connection request from client B.
[Figure: A's connection closed; B's pending request accepted at the listen socket.]

Iterative servers
• Step 4: Server processes the request from client B (using B's connection socket).
[Figure: server processing B's request on B's connection socket.]

Iterative servers
• Step 5: Server finishes processing client B's request and waits for a connection request from the next client.
[Figure: server idle at the listen socket.]
Iterative servers
• Pros:
  • Simple.
  • Minimizes latency of short requests.
• Cons:
  • Higher latencies and lower throughput (requests/sec) for large requests:
    • large response bodies that must be served off disk
    • long-running CGI scripts that access disk files or databases
  • No other requests can be served while this work is being done.
Concurrent servers
• A concurrent server accepts connections in a parent process and creates children to process the requests.

  # concurrent server
  while (1) {
      connfd = accept();
      pid = fork();
      if (pid == 0) {    # child process
          <process request in child process>
          exit();
      }
  }
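A runnable sketch of the same loop in Python (port and echo handling are illustrative; requires a Unix-like OS for fork):

  # concurrent (forking) server sketch (Python)
  import os, signal, socket

  signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap children (no zombies)

  listenfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  listenfd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  listenfd.bind(("", 8080))
  listenfd.listen(5)

  while True:
      connfd, addr = listenfd.accept()
      if os.fork() == 0:        # child: handle the request, then exit
          listenfd.close()
          connfd.sendall(connfd.recv(4096))
          connfd.close()
          os._exit(0)
      connfd.close()            # parent: drop its copy, go accept again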
Concurrent servers
• Step 1: Server accepts a connection request from client A.
[Figure: clients A and B; server with listen socket; A's connection request arrives.]

Concurrent servers
• Step 2: Server creates a child process to handle A's request.
• Client B initiates a connection request and waits for the server to accept it.
[Figure: child A processing the request on A's connection socket; B's connection request pending at the listen socket.]

Concurrent servers
• Step 3: Server accepts the connection request from client B and creates another child process to handle it.
[Figure: child A and child B each processing a request on its client's connection socket.]

Concurrent servers
• Step 4: The server's children finish processing the requests from clients A and B.
• Server waits for the next connection request.
[Figure: server idle at the listen socket.]
Concurrent servers
• Pros:
  • Can decrease latency for large requests (less time waiting for a connection request to be accepted).
  • Can increase overall server throughput (requests/sec).
• Cons:
  • More complex.
  • Potential for "fork bombs": must limit the number of active children.
• Variant: pre-forking servers
  • Create a fixed number of children ahead of time to handle requests, as in the sketch below.
  • Approach used by Apache.
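A sketch of the pre-forking idea in Python (child count, port, and echo handling are assumptions; this shows the general technique, not Apache's actual code):

  # pre-forking server sketch (Python, Unix-like OS)
  import os, socket

  NCHILDREN = 8                  # fixed pool created ahead of time

  listenfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  listenfd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  listenfd.bind(("", 8080))
  listenfd.listen(64)

  for _ in range(NCHILDREN):
      if os.fork() == 0:         # each child inherits the listen socket
          while True:            # and loops accepting connections
              connfd, addr = listenfd.accept()
              connfd.sendall(connfd.recv(4096))
              connfd.close()

  for _ in range(NCHILDREN):     # parent just waits on its children
      os.wait()

Because the children already exist, no fork happens on the request path, and the pool size itself bounds the number of active children.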
Today's lecture
• Speeding up servers (30 min)
• Break (10 min)
• Caching (50 min)
Caching
• A cache is a storage area (either in memory or on disk) that holds copies of frequently accessed data.
• Typically smaller than the primary storage area, but cheaper and faster to access.
• A fundamental computer systems technique:
  • memory systems (register files; L1, L2, and L3 caches)
  • file and database systems (OS I/O buffers)
  • Internet systems (Web caches)
Accessing objects from a cache
• Initially, the remote storage holds objects (data items) and associated keys that identify the objects.
• The program wants to fetch A, then B, then A again.
[Figure: program; "nearby" cache storage (empty); "far away" remote storage holding A,key(A); B,key(B); C,key(C).]

Accessing objects from a cache
• The program fetches object A by passing key(A) to the cache.
[Figure: key(A) flows from the program to the cache.]

Accessing objects from a cache
• Object A is not in the cache, so the cache retrieves a copy of A from remote storage and returns it to the program.
• The cache keeps a copy of A and its key in its storage area.
[Figure: cache fetches A from remote storage; cache now holds A,key(A); A returned to the program.]

Accessing objects from a cache
• The program fetches object B; again the cache must go to remote storage.
• The cache keeps a copy of B and its key in its storage area.
[Figure: cache now holds A,key(A) and B,key(B); B returned to the program.]

Accessing objects from a cache
• The program fetches object A again.
• The cache returns the object directly, without accessing remote storage.
[Figure: A served from the cache; remote storage untouched.]
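The whole walkthrough can be captured in a few lines of Python; remote_fetch() is a hypothetical stand-in for the slow "far away" storage:

  # cache-access sketch (Python)
  cache = {}                        # the "nearby" storage

  def remote_fetch(key):
      return "object-for-" + key   # placeholder for a slow remote access

  def fetch(key):
      if key in cache:              # hit: serve from nearby storage
          return cache[key]
      obj = remote_fetch(key)       # miss: go to remote storage...
      cache[key] = obj              # ...and keep a copy for next time
      return obj

  fetch("key(A)")   # miss: fetched from remote storage, copy cached
  fetch("key(B)")   # miss: fetched and cached
  fetch("key(A)")   # hit: served directly from the cache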
Impact of caching
• Reduces latency of cached objects: e.g., we can access object A from nearby storage rather than faraway storage.
• Reduces load on the remote storage area: remote storage never sees requests satisfied by the cache.
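As a rough model (an illustration, not from the slides): if a fraction h of requests hit the cache, then

  average latency ≈ h × t_cache + (1 − h) × t_remote

For example, with t_cache = 10 ms, t_remote = 500 ms, and a hit rate of h = 0.8, the average is 0.8 × 10 + 0.2 × 500 = 108 ms, versus 500 ms with no cache.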
Web caching
• Objects are web pages; keys are URLs.
• Browser caches
  • One client, multiple servers.
• Proxy caches
  • Multiple clients, multiple servers.
  • Examples: Squid, Harvest, Apache, every major vendor.
  • Based on proxy servers.
• Reverse proxy caches
  • Multiple clients, one server.
  • Example: Inktomi TrafficServer.
  • Based on proxy servers.
  • Also called inverse caches or http accelerators.
Browser caches
• One client, many servers.
• Caches objects that come from one client's requests to many servers.
• Browser caches are located on the disk and in the memory of the local machine.
[Figure: browser on the client machine with an on-disk browser cache, talking to multiple servers.]
Proxy servers
• A proxy server (or proxy) acts as an intermediary between clients and origin servers:
  • it acts as a server to the client...
  • ...and as a client to the origin server.
[Figure: client sends a request to the proxy; the proxy sends a forwarded request to the origin server; the response returns through the proxy as a forwarded response.]
Applications of proxy servers
• Allow users on secure networks behind firewalls to access Internet services.
• This was the original motivating application (Luotonen and Altis, 1994).
[Figure: clients on a secure subnet inside the firewall reach a proxy server on the firewall machine via HTTP; the proxy speaks HTTP to remote HTTP servers, FTP to remote FTP servers, NNTP to remote news servers, and SMTP to remote mail servers.]
A proxied HTTP transaction
[Figure: the client sends the proxy a request with a complete URL:
    GET http://server.com/index.html HTTP/1.0
the proxy forwards a request with a partial URL to the origin server:
    GET /index.html HTTP/1.0
and the origin server's response (HTTP/1.0 200 OK) flows back to the proxy, which forwards it to the client.]
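The proxy's rewriting of the request line can be sketched in Python (a real proxy also forwards headers and the response body; only the URL transformation is shown):

  # request-line rewriting sketch (Python)
  from urllib.parse import urlsplit

  def rewrite_request_line(line):
      method, url, version = line.split()   # e.g. "GET http://... HTTP/1.0"
      parts = urlsplit(url)
      path = parts.path or "/"
      if parts.query:
          path += "?" + parts.query
      return "%s %s %s" % (method, path, version), parts.netloc

  line, host = rewrite_request_line("GET http://server.com/index.html HTTP/1.0")
  # line == "GET /index.html HTTP/1.0", host == "server.com"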
Motivation for proxy servers
• Lightweight client machines: only need to support HTTP.
• Local machines without DNS can still use the Internet: each needs to know only the IP address of the proxy.
• Centralized logging of all HTTP requests.
• Centralized filtering and access control of client requests.
• Centralized authentication site.
• Facilitates caching.
Web proxy caches
• Multiple clients, multiple servers.
• Typically installed on the border between an organization's internal network and the Internet.
• Motivation:
  • decrease request latency for the clients on the organization's network
  • decrease traffic on the organization's connection to the Internet
• The organization can be on the scale of a university department, company, ISP, or country.
• Especially important for overseas sites, because most content is in the US and the connections between most countries and the US are slow.
Web proxy caches
• The requested object is stored locally (along with any cache-relevant response headers) in the proxy cache for later use.
• A later request for it can come from the same client or a different client.
[Figure: client request goes to the proxy server, which forwards it to the origin server; the response is returned to the client and a copy is stored in the proxy cache.]
Web proxy caches
• If an up-to-date copy of the object is in the cache, it can be served locally from the proxy cache.
[Figure: client request answered directly from the proxy cache; the origin server is not contacted.]
Web proxy caches
• How does a proxy know that its local copy is up to date?
• An object is considered fresh (i.e., able to be sent to a client without first checking with the origin server) if:
  • its origin server served it with an expiration-controlling header (Expires or Cache-Control response headers) and the current time precedes the expiration time, or
  • the proxy cache has seen the object recently and it was modified relatively long ago (Last-Modified response header).
A sketch of this freshness test follows.
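This sketch is in Python and assumes the cached entry keeps its response headers; only the Expires case is shown, and Cache-Control: max-age would be handled similarly:

  # freshness-test sketch (Python)
  import email.utils, time

  def is_fresh(headers):
      expires = headers.get("Expires")
      if expires:
          exp = email.utils.parsedate_to_datetime(expires).timestamp()
          return time.time() < exp   # fresh while before the expiration time
      return False                   # no expiration info: must validate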
Web proxy caches
• Objects that are not known to be fresh must be validated by asking the origin server when the object was last modified there:
  • Last-Modified response header returned by the HEAD method.
  • Compare with the Last-Modified header of the cached copy.
  • The ETag is recomputed each time the object changes.
• After validation, if the object is stale it must be fetched from the origin server; otherwise, it is served directly from the proxy cache.
A sketch of this validation follows.
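The sketch in Python follows the HEAD-based description above (modern caches more often send a conditional GET with If-Modified-Since, which saves a round trip when the object is stale):

  # validation sketch (Python)
  import http.client

  def validate(host, path, cached_last_modified):
      conn = http.client.HTTPConnection(host)
      conn.request("HEAD", path)
      resp = conn.getresponse()
      current = resp.getheader("Last-Modified")
      conn.close()
      return current == cached_last_modified  # True: cached copy still valid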
Reverse proxy caches
• Many clients, one server.
• Reverse proxy caches are proxy caches located near high-volume servers.
• Also called reverse proxies or httpd accelerators.
• Goal is to reduce server load.
[Figure: many clients; at the remote server site, a reverse proxy cache sits in front of the server and its large, expensive, high-latency database.]
Case study: The Akamai FreeFlow cache
[Figure: the Akamai FreeFlow cache in operation.]
Source: akamai.com
Case study: The Akamai FreeFlow cache
• The Akamai server is chosen dynamically to maximize some performance metric based on existing network conditions. [droh]
• Web pages on this server were previously "Akamaized" offline by the "FreeFlow Launcher" tool. [droh]
The Akamai network (Aug 1999)
  Number of Servers:                   900
  Number of Networks:                  25
  Number of Countries:                 15
  Total Capacity:                      12 Gigabits/second
  Average Load (at peak utilization):  500 Megabits/second
  Average Network Utilization:         5%
  Average Hits Per Day:                1/4 Billion
Source: akamai.com
Example Akamaized page

  <html>
  <head>
  <title>Dave O'Hallaron's Home Page</title>
  </head>
  <body bgcolor="ffffff">
  <img src="http://a516.g.akamaitech.net/7/516/1/3b3a087c3d0ea3/www.cs.cmu.edu/~droh/droh.quake.gif"
       align="left">
  <p><font size=-1>
  <strong>David O'Hallaron</strong><br>
  Associate Professor,
  <A HREF="http://www.cs.cmu.edu/csd">
  ...

• Questions:
  • How are requests to Akamai servers authenticated?
  • How can a dynamic network be monitored accurately?