Achieving Deployment Scalability Micah Beck Director & Associate Professor Logistical Computing and Internetworking Lab Computer Science Department University of Tennessee Cambridge University 9/27/2004
End-to-End Arguments • Principles that guided the design of the Internet • They took the form of a number of maxims, rules of thumb and “end-to-end arguments” • “Never implement a complex service at an intermediate node if the complexity can be moved to the endpoint” • “Never use the resources of intermediate nodes to implement services that are not ubiquitously required by applications” • The penalty for violating End-to-End: “It won’t scale!”
Deployment Scalability • Scalability has many dimensions • number of users • attached devices • traffic volume • networks & admin domains subsumed • economic rationality • all while preserving correct operation & acceptable performance • We use the term deployment scalability to mean the ability to scale freely across boundaries • primary example: the Internet
A Scalability Principle • When designing a service that is implemented using a shared infrastructure, there is an inherent tradeoff between • the service’s scalability on that infrastructure, • and both • its specialization to a particular class of applications, and • the value or scarcity of the resource consumed to provide atomic services.
Applying the Scalability Principle to Server Resources • Consider intermediate nodes with server resources • Moore’s law makes server resources cheap • Sharing server resources may be a necessary cost of scalability • Networks share massive bandwidth resources • Scalability requirements, according to our scalability principle • Services must be simple, generic, unreliable • Any single service request must not give away resources that are too scarce or too valuable. • Can we fashion services that will scale?
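A minimal sketch of what these constraints might look like in code (all names here, such as MAX_BYTES_PER_REQUEST and RequestRefused, are invented for illustration and are not part of any real API): a shared service stays deployment-scalable by capping what any single request can claim and by reserving the right to refuse work rather than promise reliability.

# Hypothetical sketch of a "weak" shared service: simple, generic, best-effort.
# Per-request caps keep any one client from claiming scarce or valuable resources.

MAX_BYTES_PER_REQUEST = 1 << 20       # cap on storage granted per request (illustrative)
MAX_CPU_SECONDS_PER_REQUEST = 1.0     # cap on processing time per request (illustrative)

class RequestRefused(Exception):
    """Best-effort semantics: the service may refuse work instead of guaranteeing it."""

def admit(request_bytes: int, request_cpu_seconds: float, load: float) -> None:
    """Admission check: enforce per-request caps and shed load when busy."""
    if request_bytes > MAX_BYTES_PER_REQUEST:
        raise RequestRefused("request exceeds per-request storage cap")
    if request_cpu_seconds > MAX_CPU_SECONDS_PER_REQUEST:
        raise RequestRefused("request exceeds per-request CPU cap")
    if load > 0.9:
        # No reliability promise: under load the node simply drops work,
        # and the endpoint is responsible for retrying or rerouting.
        raise RequestRefused("node busy; retry at the endpoint's discretion")

Stronger guarantees, where an application needs them, are composed at the endpoints on top of this weak service, in keeping with the end-to-end arguments above.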
Internet Backplane Protocol (IBP): Scalable Storage and Computation • Storage and processing implemented at IBP intermediate nodes called “depots” • IP limits the maximum size & time-to-live (in hops) of a datagram • IBP limits the maximum size and duration (in secs) of a storage allocation • IBP limits the maximum size of an operation’s parameters and its processing time (in CPU secs) • IP datagram delivery is best-effort • IBP storage and computation are best-effort • IP routers are untrusted • IBP depots are untrusted
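To make the analogy concrete, here is a toy, hypothetical depot; it is not the actual IBP client library or wire protocol. The Depot class, its limits, and the capability handles are assumptions used only to illustrate size-capped, time-limited, best-effort allocations on an untrusted node.

import time
import uuid

class Depot:
    """Toy IBP-style depot: allocations are size-capped, time-limited soft state."""

    MAX_ALLOC_BYTES = 10 * 1024 * 1024   # illustrative cap on allocation size
    MAX_DURATION_SECS = 3600             # illustrative cap on allocation lifetime

    def __init__(self):
        self._allocations = {}   # capability -> (expiry time, byte buffer)

    def allocate(self, size: int, duration: int) -> str:
        """Best-effort allocation; the depot may refuse, and never exceeds its caps."""
        if size > self.MAX_ALLOC_BYTES or duration > self.MAX_DURATION_SECS:
            raise ValueError("request exceeds depot limits")
        cap = uuid.uuid4().hex   # opaque handle standing in for a capability (illustrative)
        self._allocations[cap] = (time.time() + duration, bytearray(size))
        return cap

    def store(self, cap: str, offset: int, data: bytes) -> None:
        _, buf = self._lookup(cap)
        if offset + len(data) > len(buf):
            raise ValueError("write exceeds allocated size")
        buf[offset:offset + len(data)] = data

    def load(self, cap: str, offset: int, length: int) -> bytes:
        _, buf = self._lookup(cap)
        return bytes(buf[offset:offset + length])

    def _lookup(self, cap: str):
        entry = self._allocations.get(cap)
        if entry is None or entry[0] < time.time():
            # Expired or reclaimed: like a dropped datagram, the loss is the
            # endpoint's problem, not the depot's.
            self._allocations.pop(cap, None)
            raise KeyError("allocation expired or unknown")
        return entry

An endpoint that needs durability or reliability builds it on top of such weak allocations, for example by refreshing leases or replicating data across depots, so the depot itself stays simple and generic.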
The Transnet: A Unified View of Data Transfer, Storage and Processing “… memory locations … are just wires turned sideways in time” Dan Hillis, 1982, Why Computer Science is No Good
Storage as “Movement” Through Time [Diagram: a message M held at node A at successive points in time]
Message as “Process” [Diagram: a message evolving M → M’ → M’’ as it passes through nodes A, B, C]
Processing as “Movement” Through Space of Values [Diagram: node A moving data through successive values M1, M2, M3]
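One way to read the three diagrams above is that transfer, storage, and processing all share the same operation shape: data “moves” through space, through time, or through a space of values. The sketch below only illustrates that unification; the Move class, the example pipeline, and the toy transform are invented here rather than taken from the Transnet design.

# Hypothetical unification sketch: transfer, storage, and processing as one
# kind of best-effort "move" applied to a buffer of data.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Move:
    """A single Transnet-style operation on a buffer of data."""
    description: str
    apply: Callable[[bytes], bytes]

# Transfer: move bytes through space, unchanged (where they reappear is handled elsewhere).
transfer = Move("move through space", lambda data: data)

# Storage: move bytes through time, unchanged ("a wire turned sideways in time").
store = Move("move through time", lambda data: data)

# Processing: move bytes through a space of values (toy transform for illustration).
process = Move("move through value space", lambda data: data[::2])

def run(pipeline, data: bytes) -> bytes:
    """Apply a sequence of moves end to end; each step is best-effort."""
    for move in pipeline:
        data = move.apply(data)
    return data

# M -> M' -> M'' as the data passes through intermediate nodes A, B, C.
result = run([store, transfer, process], b"some message M")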
Compute Nodes as Intermediate Nodes • Minimize state and trust • Distinguish hard and soft state clearly and manage the latter aggressively for resource sharing • Define primitive common interfaces at a low level in order to enable heterogeneity in OS services • Expose state of long-running “processes” to manipulation by “endpoints”
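A small sketch of these four guidelines, with every name invented for illustration: the node keeps only soft state for running “processes”, reclaims it aggressively on a TTL, and exposes that state so endpoints can checkpoint, migrate, or restart work themselves (hard state lives at the endpoints, not at the intermediate node).

import time

class ComputeNode:
    """Toy intermediate compute node: minimal trust, aggressively managed soft state."""

    SOFT_STATE_TTL = 300   # seconds a "process" record survives without refresh (illustrative)

    def __init__(self):
        self._processes = {}   # pid -> {"state": ..., "expires": ...}

    def start(self, pid: str, initial_state: bytes) -> None:
        # All process state held here is soft: the node may discard it once the TTL lapses.
        self._processes[pid] = {
            "state": initial_state,
            "expires": time.time() + self.SOFT_STATE_TTL,
        }

    def checkpoint(self, pid: str) -> bytes:
        # Expose the process state to the endpoint, which is responsible for
        # keeping it durably; hard state belongs at the endpoints.
        return self._processes[pid]["state"]

    def restore(self, pid: str, state: bytes) -> None:
        # An endpoint can re-inject checkpointed state here or at another node.
        self.start(pid, state)

    def reclaim(self) -> None:
        # Aggressive soft-state management: drop anything past its TTL so the
        # node's resources stay available for sharing.
        now = time.time()
        for pid in [p for p, rec in self._processes.items() if rec["expires"] < now]:
            del self._processes[pid]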