A blueprint for introducing disruptive technology into the Internet by L. Peterson (Princeton), T. Anderson (UW), D. Culler and T. Roscoe (Intel Research, Berkeley). HotNets-I (Infrastructure panel), 2002. Presenter: Shobana Padmanabhan. Discussion leader: Michael Wilson. Mar 3, 2005, CS7702 Research seminar
Outline • Introduction • Architecture • PlanetLab • Conclusion
Introduction • Until recently, applications ran only at the edges of the network; recently: • Widely-distributed applications make their own forwarding decisions • Network-embedded storage, peer-to-peer file sharing, content distribution networks, robust routing overlays, scalable object location, scalable event propagation • Network elements (layer-7 switches & transparent caches) do application-specific processing • But the Internet itself is ossified. Figures courtesy planet-lab.org
Overlay network • A virtual network of nodes & logical links, built atop the existing network, to implement a new service • Provides an opportunity for innovation, since no changes to the Internet are required • Eventually, the 'weight' of these overlays will cause a new architecture to emerge • Similar to the Internet itself (originally an overlay) driving the evolution of the underlying telephony network • This paper speculates on what this new architecture will look like. Figure courtesy planet-lab.org
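Below is a minimal, illustrative sketch (not PlanetLab code) of the overlay idea described above: each node keeps logical links to chosen neighbors and forwards application messages over ordinary UDP/IP, so new forwarding behavior can be tried without changing the underlying Internet. The class name, the JSON message format, and the toy flooding rule are assumptions made for illustration.

```python
# Toy overlay node: logical links on top of plain UDP/IP (illustrative only).
import socket
import json

class OverlayNode:
    def __init__(self, host: str, port: int):
        self.addr = (host, port)
        self.neighbors = []            # logical links: list of (host, port)
        self.seen = set()              # message ids already forwarded
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(self.addr)

    def add_link(self, host: str, port: int):
        """Create a logical link; the underlay only sees ordinary UDP packets."""
        self.neighbors.append((host, port))

    def send(self, msg_id: str, payload: str):
        """Flood a message to all overlay neighbors (toy forwarding rule)."""
        self.seen.add(msg_id)
        packet = json.dumps({"id": msg_id, "data": payload}).encode()
        for nbr in self.neighbors:
            self.sock.sendto(packet, nbr)

    def handle_one(self):
        """Receive one packet and re-flood it if it has not been seen yet."""
        data, _ = self.sock.recvfrom(65535)
        msg = json.loads(data.decode())
        if msg["id"] not in self.seen:
            self.send(msg["id"], msg["data"])
```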
Outline • Introduction • Architecture • PlanetLab • Conclusion
Goals • Short-term: Support experimentation with new services • Testbed • Experiment at scale (1000s of sites) • Experiment under real-world conditions • diverse bandwidth/latency/loss • wide-spread geographic coverage • Potential for real workloads & users • Low cost of entry • Medium-term: Support continuously running services that serve clients • Deployment platform • supports seamless migration of an application from prototype, through multiple design iterations, to a service that continues to evolve • Long-term: Microcosm for the next-generation Internet!
Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces
Slice-ability • A slice is a horizontal cut of global resources across nodes • Processing, memory, storage, .. • Each service runs in a slice • A service is a set of programs delivering some functionality • Node slicing must • be secure • use a resource control mechanism • be scalable • Slice ~ a network of VMs. Figure courtesy planet-lab.org
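As a rough illustration of "slice ~ a network of VMs", the sketch below models a slice as one per-node VM allocation for a service. The class and field names, hostnames, and numbers are assumptions for illustration, not the real PlanetLab interface.

```python
# A slice as a horizontal cut of resources: one VM allocation per node.
from dataclasses import dataclass, field

@dataclass
class VMAllocation:
    cpu_share: float      # fraction of the node's CPU granted to this service
    memory_mb: int
    disk_mb: int

@dataclass
class Slice:
    name: str
    # node hostname -> resources granted to this service on that node
    allocations: dict = field(default_factory=dict)

    def add_node(self, hostname: str, alloc: VMAllocation):
        self.allocations[hostname] = alloc

    def nodes(self):
        return list(self.allocations)

# A service "runs in a slice": one VM's worth of resources on each node.
cdn = Slice("example-cdn")
cdn.add_node("node1.site-a.example", VMAllocation(0.1, 256, 1024))
cdn.add_node("node2.site-b.example", VMAllocation(0.1, 256, 1024))
print(cdn.nodes())
```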
Virtual Machine • A VM is the environment in which a program implementing some aspect of the service runs • Each VM runs on a single node & uses some of the node's resources • A VM must • make it no harder to write programs • protect programs from other VMs • share resources fairly • restrict traffic generation • Multiple VMs run on each node, with • a VMM (Virtual Machine Monitor) arbitrating the node's resources
Virtual Machine Monitor (VMM) • A kernel-mode driver running in the host operating system • Has access to the physical processor & manages resources between the host OS & the VMs • Prevents malicious or poorly designed applications running in a virtual server from requesting excessive hardware resources from the host OS • With virtualization, there are now two interfaces: • an API for typical services & • a protection interface used by the VMM • The VMM used here is Linux VServer.
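The sketch below is only an analogy for the VMM's arbitration role (the real VMM here is Linux VServer, not this code): VMs are admitted only if the node can back them, and each VM's later requests are checked against the share it was granted, which is where the "protection interface" idea shows up. All names and numbers are assumptions.

```python
# Toy model of a VMM arbitrating one node's resources between VMs.
class ToyVMM:
    def __init__(self, total_cpu: float = 1.0, total_mem_mb: int = 4096):
        self.free_cpu = total_cpu
        self.free_mem = total_mem_mb
        self.vms = {}   # vm name -> dict of granted and used resources

    def create_vm(self, name: str, cpu: float, mem_mb: int) -> bool:
        """Admit a VM only if the node still has resources to back it."""
        if cpu > self.free_cpu or mem_mb > self.free_mem:
            return False                      # protect the host and other VMs
        self.free_cpu -= cpu
        self.free_mem -= mem_mb
        self.vms[name] = {"cpu": cpu, "mem_mb": mem_mb, "used_mem_mb": 0}
        return True

    def request_memory(self, name: str, mem_mb: int) -> bool:
        """A VM may use memory only up to the share it was granted."""
        vm = self.vms[name]
        if vm["used_mem_mb"] + mem_mb > vm["mem_mb"]:
            return False                      # request exceeds the VM's cap
        vm["used_mem_mb"] += mem_mb
        return True
```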
A node.. Figure courtesy planet-lab.org
Across nodes (i.e., across the network) • Node manager (one per node; part of the VMM) • When service managers present valid tickets • Allocates resources, creates VMs & returns a lease • Resource monitor (one per node) • Tracks the node's available resources (using the VM interface) • Tells agents about available resources • Agents (centralized) • Collect resource monitor reports • Advertise tickets • Issue tickets to resource brokers • Resource broker (per service) • Obtains tickets from agents on behalf of service managers • Service managers (per service) • Obtain tickets from the broker • Redeem tickets with node managers to create VMs • Start the service • (A sketch of this flow follows the figure below.)
Obtaining a Slice (animated figure, courtesy Jason Waddle's presentation material): resource monitors report available resources to the agent; the agent issues tickets; the broker obtains tickets on behalf of the service manager; the service manager redeems the tickets with the node managers, which create the VMs that make up the slice.
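Here is a minimal end-to-end sketch of the slice-creation flow just shown (resource monitor reports -> agent tickets -> broker -> service manager -> node manager leases). All class names, fields, and the ticket/lease formats are assumptions made for illustration; PlanetLab's real interfaces differ.

```python
# Toy version of the ticket/lease protocol for obtaining a slice.
import uuid

class Agent:
    """Collects resource-monitor reports and issues tickets against them."""
    def __init__(self):
        self.available = {}                 # node -> free CPU share

    def report(self, node: str, free_cpu: float):
        self.available[node] = free_cpu     # from a per-node resource monitor

    def issue_ticket(self, node: str, cpu: float):
        if self.available.get(node, 0.0) >= cpu:
            self.available[node] -= cpu
            return {"node": node, "cpu": cpu, "id": str(uuid.uuid4())}
        return None

class Broker:
    """Obtains tickets from the agent on behalf of service managers."""
    def __init__(self, agent: Agent):
        self.agent = agent

    def acquire(self, nodes, cpu: float):
        tickets = [self.agent.issue_ticket(n, cpu) for n in nodes]
        return [t for t in tickets if t is not None]

class NodeManager:
    """Validates a ticket, creates the VM, and returns a lease."""
    def __init__(self, node: str):
        self.node = node

    def redeem(self, ticket):
        if ticket["node"] != self.node:
            raise ValueError("ticket not valid for this node")
        return {"lease_for": self.node, "cpu": ticket["cpu"]}

# Service manager's side of the protocol:
agent = Agent()
for node in ("nodeA", "nodeB"):
    agent.report(node, free_cpu=0.5)
tickets = Broker(agent).acquire(["nodeA", "nodeB"], cpu=0.1)
leases = [NodeManager(t["node"]).redeem(t) for t in tickets]
print(leases)   # one VM (lease) per node => a slice
```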
Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces
Distributed control of resources • Because of the testbed's dual role, there are two types of users • Researchers • Likely to dictate how services are deployed & which node properties they require • Node owners/clients • Likely to restrict what services run on their nodes & how resources are allocated to them • Control is decentralized between the two • A central authority provides credentials to service developers • Each node independently grants or denies a request, based on local policy
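The sketch below illustrates this split (all names are assumptions, and HMAC merely stands in for a real credential scheme): a central authority signs researcher credentials, while each node owner applies a purely local policy when deciding whether to admit a service.

```python
# Toy decentralized control: central credentials, local admission policy.
import hmac, hashlib

AUTHORITY_KEY = b"central-authority-secret"     # held by the central authority

def issue_credential(researcher: str) -> str:
    return hmac.new(AUTHORITY_KEY, researcher.encode(), hashlib.sha256).hexdigest()

class NodePolicy:
    """Each node owner configures this independently."""
    def __init__(self, banned_services=(), max_cpu=0.2):
        self.banned = set(banned_services)
        self.max_cpu = max_cpu

    def admit(self, researcher: str, credential: str,
              service: str, cpu: float) -> bool:
        # 1. Check the credential issued by the central authority.
        expected = issue_credential(researcher)
        if not hmac.compare_digest(expected, credential):
            return False
        # 2. Apply the owner's local policy.
        return service not in self.banned and cpu <= self.max_cpu

policy = NodePolicy(banned_services={"port-scanner"}, max_cpu=0.1)
cred = issue_credential("alice@example.edu")
print(policy.admit("alice@example.edu", cred, "example-cdn", cpu=0.05))   # True
```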
Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces
Unbundled (overlay) management • Independent sub-services, each running in its own slice • discover the set of nodes in the overlay & learn their capabilities • monitor the health & instrument the behavior of these nodes • establish a default topology • manage user accounts & credentials • keep the software running on each node up-to-date & • extract tracing & debugging info from a running node • Some are part of the core system (e.g., user accounts) • Single, agreed-upon version • Others can have alternatives, with a default that is replaceable over time • Unbundling requires appropriate interfaces, e.g., hooks in the VMM interface to get the status of each node's resources • Sub-services may depend on each other, e.g., a resource discovery service may depend on the node monitoring service
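To make the unbundling idea concrete, here is a small sketch (names and return values are assumptions): each management function sits behind a narrow interface, so the default implementation can be replaced over time, and one sub-service (discovery) can depend on another (monitoring) only through that interface.

```python
# Toy "unbundled management": replaceable sub-services behind small interfaces.
from abc import ABC, abstractmethod

class NodeMonitor(ABC):
    """Interface hiding the per-node status hook (e.g., into the VMM)."""
    @abstractmethod
    def node_status(self, node: str) -> dict: ...

class DefaultMonitor(NodeMonitor):
    def node_status(self, node: str) -> dict:
        # A real implementation would query the node; this returns canned data.
        return {"node": node, "up": True, "free_cpu": 0.4}

class ResourceDiscovery:
    """A sub-service that depends on the monitoring sub-service."""
    def __init__(self, monitor: NodeMonitor, nodes):
        self.monitor = monitor
        self.nodes = nodes

    def usable_nodes(self, min_cpu: float):
        statuses = (self.monitor.node_status(n) for n in self.nodes)
        return [s["node"] for s in statuses
                if s["up"] and s["free_cpu"] >= min_cpu]

discovery = ResourceDiscovery(DefaultMonitor(), ["nodeA", "nodeB"])
print(discovery.usable_nodes(min_cpu=0.25))
```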
Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces
Application-centric interfaces • Promote application development by letting applications run continuously (deployment platform) • Problem: it is difficult to simultaneously create the testbed & use it for writing applications • The API should remain largely unchanged while the underlying implementation changes • If an alternative API emerges, new applications must be written to it, but the original should be maintained for legacy applications
Outline • Introduction • Architecture • PlanetLab • Conclusion
PlanetLab Phases of evolution • Seed phase: 100 centrally managed machines; pure testbed (no client workload) • Researchers as clients: scale the testbed to 1000 sites; continuously running services • Attracting real clients: non-researchers as clients
PlanetLab today Services • Berkeley's OceanStore – RAID-like storage distributed over the Internet • Intel's Netbait – Detect & track worms globally • UW's ScriptRoute – Internet measurement tool • Princeton's CoDeeN – Open content distribution network Courtesy planet-lab.org
Related work • Internet2 (Abilene backbone) • Closed commercial routers -> no new functionality in the middle of the network • Emulab • Not a deployment platform • Grid (Globus) • Glues together a modest number of large computing assets with high-bandwidth pipes, but • PlanetLab emphasizes scaling lower-bandwidth applications across a wider collection of nodes • ABONE (from active networks) • Focuses on extensibility of the forwarding function, but • PlanetLab is more inclusive, i.e., applications throughout the network, including those with a storage component • XBONE • Supports IP-in-IP tunneling, with a GUI for specific overlay configurations • Alternative: package as a desktop application, e.g., Napster, KaZaA • Needs to be immediately & widely popular • Difficult to modify the system once deployed unless there are compelling applications • Not secure • KaZaA exposed all files on the local system
Conclusion • An open, global network testbed for pioneering novel planetary-scale services (deployment). • A model for introducing innovations (a service-oriented network architecture) into the Internet through overlays. • Whether a single winner emerges & gets subsumed into the Internet, or services continue to define their own routing, remains a subject of speculation.
References • PlanetLab: An Overlay Testbed for Broad-Coverage Services, B. Chun et al., Jan 2003
Overlay construction problems • Dynamic changes in group membership – Members may join and leave dynamically – Members may die • Dynamic changes in network conditions and topology – Delay between members may vary over time due to congestion, routing changes • Knowledge of network conditions is member specific – Each member must determine network conditions for itself
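The last point above (each member must determine network conditions for itself) is illustrated by the sketch below: a member times small probes to its peers and keeps a smoothed delay estimate that adapts as conditions change. The probe method, smoothing factor, and class names are assumptions for illustration.

```python
# Toy per-member measurement of network conditions to overlay peers.
import socket, time

def probe_delay(host: str, port: int, timeout: float = 1.0):
    """Rough RTT estimate from a TCP connect; returns None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

class MemberView:
    """One member's view of its peers: liveness plus a smoothed delay estimate."""
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                # weight given to the newest sample
        self.delays = {}                  # (host, port) -> smoothed RTT seconds

    def update(self, peer):
        rtt = probe_delay(*peer)
        if rtt is None:
            self.delays.pop(peer, None)   # treat the peer as departed or dead
            return
        old = self.delays.get(peer, rtt)
        self.delays[peer] = (1 - self.alpha) * old + self.alpha * rtt

# Usage: call view.update(("peer.example", 80)) periodically for each peer.
```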