PlanetLab: An Overview Fred Kuhns fredk@arl.wustl.edu Applied Research Laboratory Department of Computer Science and Engineering Washington University in St. Louis
References
• Papers at http://www.planet-lab.org/biblio
  • Operating System Support for Planetary-Scale Network Services. A. Bavier, M. Bowman, B. Chun, D. Culler, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink, and M. Wawrzoniak. NSDI '04, May 2004
  • Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors. Stephen Soltesz, Herbert Pötzl, Marc Fiuczynski, Andy Bavier, and Larry Peterson. EuroSys '07, March 2007
• Design documents at http://www.planet-lab.org/doc/pdn
  • Towards a Comprehensive PlanetLab Architecture. L. Peterson, A. Bavier, M. Fiuczynski, S. Muir, T. Roscoe. PDN-05-030, June 2005
  • PlanetLab Architecture: An Overview. Larry Peterson, Steve Muir, Timothy Roscoe, and Aaron Klingaman. May 2006
• Presentations at http://www.planet-lab.org/presentations
  • PlanetLab: Catalyzing Network Innovation. October 2007
  • PlanetLab: Evolution vs Intelligent Design in Global Network Infrastructure. Updated May 2007
  • PlanetLab: A Strategy for Continually Reinventing the Internet. Presentation at OSTP
PlanetLab
• 800+ machines spanning 400 sites and 40 countries
• Supports distributed virtualization
  • each of 600+ network services runs in its own slice
[Figure: slices pl_red, pl_blue, and pl_green, each spanning multiple hosts]
Typical PlanetLab Node
[Figure: per-slice VMs (pl_blue, pl_green, pl_red, and service VMs) alongside the Node Manager (similar to a console VM), all running on a virtualization layer; beneath it the hardware: MMU, memory, disk interface, network interface, and other hardware]
Node Manager
• rspec and rcap: an rcap maps to an (rspec, vmid) pair
• rcap: a 128-bit random value
• rspec: defines the base class for resource specifications (??)
• rspec subtype representing a slice: a set of (name, value) pairs
  • vm_type: linux-vserver (?)
  • cpu_share: proportional-share CPU scheduler; currently all slices get an equal share
  • mem_limit: per-node upper bound on memory; currently no bound is specified
  • disk_quota: per-node upper bound on disk space used; current value ??
  • base_rate: default 1 Kbps
  • burst_rate: default none, so a slice can burst at the full available rate
  • sustained_rate: default 1.5 Mbps; limits the sustained sending rate over an extended period, currently 24 hours (24 h · 60 min/h · 60 s/min · 1.5 Mb/s · 1 B/8 b ≈ 16.2 GB/day). After exceeding this, the VM is limited to 1.5 Mbps
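The slide above lists the slice rspec fields informally; a minimal sketch below models them as a Python data structure and reproduces the daily-budget arithmetic. The SliceRspec class name, field types, and helper method are illustrative assumptions, not the actual Node Manager code; the default values mirror those quoted on the slide.

```python
# Illustrative sketch only: the slice rspec fields from the slide as a dataclass.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SliceRspec:
    vm_type: str = "linux-vserver"      # virtualization technology
    cpu_share: int = 1                  # proportional share; all slices equal today
    mem_limit: Optional[int] = None     # bytes; currently no per-node bound
    disk_quota: Optional[int] = None    # bytes; per-node upper bound on disk
    base_rate: int = 1_000              # bps, default 1 Kbps
    burst_rate: Optional[int] = None    # bps; None => burst at full available rate
    sustained_rate: int = 1_500_000     # bps, default 1.5 Mbps over a 24-hour window

    def daily_byte_budget(self) -> float:
        """Bytes per day allowed at the sustained rate (~16.2 GB/day at 1.5 Mbps)."""
        return 24 * 60 * 60 * self.sustained_rate / 8

print(f"{SliceRspec().daily_byte_budget() / 1e9:.1f} GB/day")  # -> 16.2 GB/day
```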
Node Manager interface
• has 5 operations:
  rcap = create_root_pool(rspec, slice_name)
  rcap = get_rcap()
  rspec = get_rspec(rcap)
  rcap = split_pool(rcap, rspec)
  bind(rcap, slice_name)
• The trusted owner VM calls create_root_pool() to allocate a pool for each trusted slice authority, plus any other VMs (i.e., services) the owner wishes to create.
• A slice retrieves its rcap by calling get_rcap().
• The bind() call creates a VM if it does not already exist; otherwise the resources are added to the VM's current allocation.
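A minimal sketch of how these five operations might be exercised end to end, assuming a hypothetical node_manager module that exposes the calls named above. Only the call names and the flow (root pool, split, bind) come from the slide; the module, slice names, and rspec dictionaries are assumptions for illustration.

```python
# Hypothetical walk-through of the five Node Manager operations listed above.
# The node_manager module and the rspec dictionaries are illustrative.
import node_manager as nm

# 1. The trusted owner VM allocates a root resource pool for a slice authority.
root_rcap = nm.create_root_pool({"cpu_share": 1, "sustained_rate": 1_500_000},
                                "pl_authority")

# 2. A slice can later retrieve the rcap naming its own pool.
my_rcap = nm.get_rcap()

# 3. Inspect the resources behind an rcap.
rspec = nm.get_rspec(root_rcap)

# 4. Carve a sub-pool out of the root pool for one slice.
sub_rcap = nm.split_pool(root_rcap, {"cpu_share": 1, "base_rate": 1_000})

# 5. Bind the sub-pool to a slice: creates the VM if it does not exist,
#    otherwise adds the resources to the VM's current allocation.
nm.bind(sub_rcap, "pl_red")
```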