PlanetLab Applications and Federation
Aki NAKAO (UTokyo / NICT) nakao@iii.u-tokyo.ac.jp, aki.nakao@nict.go.jp
Kiyohide NAKAUCHI (NICT) nakauchi@nict.go.jp
23rd ITRC Symposium, 2008/05/16
(1) PlanetLab Applications
• Over 400 nodes
• CoMon: monitoring slice-level statistics
http://summer.cs.princeton.edu/status/index_slice.html
2008/05/16 K.NAKAUCHI, NICT
Typical Long-running Applications
• CDN: CoDeeN [Princeton], Coral [NYU], Coweb [Cornell]
• Large-file transfer: CoBlitz, CoDeploy [Princeton], SplitStream [Rice]
• Routing overlays: i3 [UCB], Pluto [Princeton]
• DHT / P2P middleware: Bamboo [UCB], Meridian [Cornell], Overlay Weaver [UWaseda]
• Brokerage service: Sirius [UGA]
• Measurement, monitoring: ScriptRoute [Maryland, UWash], S-cube [HPLab], CoMon, CoTop, PlanetFlow [Princeton]
• DNS, anomaly detection, streaming, multicast, anycast, …
• In addition, there are many short-term research projects on PlanetLab
CoDeeN: Academic Content Distribution Network
• Improves web performance & reliability
• 100+ proxy servers on PlanetLab
• Running 24/7 since June 2003
• Roughly 3-4 million reqs/day aggregate
• One of the highest-traffic projects on PlanetLab
How Does CoDeeN Work?
• Each CoDeeN proxy is a forward proxy, reverse proxy, & redirector
[Figure: requests and responses flowing through CoDeeN proxies, showing cache hits and cache misses]
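CoDeeN's actual redirection policy also accounts for proxy load and liveness; as a simplified illustration only, a redirector can pick the reverse proxy for a request deterministically by hashing the URL against each peer (highest-random-weight, or rendezvous, hashing), so the same URL keeps mapping to the proxy most likely to hold it in cache. The proxy names below are hypothetical.

```python
import hashlib

def pick_reverse_proxy(url, proxies):
    """Pick the peer proxy for a URL via highest-random-weight
    (rendezvous) hashing: score every (url, proxy) pair with a hash
    and take the maximum. The same URL always maps to the same proxy,
    and removing an unrelated proxy does not remap this URL."""
    def score(proxy):
        digest = hashlib.sha1((url + "|" + proxy).encode()).hexdigest()
        return int(digest, 16)
    return max(proxies, key=score)

# Hypothetical peer list for illustration
proxies = ["proxy1.example.org", "proxy2.example.org", "proxy3.example.org"]
choice = pick_reverse_proxy("http://example.com/page.html", proxies)
```

A useful property of rendezvous hashing for a CDN: when a non-chosen proxy fails and is dropped from the list, URLs mapped to the surviving proxies keep their assignments, so their caches stay warm.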
CoBlitz: Scalable Large-file CDN
• Faster than BitTorrent by 55-86% (~500%)
• CDN = Redirector + Reverse Proxy
• Only the reverse proxies (CDN nodes) cache the chunks!
• Chunks are fetched from the origin server with HTTP range queries (coblitz.codeen.org)
[Figure: clients and agents retrieving chunks 1-5 through DNS and CDN nodes, which fetch missing chunks from the origin server]
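The slide's range-query idea can be sketched as follows: split a large file into fixed-size chunks and issue one HTTP Range request per chunk, so each chunk can be fetched (and cached) by a different CDN node. The chunk size here is illustrative, not CoBlitz's actual value.

```python
def chunk_ranges(file_size, chunk_size):
    """Split a file of file_size bytes into inclusive byte intervals
    of at most chunk_size bytes each, one per HTTP Range request."""
    return [(start, min(start + chunk_size, file_size) - 1)
            for start in range(0, file_size, chunk_size)]

def range_header(start, end):
    # HTTP/1.1 Range header for one chunk (bytes unit, inclusive ends)
    return {"Range": "bytes=%d-%d" % (start, end)}

# A 1 MiB file split into 256 KiB chunks -> four range requests
ranges = chunk_ranges(1_048_576, 262_144)
headers = [range_header(s, e) for s, e in ranges]
```

Each header would accompany an ordinary GET; a server that honors ranges replies `206 Partial Content`, and the client (or CoBlitz's agent) reassembles the chunks in order.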
How Does PlanetLab Behave?
Node Availability [Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Live Slices
50% of nodes have 5-10 live slices [Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Bandwidth
Median bandwidth, both in and out: 500-1000 Kbps [Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
(2) Extending PlanetLab
• Federation
  • Distributed operation/management
• Private PlanetLab
  • Private use, original configuration
  • CORE [UTokyo, NICT]
• Hardware support (C/D separation)
  • Custom hardware: Intel IXP, NetFPGA, 10GbE
  • E.g. Supercharging PlanetLab [UWash]
• Edge diversity
  • Wireless technology integration [OneLab]
  • E.g. HSDPA, WiFi, Bluetooth, ZigBee, 3GPP LTE
• GENI, VINI
Federation
• Split PlanetLab
  • Several regional PlanetLabs, each with its own policy
• Interconnection
  • Share node resources among PlanetLabs
[Figure: PlanetLab 1-3, each with a PLC and nodes running a VMM, node manager, and VMs VM1..VMn, trading resources across the Internet]
PlanetLab-EU Starts Federation
• Emerging European portion of the public PlanetLab
• 33 nodes today (migrated from PlanetLab)
• Supported by the OneLab project (UPMC, INRIA)
• Control center in Paris
• PlanetLab-JP will also join the federation
MyPLC for Your Own PlanetLab
• PlanetLab in a box
• Complete, portable PlanetLab Central (PLC) package
• Easy to install and administer
• Isolates all code in a chroot jail (/plc)
• Single configuration file
[Figure: PLC software stack on Linux: Apache, OpenSSL, PostgreSQL, pl_db, plc_www, plc_api, bootmanager, bootcd_v3 under /plc]
Resource Management
• Resource sharing policy
  • By contributing 2 nodes to any one PlanetLab, a site can create 10 slices that span the federated PlanetLab
• RSpec
  • General, extensible resource description
  • Portals present a higher-level front-end view of resources
  • Portals will use RSpec as part of the back-end
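The sharing policy above can be written down as a quota rule. The slide only states the 2-nodes-for-10-slices case; the linear extrapolation below is an assumption for illustration, not the stated policy.

```python
SLICES_PER_NODE = 5  # assumed ratio derived from the stated case: 2 nodes -> 10 slices

def slice_quota(nodes_contributed):
    """Slices a site may create across the federated PlanetLab,
    assuming the 2-nodes-for-10-slices policy scales linearly."""
    return SLICES_PER_NODE * nodes_contributed

quota = slice_quota(2)  # the case stated on the slide
```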
RSpec Example
<component type="virtual access point"
           requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <processing requestID="cpu1">
    <power units="CyclesPerSecond">
      <value>1000000000</value>
    </power>
    <function>Full</function>
  </processing>
  <storage requestID="disk1">
    <capacity units="GB">
      <value>10</value>
    </capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <medium>FreqShared</medium>
    <mediumtype>broadcast</mediumtype>
    <wireless:protocol>802.11g</wireless:protocol>
    <wireless:frequency type="802.11channel">16</wireless:frequency>
  </wireless:communication>
</component>
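A portal back-end would consume such a fragment with an ordinary XML parser. The sketch below embeds a trimmed copy of the slide's fragment and extracts a few fields; the `xmlns:wireless` URI is an assumption added so the parser accepts the `wireless:` prefix, which the slide leaves undeclared.

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the slide's RSpec fragment. The xmlns:wireless
# declaration (and its URI) is an assumption for parseability only.
RSPEC = """\
<component xmlns:wireless="http://example.org/rspec/wireless"
           type="virtual access point" requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <storage requestID="disk1">
    <capacity units="GB"><value>10</value></capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <wireless:protocol>802.11g</wireless:protocol>
  </wireless:communication>
</component>
"""

W = "{http://example.org/rspec/wireless}"  # expanded namespace prefix

root = ET.fromstring(RSPEC)
node_id = root.get("physicalID")
disk_gb = int(root.find("./storage/capacity/value").text)
protocol = root.find("./%scommunication/%sprotocol" % (W, W)).text
```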
Summary
• PlanetLab applications
  • 800+ network services running in their own slices
  • Long-running infrastructure services
  • Measurements with the monitoring tools reveal extensive use of PlanetLab
• Federation
  • Distributed operation and management
  • Future PlanetLab = current PL + PL-EU + PL-JP + …
Monitoring Tools
• CoTop: monitors which slices are consuming resources on each node, like "top"
• CoMon: monitors statistics for PlanetLab at both the node level and the slice level
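The "top"-style view above amounts to sorting per-slice resource rows by consumption. The sketch below does that over a made-up CSV report; the column names and slice names are assumptions for illustration, not CoMon or CoTop's actual export format.

```python
import csv
import io

# Hypothetical per-node monitor output: one row per slice.
# Columns and slice names are invented for this example.
SAMPLE = """\
slice,cpu_pct,mem_mb
princeton_codeen,34.5,210
nict_corelab,12.0,95
ucb_bamboo,7.5,60
"""

def top_slices(report, n=2):
    """Return the n slices consuming the most CPU, like 'top'."""
    rows = list(csv.DictReader(io.StringIO(report)))
    rows.sort(key=lambda r: float(r["cpu_pct"]), reverse=True)
    return [r["slice"] for r in rows[:n]]

busiest = top_slices(SAMPLE)
```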
OpenDHT/OpenHash
• Publicly accessible distributed hash table (DHT) service
• Simple put-get interface, accessible over both Sun RPC and XML-RPC
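The put-get interface can be sketched with Python's standard XML-RPC machinery. The server below is a single-node stand-in for a DHT gateway, kept in-process so the example is self-contained; the method signatures and return values are assumptions for illustration, not OpenDHT's actual gateway protocol.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Toy stand-in for a DHT gateway: one node's table behind XML-RPC.
# A real DHT would hash the key to route to the responsible node.
table = {}

def put(key, value):
    table.setdefault(key, []).append(value)
    return 0  # 0 = success, mirroring an RPC status code

def get(key):
    return table.get(key, [])

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(put)
server.register_function(get)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://127.0.0.1:%d" % port)
client.put("planetlab", "slice-123")
values = client.get("planetlab")   # values stored under the key
missing = client.get("no-such-key")  # absent key -> empty list
server.shutdown()
```

Note that get returns a list: a DHT-style put-get store typically allows multiple values under one key rather than overwriting.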