Cloud Computing and Virtualization with Globus Oakland, May 2008 Kate Keahey (keahey@mcs.anl.gov) Tim Freeman (tfreeman@mcs.anl.gov) University of Chicago Argonne National Laboratory
Cloud Computing Tutorial Hands-on • To participate in the hands-on part of the tutorial, send your PKI X509 subject line to nimbus@mcs.anl.gov • The first 10 requests will be given access to the nimbus cloud • Hurry!
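If you are unsure what your subject line looks like, the minimal Python sketch below prints it; it assumes the third-party cryptography package and a usercert.pem in the current directory (grid-cert-info -subject or openssl x509 -subject -noout -in usercert.pem give the same answer).

```python
# Print the subject line of a PKI X509 certificate.
# Assumes the third-party "cryptography" package is installed and
# that the certificate lives in ./usercert.pem (both assumptions).
from cryptography import x509

with open("usercert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# RFC 4514 form, e.g. "CN=Jane Doe,OU=People,DC=example,DC=org"
print(cert.subject.rfc4514_string())
```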
Overview • Motivation • The Workspace Ecosystem: Abstractions and Background • The Workspace Deployment Tools • Managing Resources with Virtual Workspaces • Appliance management and contextualization • Virtual Cluster Management with Workspace Tools • Application Example: the STAR experiment • Cloud Computing • Run on the cloud: hands-on tutorial
A Good Workspace is Hard to Find • 1) Configuration: finding an environment tailored to my application • 2) Leasing: negotiating a resource allocation tailored to my needs
Consumer’s Perspective: Quality of Life • Real-life applications are complex • STAR example: developed over more than 10 years by more than 100 scientists; comprises ~2M lines of C++ and Fortran code • … and they require complex, customized environments • Rely heavily on the right combination of compiler versions and available libraries • Environment validation • To ensure reproducibility and result uniformity across environments
Consumer’s Perspective: Quality of Service • There is life beyond submitting batch jobs • Resource leases rather than job submission • Control of resources • Explicit SLAs: different sites offer different quality of service • Satisfying peak demand • Experiment season, paper deadlines, etc.
Provider’s Perspective • Providing resources is easy; providing environments is hard • User comment: “I have 512 nodes I cannot use” ;-) • Fine-tuning environments for different communities is expensive • Evaluating, installing, and maintaining software packages, etc. • Reconciling conflicts • Coordinating update schedules for different communities is a nightmare • It may be hard to justify configuring/dedicating resources that are only needed 1% of the time, even if that 1% is very important for one of your users
Virtual Workspaces • Deployment point of view: a dynamically provisioned environment • A complete (software) environment, as required by a community or application, provisioned on demand • Resource allocation: provision the resources the workspace needs (CPUs, memory, disk, bandwidth, availability), allowing dynamic renegotiation to reflect changing requirements and conditions • Packaging point of view: appliances/virtual appliances • A complete environment that can be packaged in various formats
Workspace Implementations • Traditional tools: base environment (discovery), automated configuration, typically long deployment time, isolation and performance isolation via the runtime environment • Virtual machines: complete environment, contextualization, short deployment time, very good isolation, some runtime performance impact • Paper: “Virtual Workspaces: Achieving Quality of Service and Quality of Life in the Grid”
The Virtues of Virtualization • Bring your environment with you • Excellent enforcement and isolation • Fast to deploy; enables short-term leasing • Suspend/resume, migration • A performance impact, but acceptable for most modern hypervisors • [Diagram: apps running in VMs (guest OSes: Linux, NetBSD, Windows) atop a Virtual Machine Monitor (VMM)/hypervisor over hardware; examples: Xen, VMWare, UML, KVM, Parallels, etc.]
Creating a Virtual Cluster that Works (sketched below) • Obtain a lease on a raw resource • Deploy virtual machines onto the resource • Put the VMs in context (contextualization layer) • Result: a functioning virtual ensemble
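A minimal, self-contained sketch of those three steps; every name below is hypothetical and merely stands in for calls to the workspace service and contextualization layer.

```python
# A toy rendering of lease -> deploy -> contextualize.
# All types and functions are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Lease:                     # step 1: a raw-resource allocation
    slots: list

@dataclass
class VM:                        # step 2: a deployed virtual machine
    slot: str
    image: str
    context: dict = field(default_factory=dict)

def acquire_lease(n_nodes: int) -> Lease:
    return Lease(slots=[f"node-{i}" for i in range(n_nodes)])

def deploy_vm(slot: str, image: str) -> VM:
    return VM(slot=slot, image=image)

def make_virtual_cluster(image: str, n_nodes: int) -> list:
    lease = acquire_lease(n_nodes)                    # 1. lease raw resources
    vms = [deploy_vm(s, image) for s in lease.slots]  # 2. deploy the VMs
    shared = {"peers": [vm.slot for vm in vms]}       # 3. contextualize: share
    for vm in vms:                                    #    a common context so
        vm.context.update(shared)                     #    the VMs know each other
    return vms

cluster = make_virtual_cluster("star-worker.img", n_nodes=3)
```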
The Workspace Ecosystem • Appliance deployment: mapping environments onto leased computing resources • Coordinating the creation of virtual resources • A mix of open source software and proprietary tools communicating via common protocols • Resource providers: Grid providers (TeraGrid, OSG, etc.) and commercial providers (EC2, Sun, etc.) • Appliance providers: off-the-shelf environment bundles, certified/endorsed for safety; leverage appliance software; commercial and open “marketplaces”
Roles and Responsibilities • Division of labor • Resource providers provide resources • Virtual organizations provide appliances • Middleware that maps appliances onto resources • Appliance management software • Appliance creation, maintenance, validation, etc. • Not an appliance provider • Shifting the work around • Into the hands of the parties most motivated and qualified to do it
Virtual Workspaces: Vital Stats • Virtual Workspace software allows an authorized client to dynamically deploy and manage workspaces • Virtual Workspace Service (VWS), workspace control, Context Broker • Currently implements workspaces as Xen VMs • KVM coming this summer • Also, contextualization layer • Globus incubator project • Started ~2003, first release in September 2005 • Current release 1.3.1 (March ‘08) • Download it from: • http://workspace.globus.org
Using Workspaces (Deployment) • The client sends the VWS Service (shapes illustrated below): • Workspace metadata: pointer to the image, logistics information • Deployment request: CPU, memory, node count, etc. • [Diagram: VWS Service deploying a workspace onto a pool of nodes]
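The shapes of the two inputs, sketched as Python dictionaries; the real client submits XML documents, and these field names are invented for illustration.

```python
# Illustrative request shapes only; the real service consumes XML
# metadata documents, so every field name here is a stand-in.

workspace_metadata = {
    "image": "http://example.org/images/star-worker.img",  # pointer to the image
    "logistics": {"network": "public", "needs_dhcp": True}, # logistics information
}

deployment_request = {
    "node_count": 4,          # how many identical workspaces
    "cpus_per_node": 2,
    "memory_mb": 1024,
    "duration_minutes": 120,  # the lease term being requested
}
```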
Using Workspaces (Interaction) • The workspace service publishes information on each workspace as standard WSRF Resource Properties • Users can query those properties to find out information about their workspace, e.g., what IP the workspace was bound to (see the sketch below) • Users can interact directly with their workspaces the same way they would with a physical machine • [Diagram: VWS Service and pool nodes forming the Trusted Computing Base (TCB)]
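The query pattern in Python; the client object and property names are hypothetical stand-ins for the underlying WSRF calls.

```python
# Poll a workspace's Resource Properties until it is running, then
# read off the IP it was bound to. "client" and the property names
# are invented; a real client speaks WSRF to the service.
import time

def wait_for_ip(client, workspace_epr, poll_seconds=5):
    while True:
        props = client.get_resource_properties(workspace_epr)
        if props["state"] == "Running":
            return props["ip"]        # the address you can now ssh to
        time.sleep(poll_seconds)
```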
Workspace Service (What Sits Inside) • Workspace WSRF front-end that allows clients to deploy and manage virtual workspaces • Workspace back-end: a resource manager for a pool of physical nodes; deploys and manages workspaces on the nodes • Each node must have a VMM (Xen) installed, as well as the workspace control program that manages individual nodes • Contextualization creates a common context for a virtual cluster • [Diagram: VWS Service and pool nodes forming the Trusted Computing Base (TCB)]
Workspace Service Components • GT4 WSRF front-end • Leverages GT core and services: notifications, security, etc. • Roughly follows the OGF WS-Agreement provisioning model • Lease descriptions • Publishes available lease terms • Workspace Service back-end • Works with multiple resource managers • Workspace Control for on-the-node functions • Contextualization • Puts the virtual appliance in its deployment context
Workspace Back-Ends • Default resource manager (basic slot fitting) • “Datacenter technology” equivalent • Used for the OSG Edge Services project • Challenge: finding Xen-enabled resources • Amazon Elastic Compute Cloud (EC2): software similar to the Workspace Service (but no virtual clusters, contextualization, fine-grained allocations, etc.) • Solution: develop a back-end to EC2 (see the sketch below) • Map Grid credential admission onto the EC2 charging model • Address contextualization needs • Challenge: integrating VMs into current provisioning models • Solution: gliding in VMs with the Workspace Pilot
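A sketch of the back-end split; the Backend interface and class names are hypothetical and are not the service’s real internal API.

```python
# Two interchangeable back-ends behind one hypothetical interface:
# the default slot-fitting resource manager, and an EC2 forwarder.

class Backend:
    def deploy(self, image: str, node_count: int):
        raise NotImplementedError

class SlotFittingBackend(Backend):
    """Default resource manager: fit requests into free slots."""
    def __init__(self, free_slots):
        self.free_slots = list(free_slots)

    def deploy(self, image, node_count):
        if len(self.free_slots) < node_count:
            raise RuntimeError("not enough free slots")
        claimed = self.free_slots[:node_count]
        self.free_slots = self.free_slots[node_count:]
        return [f"{image} on {slot}" for slot in claimed]

class EC2Backend(Backend):
    """Forward the request to EC2 under a mapped account."""
    def deploy(self, image, node_count):
        # A real back-end would map the caller's Grid credential to
        # an EC2 account and launch instances (RunInstances) from an
        # AMI corresponding to `image`; the handles below are fake.
        return [f"i-{n:08x}" for n in range(node_count)]

pool = SlotFittingBackend(["node-1", "node-2", "node-3"])
print(pool.deploy("star-worker.img", 2))
```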
The Workspace Pilot • Challenge: how can I provide a “cloud” using virtualization without disrupting the current operation of my cluster? • Flying low: the Workspace Pilot • Integrates with popular LRMs (such as PBS) • Implements “best effort” leases • Glidein approach: submits a “pilot” program that claims a resource slot • Includes administrator tools • Deployments • Testing @ U of Victoria (ATLAS), Ian Gable and collaborators • Being adapted for use by the ATLAS experiment @ CERN, Omer Khalid • TeraPort (small partition)
Workspace Pilot in Action • Level 1: the LRM (e.g., PBS) provisions raw resources (Xen dom0 nodes) • Level 2: the VWS provisions VMs on the claimed nodes • [Diagram: VWS and LRM/PBS layered over Xen dom0 nodes running VMs]
The Pilot Program (sketched below) • Uses the Xen balloon driver to reduce/restore domain0 memory so that guest domains (VMs) can be deployed • Secure VM deployment • The pilot requires sudo privileges and thus can be used only with the site administrator’s approval • The workspace service provides fine-grained authorization for all requests • Signal handling • SIGTERM: the pilot has exceeded its allotted time • Notifies the VWS, allowing it to clean up • After a configurable time period, takes matters into its own hands • Default policy: one VM per physical node • Available for download • Workspace Release 1.3.1: • http://workspace.globus.org/downloads/index.html
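A compressed sketch of the pilot lifecycle; the helper names and the "workspace-vm" domain are invented, while "xm mem-set" and "xm destroy" are real Xen commands (which is why the pilot needs sudo).

```python
# Toy pilot: balloon down dom0, then handle SIGTERM by notifying
# the VWS and arming a hard deadline for forced cleanup.
import os
import signal
import subprocess
import threading

GRACE_SECONDS = 60                       # configurable cleanup window

def notify_vws(message: str):
    print("VWS notification:", message)  # stand-in for the real protocol

def shrink_dom0(target_mb: int):
    # Balloon down domain0 memory so guest domains can be deployed.
    subprocess.run(["sudo", "xm", "mem-set", "Domain-0", str(target_mb)],
                   check=True)

def force_cleanup():
    # Grace period expired: take matters into our own hands.
    subprocess.run(["sudo", "xm", "destroy", "workspace-vm"], check=False)
    os._exit(1)

def on_sigterm(signum, frame):
    # The LRM says our slot is up: let the VWS clean up first.
    notify_vws("pilot exceeded its allotted time")
    threading.Timer(GRACE_SECONDS, force_cleanup).start()

signal.signal(signal.SIGTERM, on_sigterm)
shrink_dom0(512)                         # leave dom0 512 MB, free the rest
```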
Workspace Control • VM control: starting, stopping, pausing, etc. • Integrating a VM into the network • Assigning MAC addresses and IP addresses (see the sketch below) • DHCP delivery tool • Building up a trusted (non-spoofable) networking layer • VM image propagation • Image management and reconstruction • Creating blank partitions, sharing partitions • Contextualization information management • Talks to the workspace service via ssh • Can be used as a standalone component
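A sketch of the trusted-networking step: give each VM a MAC chosen by the deployer and publish a matching DHCP host entry, so the address a VM receives cannot be spoofed by a neighbor. The MAC prefix and the dhcpd stanza format are illustrative only, not what workspace control actually emits.

```python
# Generate a deployer-chosen MAC and a matching DHCP host entry.
import random

def make_mac(prefix=(0x02, 0x16, 0x3E)):    # locally administered prefix
    tail = [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join(f"{b:02x}" for b in (*prefix, *tail))

def dhcp_host_entry(name: str, mac: str, ip: str) -> str:
    # dhcpd-style stanza binding the MAC to a fixed address.
    return (f"host {name} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  fixed-address {ip};\n"
            f"}}\n")

mac = make_mac()
print(dhcp_host_entry("workspace-1", mac, "192.168.0.101"))
```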
Workspace Back-Ends: Long-Term Solutions • Leasing model with explicit terms • Semantically rich leases: advance reservations, urgent leases, renegotiable leases, etc. • Cost-effective lease semantics
Where Do Appliances Come From? • An appliance provider (a user, a VO, a Grid…) publishes an appliance description to marketplaces (VMWare, EC2, Workspace…) • Good… but: maintenance? ease of use? formats?
Where Do Appliances Come From? (Better) • Appliance management software (OSFarm, rPath, …) builds the appliance description into multiple formats (Xen, CDROM, VMware) for the marketplaces
Deploying Appliances • Appliances need to be “portable” so that they can be reused in many contexts • Making the appliance context-aware: • Other appliances • Site-specific information (e.g., a DNS server) • User/group/VO/Grid-specific information (e.g., public keys, host certs, gridmapfiles, etc.) • Security issues • Whom do I trust to provide legitimate context information? • How do I make sure that appliances adhere to my site policies? • [Diagram: VMs deployed across a site and a Virtual Organization]
Where Do Appliances Come From? (Better Still) • Appliance management software (OSFarm, rPath, CohesiveFT…) builds the appliance description into multiple formats (Xen, CDROM, VMware), together with appliance assertions and appliance contextualization
Make Me a Working Cluster • You’ve got some VMs and you’ve deployed them… now what? • What network are they connected to? Do they actually represent something useful (like a ready-to-use OSG cluster)? Do the VMs know about each other? Can they share some disk? How do they integrate into the site storage/account system? Do they have host certificates? And a gridmapfile? And all the other things that will integrate them into my VO? • Challenge: what is a virtual cluster? • A more complex virtual machine: networking, shared storage, etc. • Available at the same time and sharing a common context • Example: an OSG cluster • Solutions (a possible shape is sketched below) • Ensemble management • Exporting and sharing a common context • Sophisticated networking configurations • Paper: “Virtual Clusters for Grid Communities”, CCGrid 2006
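One hypothetical shape for an ensemble request: a head node and N workers deployed together, sharing a common context. The field names are invented; the real service uses per-workspace XML metadata.

```python
# Illustrative ensemble description for an OSG-style virtual cluster.
ensemble = {
    "name": "osg-cluster",
    "members": [
        {"role": "headnode", "image": "osg-head.img",   "count": 1},
        {"role": "worker",   "image": "osg-worker.img", "count": 8},
    ],
    # All members become available together and share this context:
    "common_context": ["nfs exports", "pbs node list",
                       "host certificates", "gridmapfile"],
}
```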
Contextualization • Challenge: putting a VM in the deployment context of the Grid, the site, and other VMs • Assigning and sharing IP addresses, name resolution, application-level configuration, etc. • Solution: management of a common context (a toy broker follows) • Configuration-dependent: provides & requires • Common understanding between the image “vendor” and the deployer • Mechanisms for securely delivering the required information to images across different implementations • [Diagram: contextualization agents exchanging IP, hostname, and public key through a common context] • Paper: “A Scalable Approach To Deploying And Managing Appliances”, TeraGrid Conference 2007
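A toy broker illustrating the provides & requires idea; every name below is for illustration only. Each appliance’s context agent registers what it provides and asks for what it requires, and the broker answers once a provider has appeared.

```python
# Minimal provides/requires matching, in the spirit of the
# contextualization model above (names invented).

class ContextBroker:
    def __init__(self):
        self.offers = {}                     # role -> value

    def provide(self, role: str, value: str):
        self.offers[role] = value            # e.g. "nfs_server" -> its IP

    def require(self, role: str):
        return self.offers.get(role)         # None until someone provides it

broker = ContextBroker()
broker.provide("nfs_server", "10.0.0.5")     # head node's context agent
broker.provide("pbs_server", "10.0.0.5")
print(broker.require("nfs_server"))          # a worker's agent -> 10.0.0.5
```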
Contextualizing Appliances • [Diagram: the Context Broker mediating between the appliance provider (disk image with an appliance context template and an appliance context agent), the appliance deployer (generic context and application-specific context agents), and the resource provider, to produce the final appliance context]
Application Example: Virtualization with the STAR experiment
Virtual Workspaces for STAR • STAR image configuration • A virtual cluster composed of one OSG head node and multiple STAR worker nodes • Using the workspace service over EC2 to provision resources • Allocations of up to 100 nodes • Dynamically contextualized into an out-of-the-box cluster
Virtual Workspaces for STAR • Deployment stages: • Create an “ensemble” defining the virtual cluster • Deploy the virtual machines • Contextualize to provide an out-of-the-box cluster • Contextualization: • Cluster applications: NFS & PBS • Grid information: gridmapfile and host certificates • Runs • Using VWS on the Nimbus cloud for small node allocations (VWS + default back-end + Context Broker) • Using VWS with the EC2 back-end for allocations of ~100 nodes (VWS + EC2 back-end + Context Broker)
With thanks to Jerome Lauret and Doug Olson of the STAR project • [Animation: running-job counts rising over time at BNL, WSU, Fermi, and PDSF, with jobs provisioned via VWS/EC2 reaching 300 running jobs, alongside job-completion and file-recovery indicators]
[Figure: accelerated display of workflow job state at NERSC PDSF, EC2 (via the Workspace Service), and WSU; Y = job number, X = job state]
The Workspace Cloud Client • We took the workspace client and made it easy to use • Narrowed-down functionality (sketched below) • A wrapper on top of the workspace client • Allows scientists to lease VMs, roughly following Amazon’s EC2 model (simplified) • PKI X509 credentials and quotas instead of payment • The goal is to restore/evolve this functionality as user requests come in • Saving VMs, network configurations • In the future: richer leases, etc. • “Cloudkit” coming out in the next release, due soon
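A sketch of that narrowed-down, EC2-like surface rendered as Python calls; the operation names are hypothetical, since the actual client is a command-line wrapper over the workspace client.

```python
# The handful of operations a scientist needs, under an X509
# credential and a quota rather than payment. All names invented.

class CloudClient:
    def __init__(self, credential: str):
        self.credential = credential       # PKI X509 proxy, not a credit card

    def transfer(self, image_path: str):
        """Upload a VM image to the cloud's image repository."""

    def run(self, image_name: str, hours: int) -> str:
        """Lease a node, boot the image, return a handle."""
        return "vm-001"

    def status(self, handle: str) -> str:
        """Report lease state (and the IP once it is running)."""
        return "Running"

    def terminate(self, handle: str):
        """End the lease early and release the node."""

client = CloudClient(credential="/tmp/x509up_u1000")
client.transfer("star-worker.img")
handle = client.run("star-worker.img", hours=2)
print(client.status(handle))
```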
Nimbus @ University of Chicago • Objectives • Make it easy for the scientific community to experiment with this mode of resource provisioning • Learn about the requirements of scientific projects and evolve the infrastructure • Features, SLAs, security and sharing concerns, etc. • Vital stats • Deployed on 16 nodes of the TeraPort cluster @ UC • Powered by the workspace set of tools • Image management handled via GridFTP • Made available mid-March ’08 • http://workspace.globus.org/clouds/ • To obtain access, mail nimbus@mcs.anl.gov • Available to scientific and educational projects, open source testing, etc.
Science Clouds • A group of clouds making resources available “on the Nimbus model” • Nimbus, Stratus @ UFL (Mauricio Tsugawa), FZK in Germany (almost done, Lizhe Wang); others have expressed interest • EC2 • Some differences in setup and policies • UFL requires private networks (using OpenVPN); currently you’d use the same credential for the cloud and for the virtual private network • EC2 requires payment • Cloud federation • Moving an app from a hardware platform to a cloud is relatively hard • Need an image, learn a new paradigm, etc. • Moving between clouds is relatively easy • … if you have “rough consensus” on interfaces, image formats, etc.
Related Projects • Portal development (Josh Boverhof, LBNL) • Workspace KVM backend (Michael Fenn, Clemson University) • Integration with the Nebula project (University of Madrid)