
The RESERVOIR Model and Architecture for Open Federated Cloud Computing

The RESERVOIR Model and Architecture for Open Federated Cloud Computing. B. Rochwerger, D. Breitgand, E. Levy, A. Galis, K. Nagin, I. Llorente, R. Montero, Y. Wolfsthal, E. Elmroth, J. Cáceres, M. Ben-Yehuda, W. Emmerich, and F. Galán. IBM Journal of Research and Development, Vol. 53, No. 4 (2009).


Presentation Transcript


  1. The RESERVOIR Model and Architecture for Open Federated Cloud Computing B. Rochwerger, D. Breitgand, E. Levy, A. Galis, K. Nagin, I. Llorente, R. Montero, Y. Wolfsthal, E. Elmroth, J. Cáceres, M. Ben-Yehuda, W. Emmerich, F. Galán IBM Journal of Research and Development, Vol. 53, No. 4 (2009)

  2. Outline • Introduction • Issues in current Clouds • Use case • SAP System • Primary requirements • The RESERVOIR Model for Federated Cloud Computing • Summary

  3. Introduction • In the Web 2.0 era, companies grow from inception to massive scale at incredible rates. • EX. MySpace acquired 20 million users in two years; YouTube reached the same number of users in just 16 months. • To leverage this potential rate of growth, companies must properly address critical business decisions related to their service delivery infrastructure. • One possible solution: cloud computing.

  4. Introduction • With cloud computing, • companies can lease infrastructure resources on demand from a virtually unlimited pool. • The “pay as you go” billing model charges only for the resources actually used, per unit time. • A business can optimize its IT investment and improve availability and scalability.

  5. Issues in current Clouds(1/2) • Inherently limited scalability of single-provider clouds • Most infrastructure cloud providers today claim infinite scalability. • However, in the long term, scalability problems can be expected to worsen as • cloud providers serve an increasing number of online services, and • services are accessed by massive numbers of global users at all times.

  6. Issues in current Clouds(2/2) • Lack of interoperability among cloud providers • Contemporary cloud technologies have not been designed with interoperability in mind. • Providers cannot scale through business partnerships with other cloud providers. • This prevents small and medium cloud infrastructure providers from entering the cloud provisioning market. • For customers, it stifles competition and locks them in to a single vendor.

  7. The RESERVOIR Approach • The RESERVOIR vision is to • enable on-demand delivery of IT services at competitive costs, • without requiring a large capital investment in infrastructure. • The model is inspired by a strong desire to liken the delivery of IT services to the delivery of common utilities • EX. dynamically acquiring electricity from a neighboring facility to meet a spike in demand. • That is, no single provider serves all customers at all times. • Next-generation cloud computing infrastructure should support a model in which multiple independent providers cooperate seamlessly to maximize their benefit. • Informally, we refer to the infrastructure that supports this paradigm as a federated cloud.

  8. Use case • In this paper, we use SAP systems as the use case. • They are complex enough to provide a good conceptual benchmark for validating the RESERVOIR model and deriving requirements. • SAP systems are used for a variety of business applications that differ by version and functionality • EX. CRM, ERP

  9. About SAP System • Requests are handled by the SAP Web dispatcher and routed to multiple stateful Dialog Instances (DIs). • A single Central Instance (CI) performs central services such as application-level locking, messaging, and registration of DIs. • A single Database Management System (DBMS) serves the SAP system.
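As a rough illustration of the topology just described, the Python sketch below models a dispatcher routing requests to Dialog Instances. Everything here (class and method names, the random routing policy) is hypothetical and only meant to make the component roles concrete.

```python
# Hypothetical sketch of the SAP system topology: one Web dispatcher routes
# requests to stateful Dialog Instances; a single Central Instance and a single
# DBMS serve the whole system. Purely illustrative, not SAP's actual API.
import random

class SapSystem:
    def __init__(self, num_dialog_instances: int):
        self.dialog_instances = [f"DI-{i}" for i in range(num_dialog_instances)]
        self.central_instance = "CI"    # locking, messaging, DI registration
        self.dbms = "DBMS"              # single database management system

    def dispatch(self, request_id: str) -> str:
        """The Web dispatcher picks a DI; sticky sessions omitted for brevity."""
        return random.choice(self.dialog_instances)

system = SapSystem(num_dialog_instances=3)
print(system.dispatch("req-42"))
```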

  10. About SAP System

  11. Primary Requirements(1/2) • Automated and fast deployment • Automated provisioning of service applications based on a formal contract specifying the infrastructure SLAs. • The same contract should be reusable to provision multiple instances of the same application for different tenants with different customizations. • Dynamic elasticity (seamless) • Seamlessly adjust the resource allocation parameters of individual virtual execution environments (memory, CPU, network bandwidth, storage) • and the number of virtual execution environments.
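To make the contract-reuse requirement concrete, here is a minimal Python sketch in which one shared provisioning contract is instantiated for several tenants with per-tenant customizations. The contract structure, field names, and values are assumptions, not taken from the paper.

```python
# Hypothetical sketch of reusing one provisioning contract for several tenants.
# The contract fields and values are illustrative, not defined by RESERVOIR.
from copy import deepcopy

BASE_CONTRACT = {
    "service": "sap-erp",
    "min_dialog_instances": 2,
    "sla": {"availability": 0.999},
}

def provision_for_tenant(tenant_id: str, customizations: dict) -> dict:
    """Derive a tenant-specific deployment from the shared contract."""
    contract = deepcopy(BASE_CONTRACT)
    contract["tenant"] = tenant_id
    contract.update(customizations)   # e.g. locale or sizing overrides
    return contract

# Two tenants provisioned from the same contract with different customizations.
t1 = provision_for_tenant("tenant-a", {"locale": "de_DE"})
t2 = provision_for_tenant("tenant-b", {"min_dialog_instances": 4})
```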

  12. Primary Requirements(2/2) • Automated continuous optimization • Continuously optimize the alignment of infrastructure resource management with high-level business goals • Virtualization technology independence • Support different virtualization technologies transparently

  13. The RESERVOIR Model for Federated Cloud Computing • Two roles in the RESERVOIR model: • Service providers: • The entities that understand the needs of a particular business • Offer service applications to address those needs. • Do not own the computational resources. • Infrastructure providers: • Operate RESERVOIR sites that own and manage the physical infrastructure on which service applications execute.

  14. About a RESERVOIR Site • VEE: virtual execution environment • VEE host: the virtualized computational resources and all the management enablement components

  15. Two modes of capacity provisioning(1/2) • Explicit capacity requirements for sized service applications: • The service provider conducts sizing and capacity planning studies of the service application. • The service provider then precisely specifies the capacity needs of the application under specific workload conditions: the minimal service configuration and the elasticity rules. • The infrastructure provider commits itself to an infrastructure SLA according to this specification. • The service provider is charged for actual capacity usage in line with the “pay as you go” model.

  16. Two modes of capacity provisioning(2/2) • Implicit capacity requirements for unsized service applications: • In this mode, the service provider may have only initial sizing estimates for its service, or none at all. • The infrastructure provider commits itself to an SLA that is formulated in terms of high-level Service Level Objectives (SLOs) • (e.g., response time, throughput, etc.) • The service provider is charged for the actual usage of capacity. • Optional: usage reports at various levels of detail, a minimal resource utilization policy, a maximal cost policy, etc.
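As a rough illustration of an SLA formulated in terms of SLOs rather than explicit capacity, the sketch below checks measured KPIs against high-level objectives. The metric names and thresholds are hypothetical, not values from the paper.

```python
# Hypothetical sketch: an SLA expressed as high-level SLOs, checked against
# measured KPIs. Thresholds and metric names are illustrative only.
SLOS = {
    "response_time_ms": {"max": 500},   # mean response time must stay below 500 ms
    "throughput_rps": {"min": 200},     # sustained requests per second
}

def slo_violations(measured: dict) -> list[str]:
    """Return the names of the SLOs that the current measurements violate."""
    violations = []
    for metric, bounds in SLOS.items():
        value = measured.get(metric)
        if value is None:
            continue
        if "max" in bounds and value > bounds["max"]:
            violations.append(metric)
        if "min" in bounds and value < bounds["min"]:
            violations.append(metric)
    return violations

print(slo_violations({"response_time_ms": 620, "throughput_rps": 250}))
# ['response_time_ms']
```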

  17. Service Manifest(1/2) • The service manifest defines the settings, the contract, and the SLA between the service provider and the infrastructure provider. • The manifest specifies a reference to a master image that fully captures the functionality of each component type • EX. OS, middleware, applications, data, and configuration • The manifest contains the information and rules necessary to automatically create unique VEE instances that can run simultaneously without conflicts [7] [7] O. Goldshmidt, B. Rochwerger, A. Glikson, I. Shapira, and T. Domany, “Encompass: Managing Functionality,” in Proceedings of the 21st International Parallel and Distributed Processing Symposium (IPDPS 2007), March 2007, pp. 1–5.

  18. Service Manifest(2/2) • The manifest also specifies: • The grouping of components into virtual networks and/or tiers that form the service applications. • The capacity requirements for an explicitly sized service application • e.g., the number of virtual CPUs, memory size, storage pool size, and the number of network interfaces and their bandwidth • A set of elasticity rules • Correlate monitored Key Performance Indicators (KPIs) and load parameters with resource allocations • {(response time, throughput, …), (memory, CPU, bandwidth, …)} • The capacity specification also includes the minimum and maximum number of VEEs
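A minimal sketch of one such elasticity rule, assuming a simple threshold form: the rule fields, KPI names, and limits here are illustrative and do not reflect the actual manifest syntax.

```python
# Hypothetical elasticity rule: correlate a monitored KPI with a resource action,
# bounded by the minimum and maximum number of VEEs from the capacity specification.
RULE = {
    "kpi": "response_time_ms",
    "threshold": 400,
    "action": "add_vee",   # add one VEE of the component's type
    "min_vees": 2,
    "max_vees": 8,
}

def apply_rule(kpi_value: float, current_vees: int, rule: dict = RULE) -> int:
    """Return the new VEE count after evaluating the rule once."""
    if kpi_value > rule["threshold"] and current_vees < rule["max_vees"]:
        return current_vees + 1
    if kpi_value < rule["threshold"] / 2 and current_vees > rule["min_vees"]:
        return current_vees - 1
    return current_vees
```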

  19. Example of a Service Manifest (simplified)
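The original slide contains only a figure. As a stand-in, the sketch below gives a heavily simplified, hypothetical rendering of what such a manifest might contain for the SAP use case, expressed as a Python structure; all field names and values are illustrative, not RESERVOIR's actual manifest syntax.

```python
# Illustrative only: a heavily simplified service manifest for the SAP use case,
# rendered as a Python dict. Field names are hypothetical, not RESERVOIR syntax.
MANIFEST = {
    "components": {
        "dialog_instance": {
            "master_image": "sap-di-image-v1",   # OS + middleware + application
            "capacity": {"vcpus": 2, "memory_mb": 4096, "nics": 1},
            "min_instances": 2,
            "max_instances": 8,
        },
        "central_instance": {
            "master_image": "sap-ci-image-v1",
            "capacity": {"vcpus": 4, "memory_mb": 8192, "nics": 1},
            "min_instances": 1,
            "max_instances": 1,
        },
    },
    "virtual_networks": [["dialog_instance", "central_instance"]],
    "elasticity_rules": [
        {"kpi": "response_time_ms", "threshold": 400,
         "action": "add_vee", "component": "dialog_instance"},
    ],
}
```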

  20. The RESERVOIR Architecture

  21. The Service Manager(1/2) • The Service Manager is the highest level of abstraction, interacting with the service providers to receive their Service Manifests, negotiate pricing, and handle billing. • Its two most complex tasks are: • (1) deploying and provisioning VEEs based on the Service Manifest, and • (2) monitoring and enforcing SLA compliance by throttling a service application’s capacity.

  22. The Service Manager(2/2) • Finally, the Service Manager is responsible for accounting and billing. • Existing cloud computing infrastructures are inflexible, usually employing fixed-cost post-paid subscription models. • We consider both post-paid and pre-paid billing models based on resource usage, • using the resource utilization information provided by the Service Manager accounting system.
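A rough sketch of the two billing flavors mentioned above, computed from per-interval resource-usage records; the rates, record format, and function names are assumptions, not values from the paper.

```python
# Hypothetical usage-based billing. Each record covers one metering interval.
RATE_PER_CPU_HOUR = 0.05   # assumed prices, illustrative only
RATE_PER_GB_HOUR = 0.01

def interval_cost(record: dict) -> float:
    """Cost of a single metering interval from its CPU and memory usage."""
    hours = record["seconds"] / 3600
    return (record["vcpus"] * RATE_PER_CPU_HOUR +
            record["memory_gb"] * RATE_PER_GB_HOUR) * hours

def post_paid_invoice(usage: list[dict]) -> float:
    """Sum the cost of all metered intervals at the end of the billing period."""
    return sum(interval_cost(r) for r in usage)

def pre_paid_balance(balance: float, usage: list[dict]) -> float:
    """Deduct metered usage from a balance paid up front."""
    return balance - post_paid_invoice(usage)
```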

  23. The Virtual Execution Environment Manager (VEEM)(1/2) • The VEEM is responsible for • the optimal placement of VEEs onto VEE hosts, subject to constraints determined by the Service Manager. • The VEEM is free to place and move VEEs anywhere, even on remote sites, as long as the placement satisfies the constraints. • The VEEM also provides the functionality needed to handle the dynamic nature of the service workload, • such as the ability to add and remove VEEs from an existing VEE group, or to change the capacity of a single VEE.
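A minimal sketch of constraint-respecting placement, assuming the only constraints are per-host capacity; the real VEEM policy engine is far richer, and all names here are hypothetical.

```python
# Hypothetical first-fit placement of VEEs onto VEE hosts, honoring capacity
# constraints. The real VEEM optimizes placement under Service Manager constraints.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_memory_mb: int
    placed: list[str] = field(default_factory=list)

def place(vee: dict, hosts: list[Host]) -> Host | None:
    """Place a VEE on the first host that satisfies its resource constraints."""
    for host in hosts:
        if host.free_vcpus >= vee["vcpus"] and host.free_memory_mb >= vee["memory_mb"]:
            host.free_vcpus -= vee["vcpus"]
            host.free_memory_mb -= vee["memory_mb"]
            host.placed.append(vee["name"])
            return host
    return None   # no feasible local host; a federated VEEM could try a remote site
```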

  24. The Virtual Execution Environment Manager (VEEM)(2/2) • In addition to serving local requests (from the local Service Manager), VEEM is responsible for the federation of remote sites. • By taking the role of a Service Manager toward the remote VEEM in all cross-site interactions. • The primary VEEM does not get involved in the internal placement decisions on the remote site, as this is a concern of the remote VEEM

  25. The Virtual Execution Environment Host (VEEH) • The VEEH is responsible for the basic control and monitoring of VEEs and their resources • EX: creating a VEE, allocating additional resources to a VEE, monitoring a VEE, migrating a VEE, creating a virtual network and storage pool, etc. • Each VEEH type encapsulates a particular type of virtualization technology, and all VEEH types expose a common interface • so the VEEM can issue generic commands to manage the life cycle of VEEs. • Moreover, VEEHs must support transparent VEE migration to any compatible VEEH in a RESERVOIR cloud, • regardless of site location or network and storage configurations.
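The common interface exposed by all VEEH types can be pictured as an abstract life-cycle API such as the sketch below; the method names and the example hypervisor binding are hypothetical, and only the idea of a technology-neutral interface comes from the slide.

```python
# Hypothetical common VEEH interface: every virtualization technology plugs in
# behind the same life-cycle operations, so the VEEM can stay technology-neutral.
from abc import ABC, abstractmethod

class VeeHost(ABC):
    @abstractmethod
    def create_vee(self, image_ref: str, capacity: dict) -> str: ...
    @abstractmethod
    def resize_vee(self, vee_id: str, capacity: dict) -> None: ...
    @abstractmethod
    def monitor_vee(self, vee_id: str) -> dict: ...
    @abstractmethod
    def migrate_vee(self, vee_id: str, target_host: "VeeHost") -> None: ...

class ExampleHypervisorHost(VeeHost):
    """One possible VEEH type; real bodies would call the hypervisor's own API."""
    def create_vee(self, image_ref, capacity): return "vee-1"
    def resize_vee(self, vee_id, capacity): pass
    def monitor_vee(self, vee_id): return {"cpu_load": 0.1}
    def migrate_vee(self, vee_id, target_host): pass
```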

  26. Layers of Interoperability • SMI: • The Service Management Interface (SMI), with its service manifest, exposes a standardized interface into the RESERVOIR cloud for service providers. • VMI: • The VEE Management Interface (VMI) simplifies the introduction of different and independent IT optimization strategies without disrupting other layers or peer VEEMs. • VHI: • The VEE Host Interface (VHI) supports plugging in new virtualization platforms

  27. Summary • This paper presents a new open federated cloud computing model, architecture, and functionality being developed in the RESERVOIR project. • The RESERVOIR model explicitly addresses • the limited scalability of a single-provider cloud, and • the lack of interoperability among cloud providers.

  28. END

  29. Image Metadata • storage volumes • HW profile: machine requirements • SW profile: server functionality, system utilities • State • Machine the image is assigned to • Customizations
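A hedged sketch of the image-metadata record listed above, expressed as a simple data structure; the field names follow the slide's bullets, while the types and defaults are assumptions.

```python
# Illustrative data structure for the image metadata bullets on this slide.
# Field types and any names beyond the slide's wording are assumptions.
from dataclasses import dataclass, field

@dataclass
class ImageMetadata:
    storage_volumes: list[str]            # volumes backing the image
    hw_profile: dict                      # machine requirements (CPU, memory, ...)
    sw_profile: dict                      # server functionality, system utilities
    state: str                            # e.g. "available", "in-use"
    assigned_machine: str | None = None   # machine the image is assigned to
    customizations: dict = field(default_factory=dict)
```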
