Managing Cloud Resources for Medical Applications P. Nowakowski, T. Bartyński, T. Gubała, D. Harężlak, M. Kasztelnik, J. Meizner, M. Bubak ACC CYFRONET AGH, Krakow, Poland
Core concept: a cloud platform for medical application services and data
• Developer: install any scientific application in the cloud
• End user: access available applications and data in a secure manner
• Administrator: manage cloud computing and storage resources
Key features of the cloud infrastructure for e-science:
• Install/configure each application service (which we call an Atomic Service) once, then use it multiple times in different workflows
• Direct access to raw virtual machines is provided for developers, with a wide choice of operating systems (IaaS solution); install whatever you want (root access to cloud Virtual Machines)
• The cloud platform takes over management and instantiation of Atomic Services; many instances of an Atomic Service can be spawned simultaneously
• Large-scale computations can be delegated from the PC to the cloud/HPC via a dedicated interface
• Smart deployment: computations can be executed close to data (or the other way round)
A brief glossary
• Virtual Machine: a self-contained operating system image, registered in the Cloud framework and capable of being managed by VPH-Share mechanisms.
• Atomic Service: a VPH-Share application (or a component thereof) installed on a Virtual Machine, exposing external APIs and registered with the cloud management tools for deployment.
• Atomic Service Instance: a running instance of an Atomic Service, hosted in the Cloud and capable of being directly interfaced, e.g. by the workflow management tools or VPH-Share GUIs.
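To make the relationship between the three terms concrete, here is a minimal data-model sketch; all class and field names are illustrative assumptions, not part of any VPH-Share API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    """A self-contained OS image registered in the cloud framework."""
    image_id: str          # e.g. a Glance image UUID (illustrative)
    os_name: str           # e.g. "Ubuntu 12.04"

@dataclass
class AtomicService:
    """A VPH-Share application (or component) installed on a VM image."""
    name: str
    vm: VirtualMachine     # the template image the service is baked into
    exposed_endpoints: List[str] = field(default_factory=list)

@dataclass
class AtomicServiceInstance:
    """A running copy of an Atomic Service, hosted in the cloud."""
    service: AtomicService
    host: str              # private IP of the worker node, e.g. "10.100.1.23"
    status: str = "booting"   # booting / running / shut down
```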
Platform for three user groups
• The goal of the platform is to manage cloud/HPC resources in support of VPH-Share applications by:
• Providing a mechanism for application developers to install their applications/tools/services on the available resources
• Providing a mechanism for end users (domain scientists) to execute workflows and/or standalone applications on the available resources with minimum fuss
• Providing a mechanism for end users (domain scientists) to securely manage their binary data in a hybrid cloud environment
• Providing administrative tools facilitating configuration and monitoring of the platform
The Cloud Platform Interface manages hardware resources, heuristically deploys services, ensures access to applications, keeps track of binary data and enforces common security across a hybrid cloud environment (public and private resources).
• End user support: easy access to applications and binary data
• Developer support: tools for deploying applications and registering datasets
• Admin support: management of VPH-Share hardware resources
Cloud Platform Architecture (diagram; modules available in the first prototype): the Data and Compute Cloud Platform connects the VPH-Share Master UI (data management, AS management and security management interfaces, data management and computation UI extensions, workflow description and execution) with the AM Service (Atmosphere), its persistence layer (internal registry), the DRI Service, a security framework, cloud stack clients and an HPC resource client/backend running on the available cloud infrastructure and physical resources. Atomic Service Instances (raw OS Linux variant, generic data retrieval, generic AS invoker, VPH-Share tool/app, Web Service command wrapper and security agent, generic VNC server for remote access to Atomic Service UIs, LOB federated storage access) are deployed by AMS on available resources as required by workflow management or the generic AS invoker; AS images, VM templates and managed datasets are held on the underlying storage.
End user's view of the cloud platform – contd.
Log into Master Interface → Select Atomic Service → Instantiate Atomic Service → Access and use application
• Atomic Services can be instantiated on demand
• Once instantiated, the service can be accessed by the end user
• Unused instances can be shut down by Atmosphere
The Atmosphere Management Service
• receives requests from the Workflow Execution environment stating that a set of Atomic Services is required to process/produce certain data;
• queries the Component Registry to determine the relevant AS and data characteristics;
• collects infrastructure metrics;
• analyzes available data and prepares an optimal deployment plan.
AIR: also called the Atmosphere Internal Registry; stores all data on cloud resources, Atomic Services and their instances.
Atmosphere: core component of the VPH-Share cloud platform, responsible for managing cloud resources and deploying Atomic Services accordingly.
1. An application, the workflow environment or any other authorized entity requests access to an Atomic Service
2. Atmosphere polls AIR for data regarding this AS and the available computing resources
3. It heuristically determines whether to recycle an existing instance or spawn a new one, and which computing resources to use when instantiating additional instances (based on cost information and performance metrics obtained from monitoring data)
4. It calls cloud middleware services (a selection of low-level middleware libraries managing specific types of cloud sites) to enforce the deployment plan
5. Atomic Service Instances are deployed on the computing infrastructure (hybrid public/private cloud) as directed by Atmosphere
[Asynchronous process] Collect monitoring data and analyze the health of the cloud infrastructure to ensure optimal deployment of application services
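A minimal sketch of the "recycle or spawn" decision in step 3. The data structures, thresholds and field names are assumptions made for illustration, not the actual Atmosphere logic.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InstanceMetrics:
    instance_id: str
    cpu_load: float        # 0.0 - 1.0, from monitoring data
    site_cost: float       # cost per hour of the hosting cloud site

def plan_access(instances: List[InstanceMetrics],
                load_threshold: float = 0.75) -> Optional[str]:
    """Return the id of an existing, lightly loaded instance to recycle,
    or None to indicate that a new instance should be spawned."""
    candidates = [i for i in instances if i.cpu_load < load_threshold]
    if not candidates:
        return None                      # spawn a new ASI
    # prefer the cheapest lightly loaded instance
    return min(candidates, key=lambda i: i.site_cost).instance_id
```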
Deployment planning
• Applications are heuristically deployed on the available computing resources, with regard to the following considerations:
• where to deploy atomic services (a partner's private cloud site, public cloud infrastructure or a hybrid installation),
• whether the data should be transferred to the site where the atomic service is deployed, or the other way around,
• how many instances should be started,
• whether it is possible to reuse predeployed AS (instances shared among workflows).
• The deployment plan is based on an analysis of (see the sketch after this list):
• workflow and atomic service resource demands,
• volume and location of input and output data,
• load of available resources,
• cost of acquiring resources on private and public cloud sites,
• cost of using cheaper instances whenever possible and sufficient (e.g. EC2 Spot Instances, or S3 Reduced Redundancy Storage for some noncritical, temporary data),
• the public cloud provider's billing model.
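The considerations above can be read as a weighted scoring problem over candidate sites. The sketch below shows one possible ranking; all weights and field names are illustrative assumptions rather than the planner actually used by Atmosphere.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CloudSite:
    name: str
    is_private: bool
    cost_per_cpu_hour: float   # from the provider's billing model
    current_load: float        # 0.0 - 1.0
    data_locality: float       # fraction of required input data already on site

def rank_sites(sites: List[CloudSite],
               w_cost: float = 0.4, w_load: float = 0.3,
               w_data: float = 0.3) -> List[CloudSite]:
    """Order candidate sites for deployment: cheap, idle sites that already
    hold the input data score best (lower score = better)."""
    def score(s: CloudSite) -> float:
        return (w_cost * s.cost_per_cpu_hour
                + w_load * s.current_load
                - w_data * s.data_locality)
    return sorted(sites, key=score)

# e.g. rank_sites([CloudSite("private-site", True, 0.0, 0.6, 1.0),
#                  CloudSite("ec2-spot", False, 0.03, 0.1, 0.0)])
```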
High Performance Execution Environment
• Provides virtualized access to high performance execution environments
• Seamlessly provides access to high performance computing for workflows that require more computational power than clouds can provide
• Deploys and extends the Application Hosting Environment (AHE) – a set of web services to start and control applications on HPC resources
Application Hosting Environment: auxiliary component of the cloud platform, responsible for managing access to traditional (grid-based) high performance computing environments; provides a Web Service interface for clients.
Clients (the workflow environment, an application or an end user) present a security token (obtained from the authentication service) and invoke the Web Service API of AHE to delegate computation to the grid. The AHE Web Services (WSRF::Lite, hosted in a Tomcat container) form the user access layer; the resource client layer (Job Submission Service – OGSA BES / Globus GRAM, RealityGrid SWS, WebDAV, GridFTP, HARC) delegates credentials, instantiates computing tasks, polls for execution status and retrieves results on behalf of the client from grid resources running a Local Resource Manager (PBS, SGE, LoadLeveler etc.).
Service-based access to high-performance computational resources
AHE service host (ozone.chem.ucl.ac.uk):
• AHE service interface – provides RESTful access to AHE applications, enables data staging and delegation of security credentials
• AHE service backend – provides credential delegation, data staging and execution monitoring features
Accessing grid resources (National Grid Service) through the AHE service frontend (a client sketch follows below):
• prepare – the end user selects a grid application for an appropriate computational resource registered with AHE, and starts an AHE Application Instance (job)
• SetDataStaging – sets up data staging information between the grid infrastructure and the user resource
• setProperty – sets up a job property
• start – initiates data transfer, executes the job, checks job status and fetches the result once completed
• status – polls the underlying grid infrastructure for job status
The AHE service interface:
• Simplifies Grid security (the end user does not have to handle grid security and MyProxy configuration and generation)
• Simplifies application setup on the Grid (the end user does not have to compile, optimize, install and configure applications)
• Simplifies basic Grid workflow (AHE stages the data, runs and polls the job and fetches the results automatically)
• Simplifies Grid access through RESTful web services (AHE provides a RESTful interface allowing clients and other web services to access the computational infrastructure and applications in a Software as a Service (SaaS) manner)
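The sketch below shows how a client might drive the operations listed above over HTTP. The URL layout, JSON fields and token header are assumptions, since the actual AHE REST paths are not given on these slides.

```python
import time
import requests

AHE_BASE = "https://ozone.chem.ucl.ac.uk/ahe"   # illustrative base URL
HEADERS = {"X-Security-Token": "<token from the authentication service>"}

def run_ahe_job(app_name: str, resource: str, input_url: str) -> str:
    # prepare: create an AHE Application Instance (job) for a registered app
    r = requests.post(f"{AHE_BASE}/prepare",
                      json={"application": app_name, "resource": resource},
                      headers=HEADERS)
    r.raise_for_status()
    job = r.json()["jobId"]                      # assumed response field

    # SetDataStaging: describe where the input data comes from
    requests.post(f"{AHE_BASE}/jobs/{job}/dataStaging",
                  json={"input": input_url}, headers=HEADERS)

    # start: stage data, submit the job and let AHE manage it
    requests.post(f"{AHE_BASE}/jobs/{job}/start", headers=HEADERS)

    # status: poll the underlying grid infrastructure until completion
    while True:
        state = requests.get(f"{AHE_BASE}/jobs/{job}/status",
                             headers=HEADERS).json()["state"]
        if state in ("DONE", "FAILED"):
            return state
        time.sleep(30)
```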
Data Access for Large Binary Objects
• LOBCDER (the VPH-Share federated data storage component) enables data sharing in the context of VPH-Share applications
• The system is capable of interfacing various types of storage resources and supports SWIFT cloud storage (support for Amazon S3 is under development)
• LOBCDER exposes a WebDAV interface (a WebDAV servlet backed by a resource factory, resource catalogue and pluggable storage drivers) and can be accessed by any DAV-compliant client – e.g. the Data Manager Portlet in the VPH-Share Master Interface (GUI-based access), a generic WebDAV client on an external host, or the service payload of an Atomic Service Instance
• It can also be mounted as a component of the local client filesystem using any DAV-to-FS driver (such as davfs2)
Hosts: core component host (vph.cyfronet.pl), LOBCDER host (149.156.10.143), Atomic Service Instances (10.100.x.x), SWIFT storage backend.
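Because LOBCDER speaks plain WebDAV, any HTTP client can list and transfer files; the sketch below uses Python's requests library, with the root path and credentials shown purely as assumptions. Alternatively the share can simply be mounted with davfs2 and used as a normal directory.

```python
import requests

DAV_ROOT = "http://149.156.10.143/lobcder/dav"   # illustrative WebDAV root
AUTH = ("vph-user", "secret")                    # illustrative credentials

# List a collection (WebDAV PROPFIND, Depth: 1 = direct children only)
listing = requests.request("PROPFIND", f"{DAV_ROOT}/mydataset/",
                           headers={"Depth": "1"}, auth=AUTH)
print(listing.status_code)        # 207 Multi-Status on success

# Upload a binary object (plain HTTP PUT)
with open("scan.nii.gz", "rb") as f:
    requests.put(f"{DAV_ROOT}/mydataset/scan.nii.gz", data=f, auth=AUTH)

# Download it back (plain HTTP GET)
blob = requests.get(f"{DAV_ROOT}/mydataset/scan.nii.gz", auth=AUTH).content
```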
Data Reliability and Integrity
• Provides a mechanism which keeps track of binary data stored in the Cloud infrastructure
• Monitors data availability
• Advises the cloud platform when instantiating atomic services
• Shifts/replicates data between cloud sites, as required
DRI Service: a standalone application service, capable of autonomous operation. It periodically verifies access to any datasets submitted for validation (a validation sketch follows below) and is capable of issuing alerts to dataset owners and system administrators in case of irregularities.
The service consists of a configurable, registry-driven validation runtime layer and an extensible resource client layer which stores and marshals data on distributed cloud storage (OpenStack Swift, Cumulus, Amazon S3). AIR holds the binary data registry and validation policy (register files, get metadata, migrate LOBs, get usage stats etc.), while the data management portlet in the VPH Master Interface provides end-user features (browsing, querying, direct access to data) together with DRI management extensions.
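A minimal sketch of the periodic validation loop described above: checksums of registered files are re-computed and compared against values stored in the registry, with an alert raised on any mismatch. The registry layout and alerting hook are assumptions made for illustration.

```python
import hashlib
import time
from typing import Callable, Dict

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_datasets(registry: Dict[str, str],
                      alert: Callable[[str], None],
                      interval_s: int = 3600) -> None:
    """registry maps a file path to its expected checksum (illustrative)."""
    while True:
        for path, expected in registry.items():
            try:
                ok = sha256_of(path) == expected
            except OSError:
                ok = False                     # file unreachable
            if not ok:
                alert(f"DRI: dataset {path} failed validation")
        time.sleep(interval_s)
```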
Security Framework
• Provides a policy-driven access control system
• Provides an open-source based access control solution built on fine-grained authorization policies
• Implements Policy Enforcement, Policy Decision and Policy Management
• Ensures privacy and confidentiality of eHealthcare data
• Capable of expressing eHealth requirements and constraints in security policies (compliance)
• Tailored to the requirements of public clouds
VPH clients (applications, the workflow management service, developers, end users and administrators – or any authorized user capable of presenting a valid security token) reach the VPH Atomic Service Instances over the public internet through the VPH Security Framework.
Authentication and authorization
1. The user selects "Log in with BiomedTown" in the VPH-Share Master Interface
2. The Master Interface opens the BiomedTown Identity Provider login window and delegates credentials to the authentication service
3. The authentication service validates credentials against its users and roles, and a session cookie containing the user token is spawned (created by the Master Interface)
4. When invoking an Atomic Service, the user token is passed along with the request header
5. The Security Proxy on the VPH-Share Atomic Service Instance parses the user token, retrieves the roles and allows/denies access to the ASI according to its security policy
6. The request is relayed to the service payload (the VPH-Share application component) if authorized; otherwise an HTTP 401 error is reported
• Developers, admins and scientists obtain access to the cloud platform via the Master Interface UI
• The OpenID architecture enables the Master Interface to delegate authentication to any public identity provider (e.g. BiomedTown)
• Following authentication the MI obtains a secure user token containing the current user's roles; this token is then used to authorize access to Atomic Service Instances, in accordance with their security policies.
Handling security on the ASI level
• The actual application API is only exposed to localhost clients; the public AS API (SOAP/REST) is exposed externally by a local web server (apache2/tomcat)
• Calls to Atomic Services are intercepted by the Security Proxy
• Each call carries a user token (passed in the request header), consisting of a digital signature, a timestamp, a unique username, the assigned role(s) and additional info, e.g.: a6b72bfb5f2466512ab2700cd27ed5f84f991422rdiaz!developer!rdiaz,Rodrigo Diaz,rodrigo.diaz@atosresearch.eu,,SPAIN,08018
• The user token is digitally signed to prevent forgery; this signature is validated by the Security Proxy
• The Security Proxy decides whether to allow or disallow the request on the basis of its internal security policy
• Cleared requests are forwarded to the local service instance
Request flow (a validation sketch follows below):
1. Incoming request
2. The Security Proxy intercepts the request
3. It decrypts and validates the digital signature with the Master Interface's secret key
3'. If the digital signature is invalid, the Security Proxy returns an HTTP 401 error to the client
4. If the digital signature checks out, the proxy consults the security policy to determine whether the user should be granted access on the basis of his/her assigned roles
4'. If the security policy prevents access given the user's existing roles, an error is likewise returned
5. Otherwise, the original request is relayed to the service payload; the user token is included for potential use by the service itself
6-7. The service response is intercepted and relayed to the original client; the mechanism is entirely transparent from the point of view of the person/application invoking the Atomic Service
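A minimal sketch of the Security Proxy checks in steps 3 and 4, assuming the token's digital signature is an HMAC computed with the Master Interface's secret key and that roles are compared against a simple allow-list policy; the real token format and signing scheme may differ.

```python
import hmac
import hashlib
from typing import Set

MI_SECRET_KEY = b"<shared with the Master Interface>"   # illustrative

def verify_token(signature: str, payload: str) -> bool:
    """Step 3: validate the digital signature over the token payload."""
    expected = hmac.new(MI_SECRET_KEY, payload.encode(),
                        hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature)

def authorize(roles: Set[str], policy_allowed_roles: Set[str]) -> bool:
    """Step 4: grant access only if the user holds an allowed role."""
    return bool(roles & policy_allowed_roles)

def handle_request(signature: str, payload: str, roles: Set[str]) -> int:
    if not verify_token(signature, payload):
        return 401          # 3': invalid signature
    if not authorize(roles, {"developer", "scientist"}):
        return 401          # 4': role not permitted by the security policy
    return 200              # 5: relay the request to the service payload
```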
Behind the scenes: Instantiating an Atomic Service Template (1/2)
The Cloud Manager portlet (Development Mode, VPH-Share Master Interface) enables developers to create, deploy, save and instantiate Atomic Service Instances on cloud resources.
1. The developer starts an Atomic Service in the Cloud Manager portlet
2. The portlet requests instantiation of the Atomic Service via the Cloud Facade (API) on the core component host (149.156.10.143)
3. The Atmosphere Management Service (AMS) gets the AS VM details from the Atmosphere Internal Registry (MongoDB: AS images, compute model, storage model)
4. AMS calls Nova (OpenStack API, Nova head node 149.156.10.131) to instantiate the selected VM
5. The AS image is staged from the Glance image store onto the worker node (OpenStack WN, 10.100.x.x)
6. The VM image is uploaded to per-WN storage
7. The WN hypervisor (KVM) boots the VM, creating the Atomic Service Instance with its assigned local storage and mounted network storage
Behind the scenes: Instantiating an Atomic Service Template (2/2)
8. Nova reports that the VM is booting
9. Nova reports that the VM is running
10. Atmosphere (AMS) polls Nova for the VM status via the Cloud Facade (API)
11. The Nova management interface delegates the query and relays the reply
12. The ASI is registered as booting/running in the Atmosphere Internal Registry (MongoDB: compute model, storage model)
13. Atmosphere configures the IP Wrangler (host 149.156.10.131) to enable port forwarding; the port mapping table is updated
14. The port mappings for this ASI are registered in AIR
15. The Cloud Manager portlet (Development Mode) polls for the ASI status and updates its view
16. The portlet retrieves the ASI details: status, port mappings and access credentials
Atmosphere takes care of interpreting user requests and managing the underlying cloud platform. CYFRONET contributes a private cloud site for development purposes.
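A sketch of how a client of the Cloud Facade might request an instance and then poll for its status (steps 2 and 15-16 above); the endpoint paths, JSON fields and token header are assumptions, as the Cloud Facade API is not spelled out on these slides.

```python
import time
import requests

FACADE = "https://149.156.10.143/cloudfacade"        # illustrative URL
HEADERS = {"X-Auth-Token": "<user token from the Master Interface>"}

# Step 2: request instantiation of an Atomic Service
r = requests.post(f"{FACADE}/atomic_services/my-service/instances",
                  headers=HEADERS)
r.raise_for_status()
asi_id = r.json()["id"]                              # assumed response field

# Steps 15-16: poll for ASI status, port mappings and access credentials
while True:
    info = requests.get(f"{FACADE}/instances/{asi_id}",
                        headers=HEADERS).json()
    if info["status"] == "running":
        print(info["port_mappings"], info.get("credentials"))
        break
    time.sleep(10)
```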
Behind the scenes: Communicating with an Atomic Service Instance
1. The Cloud Manager (Development Mode) looks up the ASI metadata (including the IP Wrangler IP, port mappings and access credentials, if needed)
2. The developer initiates the interaction
3. The IP Wrangler (host 149.156.10.131, standard IP stack accessible via public IP) relays the traffic according to its port mapping table
4. The Atomic Service Instance is called on the OpenStack worker node (10.100.x.x)
• Note: Atomic Service Instances typically do not have public IPs
• The role of the IP Wrangler is to facilitate user interaction on arbitrary ports (e.g. SSH, VNC etc.) with VMs deployed on a computing cluster (such as is the case at CYFRONET)
• The IP Wrangler bridges communication on predetermined ports, according to the ASI configuration stored in AIR
• Web Service calls do not require nonstandard ports and are instead handled by appending data to the endpoint path
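Given the port mappings retrieved from AIR, a developer reaches the ASI on the IP Wrangler's public address rather than on the VM's private IP. A small sketch; the mapping structure and port numbers are purely illustrative.

```python
import subprocess

# Illustrative port mapping entry for one ASI, as stored in AIR:
# traffic to 149.156.10.131:42022 is bridged to 10.100.1.23:22 (SSH)
asi_mapping = {"wrangler_host": "149.156.10.131",
               "mappings": {"ssh": 42022, "vnc": 45901}}

# Open an SSH session to the ASI through the IP Wrangler
subprocess.run(["ssh", "-p", str(asi_mapping["mappings"]["ssh"]),
                f"developer@{asi_mapping['wrangler_host']}"])

# Web Service calls need no extra ports: they are addressed by appending
# data to the endpoint path on the standard HTTP(S) port instead.
```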
Behind the scenes: Saving the Instance as a new Atomic Service
1. The developer creates an AS from an ASI in the Cloud Manager portlet (Development Mode), supplying the AS metadata
2. The portlet requests storage of the Atomic Service via the Cloud Facade (API) on the core component host (149.156.10.143)
3. Atmosphere (AMS) calls Nova (head node 149.156.10.131) to persist the ASI; 3'. the AS is registered as being saved in the Atmosphere Internal Registry
4. Nova stores the VM image in the Glance image store
5. The selected VM is imaged on the WN hypervisor (KVM), including the user space
6. The VM image is uploaded to Glance
7. Success is reported back
8. The AS is registered as available (AS images, compute model, storage model in MongoDB)
Developers are able to save existing instances as new Atomic Services. Once saved, an Atomic Service can be instantiated by clients.
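A matching sketch for the save operation (steps 1-2 and 8), again against a hypothetical Cloud Facade endpoint; the paths and JSON fields are assumptions for illustration only.

```python
import time
import requests

FACADE = "https://149.156.10.143/cloudfacade"        # illustrative URL
HEADERS = {"X-Auth-Token": "<user token from the Master Interface>"}

# Steps 1-2: request that a running ASI be saved as a new Atomic Service
r = requests.post(f"{FACADE}/instances/asi-42/save",
                  json={"name": "my-new-atomic-service"}, headers=HEADERS)
r.raise_for_status()
as_id = r.json()["atomic_service_id"]                # assumed field

# Step 8: wait until the new Atomic Service is registered as available
while requests.get(f"{FACADE}/atomic_services/{as_id}",
                   headers=HEADERS).json()["status"] != "available":
    time.sleep(15)
```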
More information on accessing the VPH-Share Infrastructure • The Master Interface is deployed at new.physiomespace.com • Provides access to all VPH-Share cloud platform features • Tailored for domain experts (no in-depth technical knowledge necessary) • Uses OpenID authentication provided by BiomedTown • Contact Piotr Nowakowski (CYF) for details regarding access and account provisioning • Further information about the project can be found at www.vph-share.eu • Make sure to check out the DICE team website at CYF (dice.cyfronet.pl/projects/VPH-Share) for further information regarding the cloud platform and practical usage examples