
Ideas for a virtual analysis facility

Explore a virtual analysis facility concept for efficient computing, built on Xen, LCG Worker Nodes, PROOF, xrootd, and SmartDomains. Dynamically allocate resources between Grid batch jobs and interactive PROOF use, automatically scale them with the workload, and consider deployment on multicore machines for better performance.


Presentation Transcript


  1. Ideas for a virtual analysis facility Stefano Bagnasco, INFN Torino CAF & PROOF Workshop CERN Nov 29-30, 2007

  2. Caveat! • This is just an idea we’re starting to work on in Torino • We don’t even have a prototype yet • But Federico pressed for a presentation… • …so I substituted facts with brightly coloured animated diagrams.

  3. Analysis in the tiered model • At Tier-1s • Large number of CPUs • Feasible to take some out of the Grid infrastructure to build a PROOF-based Analysis Facility • Or it may even be possible to “drain” jobs and switch to interactive mode quickly • At Tier-3s • Very small number of CPUs • Probably not a Grid site, at least with gLite middleware • Use PROOF • And Tier-2s? • Most resources are provided as Grid WNs • In the ALICE computing model, this is where user analysis runs

  4. Virtual PROOF cluster • Xen can dynamically allocate resources to either machine • Both memory and CPU scheduling priority! • Memory is the issue, a CPU priority limit is enough • Normal operation: PROOF slaves are “dormant” (minimal memory allocation, very low CPU priority) • Interactive access: dynamically increase resources to the PROOF instances, the job on the WN slows down • Alternatively, “wake up” more slaves • [Diagram: each box runs Xen, with Dom0 hosting an LCG Worker Node guest and a PROOF Slave + Xrootd Server guest; an LCG CE feeds the farm of Worker Nodes]
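
A minimal sketch of the resource switch described on the slide above, assuming the two guests are managed from Dom0 with Xen's xm tool and the credit scheduler. The domain names ("lcg-wn", "proof-slave") and the memory/weight figures are illustrative assumptions, not part of the original proposal.

```python
# Sketch: shift memory and CPU weight between the LCG WN guest and the
# PROOF slave guest on one physical box (names and numbers are assumptions).
import subprocess

def set_domain(domain, mem_mb, weight):
    """Resize a running guest and change its credit-scheduler priority."""
    subprocess.check_call(["xm", "mem-set", domain, str(mem_mb)])
    subprocess.check_call(["xm", "sched-credit", "-d", domain, "-w", str(weight)])

def wake_proof():
    """Interactive session starts: give the PROOF slave most of the box."""
    set_domain("proof-slave", 1536, 512)   # generous memory, high priority
    set_domain("lcg-wn", 512, 64)          # batch job slows down but keeps running

def sleep_proof():
    """Back to normal operation: PROOF slave dormant, the WN gets the resources."""
    set_domain("proof-slave", 256, 16)
    set_domain("lcg-wn", 1792, 512)
```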

  5. Dynamic allocation • Advantages: • The Grid batch job on the WN ideally never completely stops, it only slows down • Non-CPU-intensive I/O operations can go on and do not time out • Both environments are sandboxed and independent, no interference • No actual need to be LCG WNs at all, they can be anything • [Diagram: queries arrive at the PROOF Master and fan out to the Slaves, while the LCG CE keeps dispatching batch jobs to the WNs]

  6. Pros and cons • Advantages: as on the previous slide, plus: a working prototype can be ready quickly, with advanced features added later • But… • Needs well-stuffed boxes to be viable • LCG deployments don’t mix well with other stuff • There is an issue with the advertised CPU power (e.g. in the ETT). Is this acceptable in a multi-VO environment? • It is clearly possible that some WN-side batch jobs will crash even if a huge swap space is provided. Will this be acceptable? • …

  7. Virtual Analysis Facility for ALICE • Shopping list: • Xen, with two (or maybe more) virtual machines per physical one • An LCG WN (or whatever) on one of the virtual machines • PROOF + xrootd: one (or more) slaves per physical machine and one head node (master) • A “Director” that globally manages the resource allocation • The Director is the missing piece to be developed • Next slide!
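
To make the shopping list concrete, here is a hypothetical sketch of how the pieces could be laid out on a small farm; the host names, guest names and number of boxes are invented for illustration.

```python
# Hypothetical layout of the virtual analysis facility: every physical box
# runs Xen with two guests, an LCG WN and a PROOF slave that also serves
# its local data through xrootd; a separate head node runs the master.
FACILITY = {
    "head_node": {"host": "vaf-head", "role": "PROOF master"},
    "worker_hosts": [
        {
            "host": "vaf-node%02d" % i,
            "hypervisor": "Xen Dom0",
            "guests": [
                {"name": "lcg-wn", "role": "LCG Worker Node"},
                {"name": "proof-slave", "role": "PROOF slave + xrootd server"},
            ],
        }
        for i in range(1, 9)   # e.g. eight physical machines
    ],
}
```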

  8. The missing piece • Can easily have a semi-static prototype • Or a completely static one, just setting CPU limits • Just a “slider” to move resources by hand • This is not very far off, essentially a deployment issue • An idea by P. Buncic: use SmartDomains • https://sourceforge.net/projects/smartdomains • Developed by X. Gréhant (HP Fellow at CERN Openlab) • This application is not its primary use case • Not all the needed functionality is there • The following step is a truly dynamic system • Coupled with the PROOF Master • Measures the load and automatically starts more workers / assigns more resources as needed
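
As a rough sketch of what the "truly dynamic" step could look like, the loop below polls the PROOF master for its load and wakes up or puts to sleep one slave at a time. get_query_load(), wake_slave() and sleep_slave() are placeholders standing in for the load probe and for the per-node resource shifts sketched earlier; they are not existing PROOF or SmartDomains APIs.

```python
# Sketch of a "director" loop: grow or shrink the pool of awake PROOF slaves
# according to the load reported by the master (all probes are placeholders).
import time

def get_query_load(master):
    """Placeholder: ask the PROOF master how many queries are queued/running."""
    return 0

def wake_slave(node):
    """Placeholder: shift memory/CPU weight to the PROOF guest on this node."""

def sleep_slave(node):
    """Placeholder: hand the node's resources back to the LCG WN guest."""

def director_loop(master, nodes, poll_seconds=30):
    active = 0                                   # number of awake PROOF slaves
    while True:
        load = get_query_load(master)
        if load > active and active < len(nodes):
            wake_slave(nodes[active])            # grow: wake one more dormant slave
            active += 1
        elif load < active and active > 0:
            active -= 1
            sleep_slave(nodes[active])           # shrink: put the last slave to sleep
        time.sleep(poll_seconds)
```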

  9. Deployment on Multicore machines • One VM, several PROOF Workers • Assign more resources to the VM when starting a fresh worker • [Diagram: Xen Dom0 hosting an LCG Worker Node guest and a single guest running several PROOF Slaves plus an Xrootd Server]
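
One possible way to implement the "assign more resources when starting a fresh worker" step from Dom0, again via xm; the guest name, base memory and per-worker increment are assumptions for illustration.

```python
# Sketch: before a new PROOF worker is launched inside the guest, Dom0
# enlarges the guest's memory and vCPU count (vCPUs can only grow up to
# the maximum the guest was created with).
import subprocess

BASE_MB = 256          # memory for the guest OS itself (illustrative)
MB_PER_WORKER = 512    # extra memory handed over per PROOF worker (illustrative)

def grow_for_new_worker(domain, current_workers):
    workers = current_workers + 1
    subprocess.check_call(["xm", "mem-set", domain, str(BASE_MB + MB_PER_WORKER * workers)])
    subprocess.check_call(["xm", "vcpu-set", domain, str(workers)])
    return workers
```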

  10. Deployment on Multicore machines • One VM per PROOF Worker • Maybe running xrootd on Dom0? • [Diagram: Xen Dom0 hosting an LCG Worker Node guest, one guest per PROOF Slave, and the Xrootd Server]
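
For the one-VM-per-worker variant, a fresh slave could simply be a new DomU booted from a template configuration; the config file path and naming scheme below are hypothetical.

```python
# Sketch: start the n-th PROOF worker as its own Xen guest, overriding the
# guest name on the xm command line.
import subprocess

def start_worker_vm(index, template="/etc/xen/proof-slave.cfg"):
    name = "proof-slave-%02d" % index
    subprocess.check_call(["xm", "create", template, "name=%s" % name])
    return name
```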

  11. Thanks! Ideas, advice & suggestions please! bagnasco@to.infn.it PROOF Workshop CERN Nov 29-30, 2007
