Explore the fascinating story of Predrag Buncic's experiences at CERN, from his work on middleware architecture to the development of AliEn, a distributed computing environment. Discover the challenges and achievements of working within and outside the grid.
Inside and Outside of the Grid Predrag Buncic
Overview
• Introduction: About myself…
• Part I: Alice, AliEn, ARDA (outsider story)
• Part II: EGEE, gLite (inside the Grid circle)
• Part III: Looking back (Grid in the rear-view mirror)
• Part IV: Looking forward (Grid in the crystal ball)
• Conclusions
CERN, 5 May 2006 - 2
A brief history of my time at CERN… (Middleware Architect)
• Tracking for the RICH detector
• Automatic reconstruction of streamer chamber events
• NA49 software architecture & infrastructure
• Reconstruction chain
• Databases
• File catalogue
• Event visualisation
• Large-scale data management
• DSPACK – Persistent Object Manager for HEP
• In charge of the Production Environment and Databases “Section”
• Designed, developed and deployed AliEn, a self-contained, end-to-end, Grid-like distributed production environment used by Alice
• Participating in CERN committees (GAG, HEPCAL II, RTAGs (Blueprint, ARDA))
CERN, 5 May 2006 - 3
Part I Alice, AliEn, ARDA, Grid… (outsider story) CERN, 5 May 2006 - 4
ALIce ENvironment @ GRID
• AliEn is Not Grid – just like GNU (GNU’s Not UNIX)
• AliEn sits on top of the Grid
• It is meant to:
  • complement the existing Grid middleware, and implement one where it is not available
  • interoperate with mainstream Grid middleware projects
    • JDL (Job Description Language) based on Condor ClassAds
    • Globus/GSI for authentication
  • provide stable high-level user interfaces and API
CERN, 5 May 2006 - 5
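To make the ClassAd-style JDL mentioned above concrete, a job description might look roughly like the sketch below. The attribute names follow the general AliEn/Condor ClassAd style, but the exact keywords, the package version and all paths are illustrative rather than copied from any real AliEn release.

    Executable   = "aliroot";
    Arguments    = "--run 12345";
    Packages     = { "AliRoot::v4-01" };
    InputFile    = { "LF:/alice/cern.ch/user/p/someuser/macros/Config.C" };
    OutputFile   = { "galice.root", "simulation.log" };
    Requirements = ( other.CE == "Alice::CERN::LCG" );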
So, what is AliEn?
• A distributed computing environment developed to meet the needs of the Alice experiment
• A set of services (21 at the moment)
  • SOAP/Web Services (18): core services (brokers, optimizers, etc.), site services, abstract interfaces to resources (SE, CE, FTS) with several backend implementations, Package Manager
  • Other (non-Web) services: LDAP, database proxy, POSIX I/O
• A distributed file and metadata catalogue built on top of an RDBMS
• User interfaces and API: command line, GUI, Web portal, C/C++/perl/java API, ROOT
CERN, 5 May 2006 - 6
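As a flavour of the command-line interface, a short session in the AliEn shell could look roughly as follows. The command names are typical of aliensh, but the prompt, options and paths here are reconstructed from memory and should be read as an illustration, not as exact syntax.

    $ alien login
    aliensh> pwd
    /alice/cern.ch/user/p/pbuncic/
    aliensh> ls macros/
    Config.C
    aliensh> submit jdl/sim.jdl
    aliensh> ps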
Working on AliEn
• Small team (between 2 and 4, plus temporary students)
  • P. Saiz, A. Peters, C. Cristiou (ARDA)
  • J-F. Grosse-Oetringhaus (ALICE)
• Extensive use of Open Source components
  • Perl as the main programming language; more than 180 components
  • C/C++ for critical parts
• What You See Is What You Get
  • Work directly with users, always a running prototype
  • About 40+ Alice sites
• Reality check
  • Alice Data and Physics Challenges
• Collaboration with external partners
  • India, Ericsson research institute in Croatia, HP
CERN, 5 May 2006 - 7
Timeline (2001–2005), from the project start onwards:
• First production (distributed simulation)
• Physics Performance Report (mixing & reconstruction)
• 10% Data Challenge (analysis)
with the emphasis evolving from functionality, to interoperability, to performance, scalability and standards.
CERN, 5 May 2006 - 8
EU FP5 Grid Projects (58M€) – 2000-2004
• Infrastructure: DataTag
• Computing: EuroGrid, DataGrid, Damien
• Tools and Middleware: GridLab, GRIP
• Applications: EGSO, CrossGrid, BioGrid, FlowGrid, Moses, COG, GEMSS, Grace, Mammogrid, OpenMolGrid, Selene
• P2P / ASP / Web services: P2People, ASP-BP, GRIA, MMAPS, GRASP, GRIP, WEBSI
• Clustering: GridStart
CERN, 5 May 2006 - 9
Grid projects in the world
• National initiatives: UK e-Science Grid; Netherlands – VLAM, PolderGrid; Germany – UNICORE, Grid proposal; France – Grid funding approved; Italy – INFN Grid; Eire – Grid proposals; Switzerland – network/Grid proposal; Hungary – DemoGrid, Grid proposal; Norway, Sweden – NorduGrid
• USA: NASA Information Power Grid, DOE Science Grid, NSF National Virtual Observatory, NSF GriPhyN, DOE Particle Physics Data Grid, NSF TeraGrid, DOE ASCI Grid, DOE Earth Systems Grid, DARPA CoABS Grid, NEESGrid, DOH BIRN, NSF iVDGL
• EU projects: DataGrid (CERN, ...), EuroGrid (Unicore), DataTag (CERN, …), Astrophysical Virtual Observatory, GRIP (Globus/Unicore), GRIA (industrial applications), GridLab (Cactus Toolkit), CrossGrid (infrastructure components), EGSO (solar physics)
CERN, 5 May 2006 - 10
European DataGrid (EDG)
• People: a total of 21 partners, over 150 programmers from research and academic institutes as well as industrial companies
• Status:
  • Testbed including approximately 1000 CPUs at 15 sites
  • Several improved versions of the middleware software (final release end 2003)
  • Several components of the software integrated in LCG
  • Software used by partner projects: DataTAG, CROSSGRID
• But users from the LHC experiments were not quite happy with the quality and functionality of the EDG software, in particular due to the lack of support for distributed analysis
CERN, 5 May 2006 - 11
ARDA (RTAG #11) Mandate
• To review the current Distributed Analysis projects in the experiments:
  • capture their architectures in a consistent way
  • confront them with the HEPCAL II use cases
  • review the functionality of experiment-specific packages
  • review the state of advancement and their role in the experiment
  • identify similarities and components that could be integrated in the generic Grid middleware
• To consider the interfaces between Grid, LCG and experiment-specific services
• To confront the current projects with critical Grid areas (security)
• To develop a roadmap specifying wherever possible the architecture, the components and potential sources of deliverables to guide the medium-term (2 year) work of the LCG and the DA planning in the experiments
CERN, 5 May 2006 - 12
Key ARDA services (diagram): User Interface / API, Authentication, Authorisation, Auditing, Accounting, Job Provenance, Information Service, Grid Monitoring, Metadata Catalogue, File Catalogue, DB Proxy, Workload Management, Data Management, Package Manager, Computing Element, Storage Element, Job Monitor
CERN, 5 May 2006 - 13
ARDA API
• An ARDA API would be a library of functions used for building client applications such as graphical Grid analysis environments (e.g. GANGA) or Grid Web portals.
• The same library can be used by Grid-enabled application frameworks to access the functionality of the Grid services and to access or upload files on the Grid.
CERN, 5 May 2006 - 14
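To make the idea concrete, a client-side library along these lines could be imagined as in the sketch below. This is a hypothetical C++ interface, not the actual ARDA, GANGA or gLite API; all class and method names (GridSession, submitJob, etc.) are invented for illustration.

    // Hypothetical sketch of a high-level Grid analysis API (not a real ARDA interface).
    #include <string>
    #include <vector>

    class GridSession {
    public:
        // Connect to the Grid Access Service and authenticate with the user's proxy.
        static GridSession connect(const std::string& gasEndpoint);

        // File catalogue operations exposed through the session.
        std::vector<std::string> listDirectory(const std::string& lfn) const;

        // Submit a job described in JDL; returns an opaque job identifier.
        std::string submitJob(const std::string& jdl);

        // Query the status of a previously submitted job.
        std::string jobStatus(const std::string& jobId) const;
    };

    // A portal or analysis shell would be built on top of such calls, e.g.:
    //   GridSession s  = GridSession::connect("https://gas.example.org:8443");
    //   std::string id = s.submitJob(myJdl);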
Recommendation
• The ARDA services should present a Grid Access Service and API (GAS/API):
  • enable applications, analysis shells, experiment frameworks etc. to interface to the distributed analysis services through a well-defined API, with bindings to a required set of programming languages
  • a common API to the distributed Grid environment for analysis should be shared by all LHC experiments
• LCG should set up a project to develop the prototype, with these main goals:
  • to develop the specifications for functionality and interfaces of the ARDA services and API
  • to allow a realistic investigation of the possible commonality between the experiments in the API
  • to perform real-world OGSI modelling, functionality and performance tests, and to address the issue of how to deploy and run Grid services along with the existing ones on the LCG-1 resources
• In synergy with R&D in the Grid middleware projects, e.g. in the areas of VO management and security infrastructure
• Timescale: 6 months
CERN, 5 May 2006 - 15
Part II EGEE, gLite (insider story) CERN, 5 May 2006 - 16
Enabling Grids for e-Science in Europe
• Mission:
  • Deliver a 24/7 Grid service to European science; re-engineer and “harden” Grid middleware for production; “market” Grid solutions to different scientific communities
  • Be the first international multi-science production Grid facility
• Key features: 100 million euros / 4 years, >400 software engineers + service support, 70 European partners
CERN, 5 May 2006 - 17
EGEE Middleware Activity
• Mandate:
  • Re-engineer and harden Grid middleware (AliEn, EDG, VDT and others)
  • Provide production-quality middleware
CERN, 5 May 2006 - 18
EGEE Middleware – gLite (evolution: Globus 2-based LCG-1/LCG-2 → Web-services-based gLite-1/gLite-2)
• Starts with existing components (AliEn, EDG, VDT and others)
• Prototyping: short development cycles for fast user feedback
• Initial web-services based prototypes being tested with representatives from the application groups
• Application requirements: http://egee-na4.ct.infn.it/requirements/
CERN, 5 May 2006 - 19
gLite Design Team • Formed in December 2003 • Initial members: • UK: Steve Fisher • IT/CZ: Francesco Prelz • Nordic: David Groep • CERN: Predrag Buncic, Peter Kunszt, Frederic Hemmer, Erwin Laure • VDT: Miron Livny • Globus representatives joined in 2005 • Ian Foster/Kate Keahey • Started service design based on component breakdown defined by ARDA • Leverage experiences and existing components from AliEn, VDT, and EDG. CERN, 5 May 2006 - 20
Design team working document.. CERN, 5 May 2006 - 21
Design team working document.. CERN, 5 May 2006 - 22
gLite Architecture (DJRA1.1) CERN, 5 May 2006 - 23
ARDA and JRA1 CERN, 5 May 2006 - 24
gLite Prototype
• AliEn “shell” as UI
• Workload Management:
  • AliEn Task Queue and Job Monitor
  • CE -> Condor-G -> blahp -> PBS/Condor/LSF
• Data Management:
  • AliEn File & Metadata Catalogue
  • AliEn SE with Castor & dCache backends, SRM interface, gridFTP for transfers
  • Replica Location Service
  • AliEn File Transfer Queue and Daemons
  • Aiod/GFAL for POSIX-like file access
• GAS (Grid Access Service) and API
• Package Manager: AliEn implementation adapted to EGEE
• Security:
  • VOMS for certificate handling / SE gridmap files
  • MyProxy for certificate delegation in the GAS
CERN, 5 May 2006 - 25
Middleware Services in gLite – the “Periodic System of Grid Middleware Services” (diagram): API, GAS, Accounting, WM, DM, AuthZ, Auth, RB (TQ), FPS (FQ), IS, PM, FC, JW (JA), CE, SE (?), LRC, SCE (LSF, ...), SRM
CERN, 5 May 2006 - 26
Grid Access Service (GAS)
• The Grid Access Service represents the user entry point to a set of core services (File Catalogue, Metadata, WMS, …).
• Many of the User Interface API functions are simply delegated to methods of the GAS; in turn, many of the GAS functions are delegated to the appropriate service.
CERN, 5 May 2006 - 27
Storage Element
• POSIX-like I/O library
• Can be used with Logical File Names or GUIDs
• Authentication and authorization based on Grid credentials
• Implementation based on AliEn I/O and GFAL
(Diagram: client → AIOD (gLite I/O) server → disk cache and, via SRM, MSS backends such as Castor and dCache; a Local Replica Catalog resolves the file location.)
CERN, 5 May 2006 - 28
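A minimal sketch of what POSIX-like access through such a library could look like from an application. The grid_* names are placeholders and not the real AIOD or GFAL entry points; in the real library grid_open would resolve the LFN or GUID through the catalogue and talk to the SRM/SE, whereas the stubs here simply fall back to the local filesystem so the example compiles and runs.

    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <unistd.h>

    // Placeholder stand-ins for a gLite I/O / GFAL-style library (names invented).
    static int grid_open(const char* lfn, int flags) {
        if (std::strncmp(lfn, "lfn:", 4) == 0) lfn += 4;  // strip the "lfn:" scheme
        return open(lfn, flags);                          // stand-in for catalogue + SRM lookup
    }
    static long grid_read(int fd, void* buf, long n) { return read(fd, buf, (size_t)n); }
    static int  grid_close(int fd)                   { return close(fd); }

    int main() {
        // Open a file by Logical File Name; Grid credentials would normally come from
        // the user's proxy. With these local stubs the open will only succeed if the
        // same path happens to exist on the local disk.
        int fd = grid_open("lfn:/alice/cern.ch/user/p/someuser/Config.C", O_RDONLY);
        if (fd < 0) { std::perror("grid_open"); return 1; }

        char buf[4096];
        long n = grid_read(fd, buf, sizeof(buf));
        std::printf("read %ld bytes\n", n);
        return grid_close(fd);
    }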
File Catalogue (diagram of the hierarchical namespace): user directories such as /cern.ch/user/a/admin/, /cern.ch/user/a/aliprod/, /cern.ch/user/f/fca/, /cern.ch/user/p/psaiz/as/…, /cern.ch/user/b/barbera/ (ALICE USERS); per-job output directories 36/, 37/, 38/, each holding stdin, stdout and stderr (ALICE LOCAL); production data such as /simulation/2001-01/V3.05/Config.C and grun.C (ALICE SIM); plus a Tier1 view of the tree.
CERN, 5 May 2006 - 29
Package Manager
• Common packages (ROOT, POOL, …) and VO & user packages
• Distributed through a site package cache and a worker-node cache
CERN, 5 May 2006 - 30
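The cache layering on this slide can be sketched as a simple lookup order: worker-node cache first, then the shared site cache, and only then the central VO repository. The paths, the repository URL and the logic below are invented for illustration; the real AliEn Package Manager works differently in detail.

    #include <filesystem>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical resolution order for a package such as "ROOT::v5-10-00".
    std::string locate_package(const std::string& pkg) {
        const std::vector<std::string> caches = {
            "/var/tmp/wn-packages/",   // worker-node cache (private to the node)
            "/opt/site-packages/"      // shared site package cache
        };
        for (const auto& base : caches)
            if (std::filesystem::exists(base + pkg))
                return base + pkg;                       // cache hit, nothing to fetch
        // Cache miss: the real system would now fetch and unpack the package into the
        // site cache; here we just report where it would come from.
        return "packman://vo-repository/" + pkg;         // placeholder central repository
    }

    int main() { std::cout << locate_package("ROOT::v5-10-00") << "\n"; }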
“Open Issues”
• Some services were not foreseen in the TA, and therefore do not fit well with the current organization
  • E.g. GAS; asynchronous communication
• Prototype testbed system administration
• People are really stretched in providing the promised formal deliverables as well as the prototype
• The IT/CZ cluster does not seem to have enough effort available to provide prototype components early enough
  • The current draft release plan only foresees components in September and November
CERN, 5 May 2006 - 31
Result…
• One year of development within EGEE
• We enthusiastically adapted to the EGEE software process
• Provided the prototype which was used in the first 9 months of the project
• Exposed to new users: found and fixed many bugs
• Represents an implementation of the EGEE architecture
• The “prototype” was retired by the management after the 2nd EGEE Conference, with the explanation that the project could support only one software stack
CERN, 5 May 2006 - 32
Reducing M/W Scope (layered view)
• API
• Grid Middleware
• Common Service Layer (description, discovery, resource access)
• Baseline Services
• Low-level network and message transport layer (TCP/IP -> HTTP -> SOAP)
CERN, 5 May 2006 - 33
Part III Looking in the rearview mirror CERN, 5 May 2006 - 34
It does not help…
• If a project requires a subproject and 4 management layers
• If a project is multipolar, with several strong partners, each one with a different agenda
• If a project is so big that it cannot react fast enough to accommodate inevitable changes in technology and in user requirements
• If a project starts having a life of its own…
  • Continuation of the project and delivering paper deliverables become self-perpetuating and more important than end-user satisfaction
CERN, 5 May 2006 - 35
But, it helps…
• To have a simple and effective software process
  • We realized that the lack of one was the biggest objection to AliEn
• To have a continuous build & test infrastructure
  • EGEE had one, but it was complex and intrusive
• To have a simple installation/update procedure
  • Should be OS independent, in user space
• To separate development, test and deployment teams
  • Otherwise the project will always stay in the development stage
• To have documentation
CERN, 5 May 2006 - 36
AliEn BITS
• Build, Integration and Test System
• An attempt to set up an automatic system to continuously build and test AliEn and its components
• 5 platforms: i686-pc-linux-gnu, ia64-unknown-linux-gnu, powerpc-apple-darwin8.1.0, i686-apple-darwin8.6.1, x86_64-unknown-linux-gnu
CERN, 5 May 2006 - 37
AliEn Tests
• AliEn Unit Tests are run after building and installing all required packages
  • Save the log files for the failed tests and trigger warnings
  • Save the log files of all AliEn services during the test
  • Publish relevant logs on the Web
• This is followed by the installation test (bootstrapping a V.O.) and functional tests
• Performance tests are logged and archived
CERN, 5 May 2006 - 38
AliEn Release Process
• XP-style planning
  • Individual tasks requiring no more than 7 days
  • 2-3 iterations + one week for testing and release preparation
• Regular release cycles
  • 9 releases since 24.07.05, a new release each month
• AliEn Wiki
  • Replacement for the old portal; users contribute how-to documents
• Bug reporting via the Savannah portal
  • Integrated with the Wiki to help produce release notes
• Deployment on LCG and AliEn sites: 40+ sites, 2-3 days
CERN, 5 May 2006 - 39
New in AliEn 2
• Better File Catalogue backend
  • The central file catalogue keeps the LFN,GUID -> SE mapping
  • The Storage Element is responsible for providing the GUID,LFN -> SURL mapping
  • Reduced number of tables; much smaller and faster than before
  • Tested with up to 50M entries
• New (tactical) SE and POSIX I/O
  • Using the xrootd protocol in place of aiod (gLite I/O)
CERN, 5 May 2006 - 40
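A rough sketch of the table split described above, as it could be expressed in SQL. Table and column names are invented for illustration; the real AliEn 2 schema is different and considerably more elaborate.

    -- Central catalogue: logical namespace and replica placement only.
    CREATE TABLE lfn_index (
        lfn   VARCHAR(255) PRIMARY KEY,   -- logical file name
        guid  CHAR(36)     NOT NULL,      -- globally unique identifier
        se    VARCHAR(64)  NOT NULL       -- Storage Element holding a replica
    );

    -- Kept at each Storage Element: mapping to the physical location.
    CREATE TABLE guid_to_surl (
        guid  CHAR(36)     NOT NULL,
        lfn   VARCHAR(255),
        surl  VARCHAR(255) NOT NULL       -- site URL understood by the SRM
    );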
New in AliEn 2 • New API Service and ROOT API • GAPI as replacement for GAS and DB Proxy Service • Shell, C++, perl, java bindings • [python?] • Analysis support • Batch and interactive (HEPCAL II model) • ROOT/PROOF interfaces • Complex XML datasets and tag files for event level metadata • Handling of complex workflows CERN, 5 May 2006 - 41
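From ROOT, usage through the generic TGrid interface and the AliEn plug-in could look roughly like the macro below. The calls follow ROOT's TGrid/TGridResult classes as I recall them, so treat the exact signatures and the catalogue paths as approximate.

    // ROOT macro sketch: connect to AliEn and query the file catalogue.
    {
       // Loads the AliEn plug-in and authenticates the user; sets the global gGrid.
       TGrid::Connect("alien://");

       // Browse the catalogue through the generic TGrid interface.
       TGridResult* res = gGrid->Ls("/alice/simulation/2001-01");
       if (res) res->Print();

       // An event-level query producing a dataset that can be fed to analysis/PROOF
       // (argument meaning approximate: catalogue path, file name pattern).
       TGridResult* ds = gGrid->Query("/alice/simulation/2001-01", "*/galice.root");
       if (ds) Printf("query returned %d entries", ds->GetEntries());
    }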
Remaining issues
• Security
  • How to reconcile a (Web) Service Oriented Architecture with growing security paranoia aimed at closing all network ports?
  • How to fulfil the legal requirements for process and file traceability?
• Software distribution
  • At present, a package manager service requires a shared disk area on the site; can this service be avoided?
• Data Management
  • It must work absolutely reliably, but can we make it simpler?
• Intra-VO scheduling
  • How to manage priorities within a VO, in particular for analysis?
CERN, 5 May 2006 - 42
Part IV Looking in the crystal ball CERN, 5 May 2006 - 43
Scale and scalability
• Grid is big – at least we would like it to be that way
• Big distributed systems often spell big trouble: scalability of services, service discovery, deployment, interoperability, security
• Constructing a working system that meets all these requirements and scales (while not compromising security) still remains a challenge
• However, on a smaller scale, we do have working solutions
• The solution is to reduce the scale of the ‘visible grid’ to a manageable size and then apply solutions that are already known to work
CERN, 5 May 2006 - 44
Before (LCG-1): Flat Grid
• Each user interacts directly with site services (diagram: sites A–F and a central RB)
• The Virtual Organisation is an attribute of a user
CERN, 5 May 2006 - 45
Now (gLite): Hierarchical Grid(s)
• Grid Service Provider: hosts core services (per V.O.)
• Resource Provider: hosts CE, SE services (per V.O.)
• Virtual Organisation: a collection of sites, users & services
• The V.O. has an identity on the Grid and can act on users’ behalf
CERN, 5 May 2006 - 46
Virtual Organizations
• Thanks to AliEn, the concept of VO identity got recognition
• The VO acts as an intermediary on behalf of its users
  • Task Queue – repository for user requests (AliEn, Dirac, Panda)
  • Site services operating in pull mode
• In gLite a V.O. is allowed to ‘glide in’ its own CE component, which will run on the site under the V.O. identity
• Recently this model was extended down to the worker node
  • A Job Agent running under the VO identity on the worker node serves many real users
  • To satisfy site concerns about the traceability of user actions, Job Agents have to obey the rules: run the user’s process under a ‘sandbox’ created by the Grid equivalent of ‘sudo’ (glexec)
  • A site has the final say and can grant or deny access to resources (CPU, storage)
• This approach was recently blessed by the Design Team
CERN, 5 May 2006 - 47
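The pull model plus glexec sandboxing can be sketched as the agent loop below. This is schematic C++, not the actual AliEn JobAgent; the match_job_from_task_queue function is a placeholder for the call to the VO's central Task Queue, and the glexec invocation is only indicative of where the identity switch happens.

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Placeholder: ask the VO's Task Queue for a job matching this worker node
    // (platform, installed packages, local policies). Empty string = nothing matched.
    std::string match_job_from_task_queue() { return ""; }

    int main() {
        for (;;) {
            std::string jdl = match_job_from_task_queue();   // the agent pulls; the site never pushes
            if (jdl.empty()) break;                          // no work for us: exit and free the slot

            // Stage input, then drop from the VO identity to the real user's identity
            // before running the payload. glexec (or an equivalent site-controlled tool)
            // makes the switch, so the site keeps traceability and the final say.
            int rc = std::system("glexec ./run_payload.sh"); // indicative only; real invocation differs
            std::cout << "payload finished with code " << rc << "\n";

            // Register outputs in the file catalogue, report status back to the Task Queue...
        }
        return 0;
    }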
Virtual Machines
• Once it is accepted that a Job Agent can execute privileged commands, we are a step closer to convincing the sites that they should let us run Grid jobs inside a Virtual Machine
• This can provide perfect process and file sandboxing
  • Software which is run inside a VM cannot negatively affect the execution of another VM
• Xen takes a novel approach by eliminating sensitive instructions directly in the guest OSs’ original source code, which is called para-virtualization
  • These instructions are replaced with equivalent operations or emulated by replacing them with hypercalls, which call equivalent procedures in the VMM, or hypervisor
  • The hypervisor runs as the most privileged kernel, while guest OS kernels run less privileged on top of the hypervisor
  • This method yields close-to-native performance
CERN, 5 May 2006 - 48
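For reference, a minimal para-virtualised guest definition in the classic Xen configuration format might look like the sketch below. All names, paths and values are made up, and the exact keys depend on the Xen version in use.

    # /etc/xen/gridwn01 -- hypothetical worker-node guest
    name   = "gridwn01"
    kernel = "/boot/vmlinuz-2.6-xenU"          # para-virtualised guest kernel
    memory = 512                               # MB of RAM for the guest
    disk   = ['file:/srv/xen/gridwn01.img,sda1,w']
    root   = "/dev/sda1 ro"
    vif    = ['']                              # one network interface with default settings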
Virtual File Systems
• Once we are allowed to run a Virtual Machine, a whole new world of possibilities opens
• We can (re)use a lot of code which was previously in the system/kernel domain
• We can build dedicated VMs with special kernel modules built in to support various fancy file systems
• For example, the HTTP-FUSE-CLOOP file system could be used for software distribution (Xenoppix = Xen + Knoppix, boots from a 5 MB image!)
CERN, 5 May 2006 - 49
Overlay Networks
• Use Instant Messaging to create an overlay network and avoid opening ports
  • IM can be used to route SOAP messages between central and site services
  • No need for incoming connectivity on the site head node
  • Provides presence information for free, which simplifies configuration and discovery
• Jabber
  • A set of open technologies for streaming XML between two clients
  • Open XML protocols for IM, presence, etc.
  • Many open-source implementations
  • Peer-to-peer server network: distributed architecture, clients connect to servers, direct connections between servers, domain-based routing similar to email
  • All entities have presence – availability information
CERN, 5 May 2006 - 50
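One way such routing could look on the wire: a SOAP envelope carried inside an ordinary Jabber/XMPP message stanza. The addresses, the body text and the payload element are invented for illustration; the real AliEn/gLite prototypes may have packaged the SOAP traffic differently.

    <message from='ce@site-a.example.org' to='broker@vo.example.org' type='normal'>
      <body>SOAP call routed over the IM overlay network</body>
      <env:Envelope xmlns:env='http://schemas.xmlsoap.org/soap/envelope/'>
        <env:Body>
          <getQueueStatus xmlns='urn:example:alien:ce'/>
        </env:Body>
      </env:Envelope>
    </message>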