Eleonora Luppi, Padova, 16 October 2003 • BaBar-Grid Status and Prospects
Distributed Computing • BaBar currently has a largely distributed computing system • 5 Tier-A sites • Sites that provide large resources for BaBar • Resources agreed via MoUs • >20 Tier-C sites • University sites • Produce the majority of BaBar Monte Carlo
Motivation • Three large analysis sites • Would like the flexibility to use other sites as well • Need a way for users to submit jobs automatically so that they run where the data and CPU are available • GridKa is also intended to be accessible only via Grid technology • Could reduce the manpower needed to run Simulation Production • Each of the 25 sites spends ~0.5 FTE keeping the system running • Sites go to great lengths to make sure everything is identical • Data distribution to smaller sites could also be integrated
CE: Computing Element • SE: Storage Element • RB: Resource Broker • WN: Worker Node • RC: Replica Catalogue • VO: Virtual Organisation
Present Infrastructure • SLAC • Will install LCG-1 • SLAC/SCS actively involved in Grid security (PPDG and LCG) • Karlsruhe • LCG-1 User Interface installed and LCG-1 deployed • France • Still nothing released for users • INFN • LCG-1 installed in Ferrara (Grid.it) • The Genius interface, developed under EDG 1.4.x, is being migrated to the new release • UK • EDG 2.x installed • RB installed at Imperial College • Imperial College will have an LCG-1 RB soon
BaBarGrid Analysis • Successful test using EDG v1.4.x • Prepare the executable locally • Copy the executable to an SE and register it in the RC • Prepare a JDL file to select a CE with a close SE where the executable is available • Copy the executable to the WN • Run and produce an ntuple • Copy the ntuple to the close SE and register it in the RC • The aim is to reproduce this procedure using LCG-1
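The broker-side logic behind the "close SE" step above can be sketched in plain Python (the site names and logical file name are invented for illustration; in the real system this matchmaking is performed by the EDG/LCG Resource Broker from the JDL requirements and the Replica Catalogue):

```python
# Sketch of the matchmaking behind the JDL "close SE" requirement:
# pick the CEs whose close SE already holds a replica of the file
# registered in the Replica Catalogue. All names are hypothetical.

# Replica Catalogue: logical file name -> SEs holding a replica
replica_catalogue = {
    "lfn:babar-analysis-exe": {"se.ferrara.infn.it", "se.ic.ac.uk"},
}

# Each CE is paired with its close SE
close_se = {
    "ce.ferrara.infn.it": "se.ferrara.infn.it",
    "ce.ic.ac.uk": "se.ic.ac.uk",
    "ce.gridka.de": "se.gridka.de",
}

def match_ce(lfn, replica_catalogue, close_se):
    """Return the CEs whose close SE holds a replica of lfn."""
    replicas = replica_catalogue.get(lfn, set())
    return sorted(ce for ce, se in close_se.items() if se in replicas)

print(match_ce("lfn:babar-analysis-exe", replica_catalogue, close_se))
# ce.gridka.de is excluded: its close SE holds no replica
```

The point of the sketch is that the job follows the data: only sites whose close SE already holds the registered executable are eligible to run it.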
Simulation Production (SP) • Monte Carlo runs at 25 sites • Includes all Tier-As • Generate three times the hadronic rate • MC data accounts for the largest share of computing resources • Generation, analysis, disk/tape, ... • Managed centrally • Allocations, modes, luminosity weighting
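The centrally managed, weighted allocation described above can be illustrated with a minimal sketch (the event counts and site weights are invented; the real allocation also accounts for modes and luminosity weighting):

```python
# Minimal sketch of centrally managed MC allocation: events are split
# across sites in proportion to each site's capacity. All figures below
# are hypothetical.

def allocate(total_events, site_capacity):
    """Split total_events across sites proportionally to capacity."""
    total_capacity = sum(site_capacity.values())
    return {
        site: round(total_events * cap / total_capacity)
        for site, cap in site_capacity.items()
    }

# "Three times the hadronic rate": 3 MC events per hadronic event
hadronic_events = 1_000_000                 # invented figure
mc_events = 3 * hadronic_events

site_capacity = {"Ferrara": 2, "Padova": 3, "Karlsruhe": 5}  # relative weights
print(allocate(mc_events, site_capacity))
```

A proportional split like this is the simplest form of the central management the slide refers to; the production system layers mode assignments and luminosity weights on top of it.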
BaBarGrid MC Production • The MC production system, already tested on EDG v1.4.x, has been ported to LCG-1 • The BaBar Monte Carlo production RPM is already included in the Grid.it distribution • First tests have started (in Ferrara, Padova, Legnaro, Milano, Trieste, Napoli, Bologna, Bari and Catania) • Genius is being migrated to the new release with a basic set of services for MC
Future Plans • 11/2003: Genius running under LCG-1 with a more complete set of services for MC production • 12/2003: Monte Carlo Simulation Grid in production at a few sites • Objectivity federations accessible at different sites • 01/2004: A full-scale analysis running on the Grid and producing physics results (~10,000 8-hour CPU jobs producing ~500 GB of output) • New security method (Virtual Smart Cards) being deployed at SLAC in the next few months • 02/2004: LCG-1 deployed at SLAC • BaBar RB installed at Ferrara • 02/2004: MC in production at a larger number of sites • Analysis package tested and ready • Increase in BaBarGrid resources • RC integrated with the BaBar book-keeping system (discussion in progress)
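The scale of the planned full-scale analysis can be checked with simple arithmetic, using only the figures quoted above:

```python
# Back-of-envelope check of the full-scale analysis figures:
# ~10,000 jobs of ~8 CPU-hours each, producing ~500 GB in total.
jobs = 10_000
hours_per_job = 8
output_gb = 500

cpu_hours = jobs * hours_per_job       # total CPU time consumed
mb_per_job = output_gb * 1024 / jobs   # average output per job

print(cpu_hours)   # 80000 CPU-hours in total
print(mb_per_job)  # 51.2 MB of output per job on average
```

So the exercise amounts to roughly 80,000 CPU-hours and about 50 MB of output per job, which gives a sense of the Grid resources the plan assumes.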