This presentation surveys the distributed computing resources used by BaBar, including the Tier-A centres (SLAC, RAL, IN2P3, Padova, GridKa) and their use for data analysis, skimming, and prompt reconstruction. It also describes how the collaboration has broken with the centralised computing model, and the prospects for adopting grid tools and grid authorisation.
Distributed Computing Resources
• Tier A centres: SLAC, RAL, IN2P3, Padova, GridKa
• Grid
RAL
• Analysis and skimming
• Typically 40 users, mostly non-UK
• 39 TB now, 21 TB imminent
• 368 CPUs (1.0-2.4 GHz)
• 2 farms, running Red Hat 7.2 and 7.3
• Validation of Red Hat 7.3 by BaBar is a top priority
CPUs just arrived (photo slide)
RAL Tier 1/A usage
• Usage plots: http://www.gridpp.ac.uk/tier1a
UK Tier B/C
• ~9 university sites, each with ~80 CPUs and a few TB of disk
• Used for SP (and a little analysis)
• Not open to all BaBarians - yet
• New regional grid centres (ScotGrid…) are expected to see strong BaBar use
IN2P3
• Objectivity centre
• 453 × 2 CPUs available (a good fraction in use)
• 30 TB available, 16 TB on order (divided between Objectivity and Xrootd according to need)
• 133 TB available through HPSS (with new 200 GB cartridges)
Padova
• PromptReco, SP, and analysis farm
• 194 CPUs (various types) for reprocessing, 51 for SP, 53 for analysis (plus servers)
• 38 TB now, 19 TB arriving
Karlsruhe - GridKa
• SP now (analysis later)
• Starting with 16 nodes (of 300+ nodes at the centre)
The Tier A Success Story
• BaBar has broken the 'centralised computing' model, which was based on (a) prejudice and (b) experience
• This was essential, as one site (SLAC) could not support all activities of the whole collaboration
• Success thanks to generous BaBar rebates, imaginative funding agencies, proactive user support at sites, and the adaptability of the collaboration
Grid comes next
• Tools to manage distributed resources
• Grid tools are an explicit funding requirement for GridKa, and are linked to funding elsewhere too. And it makes sense
Grid Authorisation
• Have a VO (Virtual Organisation) that works
• Upgrading to the VSC (Virtual Smart Card) method as it becomes available
• Pool accounts
• Mutual recognition of certificates: a set of Certificate Authorities recognised by all BaBarGrid sites (see the sketch after this list)
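How pool accounts and mutual CA recognition fit together can be pictured with a minimal Python sketch. Everything here is hypothetical: the CA names, the account pool, and the assign_pool_account function are illustrative, not BaBar's actual implementation.

    # Hypothetical sketch: map an authenticated grid user onto a local
    # pool account, accepting only certificates from recognised CAs.
    RECOGNISED_CAS = {
        "/C=UK/O=eScienceCA/CN=CA",     # placeholder CA distinguished names
        "/C=FR/O=CNRS/CN=GRID-FR",
    }

    pool_accounts = ["babar%03d" % i for i in range(1, 51)]  # babar001..babar050
    assigned = {}                                            # subject DN -> account

    def assign_pool_account(subject_dn, issuer_dn):
        """Lease a local pool account to an authenticated grid user."""
        if issuer_dn not in RECOGNISED_CAS:
            raise PermissionError("certificate issued by an unrecognised CA")
        if subject_dn in assigned:        # a returning user keeps their account
            return assigned[subject_dn]
        if not pool_accounts:
            raise RuntimeError("account pool exhausted")
        account = pool_accounts.pop(0)    # lease the next free account
        assigned[subject_dn] = account
        return account

    print(assign_pool_account("/C=UK/O=eScienceCA/CN=some user",
                              "/C=UK/O=eScienceCA/CN=CA"))   # -> babar001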
Data Location
• The existing skimData tool is gaining Grid features
• Extended skimData tables let users find out whether files exist at other sites (see the sketch below)
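Conceptually, the extended tables let a user ask "which sites hold this collection?". A minimal sketch, assuming a hypothetical locations(collection, site) table and made-up collection names rather than the real skimData schema:

    import sqlite3

    # Hypothetical file-location lookup; the real skimData tables differ.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE locations (collection TEXT, site TEXT)")
    db.executemany("INSERT INTO locations VALUES (?, ?)", [
        ("run3-skim-tau",   "SLAC"),
        ("run3-skim-tau",   "RAL"),
        ("run3-skim-charm", "IN2P3"),
    ])

    def sites_with(collection):
        """Return the sites known to hold a given skim collection."""
        rows = db.execute(
            "SELECT site FROM locations WHERE collection = ?", (collection,))
        return [site for (site,) in rows]

    print(sites_with("run3-skim-tau"))   # ['SLAC', 'RAL']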
Data movements
• BdbServer++, a grid version of BdbServer, for accessing data across the network/grid (see Tim Adye's talk, and the sketch below)
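The usage pattern can be sketched as "locate a remote copy, then stage it locally". Here the LOCATIONS table and the transfer step are placeholders; BdbServer++'s real interface is described in Tim Adye's talk.

    # Hypothetical sketch of the BdbServer++ pattern: find a site that
    # holds the wanted collection, then stage it to the local site.
    LOCATIONS = {"run3-skim-tau": ["SLAC", "RAL"]}   # collection -> sites

    def stage_collection(collection, local_site):
        remote = [s for s in LOCATIONS.get(collection, []) if s != local_site]
        if not remote:
            raise LookupError("no remote copy of " + collection)
        source = remote[0]        # naive choice; could rank by network cost
        # The actual transfer (e.g. a grid copy tool) would happen here.
        print("would fetch %s from %s to %s" % (collection, source, local_site))

    stage_collection("run3-skim-tau", "IN2P3")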
Job submission
• BaBar is a member of EDG (European Data Grid: LHC plus some others)
• EDG will hand over to LCG (LHC Computing Grid)
• Use EDG and LCG technology to benefit from LHC computing manpower, but without being tied to the whole package
• Experience with the Resource Broker at IC+Spain, and the Replica Catalog at Manchester
• Job submission between sites is becoming routine, and is on the verge of becoming useful (see the sketch below)
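For flavour, EDG job submission is driven by a JDL (Job Description Language) file handed to the resource broker. A minimal Python sketch, assuming the EDG user interface (and its edg-job-submit command) is installed; analyse.sh and the file names are placeholders:

    import subprocess

    # Write a minimal JDL description; the fields are standard EDG ones.
    jdl = """\
    Executable    = "analyse.sh";
    StdOutput     = "analyse.out";
    StdError      = "analyse.err";
    InputSandbox  = {"analyse.sh"};
    OutputSandbox = {"analyse.out", "analyse.err"};
    """

    with open("analysis.jdl", "w") as f:
        f.write(jdl)

    # Hand the job to the resource broker, which matches it to a
    # suitable compute element (requires the EDG UI to be installed).
    subprocess.run(["edg-job-submit", "analysis.jdl"], check=True)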
Present status
• Grid at many sites
• Storage Elements and Compute Elements set up
• Incompatible versions of EDG releases, Globus, etc. (Red Hat versions; EDG on 6.2)
• LCG forking from EDG
• LCG-1 deployment early July (in parallel with EDG-2)
• Suggestion: go with LCG rather than EDG
• Reliable service by January 2004
Conclusions
• Tier A expansion will continue
• Lots of BaBarGrid activity
• The Grid can be assimilated within the BaBar computing model
• We will see more and more Grid tools in use