HEP/NP Computing Facility at Brookhaven National Laboratory (BNL)
Bruce G. Gibbard
17 May 2004
Primary Facility Mission
• US Tier 1 Center for ATLAS
  • Basic Tier 1 functions
    • US repository for ATLAS data
    • Generation of distilled data subsets
    • Delivery of data subsets and analysis capability
  • Tier 1 hub for US ATLAS Grid computing
• Host site computing facility for the Relativistic Heavy Ion Collider (RHIC)
  • Tier 0 functions for 4 RHIC detectors (~1000 physicists)
    • Online recording of raw data; repository for all data
    • Reconstruction and distribution of resultant data to major collaborating facilities … LBNL, RIKEN, etc.
  • Also Tier 1 functions as described for ATLAS above
Current Facility Scale
• Unified operation for RHIC & ATLAS, staffed by 30 FTEs
• HSM based on HPSS & StorageTek
  • Capacity – 4.5 PBytes at 1000 MBytes/sec
• Processor farms based on dual-processor, rack-mounted Intel/Linux nodes
  • Capacity – 2600 CPUs for 1.4 MSI2000
• Central disk based on Sun/Solaris and FibreChannel SAN-connected RAID 5
  • Capacity – ~200 TBytes served via NFS (& AFS)
• OC12 connection to the US ESnet backbone
(A rough arithmetic check of these capacity figures follows below.)
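The quoted capacities imply roughly 540 SI2000 per CPU and about seven weeks to stream the entire tape store at full rate. A minimal sketch of that arithmetic, using only the numbers quoted on this slide (the derived figures are approximations, not measured performance):

```python
# Back-of-the-envelope check of the capacity figures quoted above (approximate).
tape_capacity_bytes = 4.5e15      # 4.5 PBytes of HSM (HPSS/StorageTek) storage
tape_bandwidth_bps = 1000e6       # 1000 MBytes/sec aggregate HSM throughput
farm_cpus = 2600                  # CPUs in the Intel/Linux processor farm
farm_power_si2000 = 1.4e6         # 1.4 MSI2000 aggregate compute capacity

per_cpu_si2000 = farm_power_si2000 / farm_cpus                       # ~540 SI2000 per CPU
full_drain_days = tape_capacity_bytes / tape_bandwidth_bps / 86400   # ~52 days

print(f"~{per_cpu_si2000:.0f} SI2000 per CPU")
print(f"~{full_drain_days:.0f} days to stream the full tape store at peak rate")
```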
Involvement in Multiple Grids
• US ATLAS Grid Testbed
  • Tier 1 & ~11 US ATLAS Tier 2 & 3 sites
  • Evolving versions of Grid middleware over ~3 years
  • Used in production for ATLAS Data Challenge 1 (DC1)
• Grid3+ (follow-on to Grid3)
  • ~24, mostly US, sites running ATLAS, CMS, SDSS, etc.
  • Strongly coupled to tools and services developed by the US Grid projects
  • Currently in production for ATLAS DC2 (a submission sketch follows below)
• LCG-2
  • Currently completing the transition from LCG-1
• The focus of interest and effort is …
  • Understanding and fostering Grid3+ and LCG-2 commonality … while addressing near-term issues of interoperability
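As an illustration of how a job reaches one of these grid sites, the sketch below runs a trivial test job through a Globus GRAM gatekeeper of the kind Grid3+ sites exposed. The gatekeeper contact string is a hypothetical placeholder rather than an actual BNL or Tier 2 endpoint, and the call assumes a valid grid proxy already exists.

```python
# Illustrative only: run a trivial test job at a (hypothetical) Grid3+ site
# through its Globus GRAM gatekeeper, assuming a valid grid proxy is in place.
import subprocess

gatekeeper = "tier2.example.edu/jobmanager-condor"   # placeholder GRAM contact

# globus-job-run <contact> <executable> submits the job synchronously and
# returns its stdout; here the remote worker simply reports its hostname.
result = subprocess.run(
    ["globus-job-run", gatekeeper, "/bin/hostname"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```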
Guiding Principles
• Use commodity hardware and open source software where possible, while employing high-performance commercial technology where necessary
• Avoid major development projects in favor of existing commercial or community-supported software and systems whenever possible
• Maximize flexibility and modularity of facility components while concealing as much of the complexity as possible from users
• Present resources & services (especially on Grids) in as standardized and effective a way as possible, consistent with the constraints of the primary user VOs
Goal for IHEPCCC
• Foster interactions and exchanges leading to …
  • Improved HEP computing effectiveness based on identifying, adapting, and developing very good shared solutions to the common problems we encounter …
    • within our facility fabrics
    • within the Grids in which we participate
    • within the virtual organizations we support
  • Standardized interfaces to users and between components where realities dictate distinct solutions