ATLAS Tier-3 in Geneva
Szymon Gadomski, Uni GE at CSCS, November 2009
• the Geneva ATLAS Tier-3 cluster
• what it is used for
• recent issues and long-term concerns
ATLAS computing in Geneva
• 268 CPU cores
• 180 TB for data, 70 TB of it in a Storage Element
• special features:
  • direct line to CERN at 10 Gb/s
  • latest software via CERN AFS
  • SE in the Tiers of ATLAS since summer 2009
  • FTS channels from CERN and from the NDGF Tier 1
Networks and systems
Setup and use
• Our local cluster
  • log in and have an environment to work with ATLAS software, both offline and trigger
  • develop code, compile, interact with the ATLAS software repository at CERN
  • work with nightly releases of ATLAS software, normally not distributed off-site but visible on /afs
  • disk space, visible as normal Linux file systems
  • use of final analysis tools, in particular ROOT (see the sketch below)
  • an easy way to run batch jobs
• A grid site
  • tools to transfer data from CERN as well as from and to other grid sites worldwide
  • a way for ATLAS colleagues, Swiss and other, to submit jobs to us
  • ways to submit our jobs to other grid sites
• ~55 active users, 75 accounts, ~90 including old ones
• not only Uni GE; an official Trigger development site
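To make the "final analysis tools, in particular ROOT" item concrete, here is a minimal PyROOT sketch of the kind of local job a user might run against ntuples on the cluster's disk; the file path, tree name and branch name are hypothetical placeholders, not actual Geneva data.

```python
# Minimal PyROOT sketch of a local "final analysis" job on the Tier-3.
# The file path, tree name and branch name are hypothetical placeholders.
import ROOT

f = ROOT.TFile.Open("/data/atlas/user/example_ntuple.root")  # local disk, seen as a normal file system
tree = f.Get("CollectionTree")                                # hypothetical tree name

h = ROOT.TH1F("h_pt", "Leading jet p_{T};p_{T} [GeV];events", 100, 0.0, 500.0)
for event in tree:
    # 'jet_pt' is a hypothetical vector branch; use whatever the ntuple provides
    if event.jet_pt.size() > 0:
        h.Fill(event.jet_pt[0] / 1000.0)  # MeV -> GeV

out = ROOT.TFile("hist.root", "RECREATE")
h.Write()
out.Close()
```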
Statistics of batch jobs
• NorduGrid production since 2005
• ATLAS never sleeps
• local jobs taking over in recent months
Added value by resource sharing
• local jobs come in peaks
• the grid always has jobs
• little idle time, a lot of Monte Carlo done (toy illustration below)
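The sharing argument can be illustrated with a toy model: local analysis demand arrives in bursts, while grid production keeps a steady backlog that backfills idle cores. All numbers below are made up for illustration; they are not measurements from the Geneva cluster.

```python
import random

# Toy model of core usage on a shared cluster: bursty local demand, with a
# grid backlog that takes whatever cores are left over each day.
# Illustrative numbers only, not measurements from the Geneva cluster.
random.seed(2009)
CORES, DAYS = 268, 90

busy_local = busy_grid = 0
for _ in range(DAYS):
    local = min(CORES, random.choice([10, 20, 30, 50, 250, 268]))  # bursty peaks
    busy_local += local
    busy_grid += CORES - local        # grid backfill of the remaining cores

total = CORES * DAYS
print(f"local use only : {busy_local / total:.0%} of core-days used")
print(f"with grid fill : {(busy_local + busy_grid) / total:.0%} of core-days used")
```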
Some performance numbers
• internal to the cluster, the data rates are OK
• transfers to Geneva (plots shown on the slide)
Test of larger TCP buffers
• transfer from fts001.nsc.liu.se
• network latency 36 ms (CERN at 1.3 ms)
• increasing TCP buffer sizes on Fri Sept 11th, from the Solaris default of 48 kB to 192 kB and 1 MB
• data rate per server reached ~25 MB/s
• Why? Can we keep the FTS transfer at 25 MB/s per server? (see the estimate below)
(plot: data rate per server vs. time, with the buffer increases marked)
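The "Why?" has a standard answer in the TCP bandwidth-delay product: over a single stream the sustained rate is roughly the window size divided by the round-trip time. A small sketch with the numbers from the slide (the arithmetic is mine, not from the slides):

```python
# Bandwidth-delay product estimate: a single TCP stream sustains roughly
# window_size / round_trip_time.  Latencies and buffer sizes are taken from
# the slide; the arithmetic itself is just the standard rule of thumb.
def max_rate_mb_per_s(window_bytes, rtt_s):
    return window_bytes / rtt_s / 1e6

RTT_NDGF = 0.036   # 36 ms to fts001.nsc.liu.se
RTT_CERN = 0.0013  # 1.3 ms to CERN

for label, window in [("48 kB default", 48 * 1024),
                      ("192 kB", 192 * 1024),
                      ("1 MB", 1024 * 1024)]:
    print(f"{label:>14}: {max_rate_mb_per_s(window, RTT_NDGF):5.1f} MB/s to NDGF, "
          f"{max_rate_mb_per_s(window, RTT_CERN):7.1f} MB/s to CERN")

# Window needed to sustain the target 25 MB/s per server from NDGF:
print("window for 25 MB/s at 36 ms:", round(25e6 * RTT_NDGF / 1024), "kB")
```

With the 48 kB default a single stream tops out around 1.4 MB/s at 36 ms, while roughly 0.9 MB of window is needed to sustain 25 MB/s, which is consistent with the improvement seen after enlarging the buffers.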
Issues and concerns
• recent issues
  • one crash of a Solaris file server in the DPM SE
  • the two latest Solaris file servers showed slow disk I/O, deteriorating over time, fixed by reboot
  • unreliable data transfers
  • frequent security updates of SLC4
  • migration to SLC5, Athena reading from DPM
• long-term concerns
  • level of effort to keep it all up
  • support of the Storage Element
Summary and outlook
• A large ATLAS T3 in Geneva
• Special site for Trigger development
• In NorduGrid since 2005
• DPM Storage Element since July 2009
• FTS from CERN and from the NDGF-T1
  • exercising data transfers
• Short-term to-do list
  • gradual move to SLC5
  • write a note, including performance results
• Towards a steady-state operation!