PRAGUE site report
Overview • Supported HEP experiments and staff • Hardware on Prague farms • Statistics from running the LHC experiments' DC • Experience
Experiments and people • Three institutions in Prague • Academy of Sciences of the Czech Republic • Charles University in Prague • Czech Technical University in Prague • Collaborate on experiments • CERN – ATLAS, ALICE, TOTEM, *AUGER* • FNAL – D0 • BNL – STAR • DESY – H1 • Collaborating community of 125 persons • 60 researchers • 43 students and PhD students • 22 engineers and 21 technicians • LCG computing staff – takes care of GOLIAS (farm at IOP AS CR) and SKURUT (farm located at CESNET) • Jiri Kosina – LCG, experiment software support, networking • Jiri Chudoba – ATLAS and ALICE SW and running • Jan Svec – HW, operating system, PBSPro, networking, D0 SW support (SAM, JIM) • Vlastimil Hynek – runs D0 simulations • Lukas Fiala – HW, networking, web
Available HW in Prague – GOLIAS • Two independent farms in Prague • GOLIAS – Institute of Physics AS CR • LCG2 (testZone – ATLAS & ALICE production), D0 (SAM and JIM installation) • SKURUT – CESNET, z.s.p.o. • EGEE preproduction farm, also used for ATLAS DC • Separate nodes used for GILDA (tool/interface developed at INFN to allow new users to easily use the grid and demonstrate its power) with GENIUS installed on top of the user interface • Sharing of resources D0:ATLAS:ALICE = 50:40:10 (dynamically changed when needed; see the sketch below) • GOLIAS: • 80 nodes (2 CPUs each), 40 TB • 32 dual CPU nodes PIII 1.13 GHz, 1 GB RAM • In July 04 bought 49 new dual CPU Xeon 3.06 GHz, 2 GB RAM (WN) • Currently considering whether HT should be on or off (memory, scheduler problems in older(?) kernels) • 10 TB disk space; we use LVM to create 3 volumes of 3 TB, one per experiment, NFS-mounted on the SE • In July 04 + 30 TB disk space, now in tests (30 TB XFS NFS-exported partition; unreliable with pre-2.6.5 kernels, newer seem reliable so far) • PBSPro batch system • New server room: 18 racks, still more than half empty, 180 kW secured input electric power
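A minimal sketch of what the 50:40:10 share could mean in job slots, assuming the 80 dual-CPU GOLIAS nodes give 160 slots (the node counts and ratios are from the slide; the allocation helper itself is hypothetical, not the farm's actual scheduler configuration):

```python
# Hypothetical sketch: translating the D0:ATLAS:ALICE = 50:40:10 share
# into CPU-slot targets on GOLIAS (80 dual-CPU nodes = 160 job slots).
# The shares are from the slide; the allocation helper is illustrative only.

def allocate_slots(total_slots, shares):
    """Split total_slots according to integer share weights (largest-remainder rounding)."""
    weight_sum = sum(shares.values())
    raw = {exp: total_slots * w / weight_sum for exp, w in shares.items()}
    slots = {exp: int(r) for exp, r in raw.items()}
    # hand out the slots lost to rounding, largest fractional part first
    leftover = total_slots - sum(slots.values())
    for exp in sorted(raw, key=lambda e: raw[e] - slots[e], reverse=True)[:leftover]:
        slots[exp] += 1
    return slots

if __name__ == "__main__":
    shares = {"D0": 50, "ATLAS": 40, "ALICE": 10}
    print(allocate_slots(160, shares))   # {'D0': 80, 'ATLAS': 64, 'ALICE': 16}
```

When the sharing is "dynamically changed", only the weights need to change; re-running the same split gives the new per-experiment targets.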
Available HW in Prague – SKURUT • Located at CESNET • 32 dual CPU nodes PIII 700 MHz, 1 GB RAM (16 LCG2 + 16 GILDA) • OpenPBS batch system • LCG2 installation: 1x CE+UI, 1x SE, WNs (count varies) • GILDA installation: 1x CE+UI, 1x SE, 1x RB (installation in progress) • WNs are manually moved to LCG2 or GILDA as needed • Will be used for EGEE tutorial
Network connection • General – GEANT connection • 1 Gbps backbone at GOLIAS, over 10 Gbps Metropolitan Prague backbone • CZ – GEANT 2.5 Gbps (over 10 Gbps HW) • USA 0.8 Gbps (Telia) • Dedicated connection – provided by CESNET • Delivered by CESNET in collaboration with NetherLight • 1 Gbps (on a 10 Gbps line) optical connection GOLIAS–CERN • Plan to provide the connection for other institutions in Prague • Connections to FERMILAB, RAL or Taipei under consideration • Independent optical connection between the collaborating institutes in Prague will be finished by end of 2004
ATLAS – July 1 – September 21 • number of jobs in DQ: 1349 done + 1231 failed = 2580 jobs, 52% done • number of jobs in DQ: 362 done + 572 failed = 934 jobs, 38% done
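A quick check of the percentages quoted above (the done/failed counts are from the slide; the sample labels are placeholders and the script is purely illustrative):

```python
# Reproduce the ATLAS DC job-success percentages quoted on the slide
# (July 1 - September 21). The (done, failed) counts are from the slide;
# the "sample" labels below are placeholders, not from the original.
counts = [
    ("sample 1", 1349, 1231),
    ("sample 2", 362, 572),
]

for label, done, failed in counts:
    total = done + failed
    pct = int(100 * done / total)   # truncated to whole percent, as on the slide
    print(f"{label}: {done} done + {failed} failed = {total} jobs, {pct}% done")
# sample 1: 1349 done + 1231 failed = 2580 jobs, 52% done
# sample 2: 362 done + 572 failed = 934 jobs, 38% done
```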
Local job distribution • GOLIAS • not enough ATLAS jobs • [chart: jobs per experiment (ALICE, D0, ATLAS), 2 Aug – 23 Aug]
Local job distribution • SKURUT • ATLAS jobs • usage much better
ATLAS – CPU time • [histograms of CPU time in hours on Xeon 3.06 GHz, PIII 1.13 GHz and PIII 700 MHz nodes] • queue limit: 48 hours, later changed to 72 hours
ATLAS – real and CPU time • very long tail for real time – some jobs were hanging during I/O operations
ATLAS Total statistics • Total time used: • 1593 days of CPU time • 1829 days of real time
ALICE Total statistics • Total time used: • 2076 days of CPU time • 2409 days of real time
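The CPU/real-time ratio implied by the two totals above can be checked in a few lines (the day counts are from the slides; the efficiency figure is our illustration, not part of the report):

```python
# CPU efficiency implied by the DC totals on the two slides above
# (days of CPU time vs. days of real/wall-clock time). The totals are
# from the slides; the efficiency calculation is illustrative only.
totals = {
    "ATLAS": (1593, 1829),   # (CPU days, real days)
    "ALICE": (2076, 2409),
}

for experiment, (cpu_days, real_days) in totals.items():
    efficiency = cpu_days / real_days
    print(f"{experiment}: {cpu_days} CPU days / {real_days} real days "
          f"= {efficiency:.0%} CPU efficiency")
# ATLAS: 1593 CPU days / 1829 real days = 87% CPU efficiency
# ALICE: 2076 CPU days / 2409 real days = 86% CPU efficiency
```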
LCG installation • LCG installation on GOLIAS • We use PBSPro. In cooperation with Peer Haaselmayer (FZK), a "cookbook" for LCG2 + PBSPro was created (some patching is needed) • Worker nodes – the first node is installed using LCFGng, then it is immediately switched off • From then on everything is done manually – we find it much more convenient and transparent, and the manual installation guide helps • Currently installed LCG2 version 2_2_0 • LCG installation on SKURUT • almost default LCG2 installation, only with some tweaking of PBS queue properties • we recently found that the OpenPBS shipped with LCG2 already contains the required_property patch, which is very convenient for better resource management (see the sketch below) • currently trying to integrate this feature into PBSPro
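A rough sketch of the idea behind property-based matching, assuming it ties jobs (or queues) to worker nodes that advertise a given property; the node names and properties below are made up, and this is not the actual OpenPBS/PBSPro patch code:

```python
# Rough illustration of property-based resource matching: a job names the
# property it requires, and only worker nodes advertising that property are
# eligible to run it. Node and property names are invented for this sketch;
# this is not the required_property patch itself.

nodes = {
    "wn01": {"lcg2"},
    "wn02": {"lcg2"},
    "wn03": {"gilda"},
}

jobs = [
    {"id": "job.1", "required_property": "lcg2"},
    {"id": "job.2", "required_property": "gilda"},
]

def eligible_nodes(job, nodes):
    """Return the nodes whose advertised properties satisfy the job's requirement."""
    need = job["required_property"]
    return [name for name, props in nodes.items() if need in props]

for job in jobs:
    print(job["id"], "->", eligible_nodes(job, nodes))
# job.1 -> ['wn01', 'wn02']
# job.2 -> ['wn03']
```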