Tier2 Centre in Prague Jiří Chudoba FZU AV ČR - Institute of Physics of the Academy of Sciences of the Czech Republic
Outline • Supported groups • Hardware • Middleware and software • Current status chudoba@fzu.cz
Particle Physics in the Czech Republic Groups located at • Charles University in Prague • Czech Technical University in Prague • Institute of Physics of the Academy of Sciences of the Czech Republic • Nuclear Physics Institute of the Academy of Sciences of the Czech Republic Main Applications • Projects ATLAS, ALICE, D0, STAR, TOTEM • Groups of theoreticians • Approximate size of the community: 60 scientists, 20 engineers, 20 technicians and 40 students and PhD students
Hardware 2 independent computing farms • Golias located in FZU • Skurut from CESNET (= “academic network provider”) • older hardware (29 dual nodes, PIII 700 MHz) offered by CESNET • part used as a production farm, some nodes for tests and support for different VOs (VOCE, GILDA, CE testbed) • contribution to ATLAS DC2 at a level of 2% of all jobs finished on LCG
Available Resources - FZU Computer hall 2 x 9 racks 2 air-conditioning units 180 kW electrical power available from UPS, backed up by a Diesel generator 1 Gbps optical connection to the CESNET metropolitan network direct 1 Gbps optical connection to CzechLight shared with other FZU activities
Hardware @ FZU Worker Nodes (September 2005) 67x HP DL140, dual Intel Xeon 3.06 GHz with HT (enabled only on some nodes), 2 or 4 GB RAM, 80 GB HDD 1x dual AMD Opteron 1.6 GHz, 2 GB RAM, 40 GB HDD 24x HP LP1000r, 2x PIII 1.13 GHz, 1 GB RAM, 18 GB SCSI HDD WNs connected via 1 Gbps (DL140) or 100 Mbps (LP1000r) Network components 3x HP ProCurve Networking Switch 2848 (3x 48 ports) HP 4108GL ~30 KSI2K will be added this year
Golias Farm Hardware - servers • PBS server: HP DL360 – 2x Intel Xeon 2.8 GHz, 2 GB RAM • CE, SE, UI: HP LP1000, 2x 1.13 GHz PIII, 1 GB RAM, 100 Mbps (SE should be upgraded to 1 Gbps soon) • NFS servers • 1x HP DL145 – 2x AMD Opteron 1.6 GHz, connected disc array 30 TB (raw capacity), ATA discs • 1x HP LP4100TC, 1 TB disc array, SCSI discs • 1x embedded server in EasySTOR 1400RP (PIII), 10 TB, ATA discs • dCache server • HP DL140 upgraded with a RAID controller, 2x 300 GB discs • not used for production, reserved for SC3 • Some other servers (www, SAM)
Middleware, Batch System GOLIAS: • LCG2 (2_6_0): • CE, SE, UI – SLC3 • WNs - RH7.3 (local D0 group not yet ready for SLC3) • PBSPro server not on CE • CE submits jobs to the node golias (PBSPro server) • local users can submit to local queues on golias SKURUT: • LCG2 (2_6_0), OpenPBS server on CE • all nodes - SLC3
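As an illustration of the LCG2 submission path described above, a grid job arriving at the CE starts life as a JDL file on the UI. The following is a minimal sketch only; the wrapper script name and sandbox files are hypothetical, not taken from the Golias setup:

```
// Hypothetical job description for submission via the LCG2 workload
// management system; run_sim.sh is an invented wrapper script name.
Executable    = "run_sim.sh";
StdOutput     = "stdout.log";
StdError      = "stderr.log";
InputSandbox  = {"run_sim.sh"};
OutputSandbox = {"stdout.log", "stderr.log"};
```

Such a file would be submitted from the UI with `edg-job-submit`, after which the resource broker matches it to a CE and the CE hands it to the local batch system (PBSPro on Golias, OpenPBS on Skurut).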
Queues • Separate queues for different experiments and privileged users: atlas, atlasprod, lcgatlas, lcgatlasprod alice, aliceprod, d0, d0prod, auger, star, ... short, long • priorities are set via PBS parameters: • max_running, max_user_running, priority, node properties • still not optimal in a heterogeneous environment
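The queue parameters named above are set per queue through the PBS `qmgr` interface. A hedged sketch follows, using queue names from this talk but invented limit values, not the actual Golias settings:

```
# qmgr input – numeric limits below are illustrative only
set queue atlasprod max_running = 60
set queue atlasprod max_user_running = 40
set queue atlasprod Priority = 80
set queue short Priority = 100
set queue long Priority = 20
```

Tuning these static limits is what remains awkward on a heterogeneous farm: a cap that keeps the fast DL140 nodes busy can leave the older LP1000r nodes idle, and vice versa.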
Job statistics 2005/1–6 Used CPU time (in days) per activity, for January – June 2005
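Per-activity CPU-day totals like those behind this chart can be aggregated from batch-system accounting records. The following Python fragment is a sketch only, not the script actually used for the talk, and the record values are invented; it sums PBS-style "HH:MM:SS" CPU times per queue:

```python
from collections import defaultdict

def cput_to_days(cput: str) -> float:
    """Convert a PBS 'HH:MM:SS' CPU-time string to days."""
    h, m, s = (int(x) for x in cput.split(":"))
    return (h * 3600 + m * 60 + s) / 86400.0

def cpu_days_per_queue(records):
    """Sum CPU days per queue from (queue, cput) pairs."""
    totals = defaultdict(float)
    for queue, cput in records:
        totals[queue] += cput_to_days(cput)
    return dict(totals)

# Invented example records (queue name, accumulated CPU time):
records = [
    ("atlasprod", "240:00:00"),  # 10 CPU-days
    ("atlasprod", "24:00:00"),   # 1 CPU-day
    ("aliceprod", "48:00:00"),   # 2 CPU-days
]
print(cpu_days_per_queue(records))
```

Note that PBS cumulates hours past 24 in the hour field ("240:00:00"), so the conversion works directly on the three fields without any date arithmetic.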
Simulations for ATLAS
ALICE PDC – Phase 2 (2004) 2004: ALICE jobs submitted to Golias via AliEn 2005: a new version of AliEn is being installed
Tier1 – Tier2 relations • Requirements defined by the experiments ATLAS and ALICE • Direct network connection between FZU and GridKa will be provided by GÉANT2 next year • “Know-how” exchange welcomed
THE END