LHCb@CNAF
CPU resources
• A single farm shared by several VOs (not only WLCG)
• Batch system: LSF
  • Fair share enabled (one queue for LHCb with a ~17% share)
  • 2.5 GB memory max per job
• Pledges for LHCb
  • 2009 pledge: 1200 HS06
  • 2010 pledge: ~2700 HS06 (new resources already installed)
  • Mid-2010 pledge: ~5300 HS06 (to be ordered)
  • Note: the benchmarks were measured on SL4; a gain of ~15% is expected with SL5 (worked example after this list)
• Customized WN image for LHCb (via VM, using WNoDeS)
• Supported CEs: CREAM and LCG-CE (the latter to be "gently" phased out)
• Multi-user pilot jobs allowed only with the glExec setuid mechanism enabled
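Since the pledges were benchmarked on SL4, the effective capacity after the SL5 migration can be estimated with a quick calculation. This is a minimal sketch assuming the ~15% gain applies uniformly to each pledge figure; it is an illustration, not an official re-benchmark.

```python
# Rough estimate of the SL5-equivalent capacity of the SL4-benchmarked
# pledges, assuming the ~15% gain applies uniformly (illustrative only).
SL5_GAIN = 0.15  # ~15% HS06 improvement expected when moving from SL4 to SL5

pledges_hs06_sl4 = {
    "2009": 1200,      # HS06
    "2010": 2700,      # new resources already installed
    "mid-2010": 5300,  # to be ordered
}

for period, hs06_sl4 in pledges_hs06_sl4.items():
    hs06_sl5 = hs06_sl4 * (1 + SL5_GAIN)
    print(f"{period}: {hs06_sl4} HS06 on SL4 -> ~{hs06_sl5:.0f} HS06 equivalent on SL5")
```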
Storage resources
• Storage system based on GEMSS (StoRM/GPFS/TSM)
• Dedicated SRM endpoint and GPFS cluster for LHCb
  • Only the gridftp (WAN) and file (LAN) protocols are supported
  • A single GPFS file system, partitioned into SC/space tokens via a quota mechanism
  • StoRM upgraded to the latest stable version (1.5.1-3)
• Tape library (SL8550 with 20 T10KB drives) shared among all experiments
  • 70 TB-N as MSS cache
  • 2 HSM clients dedicated to LHCb (8 Gbps to the tape library); see the back-of-envelope estimate after this list
• Pledged resources
  • 2009: 160 TB-N (disk), 214 TB (tape)
  • 2010: 450 TB-N (disk), 442 TB (tape)
    • Tapes already in place
    • Disk to be delivered at the end of this month
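To put the cache size and the HSM bandwidth in perspective, below is a back-of-envelope estimate of how long it would take to drain the full 70 TB-N MSS cache to tape. It assumes the quoted 8 Gbps is the aggregate bandwidth of the two HSM clients and that the link is fully saturated; both are assumptions made for illustration.

```python
# Back-of-envelope: time to drain the 70 TB-N MSS disk cache to tape,
# assuming the quoted 8 Gbps is the aggregate bandwidth of the 2 HSM
# clients (an assumption; the slide does not say per-client or total)
# and that the link is fully saturated.

cache_tb = 70    # MSS cache size, TB (decimal, 1 TB = 1e12 bytes)
link_gbps = 8    # HSM clients -> tape library bandwidth, Gbit/s

cache_bits = cache_tb * 1e12 * 8
drain_seconds = cache_bits / (link_gbps * 1e9)
print(f"Ideal drain time: {drain_seconds / 3600:.1f} hours")  # ~19.4 h
```

The result, roughly 19-20 hours, ignores tape drive scheduling and metadata overhead, so it is a lower bound rather than an expected migration time.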
Database resources
• Oracle RAC hosting:
  • LFC
  • Conditions DB replica
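For completeness, the sketch below shows one way a client could check connectivity to an Oracle RAC service such as the conditions DB replica. The host name, port, service name and credentials are placeholders, not the real CNAF configuration; in practice the LFC and the conditions DB are accessed through their own client tools rather than direct SQL.

```python
# Minimal connectivity check against an Oracle RAC service.
# All connection parameters below are hypothetical placeholders,
# not the actual CNAF endpoints or credentials.
import cx_Oracle  # Oracle client bindings for Python

dsn = cx_Oracle.makedsn(
    "oracle-rac.example.cnaf.infn.it", 1521,   # placeholder host/port
    service_name="lhcb_conddb",                # placeholder service name
)
with cx_Oracle.connect(user="reader", password="secret", dsn=dsn) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT sysdate FROM dual")  # trivial liveness query
        print(cur.fetchone())
```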