HEPSYSMAN Meeting 2003: QMUL Site Report by Dave Kant (D.Kant@qmul.ac.uk)
Overview: Local Computing, Grid Computing
What We've Got: two dual-processor servers (450 MHz, 36 GB each) running NIS, NFS, Samba, Sendmail, DHCP, Apache and Mozilla. The master is rsynced to the slave every night; incremental backups go to DLT tape every night, and tapes are cycled every 10 weeks.
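The slides do not show the actual backup scripts, so the following is only a minimal sketch of how the nightly master-to-slave mirror could be driven with rsync; the hostname and directory list are hypothetical, not taken from the QMUL setup.

```python
#!/usr/bin/env python
"""Nightly mirror of selected directories from the master to the slave.

Minimal sketch only: the hostname and the directory list below are
placeholders, not the real QMUL configuration.
"""
import subprocess
import sys

SLAVE = "slave.example.qmul.ac.uk"      # hypothetical slave hostname
DIRS = ["/home", "/etc", "/var/yp"]     # hypothetical directories to mirror

def mirror(path):
    # -a preserves permissions/ownership/timestamps, -z compresses the
    # transfer, --delete keeps the slave an exact copy of the master.
    cmd = ["rsync", "-az", "--delete", path + "/", "%s:%s/" % (SLAVE, path)]
    return subprocess.call(cmd)

if __name__ == "__main__":
    failed = [d for d in DIRS if mirror(d) != 0]
    sys.exit(1 if failed else 0)
```

A script like this would be run from the master's crontab each night (e.g. 0 2 * * *); the DLT incremental backups would be a separate job.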
Long Term Backup: ~1 TB IDE RAID 5 array with Promise Ultra133 TX2 controllers and 8 x 160 GB Maxtor drives in a 650 MHz host. Ultra133 TX2 features: slots into the 32-bit portion of a 64-bit PCI bus; 48-bit LBA support, so drives larger than 137 GB can be addressed (limit ~144 PB). With a redundancy of one drive, usable capacity is (8-1) x 160 GB = 1120 GB.
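As a quick check of the figures on this slide, the sketch below recomputes the 48-bit LBA addressing limit and the RAID 5 usable capacity; the 512-byte sector size is my assumption, not stated on the slide.

```python
# Recompute the "Long Term Backup" slide figures.
SECTOR_BYTES = 512               # assumed conventional sector size
LBA48_SECTORS = 2 ** 48          # 48-bit LBA address space

limit_pb = LBA48_SECTORS * SECTOR_BYTES / 1e15
print("48-bit LBA limit: ~%.0f PB" % limit_pb)        # ~144 PB

n_drives, drive_gb = 8, 160
usable_gb = (n_drives - 1) * drive_gb                 # RAID 5 gives up one drive
print("RAID 5 usable capacity: %d GB" % usable_gb)    # 1120 GB
```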
Linux Desktops: base platform RedHat 7.3; applications: VMware, OpenOffice, Mozilla. Hardware: 7 dual Athlon MP2000+ (Tyan Tiger MPX), 2 dual Intel 1000 MHz, 4 dual Intel 500 MHz.
Queen Mary Network: the connection was upgraded to 100 Mbps in late 2001. There are no plans to upgrade the link to 1 Gbps, which would be expensive at about 30K/year.
EDG TESTBED
EDG Testbed: five machines, namely a CE (Computing Element), an SE (Storage Element), a UI (User Interface), a WN (Worker Node), and the LCFG configuration server.
More Computing on the Way: the Science Research Infrastructure Fund (SRIF) brings 12M to QMUL. Round 1: HEP, Astronomy and other small groups were awarded 1.2M for 03/04 for a computing facility (100 m2, 48-rack capacity, 80 kW air cooling); HEP was awarded 220K. Round 2: HEP and Astronomy share a further 630K for 05/06.
Computing Facility: 100 m2, 48-rack capacity; overhead power trunking and cable bays; secure access; 200 A / 3-phase supply with a 32 A circuit per 4-plug unit; FM200 gas fire-suppression system; 4 x 20 kW air cooling units.
More Computing on the Way: the HEP approach is biased towards High Throughput Computing ("as many CPUs as possible"), while the Astronomy approach is biased towards High Performance Computing ("low-latency interconnects for MPI"). There may be a significant technology overlap in the future...
HEP Prototype
• Front End Server: dual Athlon (2.0 GHz), 2 GB RAM, 2 x 200 GB HD in a RAID mirror, 64-bit Gigabit Ethernet cards
• Worker Nodes: dual Athlon (2.0 GHz), 1 GB RAM, 120 GB HD, Gigabit Ethernet
• Storage: not yet decided, but likely to be an IDE RAID solution
HEP Prototype: supplied by MicroDirect; 4U front-end server (1.5K), 2U worker nodes (1.0K).
HEP Prototype: Gigabit optical fibre (multimode 50/125), 48-port terminal server (Cyclades), NetGear Gigabit switch, power distribution box.
Timescales
• Next 3 months: prototype becomes part of the EDG testbed
• End of 2003: additional 32 dual nodes + 2 TB storage
• End of 2004: additional 64 dual nodes + 8 TB storage
• 05/06: aiming towards a further 100 dual nodes and 100 TB storage
By LHC start: 400 CPUs + 100 TB storage (a rough tally of this roadmap is sketched below)
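As a rough sanity check of the 400-CPU target, the sketch below tallies the dual-node additions above. It assumes 2 CPUs per dual node, cumulative additions, and a small prototype cluster whose node count is a placeholder (not stated on the slide).

```python
# Rough tally of the roadmap above.  Assumptions (not from the slide):
# each dual node has 2 CPUs, the additions are cumulative, and the
# prototype contributes a handful of dual nodes.
PROTOTYPE_DUALS = 4                     # placeholder prototype size

additions = {"end 2003": 32, "end 2004": 64, "05/06": 100}

dual_nodes = PROTOTYPE_DUALS + sum(additions.values())
print("~%d dual nodes -> ~%d CPUs by LHC start" % (dual_nodes, 2 * dual_nodes))
# ~200 dual nodes -> ~400 CPUs
```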