Edinburgh Site Report 1 July 2004 Steve Thorn Particle Physics Experiments Group
Computing structure • Tiered approach: • Physics and Astronomy – Computing Support Team (CST) • PPE Research Group – system manager (physman) • Works quite well, but the CST doesn’t manage everything, so we have had some problems, e.g. videoconferencing Steve Thorn - UK HEP System Managers Meeting
Computing Support Team • 5 FTEs • Provide a common platform based on RHEL WS 3.0 or Windows XP • Responsible for most department-wide computing infrastructure: network, DHCP, NIS, DNS, email, firewall, tape backups, updates • No support for other OSes • Strict security regulations – firewall issues
physman • ~0.3 FTE per research group (an RA or PhD student) • First point of contact for all computing issues within the group • Responsible for: • purchasing • group-specific software and customisation – OpenAFS, CERNLIB, etc. • security implications of group-specific software/customisation • printer queue management with CUPS • laptops • intrusion detection monitoring – Tripwire
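In practice, the CUPS queue management mentioned above comes down to a handful of `lpadmin`/`lpstat` commands. A minimal sketch – the queue name, hostname and PPD model here are hypothetical, not from the talk:

```shell
# Add a print queue (hypothetical queue name, device URI and PPD model)
lpadmin -p ppe-laser1 -E -v socket://ppe-laser1.example.ac.uk:9100 -m laserjet.ppd

# Make it the server's default destination
lpadmin -d ppe-laser1

# Show the queue's status and configuration
lpstat -p ppe-laser1 -l
```

These are system-administration commands against a running cupsd, so they need root (or lpadmin-group) privileges on the print server.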
Network (diagram)
Network • Physics and Astronomy: 100 Mbit/s • ‘Physics’ network for supported WinXP and RHEL machines • ‘Private’ network for anything else, laptops and visitors • SRIF-funded network: 1 Gbit/s fibre parallel to EdLAN, used by ScotGrid • Wireless in selected areas (University-wide)
PPE hardware • Sun Enterprise 250 server, Solaris 8, + 0.5 TB RAID serving home directories; web server • 23 desktops • 85 % RHEL WS 3.0 • 15 % Windows (MS Office, VRVS, lab) • Almost all Dell, 0–6 years old • 5 laptops – mainly dual-boot Windows/Linux • BaBar compute farm: 4 × Sun Ultra 80, 1 × Ultra 5, currently offline
ScotGrid • Storage bias • IBM xSeries 440, 8 × Xeon 1.9 GHz, 32 GB RAM • 2 × FAStT900 Storage Server, total 22 TB RAID 5 • Front ends • 2 × IBM xSeries 205, P4 1.8 GHz, 256 MB RAM • 1 × IBM xSeries 340, 2 × PIII 1.0 GHz, 2 GB RAM • LTO Ultrium tape library • Worker node(s) needed • Currently installing LCG2 – joining the Test Zone in the next 1–2 weeks
RHEL experience • Subscription under Red Hat’s Education Programme • Base package (Proxy server, students) £1500 p.a. • RHEL WS @ £5 p.a. per FTE, ~100 FTEs • A few AS licences with phone support • Total < £4000 p.a. for Physics and Astronomy • Coverage: unlimited use for staff and students on University- or privately-owned hardware • In use since April 2004 • Pros: • good value for money • fewer OS upgrades • updates come out quickly, via up2date and the Proxy server in a nightly cron job • web-based status monitoring an unexpected extra • Cons: • removal of some useful packages, e.g. Pine • Red Hat not clear on exactly how the unlimited licence works – we have a nominal upper limit of 200 seats
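The nightly update mechanism above can be sketched as a single system cron entry. This is an illustrative guess at the setup, not the site’s actual configuration – the file name, schedule and log path are assumptions; up2date picks up the RHN proxy setting from /etc/sysconfig/rhn/up2date:

```shell
# /etc/cron.d/up2date-nightly  (hypothetical file name)
# Fetch and apply package updates via the local RHN proxy every night at 03:15.
# --nox runs without the X interface; -u downloads and installs all updates.
15 3 * * * root /usr/sbin/up2date --nox -u >> /var/log/up2date-cron.log 2>&1
```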
Hack reports • No Linux-based intrusions in the last two years • Blaster worm before installation of the firewall (Jan 2004)
Future plans • Phase out Sun/Solaris • Purchase a Linux server and unify PPE group storage • Move ScotGrid to a dedicated computing facility (out of town) • There will be more and more ScotGrid hardware…