Clusters: Changing the Face of Campus Computing • Kenji Takeda, School of Engineering Sciences • Ian Hardy and Oz Parchment, Southampton University Computing Services • Simon Cox, Department of Electronics and Computer Science
Talk Outline • Introduction • Clusters background • Procurement • Configuration, installation and integration • Performance • Future prospects • Changing the landscape
Introduction • University of Southampton • 20,000+ students (3000+ postgraduate) • 1600+ academic and research staff • £182 million turnover 1999/2000
"to acquire, support and manage general-purpose computing, data communications facilities and telephony services within available resources, so as to assist the University to make the most effective use of information systems in teaching, learning and research activities".
HEFCE Computational and Data Handling Project • Existing facilities outdated and overloaded • £1.01 million total bid, including infrastructure costs and Origin 2000 upgrade • Large compute facility to provide significant local HPC capability • Large data store – several terabytes • Upgraded networking: Gigabit to the desktop • Staff costs to support new facility
Cluster Computing • Extremely attractive price/performance • Good scalability achievable with high performance memory interconnects • Fast serial nodes with lots of memory (up to 4 Gbytes) affordable • High throughput, nodes are cheap • Still require SMP for large (>4 Gigabytes) memory jobs – for now
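To make the node-level picture concrete, here is a minimal MPI sketch in C (an illustration of our own, not one of the Southampton codes; the file name hello_cluster.c is hypothetical) in which every process reports its rank and the host it runs on. On a cluster the ranks land on the cheap serial nodes, which is what the price/performance and throughput arguments above rely on.

/* hello_cluster.c - illustrative sketch only, not a Southampton code.
 * Build: mpicc hello_cluster.c -o hello_cluster
 * Run:   mpirun -np 8 ./hello_cluster
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(host, &namelen); /* node the rank runs on */

    printf("Rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}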
Clusters at Southampton • ECS: 8-node Alpha NT and 8-node AMD Athlon clusters • Social Statistics/ECS/SUCS: 19-node Intel PIII cluster • Chemistry: cluster of 39 AMD Athlon nodes and 4 dual Intel PIII nodes • Computational Engineering and Design Centre: 21-node and 10-node dual Intel PIII clusters • Aerodynamics and Flight Mechanics Group: 11-node dual Intel PIII cluster with Myrinet 2000 • ISVR: 9-node dual Intel PIII Windows 2000 cluster • Several high-throughput workstation clusters on campus • Windows Clusters research
User Profiles • Users from many disciplines: • Engineering, Chemistry, Biology, Medicine, Physics, Maths, Geography, Social Statistics • Many different requirements: • Scalability, memory, throughput, commercial apps • Want to encourage new users and new applications
Procurement • Ask users what they want - open discussion • General-purpose cluster specification • Open tender process • Vendors, from big iron companies to home PC suppliers • Shortlist vendors for detailed discussions
Configuration • Varied user requirements • Limited budget – value for money crucial • Heterogeneous configuration optimal • Balanced system: CPU, memory, disk • Boxes-on-shelves or racks? • Management options: serial network, power strips, Fast Ethernet backbone
IRIDIS Cluster • Boxes-on-shelves • 178 nodes • 146 × dual 1 GHz PIIIs • 32 × 1.5 GHz P4s • Myrinet 2000 connecting 150 CPUs • 100 Mbit/s Fast Ethernet • APC power strips • 3.2 TB IDE-Fibre disk
Installation & Integration • Initial installation by vendor – Compusys plc • One-week burn-in, still had 3 DOAs • Major switch problem fixed by supplier • Swap space increased on each node • No problems since • Pallas, Linpack and NAS benchmarks plus user codes for a thorough system shakedown • Scheduler for flexible partitioning of jobs
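Pallas, Linpack and NAS are published MPI benchmark suites, so none of their code is reproduced here. As a rough sketch of what an interconnect shakedown measures, the ping-pong timing below (our own illustration in the spirit of the Pallas suite; the file name pingpong.c and the 1 MB message size are assumptions) bounces a buffer between two ranks and reports the average round trip and effective bandwidth – the quantities where Myrinet 2000 and Fast Ethernet differ most.

/* pingpong.c - illustrative MPI ping-pong sketch, in the spirit of
 * (but not copied from) the Pallas benchmarks.
 * Run with exactly two ranks: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBYTES (1 << 20)   /* 1 MB message (assumed size) */
#define REPS   100

int main(int argc, char *argv[])
{
    int rank, size, i;
    char *buf;
    double t0, dt;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    buf = malloc(NBYTES);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {            /* send, then wait for the echo  */
            MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {     /* echo everything straight back */
            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    dt = MPI_Wtime() - t0;

    if (rank == 0)                  /* 2*NBYTES moved per repetition */
        printf("avg round trip %g s, bandwidth %g MB/s\n",
               dt / REPS, 2.0 * NBYTES * REPS / dt / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}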
NAS Serial Benchmarks • [Results chart – bigger is better]
Chemistry Codes • [Results chart – smaller is better]
Future Prospects • Roll out Windows 2000/XP service in response to user requirements • Increase HPC user-base • Drag-and-drop supercomputing • Expand as part of Southampton Grid • Integration with other compute resources on and off campus • Double in size over next few years
Changing the Landscape • Availability of serious compute power to many more users – HPC for the masses • Heterogeneous systems – tailored partitions make it easy to cater for different types of user • Compatibility between desktops and servers improved – less intimidating • New pricing model for vendors – costs are transparent to the customer • Affordable, Expandable, Grid-able