SAMSON: University of Delaware Cluster Supercomputer
Bob Robinson, Program Manager
W. H. Matthaeus, Principal Investigator
L. Pollock, D. Seckel, K. Szalewicz, G. Zank
Walt Dabell (System Manager), P. Dmitruk (System Scientist)
Partners: AMD, Dolphin/Scali, Racksaver
Supported by the National Science Foundation, Major Research Infrastructure Program and Atmospheric Sciences Division
Participating units: Bartol Research Institute; Physics and Astronomy; Computer and Information Sciences; Mechanical Engineering; Electrical Engineering; Johns Hopkins, Mechanical Engineering
Scientific Supercomputing at Delaware
Research Applications
• nonlinear dynamics, turbulence, plasma physics
• space physics and astrophysics
• atomic and molecular physics
• engineering and computer science
Education and Training
• undergraduate, graduate, and postgraduate involvement in parallel computing
• preparation for a parallel future
• CISC 879/PHYS 838, Parallelization for Scientific Applications (Fall Semester 2000), Prof. L. Pollock and Prof. W. Matthaeus
Architecture: Scalable Array of Microcomputers
• CPU - 1 GHz Athlon
• Memory - 1 GB/node
• Network - Dolphin/Scali interconnect
• Scalable - 132 nodes
Cluster Supercomputing … Network!
• CPU speed is not the whole story.
• Many problems need a fast interconnect to exploit a cluster's aggregate CPU power.
• The SAMSON design balances compute and communication performance.
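The communication-bound pattern these bullets refer to can be illustrated with a standard MPI all-to-all exchange, the same collective quoted in the performance figures below. This is a minimal sketch, not code from the SAMSON project; the message size and rank counts are illustrative, and it assumes any working MPI installation (such as the Scali MPI that accompanies the Dolphin interconnect).

```c
/* Minimal sketch of an all-to-all exchange, the step whose speed is
 * set by the interconnect rather than the CPUs. Illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* One block of doubles destined for every rank (assumed size). */
    const int block = 1024;
    double *sendbuf = malloc((size_t)nprocs * block * sizeof *sendbuf);
    double *recvbuf = malloc((size_t)nprocs * block * sizeof *recvbuf);
    for (int i = 0; i < nprocs * block; i++)
        sendbuf[i] = (double)rank;

    /* Every rank exchanges a block with every other rank; the elapsed
     * time of this call is what an interconnect-bound code sees. */
    double t0 = MPI_Wtime();
    MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                 recvbuf, block, MPI_DOUBLE, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("all-to-all of %d doubles per rank took %g s\n",
               block, t1 - t0);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```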
SAMSON: Supercomputing Speed
• Initial LINPACK benchmark: 101 Gigaflops
• Expected improvements up to 160 Gflops
• “Top 200” performance
• Scalable hardware and system performance
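For context on where the Gigaflops figure comes from: a LINPACK-class benchmark such as HPL times the solution of a dense N-by-N linear system and credits 2N³/3 + 2N² floating-point operations. A small sketch of that conversion (the problem size and runtime below are illustrative numbers chosen to land near 101 Gflops, not the actual SAMSON run parameters):

```c
/* Sketch: converting a timed dense solve into a Gflop/s figure,
 * using the standard LINPACK/HPL operation count 2N^3/3 + 2N^2. */
#include <stdio.h>

double linpack_gflops(double n, double seconds)
{
    double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    return flops / seconds / 1e9;
}

int main(void)
{
    /* Hypothetical run: N = 100,000 solved in 6,600 s gives ~101. */
    printf("%.1f Gflops\n", linpack_gflops(100000.0, 6600.0));
    return 0;
}
```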
SAMSON and Supercomputing
• 100 Gflops (now) to 160 Gflops (June 2001)
• 132 GB RAM
• 80 MB/s all-to-all communication
• Bang for the buck: $2,500 per Gigaflop, and scalable
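A quick sanity check on the price/performance claim (derived from the figures above, not stated on the slide): at $2,500 per Gigaflop, the roughly 100 Gflops delivered now corresponds to a total system cost near 100 × $2,500 ≈ $250,000, and because the machine grows by adding nodes, cost scales approximately linearly with delivered Gflops.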