State of the CCS SOS 8 April 13, 2004 James B. White III (Trey) trey@ornl.gov virtual Buddy Bland
State of the CCS • CCS as a user facility • CCS as a DOE Advanced Computing Research Testbed (ACRT) • Future plans
Facilities • Computer facility • 40,000 ft² over two floors • 36” raised floor (lower floor) • 8 MW power, 3600 tons cooling • Office space for 450 • Classrooms and training areas • Labs for visualization, computer science, and networking
User Facility • CCS designated by DOE as a user facility • Supports users from academia and industry • Pursuing agreements with Boeing and Dow Chemical
User Community • Users come from all around the country • 70% of usage is from users outside of ORNL
CCS Usage Model • Small number of large projects • CCS supports liaisons for large projects • Center can be dedicated to single task of national importance • Human genome • HFIR restart • IPCC
Advanced Computing Research Testbed • ACRT examines promising new computer architectures for DOE SC • Determine usability for SC applications • Work with vendors to improve systems • Application-based evaluations
Past Evaluations • Intel iPSC/2 (1988) • Intel iPSC/860 (1990) • KSR-1 (1991) • Intel Paragon XP/S-35 (1992) • Intel Paragon MP XP/S-150 (1995) • SRC Prototype (1999) • IBM S80 (1999) • IBM Winterhawk and Nighthawk (1999) • GSN Switch (2000) • Compaq AlphaServer SC (2000) • IBM Power4 and Federation (2001-2004)
Current Evaluations • Cray X1 - scalable vector • SGI Altix - large shared memory • IBM Federation Cluster - interconnect • http://www.csm.ornl.gov/evaluation/
Cray X1 • World’s largest X1 • 8 cabinets • 256 MSPs, 3.2 TF • 1 TB memory • 32 TB local disk • Cabinets half populated to test topology and facilitate upgrade
SGI Altix • Large memory, single-system image • 256 Itanium2 processors • 1.5 GHz, 6 GF, 6 MB cache • 1.5 TF • 2 TB shared memory (NUMA) • Targeting biology apps and data analysis
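The aggregate peak figures on the X1 and Altix slides follow directly from the per-processor rates. A quick arithmetic check, assuming the X1's commonly cited 12.8 GF per MSP (the per-MSP rate is not stated on the slide):

```python
# Peak-performance sanity check for the two evaluation systems.
# Assumption: 12.8 GF per Cray X1 MSP (widely cited, not on the slide).
x1_peak_tf = 256 * 12.8 / 1000     # 256 MSPs at 12.8 GF each -> ~3.28 TF
altix_peak_tf = 256 * 6.0 / 1000   # 256 Itanium2s at 6 GF each -> ~1.54 TF

print(f"Cray X1:   {x1_peak_tf:.2f} TF")   # quoted as 3.2 TF on the slide
print(f"SGI Altix: {altix_peak_tf:.2f} TF") # quoted as 1.5 TF on the slide
```

Both results round to the headline numbers on the slides, so the per-processor and aggregate figures are consistent.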
IBM Federation Cluster
• 27 p690s: 32 1.3-GHz Power4s each (864 total processors)
• 8 p655s: 4 1.7-GHz Power4s each (login and GPFS)

Federation vs. Colony:
  Latency (µs):           12   vs. 19
  Bandwidth (MB/s):       551  vs. 306
  Exchange (1x1) (MB/s):  767  vs. 273
  Exchange (32x32):       2199 vs. 394
  Bisection (2 nodes):    619  vs. 284
  Bisection (32x32):      922  vs. 321
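The latency and bandwidth rows above are typical outputs of point-to-point interconnect microbenchmarks. As a rough illustration (not the actual CCS benchmark code), a simple latency-plus-throughput model predicts effective bandwidth for an n-byte message as n / (L + n/B). The sketch below plugs in the Federation latency (12 µs) and measured large-message bandwidth (551 MB/s) from the table as the model parameters:

```python
def effective_bandwidth_mb_s(msg_bytes, latency_us, peak_mb_s):
    """Effective bandwidth under a simple latency + throughput model:
    transfer time = latency + size/peak_rate; bandwidth = size/time."""
    time_s = latency_us * 1e-6 + msg_bytes / (peak_mb_s * 1e6)
    return msg_bytes / time_s / 1e6

# Federation parameters from the table: 12 us latency, 551 MB/s bandwidth.
for size in (1_000, 100_000, 10_000_000):
    bw = effective_bandwidth_mb_s(size, 12, 551)
    print(f"{size:>10} bytes -> {bw:7.1f} MB/s")
```

The model shows why both rows matter: small messages are latency-dominated and see only a fraction of peak, while large messages approach the 551 MB/s asymptote, so Federation's lower latency and higher bandwidth both contribute to its advantage over Colony.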
Evaluation Plans • Cray X series: upgrade X1 to 512 MSPs, upgrade to 1024 X1E MSPs, Black Widow • Red Storm: 10.5 TF in 2004, 21 TF in 2005 • Blue Gene at Argonne • Cray XD1 (OctigaBay) • SRC FPGA systems • IBM Power5 • SGI Altix (larger images) • ADIC StorNext • Lustre
Questions? James B. White III (Trey) trey@ornl.gov http://www.ccs.ornl.gov/ http://www.csm.ornl.gov/evaluation/