
Michael L. Norman Principal Investigator Interim Director, SDSC

Allan Snavely Co-Principal Investigator Project Scientist

Presentation Transcript


  1. Michael L. Norman, Principal Investigator, Interim Director, SDSC • Allan Snavely, Co-Principal Investigator, Project Scientist

  2. What is Gordon? • A “data-intensive” supercomputer based on SSD flash memory and virtual shared memory • Emphasizes MEM and IO over FLOPS • A system designed to accelerate access to massive databases being generated in all fields of science, engineering, medicine, and social science • The NSF’s most recent Track 2 award to the San Diego Supercomputer Center (SDSC) • Coming Summer 2011

  3. Why Gordon? • Growth of digital data is exponential • “data tsunami” • Driven by advances in digital detectors, networking, and storage technologies • Making sense of it all is the new imperative • data analysis workflows • data mining • visual analytics • multiple-database queries • on demand data-driven applications

  4. The Memory Hierarchy • Flash SSD sits between DRAM and rotating disk: O(TB) capacity at ~1,000-cycle access latency • Potential 10x speedup for random I/O to large files and databases [memory-hierarchy diagram omitted]
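The 10x claim can be sanity-checked with a back-of-the-envelope latency comparison. The latencies below are illustrative assumptions, not measured Gordon numbers (the slide gives only the ~1,000-cycle flash figure):

```python
# Illustrative (assumed) random 4 KB read latencies; not measured values.
FLASH_LATENCY_S = 100e-6  # ~100 microseconds per flash random read (assumed)
DISK_LATENCY_S = 10e-3    # ~10 ms seek + rotation per disk random read (assumed)

def synchronous_iops(latency_s: float) -> float:
    """Random reads per second achievable by a single synchronous reader."""
    return 1.0 / latency_s

flash_iops = synchronous_iops(FLASH_LATENCY_S)
disk_iops = synchronous_iops(DISK_LATENCY_S)
print(f"raw random-I/O ratio: {flash_iops / disk_iops:.0f}x")
```

Under these assumptions the raw device-level ratio is ~100x; the end-to-end application speedup the slide quotes (~10x) is smaller because compute and sequential-I/O phases are not accelerated.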

  5. Gordon Architecture: “Supernode” • 32 Appro Extreme-X compute nodes • Dual-processor Intel Sandy Bridge • 240 GFLOPS • 64 GB RAM • 2 Appro Extreme-X I/O nodes • Intel SSD drives • 4 TB ea. • 560,000 IOPS • ScaleMP vSMP virtual shared memory • 2 TB RAM aggregate • 8 TB SSD aggregate [supernode diagram omitted: compute nodes and I/O nodes under vSMP memory virtualization]
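The per-node figures multiply out to the quoted aggregates; a quick arithmetic check (node counts and per-node sizes are taken from the slide, the 7,680 GF supernode total is just the computed product):

```python
# Supernode building blocks, as quoted on the slide.
COMPUTE_NODES = 32
RAM_GB_PER_NODE = 64
GFLOPS_PER_NODE = 240
IO_NODES = 2
SSD_TB_PER_IO_NODE = 4

ram_tb = COMPUTE_NODES * RAM_GB_PER_NODE / 1024   # 32 x 64 GB = 2 TB aggregate RAM
ssd_tb = IO_NODES * SSD_TB_PER_IO_NODE            # 2 x 4 TB = 8 TB aggregate SSD
gflops = COMPUTE_NODES * GFLOPS_PER_NODE          # 32 x 240 GF = 7,680 GF
print(ram_tb, ssd_tb, gflops)
```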

  6. Gordon Architecture: Full Machine • 32 supernodes = 1024 compute nodes • Dual-rail QDR InfiniBand network • 3D torus (4x4x4) • 4 PB rotating-disk parallel file system • >100 GB/s [full-machine diagram of supernodes and disk omitted]
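A 3D torus wires each switch to six neighbors, wrapping around at the edges. A minimal sketch of the neighbor calculation for the 4x4x4 case (the coordinate scheme is an illustrative assumption, not Gordon's actual switch numbering):

```python
DIMS = (4, 4, 4)  # 3D torus dimensions from the slide

def torus_neighbors(coord, dims=DIMS):
    """Return the 6 nearest neighbors of a switch, wrapping at the edges."""
    out = []
    for axis in range(3):
        for delta in (-1, 1):
            c = list(coord)
            c[axis] = (c[axis] + delta) % dims[axis]
            out.append(tuple(c))
    return out

total_compute_nodes = 32 * 32  # 32 supernodes x 32 nodes = 1024
print(torus_neighbors((0, 0, 0)))
```

Note the wraparound: a corner switch like (0, 0, 0) still has six neighbors, including (3, 0, 0) on the far edge, which is what keeps worst-case hop counts low in a torus versus a plain mesh.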

  7. Gordon Peak Capabilities

  8. Gordon is designed specifically for data-intensive HPC applications • Such applications involve “very large data-sets or very large input-output requirements” • Two data-intensive application classes are important and growing: • Data Mining: “the process of extracting hidden patterns from data… with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform this data into information.” (Wikipedia) • Data-Intensive Predictive Science: solution of scientific problems via simulations that generate large amounts of data

  9. High Performance Computing (HPC) vs High Performance Data (HPD)

  10. Data mining applications will benefit from Gordon • De novo genome assembly from sequencer reads & analysis of galaxies from cosmological simulations and observations • Will benefit from large shared memory • Federations of databases and interaction-network analysis for drug discovery, social science, biology, epidemiology, etc. • Will benefit from low-latency I/O from flash

  11. Data-intensive predictive science will benefit from Gordon • Solution of inverse problems in oceanography, atmospheric science, & seismology • Will benefit from a balanced system, especially large RAM per core & fast I/O • Modestly scalable codes in quantum chemistry & structural engineering • Will benefit from large shared memory

  12. Dash: towards a supercomputer for data intensive computing

  13. Project Timeline • Phase 1: Dash development (9/09-7/11) • Phase 2: Gordon build and acceptance (3/11-7/11) • Phase 3: Gordon operations (7/11-6/14)

  14. Comparison of the Dash and Gordon systems [chart omitted; recoverable data points: ~1.2 TB per 100K tpm-C; 150K tpm-C; IOPS per tpm-C: 1996 = 0.7–0.8, 2006 = 0.3–0.5 (smaller memory subsystem)] • Doubling capacity halves accessibility to any random data on a given media
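The "doubling capacity halves accessibility" point follows from disk capacity growing exponentially while per-spindle random IOPS stays roughly flat. A toy model makes the trend concrete; the base numbers (4 GB drives, 100 IOPS, a 2-year doubling period) are illustrative assumptions, not figures from the slide:

```python
def iops_per_gb(year, base_year=1996, base_capacity_gb=4.0,
                iops_per_drive=100.0, doubling_years=2.0):
    """Accessibility (random IOPS per GB stored) under a toy model:
    capacity doubles every `doubling_years`, IOPS per drive stays flat."""
    capacity_gb = base_capacity_gb * 2 ** ((year - base_year) / doubling_years)
    return iops_per_drive / capacity_gb

# Each capacity doubling halves IOPS per GB of stored data:
print(iops_per_gb(1996), iops_per_gb(1998), iops_per_gb(2006))
```

Under these assumptions accessibility falls from 25 IOPS/GB in 1996 to under 1 IOPS/GB by 2006, which is the motivation for putting flash between memory and disk.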

  15. Gordon project wins storage challenge at SC09 with Dash

  16. We won the SC09 Storage Challenge with Dash! • With these numbers: • IOR, 4 KB transfers • RAMFS: 4 million+ IOPS on up to 0.750 TB of DRAM (1 supernode’s worth) • 88K+ IOPS on up to 1 TB of flash (1 supernode’s worth) • Sped up Palomar Transients database searches 10x to 100x • Best IOPS per dollar • Since that time we have boosted flash IOPS to 540K (hitting our 2011 performance targets – and it is still 2009!)
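At the 4 KB IOR transfer size, the quoted IOPS figures convert directly into aggregate bandwidth. This is a back-of-the-envelope conversion of the slide's numbers, not a reported measurement:

```python
BLOCK_BYTES = 4 * 1024  # 4 KB IOR transfer size from the slide

def iops_to_gb_per_s(iops: float, block_bytes: int = BLOCK_BYTES) -> float:
    """Convert a small-block IOPS rate into decimal GB/s of bandwidth."""
    return iops * block_bytes / 1e9

ramfs_bw = iops_to_gb_per_s(4_000_000)  # RAMFS result: ~16.4 GB/s
flash_bw = iops_to_gb_per_s(540_000)    # later flash result: ~2.2 GB/s
print(f"{ramfs_bw:.1f} GB/s RAMFS, {flash_bw:.2f} GB/s flash")
```

The comparison shows why IOPS, not bandwidth, is the headline metric here: a few GB/s of 4 KB random reads is far harder to deliver than the same bandwidth in large sequential transfers.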

  17. Dash Update – early vSMP test results

  18. Dash Update – early vSMP test results

  19. Next Steps • Continue vSMP and flash SSD assessment and development on Dash • Prototype Gordon application profiles using Dash • New application domains • New usage modes and operational support mechanisms • New user support requirements • Work with TRAC to identify candidate apps • Assemble Gordon User Advisory Committee • International Data-Intensive Conference Fall 2010
