
Leadership Computing Facility (LCF) Roadmap

This presentation by Buddy Bland, Project Director of the Leadership Computing Facility (LCF), lays out the roadmap of systems, facility, and infrastructure upgrades through 2009. It covers the networking, storage, and software work needed to prepare for petascale computing, and the hardware and software environment that will let researchers use a production petascale system for scientific and educational work.



Presentation Transcript


  1. Leadership Computing Facility (LCF) Roadmap. Buddy Bland, Leadership Computing Facility Project Director, National Center for Computational Sciences

  2. Outline • Systems • Facilities upgrade • Systems infrastructure (overview, networking, storage) • Software and science

  3. Systems: CCS firsts (1991–2008)

  4. Facilities upgrade: Preparing the computer center for petascale computing • Clear 7,000 ft² for the computer • Add 10 MW of power • Install taps for liquid cooling • Upgrade the chilled water plant

  5. Systems infrastructure: Overview (current and projected)

  6. Systems infrastructure: Network • Shifting to a hybrid InfiniBand/Ethernet network. • The InfiniBand-based network helps meet the bandwidth and scaling needs of the center. • The wide-area network will scale to meet user demand using currently deployed routers and switches. Planned bandwidth: 2007 (100 TF): 60 GB/s LAN, 3 GB/s WAN; 2008 (1000 TF): 240 GB/s LAN, 5 GB/s WAN; 2009 (1000 TF): 240 GB/s LAN, 5 GB/s WAN.
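To give a rough sense of scale for these bandwidth targets, the short Python sketch below estimates ideal transfer times at the planned 2009 rates quoted on this slide. The 100 TB dataset size and the neglect of protocol overhead are hypothetical simplifications, not figures from the presentation.

```python
# Illustrative only: ideal time to move a dataset at the planned 2009
# bandwidths quoted on this slide (240 GB/s LAN, 5 GB/s WAN).
# The 100 TB dataset size is a hypothetical example.

LAN_GB_PER_S = 240.0     # planned local-area (InfiniBand) bandwidth
WAN_GB_PER_S = 5.0       # planned wide-area bandwidth
DATASET_GB = 100_000.0   # hypothetical 100 TB simulation output

def transfer_hours(size_gb: float, rate_gb_per_s: float) -> float:
    """Ideal transfer time in hours, ignoring protocol and striping overhead."""
    return size_gb / rate_gb_per_s / 3600.0

print(f"LAN: {transfer_hours(DATASET_GB, LAN_GB_PER_S):.2f} hours")
print(f"WAN: {transfer_hours(DATASET_GB, WAN_GB_PER_S):.2f} hours")
```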

  7. Systems infrastructure: Network (continued) • ORNL and LCF backbone connectivity • Consistent planned growth in ORNL external network bandwidth

  8. Systems infrastructure: Storage (archival storage) • HPSS software has already demonstrated the ability to scale to many petabytes. • Add two silos per year. • Tape capacity and bandwidth and disk capacity and bandwidth are all scaled to maintain a balanced system. • Use new methods to improve data transfer speeds between the parallel file systems and the archival system. Planned archive: 2007 (100 TF): 4 PB, 4 GB/s; 2008 (1000 TF): 10 PB, 10 GB/s; 2009 (1000 TF): 18 PB, 19 GB/s.
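To make the "balanced system" point concrete, here is a minimal Python sketch using the archive capacity and bandwidth figures from this slide. The assumption that half of the archive's capacity is migrated each year is hypothetical; the point is only that migration time stays roughly constant when capacity and bandwidth grow together.

```python
# Illustrative sketch of the "balanced system" idea: archive capacity and
# bandwidth grow together, so the time to migrate a fixed fraction of the
# archive stays roughly constant. Capacity/bandwidth figures are from the
# slide; the migration fraction is a hypothetical assumption.

roadmap = {            # year: (archive capacity in PB, archive bandwidth in GB/s)
    2007: (4, 4),
    2008: (10, 10),
    2009: (18, 19),
}

MIGRATED_FRACTION = 0.5  # hypothetical: half the archive turns over per year

for year, (capacity_pb, bw_gb_s) in roadmap.items():
    moved_gb = capacity_pb * 1e6 * MIGRATED_FRACTION   # PB -> GB (decimal)
    days = moved_gb / bw_gb_s / 86400.0
    print(f"{year}: ~{days:.1f} days to migrate {MIGRATED_FRACTION:.0%} of {capacity_pb} PB")
```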

  9. Systems infrastructure: Storage (central storage) • Increase scientific productivity by providing a single repository for simulation data • Connect to all major LCF resources • Connect to both InfiniBand and Ethernet networks • Potentially becomes the primary file system for the 1000 TF system. Planned central storage: 2007 (100 TF): 250 TB, 10 GB/s; 2008 (1000 TF): 1 PB, 60 GB/s; 2009 (1000 TF): 10 PB, 240 GB/s.

  10. Software and science: Overview • Cutting-edge hardware lays the foundation for science at the petascale: scientists using a production petascale system with petascale application development software. • Establishing a fully integrated computing environment. • Developing software infrastructure to enable productive utilization and system management. • Empowering scientific and engineering progress and allied educational activities using the petascale system. • Developing and educating the next generation of computational scientists to use the petascale system. • The NCCS Management Plan coordinates the transition to petascale production.

  11. Software and science: Roadmap to deliver Science Day 1. [Timeline chart, 2006–2009: system scaling from 50 TF (dual-core) through 100 TF and 250 TF (quad-core) to 1000 TF, with scale tests using Jaguar at each step; an operating-system track from dual-core LWK through quad-core Catamount and quad-core Linux LWK to the Baker and petascale LWK; a file-system track from establishing the CFS Lustre center of excellence at ORNL and the XT4 lightweight I/O interface to MPI-Lustre failover, an LCF-wide file system, and SIO Lustre clients with an external cluster of Lustre servers; a reliability track from checkpoint/restart (C/R) on the XT3 and the hardware supervisory system to the Mazama system and a fault monitoring and prediction system, exercised on the 100 TF, 250 TF, and 1000 TF systems; culminating in Science Day 1.]
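Among the resiliency milestones above is checkpoint/restart (C/R) on the XT3. The sketch below is a minimal, generic illustration of the checkpoint/restart pattern, not the XT3 implementation referenced on the slide; the file name, state layout, and checkpoint interval are hypothetical.

```python
# Minimal, generic checkpoint/restart sketch (illustrative only; not the
# XT3 C/R implementation from the slide). File name, state layout, and
# checkpoint interval are hypothetical.
import os
import pickle

CHECKPOINT = "state.ckpt"

def load_or_init():
    """Resume from the latest checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}

def save(state):
    """Write the checkpoint atomically so a crash cannot leave it corrupted."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_or_init()
for step in range(state["step"], 1000):
    state["value"] += 0.001          # stand-in for one simulation step
    state["step"] = step + 1
    if state["step"] % 100 == 0:     # checkpoint every 100 steps
        save(state)
```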

  12. Science drivers • Advanced energy systems (fuel cells, fusion and fission energy) • Bioenergy (ethanol production) • Environmental modeling (climate prediction, pollution remediation) • Nanotechnology (sensors, storage devices). “University, laboratory, and industrial researchers using a broad array of disciplinary perspectives are making use of the leadership computing resources to generate remarkable consequences for American competitiveness.” – Dr. Raymond L. Orbach, Undersecretary for Science, U.S. Department of Energy

  13. Software and science: Fusion. Expected outcomes, 5 years: • Full-torus, electromagnetic simulation of turbulent transport with kinetic electrons for simulation times approaching the transport time scale • Develop understanding of internal reconnection events in extended MHD, with assessment of RF heating and current drive techniques for mitigation. 10 years: • Develop quantitative, predictive understanding of disruption events in large tokamaks • Begin integrated simulation of burning plasma devices: multi-physics predictions for ITER. [Chart: computer performance (TFLOPS, 0.1 to 10,000) versus years (0–10), with milestones from 1D wave/plasma reduced-equation MHD, gyrokinetic ion turbulence in a flux tube, and extended MHD of moderate-scale devices, through 2D wave/plasma mode conversion (all orders) and gyrokinetic ion turbulence in a full torus, to turbulence simulation at the transport time scale, wave/plasma coupled to MHD and transport, MHD at long time scales, and integrated simulation of a burning plasma.]

  14. Software and science: Biology. Expected outcomes, 5 years: • Metabolic flux modeling for hydrogen and carbon fixation pathways • Constrained flexible docking simulations of interacting proteins. 10 years: • Multiscale stochastic simulations of microbial metabolic, regulatory, and protein interaction networks • Dynamic simulations of complex molecular machines. [Chart: computer performance (TFLOPS, 1 to 1,000) versus years (0–10), with milestones from comparative genomics, genome-scale protein threading, and constrained rigid docking, through constraint-based flexible docking, to molecular machine classical simulation, cell pathway and network simulation, community metabolic and regulatory signaling simulations, and protein machine interactions.]

  15. Software and science: Climate. Expected outcomes, 5 years: • Fully coupled carbon-climate simulation • Fully coupled sulfur-atmospheric chemistry simulation. 10 years: • Cloud-resolving, 30-km spatial resolution atmospheric climate simulation • Fully coupled physics, chemistry, and biology earth system model. [Chart: computer performance (TFLOPS) and data volume versus years (0–10), growing to roughly 750 TFLOPS and 340 TB, with milestones from tropospheric chemistry through dynamic vegetation, eddy-resolving ocean, biogeochemistry, and stratospheric chemistry to cloud-resolving simulation.]

  16. Software and science: Nanoscience. Expected outcomes, 5 years: • Realistic simulation of self-assembly and single-molecule electron transport • Finite-temperature properties of nanoparticles/quantum corrals. 10 years: • Multiscale modeling of molecular electronic devices • Computation-guided search for new materials/nanostructures. [Chart: computer performance (TFLOPS, 1 to 1,000) versus years (0–10), with milestones from magnetic nanoclusters, magnetic structure of quantum corrals, and thermodynamics and kinetics of small molecules, through first-principles domain wall interactions in nanowires, magnetic phase diagrams, molecular and nanoelectronics, and design of materials and structures, to spin-dynamics switching of nanobits, multiscale modeling of hysteresis, kinetics and catalysis of nano and enzyme systems, and the computation-guided search for new nanostructures.]

  17. Contact: Arthur S. (Buddy) Bland, Leadership Computing Facility Project Director, National Center for Computational Sciences, (865) 576-6727, blandas@ornl.gov
