
Bio-molecular Simulations on Future Computing Architectures

Join us for a workshop exploring bio-molecular simulations on upcoming computing architectures. Learn about significant changes in hardware, exascale computing, and the concept of hardware-software "Co-Design." Engage with experts in the field and discuss the future of scientific computing.


Presentation Transcript


  1. Welcome to Workshop on Bio-molecular Simulations on Future Computing Architectures
     Arthur Barney Mccabe, ORNL
     Organizing Committee: Kennie Merz (Florida), Qiang Cui (Wisconsin), Paul Crozier (SNL), Pratul Agarwal (ORNL), Sadaf Alam (Swiss National Supercomputing Centre), Adrian Roitberg (University of Florida), Ross Walker (University of California, San Diego)
     Organized in association with: NCCS, NICS
     Funding Support: NIH/NIGMS (R21GM083946)

  2. Today, ORNL is DOE’s largest science and energy laboratory
     • $1.3B budget
     • 4,350 employees
     • 3,900 research guests annually
     • $350 million invested in modernization
     • World’s most powerful open scientific computing facility
     • Nation’s largest concentration of open source materials research
     • Nation’s most diverse energy portfolio
     • Operating the world’s most intense pulsed neutron source
     • Managing the billion-dollar U.S. ITER project

  3. Leading the development of ultrascale scientific computing
     • Leadership Computing Facility:
       • World’s most powerful open scientific computing facility
       • Jaguar XT5 operating at 1.75 petaflops (#1 on Top500)
       • Exascale system by the end of the next decade
       • Focus on computationally intensive projects of large scale and high scientific impact
     • Addressing key science and technology issues: Climate, Fusion, Materials, Bioenergy
     • NICS: Kraken XT5, 1.02 petaflops (#4 on Top500)
     The world’s most powerful system for open science
     Managed by UT-Battelle for the Department of Energy

  4. Motivation for this workshop

  5. Motivation:
     • Computer hardware is changing
     • Significant architecture changes are expected in the future
     • Exascale: concurrency, power, and resiliency
     • Heterogeneous: GPUs, FPGAs
     • How will scientific computing adjust?
     • Is the burden on the hardware designers or vendors, or on the scientific community?
     • The concept of hardware-software “Co-Design”
     • Discussions between hardware vendors, software developers, and end-users
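To make the “Heterogeneous: GPUs” point concrete, here is a minimal sketch (not part of the workshop material) of what offloading a nonbonded force evaluation to a GPU looks like in CUDA. The particle count, lattice spacing, and the reduced-unit Lennard-Jones parameters EPS and SIG are illustrative placeholders, and the kernel uses a brute-force O(N²) loop rather than the neighbor lists and cutoffs a production MD code would employ.

```cuda
// Illustrative sketch only: brute-force Lennard-Jones forces on a GPU.
// All parameters are placeholders in reduced units, not values from any
// production bio-molecular simulation package.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define N    4096     // number of particles (illustrative)
#define EPS  1.0f     // LJ well depth (reduced units)
#define SIG  1.0f     // LJ diameter (reduced units)

// Each thread accumulates the total LJ force on one particle by summing
// over all other particles: a reference O(N^2) kernel with no cutoff.
__global__ void lj_forces(const float4 *pos, float4 *force, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;

    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2  = dx * dx + dy * dy + dz * dz;
        float sr2 = SIG * SIG / r2;
        float sr6 = sr2 * sr2 * sr2;
        // F_ij = 24*eps*(2*(sig/r)^12 - (sig/r)^6) / r^2 * r_ij
        float fscale = 24.f * EPS * sr6 * (2.f * sr6 - 1.f) / r2;
        fx += fscale * dx;
        fy += fscale * dy;
        fz += fscale * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}

int main()
{
    size_t bytes = N * sizeof(float4);
    float4 *h_pos = (float4 *)malloc(bytes);
    float4 *h_frc = (float4 *)malloc(bytes);

    // Place particles on a simple cubic lattice so no two coincide.
    for (int i = 0; i < N; ++i)
        h_pos[i] = make_float4((i % 16) * 1.5f,
                               ((i / 16) % 16) * 1.5f,
                               (i / 256) * 1.5f, 0.f);

    float4 *d_pos, *d_frc;
    cudaMalloc((void **)&d_pos, bytes);
    cudaMalloc((void **)&d_frc, bytes);
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);

    lj_forces<<<(N + 255) / 256, 256>>>(d_pos, d_frc, N);
    cudaMemcpy(h_frc, d_frc, bytes, cudaMemcpyDeviceToHost);

    printf("force on particle 0: (%g, %g, %g)\n",
           h_frc[0].x, h_frc[0].y, h_frc[0].z);

    cudaFree(d_pos); cudaFree(d_frc);
    free(h_pos); free(h_frc);
    return 0;
}
```

Restructuring exactly this kind of inner loop for accelerators, and deciding how much of that burden falls on vendors’ libraries versus the scientific software community, is the kind of co-design question the slide raises.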
