Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science
XSEDE’14 (16 July 2014)
R. L. Moore, C. Baru, D. Baxter, G. Fox (Indiana U), A. Majumdar, P. Papadopoulos, W. Pfeiffer, R. S. Sinkovits, S. Strande (NCAR), M. Tatineni, R. P. Wagner, N. Wilkins-Diehr, M. L. Norman
UCSD/SDSC (except as noted)
The long tail of science needs HPC
Comet is in response to NSF’s solicitation (13-528) to:
• “… expand the use of high end resources to a much larger and more diverse community
• … support the entire spectrum of NSF communities
• … promote a more comprehensive and balanced portfolio
• … include research communities that are not users of traditional HPC systems.”
Jobs and SUs at various scales across NSF resources
• 99% of jobs run on NSF’s HPC resources in 2012 used <2048 cores
• Those jobs consumed ~50% of the total core-hours across NSF resources
[Figure: cumulative usage vs. job size (cores), with one-node jobs marked]
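As a toy illustration of how a cumulative-usage curve like the one on this slide is built from accounting data, the Python sketch below sums core-hours by job size; the job records are hypothetical placeholders, not the 2012 NSF workload.

# Illustrative only: cumulative core-hour usage as a function of job size.
def cumulative_usage(jobs):
    """jobs: list of (cores, core_hours). Returns (cores, cumulative fraction of core-hours)."""
    total = sum(ch for _, ch in jobs)
    points, running = [], 0.0
    for cores, core_hours in sorted(jobs):
        running += core_hours
        points.append((cores, running / total))
    return points

# Hypothetical accounting records: (job size in cores, core-hours consumed)
jobs = [(16, 1_200), (64, 5_000), (256, 9_000), (1024, 20_000),
        (2048, 15_000), (8192, 30_000), (65536, 20_000)]

for cores, frac in cumulative_usage(jobs):
    print(f"jobs up to {cores:>6} cores account for {frac:5.1%} of core-hours")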
Comet: System Characteristics
• Available January 2015
• Total flops ~1.8-2.0 PF
• Dell primary integrator
• Intel next-generation processors (codenamed Haswell) with AVX2
• Aeon storage vendor
• Mellanox FDR InfiniBand
• Standard compute nodes
  • Dual Haswell processors
  • 128 GB DDR4 DRAM (64 GB/socket!)
  • 320 GB SSD (local scratch)
• GPU nodes
  • Four NVIDIA GPUs/node
• Large-memory nodes (Mar 2015)
  • 1.5 TB DRAM
  • Four Haswell processors/node
• Hybrid fat-tree topology
  • FDR (56 Gbps) InfiniBand
  • Rack-level (72 nodes) full bisection bandwidth
  • 4:1 oversubscription cross-rack
• Performance Storage
  • 7 PB, 200 GB/s
  • Scratch & persistent storage
• Durable Storage (reliability)
  • 6 PB, 100 GB/s
• Gateway hosting nodes and VM image repository
• 100 Gbps external connectivity to Internet2 & ESnet
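As a back-of-the-envelope check on how a peak figure in the ~2 PF range follows from dual-socket Haswell nodes with AVX2, consider the sketch below; the node count, cores per socket, and clock rate are illustrative assumptions, not figures from the slide.

# Rough peak-flops estimate under assumed machine parameters.
nodes = 1944              # assumed number of standard compute nodes
cores_per_node = 24       # assumed: 2 sockets x 12 cores
clock_ghz = 2.5           # assumed nominal clock rate
flops_per_cycle = 16      # Haswell AVX2: 2 FMA units x 4 doubles x 2 flops

peak_pflops = nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle / 1e15
print(f"Estimated peak: {peak_pflops:.2f} PFLOP/s")   # ~1.87 PF with these inputs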
Comet Architecture
[Architecture diagram: 18 racks of 72 Haswell nodes with 320 GB node-local SSDs; 7x 36-port FDR switches in each rack wired as a full fat-tree, with 4:1 oversubscription between racks via mid-tier and core FDR switches; GPU nodes and 4 large-memory nodes; Arista 40GbE switches (2x) and data movers (4x); Juniper 100 Gbps connection to Internet2 and R&E networks; Performance Storage (7 PB, 200 GB/s); Durable Storage (6 PB, 100 GB/s)]
Additional support components (not shown for clarity): NFS servers, virtual image repository, gateway/portal hosting nodes, login nodes, Ethernet management network, Rocks management nodes
SSDs – building on Gordon success
Based on our experiences with Gordon, a number of applications will benefit from continued access to flash:
• Applications that generate large numbers of temporary files
• Computational finance – analysis of multiple markets (NASDAQ, etc.)
• Text analytics – word correlations in Google Ngram data
• Computational chemistry codes that write one- and two-electron integral files to scratch
• Structural mechanics codes (e.g., Abaqus), which generate stiffness matrices that don’t fit into memory
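A minimal sketch of pointing temporary files at the node-local SSD rather than the parallel file system; the scratch location and environment variable name are assumptions and will be site- and scheduler-specific.

# Direct temporary files to node-local scratch if the environment provides it.
import os
import tempfile

local_scratch = os.environ.get("TMPDIR", "/tmp")   # assumed variable name

with tempfile.NamedTemporaryFile(dir=local_scratch, suffix=".scr", delete=False) as f:
    # e.g., a chemistry code would stream two-electron integrals here
    f.write(b"intermediate data kept off the shared file system\n")
    print("scratch file on local SSD:", f.name)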
Large-memory nodes
While most user applications will run well on the standard compute nodes, a few domains will benefit from the large-memory (1.5 TB) nodes:
• De novo genome assembly: ALLPATHS-LG, SOAPdenovo, Velvet
• Finite-element calculations: Abaqus
• Visualization of large data sets
GPU nodes
Comet’s GPU nodes will serve a number of domains:
• Molecular dynamics, one of the biggest GPU success stories; packages include Amber, CHARMM, Gromacs, and NAMD
• Applications that depend heavily on linear algebra
• Image and signal processing
Key Comet Strategies
• Target modest-scale users and new users/communities: goal of 10,000 users/year!
• Support capacity computing, with a system optimized for small/modest-scale jobs and quicker resource response via allocation/scheduling policies
• Build upon and expand efforts with science gateways, encouraging gateway usage and hosting via software and operating policies
• Provide a virtualized environment to support development of customized software stacks, virtual environments, and project control of workspaces
Comet will serve a large number of users, including new communities/disciplines
• Allocation/scheduling policies optimized for high throughput of many modest-scale jobs (leveraging Trestles experience)
  • Optimized for rack-level jobs, but cross-rack jobs feasible
  • Optimized for throughput (a la Trestles)
  • Per-project allocation caps to ensure large numbers of users
  • Rapid access for start-ups, with one-day account generation
  • Limits on job sizes, with possibility of exceptions
• Gateway-friendly environment: science gateways reach large communities with easy user access
  • e.g., the CIPRES gateway alone currently accounts for ~25% of all users of NSF resources, with 3,000 new users/year and ~5,000 users/year
• Virtualization provides low barriers to entry (see later charts)
Changing the face of XSEDE HPC users
• System design and policies
  • Allocation, scheduling, and security policies that favor gateways
  • Support for gateway middleware and gateway hosting machines
  • Customized environments with high-performance virtualization
  • Flexible allocations for bursty usage patterns
  • Shared-node runs for small jobs; user-settable reservations
  • Third-party apps
• Leverage and augment investments elsewhere
  • FutureGrid experience: image packaging, training, on-ramp
  • XSEDE (ECSS NIP & Gateways, TEOS, Campus Champions)
  • Build off established successes supporting new communities
  • Example-based documentation in Comet focus areas
  • Unique HPC University contributions to enable community growth
Virtualization Environment
• Leveraging expertise of the Indiana U/FutureGrid team
• VM jobs scheduled just like batch jobs (not a conventional cloud environment with immediate elastic access)
• VMs will be an easy on-ramp for new users/communities, including low porting time
• Flexible software environments for new communities and apps
• VM repository/library
• Virtual HPC cluster (multi-node) with near-native IB latency and minimal overhead (SR-IOV)
Single Root I/O Virtualization in HPC
• Problem: complex workflows demand increasing flexibility from HPC platforms
  • Pro: virtualization provides that flexibility
  • Con: virtualization has traditionally cost I/O performance (e.g., excessive DMA interrupts)
• Solution: SR-IOV and Mellanox ConnectX-3 InfiniBand HCAs
  • One physical function (PF) presents multiple virtual functions (VFs), each with its own DMA streams, memory space, and interrupts
  • Allows DMA to bypass the hypervisor and go directly to VMs
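For illustration only, the generic Linux PCI sysfs interface exposes a device’s virtual functions as sketched below. The PCI address is a placeholder, and ConnectX-3 drivers of this era typically enabled VFs through mlx4_core module parameters rather than sysfs; writing the value also requires root.

# Conceptual sketch: inspecting/enabling SR-IOV VFs via the generic PCI sysfs knobs.
from pathlib import Path

device = Path("/sys/bus/pci/devices/0000:03:00.0")   # assumed HCA address

total_vfs = int((device / "sriov_totalvfs").read_text())
print(f"device supports up to {total_vfs} virtual functions")

# Enabling 8 VFs would hand each guest its own DMA streams and interrupts:
# (device / "sriov_numvfs").write_text("8")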
High-Performance Virtualization on Comet
• Mellanox FDR InfiniBand HCAs with SR-IOV
• Rocks and OpenStack Nova to manage VMs
• Flexibility to support complex science gateways and web-based workflow engines
• Custom compute appliances and virtual clusters developed with FutureGrid and their existing expertise
• Backed by virtualized Lustre running over virtualized InfiniBand
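A rough sketch of launching a VM with the OpenStack Nova Python client of that era; credentials, image, and flavor names are placeholders, and this is not presented as Comet’s actual provisioning interface.

# Illustrative python-novaclient (v2-era) usage under assumed credentials.
from novaclient import client

nova = client.Client("2", "demo_user", "demo_password", "demo_project",
                     "http://keystone.example.org:5000/v2.0")   # placeholder endpoint

image = nova.images.find(name="centos-hpc")      # assumed image name
flavor = nova.flavors.find(name="m1.large")      # assumed flavor name

server = nova.servers.create(name="vc0-node01", image=image, flavor=flavor)
print("requested VM:", server.id)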
Benchmark comparisons of SR-IOV cluster vs. AWS (early 2013): hardware/software configuration
50x lower latency than Amazon EC2
• SR-IOV:
  • <30% overhead for messages <128 bytes
  • <10% overhead for eager send/recv
  • 0% overhead in the bandwidth-limited regime
• Amazon EC2:
  • >5000% worse latency
  • Time dependent (noisy)
OSU Microbenchmarks (3.9, osu_latency)
10x more bandwidth than Amazon EC2
• SR-IOV:
  • <2% bandwidth loss over the entire range
  • >95% of peak bandwidth
• Amazon EC2:
  • <35% of peak bandwidth
  • 900% to 2500% worse bandwidth than virtualized InfiniBand
OSU Microbenchmarks (3.9, osu_bw)
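The overhead and efficiency percentages quoted on these two slides reduce to simple ratios; the sketch below shows the arithmetic with placeholder numbers rather than the measured osu_latency/osu_bw results.

# How the overhead/efficiency percentages are computed (placeholder inputs).
def overhead_pct(native, virtualized):
    """Extra latency of the virtualized path relative to bare metal."""
    return 100.0 * (virtualized - native) / native

def efficiency_pct(peak, measured):
    """Fraction of peak bandwidth actually delivered."""
    return 100.0 * measured / peak

print(f"latency overhead:     {overhead_pct(1.1, 1.4):.0f}%")          # hypothetical small-message case
print(f"bandwidth efficiency: {efficiency_pct(6000.0, 5800.0):.0f}%")  # hypothetical MB/s vs. peak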
Weather Modeling – 15% Overhead
• 96-core (6-node) calculation
• Nearest-neighbor communication
• Scalable algorithms
• SR-IOV incurs a modest (15%) performance hit
• …but is still 20% faster*** than Amazon
WRF 3.4.1 – 3-hour forecast
*** 20% faster despite the SR-IOV cluster having 20% slower CPUs
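The footnote’s “20% faster despite 20% slower CPUs” implies an even larger advantage at equal clock rates. A small sketch of that normalization, using hypothetical runtimes and assuming runtime scales linearly with clock speed:

# Arithmetic behind "20% faster despite 20% slower CPUs".
ec2_runtime = 1.00                    # normalized EC2 wall-clock time
sriov_runtime = ec2_runtime / 1.20    # 20% faster => ~0.83x the time
clock_penalty = 0.80                  # SR-IOV cluster CPUs at 80% of EC2's clock (assumed linear scaling)

per_clock_speedup = (ec2_runtime / sriov_runtime) / clock_penalty
print(f"speedup at equal clock rates: ~{per_clock_speedup:.2f}x")   # ~1.5x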
Quantum ESPRESSO: 5x Faster than EC2
• 48-core (3-node) calculation
• CG matrix inversion (irregular communication)
• 3D FFT matrix transposes (all-to-all communication)
• 28% slower with SR-IOV
• SR-IOV still >500% faster*** than EC2
Quantum ESPRESSO 5.0.2 – DEISA AUSURF112 benchmark
*** despite the SR-IOV cluster having 20% slower CPUs
SR-IOV is a huge step forward in high-performance virtualization
• Substantial improvement in latency over Amazon EC2, with nearly zero bandwidth overhead
• Benchmark application performance confirms a significant improvement over EC2
• SR-IOV lowers the performance barrier to virtualizing the interconnect, making fully virtualized HPC clusters viable
• Comet will deliver virtualized HPC to new/non-traditional communities that need flexibility, without major loss of performance
NSF 13-528: Competitive proposals should address:
• “Complement existing XD capabilities with new types of computational resources attuned to less traditional computational science communities;
• Incorporate innovative and reliable services within the HPC environment to deal with complex and dynamic workflows that contribute significantly to the advancement of science and are difficult to achieve within XD;
• Facilitate transition from local to national environments via the use of virtual machines;
• Introduce highly useable and cost efficient cloud computing capabilities into XD to meet national scale requirements for new modes of computationally intensive scientific research;
• Expand the range of data intensive and/or computationally-challenging science and engineering applications that can be tackled with current XD resources;
• Provide reliable approaches to scientific communities needing a high-throughput capability.”
VCs on Comet: Operational Details - one VM per physical node
[Diagram: virtual clusters VC0-VC3 mapped onto physical nodes running the XSEDE stack; each virtual machine runs the user’s stack; each virtual cluster has a head node (HN)]
VCs on Comet: Operational Details - head node remains active after VC shutdown
[Diagram: as above, with each virtual cluster’s head node (HN) persisting after the rest of the VC is shut down]
VCs on Comet: Spin-up/shutdown - each VC has its own ZFS file system for storing VMIs; latency-hiding tricks on startup
[Diagram: virtual machine disk images for each VC stored in a ZFS pool; physical nodes (XSEDE stack) host the virtual machines (user stack) and the virtual cluster head node (HN)]
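A conceptual sketch, not the actual Comet tooling, of the latency-hiding idea: boot the virtual cluster’s head node while compute-node images are still being cloned copy-on-write from the VC’s ZFS file system. Pool, snapshot, and node names are placeholders, and the zfs command is printed rather than executed.

# Conceptual spin-up sketch with overlapped head-node boot and image cloning.
from concurrent.futures import ThreadPoolExecutor

POOL = "vcpool/vc0"   # assumed per-VC ZFS file system holding the VM images

def clone_image(node):
    # A real implementation would run this copy-on-write clone via subprocess
    cmd = ["zfs", "clone", f"{POOL}/golden@base", f"{POOL}/{node}"]
    print("would run:", " ".join(cmd))

def boot(node):
    # Placeholder for whatever actually starts the VM (e.g., a libvirt or Nova call)
    print("booting", node)

def spin_up(compute_nodes):
    with ThreadPoolExecutor() as pool:
        head = pool.submit(boot, "vc0-head")                           # start head node right away
        clones = [pool.submit(clone_image, n) for n in compute_nodes]  # overlap image cloning
        head.result()
        for c in clones:
            c.result()
    for n in compute_nodes:
        boot(n)                                                        # compute nodes follow

spin_up([f"vc0-node{i:02d}" for i in range(1, 5)])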