Computational Infrastructure
Ion I. Moraru
UConn Health HPC Facility
• Originated out of the computational needs of another NIH P41 grant (NRCAM, continuously funded since 1998)
• Developed a large-scale biological simulation service well before SaaS approaches (Virtual Cell, a.k.a. VCell, distributed architecture)
• Incorporates Enterprise IT services and support, including extensive virtualization infrastructure for both mission-critical and research applications
• Since 2010: dedicated new datacenter, major upgrades, virtualization
• 2014 – Science DMZ: cross-campus, 100 GbE, private cloud
Scope
• Kinetic modeling and simulation platform
  • Compartmental or spatially resolved (1D/2D/3D)
  • Stochastic, deterministic, hybrid (illustrated in the sketch below)
  • Reaction-diffusion, advection, electrophysiology
• Major emphasis on reuse and reproducibility
  • SaaS: VCell simulations from 2001 still 100% reproducible
  • Standards development: SBML, SED-ML Editors, HARMONY
• Service statistics:
  • Total Registered VCell Users: 17,048
  • Users Who Ran Simulations: 4,030
  • Currently Stored Models: 58,798
  • Currently Stored Simulations: 353,603
  • Publicly Available Models: 597
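A minimal sketch (not VCell code) of the deterministic and stochastic simulation modes listed above, applied to the same first-order reaction A -> B; the rate constant, initial copy number, and time horizon are made-up values for illustration only.

# Minimal sketch, not VCell code: one first-order reaction A -> B simulated
# deterministically (ODE, explicit Euler) and stochastically (Gillespie SSA).
# K, A0, and T_END are hypothetical illustration values.
import random
import math

K = 0.1       # rate constant (1/s), hypothetical
A0 = 1000     # initial copy number of species A, hypothetical
T_END = 30.0  # simulated time (s), hypothetical

def deterministic(dt=0.01):
    """Integrate dA/dt = -K*A with simple explicit Euler steps."""
    a, t = float(A0), 0.0
    while t < T_END:
        a += -K * a * dt
        t += dt
    return a

def stochastic(seed=1):
    """Gillespie SSA for the single reaction A -> B."""
    rng = random.Random(seed)
    a, t = A0, 0.0
    while a > 0:
        propensity = K * a
        t += rng.expovariate(propensity)  # waiting time to next reaction event
        if t > T_END:
            break
        a -= 1                            # one A molecule converts to B
    return a

if __name__ == "__main__":
    print("deterministic A(t_end):", round(deterministic(), 1))
    print("stochastic    A(t_end):", stochastic())
    print("analytic      A(t_end):", round(A0 * math.exp(-K * T_END), 1))

For large copy numbers the stochastic trajectory approaches the deterministic solution, which is why hybrid methods are attractive for systems that mix abundant and rare species.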
HPC Facility Resources
• Storage (> 1 PB):
  • Main shared scale-out storage cluster (330 TB EMC Isilon "SmartPools")
  • Multiple dedicated 30/50 TB "scratch storage areas" for primary applications (VCell, NGS pipeline, etc.)
  • Private cloud object store (650 TB Amplistor), geo-dispersed across 3 datacenters
• Compute (> 40 Tflops):
  • Large CPU-only and hybrid CPU/GPGPU compute clusters + OSG cluster
  • Currently 40+ Tflop compute capacity and 5.8 TB RAM
  • Choice of 3 batch scheduler systems (PBS, SGE, MJS); a submission sketch follows below
• Virtualization Infrastructure:
  • Redundant VMware server and desktop virtualization hosts (456 CPU cores, 1 TB RAM) hosting 100+ Windows/Linux virtual machines
  • SSD high-IOPS performance cache tier
• Datacenter Infrastructure:
  • UPS and generator-backed power (160 kW), redundant cooling (50 tons)
  • Dedicated 3 × 40 GbE dark fiber connection to off-site DR location
• Network (100+ GbE):
  • Full non-oversubscribed 10/40 GbE datacenter network core layer
  • BioScience CT Research Network – 100 GbE to Storrs and Internet2
  • NSF CC-NIE Science DMZ – low latency, non-firewalled
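As a sketch of how work typically reaches one of the batch schedulers listed above, assuming a PBS/Torque-style system reachable via qsub; the job name, queue, resource request, and simulation command are hypothetical placeholders, not the facility's actual configuration.

# Minimal sketch: generate a PBS job script and submit it with qsub.
# Queue name, resource limits, and the simulation command are hypothetical.
import subprocess
import tempfile

JOB_SCRIPT = """\
#!/bin/bash
#PBS -N vcell-sim-example
#PBS -l nodes=1:ppn=4,mem=8gb,walltime=02:00:00
#PBS -q batch
cd "$PBS_O_WORKDIR"
./run_simulation --input model.vcml --output results/
"""

def submit():
    """Write the job script to a temporary file and hand it to qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(JOB_SCRIPT)
        path = f.name
    # qsub prints the new job ID on stdout on success
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("submitted job:", submit())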