NAMD Development Goals
L.V. (Sanjay) Kale, Professor, Dept. of Computer Science
http://www.ks.uiuc.edu/Research/namd/
NAMD Vision
• Make NAMD a widely used MD program
  • For large molecular systems
  • Scaling from PCs and clusters to large parallel machines
  • For interactive molecular dynamics
• Goals:
  • High performance
  • Ease of use: configuring and running
  • Ease of modification (for us and for advanced users)
  • Maximize reuse of communication and control patterns
  • Push parallel complexity down into the Charm++ runtime
  • Incorporate features needed by scientists
NAMD 3 New Features
• Software goal:
  • Modular architecture to permit reuse and extensibility
• Scientific/numeric modules:
  • Implicit solvent models (e.g., generalized Born)
  • Replica exchange (e.g., 10 replicas on 16 processors)
  • Self-consistent polarizability with a (sequential) CPU penalty of less than 100%
  • Hybrid quantum/classical mechanics
  • Fast nonperiodic (and periodic) electrostatics using multiple-grid methods
  • A Langevin integrator that permits larger time steps (by being exact for constant forces; see the sketch after this list)
  • An integrator module that computes shadow energy
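For background on the "exact for constant forces" claim, a brief sketch of the underlying idea (our derivation, assuming a standard Langevin thermostat; the slide gives no formulas). For a particle of mass m with friction gamma at temperature T:

    m\,\dot{v} = F - \gamma m\,v + \sqrt{2\gamma k_B T m}\,\xi(t),
    \qquad \langle \xi(t)\,\xi(t') \rangle = \delta(t - t')

For a constant force F, the velocity is an Ornstein-Uhlenbeck process and can be advanced over a full step h with no discretization error:

    v(t+h) = e^{-\gamma h}\,v(t)
           + \frac{F}{\gamma m}\bigl(1 - e^{-\gamma h}\bigr)
           + \sqrt{\frac{k_B T}{m}\bigl(1 - e^{-2\gamma h}\bigr)}\;R,
    \qquad R \sim \mathcal{N}(0,1)

Because friction and noise are handled exactly rather than by a Taylor expansion, they place no stability limit on h, which is what permits the larger time steps.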
Design
• NAMD 3 will be a major rewrite of NAMD
  • Incorporate lessons learned over the past years
  • Use modern features of Charm++
  • Refactor software for modularity
  • Restructure to support planned features
  • Algorithms that scale to even larger machines
Programmability
• NAMD 3 scientific modules:
  • Forces, integration, steering, analysis
  • Keep code with a common goal together
  • Add new features without touching old code (see the interface sketch after this list)
• Parallel decomposition framework:
  • Support common scientific algorithm patterns
  • Avoid duplicating services for each algorithm
  • Start with the NAMD 2 architecture (but not its code)
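To make the modularity idea concrete, here is a hypothetical C++ sketch of a pluggable force-module interface in this style. All names (ForceModule, SimulationState, addForces, GeneralizedBornModule) are invented for illustration; NAMD 3's real interfaces may differ.

    // Hypothetical sketch: a new feature plugs in as a subclass
    // without touching existing modules.
    #include <string>
    #include <vector>

    struct Vector3 { double x, y, z; };

    // Read-only view of the state a force module needs.
    struct SimulationState {
        std::vector<Vector3> positions;
        double time;
    };

    class ForceModule {
    public:
        virtual ~ForceModule() = default;
        virtual std::string name() const = 0;
        // Accumulate this module's forces into 'forces';
        // return the potential-energy contribution.
        virtual double addForces(const SimulationState& state,
                                 std::vector<Vector3>& forces) = 0;
    };

    class GeneralizedBornModule : public ForceModule {
    public:
        std::string name() const override { return "implicit solvent (GB)"; }
        double addForces(const SimulationState& state,
                         std::vector<Vector3>& forces) override {
            // ... compute generalized Born forces and energy here ...
            return 0.0;
        }
    };

The integrator can then iterate over a list of ForceModule pointers, so forces, steering, and analysis code each live in their own module.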
[Architecture diagram: MDAPI on top; new science modules (replica exchange, QM, implicit solvents, polarizable force field); NAMD core (bonded force calculation, integration, pairwise force calculation, PME); Charm++ modules (FFT, fault tolerance, grid scheduling, collective communication, load balancer); core Charm++; running on clusters, Lemieux, …, TeraGrid]
MDAPI Modular Interface
• Separate "front end" from modular "engine"
  • Same program, or over a network or grid
• Dynamic discovery of engine capabilities; no limitations imposed by the interface (see the sketch after this list)
• Front ends: NAMD 2, NAMD 3, Amber, CHARMM, VMD
• Engines: NAMD 2, NAMD 3, MINDY
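A hypothetical sketch of what dynamic capability discovery could look like from a front end's point of view. The Engine class and capability strings below are our invention for illustration, not the actual MDAPI.

    // Hypothetical capability-discovery sketch (not the real MDAPI).
    #include <iostream>
    #include <set>
    #include <string>

    class Engine {
    public:
        virtual ~Engine() = default;
        // The front end asks the engine what it can do at run time,
        // so the interface itself imposes no fixed feature list.
        virtual std::set<std::string> capabilities() const = 0;
    };

    void configureFrontEnd(const Engine& engine) {
        const auto caps = engine.capabilities();
        if (caps.count("replica-exchange"))
            std::cout << "enabling replica-exchange UI\n";
        if (caps.count("implicit-solvent"))
            std::cout << "enabling implicit-solvent options\n";
    }

Because features are discovered rather than hard-coded, an old front end can drive a newer engine and simply ignore capabilities it does not know.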
Terascale Biology and Resources
• PSC LeMieux
• TeraGrid
• Cray X1
• NCSA Tungsten
• ASCI Purple
• RIKEN MDGRAPE
• Red Storm
• Thor's Hammer
NAMD on Charm++
• Active computer science collaboration (since 1992)
• Object array: a collection of chares,
  • with a single global name for the collection, and
  • each member addressed by an index
• Mapping of element objects to processors is handled by the system (see the example after this list)
[Diagram: user's view, one contiguous array A[0] A[1] A[2] A[3] A[..]; system view, elements such as A[0] and A[3] scattered across processors]
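A minimal toy example of the chare-array abstraction, assuming the standard Charm++ toolchain (an interface .ci file plus C++); the "hello" names are ours, not NAMD's.

    // --- hello.ci (Charm++ interface file), shown as a comment ---
    // mainmodule hello {
    //   mainchare Main {
    //     entry Main(CkArgMsg* m);
    //   };
    //   array [1D] Hello {
    //     entry Hello();
    //     entry void sayHi();
    //   };
    // };

    // --- hello.C ---
    #include "hello.decl.h"

    class Main : public CBase_Main {
    public:
      Main(CkArgMsg* m) {
        delete m;
        // Create a 4-element 1D chare array under one global name;
        // the runtime decides which processor holds each element.
        CProxy_Hello arr = CProxy_Hello::ckNew(4);
        arr.sayHi();        // broadcast to every element
        // arr[2].sayHi();  // or address one member by its index
      }
    };

    class Hello : public CBase_Hello {
    public:
      Hello() {}
      Hello(CkMigrateMessage* m) {}  // needed so elements can migrate
      void sayHi() {
        CkPrintf("Hello from element %d on PE %d\n", thisIndex, CkMyPe());
        contribute(CkCallback(CkCallback::ckExit));  // exit once all report
      }
    };

    #include "hello.def.h"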
NAMD 3 Features Based on Charm++
• Adaptive load balancing (see the migration sketch after this list)
• Optimized communication
  • Persistent communication; optimized concurrent multicast/reduction
• Flexible, tuned, parallel FFT libraries
• Automatic checkpointing
  • Ability to change the number of processors
  • Scheduling on the grid
• Fault tolerance
  • Fully automated restart
  • Survive loss of a node
• Scaling to large machines
  • Fine-grained parallelism for PME, bonded, and nonbonded force evaluations
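Adaptive load balancing, checkpointing, and shrink/expand all rest on migratable Charm++ objects that serialize themselves through a pup() routine. A minimal sketch (our toy example, not NAMD code; 'Patch' and its fields are stand-ins):

    // The runtime calls pup() to pack the element before migrating it
    // to another processor or writing a checkpoint, and again to
    // rebuild it at the destination.
    #include "pup_stl.h"        // operator| overloads for STL containers
    // #include "patch.decl.h"  // generated from the .ci file
    #include <vector>

    class Patch : public CBase_Patch {
      std::vector<double> coords;  // per-atom state owned by this element
      int step;
    public:
      Patch() : step(0) {}
      Patch(CkMigrateMessage* m) {}  // migration constructor

      void pup(PUP::er& p) {
        CBase_Patch::pup(p);  // let the framework pack its own state
        p | step;
        p | coords;
      }
    };

One routine serves both directions: the same pup() code packs on the source processor and unpacks on the destination, so checkpointing and migration come almost for free once it is written.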
Efficient Parallelization for IMD
• Characteristics
  • Limited parallelism on small systems
  • Real-time response needed
• Fine-grained parallelization
  • Improve speedups on 4K-30K atom systems
• Time/step goal
  • Currently 0.2 s/step for BrH on a single processor (P4, 1.7 GHz)
  • Targeting 0.003 s/step on 64 processors of a faster machine, i.e., 20 picoseconds per minute (see the arithmetic after this list)
• Flexible use of clusters
  • Migrating jobs (shrink/expand)
  • Better utilization when a machine is idle
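The 20 ps/min figure follows if one assumes the conventional 1 fs MD timestep (our assumption; the slide does not state the step size):

    \frac{60\ \text{s/min}}{0.003\ \text{s/step}} = 20{,}000\ \text{steps/min},
    \qquad
    20{,}000\ \text{steps} \times 1\ \text{fs/step} = 20\ \text{ps/min}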
Integration with CHARMM/Amber?
• Goal: NAMD as the parallel simulation engine for CHARMM/Amber
• Generate input files in CHARMM/Amber
  • NAMD must read native file formats
• Run with NAMD on a parallel computer
  • Need to use equivalent algorithms
• Analyze the simulation in CHARMM/Amber
  • NAMD must generate native file formats