Harnessing Grid-Based Parallel Computing Resources for Molecular Dynamics Simulations
Josh Hursey
Villin Folding
[animation of a villin folding trajectory]
Overview
Folding@Clusters is an adaptive framework for harnessing low-latency parallel compute resources for protein folding research. It combines capability discovery, load balancing, process monitoring, and checkpoint/restart services to provide a platform for molecular dynamics simulations on a range of grid-based parallel computing resources, including clusters, SMP machines, and clusters of SMP machines (sometimes called constellations).
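The overview names checkpoint/restart as one of the framework's services, but these slides do not show how it works. The C sketch below is only an illustration of the general pattern; the file name, checkpoint interval, step count, and the idea of saving nothing but a step counter are all invented for this example (a real MD checkpoint would also store coordinates, velocities, and RNG state).

/* Hypothetical sketch of the checkpoint/restart pattern described in the
 * overview. Nothing here comes from the Folding@Clusters sources: the file
 * name, interval, and saved state are placeholders for the real MD core. */
#include <stdio.h>

#define CHECKPOINT_FILE  "state.chk"
#define CHECKPOINT_EVERY 1000
#define TOTAL_STEPS      10000

/* Try to resume from an earlier run; return the step to start from. */
static long restore_checkpoint(void)
{
    long step = 0;
    FILE *fp = fopen(CHECKPOINT_FILE, "rb");
    if (fp != NULL) {
        if (fread(&step, sizeof step, 1, fp) != 1)
            step = 0;                 /* unreadable checkpoint: start over */
        fclose(fp);
    }
    return step;
}

/* Persist enough state to resume after a crash or preemption. */
static void write_checkpoint(long step)
{
    FILE *fp = fopen(CHECKPOINT_FILE, "wb");
    if (fp != NULL) {
        fwrite(&step, sizeof step, 1, fp);
        fclose(fp);
    }
}

int main(void)
{
    long step;
    for (step = restore_checkpoint(); step < TOTAL_STEPS; ++step) {
        /* ... one molecular dynamics integration step would run here ... */
        if (step > 0 && step % CHECKPOINT_EVERY == 0)
            write_checkpoint(step);
    }
    remove(CHECKPOINT_FILE);          /* work unit finished cleanly */
    return 0;
}

If the process is killed partway through, restarting it resumes from the last checkpoint rather than from step zero, which is the property the framework's checkpoint/restart service provides for the real simulation cores.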
Design Goals
• Provide an easy-to-use, open-source interface to significant computing resources for scientists performing molecular dynamics simulations on large biomolecular systems
• Automate the process of running molecular systems on a variety of parallel computing resources
• Handle failures gracefully & automatically
• Don't hinder achievable performance
• Ease of use for scientists, system administrators, & contributors
• Provide low-friction install, configuration, & run-time interfaces
• Sustain tight linkage with the Folding@Home project
Open Source Building Blocks
• GROMACS: molecular dynamics software package; the primary scientific core
• FFTW: fast Fourier transform library, used internally by GROMACS
• LAM/MPI: Message Passing Interface implementation; supports the MPI-2 specification (a small sanity-check program follows this list)
• COSM: distributed computing library that aids portability; provides capability discovery, logging, & base utilities
• NetPIPE: common tool for measuring bandwidth & latency; used in capability discovery
• Folding@Home: large-scale distributed computing project; the foundation for this project
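As a quick way to exercise the LAM/MPI layer of this stack, here is a minimal MPI program (not part of Folding@Clusters) that reports which node each rank lands on.

/* Minimal MPI "where am I" program (not part of Folding@Clusters) for
 * verifying the LAM/MPI layer of the stack listed above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* Each rank reports the node it landed on, which is a quick check
     * that lamboot picked up every host in the boot schema. */
    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

A typical invocation, after lamboot has started the LAM daemons, is mpicc -o mpi_check mpi_check.c followed by mpirun -np 4 mpi_check.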
Contributor Setup
• Create a user to run Folding@Clusters
• Download & unpack the distribution
• Confirm LAM/MPI installation & configuration
• Start LAM/MPI: $ lamboot
• Configure Folding@Clusters using mother.conf
• Start Folding@Clusters: $ mpirun -np 1 bin/mother

$ lamnodes
n0 c1.cluster.earlham.edu:2:origin,this_node
n1 c2.cluster.earlham.edu:2:
n2 c3.cluster.earlham.edu:2:
n3 c4.cluster.earlham.edu:2:
n4 c5.cluster.earlham.edu:2:
n5 c6.cluster.earlham.edu:2:
n6 c7.cluster.earlham.edu:2:
n7 c8.cluster.earlham.edu:2:
n8 c9.cluster.earlham.edu:2:
n9 c10.cluster.earlham.edu:2:

$ cat conf/mother.conf
[Network]
LamHosts=n0,n1,n2,n3,n4,n5,n6
LamMother=n0
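The slides do not show what bin/mother does internally. As a hedged sketch only, the snippet below illustrates how a process started with mpirun -np 1 could use the MPI-2 dynamic process interface (which LAM/MPI supports, as noted earlier) to launch workers across the booted nodes; the worker executable name, the worker count, and the overall structure are assumptions made for illustration.

/* Hedged sketch only: bin/mother's internals are not shown in these slides.
 * This demonstrates the MPI-2 dynamic process interface that a single
 * front-end process could use to spawn workers across the LAM universe.
 * "bin/worker" and NWORKERS are invented for this example. */
#include <mpi.h>
#include <stdio.h>

#define NWORKERS 6   /* placeholder worker count for this sketch */

int main(int argc, char *argv[])
{
    MPI_Comm workers;
    int errcodes[NWORKERS];

    MPI_Init(&argc, &argv);

    /* Launch the worker executable on the nodes of the booted LAM universe. */
    MPI_Comm_spawn("bin/worker", MPI_ARGV_NULL, NWORKERS,
                   MPI_INFO_NULL, 0, MPI_COMM_SELF,
                   &workers, errcodes);

    printf("mother: spawned %d workers\n", NWORKERS);

    /* Work-unit distribution, monitoring, and checkpoint collection would
     * happen here, over the "workers" intercommunicator. */

    MPI_Comm_disconnect(&workers);
    MPI_Finalize();
    return 0;
}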
Testing Environment: Cairo
• Network fabric: 2 Netgear GSM712 Gigabit (1000 Mb/s) switches, linked by dual GBIC/1000BASE-T RJ45 modules
• OS: Yellow Dog Linux (4.0 release, 2.6.8-1 SMP kernel)
• GCC: 3.3.3-16
Performance
[chart: simulation performance for Proteasome (stable), DPPC, and Villin]
Future Directions
• New scientific cores (Amber, NAMD, etc.)
• Remove dependencies on pre-installed software
• Extend the testing suite of molecules
• Extend the range of parallel compute resources used in testing
• Abstract the @Clusters framework
• Investigate load balancing & resource usage improvements
• Architecture addition: Grandmothers
• Beta release!
About Us
Josh Hursey, Charles Peck, Josh McCoy, John Schaefer, Vijay Pande, Erik Lindahl, Adam Beberg
Speedup
[chart: parallel speedup for Proteasome (stable), DPPC, and Villin]
Testing Environment: Bazaar
• Network fabric: 2 switches (3Com 3300XM 100 Mb/s, 3Com 3300 100 Mb/s), linked by a 3Com MultiLink cable
• OS: SuSE Linux (2.6.4-52 SMP kernel)
• GCC: 3.3.3