
Hybrid MPI/Pthreads Parallelization of the RAxML Phylogenetics Code



Presentation Transcript


  1. Hybrid MPI/Pthreads Parallelization of the RAxML Phylogenetics Code
  Wayne Pfeiffer (SDSC/UCSD) & Alexandros Stamatakis (TUM)
  December 3, 2009

  2. What was done? Why is it important? Who cares?
  • Hybrid MPI/Pthreads version of the RAxML phylogenetics code was developed
    • MPI code was added to the previous Pthreads-only code
    • This allows multiple multi-core nodes in a cluster to be used in a single run
    • Typical problems now run well on 10 nodes (80 cores) of Dash (a new cluster at SDSC), as compared to only one node (8 cores) before
  • Hybrid code is being made available on TeraGrid via the CIPRES portal
    • Work was done as part of an ASTA project supporting Mark Miller of SDSC, who oversees the portal
    • This will give biologists easy access to the code

  3. What does a phylogenetics code do?
  It starts from a multiple sequence alignment (matrix of taxa versus characters) . . .

  Homo_sapiens  AAGCTTCACCGGCGCAGTCATTCTCATAAT...
  Pan           AAGCTTCACCGGCGCAATTATCCTCATAAT...
  Gorilla       AAGCTTCACCGGCGCAGTTGTTCTTATAAT...
  Pongo         AAGCTTCACCGGCGCAACCACCCTCATGAT...
  Hylobates     AAGCTTTACAGGTGCAACCGTCCTCATAAT...

  . . . & generates a phylogeny (usually a tree of taxa)

  /-------------------------- Homo_sapiens
  |
  |-------------------------- Pan
  +
  |        /----------------- Gorilla
  |        |
  \---96---+         /-------- Pongo
           \---100--+
                     \-------- Hylobates

  4. A little more about molecular phylogenetics
  • Multiple sequence alignment
    • Specified by DNA bases, RNA bases, or amino acids
    • Obtained by separate analysis of multiple sequences
  • Possible changes to a sequence
    • Substitution (point mutation or SNP; considered in RAxML)
    • Insertion & deletion (also handled by RAxML)
    • Structural variations (e.g., duplication, inversion, & translocation)
    • Recombination & horizontal gene transfer (important in bacteria)
  • Common methods, typically heuristic & based on models of molecular evolution
    • Distance
    • Parsimony
    • Maximum likelihood (used by RAxML)
    • Bayesian

  5. RAxML is a maximum likelihood (ML) code with parallelism at various algorithmic levels
  • Heuristic searching within a tree can exploit fine-grained parallelism using Pthreads
    • A variant of subtree pruning and regrafting (SPR) called lazy subtree rearrangement (LSR)
  • Three types of searches across trees can exploit coarse-grained parallelism using MPI
    • Multiple ML searches on the same data set, starting from different initial trees, to explore the solution space better
    • Multiple bootstrap searches on resampled data sets to obtain confidence values on interior branches of the tree
    • Comprehensive analysis that combines the two previous analyses
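  The coarse-grained MPI level works because each search (a distinct starting tree or a distinct bootstrap resample) is independent until the final comparison. A minimal sketch of one way to deal such searches out to ranks, in Python rather than RAxML's C, with the function name being my own illustration, not RAxML's:

```python
# Illustrative sketch (not RAxML source): independent bootstrap/ML searches
# dealt out round-robin to MPI ranks, so each rank can run its share with
# no communication until the searches finish.

def assign_searches(n_searches, n_ranks):
    """Return, for each rank, the list of search indices that rank owns."""
    return [list(range(rank, n_searches, n_ranks)) for rank in range(n_ranks)]

# 100 bootstrap searches split across 10 MPI processes:
work = assign_searches(100, 10)
assert all(len(w) == 10 for w in work)  # perfectly balanced load
assert work[0][:3] == [0, 10, 20]       # rank 0 owns searches 0, 10, 20, ...
```

  Because the searches never exchange data mid-run, this level scales almost linearly as long as the number of searches comfortably exceeds the number of ranks.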

  6. Comprehensive analysis combines many rapid bootstraps followed by a full ML search
  • Complete, publishable analysis in a single run
  • Four stages (with typical numbers of searches)
    • 100 rapid bootstrap searches
    • 20 fast ML searches
    • 10 slow ML searches
    • 1 thorough ML search
  • Coarse-grained parallelism via MPI
    • In first three stages, but decreasing with each stage
  • Fine-grained parallelism via Pthreads
    • Available at all stages
  • Tradeoff in effectiveness between MPI and Pthreads
    • Typically 10 or 20 MPI processes at most
    • Optimal number of Pthreads increases with the number of patterns, but is limited to the number of cores per node
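  The MPI-versus-Pthreads tradeoff amounts to choosing a layout: Pthreads cannot span nodes, so the thread count must divide the cores on a node, and the MPI process count follows. A small sketch of the enumeration (a hypothetical helper of mine, not part of RAxML):

```python
# Hypothetical helper (name is mine, not RAxML's): enumerate hybrid layouts
# for a cluster, given that Pthreads are confined to a single node.

def layouts(n_nodes, cores_per_node):
    """Yield (mpi_processes, pthreads_per_process) pairs that fill every core."""
    for t in range(1, cores_per_node + 1):
        if cores_per_node % t == 0:
            yield (n_nodes * cores_per_node // t, t)

# 10 Dash nodes with 8 cores each:
opts = list(layouts(10, 8))
assert (10, 8) in opts   # one MPI process per node, 8 Pthreads each
assert (20, 4) in opts   # two MPI processes per node, 4 Pthreads each
```

  The slides' observation that 4 Pthreads can beat 8 on a node reflects exactly this choice: the (20, 4) layout trades thread-level efficiency for more coarse-grained MPI parallelism.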

  7. Some noteworthy points regarding the MPI parallel implementation
  • A thorough search is done by every MPI process (instead of just a single search, as in the Pthreads-only code)
    • Increases run time only a little, because the load is reasonably balanced
    • Often leads to a better-quality solution
  • Only two MPI calls are made
    • MPI_Barrier after the bootstraps
    • MPI_Bcast after the thorough searches, to select the best one for output
  • Treatment of random numbers is reproducible
    • At least for a given number of MPI processes
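  The selection step after the thorough searches is simple in principle: each rank finishes with its own best tree and log-likelihood, the best overall is identified, and (in RAxML, via MPI_Bcast) shared with all ranks. A plain-Python stand-in for that logic, with no actual MPI:

```python
# Sketch of the post-search selection step (plain Python standing in for
# the MPI exchange): each rank reports (log_likelihood, tree); the tree
# with the highest log-likelihood wins and would then be broadcast.

def select_best(results):
    """results: {rank: (log_likelihood, tree)} -> (winning_rank, winning_tree)."""
    best_rank = max(results, key=lambda r: results[r][0])
    return best_rank, results[best_rank][1]

results = {0: (-4520.7, "tree_A"), 1: (-4498.2, "tree_B"), 2: (-4510.3, "tree_C")}
rank, tree = select_best(results)
assert (rank, tree) == (1, "tree_B")  # least negative log-likelihood wins
```

  Running a thorough search on every rank makes this comparison meaningful: more independent thorough searches give more candidates for the final maximum, which is why the hybrid code often finds a better solution than the Pthreads-only code.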

  8. 5 benchmark problems & 4 benchmark computers were used
  Benchmark problems: (table not preserved in this transcript)
  Benchmark computers (all with quad-core x64 processors):
  • Abe at NCSA: 8-way nodes with 2.33-GHz Intel Clovertowns
  • Dash at SDSC: 8-way nodes with 2.4-GHz Intel Nehalems
  • Ranger at TACC: 16-way nodes with 2.3-GHz AMD Barcelonas
  • Triton PDAF at SDSC: 32-way nodes with 2.5-GHz AMD Shanghais

  9. For problem with 1.8k patterns, RAxML achieves speedup of 35 on 80 cores of Dash using 10 MPI processes & 8 Pthreads

  10. Parallel efficiency plot of same data clarifies performance differences; on ≥ 8 cores, 4 or 8 Pthreads is optimal; on 8 cores, 4 Pthreads is 1.3x faster than 8 Pthreads
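  The efficiency plot uses the standard definitions: speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A quick sketch checking that the figures quoted elsewhere in the deck are mutually consistent:

```python
# Standard definitions behind the speedup and efficiency plots.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cores):
    return speedup(t_serial, t_parallel) / n_cores

# A speedup of 35 on 80 cores corresponds to an efficiency of 35/80 ~ 0.44,
# matching the ">= 0.44 on 80 cores" figure quoted later in the deck.
assert abs(efficiency(35.0, 1.0, 80) - 0.4375) < 1e-9
```

  Efficiency makes the thread-count comparison visible at a glance: two curves with the same core count but different Pthread counts differ exactly by their run-time ratio, such as the 1.3x gap between 4 and 8 Pthreads on 8 cores.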

  11. First three stages scale well with MPI, but last (thorough search) stage does not; that limits scalability

  12. For problem with 19k patterns, performance on Dash is best using all 8 Pthreads; scaling is poorer than for problem with 1.8k patterns

  13. For problem with 19k patterns, scaling is better on Triton PDAF than on Dash when using all 32 available Pthreads

  14. For problem with 19k patterns, Triton PDAF is faster than other computers at high core counts

  15. With more bootstraps (per bootstopping): speedup is better & parallel efficiency is ≥ 0.44 on 80 cores; optimal number of threads drops; & run time for longest problem goes from > 4 d to < 1.8 h

  16. The hybrid code provides three notable benefits for comprehensive analyses
  • Multiple computer nodes can be used in a single run
    • For problem with 1.8k patterns, speedup on 10 nodes (80 cores) of Dash is 6.5 compared to one node (using 8 Pthreads in both cases)
    • Speedup is 35 on 80 cores compared to the serial case
  • Number of Pthreads per node can be adjusted for optimal efficiency
    • For the same problem, using 2 MPI processes & 4 Pthreads on a single node of Dash is 1.3x faster than using 8 Pthreads alone
  • Additional thorough searches often lead to a better solution

  17. Some final points
  • Newest computers are fastest
    • Dash is fastest for 4 out of 5 problems considered
    • Triton PDAF is fastest for problem with 19k patterns because it allows 32 Pthreads
  • Hybrid code is available in RAxML 7.2.4 and later
    • Available now from the RAxML Web site (http://wwwkramer.in.tum.de/exelixis/software.html)
    • Available soon on TeraGrid via the CIPRES portal (http://www.phylo.org/portal2)
