
Pegasus: A Framework for Workflow Planning on the Grid

Pegasus is a powerful framework that maps abstract workflows onto the Grid, with APIs for information gathering, resource selection, and data transfer mechanisms. It supports various workflow executors and offers advanced features like deferred planning, optimization, and partitioning. Explore its components and future potential in diverse applications.


Presentation Transcript


1. Pegasus: A Framework for Workflow Planning on the Grid
Ewa Deelman, USC Information Sciences Institute
Pegasus acknowledgments: Carl Kesselman, Gaurang Mehta, Mei-Hui Su, Gurmeet Singh, Karan Vahi

2. Pegasus
• Flexible framework that maps abstract workflows onto the Grid
• Possesses well-defined APIs and clients (sketched below) for:
  • Information gathering
    • Resource information
    • Replica query mechanism
    • Transformation catalog query mechanism
  • Resource selection
    • Compute site selection
    • Replica selection
  • Data transfer mechanism
• Can support a variety of workflow executors
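A minimal sketch of what such pluggable interfaces might look like, in Java. All class, interface, and method names here are illustrative assumptions, not the actual Pegasus API:

    // Illustrative sketch of Pegasus-style pluggable interfaces.
    // All names are hypothetical, not the actual Pegasus API.
    import java.util.List;

    class Job { String id; }
    class ReplicaLocation { String site; String url; }
    class TransformationEntry { String site; String path; }

    interface SiteSelector {
        // Resource selection: pick a compute site for a job.
        String selectSite(Job job, List<String> candidateSites);
    }

    interface ReplicaSelector {
        // Replica selection: pick one physical copy of a logical file.
        ReplicaLocation selectReplica(String logicalFileName,
                                      List<ReplicaLocation> replicas);
    }

    interface TransformationCatalog {
        // Information gathering: where is this executable installed?
        List<TransformationEntry> lookup(String name, String version);
    }

Keeping each concern behind its own interface is what lets alternative site-selection or replica-selection strategies be swapped in without touching the rest of the planner.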

3. Pegasus
• May reduce the workflow based on available data products
• Augments the workflow with data stage-in and data stage-out nodes
• Augments the workflow with data registration nodes
[Figure: abstract workflow of jobs a-i shown before and after refinement. Key: original node, pull transfer node, push transfer node, registration node, inter-pool transfer node]
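A minimal sketch of the reduction step, assuming a hypothetical Job type and a replica catalog exposed as a set of logical file names; in the real algorithm the pruning also propagates transitively to ancestor jobs whose outputs are no longer needed:

    // Sketch of workflow reduction: drop jobs whose outputs are
    // already registered in the replica catalog. Types and names
    // are hypothetical.
    import java.util.*;

    class WorkflowReducer {
        static class Job {
            String id;
            Set<String> outputs = new HashSet<>();
        }

        static List<Job> reduce(List<Job> workflow, Set<String> catalog) {
            List<Job> kept = new ArrayList<>();
            for (Job j : workflow) {
                // Keep the job if it has no recorded outputs, or if
                // at least one output still has to be produced.
                if (j.outputs.isEmpty() || !catalog.containsAll(j.outputs)) {
                    kept.add(j);
                }
            }
            return kept;
        }
    }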

4. Pegasus Components

5. Original Pegasus Configuration
Simple scheduling: random or round-robin, using well-defined scheduling interfaces.
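For illustration, a round-robin selector behind such a scheduling interface might look like this (class and method names are hypothetical, not the actual plug-in API):

    // Sketch of a round-robin site selector plugged in behind the
    // scheduling interface. Names are illustrative.
    import java.util.List;

    class RoundRobinSiteSelector {
        private int next = 0;

        // Hand out candidate sites in rotation, one job at a time.
        synchronized String selectSite(List<String> sites) {
            String site = sites.get(next % sites.size());
            next = (next + 1) % sites.size();
            return site;
        }
    }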

6. Deferred Planning through Partitioning
A variety of planning algorithms can be implemented.
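One such algorithm, sketched here under assumed types, is simple level-based partitioning: nodes are grouped by their depth in the DAG, and each level is planned only once the previous level has finished:

    // Sketch of level-based partitioning for deferred planning.
    // Types and names are hypothetical.
    import java.util.*;

    class LevelPartitioner {
        static class Node {
            String id;
            int level; // length of the longest path from a root node
        }

        // Each map entry (level -> nodes) becomes one partition,
        // planned just before it is ready to run.
        static Map<Integer, List<Node>> partition(List<Node> dag) {
            Map<Integer, List<Node>> partitions = new TreeMap<>();
            for (Node n : dag) {
                partitions.computeIfAbsent(n.level,
                        k -> new ArrayList<>()).add(n);
            }
            return partitions;
        }
    }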

7. The Mega DAG is created by Pegasus and then submitted to DAGMan.
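As a rough illustration (file names are hypothetical), the Mega DAG could be expressed as a Condor DAGMan input file in which each node invokes the planner on one partition and then runs the resulting concrete DAG:

    # Illustrative Mega DAG in Condor DAGMan syntax. Each node's
    # submit file is assumed to invoke Pegasus on one partition and
    # then execute the concrete DAG it produces.
    JOB  PART1  plan_partition1.sub
    JOB  PART2  plan_partition2.sub
    JOB  PART3  plan_partition3.sub
    PARENT PART1 CHILD PART2
    PARENT PART2 CHILD PART3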

8. Re-planning capabilities

9. Complex Replanning for Free (almost)

10. Optimizations
• If the workflow being refined by Pegasus consists of only one node:
  • Create a Condor submit node rather than a DAGMan node
• This optimization can leverage Euryale's super-node writing component
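For example, a one-job workflow could go straight to Condor with a plain submit file rather than a one-node DAG; the executable path and arguments below are placeholders:

    # Illustrative Condor submit file for a single-job workflow,
    # avoiding the overhead of a full DAGMan run. Paths and
    # arguments are hypothetical.
    universe   = vanilla
    executable = /usr/local/bin/my_transformation
    arguments  = input.dat output.dat
    output     = job.out
    error      = job.err
    log        = job.log
    queue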

11. Planning & Scheduling Granularity
• Partitioning
  • Allows the granularity of ahead-of-time planning to be set
• Node aggregation (sketched below)
  • Allows nodes in the workflow to be combined and scheduled as one unit (minimizes the scheduling overheads)
• May reduce the overheads of making scheduling and planning decisions
• Related but separate concepts:
  • Small jobs favor a high level of node aggregation and large partitions
  • A very dynamic system favors small partitions
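A minimal sketch of node aggregation under assumed types: jobs bound for the same site are bundled into clusters, each of which is scheduled and submitted as a single unit:

    // Sketch of node aggregation: bundle jobs headed to the same
    // site into clusters of at most bundleSize, so each cluster is
    // submitted as one unit. Types and names are hypothetical.
    import java.util.*;

    class NodeAggregator {
        static class Job { String id; String site; }

        static List<List<Job>> aggregate(List<Job> jobs, int bundleSize) {
            // Group jobs by their assigned execution site.
            Map<String, List<Job>> bySite = new HashMap<>();
            for (Job j : jobs) {
                bySite.computeIfAbsent(j.site,
                        k -> new ArrayList<>()).add(j);
            }
            // Cut each site's job list into fixed-size clusters.
            List<List<Job>> clusters = new ArrayList<>();
            for (List<Job> siteJobs : bySite.values()) {
                for (int i = 0; i < siteJobs.size(); i += bundleSize) {
                    clusters.add(new ArrayList<>(siteJobs.subList(i,
                            Math.min(i + bundleSize, siteJobs.size()))));
                }
            }
            return clusters;
        }
    }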

12. Montage
• Montage (NASA and NVO)
  • Delivers science-grade custom mosaics on demand
  • Produces mosaics from a wide range of data sources (possibly in different spectra)
  • User-specified parameters of projection, coordinates, size, rotation, and spatial sampling
• Bruce Berriman, John Good, Anastasia Laity, Caltech/IPAC; Joseph C. Jacob, Daniel S. Katz, JPL
• Running at scale: 6- and 10-degree DAGs (for the M16 cluster)
  • The 6-degree runs had about 13,000 compute jobs; the 10-degree run had about 40,000 compute jobs
[Figure: mosaic created by Pegasus-based Montage from a run of M101 galaxy images on the TeraGrid]

13. Montage Workflow
[Figure: concrete Montage workflow showing data stage-in nodes, Montage compute nodes, data stage-out nodes, and inter-pool transfer nodes]

14. Future Work
• Staging in executables on demand
• Expanding the scheduling plug-ins
• Investigating various partitioning approaches
• Investigating reliability across partitions

15. Non-GriPhyN Applications Using Pegasus
Galaxy Morphology (National Virtual Observatory)
• Investigates the dynamical state of galaxy clusters
• Explores galaxy evolution in the context of large-scale structure
• Uses galaxy morphologies as a probe of the star-formation and stellar-distribution history of the galaxies in the clusters
• Data-intensive computations involving hundreds of galaxies in a cluster
[Figure: X-ray emission is shown in blue, optical emission in red. The colored dots mark the positions of the galaxies within the cluster; dot color encodes the asymmetry index. Blue dots, the most asymmetric galaxies, are scattered throughout the image, while orange dots, the most symmetric (indicative of elliptical galaxies), are concentrated toward the center.]

16. BLAST: a set of sequence-comparison algorithms used to search sequence databases for optimal local alignments to a query
Two major runs were performed using Chimera and Pegasus:
1) 60 genomes (4,000 sequences each), selected from DOE-sponsored sequencing projects, processed in 24 hours:
  • 67 CPU-days of processing time delivered
  • ~10,000 Grid jobs
  • >200,000 BLAST executions
  • 50 GB of data generated
2) 450 genomes processed
Speedups of 5-20x were achieved because the compute nodes were used efficiently, by keeping the submission of jobs to the compute cluster constant.
Led by Veronika Nefedova (ANL) as part of the PACI Data Quest Expedition program

17. Biology Applications (cont'd)
Tomography (NIH-funded project)
• Derivation of 3D structure from a series of 2D electron-microscopic projection images
• Reconstruction and detailed structural analysis of:
  • complex structures like synapses
  • large structures like dendritic spines
• Acquisition and generation of huge amounts of data
• Large amounts of state-of-the-art image processing required to segment structures from the extraneous background
[Figure: dendrite structure to be rendered by tomography]
Work performed by Mei-Hui Su with Mark Ellisman, Steve Peltier, Abel Lin, Thomas Molina (SDSC)

18. Southern California Earthquake Center
The SCEC/IT project, funded by the NSF, is developing a new framework for physics-based simulations for seismic hazard analysis, building on several information technology areas, including knowledge representation and reasoning, knowledge acquisition, grid computing, and digital libraries.
People involved: Vipin Gupta, Phil Maechling (USC)
