Pegasus and Condor
Gaurang Mehta, Ewa Deelman, Carl Kesselman, Karan Vahi
Center for Grid Technologies, USC/ISI
PEGASUS
• Pegasus – Planning for Execution in Grids.
• Pegasus is a configurable system that can plan, schedule, and execute complex workflows on the Grid.
• Both algorithmic and AI-based techniques are used.
• Pegasus takes an abstract workflow as input. The abstract workflow describes the transformations and data in terms of their logical names.
• It then queries the Replica Location Service (RLS) for the existence of any materialized data. If any derived data already exists, it is reused and the workflow is reduced accordingly (sketched after the figure below).
[Figure: Workflow Reduction. The original abstract workflow has execution nodes E1 (f.a1 → f.b), E2 (f.a2 → f.c), and E3 (f.b, f.c → f.d), plus transfer and registration nodes. Because f.b and f.c already exist in the RLS, E1 and E2 are pruned and the reduced workflow contains only E3.]
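Below is a minimal sketch of the reduction step, assuming a hypothetical job representation (logical input/output file names per job, listed in topological order). The `Job` class and `reduce_workflow` function are illustrative, not the actual Pegasus code.

```python
# Hypothetical sketch of workflow reduction: skip any job whose outputs
# already exist in the RLS, so the materialized data is reused.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    inputs: set = field(default_factory=set)   # logical file names consumed
    outputs: set = field(default_factory=set)  # logical file names produced

def reduce_workflow(jobs, rls_lfns):
    """Return the jobs that still need to run, given LFNs found in the RLS."""
    needed, available = [], set(rls_lfns)
    for job in jobs:  # jobs assumed to be in topological order
        if job.outputs and job.outputs <= available:
            continue  # all derived data exists; reuse it
        needed.append(job)
        available |= job.outputs  # outputs will be produced at run time
    return needed

# The figure's example: f.b and f.c exist in the RLS,
# so E1 and E2 are pruned and only E3 remains.
e1 = Job("E1", {"f.a1"}, {"f.b"})
e2 = Job("E2", {"f.a2"}, {"f.c"})
e3 = Job("E3", {"f.b", "f.c"}, {"f.d"})
print([j.name for j in reduce_workflow([e1, e2, e3], {"f.b", "f.c"})])  # ['E3']
```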
Pegasus (Cont)
• It then locates physical locations for both components (transformations and data).
• Uses the Globus Replica Location Service (RLS) and the Transformation Catalog (TC).
• Finds appropriate resources to execute on.
• Via the Globus Monitoring and Discovery Service (MDS).
• Adds stage-in jobs to transfer raw and materialized input files to the computation sites.
• Adds stage-out jobs to transfer derived data to the user-selected storage location.
• Both input and output staging are done via Globus GridFTP.
• Publishes newly derived data products for reuse.
• In the RLS and the Chimera Virtual Data Catalog (VDC).
(This modification step is sketched after the figure below.)
[Figure: Workflow Modification. The reduced workflow (E3 only) gains transfer nodes T1 and T2 to stage in f.b and f.c, transfer node T3 to stage out f.d, and registration node R1, yielding the final DAG.]
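A sketch of that modification step, in the same hypothetical representation: each surviving job is wrapped with stage-in, stage-out, and registration nodes. All function and node names are invented for illustration.

```python
# Hypothetical sketch of workflow modification: add stage-in nodes for
# inputs not produced within the workflow, stage-out nodes for outputs,
# and registration nodes that publish the outputs (e.g. into the RLS).

def modify_workflow(jobs):
    """jobs: list of (name, inputs, outputs) in topological order."""
    edges, produced = [], set()
    for name, inputs, outputs in jobs:
        for lfn in sorted(set(inputs) - produced):
            edges.append((f"stage_in_{lfn}", name))      # e.g. via GridFTP
        for lfn in sorted(outputs):
            edges.append((name, f"stage_out_{lfn}"))
            edges.append((f"stage_out_{lfn}", f"register_{lfn}"))
        produced |= set(outputs)
    return edges

# The figure's example: E3 gains stage-ins for f.b and f.c (T1, T2),
# a stage-out for f.d (T3), and a registration node (R1).
print(modify_workflow([("E3", {"f.b", "f.c"}, {"f.d"})]))
```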
Pegasus (Cont)
• Pegasus generates the concrete workflow in Condor DAGMan format and submits it to DAGMan/Condor-G for execution on the Grid.
• These concrete DAGs contain the concrete locations of the data and the sites where the computations are to be performed.
• Condor-G submits these jobs via Globus GRAM to remote schedulers running Condor, PBS, LSF, and Sun Grid Engine.
• Pegasus is part of a software package distributed by GriPhyN called the Virtual Data System (VDS).
• VDS 1.2.3 (Pegasus + Chimera) is currently included in the Virtual Data Toolkit (VDT) 1.1.13.
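For concreteness, here is a sketch of the kind of DAG file and Condor-G submit description handed to DAGMan. The DAG and submit syntax is standard Condor of this era, but the hostnames, paths, and job names are invented.

```python
# Sketch of the artifacts handed to DAGMan/Condor-G. The file syntax is
# real DAGMan/Condor-G syntax; hostnames, paths, and names are invented.

dag = """\
JOB  stage_in_fb   stage_in_fb.sub
JOB  stage_in_fc   stage_in_fc.sub
JOB  E3            E3.sub
JOB  stage_out_fd  stage_out_fd.sub
JOB  register_fd   register_fd.sub
PARENT stage_in_fb stage_in_fc CHILD E3
PARENT E3 CHILD stage_out_fd
PARENT stage_out_fd CHILD register_fd
"""

# Condor-G submit description for E3: the globus universe hands the job
# to Globus GRAM on the chosen site's jobmanager (here, PBS).
e3_sub = """\
universe        = globus
globusscheduler = compute.example.edu/jobmanager-pbs
executable      = /opt/vds/bin/transformation_E3
arguments       = -i f.b -i f.c -o f.d
output          = E3.out
error           = E3.err
log             = E3.log
queue
"""

with open("final.dag", "w") as f:
    f.write(dag)
with open("E3.sub", "w") as f:
    f.write(e3_sub)
# Submit with: condor_submit_dag final.dag
```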
Workflow Construction
[Figure: VDL is fed to Chimera, which stores it (VDLX, VDC) and produces a DAX; a user-supplied DAX may also be given directly. Pegasus consumes the DAX, consulting the TC, RLS, and MDS, and emits DAG/submit files for DAGMan/Condor-G, which execute the jobs on the Grid.]
Current System
[Figure: diagram of the current system.]
Deferred Planning in Pegasus
• The current Pegasus implementation plans the entire workflow before submitting it for execution (full-ahead planning).
• Grids are very dynamic, and resources come and go frequently.
• We are currently adding support for deferred planning, wherein only a part of the workflow is planned and executed at a time:
• Chop the abstract workflow into partitions.
• Plan one partition and submit it to DAGMan/Condor-G.
• The last job in the partition calls Pegasus again, which plans the next partition, and so on.
• Initial partitions will be level-based, derived from a breadth-first search of the workflow (sketched below).
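A minimal sketch of that level-based partitioning, assuming the abstract workflow is given as a list of parent→child edges; the function name and representation are illustrative.

```python
# Hypothetical sketch of level-based partitioning: assign each node its
# depth in the DAG (longest distance from a root), so that each level can
# be planned and submitted as one partition.
from collections import defaultdict, deque

def level_partitions(nodes, edges):
    """nodes: list of names; edges: list of (parent, child) pairs."""
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for p, c in edges:
        children[p].append(c)
        indegree[c] += 1
    level = {n: 0 for n in nodes}
    queue = deque(n for n in nodes if indegree[n] == 0)  # roots: level 0
    while queue:  # Kahn-style topological sweep
        n = queue.popleft()
        for c in children[n]:
            level[c] = max(level[c], level[n] + 1)
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    partitions = defaultdict(list)
    for n, lvl in level.items():
        partitions[lvl].append(n)
    return dict(partitions)

# A diamond-shaped workflow: A feeds B and C, which both feed D.
print(level_partitions(["A", "B", "C", "D"],
                       [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# {0: ['A'], 1: ['B', 'C'], 2: ['D']}
```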
Incremental Refinement
• Partition the abstract workflow into partial workflows.
Meta-DAGMan
[Figure: a meta-DAG over the partitioned workflow.]
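One way to picture this is as a meta-DAG whose nodes alternately plan and run each partition. The DAG below uses real DAGMan syntax, but the job names, submit files, and plan/run structure are assumptions based on the description above, not the actual VDS output.

```python
# Hypothetical meta-DAG over two partitions: each PLAN_i job would invoke
# Pegasus on partition i, and each RUN_i job would execute the resulting
# concrete DAG (e.g. by running condor_dagman on it). Names are invented.

meta_dag = """\
JOB  PLAN_0  plan_partition0.sub
JOB  RUN_0   run_partition0.sub
JOB  PLAN_1  plan_partition1.sub
JOB  RUN_1   run_partition1.sub
PARENT PLAN_0 CHILD RUN_0
PARENT RUN_0  CHILD PLAN_1
PARENT PLAN_1 CHILD RUN_1
"""

with open("meta.dag", "w") as f:
    f.write(meta_dag)
# Submit with: condor_submit_dag meta.dag
```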
Current Condor Technologies Used
• DAGMan to manage the dependencies in the acyclic workflow.
• Provides support for resuming a failed workflow using the rescue DAG generated by DAGMan (sketched below).
• Condor-G to submit jobs to the Grid (globus-jobmanager).
• Jobs are submitted using Globus GRAM, and stdout/stdin/stderr are streamed back using Globus GASS.
• Condor as a scheduler to harness idle CPU cycles on existing desktops.
• ISI has a small 36-node Condor pool consisting primarily of Linux and Solaris machines.
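A short sketch of the rescue-DAG resume cycle, driven from Python. The `.rescue` file suffix shown is an assumption; naming conventions differ across Condor/DAGMan versions.

```python
# Sketch: submit a DAG; if some nodes fail, DAGMan writes out a rescue DAG
# recording which nodes completed, so a resubmit redoes only the rest.
import os
import subprocess

def submit(dag_file):
    subprocess.run(["condor_submit_dag", dag_file], check=True)

submit("final.dag")
# ... later, after a partial failure ...
rescue = "final.dag.rescue"  # assumed name; check your DAGMan version
if os.path.exists(rescue):
    submit(rescue)
```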
Future Condor Technologies to be Integrated
• NeST
• We are looking at integrating support for NeST, which allows disk-space reservation on remote sites.
• Stork (data placement scheduler)
• Allows support for multiple transfer protocols (ftp, http, nest/chirp, gsiftp, srb, file).
• Reliably transfers your files across the Grid.
Applications Using Pegasus and Condor DAGMan
• GriPhyN experiments
• Laser Interferometer Gravitational-Wave Observatory (Caltech/UWM)
• ATLAS (U of Chicago)
• SDSS (Fermilab)
• Also iVDGL/Grid3
• National Virtual Observatory and NASA
• Montage
• Biology
• BLAST (ANL, PDQ-funded)
• Neuroscience
• Tomography for Telescience (SDSC, NIH-funded)
A Small Montage Workflow
[Figure: a Montage workflow of 1202 nodes.]
Pegasus Acknowledgements
• Ewa Deelman, Carl Kesselman, Gaurang Mehta, Karan Vahi, Mei-Hui Su, Saurabh Khurana, Sonal Patil, Gurmeet Singh (Center for Grid Technologies, ISI)
• James Blythe, Yolanda Gil (Intelligent Systems Division, ISI)
• Collaboration with Miron Livny and the Condor team (UW Madison)
• Collaboration with Mike Wilde and Jens Voeckler (U of Chicago) – Chimera
• Research funded as part of the NSF GriPhyN, NVO, and SCEC projects, and the EU-funded GridLab.
• For more information:
• http://pegasus.isi.edu
• http://www.griphyn.edu/workspace/vds
• Contacts: deelman, gmehta, vahi @isi.edu