CONDOR DAGMan and Pegasus

Learn about DAGMan (Directed Acyclic Graph Manager) to efficiently manage Condor jobs by defining dependencies, running, recovering from failures, and leveraging additional features. Detailed instructions will guide you through submitting, running, and recovering a DAG in a fault-tolerant manner.



  1. CONDOR DAGMan and Pegasus Selim Kalayci, Florida International University, 07/28/2009. Note: slides are compiled from various TeraGrid documentation.

  2. DAGMan Directed Acyclic Graph Manager. DAGMan allows you to specify the dependencies between your Condor jobs, so it can manage them automatically for you (e.g., “Don’t run job B until job A has completed successfully”).

  3. What is a DAG? A DAG is the data structure used by DAGMan to represent these dependencies. Each job is a “node” in the DAG. Each node can have any number of “parent” or “child” nodes, as long as there are no loops! [Diagram: a diamond-shaped DAG with Job A at the top, Jobs B and C in the middle, and Job D at the bottom]
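The “no loops” requirement is exactly the condition that a topological order of the jobs exists, which is what lets DAGMan pick a valid run order. A minimal sketch in plain Python (`topological_order` is a hypothetical helper, not part of Condor):

```python
# Sketch of the acyclicity check implied by DAGMan's model: a set of jobs
# with parent -> child edges is a valid DAG only if a topological order exists.
from collections import deque

def topological_order(edges):
    """Return a topological order of the nodes, or None if there is a cycle."""
    children = {}
    indegree = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)
        indegree[child] = indegree.get(child, 0) + 1
        indegree.setdefault(parent, 0)
    # Start from nodes with no parents (in-degree zero).
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for c in children.get(node, []):
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    # If some nodes were never freed, a cycle kept them waiting forever.
    return order if len(order) == len(indegree) else None

# The diamond DAG from these slides: A -> {B, C} -> D
diamond = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(topological_order(diamond))                    # ['A', 'B', 'C', 'D']
print(topological_order([("A", "B"), ("B", "A")]))   # None: a loop, not a DAG
```

This mirrors why a looped graph cannot be scheduled: no job in the cycle ever becomes ready.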

  4. Defining a DAG A DAG is defined by a .dag file listing each of its nodes and their dependencies; each node will run the Condor job specified by its accompanying Condor submit file:

     # diamond.dag
     Job A a.sub
     Job B b.sub
     Job C c.sub
     Job D d.sub
     Parent A Child B C
     Parent B C Child D

     [Diagram: the diamond DAG: A → B, A → C, B → D, C → D]
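Each x.sub named in the .dag file is an ordinary Condor submit file. A minimal sketch of what a.sub might contain (the executable and file names are illustrative, not from the slides):

```
# a.sub -- hypothetical submit description for node A
universe   = vanilla
executable = a.sh
output     = a.out
error      = a.err
log        = diamond.log
queue
```

When jobs are routed through Condor-G, as in the later slides, the submit file would use the grid universe and name a remote resource instead.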

  5. Submitting a DAG To start your DAG, just run condor_submit_dag with your .dag file, and Condor will start a personal DAGMan daemon to begin running your jobs: % condor_submit_dag diamond.dag. condor_submit_dag submits a Scheduler Universe job with DAGMan as the executable. Thus the DAGMan daemon itself runs as a Condor job, so you don’t have to baby-sit it.

  6. Running a DAG DAGMan acts as a “meta-scheduler”, managing the submission of your jobs to Condor-G based on the DAG dependencies. [Diagram: DAGMan reads the .dag file (nodes A, B, C, D) and submits job A to the Condor-G job queue]

  7. Running a DAG (cont’d) DAGMan holds & submits jobs to the Condor-G queue at the appropriate times. [Diagram: A has completed; B and C are now in the Condor-G job queue; D is still held by DAGMan]

  8. Running a DAG (cont’d) In case of a job failure, DAGMan continues until it can no longer make progress, and then creates a “rescue” file with the current state of the DAG. [Diagram: job C has failed; DAGMan writes a rescue file; D cannot be submitted]

  9. Recovering a DAG -- fault tolerance Once the failed job is ready to be re-run, the rescue file can be used to restore the prior state of the DAG. [Diagram: DAGMan restores state from the rescue file and resubmits C to the Condor-G job queue]
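Resubmission after a failure is straightforward. A hedged sketch of the typical session (DAGMan names rescue files after the original DAG, e.g. diamond.dag.rescue001; exact behavior varies by Condor version):

```
# After fixing whatever made node C fail:
% condor_submit_dag diamond.dag
# Recent DAGMan versions notice diamond.dag.rescue001 automatically and
# resume from it: nodes already completed (A, B) are marked DONE and are
# not re-run; only C and its descendants execute.
```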

  10. Recovering a DAG (cont’d) Once that job completes, DAGMan will continue the DAG as if the failure never happened. [Diagram: C has completed; DAGMan submits D to the Condor-G job queue]

  11. Finishing a DAG Once the DAG is complete, the DAGMan job itself is finished, and exits. [Diagram: all nodes A, B, C, D have completed; the Condor-G job queue is empty]

  12. Additional DAGMan Features DAGMan provides other handy features for job management: nodes can have PRE & POST scripts; failed nodes can be automatically retried a configurable number of times; job submission can be “throttled”.
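These features are expressed as simple directives in the .dag file, plus a submit-time flag for throttling. A sketch (the script names are illustrative):

```
# diamond.dag with extra DAGMan features
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D

SCRIPT PRE  A before_a.sh     # runs on the submit host before node A is submitted
SCRIPT POST D after_d.sh      # runs after node D completes
RETRY C 3                     # resubmit C up to 3 times if it fails
```

Throttling is then requested at submit time, e.g. % condor_submit_dag -maxjobs 2 diamond.dag limits the number of node jobs in the queue at once.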

  13. HANDS-ON • http://users.cs.fiu.edu/~skala001/DAGMan_Lab.htm

  14. Ewa Deelman, deelman@isi.edu www.isi.edu/~deelman pegasus.isi.edu

  16. Pegasus: Planning for Execution in Grids
     • Abstract workflows (Pegasus input): a workflow description in a “high-level language”; only identifies the computations that a user wants to do; devoid of resource descriptions; devoid of data locations
     • Pegasus (http://pegasus.isi.edu): a workflow “compiler”; target language is DAGMan’s DAG and Condor submit files; transforms the workflow for performance and reliability; automatically locates physical locations for both workflow components and data; finds appropriate resources to execute the components; provides runtime provenance
     • DAGMan: a workflow executor; scalable and reliable execution of an executable workflow
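The abstract workflow input to Pegasus is commonly written as a DAX, an XML description of jobs and their data dependencies with no resource or location information. A minimal sketch with illustrative job and file names (the exact schema and version string vary by Pegasus release):

```xml
<adag xmlns="http://pegasus.isi.edu/schema/DAX" version="3.4" name="diamond">
  <!-- Computation only: no hosts, no physical file locations -->
  <job id="ID0000001" name="preprocess">
    <argument>-i <file name="f.a"/> -o <file name="f.b"/></argument>
    <uses name="f.a" link="input"/>
    <uses name="f.b" link="output"/>
  </job>
  <job id="ID0000002" name="analyze">
    <argument>-i <file name="f.b"/> -o <file name="f.c"/></argument>
    <uses name="f.b" link="input"/>
    <uses name="f.c" link="output"/>
  </job>
  <!-- Dependency: analyze runs after preprocess -->
  <child ref="ID0000002"><parent ref="ID0000001"/></child>
</adag>
```

Pegasus “compiles” this into a concrete DAGMan DAG plus Condor submit files bound to real resources.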

  17. Pegasus Workflow Management System A reliable, scalable workflow management system that an application or workflow composition service can depend on to get the job done; a client tool with no special requirements on the infrastructure.
     • Abstract Workflow: the input description
     • Pegasus mapper: a decision system that develops strategies for reliable and efficient execution in a variety of environments
     • DAGMan: reliable and scalable execution of dependent tasks
     • Condor Schedd: reliable, scalable execution of independent tasks (locally, across the network); priorities; scheduling
     • Cyberinfrastructure: local machine, cluster, Condor pool, OSG, TeraGrid

  18. Generating a Concrete Workflow
     • Information: location of files and component instances; state of the Grid resources
     • Select specific resources and files
     • Add jobs required to form a concrete workflow that can be executed in the Grid environment: data movement; data registration
     • Each component in the abstract workflow is turned into an executable job
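After mapping, the concrete workflow handed to DAGMan looks like an ordinary DAG in which the abstract tasks are surrounded by the added data-movement and registration nodes. A hypothetical sketch (Pegasus generates the real file; all names here are illustrative):

```
# concrete.dag (sketch of Pegasus output for one abstract task)
Job stage_in_f_a  stage_in.sub    # data movement: fetch input f.a to the chosen site
Job preprocess    preprocess.sub  # the original abstract task, now bound to a resource
Job stage_out_f_b stage_out.sub   # data movement: ship output f.b to storage
Job register_f_b  register.sub    # data registration: record f.b in the RLS
Parent stage_in_f_a Child preprocess
Parent preprocess Child stage_out_f_b
Parent stage_out_f_b Child register_f_b
```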

  19. Information Components used by Pegasus
     • Globus Monitoring and Discovery Service (MDS): locates available resources; finds resource properties (dynamic: load, queue length; static: location of GridFTP server, RLS, etc.)
     • Globus Replica Location Service (RLS): locates data that may be replicated; registers new data products
     • Transformation Catalog: locates installed executables

  20. Example Workflow Reduction
     • Original abstract workflow
     • If “b” already exists (as determined by a query to the RLS), the workflow can be reduced: the job that produces “b” need not run

  21. Mapping from abstract to concrete • Query RLS, MDS, and TC; schedule computation and data movement

  22. Pegasus Research
     • resource discovery and assessment
     • resource selection
     • resource provisioning
     • workflow restructuring: tasks merged together or reordered to improve overall performance
     • adaptive computing: workflow refinement adapts to the changing execution environment

  23. Benefits of the workflow & Pegasus approach
     • The workflow exposes the structure of the application and its maximum parallelism
     • Pegasus can take advantage of the structure to set a planning horizon (how far into the workflow to plan) and to cluster a set of workflow nodes to be executed as one (for performance)
     • Pegasus shields the user from Grid details

  24. Benefits of the workflow & Pegasus approach
     • Pegasus can run the workflow on a variety of resources
     • Pegasus can run a single workflow across multiple resources
     • Pegasus can opportunistically take advantage of available resources (through dynamic workflow mapping)
     • Pegasus can take advantage of pre-existing intermediate data products
     • Pegasus can improve the performance of the application
