Farming with Condor Douglas Thain thain@cs.wisc.edu INFN Bologna, December 2001
Outline • Introduction • What is Condor? Why Condor on the Farm? • Components • Daemons, pools, flocks, ClassAds • Short Example • Executing 1000 jobs. • Complications • Firewalls, security, etc…
The Condor Project (Est. 1985) Distributed systems CS research performed by a team that faces: • software engineering challenges in a UNIX/Linux/NT environment, • active interaction with users and collaborators, • daily maintenance and support challenges of a distributed production environment, • and educating and training students. Funding - NSF, NASA, DoE, DoD, IBM, INTEL, Microsoft and the UW Graduate School
A Bird of Opportunity • Over the course of a week, 80% of a desktop machine's time is wasted. • (Diagram: busy and idle machines report "I am idle" to a Central Manager, which matches waiting jobs — "I have work" — to the idle machines.)
The Condor Principle: The owner is absolutely in charge! The Condor Corollary: The visitor must be prepared for the unexpected!
Tricky Details • What if the user returns? • Checkpoint the job periodically. • Restart the job elsewhere from a checkpoint. • What if the machine does not have your files? • Perform I/O via Remote System Calls • These two features require that you link with the Condor C library. • Can’t relink? You may still use Condor, but with some loss in opportunities.
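Relinking is normally a one-line change: the final link command is wrapped with condor_compile. A minimal sketch, assuming a C program sim.c (the file name is illustrative):
    % condor_compile gcc -o sim sim.c
A job linked this way gains checkpointing and remote system calls; an unlinked binary can still run under Condor, just without those two features.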
Checkpointing • (Diagram: a running job periodically writes a checkpoint; the job is later restarted from that checkpoint, possibly on another machine.)
Remote System Calls • (Diagram: the job's system calls are forwarded to a shadow process on the submit machine, which performs the I/O on the local disk. "Just like home!")
Top 10 Condor Pools • At the time of this talk: 226 Condor pools and 5576 Condor hosts worldwide. (Chart of the ten largest pools omitted.)
Back to the Farm • The cluster is the new engine of scientific computing. • Inexpensive to: • procure • expand • repair
The Ideal Cluster • The ideal cluster has every node identical, in every way: • CPU • Memory • File system • User accounts • Software installation • Users expect to be able to execute on any node. • Some models (MPI) require perfectly matched nodes.
The Bad News • Keeping the entire cluster available for use is very difficult when users expect complete symmetry! • Software failures: • Full disk, wild process, etc... • Hardware failures: • Replace with an exact match? (no longer the best buy) • Replace with better hardware? (the improvement goes unused) • Much better to query the state of the cluster than to assume it.
High Throughput Computing is a 24-7-365 activity. The measure that matters is floating point operations per year: FLOPY = (60 × 60 × 24 × 7 × 52) × FLOPS ≈ 31.4 million × FLOPS.
Why Condor on the Farm? • Condor is expert at managing very heterogeneous resources for high-throughput computing. • Large clusters, despite our best efforts, will always be slightly heterogeneous. • (It may not be in your financial interests to keep them perfectly homogeneous.) • Condor assists users in making progress, despite the imperfections of the cluster. • Few users *require* the whole identical cluster. • The pursuit of cluster perfection then becomes a matter of small throughput improvements, rather than all-or-nothing (0 or max).
Basic HTC Mechanisms • Matchmaking - lets requests for services and offers to provide services find each other (ClassAds). • Persistence - records are kept in stable storage -- any component may crash and reboot. • Asynchronous API - enables management of dynamic (opportunistic) resources. • Checkpointing - enables preemptive-resume scheduling (go ahead and use it as long as it is available!). • Remote I/O - enables remote (from the execution site) access to local (at the submission site) data.
City Bird, Country Farm • The lessons learned and techniques used in stealing cycles from workstations are just as important when trying to maximize the throughput of a homogeneous cluster.
Outline • Introduction • What is Condor? Why Condor on the Farm? • Components • Daemons, pools, flocks, ClassAds • Short Example • Executing 1000 jobs. • Complications • Firewalls, security, etc…
Components • Condor can be quite complicated: • Many daemons, many connections, many logs... • The complexity is necessary and desirable: • Each process represents an independent interest: • Machine requirements (startd) • User requirements (schedd) • System requirements (central manager) • Explain the structure by working from the bottom up.
A Single Machine • The condor_startd represents the machine: it monitors its state (CPU, RAM, disk, keyboard, load, size, speed, availability, user present?) and enforces a local policy file, e.g.: "Only run jobs submitted from Bologna or Milan. Prefer jobs owned by thain. Evict jobs that don't fit in memory." • The condor_master supervises the daemons and emails the administrator when "Something is wrong!" • (Diagram: the startd reports machine state and policy to the Central Manager.)
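Those English policies map onto expressions in the startd's local configuration. A hedged sketch, not the actual Bologna configuration (the attribute scaling and exact expressions are assumptions):
    # Prefer jobs owned by user thain (higher RANK is preferred):
    RANK    = (Owner == "thain")
    # Evict jobs that don't fit in memory
    # (assuming ImageSize in KB and Memory in MB, hence the scaling):
    PREEMPT = ((ImageSize / 1024) > Memory)
    # The "only Bologna or Milan" rule would become a START expression
    # keyed on an attribute identifying the submitting site.
Since a match is advisory rather than binding, expressions like these are what actually enforce the machine owner's wishes.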
A Single Pool • Each machine runs a condor_startd and reports its state and local policy to the Central Manager, e.g.: "I prefer thain", "I prefer mazzanti", "I don't care." • The Central Manager adds a global policy, e.g.: "All things being equal, Bologna gets 2x as many machines as Milan."
A Typical Pool • As before, but the nodes share an NFS/AFS file server, and the local policy is uniform, e.g.: "All machines except #3 prefer mazzanti." The global policy still lives at the Central Manager: "All things being equal, Bologna gets 2x as many machines as Milan."
Schedulers • Users submit jobs to a condor_schedd, which queues them persistently. Schedds tell the Central Manager "I have work"; startds report "I am idle." The Central Manager introduces the two, and each schedd runs its jobs on the machines it is matched with. • A pool may contain many schedds, each with its own job queue.
Multiple Pools • A condor_schedd is not tied to one pool: its jobs may be matched with startds in several pools (e.g. an INFN Central Manager and the UWCS Central Manager), a mechanism known as flocking.
Matchmaking • Each Central Manager is an introduction service that matches compatible machines and jobs. • A simple language (ClassAds) is used to represent everyone's needs and desires. • The match is not a binding contract -- each side is responsible for enforcing its own needs. • If a central manager crashes, jobs will continue to run, but no further introductions are made.
ClassAd Example
Job Ad:
    Type = "Job"
    Cmd = "cmsim.exe"
    Owner = "thain"
    Requirements = (OpSys=="LINUX") && (Memory>128)
Machine Ad:
    Type = "Machine"
    Name = "vulture"
    OpSys = "LINUX"
    Memory = 256
    Requirements = (Owner=="thain")
Matchmaking with ClassAds • (Diagram: the schedd tells the Central Manager "I have work"; the startd says "I am idle." The Central Manager compares the job ad and the machine ad, and sends each party a match notification. The schedd then claims the startd and executes a job... then executes again, and again, until the claim ends.)
Placement vs. Scheduling • A Condor Central Manager suggests the placement of jobs on machines, with the understanding that all jobs are ready to run. • A Condor scheduler (schedd) is responsible for executing a list of jobs with various requirements. It may order jobs according to the user's requests. • Neither component plans ahead to make a schedule or a reservation for execution -- it is assumed that change is so frequent that schedules are not useful.
Can we Schedule? • Of course, scheduling is important for users that have strict time constraints. • Scheduling is more important to High-Performance Computing (HPC) than High-Throughput Computing (HTC). • Scheduling requirements may be worked into Condor in one of two ways: • 1 - Users may share a single submission point. • 2 - The administrator may periodically reconfigure policy according to a schedule established elsewhere.
Scheduling • Method 1: All users share a single schedd, which orders the jobs in its queue. • Method 2: Modify the global policy when necessary, e.g.: 8:00: all nodes prefer thain; 10:00: all nodes prefer mazzanti. (A sketch of how to automate Method 2 follows.)
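One way to automate Method 2 is a cron job that swaps the policy file and pushes the change with condor_reconfig. A hedged sketch (the file names and paths are illustrative):
    # crontab on the central manager (paths illustrative):
    0 8  * * *  cp /condor/rank.thain    /condor/condor_config.local && condor_reconfig -all
    0 10 * * *  cp /condor/rank.mazzanti /condor/condor_config.local && condor_reconfig -all
    #
    # where rank.thain contains:    RANK = (Owner == "thain")
    # and rank.mazzanti contains:   RANK = (Owner == "mazzanti")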
Outline • Introduction • What is Condor? Why Condor on the Farm? • Components • Daemons, pools, flocks, ClassAds • Short Example • Executing 1000 jobs. • Complications • Firewalls, security, etc…
How Many Machines?
% condor_status
Name          OpSys        Arch   State     Activity  LoadAv  Mem
lxpc1.na.infn LINUX-GLIBC  INTEL  Unclaimed Idle      0.000    30
axpd21.pd.inf OSF1         ALPHA  Owner     Idle      0.266    96
vlsi11.pd.inf SOLARIS26    SUN4u  Claimed   Busy      0.000   256
. . .
                  Machines Owner Claimed Unclaimed Matched Preempting
ALPHA/OSF1             115    67      46         1       0          1
INTEL/LINUX             53    18       0        35       0          0
INTEL/LINUX-GLIBC       16     7       0         9       0          0
SUN4u/SOLARIS251         1     1       0         0       0          0
SUN4u/SOLARIS26          6     2       0         4       0          0
SUN4u/SOLARIS27          1     1       0         0       0          0
SUN4x/SOLARIS26          2     1       0         1       0          0
Total                  194    97      46        50       0          1
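condor_status can also answer a more pointed question -- how many machines match a given job's requirements -- via a constraint expression. A sketch, reusing the requirements from the ClassAd example (the -constraint option accepts any ClassAd expression):
    % condor_status -constraint '(OpSys == "LINUX") && (Memory > 128)'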
Submit the Job • Create a submit file: • vi sim.submit
    Executable = sim
    Input = sim.in
    Output = sim.out
    Log = sim.log
    Queue
• Submit the job: • condor_submit sim.submit
Watch the Progress
% condor_q
-- Submitter: axpbo8.bo.infn.it : <131.154.10.29:1038> :
 ID   OWNER   SUBMITTED    RUN_TIME    ST PRI SIZE CMD
 5.0  thain   6/21 12:40   0+00:00:15  R  0   2.5  sim.exe
• Each job gets a unique number (ID). • ST is the status: Unexpanded, Running, or Idle. • SIZE is the size of the program image (MB).
Receive E-mail When Done
This is an automated email from the Condor system on machine "axpbo8.bo.infn.it". Do not reply.
Your condor job /tmp_mnt/usr/users/ccl/thain/test/sim 40 exited with status 0.
Submitted at:    Wed Jun 21 14:24:42 2000
Completed at:    Wed Jun 21 14:36:36 2000
Real Time:       0 00:11:54
Run Time:        0 00:06:52
Committed Time:  0 00:01:37
. . .
Running Many Processes • The real benefit of Condor comes from managing thousands of jobs. • First, get organized: write a script to make 1000 input files (a sketch follows below). • Now, simply adjust your submit file:
    Executable = sim.exe
    Input = sim.in.$(PROCESS)
    Output = sim.out.$(PROCESS)
    Log = sim.log
    Queue 1000
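A minimal sketch of such a script, assuming the simulation reads a single parameter line per input file (the parameter itself is illustrative):
    #!/bin/sh
    # make_inputs.sh: create sim.in.0 through sim.in.999
    i=0
    while [ $i -lt 1000 ]; do
        echo "seed = $i" > sim.in.$i   # one illustrative parameter per file
        i=`expr $i + 1`
    done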
What can go wrong? • If an execution site crashes: • Your job will restart elsewhere. • If the central manager crashes: • Jobs will continue to run, no new matches will be made. • If the submit machine crashes: • Jobs will stop, but be re-started when it reboots. • The only way to lose a job is to throw away the disk on the submit machine!
Outline • Introduction • What is Condor? Why Condor on the Farm? • Components • Daemons, pools, flocks, ClassAds • Short Example • Executing 1000 jobs. • Complications • Firewalls, security, etc…
Firewalls • Why a firewall? • Prevent all outside contact. • Prevent non-approved contact. • Carefully securing every node is too much work. • What’s the problem? • A variety of processes comprise Condor. • A variety of ports must be used at once. • Submit and execute machines must communicate directly, not through the CM.
The Firewall Problem • (Diagram: the cluster -- Central Manager and startds -- sits on a private network behind a firewall; a condor_schedd on the public network cannot open the direct connections to the startds that running jobs require.)
Firewall Solution #1 • Configure Condor on the cluster nodes to use only ports 1000-1010, and configure the firewall to allow ports 1000-1010 through. The schedd on the public network can then reach the startds directly.
Firewall Solution #1 • Pros: • Easy to configure Condor (see the sketch below). • Easy to configure the firewall. • Machines remain a part of the pool. • Cons: • The number of ports limits the number of simultaneous interactions with the node (running jobs + queue ops + negotiations, etc.). • More ports = more connections, less security.
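In Condor the port range is a configuration setting. A hedged sketch, assuming the LOWPORT/HIGHPORT configuration macros (available in Condor releases of roughly this era):
    # condor_config on each node behind the firewall:
    LOWPORT  = 1000
    HIGHPORT = 1010
The firewall is then configured to pass TCP and UDP on ports 1000-1010 to those nodes.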
Firewall Solution #2 • (Diagram: move the condor_schedd inside the private network, alongside the Central Manager and startds. Users on the public network log in through the firewall via ssh and submit from there.)
Firewall Solution #2 • Pros: • The only port through the firewall is ssh. • Cons: • The pool is partitioned! • Users must manually submit to every pool that is behind a firewall. (i.e., they won't.) • No global policy possible. • No global management/status possible.
Network Address Translation • Both solutions work only as long as the firewall simply drops packets it doesn't like. • If the firewall is a Network Address Translator (masquerading), then only solution #2 works. • Research in Progress: A Condor NAT that runs on the firewall and exports the pool to the outside world.
Security • Current Condor security: • Authenticate via DNS. • Authorize classes of hosts for certain tasks. • New Condor (6.3.X?) security: • Authenticate with encrypted credentials. • Authorize on a per-user basis. • Forward credentials to necessary sites.
Condor 6.2 Security • Authentication: DNS is queried for each incoming connection in order to determine the host name. • Authorization: Each participant permits a class of hosts to perform certain tasks. At UW-CS:
    HOSTALLOW_READ = *.wisc.edu, *.infn.it
        Hosts that may query the machine state.
    HOSTALLOW_WRITE = *.cs.wisc.edu, *.infn.it
        Hosts that may execute jobs, send updates, etc...
    HOSTALLOW_OWNER = $(FULL_HOSTNAME)
        Hosts that may cause this machine to vacate.
    HOSTALLOW_ADMINISTRATOR = condor.cs.wisc.edu
        Hosts that may change priorities, turn Condor on/off.
Condor 6.3.X? Security • Principle: No single security mechanism is appropriate for all sites. Condor must have many tools. • United States Air Force: • Kerberos authentication, all connections encrypted • Cluster behind a firewall: • Host authentication, no encryption • Grid Computing: • GSI credentials from certain authorities, encryption is up to the user.
Condor 6.3.X Security • (Diagram: at each connection, the parties negotiate an authentication method -- the schedd asks the Central Manager "GSI?" and hears "YES!"; it asks the startd "GSI? KRB5?" and settles on a mutually supported method. The submitter's certificate is forwarded to the execute machine, so the job's I/O to data storage can be authenticated with the user's own credential.)