Learn how Quincy jointly optimizes fairness and data locality in distributed computing clusters. Explore how scheduling is formulated as a graph problem, minimizing matching cost while meeting fairness constraints.
Quincy: Fair Scheduling for Distributed Computing Clusters Michael Isard, Vijayan Prabhakaran, Jon Currey, Udi Wieder, Kunal Talwar, and Andrew Goldberg @ Microsoft Research Presenter: Weiyue Xu 22nd ACM Symposium on Operating Systems Principles
Credit • Modified version of www.sigops.org/sosp/sosp09/slides/quincy/QuincyTestPage.html and www.cs.uiuc.edu/class/sp11/cs525/slides.021711.ppt
Outline • Introduction • Goal of Quincy • Baseline: Queue Based Scheduler • Flow Based Scheduler: Quincy • Evaluation • Conclusion
Motivation • Popularity of data-intensive cluster computing • Fairness • More than 50% of jobs are small (less than 30 minutes) • A large job should not monopolize the cluster • If job X takes t seconds when it runs exclusively on the cluster, X should take no more than Jt seconds when the cluster runs J concurrent jobs (for N computers and J jobs, each job should get at least N/J computers) • Data locality • Large disks are directly attached to the computers • Network bandwidth is expensive
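A minimal sketch of the fairness bound stated above; the function names and example numbers are illustrative, not from the paper.

```python
# Fairness bound from the slide: with N computers and J concurrent jobs,
# each job should get at least N // J computers, and a job that takes
# t seconds alone should take at most J * t seconds when sharing.
def fair_share(num_computers: int, num_jobs: int) -> int:
    return num_computers // num_jobs

def worst_case_runtime(exclusive_runtime_s: float, num_jobs: int) -> float:
    return num_jobs * exclusive_runtime_s

print(fair_share(240, 8))            # e.g. 30 computers per job
print(worst_case_runtime(600, 8))    # a 10-minute job should finish within 80 minutes
```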
Problem setting & assumptions • Homogeneous environment • Dryad distributed execution platform • Similar to MapReduce and Hadoop • Each job contains one "root task" and several "worker tasks" • Tasks are independent of each other
For MPI (message-passing) jobs, coarse-grain scheduling • Devote a fixed set of computers to a particular job • Static allocation; the allocation rarely changes • Task dependencies make it costly to kill a task • No direct-attached storage • For Dryad jobs, fine-grain resource sharing • Multiplex all computers in the cluster between all jobs • When one task completes, the computer's resources may be reassigned to another job • Independent tasks (less costly to kill a task and restart it) • Large datasets attached to each computer
Example of Fine Grain Sharing • N/J computers are used by each job at any given time, but the set in use varies over the job's lifetime
Data Locality • Data transfer cost depends on the size of the data and where it is stored (same computer, same rack, or across the cluster).
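An illustrative sketch of a data transfer cost estimate (not the paper's exact cost model); `transfer_cost`, the input format, and the 10x weighting of core-switch traffic are assumptions.

```python
# Approximate the cost of running a task on a given computer by the number
# of input bytes that must cross the rack switch or the cluster core switch.
def transfer_cost(input_blocks, computer, rack_of):
    """input_blocks: list of (size_bytes, host) pairs for the task's inputs;
    rack_of: mapping from computer name to its rack."""
    rack_bytes = 0
    core_bytes = 0
    for size, host in input_blocks:
        if host == computer:
            continue                                  # local read, no network transfer
        elif rack_of[host] == rack_of[computer]:
            rack_bytes += size                        # crosses the rack switch only
        else:
            core_bytes += size                        # crosses the cluster core switch
    # Core-switch bandwidth is scarcer, so weight it more heavily (assumed factor).
    return rack_bytes + 10 * core_bytes
```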
Goal of Quincy • Fairness + data locality • N computers, J concurrent jobs • Each job gets at least N/J computers • With data locality • Place tasks near their data to avoid network bottlenecks • Joint optimization of fairness and data locality • A multi-constrained optimization problem with trade-offs!
Baseline: Queue Based Scheduler • Greedy (G): • Locality is computed by the root task for each worker task, by estimating the amount of data that would need to be transferred if computer m were assigned to the task (preference order Cm > Rl > X: preferred computers, then preferred racks, then anywhere in the cluster) • Fairness is not considered • Simple Greedy Fairness (GF): • A "blocked" job is not assigned more computers, but pre-existing tasks from now-blocked jobs are allowed to run to completion (similar to the Hadoop Fair Scheduler) • Fairness with Preemption (GFP): • Over-quota tasks are killed
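A minimal sketch of the greedy preference order described above (illustrative, not the paper's code); the data structures and `greedy_assign` name are assumptions.

```python
# Try a worker task's preferred computers first (Cm), then any idle computer
# in one of its preferred racks (Rl), then any idle computer in the cluster (X).
def greedy_assign(task, idle_computers, rack_members):
    """idle_computers: set of currently free computers;
    rack_members: {rack: set of computers in that rack};
    task.preferred_computers / task.preferred_racks come from the root
    task's locality estimate."""
    for c in task.preferred_computers:                    # Cm: best data locality
        if c in idle_computers:
            return c
    for r in task.preferred_racks:                        # Rl: rack-local data
        for c in rack_members[r] & idle_computers:
            return c
    for c in idle_computers:                              # X: anywhere in the cluster
        return c
    return None                                           # no idle computer; task waits
```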
Flow Based Scheduler: Quincy • Main idea: matching = scheduling • Construct a graph based on the scheduling constraints and the cluster architecture • Assign a cost to each possible matching • Finding a min-cost flow on the graph is equivalent to finding a feasible schedule • Each task is either scheduled on a computer or remains unscheduled • Fairness constrains the number of tasks scheduled for each job
New Goal • Minimize matching cost while obeying fairness constraints • Instead of making local decisions (queue based), solve the problem globally • Issues: • How to construct the graph? • How to embed fairness and locality constraints in the graph?
Graph Construction • Start with a directed graph representation of the cluster architecture (computers, racks, and the cluster as a whole)
Graph Construction (2) • Add an unscheduled node Uj for each job j • Each worker task has an edge to Uj • There is a single edge from Uj to the sink • Edges from tasks to Uj have high cost • The cost and flow on the edge from Uj to the sink control fairness • Fairness is controlled by adjusting the number of tasks allowed for each job
Graph Construction (3) • Add edges from tasks (T) to computers (C), racks (R), and the cluster (X) • These edges give control over data locality: cost(T-C) << cost(T-R) << cost(T-X) • A 0-cost edge from each root task to its computer avoids preempting root tasks • A sketch combining the construction steps is shown below
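A minimal, illustrative sketch of the flow network described on the Graph Construction slides, written with Python and networkx; the library choice, all identifiers (e.g. `build_quincy_graph`), and the placeholder cost values are assumptions, not the paper's implementation.

```python
import networkx as nx

def build_quincy_graph(jobs, racks, task_costs, fair_share):
    """jobs: {job_id: [task_id, ...]}; racks: {rack_id: [computer_id, ...]};
    task_costs: {(task_id, node_id): cost} for a task's preferred computers,
    preferred racks, and the cluster aggregator "X";
    fair_share: minimum number of computers guaranteed to each job."""
    g = nx.DiGraph()
    total_tasks = sum(len(ts) for ts in jobs.values())
    g.add_node("SINK", demand=total_tasks)                 # all flow drains here

    # Cluster aggregator X -> racks -> computers -> sink.
    for rack, computers in racks.items():
        g.add_edge("X", rack, capacity=len(computers), weight=0)
        for c in computers:
            g.add_edge(rack, c, capacity=1, weight=0)
            g.add_edge(c, "SINK", capacity=1, weight=0)    # one task per computer

    for job, tasks in jobs.items():
        unsched = f"U_{job}"
        # At most len(tasks) - fair_share of this job's tasks may remain
        # unscheduled, which guarantees the job at least fair_share computers.
        g.add_edge(unsched, "SINK",
                   capacity=max(len(tasks) - fair_share, 0), weight=0)
        for t in tasks:
            g.add_node(t, demand=-1)                       # each task pushes one unit of flow
            g.add_edge(t, unsched, capacity=1, weight=1000)            # high cost: stay unscheduled
            g.add_edge(t, "X", capacity=1,
                       weight=task_costs.get((t, "X"), 100))           # run anywhere in the cluster
            for (task, node), cost in task_costs.items():
                if task == t and node != "X":              # cheaper edges to preferred racks/computers
                    g.add_edge(t, node, capacity=1, weight=cost)
    return g

# flow = nx.min_cost_flow(g) yields the schedule: task t runs on computer c if
# its unit of flow reaches c (directly, or via its rack or the X aggregator).
```

The fairness constraint is encoded purely through edge capacities: capping the flow a job can send to its unscheduled node forces at least `fair_share` of its tasks onto computers, while the edge costs steer those tasks toward computer-local, then rack-local, placements.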
A Feasible Matching • The cost of a task's T-U edge increases over time • A new cost is assigned to a scheduled task's T-C edge, and it also increases over time
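A hedged illustration of time-dependent edge costs; the coefficients, function names, and the simple linear forms are assumptions, not the paper's calibrated cost terms.

```python
# Hypothetical time-dependent edge costs (illustrative only).
OMEGA_WAIT = 2   # assumed cost per second of leaving a task unscheduled

def unscheduled_cost(wait_seconds: int) -> int:
    # T -> U_j edge: the penalty for keeping a task unscheduled grows with
    # its waiting time, so long-waiting tasks eventually win a computer
    # even when their locality is poor.
    return OMEGA_WAIT * wait_seconds

def scheduled_cost(base_data_cost: int, run_seconds: int, alpha: int = 1) -> int:
    # T -> C edge for a running task: re-evaluated as time passes; the
    # slide's "increases over time" is modeled here with a linear term.
    return base_data_cost + alpha * run_seconds
```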
Evaluation • Typical Dryad jobs (Sort, Join, PageRank, WordCount, Prime) • Prime is used as a worst-case job that hogs the cluster if started first • 240 computers in the cluster: 8 racks, 29-31 computers per rack • More than one metric is used for evaluation
Conclusion • New computational model for data-intensive computing • Elegant mapping of scheduling to a min-cost flow/matching problem
Discussion • Assumes a homogeneous environment • Centralized Quincy controller: a single point of failure • No theoretical stability guarantee • Cost measures: fairness vs. the cost of killing tasks