
Scheduling Conflicting Jobs: Problems and Techniques


Presentation Transcript


  1. Scheduling Conflicting Jobs: Problems and Techniques Guy Kortsarz, Rutgers University, Camden

  2. Scheduling dependent jobs • Jobs compete for resources • Create a graph: each job is a vertex • Two vertices are adjacent if the corresponding jobs are dependent • Two possibilities: • Single-unit jobs • Jobs that require more than one unit of processing • Two conflicting jobs cannot be executed in the same time unit

  3. Relation of single-unit jobs to graph coloring • Given a graph G, find a mapping Ψ: V → N so that Ψ(v) = Ψ(u) implies uv ∉ E • The objective, usually, is to minimize the number of colors used. Classic applications: timetabling, frequency allocation
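To make the coloring mapping concrete, here is a minimal Python sketch of the classic greedy (first-fit) heuristic; the adjacency-dict representation and all names are illustrative, not from the talk.

```python
# Minimal sketch: greedy (first-fit) coloring of a conflict graph.
# psi maps each vertex to a number so that adjacent vertices differ.

def greedy_coloring(adj):
    """Give each vertex the smallest color unused by its already-colored neighbors."""
    psi = {}
    for v in adj:                                   # arbitrary vertex order
        taken = {psi[u] for u in adj[v] if u in psi}
        c = 1
        while c in taken:
            c += 1
        psi[v] = c
    return psi

# A 4-cycle is bipartite, so two colors suffice.
cycle4 = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(greedy_coloring(cycle4))   # e.g. {1: 1, 2: 2, 3: 1, 4: 2}
```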

  4. Multicoloring bipartite graphs [figure: a bipartite graph whose vertices require 3, 4, 2, 2, 1, and 4 colors]

  5. Multicoloring bipartite graphs [figure: a multicoloring of the same bipartite graph]

  6. Formal definition of the multicoloring problem • Input: a graph G = (V, E) with lengths x(v) on the vertices • Output: an assignment Ψ: V → 2^N with |Ψ(v)| = x(v), such that adjacent vertices do not receive overlapping times: Ψ(v) ∩ Ψ(u) ≠ ∅ implies uv ∉ E • Goal: minimize the maximum integer used, i.e. minimize max_{u∈V} f(u), where f(u) is the largest color of u
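A small checker makes the definition concrete. This is a sketch only; the adjacency-dict representation and the function names are assumptions for illustration.

```python
# Sketch of a validity check for the multicoloring definition above:
# psi maps each vertex to a set of time units, vertex v must get x(v) units,
# adjacent vertices must get disjoint sets, and the makespan is max_v f(v).

def is_valid_multicoloring(adj, x, psi):
    for v in adj:
        if len(psi[v]) != x[v]:
            return False                       # wrong number of time units
        for u in adj[v]:
            if psi[v] & psi[u]:                # overlapping units on an edge
                return False
    return True

def makespan(psi):
    return max(max(units) for units in psi.values())

# Two conflicting jobs of lengths 2 and 3 must use disjoint time units.
adj = {'a': ['b'], 'b': ['a']}
x = {'a': 2, 'b': 3}
psi = {'a': {1, 2}, 'b': {3, 4, 5}}
print(is_valid_multicoloring(adj, x, psi), makespan(psi))   # True 5
```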

  7. Multicoloring is often easy [figure: a 6-clique] • More generally, chordal graphs, and even perfect graphs, are no harder to multicolor than to color • Maybe one reason why multicolorings are not more common

  8. Other objectives besides makespan (= number of colors) • Sum of completion times of the jobs: for each vertex, take the last time unit assigned to it, and sum these values • Weighted sum of completion times: vertices additionally carry an importance value • Total lateness: assumes a deadline for each task • Sum of flow times: assumes a release time for each job

  9. “Sum of completion times” in graph coloring? • Recall we color with numbers, Ψ: V → N • Sum of a coloring = sum of the values assigned to the vertices • Sum coloring problem: find a coloring that minimizes the chromatic sum • This measure is more favorable to the users (as a whole), while the makespan is what the system (machines) desires

  10. The Sum Multicoloring problem • Each vertex v ∈ V requires x(v) ≥ 1 distinct colors • A proper coloring of G is a function Ψ: V → 2^N such that adjacent vertices are assigned disjoint sets of numbers (colors). Let f_Ψ(v) denote the maximum color assigned to v by Ψ • The sum multicoloring (SMC) problem: find a multicoloring Ψ that minimizes Σ_{v∈V} f_Ψ(v) • The preemptive model (pSMC): each vertex can get any set of colors • The no-preemption model (npSMC): assign to each vertex a contiguous set of colors
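Continuing the same toy representation, here is a sketch of the SMC objective and of the contiguity requirement that separates npSMC from pSMC; names are illustrative.

```python
# Sketch: the SMC cost is the sum over vertices of the largest color used,
# and in the no-preemption model every vertex's set must be a contiguous block.

def smc_cost(psi):
    """Sum of completion times: sum over v of the maximum color of v."""
    return sum(max(units) for units in psi.values())

def is_non_preemptive(psi):
    """True iff every vertex receives a contiguous interval of colors."""
    return all(max(c) - min(c) + 1 == len(c) for c in psi.values())

psi = {'a': {1, 2}, 'b': {3, 4, 5}, 'c': {1, 3}}
print(smc_cost(psi))            # 2 + 5 + 3 = 10
print(is_non_preemptive(psi))   # False: 'c' is preempted after time unit 1
```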

  11. Is “sum coloring” any different from ordinary coloring? YES! [figure: a small example graph, colored with colors 1, 2, 3, comparing colorings of sums 12 and 11]

  12. OK, isn’t at least the sum of an ordinary coloring “good enough”? • No: it can be off by a factor of k on k-chromatic graphs • There are graphs where any 3-coloring has sum 2n... • ...while a certain 4-coloring has sum n + 6

  13. Applications • Wire minimization in VLSI design; the conflict graph is an interval graph • Train scheduling; permutation graphs • Minimizing distance in storage allocation; interval graphs • Session scheduling in path arrays • Biprocessor scheduling; line graphs • Data migration; line graphs • Dynamic storage allocation; interval graphs • Open shop scheduling; line graphs of bipartite graphs

  14. Some interesting results in sum coloring; upper bounds • NP-hard on planar graphs, but an approximation scheme (PTAS) exists [Halldorsson, K.] • Bipartite graphs: 27/26-approximation. The following problem is polynomial: color the vertices with 3 colors so that the sum is minimized, where the third color class is not required to be an independent set [Malafiejski, Giaro, Janczewski, Kubale] • Approximable within factor 4, given an oracle for Max Independent Set; it was already well known that iterating such an oracle gives an O(log n) approximation in the number of colors [Bar-Noy, Bellare, Halldorsson, Shachnai, Shapira]
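The oracle-based routine the last bullet alludes to is the folklore one: repeatedly ask for a maximum independent set of what remains and give it the next color. The sketch below illustrates only that routine, not the factor-4 algorithm of Bar-Noy et al., and its brute-force oracle is a stand-in that only works on tiny graphs.

```python
from itertools import combinations

def max_independent_set(adj, vertices):
    """Brute-force MIS oracle (exponential time; illustration only)."""
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u == v or u not in adj[v] for v in s for u in s):
                return s
    return set()

def iterated_mis_coloring(adj):
    """Repeatedly color a maximum independent set with the next color."""
    remaining = set(adj)
    psi, color = {}, 1
    while remaining:
        sub = {v: [u for u in adj[v] if u in remaining] for v in remaining}
        for v in max_independent_set(sub, sorted(remaining)):
            psi[v] = color
        remaining -= {v for v, c in psi.items() if c == color}
        color += 1
    return psi

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(iterated_mis_coloring(triangle))   # needs three colors, e.g. {1: 1, 2: 2, 3: 3}
```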

  15. Some hardness results • APX-hard on bipartite graphs [Bar-Noy, K. 98] • Sum coloring is as hard as Graph Coloring in general, i.e. n^{1−o(1)} approximation hardness [Feige, Kilian 96; Bar-Noy+ 96] • Therefore, SMC is at least as hard • APX-hard on interval graphs [Gonen 01] • Open shop scheduling is APX-hard [Hoogeveen, Schuurman and Woeginger 98] • Hardness is not well understood

  16. Very few exact results • Non-preemptive sum multicoloring of trees is in P [Halldórsson, K, Proskurowski, Salman, Shachnai and Telle] • Preemptive sum multicoloring of paths is in P. An ICALP paper (!) [Kovács] • Preemptive sum multicoloring of trees is NP-complete (!) [Marx] • See the collection of papers maintained by Daniel Marx for more results: http://www2.informatik.hu-berlin.de/~dmarx/sum.php

  17. Tools and techniques “The craftsman is known by his tools”

  18. Techniques • Geometric series and randomization • Grouping by length • Universal colorings • Delaying long vertices • Reducing to Independent Set, via SC • Rounding and scaling • Reducing sum to makespan • LP techniques

  19. A simple guessing game • Player A decides on a number x • Player B tries a sequence x1, x2, ... of guesses until it finds an xi that Player A confirms satisfies xi ≥ x • The value of the game is the performance ratio: the total of all guesses made, divided by x

  20. A simple deterministic strategy • Guess 1, 2, 4, 8, 16, ... • Performance ratio of 4: • The last number is at most 2x • The next-to-last number is less than x • The previous numbers form a geometric series summing to at most x • Bad instance: x = 2^k + 1; then the ratio is (2^{k+2} − 1)/(2^k + 1), which tends to 4 • This is also the best possible ratio for a deterministic strategy
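Both the factor-4 bound and the bad instance can be checked by simulating the strategy; the cost model (pay the sum of all guesses) is the one from slide 19, and the function name is illustrative.

```python
# Sketch: the doubling strategy guesses 1, 2, 4, 8, ... until a guess reaches
# the hidden value x; the cost is the sum of all guesses, the ratio is cost/x.

def doubling_ratio(x):
    guess, cost = 1, 0
    while True:
        cost += guess
        if guess >= x:
            return cost / x
        guess *= 2

# The bad instance of this slide: x = 2^k + 1 forces a ratio approaching 4.
for k in (3, 10, 20):
    x = 2 ** k + 1
    print(x, round(doubling_ratio(x), 4))   # (2^(k+2) - 1) / (2^k + 1) -> 4
```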

  21. Lower bound on a deterministic algorithm • We show that no deterministic strategy can have a ratio better than 4 − ε • Let ε > 0 be a small enough constant • Write x_i = λ_i · Σ_{j ≤ i−1} x_j • and x_{i+1} = μ_i · x_i

  22. Assumptions • The starting point assumption: x_1 ≥ max{4, 1/ε}. Otherwise, wait for the guesses to become large enough • We may assume that μ_i ≤ 4. Otherwise the adversary waits for the next guess and then stops, declaring x = x_i + 1 • The ratio is then at least (4·x_i + x_i)/(x_i + 1) = 5·x_i/(x_i + 1) ≥ 4. The last inequality follows since x_1 ≥ 4

  23. Strategy for the adversary • Stopping condition: if at some iteration μ_i ≥ (1 − ε/8)·(1 + λ_i), the adversary stops and says x = x_i + 1 • Else: let r = x_2/x_1 and Φ = 8·ln(3/r)/ε • If for Φ + 2 iterations the stopping condition does not apply, then the adversary stops • and answers: your last guess x_q equals x

  24. Analysis • If at one of the Φ + 2 iterations μ_i ≥ (1 − ε/8)·(1 + λ_i): • the ratio is (1/λ_i + 1 + μ_i)·x_i/(x_i + 1) ≥ 4 − ε • The last inequality is proved using: • ε small enough • μ_i ≤ 4 • x_1 ≥ 1/ε • x + 1/x ≥ 2 for every x > 0

  25. If there are Φ + 2 consecutive failures • For every such i: λ_{i+1} = μ_i·λ_i/(λ_i + 1) ≤ (1 − ε/8)·λ_i • Thus λ_{Φ+2} < (1 − ε/8)^Φ · λ_2 < 1/3 • Thus: 3·x_q < Σ_{j ≤ q−1} x_j • The ratio is at least (x_q + 3·x_q)/x_q = 4

  26. A randomized strategy • Defeat the worst-case instance by randomizing the initial guess • Work on the log scale • Choose an arithmetic series (on the log scale) with a fixed multiple; for instance, e • Randomize the starting point: δ chosen uniformly from [0, 1) • Define guess x_i = e^{i+δ} [figure: the guesses at δ, δ+1, δ+2, δ+3, δ+4 marked on a log-length axis]

  27. Analysis of the randomized strategy [figure: t lying between i−1+δ and i+δ on the log axis] • Write x = e^t, i.e. t = ln x, and let i = i(δ) be such that i−1+δ ≤ t ≤ i+δ • As δ ranges from 0 to 1, the overshoot i+δ−t also ranges uniformly from 0 to 1 • Thus, E[x_i / x] = E[e^{i+δ−t}] = ∫_0^1 e^u du = e − 1

  28. Analysis, cont. • The expected amount spent by the algorithm is then E[Σ_{j ≤ i} e^{j+δ}] ≤ (e/(e − 1)) · E[e^{i+δ}] = (e/(e − 1)) · (e − 1) · x = e · x • Hence, a performance ratio of e ≈ 2.718
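The expectation can also be checked numerically. The Monte Carlo sketch below assumes the cost model of slide 19 and an arbitrary hidden value x; the empirical ratio should land close to e.

```python
import math
import random

# Sketch: guesses are x_i = e^(i + delta) with delta uniform in [0, 1); the
# cost is the sum of all guesses made until one reaches the hidden value x.

def randomized_ratio(x, rng):
    delta = rng.random()
    i, cost = 0, 0.0
    while True:
        guess = math.exp(i + delta)
        cost += guess
        if guess >= x:
            return cost / x
        i += 1

rng = random.Random(0)
x = 12345.678                     # arbitrary hidden value, for illustration
trials = 100_000
estimate = sum(randomized_ratio(x, rng) for _ in range(trials)) / trials
print(estimate, math.e)           # the estimate should be close to e = 2.718...
```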

  29. Applications of the strategy • Geometric series have been used on different measures for different problems • Lengths of vertices [BBHST’98]: non-preemptive SMC of bipartite graphs • Maximum k-colorable subgraph [HKS’01]: sum coloring of interval graphs • LP values of vertices [GHKS, WAOA’04]: SMC of line graphs (pre and non-pre) and interval graphs (non-pre) • All, in some sense, are lower bounds on the optimal solution

  30. Grouping by length • The challenge of multicoloring is that vertices can have widely divergent lengths • Handling separately vertices with roughly similar lengths often simplifies the problem • It is also natural to expect that vertices of like length go together

  31. Grouping by length [figure: the length axis divided into segments V1, ..., V5] • Divide the real line into segments • Vertices whose lengths fall within the same segment induce a subproblem that is solved separately • Subsolutions are pasted together in order to produce the final solution

  32. How to group • Cost of the whole (given that we break) = cost of coloring V_SMALL + |V_BIG| · (#colors used on V_SMALL) + cost of coloring V_BIG • Choose the breakpoint between SMALL and BIG so that the middle overhead term stays small [figure: vertex lengths on a line, split into SMALL and BIG]

  33. The Markov inequality: several break points • The Markov inequality: given a collection of n positive numbers with average µ: • |{ a_i | a_i ≥ 2µ }| ≤ n/2 • |{ a_i | a_i ≥ 3µ }| ≤ n/3 • What happens if we consider all break points r, r+1, r+2, ..., s? Can we find a break point i at which much fewer than n/i of the numbers are at least i times the average?

  34. The basic breakpoint lemma • Consider the collection of breakpoints r, r+1, ..., s • There exists r ≤ j ≤ s so that: |{ a_i | a_i ≥ j·µ }| ≤ n/(j·ln(s/r))
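The lemma can be read as a search over candidate breakpoints. The sketch below simply scans j = r, ..., s and returns the first j meeting the stated bound; the length data is made up for illustration.

```python
import math

# Sketch: scan the breakpoints r..s and return a j with
# |{ a_i : a_i >= j*mu }| <= n / (j * ln(s/r)), as the lemma promises.

def find_breakpoint(a, r, s):
    n = len(a)
    mu = sum(a) / n
    for j in range(r, s + 1):
        count = sum(1 for ai in a if ai >= j * mu)
        if count <= n / (j * math.log(s / r)):
            return j, count
    return None                    # by the lemma, this should not happen

lengths = [1, 1, 2, 2, 3, 5, 8, 40, 300]   # a skewed length distribution
print(find_breakpoint(lengths, r=2, s=16))
```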

  35. Breakpoint lemma [Halldorsson, K ’98] • Can break into size groups BIG and SMALL such that: • the largest item in SMALL is negligible in comparison with the average item in BIG • Thus, if the number of colors used is proportional to the largest item, then the cost of breaking up is minor • Holds for constant-colorable graphs • Breakup overhead = |V_BIG| · (#colors used on V_SMALL)

  36. Breakpoint lemma • For any q, there is a sequence of breakpoints b_i satisfying the lemma's condition such that the total breakpoint overhead is bounded [the precise condition and bound appeared as formulas on the slide; figure: breakpoints b_1, b_2, b_3, b_4 on the length axis]

  37. PTAS for planar graphs of roughly equal length vertices • Let [a, b] be the range of vertex lengths • Round the lengths of the jobs to a multiple of ε·a → (1 + ε) factor overhead • Scale the lengths, by a factor, into the range [ε^{−2}, ε^{−1}·b/a] → no overhead for nonpreemptive problems • Baker decomposition into a k-outerplanar part G_k and an outerplanar, small part G_O • Solve G_k optimally by DP, but truncate the coloring after (1/ε)·b/a colors → cost ≤ OPT • Finish by 4-coloring G_O and the remaining vertices → cost O(n/k) · c · (1/ε)·b/a ≤ ε·n/c when k ≫ ε^{−2}·(b/a)

  38. [flowchart] Input graph G → round and scale, by a factor of εa → Baker’s decomposition → solve optimally by DP / color with a minimal number of colors → retain the (1/ε)·b/a smallest colors → combine → output a coloring

  39. Preemptive multicoloring of planar graphs • A general tool for O(1)-makespan coloring: preemptive scaling. We are able to reduce job lengths paying only a (1 + ε) factor • Claim 1: say that all x(v) are divisible by q. Then, if x(v)/q ≥ c·ln n for every v: OPT(I) ≤ q·OPT(I/q) ≤ (1 + ε)·OPT(I)

  40. Proof • Take the optimum for I • Include every independent set with probability (1 + ε′)/q in a solution for I/q • With constant probability the makespan is not larger than (1 + ε′)(1 + ε′′)·OPT(I)/q • By the Chernoff bound and the union bound, every vertex gets at least x(v)/q scheduling units
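A sketch of the sampling step in this proof, assuming the optimal schedule for I is given as a list of per-time-unit independent sets; schedule, q and eps below are illustrative names, not from the talk.

```python
import random

# Sketch: keep each time slot of the optimal schedule for I independently with
# probability (1 + eps)/q; the kept slots form a candidate schedule for I/q.
# The slide's Chernoff/union-bound argument says that, with good probability,
# every vertex still keeps at least x(v)/q of its slots.

def sample_schedule(schedule, q, eps, rng):
    p = (1 + eps) / q
    return [slot for slot in schedule if rng.random() < p]

def units_received(schedule):
    counts = {}
    for slot in schedule:
        for v in slot:
            counts[v] = counts.get(v, 0) + 1
    return counts

rng = random.Random(1)
# Toy optimum for I: vertex 'a' runs in the even slots, 'b' in the odd ones,
# so x(a) = x(b) = 1000 and the makespan of I is 2000.
schedule = [{'a'} if t % 2 == 0 else {'b'} for t in range(2000)]
sub = sample_schedule(schedule, q=10, eps=0.2, rng=rng)
print(len(sub), units_received(sub))
# roughly 240 slots, about (1 + eps) * 2000 / q, and each vertex keeps around
# 120 >= x(v)/q = 100 units.
```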

  41. A general lemma on reducing job lengths • Assume that the graph is constant colorable • It is possible to reduce all job lengths to O(log n) with only a (1 + ε) penalty • The number of preemptions can be reduced to O(log n) • This holds true even if tasks are initially of size exponential in n (!)

  42. Proof • Split each job length into its (roughly) log n most significant bits and the rest • The large parts of the numbers are all divisible by some large q • Reduce these numbers to log n bits (small loss); this is where the O(log n) comes from • First color the small parts non-preemptively, by round robin • This causes only a small delay, because there are constantly many colors and the numbers are small • Then take the solution to the O(log n)-size instance • The coloring for x′/q is repeated q times

  43. Planar decomposition • A planar graph G can be decomposed into two graphs G′ and G′′ so that: • G′ has treewidth at most k² • G′′ has treewidth 2, and color-requirement sum O(S/k²) • If we have an approximation for pSMC on graphs of treewidth k, we can use it to get colorings c′ and c′′ for G′ and G′′ • To get the combined coloring: after each k color classes of c′, put one color class of c′′

  44. Universal family of colorings • Works for graphs that have constant treewidth • It is a family of colorings such that, for every instance, there exists a coloring in the family that approximates the SMC within a (1 + ε) factor • The key property: the number of different colorings of v in the family is polynomial in n • This allows finding the best solution via DP when the treewidth is constant • The existence is proven by modifying OPT

  45. A somewhat surprising fact • The colorings in our family depend only on a specific k-coloring of the graph, on n and on p • In particular, they depend neither on the actual connections in the graph nor on the distribution of color requirements • Hence the name universal

  46. The universal family • Split the colors of every vertex into powers of (1 + ε) • Segment i: colors (1 + ε)^i to (1 + ε)^{i+1} • In every segment, treat the coloring as a makespan instance • Thus, in every segment, O(log n) colors • Make the number of segments in which a vertex is colored constant
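For concreteness, under this splitting a color c lands in segment ⌊log_{1+ε} c⌋. A tiny illustrative helper (the name and parameters are not from the talk):

```python
import math

# Sketch: segment i contains the colors in [(1+eps)^i, (1+eps)^(i+1)).

def segment_of(color, eps):
    return math.floor(math.log(color) / math.log(1 + eps))

eps = 0.5
for c in (1, 2, 3, 5, 10, 100):
    print(c, segment_of(c, eps))   # segments 0, 1, 2, 3, 5, 11 for eps = 0.5
```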

  47. How to bound the number of segments • For every vertex v, its colors that are smaller than ε·x(v) or larger than (2/ε)·x(v) are removed from OPT • Instead they are replaced by round robins that are performed (roughly) every 1/ε rounds • The key: the number of “non-standard” segments for every v is O(log_{1+ε}(2/ε²))

  48. The number of preemptions for v • In every segment, v has O(log n) preemptions • #colors per segment ≤ c·log n; choose c′·log n of them for v • The number of possibilities for v is O((2^{c·log n})^{f(ε)}), hence polynomial in n • The round robin, executed only every 1/ε rounds, adds only O(ε·OPT)

  49. Delay of large jobs • This technique is illustrated via its application to two classical problems: • 1) Data migration • 2) Open shop scheduling

  50. Data migration (Storage Area Networks) [figure: a network with a transfer length p_e on each edge e] • p_e = length of the data transfer along edge e • At most one active transfer per storage device • Minimize the (weighted) sum of completion times of the storage devices
