Minimizing Makespan and Preemption Costs on a System of Uniform Machines
Hadas Shachnai, Bell Labs and The Technion IIT
Tami Tamir, Univ. of Washington
Gerhard J. Woeginger, Univ. of Twente
Generalized Multiprocessor Scheduling
Preemptive (aj = ∞). Non-preemptive (aj = 0).
We need to schedule n jobs on m uniform machines.
• Machine Mi has rate ui ≥ 1.
• Job Jj has processing time tj.
The goal: minimum makespan.
• Jj can be preempted at most aj times, aj ≥ 0.
Generalizes the classic preemptive and non-preemptive scheduling problems.
What is Known:
• The non-preemptive scheduling problem (∀j, aj = 0) is strongly NP-hard; it admits a PTAS [Hochbaum and Shmoys, 1987, 1988; Epstein and Sgall, 1999].
• The preemptive scheduling problem (∀j, aj = ∞) can be solved optimally [Horvath, Lam and Sethi, 1977], using at most 2(m−1) preemptions [Gonzalez and Sahni, 1978] (at most m−1 preemptions on identical machines [McNaughton, 1959]).
Note: Similar solvability/approximability results hold in the identical and uniform machine environments.
Question: How many preemptions (in total, or per job) suffice to guarantee an optimal polynomial-time algorithm?
We Investigate this Hardness Gap:
• The GMS problem (Generalized Multiprocessor Scheduling): minimize the makespan, given a per-job or total bound on the number of preemptions allowed in a feasible schedule.
• The MPS problem (Minimum Preemptions Scheduling): the only feasible schedules are preemptive schedules with the smallest possible makespan. The goal is to find a feasible schedule that minimizes the overall number of preemptions.
Our Results (a distinction between the uniform and identical machine environments):
• Hardness of GMS for ∀j, aj = 1, and for any total bound < 2(m−1).
• A PTAS for GMS instances with a fixed number of machines. The scheme has linear running time, and can be applied to instances with release dates, unrelated machines, and arbitrary preemption costs.
• PTASs for an arbitrary number of machines and a bound on the total number of preemptions.
• For MPS, matching upper and lower bounds on the number of preemptions required by any optimal schedule. Here Jj can be processed simultaneously by ρj machines (i.e., ∀j, ρj ≥ 1).
The Power of Unlimited Preemption
For a given instance I with aj ≥ 0 and m uniform machines, let w denote the minimum makespan of a schedule with unlimited preemptions.
Theorem 1: It is easy to compute w (a function of the tj's and the ui's).
Consider the LPT algorithm, which assigns the jobs of I to the machines in order of non-increasing processing times. This is a feasible non-preemptive schedule of I.
Theorem 2: Any LPT schedule yields a 2-approximation for w.
Holds also for parallelizable jobs (i.e., ∀j, ρj ≥ 1).
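Theorem 1 refers to the classical closed form for the preemptive optimum on uniform machines: the k largest jobs must fit on the k fastest machines, for every k < m, and all the work must fit on all the machines. A minimal sketch, assuming that standard formula (the function name is ours):

```python
def preemptive_opt_makespan(t, u):
    """Minimum preemptive makespan w on m uniform machines.

    w = max( sum of all tj / sum of all ui,
             max over k < m of (k largest tj) / (k fastest ui) ).
    """
    t = sorted(t, reverse=True)   # processing times, largest first
    u = sorted(u, reverse=True)   # machine rates, fastest first
    m = len(u)
    bounds = [sum(t) / sum(u)]    # all work spread over all machines
    for k in range(1, m):
        # the k largest jobs can use at most the k fastest machines
        bounds.append(sum(t[:k]) / sum(u[:k]))
    return max(bounds)
```

For example, three unit-rate-incompatible jobs of length 3 on two unit machines give w = 9/2, dominated by the total-work bound.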
The Power of Unlimited Preemption (cont.)
This bound of 2 is tight already for identical jobs and identical machines: consider an instance with m machines and m+1 jobs, where ∀j, aj ≥ 0 and tj = t, and ∀i, ui = 1.
Non-preemptive makespan = 2t; preemptive makespan = t(1 + 1/m).
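The tight instance can be checked directly. Below is a sketch of LPT on uniform machines (each job goes to the machine on which it would finish earliest; names are ours, not from the slides):

```python
def lpt_makespan(t, u):
    """Non-preemptive LPT: place jobs in non-increasing length order,
    each on the machine where it would complete earliest."""
    finish = [0.0] * len(u)
    for tj in sorted(t, reverse=True):
        k = min(range(len(u)), key=lambda k: finish[k] + tj / u[k])
        finish[k] += tj / u[k]
    return max(finish)

m = 3
# m+1 identical unit jobs on m identical unit-rate machines:
# some machine receives two jobs, so the non-preemptive makespan is 2t = 2.0,
# while the preemptive optimum is t*(1 + 1/m) = 4/3; the ratio tends to 2.
np_makespan = lpt_makespan([1.0] * (m + 1), [1.0] * m)
```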
A PTAS for GMS: Overview
• 'Guess' the minimum makespan, Topt, and the maximum load on any machine, P, to within a factor of (1+ε).
• Partition the jobs into the sets 'big', 'small' and 'tiny' (as in [SW98]).
• Find a feasible preemptive schedule of the big jobs in the interval [0, Topt(1+ε)].
• The small jobs have non-negligible processing times; however, the overall processing time of the set is small relative to an optimal solution. Schedule the small jobs non-preemptively.
• Add the tiny jobs greedily, with no preemptions.
A PTAS for GMS (Cont.)
[Figure: an example on four machines of rates u1 ≥ u2 ≥ u3 ≥ u4, with big jobs (a1 = 3, a2 = 6), small jobs, and tiny jobs; after scheduling the big jobs B, the machine completion times satisfy C1 = C2 = C3 = C4 = Topt.]
Analysis of the Scheme
• The big jobs can be scheduled optimally with preemptions in [0, Topt(1+ε)] in polynomial time, by taking B = {Jj | tj > δP} (for some δ ∈ (0,1]). Use a fixed number of scheduling points, and a fixed number of segment sizes.
• We may add all the small jobs at the end of the schedule on the fastest machine. Let δ = δ(ε, m) be a parameter.
Lemma: There exists δ(ε, m) ∈ (0,1] such that, taking S = {Jj | δP/m < tj ≤ δP}, we get that Σ(Jj∈S) tj ≤ εP.
Analysis of the Scheme (Cont.)
• The tiny jobs contribute at most εP processing units to the maximally loaded machine, since the number of tiny jobs that extend the schedule is bounded by the number of 'holes' on any machine. We take T = {Jj | tj ≤ δP/m}.
Theorem: The above scheme yields a (1+ε)-approximation to the minimum makespan, in O(n) steps.
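The big/small/tiny partition can be sketched as follows, assuming the two thresholds δ·P and δ·P/m as reconstructed from the slides (the exact constant δ(ε, m) comes from the lemma; the function name is ours):

```python
def partition_jobs(t, P, delta, m):
    """Split processing times into big / small / tiny classes.

    big:   tj > delta * P
    small: delta * P / m < tj <= delta * P
    tiny:  tj <= delta * P / m
    (threshold structure is a reconstruction of the scheme's parameters)
    """
    big   = [tj for tj in t if tj > delta * P]
    small = [tj for tj in t if delta * P / m < tj <= delta * P]
    tiny  = [tj for tj in t if tj <= delta * P / m]
    return big, small, tiny
```

With P = 10, δ = 0.5, m = 4, a job of length 10 is big, 2 is small, and 0.1 is tiny; every job falls into exactly one class.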
Minimizing the Number of Preemptions
We count the number of segments, Ns(I), generated for an instance I: Ns(I) = #preemptions(I) + n.
1. Lower bound: ∀m, b, there exists an instance I in which ∀j, ρj ≤ b, and in any optimal schedule of I, Ns(I) ≥ m + n + ⌈m/b⌉ − 2.
2. Upper bound: We present an algorithm that produces, for any m, b, and any instance I in which ∀j, ρj ≤ b, an optimal schedule with Ns(I) ≤ m + n + ⌈m/b⌉ − 2.
Upper Bound Proof (Algorithm)
Assume that ∀j, ρj = 1.
Step 1: Calculate w (the optimal makespan).
Step 2: Schedule the jobs one after the other; each job is scheduled on one DPS.
A DPS (Disjoint Processor System) is a union of disjoint idle segments of r machines with non-decreasing rates, such that the union of the idle segments is the interval [0, w].
[Figure: an idle machine forms a DPS (r = 1); two machines whose idle segments together cover [0, w] form a DPS (r = 2).]
Upper Bound Proof (Algorithm): An Example
3 machines with rates 2, 3, 5; 3 jobs with lengths 4, 3, 3. In this case, w = Σj tj / Σi ui = 10/10 = 1.
Initially, each machine forms a DPS.
The longest job (t = 4) is scheduled for 1/2 time unit on each of the machines M2, M3. The remainders of M2 and M3 form a new DPS, M2,3.
Upper Bound Proof (Algorithm), cont.
The next job (t = 3) is scheduled on a DPS consisting of M2,3 and M1. We need to solve one equation to find the time point 2/3 (3 = 3·1/2 + 5·1/6 + 2·1/3).
We are left with one DPS, consisting of the remaining idle segments of M1 and M3. The last job is scheduled on this DPS (3 = 2·2/3 + 5·1/3).
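The arithmetic of the example can be verified segment by segment. Each job is listed below as its (rate, duration) pairs, following the schedule just described:

```python
# Machines have rates 2, 3, 5; jobs have lengths 4, 3, 3; w = 10/10 = 1.
job1 = [(5, 1/2), (3, 1/2)]             # t = 4: on M3, then M2
job2 = [(3, 1/2), (5, 1/6), (2, 1/3)]   # t = 3: on the DPS {M2,3 ; M1}
job3 = [(2, 2/3), (5, 1/3)]             # t = 3: on the remaining DPS

# Each job's segments deliver exactly its processing time and span [0, w]:
for segs, t in [(job1, 4), (job2, 3), (job3, 3)]:
    assert abs(sum(r * d for r, d in segs) - t) < 1e-9
    assert abs(sum(d for _, d in segs) - 1.0) < 1e-9

# Total segments: 2 + 3 + 2 = 7, matching the bound n + 2(m-1) = 3 + 4.
num_segments = len(job1) + len(job2) + len(job3)
assert num_segments == 3 + 2 * (3 - 1)
```

Note that this instance meets the upper bound with equality: 7 segments for n = 3 jobs on m = 3 machines.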
Algorithm Analysis
• Using amortized analysis, we show that the total number of segments allocated by the algorithm is at most n + 2(m−1).
• Since each job is scheduled on a single DPS, the requirement ∀j, ρj = 1 is preserved.
• For arbitrary values of ρj, the algorithm schedules each job on at most ρj consecutive DPSs, and by similar arguments we have Ns(I) ≤ n + m + ⌈m/b⌉ − 2.
• For the special case of ∀j, ρj = 1, our algorithm and its analysis are simpler than the known algorithm [GS78].
Related Work
• Preemptive scheduling (∀j, aj = ∞): Horvath, Lam and Sethi, 1977.
• Non-preemptive scheduling (∀j, aj = 0):
  • LPT is 2-optimal (Gonzalez, Ibarra and Sahni, 1977)
  • PTASs (Hochbaum and Shmoys, 1987, 1988; Epstein and Sgall, 1999)
• MPS for non-parallelizable jobs (Gonzalez and Sahni, 1978)
• A wide literature on scheduling parallelizable jobs
The Power of Unlimited Preemption (cont.)
• The tightness of the factor 2 (non-preemptive Cmax = 2t vs. preemptive Cmax = t(1 + 1/m)) extends the result known for non-preemptive scheduling [Gonzalez, Ibarra and Sahni, 1977].