How to design a PTAS
adapted from the novel by P. Schuurman and G. Woeginger
directed by Alexander Kononov

Alexander Kononov
Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Science, Novosibirsk, Russia

The Garry Potter problem
Could you find a schedule for my new project with the minimal cost?
We can do that! Real sorcerers can do everything! And we guess the cost of the project will be 1000000 $.
Sounds great! Wonderful! Go ahead and determine this schedule! Tomorrow we start my new project!
We cannot do that by tomorrow…
Real sorcerers can do everything!
But finding the schedule is going to take us 23.5 years!
What if I call you up exactly X days from now? Then the cost will be 1000000·(1+1/X) $:
Tomorrow: 2000000 $
The day after tomorrow: 1500000 $
Three days from now: 1333333 $
…
After 23.5 years: 1000000 $
But … I want … 1000000 $!
NP-hard problems
• Almost all interesting combinatorial problems are NP-hard.
• Nobody knows a polynomial time exact algorithm for any NP-hard problem.
• If there exists a polynomial time exact algorithm for some NP-hard problem, then there exist polynomial time exact algorithms for many NP-hard problems.
• Most researchers believe that a polynomial time exact algorithm for NP-hard problems does not exist.
• We have to solve NP-hard problems approximately.
Approximation algorithm
An algorithm A is called a ρ-approximation algorithm for problem Π if, for all instances I of Π, it delivers a feasible solution with objective value A(I) such that A(I) ≤ ρ·OPT(I).
Polynomial time approximation scheme (PTAS)
• An approximation scheme for problem Π is a family of (1+ε)-approximation algorithms Aε for problem Π over all 0 < ε < 1.
• A polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size.
Fully polynomial time approximation scheme (FPTAS)
• A fully polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size and also polynomial in 1/ε.
Remarks
• Running time:
• PTAS: |I|^(2/ε), |I|^(2/ε^10), (|I|^(2/ε))^(1/ε).
• FPTAS: |I|^2/ε, |I|/ε^2, |I|^7/ε^3.
• With respect to worst-case approximation, an FPTAS is the strongest possible result that we can derive for an NP-hard problem.
P2||Cmax
• J = {1,…, n} – jobs.
• {M1, M2} – identical machines.
• pj > 0 – the processing time of job j (j = 1,…, n).
• Each job has to be executed by one of the two machines.
• All jobs are available at time 0 and preemption is not allowed.
• Each machine executes at most one job at a time.
• The goal is to minimize the maximum job completion time (the makespan).
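As a concrete reading of this slide, a schedule for P2||Cmax can be encoded as a 0/1 machine assignment, and the objective is the larger of the two machine loads. A minimal Python sketch (function and variable names are my own, not from the slides):

```python
def makespan(p, assignment):
    """Makespan of the schedule that puts job j on machine assignment[j]."""
    loads = [0, 0]
    for pj, m in zip(p, assignment):
        loads[m] += pj          # machine m works pj time units on this job
    return max(loads)           # the last machine to finish defines Cmax

p = [3, 4, 16, 29]                 # processing times
print(makespan(p, [0, 0, 0, 1]))   # loads 23 and 29 -> 29
print(makespan(p, [1, 1, 0, 1]))   # loads 16 and 36 -> 36
```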
How to get a PTAS
• Simplification of the instance I.
• Partition of the output space.
• Adding structure to the execution of an algorithm A.
Instance I → Algorithm A → Output A(I)
Simplification of instance I
The first idea is to turn a difficult instance into a more primitive instance that is easier to tackle. Then we use the optimal solution of the primitive instance to get a near-optimal solution of the original instance.
I → (simplify) → I# → (solve) → OPT(I#) → (translate back) → App(I) ≈ OPT(I)
Approaches to simplification
• Rounding
• Merging
• Cutting
• Aligning
Rounding
[figure: job lengths on a time axis 0–32 before and after rounding]
Merging
[figure: several small jobs on the time axis merged into one longer job]
Cutting
[figure: a long job on the time axis cut into shorter pieces]
Aligning
[figure: job lengths on the time axis aligned to common values]
P2||Cmax
• J = {1,…, n} – jobs.
• {M1, M2} – identical machines.
• pj > 0 – the processing time of job j (j = 1,…, n).
• Each job has to be executed by one of the two machines.
• All jobs are available at time 0 and preemption is not allowed.
• Each machine executes at most one job at a time.
• The goal is to minimize the maximum job completion time (the makespan).
How to simplify an instance (I → I#)
• Big = { j ∈ J | pj ≥ εL }, where L is a lower bound on the optimal makespan, e.g. L = max{psum/2, maxj pj}, so that psum ≤ 2L.
• The new instance I# contains all the big jobs from I.
• Small = { j ∈ J | pj < εL }
• Let X = Σ j∈Small pj.
• The new instance I# contains ⌈X/εL⌉ jobs of length εL.
• The small jobs of I are first glued together to give one long job of length X, and then this long job is cut into chunks of length εL.
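The construction above can be sketched as follows. This is a sketch only; the choice L = max(psum/2, max pj) is an assumption (the slides only use that L ≤ OPT and psum ≤ 2L):

```python
import math

def simplify(p, eps):
    """Build the simplified instance I#: keep big jobs, glue the small
    jobs together and cut the result into chunks of length eps*L."""
    L = max(sum(p) / 2, max(p))                 # lower bound on OPT (assumption)
    big = [pj for pj in p if pj >= eps * L]     # big jobs are kept as they are
    X = sum(pj for pj in p if pj < eps * L)     # total size of the small jobs
    chunks = [eps * L] * math.ceil(X / (eps * L))  # cut into chunks of eps*L
    return big + chunks, L

p = [3, 4, 16, 29, 32, 12]
jobs, L = simplify(p, 0.5)
print(L)      # max(96/2, 32) = 48.0
print(jobs)   # big jobs (>= 24): [29, 32]; small total 35 -> two chunks of 24
```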
I and I#
The optimal makespan of I# is fairly close to the optimal makespan of I:
OPT(I#) ≤ (1+ε)·OPT(I).
Proof
• Xi – the total size of all small jobs on machine Mi in an optimal schedule for I.
• On Mi, leave every big job where it is in the optimal schedule.
• Replace the small jobs on Mi by ⌈Xi/εL⌉ chunks of length εL.
• ⌈X1/εL⌉ + ⌈X2/εL⌉ ≥ X1/εL + X2/εL = X/εL, so all chunks of I# are placed.
• ⌈Xi/εL⌉·εL – Xi ≤ (Xi/εL + 1)·εL – Xi = εL, so each machine grows by at most εL.
• OPT(I#) ≤ OPT(I) + εL ≤ (1+ε)·OPT(I), since L ≤ OPT(I).
How to solve the simplified instance
• How many jobs are there in instance I#?
• pj ≥ εL for all jobs in I#.
• The total length of all jobs in I#: psum ≤ 2L.
• The number of jobs in I# is ≤ 2L/εL = 2/ε.
• The number of jobs in I# is independent of n.
• We may simply try all possible schedules.
• The number of all possible schedules is ≤ 2^(2/ε)!
• The running time is O(2^(2/ε)·n)!
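The brute-force step can be sketched like this (a sketch; it simply enumerates all 2^n machine assignments, which is affordable because I# has at most 2/ε jobs):

```python
from itertools import product

def brute_force_makespan(p):
    """Try every assignment of the jobs to two machines and return the
    optimal makespan. Intended for the small instance I# only."""
    best = float('inf')
    for assignment in product((0, 1), repeat=len(p)):
        loads = [0, 0]
        for pj, m in zip(p, assignment):
            loads[m] += pj
        best = min(best, max(loads))    # keep the smallest makespan seen
    return best

# The simplified instance from the previous sketch: two big jobs, two chunks.
print(brute_force_makespan([29, 32, 24, 24]))  # {32,24} vs {29,24} -> 56
```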
How to translate the solution back
• Let σ# be an optimal schedule for instance I#.
• Let Li# be the load of machine Mi in σ#.
• Let Bi# be the total length of the big jobs on Mi in σ#.
• Let Xi# be the total size of the small jobs (chunks) on Mi in σ#.
• Li# = Bi# + Xi#.
σ#(I#) → σ(I)
• Every big job is put onto the same machine as in schedule σ#.
• Reserve an interval of length X1# + 2εL on machine M1 and an interval of length X2# on machine M2.
• Pack small jobs into the reserved interval on machine M1 until we meet some small job that does not fit anymore.
• Pack the remaining unpacked small jobs into the reserved interval on machine M2.
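The packing rule of this slide can be sketched as follows (a sketch; `budget` stands for the reserved length X1# + 2εL on M1, a name of my own):

```python
def place_small_jobs(small, budget):
    """Pack small jobs on M1 in the given order until the first job that
    no longer fits; that job and all remaining ones go to M2."""
    used = 0
    for i, pj in enumerate(small):
        if used + pj > budget:           # first job that does not fit on M1
            return small[:i], small[i:]  # (jobs on M1, jobs on M2)
        used += pj
    return small, []                     # everything fit on M1

print(place_small_jobs([3, 4, 16, 12], budget=20))  # ([3, 4], [16, 12])
```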
Structuring the output
The main idea is to cut the output space (i.e. the set of feasible solutions) into lots of smaller regions over which the optimization problem is easy to approximate. Solving the problem separately for each smaller region and taking the best approximate solution over all regions will then yield a globally good approximate solution.
• Partition.
• Find representatives.
• Take the best.
Partition
[figure: the output space cut into districts]
* – the global optimal solution
Find representatives
[figure: in each district an optimal solution and a representative are marked]
* – the global optimal solution
* – an optimal solution in its district
* – a representative in its district
Take the best
[figure: the best representative over all districts is selected]
* – the global optimal solution
* – an optimal solution in its district
* – a representative in its district
P2||Cmax
• J = {1,…, n} – jobs.
• {M1, M2} – identical machines.
• pj > 0 – the processing time of job j (j = 1,…, n).
• Each job has to be executed by one of the two machines.
• All jobs are available at time 0 and preemption is not allowed.
• Each machine executes at most one job at a time.
• The goal is to minimize the maximum job completion time (the makespan).
How to define the districts
• Big = { j ∈ J | pj ≥ εL }
• Small = { j ∈ J | pj < εL }
• Let Φ be the set of feasible solutions for I.
• Every feasible solution σ ∈ Φ specifies an assignment of the n jobs to the two machines.
• Define the districts Φ(1), Φ(2), … according to the assignment of the big jobs to the two machines: two feasible solutions σ1 and σ2 lie in the same district if and only if σ1 assigns every big job to the same machine as σ2 does.
Number of districts
• The number of big jobs is ≤ 2L/εL = 2/ε.
• The number of different ways of assigning these jobs to two machines is ≤ 2^(2/ε).
• The number of districts is ≤ 2^(2/ε)!
• The number of districts depends only on ε and is independent of the input size!
How to find good representatives
• The assignments of the big jobs to their machines are fixed in Φ(l).
• Let OPT(l) be the makespan of the best schedule in Φ(l).
• Let Bi(l) be the total length of the big jobs assigned to machine Mi in Φ(l).
• T := max{B1(l), B2(l)} ≤ OPT(l).
• The initial workload of machine Mi is Bi(l).
• We assign the small jobs one by one to the machines; every time a job is assigned to the machine with the currently smaller workload.
• The resulting schedule σ(l) with makespan A(l) is our representative for the district Φ(l).
How close is A(l) to OPT(l)?
• If A(l) = T, then A(l) = OPT(l).
• Let A(l) > T.
• Consider the machine with the higher workload in the schedule σ(l).
• Then the last job assigned to this machine is a small job, and it has length at most εL.
• At the moment when this small job was assigned to the machine, the workload of this machine was at most psum/2.
• A(l) ≤ (psum/2) + εL ≤ (1+ε)·OPT ≤ (1+ε)·OPT(l).
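The three steps above (partition into districts, greedy representatives, take the best) fit into one short end-to-end sketch of the PTAS. The choice L = max(psum/2, max pj) as a lower bound on OPT is my assumption; the slides only use L ≤ OPT and psum ≤ 2L:

```python
from itertools import product

def ptas_p2_cmax(p, eps):
    """PTAS sketch for P2||Cmax via output structuring: each assignment of
    the big jobs is one district; it is completed greedily with the small
    jobs, and the best representative over all districts is returned."""
    L = max(sum(p) / 2, max(p))                # lower bound on OPT (assumption)
    big = [pj for pj in p if pj >= eps * L]
    small = [pj for pj in p if pj < eps * L]
    best = float('inf')
    for assignment in product((0, 1), repeat=len(big)):  # one district each
        loads = [0.0, 0.0]
        for pj, m in zip(big, assignment):
            loads[m] += pj                     # big jobs fixed by the district
        for pj in small:                       # greedy: currently smaller load
            loads[loads.index(min(loads))] += pj
        best = min(best, max(loads))           # take the best representative
    return best

p = [3, 4, 16, 29, 32, 12]
print(ptas_p2_cmax(p, 0.5))   # 48.0, the optimum here; in general <= (1+eps)*OPT
```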
Structuring the execution of an algorithm • The main idea is to take an exact but slow algorithm A, and to interact with it while it is working. • If the algorithm accumulates a lot of auxiliary data during its execution, then we may remove part of this data and clean up the algorithm’s memory. • As a result the algorithm becomes faster.
P2||Cmax
• J = {1,…, n} – jobs.
• {M1, M2} – identical machines.
• pj > 0 – the processing time of job j (j = 1,…, n).
• Each job has to be executed by one of the two machines.
• All jobs are available at time 0 and preemption is not allowed.
• Each machine executes at most one job at a time.
• The goal is to minimize the maximum job completion time (the makespan).
Code of a feasible solution
• Let σk be a feasible schedule of the first k jobs {1,…, k}.
• We encode a feasible schedule σk with machine loads L1 and L2 by the two-dimensional vector [L1, L2].
• Let Vk be the vector set corresponding to all feasible schedules of the first k jobs {1,…, k}.
Dynamic programming
Input: (J = {1,…, n}, p: J → Z+)
• Set V0 = {[0,0]}, i = 0.
• While i < n do: for every vector [x,y] ∈ Vi put [x + p_{i+1}, y] and [x, y + p_{i+1}] in Vi+1; i := i+1.
• Find the vector [x*,y*] ∈ Vn that minimizes max{x, y} over all [x,y] ∈ Vn.
Output: ([x*,y*])
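The dynamic program of this slide takes only a few lines of Python (a sketch; each vector set Vk is kept as a set of load pairs):

```python
def dp_p2_cmax(p):
    """Exact DP: V is the set of reachable machine-load vectors [L1, L2]
    after scheduling the jobs processed so far."""
    V = {(0, 0)}
    for pj in p:
        # Job pj goes either on machine 1 or on machine 2.
        V = {(x + pj, y) for (x, y) in V} | {(x, y + pj) for (x, y) in V}
    return min(max(x, y) for (x, y) in V)   # best makespan over V_n

print(dp_p2_cmax([3, 4, 16, 29, 32, 12]))  # 48
```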
Running time
• The coordinates of all vectors are integers in the range from 0 to psum.
• The cardinality of every vector set Vi is bounded from above by (psum)^2.
• The total number of vectors determined by the algorithm is at most n·(psum)^2.
• The running time of the algorithm is O(n·(psum)^2).
• The size |I| of the input I satisfies |I| ≥ log2(psum) = const·ln(psum).
• Hence the running time of the algorithm is not polynomial in the size of the input!
How to simplify the vector sets
[figure: the square [0, psum] × [0, psum] is cut into boxes by the geometric grid 1, Δ, Δ^2, …, Δ^K on both axes]
Δ = 1 + ε/(2n)
K = ⌈log_Δ(psum)⌉ = ⌈ln(psum)/ln Δ⌉ ≤ ⌈(1 + 2n/ε)·ln(psum)⌉
Trimmed vector set
[figure: the same Δ-grid; from each box at most one vector is kept in the trimmed set Vi#]
Δ = 1 + ε/(2n), K ≤ (1 + 2n/ε)·ln(psum)
Algorithm FPTAS
Input: (J = {1,…, n}, p: J → Z+)
• Set V0# = {[0,0]}, i = 0.
• While i < n do:
• for every vector [x,y] ∈ Vi# put [x + p_{i+1}, y] and [x, y + p_{i+1}] in Vi+1;
• transform Vi+1 into the trimmed set Vi+1#;
• i := i+1.
• Find the vector [x*,y*] ∈ Vn# that minimizes max{x, y} over all [x,y] ∈ Vn#.
Output: ([x*,y*])
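A hedged sketch of the full FPTAS, combining the DP with the trimming step. The box index is computed with logarithms base Δ, and which vector survives within a box is arbitrary; both are implementation choices of mine, not prescribed by the slides:

```python
import math

def fptas_p2_cmax(p, eps):
    """FPTAS sketch: the load-vector DP, but after each job the vector set
    is trimmed so that at most one vector survives per (Delta x Delta) box,
    with Delta = 1 + eps/(2n)."""
    n = len(p)
    delta = 1 + eps / (2 * n)

    def box(v):
        """Index k of the geometric box [delta^k, delta^(k+1)) containing v."""
        return 0 if v == 0 else int(math.log(v, delta))

    V = {(0, 0)}
    for pj in p:
        V = {(x + pj, y) for (x, y) in V} | {(x, y + pj) for (x, y) in V}
        trimmed = {}
        for (x, y) in V:
            trimmed.setdefault((box(x), box(y)), (x, y))  # keep one per box
        V = set(trimmed.values())
    return min(max(x, y) for (x, y) in V)

p = [3, 4, 16, 29, 32, 12]
print(fptas_p2_cmax(p, 0.5))  # at most (1 + 0.5) * OPT = 72 here
```

All surviving vectors are genuinely reachable load pairs, so the returned value is a real makespan ≥ OPT; the trimming argument bounds it by (1+ε)·OPT.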
Running time of FPTAS
• The trimmed vector set Vi# contains at most one vector in each box.
• There are K^2 boxes.
• The running time of FPTAS is O(n·K^2).
• n·K^2 ≤ n·(1 + 2n/ε)^2·ln^2(psum).
• Algorithm FPTAS has a time complexity that is polynomial in the input size and in 1/ε.