Algorithms, Lecture 10. Lecturer: Moni Naor
Linear Programming in Small Dimension
Canonical form of linear programming:
Maximize: c1·x1 + c2·x2 + … + cd·xd
Subject to:
a1,1·x1 + a1,2·x2 + … + a1,d·xd ≤ b1
a2,1·x1 + a2,2·x2 + … + a2,d·xd ≤ b2
…
an,1·x1 + an,2·x2 + … + an,d·xd ≤ bn
where n is the number of constraints and d is the number of variables (the dimension)
Linear Programming in Two Dimensions
[Figure: the feasible region, a convex polygon, with the optimal vertex in the direction of the objective.]
What is special in low dimension
• Only d constraints determine the solution
• The optimum over those d constraints determines the global one
• The problem reduces to finding the constraints that matter
• We know that equality holds in those constraints
• Generic algorithms:
• Fourier–Motzkin elimination: (n/2)^(2^d)
• Worst case of Simplex: number of basic feasible solutions (vertices), O(n^d)
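Since only d constraints determine the optimum, a (very slow) baseline is to try every d-subset of constraints, make them tight, and keep the best feasible vertex. A minimal sketch for d = 2, assuming a bounded, feasible LP (the constraint encoding and function name are illustrative, not from the lecture):

```python
import itertools

EPS = 1e-9

def brute_force_lp(c, cons):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= t for (a, b, t) in cons,
    by trying every pair of constraints as the two defining (tight) ones.
    Assumes the LP is bounded and feasible; O(n^3) time."""
    best, best_val = None, float("-inf")
    for (a1, b1, t1), (a2, b2, t2) in itertools.combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < EPS:
            continue  # parallel constraints: no vertex
        x = (t1 * b2 - t2 * b1) / det  # solve the 2x2 system by Cramer's rule
        y = (a1 * t2 - a2 * t1) / det
        if all(a * x + b * y <= t + 1e-7 for a, b, t in cons):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best_val, best = val, (x, y)
    return best

# maximize x + y subject to x <= 2, y <= 3, x + y <= 4 -> optimum value 4
print(brute_force_lp((1, 1), [(1, 0, 2), (0, 1, 3), (1, 1, 4)]))
```

Enumerating all C(n, d) subsets is exponential in d; the point of the lecture is to avoid exactly this.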
Key Observation
• If we know that an inequality constraint is defining (holds with equality at the optimum), we can reduce the number of variables by
• Projection, or
• Substitution
[Figure: feasible region and optimal vertex, with the defining constraint 4x1 − 6x2 = 4 tight at the optimum.]
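As a sketch of the substitution route (the coefficient encoding and function name are illustrative): if 4x1 − 6x2 = 4 holds with equality, we can solve for x1 = 1 + 1.5·x2 and eliminate x1 from every other constraint, dropping the dimension by one:

```python
def eliminate_x1(eq, cons):
    """Given a tight constraint a1*x1 + a2*x2 = b (eq = (a1, a2, b)),
    substitute x1 = (b - a2*x2) / a1 into each inequality
    c1*x1 + c2*x2 <= t, yielding a one-variable constraint on x2."""
    a1, a2, b = eq
    reduced = []
    for c1, c2, t in cons:
        # c1*(b - a2*x2)/a1 + c2*x2 <= t  rearranges to  coeff*x2 <= rhs
        coeff = c2 - c1 * a2 / a1
        rhs = t - c1 * b / a1
        reduced.append((coeff, rhs))
    return reduced

# tight constraint 4*x1 - 6*x2 = 4; the constraint x1 <= 3 becomes 1.5*x2 <= 2
print(eliminate_x1((4, -6, 4), [(1, 0, 3)]))  # -> [(1.5, 2.0)]
```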
Incremental Algorithm
Input: a set H of n constraints on d variables
Output: the set B(H) of defining constraints
0. If |H| = d, output B(H) = H
1. Pick a random constraint h ∈ H and recursively find B(H \ {h})
2. If B(H \ {h}) does not violate h, output B(H \ {h});
otherwise project all the constraints onto h and recursively solve this (n−1, d−1) LP
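The incremental idea can be turned into running code. Below is a sketch of the d = 2 case in Python, written iteratively and with an artificial bounding box to keep intermediate problems bounded (both are standard implementation choices, not part of the slide): when the current optimum violates the newly added constraint h, the problem is re-solved on the boundary line of h, which is exactly the (n−1, d−1), i.e. one-dimensional, LP.

```python
import random

BOUND = 1e7   # artificial bounding box (assumes the true optimum lies inside)
EPS = 1e-9

def seidel_2d(c, halfplanes, rng=random.Random(0)):
    """Maximize c[0]*x + c[1]*y s.t. a*x + b*y <= t for (a, b, t) in halfplanes.
    Randomized incremental method; expected O(n) constraint insertions.
    Returns (x, y), or None if infeasible."""
    hp = list(halfplanes)
    rng.shuffle(hp)
    # optimum of the bounding box alone: the corner in direction c
    x = BOUND if c[0] > 0 else -BOUND
    y = BOUND if c[1] > 0 else -BOUND
    for i, (a, b, t) in enumerate(hp):
        if a * x + b * y <= t + EPS:
            continue                    # old optimum still feasible: keep it
        # Otherwise the new optimum is tight on h: a*x + b*y = t.
        # Parametrize that line as p + s*(dx, dy) and solve a 1-D LP in s.
        dx, dy = -b, a
        p = (t / a, 0.0) if abs(a) > abs(b) else (0.0, t / b)
        lo, hi = -1e18, 1e18
        box = [(1, 0, BOUND), (-1, 0, BOUND), (0, 1, BOUND), (0, -1, BOUND)]
        for aa, bb, tt in hp[:i] + box:
            # aa*(px + s*dx) + bb*(py + s*dy) <= tt  rearranges to  k*s <= m
            k = aa * dx + bb * dy
            m = tt - aa * p[0] - bb * p[1]
            if abs(k) < EPS:
                if m < -EPS:
                    return None         # constraint parallel to the line, violated
            elif k > 0:
                hi = min(hi, m / k)
            else:
                lo = max(lo, m / k)
        if lo > hi + EPS:
            return None                 # empty interval: LP is infeasible
        grow = c[0] * dx + c[1] * dy    # objective change per unit of s
        s = hi if grow > 0 else lo
        x, y = p[0] + s * dx, p[1] + s * dy
    return (x, y)

# maximize x + y s.t. x <= 2, y <= 3, x + y <= 4  ->  optimum value 4
print(seidel_2d((1, 1), [(1, 0, 2), (0, 1, 3), (1, 1, 4)]))
```

The random shuffle up front plays the role of step 1's random choice: processing the constraints in random order is equivalent to repeatedly removing a random constraint and recursing.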
Correctness, Termination and Analysis
• Correctness: by induction…
• Termination: if a non-defining constraint is chosen, there is no need to rerun
• Analysis: the probability that h is one of the defining constraints is d/n
Analysis
The probability that h is one of the defining constraints is d/n, so the expected cost satisfies
T(d,n) = (d/n)·T(d−1,n−1) + T(d,n−1)
By induction, with hypothesis T(d,n) ≤ d!·n:
T(d,n) ≤ (d/n)·(d−1)!·(n−1) + d!·(n−1) = d!·(n−1)·(1/n + 1) ≤ d!·n
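The induction can also be checked numerically by unrolling the recurrence on small cases (unit base cost T(d, d) = 1 is an assumption of this sketch, not stated on the slide):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(d, n):
    # expected-cost recurrence from the analysis above, with unit base cost
    if d == 0 or n <= d:
        return 1.0
    return T(d, n - 1) + (d / n) * T(d - 1, n - 1)

# verify T(d, n) <= d! * n on a grid of small cases
for d in range(1, 7):
    for n in range(d, 80):
        assert T(d, n) <= math.factorial(d) * n + 1e-9
print("bound T(d, n) <= d! * n holds on all tested cases")
```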
How to improve
The algorithm is wasteful: when the current solution does not satisfy the newly chosen constraint, a new solution is computed from scratch.
Random Sampling Idea
Build the basis by adding constraints in a manner that depends on the history.
Input: a set H of n constraints on d variables
Output: the set of defining constraints
0. If |H| ≤ c·d², return simplex on H
S ← ∅
Repeat:
• Pick a random R ⊂ H of size r
• Solve recursively on S ∪ R; let the solution be u
• V ← the set of constraints in H violated by u
• If |V| ≤ t, then S ← S ∪ V
Until V = ∅
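A runnable sketch of this sampling loop for d = 2, with two simplifications that are mine, not the slide's: the recursive call (and the simplex base case) is replaced by a brute-force vertex-enumeration solver, and the constants in r, t and the base-case threshold are arbitrary illustrative choices.

```python
import itertools
import random

EPS = 1e-7

def base_lp(c, cons):
    """Brute-force stand-in for simplex: try every pair of constraints as the
    tight ones; assumes a bounded, feasible 2-D LP. O(n^3) time."""
    best, best_val = None, float("-inf")
    for (a1, b1, t1), (a2, b2, t2) in itertools.combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue
        x, y = (t1 * b2 - t2 * b1) / det, (a1 * t2 - a2 * t1) / det
        if all(a * x + b * y <= t + EPS for a, b, t in cons):
            v = c[0] * x + c[1] * y
            if v > best_val:
                best_val, best = v, (x, y)
    return best

def clarkson(c, H, rng=random.Random(42)):
    """Random-sampling LP for d = 2: sample R, solve on S u R, and fold at
    most t violated constraints back into S; repeat until nothing is violated."""
    n, d = len(H), 2
    r = d * int(n ** 0.5) + 1
    t = 2 * n * d // r + 1            # threshold ~ 2nd/r used by the analysis
    if n <= 6 * d * d:                # small problem: solve directly
        return base_lp(c, H)
    S = []
    while True:
        R = rng.sample(H, r)
        u = base_lp(c, S + R)         # stand-in for the recursive call
        if u is None:                 # degenerate sample: just resample
            continue
        V = [(a, b, tt) for a, b, tt in H if a * u[0] + b * u[1] > tt + EPS]
        if not V:
            return u                  # u satisfies all of H: optimal
        if len(V) <= t:               # successful augmentation
            S += V

# 100 halfplanes tangent to the unit circle; maximize x -> optimum near (1, 0)
import math
H = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100), 1.0)
     for k in range(100)]
print(clarkson((1.0, 0.0), H))
```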
Correctness, Termination and Analysis
Claim: each time we augment S (S ← S ∪ V), we add to S a new constraint from the "real" basis B of H:
• If u violated no constraint in B, it would be optimal
• So V must contain an element of B that was not in S before
• Since |B| = d, we can augment S at most d times
• Therefore the number of constraints in the recursive call is |R| + |S| ≤ r + d·t
• Important for the analysis: what is the probability of a successful augmentation?
Sampling Lemma
For any H and S ⊂ H: the expected (over R) number of constraints of H that violate u, the optimum on S ∪ R, is at most nd/(r+1).
Proof: Let X(R,h) = 1 iff h violates u(S ∪ R). We need to bound
E_R[Σ_h X(R,h)] = (1/#R) Σ_{|R|=r} Σ_h X(R,h)
Instead, consider all subsets Q = R ∪ {h} of size r+1:
= (1/#R) Σ_{|Q|=r+1} Σ_{h∈Q} X(Q\{h}, h)
= (#Q/#R)·(r+1)·Prob_{Q, h∈Q}[X(Q\{h}, h) = 1]
≤ n·d/(r+1)
since for a fixed Q at most d of its constraints (the defining ones) can violate the optimum on Q \ {h}, and #Q/#R = (n−r)/(r+1) ≤ n/(r+1).
Analysis
• Setting t = 2nd/r implies, by Markov's inequality, that the expected number of recursive calls until a successful (|V| ≤ t) augmentation is constant
• The number of constraints in a recursive call is bounded by r + O(d²n/r)
• Setting r = d·n^(1/2) makes this O(r)
• Total expected running time: T(n) ≤ 2d·T(d·n^(1/2)) + O(d²n)
• Result: O((log n)^(log d) · (Simplex time)) + O(d²n)
• Can be improved to O(d^d + d²n)
• Can be improved to O(d^(√d) + d²n) using [Kalai, Matoušek–Sharir–Welzl]
References
• Motwani and Raghavan, Randomized Algorithms, Chapter 9.10
• Michael Goldwasser, A Survey of Linear Programming in Randomized Subexponential Time, http://euler.slu.edu/~goldwasser/publications/SIGACT1995_Abstract.html
• Piotr Indyk's course at MIT, Geometric Computation, http://theory.lcs.mit.edu/~indyk/6.838/
• Applet: http://web.mit.edu/ardonite/6.838/linear-programming.htm