Min Cost Flow: Polynomial Algorithms
Overview
• Recap:
  • Min Cost Flow, Residual Network
  • Potential and Reduced Cost
• Polynomial Algorithms
  • Approach
  • Capacity Scaling
    • Successive Shortest Path Algorithm Recap
    • Incorporating Scaling
  • Cost Scaling
    • Preflow/Push Algorithm Recap
    • Incorporating Scaling
  • Double Scaling Algorithm - Idea
Min Cost Flow - Recap • G=(V,E) is a directed graph • Capacity u(v,w) for every arc (v,w) ∈ E • Balances: for every node v ∈ V we have a number b(v) • Cost c(v,w) for every arc (v,w) ∈ E • We can assume costs are non-negative. (Figure: example network on nodes v1-v5, arcs labeled with "capacity, cost", nodes labeled with their balances.)
Min Cost Flow - Recap • Goal: compute a feasible flow of minimum cost.
Residual Network - Recap • We replace each arc (i,j) by two arcs: (i,j) and (j,i). • The arc (i,j) has cost c(i,j) and residual capacity r(i,j) = u(i,j) − x(i,j). • The arc (j,i) has cost −c(i,j) and residual capacity r(j,i) = x(i,j). • The residual network consists only of arcs with positive residual capacity. • For a feasible flow x, G(x) denotes the residual network that corresponds to x.
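As an illustration only (not part of the slides), a minimal Python sketch of this construction; the dictionary-based representation and the names cap, cost, and flow are assumptions made for this example.

def residual_network(cap, cost, flow):
    # cap[(i, j)] = u(i, j), cost[(i, j)] = c(i, j), flow[(i, j)] = x(i, j)
    # Returns residual capacities and costs of G(x); only arcs with
    # positive residual capacity are kept.
    r_cap, r_cost = {}, {}
    for (i, j), u in cap.items():
        x = flow.get((i, j), 0)
        if u - x > 0:                # forward arc keeps cost c(i, j)
            r_cap[(i, j)] = u - x
            r_cost[(i, j)] = cost[(i, j)]
        if x > 0:                    # reverse arc has cost -c(i, j)
            r_cap[(j, i)] = x
            r_cost[(j, i)] = -cost[(i, j)]
    return r_cap, r_cost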
Reduced Cost - Recap • Let π be a potential function on the nodes of V. • The reduced cost of an edge (u,v) with respect to π is c^π(u,v) = c(u,v) − π(u) + π(v).
Reduced Cost - Recap • Theorem (Reduced Cost Optimality): A feasible solution x* is an optimal solution of the minimum cost flow problem ⟺ there exists a node potential function π that satisfies the reduced cost optimality conditions: c^π(i,j) ≥ 0 for every arc (i,j) in G(x*). • Idea: no negative cycles in G(x*).
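The "no negative cycles" idea is simply that the potentials telescope around any directed cycle W in G(x*); written out (a worked restatement, not an extra claim):

\[
\sum_{(i,j) \in W} c^{\pi}(i,j)
  = \sum_{(i,j) \in W} \bigl( c(i,j) - \pi(i) + \pi(j) \bigr)
  = \sum_{(i,j) \in W} c(i,j).
\]

So if c^π(i,j) ≥ 0 on every residual arc, every directed cycle in G(x*) has non-negative cost, which is exactly the negative cycle optimality condition.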
Approach • We have seen several algorithms for the MCF problem, but none of them polynomial in log U and log C. • Idea: scaling! • Scale capacity/flow values • Scale costs • Scale both • Next week: algorithms with running time independent of log U and log C • Strongly polynomial • These also solve problems with irrational data.
Successive Shortest Path - Recap • Pseudoflow: a flow that respects capacities but does not necessarily satisfy the balance constraints. • Define the imbalance function e for node i: e(i) = b(i) + Σ_{(j,i)∈E} x(j,i) − Σ_{(i,j)∈E} x(i,j). • Define E and D as the sets of excess (e(i) > 0) and deficit (e(i) < 0) nodes. • Observation: Σ_{i∈E} e(i) = −Σ_{i∈D} e(i).
Successive Shortest Path - Recap • Lemma: Suppose pseudoflow x satisfies the reduced cost optimality conditions with respect to potential π. Let d be the vector of shortest path distances from some node s in G(x), using c^π(i,j) as the length of (i,j). Then: • The potential π' = π − d also satisfies the reduced cost optimality conditions. • c^{π'}(i,j) = 0 for edges (i,j) on shortest paths from s. • Idea: triangle inequality d(j) ≤ d(i) + c^π(i,j).
Successive Shortest Path - Recap • Algorithm (see the sketch below): • Maintain a pseudoflow x with potential π satisfying reduced cost optimality. • Iteratively, choose a node s with positive excess e(s) > 0, until there is none. • Compute shortest path distances d from s in G(x), with c^π(i,j) as the length of (i,j). • Update π ← π − d; π still satisfies the optimality conditions. • Push as much flow as possible along a shortest path from s to some node t with e(t) < 0 (this retains the optimality conditions w.r.t. π).
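A minimal Python sketch of this loop, for illustration only: it assumes the hypothetical helpers dijkstra_reduced (distances and a predecessor tree under reduced costs), build_path, residual_path_capacity, and augment, none of which are defined in the slides.

def successive_shortest_paths(nodes, arcs, cap, cost, b):
    # x: pseudoflow, pi: node potentials, e: node imbalances
    x = {a: 0 for a in arcs}
    pi = {v: 0 for v in nodes}
    e = dict(b)
    while True:
        excess = [v for v in nodes if e[v] > 0]
        if not excess:
            break                                  # no excess left: x is feasible (and optimal)
        s = excess[0]
        # shortest path distances in G(x) using reduced costs c(i,j) - pi[i] + pi[j] >= 0
        d, pred = dijkstra_reduced(s, nodes, arcs, cap, cost, x, pi)
        pi = {v: pi[v] - d[v] for v in nodes}      # update potentials: pi' = pi - d
        t = min((v for v in nodes if e[v] < 0), key=lambda v: d[v])
        path = build_path(pred, s, t)              # shortest s-t path in G(x)
        delta = min(e[s], -e[t], residual_path_capacity(path, cap, x))
        augment(path, delta, x)                    # push delta units along the path
        e[s] -= delta
        e[t] += delta
    return x, pi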
Successive Shortest Path - Recap • Algorithm complexity: • Assuming integrality, there are at most nU iterations (each augmentation removes at least one unit of excess, and the total excess is at most nU). • Each iteration computes shortest paths; since reduced costs are non-negative, Dijkstra applies, giving O(m + n log n) per iteration.
Capacity Scaling - Scheme • The Successive Shortest Path Algorithm may push very little flow in each iteration. • Fix idea: use scaling. • Modify the algorithm to push Δ units of flow at a time. • Ignore edges with residual capacity < Δ. • Repeat until there is no node with excess ≥ Δ or no node with deficit ≥ Δ. • Then decrease Δ by a factor of 2, and repeat, until Δ < 1.
Definitions • Define S(Δ) - the nodes with excess at least Δ, and T(Δ) - the nodes with deficit at least Δ. • The Δ-residual network G(x,Δ) is the subgraph of G(x) containing only the edges with residual capacity at least Δ. (Figures: an example residual network G(x) with residual capacities 1-4, and the corresponding G(x,3), in which edges of residual capacity below 3 are removed.)
Main Observation in Algorithm • Observation: an augmentation of Δ units must start at a node in S(Δ), proceed along a path in G(x,Δ), and end at a node in T(Δ). • In the Δ-phase, we find shortest paths in G(x,Δ) and augment along them. • Thus, the edges of G(x,Δ) will satisfy the reduced cost optimality conditions. • We will take care of edges with smaller residual capacity later.
Initializing phases • When a later Δ-phase begins, "new" edges with residual capacity Δ ≤ r(i,j) < 2Δ may enter G(x,Δ). • We ignored them until now, so possibly c^π(i,j) < 0. • Solution: saturate such edges.
Capacity Scaling Algorithm • Initial values: x := 0 (a pseudoflow) and π := 0 (optimal!), and Δ set large enough (e.g., the largest power of two that is at most U).
Capacity Scaling Algorithm • At the beginning of each Δ-phase, fix the optimality conditions on the "new" edges with residual capacity r(i,j) < 2Δ by saturating those with negative reduced cost.
Capacity Scaling Algorithm • While S(Δ) ≠ ∅ and T(Δ) ≠ ∅: compute shortest path distances, update π, and augment Δ units along a shortest path in G(x,Δ) from a node k in S(Δ) to a node l in T(Δ). • A sketch of the whole scheme follows.
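A compact Python sketch of the Δ-phase structure described on the last three slides. It reuses the hypothetical helpers from the successive shortest path sketch (dijkstra_reduced, build_path, augment) plus the illustrative helpers residual_cap, reduced_cost, and saturate; it shows the control flow only, not a full implementation.

def capacity_scaling(nodes, arcs, cap, cost, b, U):
    x = {a: 0 for a in arcs}
    pi = {v: 0 for v in nodes}
    e = dict(b)
    delta = 1
    while 2 * delta <= U:                      # delta := largest power of two <= U
        delta *= 2
    while delta >= 1:
        # Phase initialization: saturate "new" arcs with r(i,j) < 2*delta and negative reduced cost.
        # (For brevity only original arcs are scanned here; a full version scans all residual arcs.)
        for (i, j) in list(arcs):
            if residual_cap(cap, x, i, j) < 2 * delta and reduced_cost(cost, pi, i, j) < 0:
                saturate(i, j, cap, x, e)      # pushes r(i,j) units and updates e(i), e(j)
        # Augment delta units at a time while both S(delta) and T(delta) are non-empty.
        while True:
            S = [v for v in nodes if e[v] >= delta]
            T = [v for v in nodes if e[v] <= -delta]
            if not S or not T:
                break
            k = S[0]
            d, pred = dijkstra_reduced(k, nodes, arcs, cap, cost, x, pi, min_residual=delta)
            pi = {v: pi[v] - d[v] for v in nodes}
            l = min(T, key=lambda v: d[v])
            augment(build_path(pred, k, l), delta, x)
            e[k] -= delta
            e[l] += delta
        delta //= 2
    return x, pi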
Capacity Scaling Algorithm - Correctness • Inductively, the algorithm maintains a flow x that is reduced cost optimal w.r.t. π on G(x,Δ). • Initially this is clear. • At the beginning of a Δ-phase, "new" arcs with Δ ≤ r(i,j) < 2Δ are introduced and may violate optimality. Saturating the edges with c^π(i,j) < 0 suffices, since the reversal then satisfies c^π(j,i) = −c^π(i,j) > 0. • Augmenting flow along shortest paths in G(x,Δ) retains optimality.
Capacity Scaling Algorithm - Correctness • When Δ = 1, G(x,Δ) = G(x). • The algorithm ends when S(1) = ∅ or T(1) = ∅; since the total excess equals the total deficit, both are then empty. • By integrality, we therefore end with a feasible flow.
Capacity Scaling Algorithm - Assumption • We assume that a path from k to l exists in G(x,Δ), and that we can compute shortest distances from k to the rest of the nodes. • Quick fix: initially, add a dummy node D with artificial edges (1,D) and (D,1) of infinite capacity and very large cost.
Capacity Scaling Algorithm – Complexity • The algorithm has O(log U) phases. • We analyze each phase separately.
Capacity Scaling Algorithm – Phase Complexity • Let us bound the sum of excesses at the beginning of a Δ-phase. • Observe that when the Δ-phase begins, either S(2Δ) = ∅ or T(2Δ) = ∅ (that is why the previous phase ended). • Thus the sum of excesses (= sum of deficits) is less than 2nΔ.
Capacity Scaling Algorithm – Phase Complexity – Cont. • The saturation step at the beginning of the phase saturates edges with r(i,j) < 2Δ. This may add at most 2mΔ to the sum of excesses.
Capacity Scaling Algorithm – Phase Complexity – Cont. • Thus the sum of excesses is less than 2nΔ + 2mΔ = 2(n+m)Δ. • Since each augmentation moves exactly Δ units, at most 2(n+m) augmentations can be performed in a phase. • In total: O(m) shortest path computations per phase.
Capacity Scaling Algorithm – Complexity • O(m(m + n log n)) per phase. • O(m log U (m + n log n)) in total, which is polynomial in n, m, and log U.
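Spelled out, the bound is the product of the three ingredients above:

\[
\underbrace{O(\log U)}_{\text{phases}}
\cdot \underbrace{O(m)}_{\text{augmentations per phase}}
\cdot \underbrace{O(m + n \log n)}_{\text{Dijkstra per augmentation}}
= O\bigl( m \log U \, (m + n \log n) \bigr).
\]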
Approximate Optimality • A pseudoflow (or flow) x is said to be ε-optimal for some ε > 0 if, for some potential π, c^π(i,j) ≥ −ε for every edge (i,j) in G(x).
Approximate Optimality Properties • Lemma: For a min cost flow problem with integer costs, any feasible flow is C-optimal. In addition, if ε < 1/n, then any ε-optimal feasible flow is optimal. • Proof: • Part 1 is easy: set π = 0; then c^π(i,j) = c(i,j) ≥ −C. • Part 2: for any cycle W in G(x), Σ_{(i,j)∈W} c(i,j) = Σ_{(i,j)∈W} c^π(i,j) ≥ −nε > −1. By integrality, it follows that Σ_{(i,j)∈W} c(i,j) ≥ 0, so G(x) has no negative cycle. The lemma follows.
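For illustration only (not part of the slides), a small checker for the ε-optimality condition above; the residual-arc dictionary representation is an assumption.

def is_eps_optimal(residual_arcs, cost, pi, eps):
    # residual_arcs: arcs (i, j) with positive residual capacity in G(x)
    # cost[(i, j)]: cost of the residual arc (reverse arcs carry the negated original cost)
    return all(cost[(i, j)] - pi[i] + pi[j] >= -eps for (i, j) in residual_arcs)

By the lemma, running this with any eps < 1/n on a feasible integer-cost flow certifies optimality.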
Algorithm Strategy • The previous lemma suggests the following strategy: • Begin with a feasible flow x and π = 0, which is C-optimal. • Iteratively improve an ε-optimal flow (x,π) into an ε/2-optimal flow (x',π'), until ε < 1/n; this takes O(log(nC)) improvement steps. • We discuss two methods to implement the underlying improvement procedure. • The first is a variation of the Preflow Push Algorithm.
Distance Labels • Distance labels satisfy: • d(t) = 0, d(s) = n • d(v) ≤ d(w) + 1 if r(v,w) > 0 • Hence d(v) is at most the distance from v to t in the residual network. • Since d(s) = n, s must be disconnected from t in the residual network.
Terms • Nodes with positive excess are called active. • An arc (v,w) in the residual graph is admissible if r(v,w) > 0 and d(v) = d(w) + 1.
The preflow push algorithm

While there is an active node {
    pick an active node v and push/relabel(v)
}

Push/relabel(v) {
    If there is an admissible arc (v,w) then {
        push δ = min{e(v), r(v,w)} units of flow from v to w
    } else {
        d(v) := min{d(w) + 1 | r(v,w) > 0}    (relabel)
    }
}
Running Time • The # of relabelings is at most (2n−1)(n−2) < 2n² • The # of saturating pushes is at most 2nm • The # of nonsaturating pushes is at most 4n²m, using the potential function Φ = Σ_{v active} d(v)
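A runnable Python sketch of the generic preflow-push recap above, for a plain max-flow instance; the adjacency-matrix representation and the name max_flow_preflow_push are choices made for this example, not something fixed by the slides.

def max_flow_preflow_push(n, cap, s, t):
    # cap: n x n list of lists of capacities; returns the value of a max s-t flow.
    flow = [[0] * n for _ in range(n)]
    d = [0] * n
    e = [0] * n
    d[s] = n
    for v in range(n):                                   # initialization: saturate all arcs out of s
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]
            flow[v][s] = -cap[s][v]
            e[v] += cap[s][v]
            e[s] -= cap[s][v]

    def r(v, w):                                         # residual capacity
        return cap[v][w] - flow[v][w]

    active = [v for v in range(n) if v not in (s, t) and e[v] > 0]
    while active:
        v = active[-1]
        pushed = False
        for w in range(n):
            if e[v] > 0 and r(v, w) > 0 and d[v] == d[w] + 1:   # admissible arc (v, w)
                delta = min(e[v], r(v, w))                      # push
                flow[v][w] += delta
                flow[w][v] -= delta
                e[v] -= delta
                e[w] += delta
                pushed = True
                if w not in (s, t) and w not in active:
                    active.append(w)
        if not pushed:
            d[v] = min(d[w] + 1 for w in range(n) if r(v, w) > 0)   # relabel
        if e[v] == 0:
            active.remove(v)
    return e[t]

For example, with cap = [[0,3,2,0],[0,0,1,2],[0,0,0,3],[0,0,0,0]], s = 0 and t = 3, the sketch returns 5.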
Applying Preflow Push's technique • Our motivation was to find a method to improve an ε-optimal flow (x,π) into an ε/2-optimal flow (x',π'). • We use the Preflow Push technique with: • Labels: the potentials π(i) • Admissible edge (i,j) in the residual network: c^π(i,j) < 0 • Relabel: increase π(i) by ε/2
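In code form (illustrative only, with the same dictionary representation assumed as in the earlier sketches), the adapted push/relabel step at an active node would look like:

def push_or_relabel(i, nodes, cost, pi, e, r, eps):
    # One push/relabel step at an active node i (e[i] > 0).
    # r[(i, j)]: residual capacity; cost[(i, j)]: cost on residual arcs (reverse arcs carry -c).
    for j in nodes:
        if r.get((i, j), 0) > 0 and cost[(i, j)] - pi[i] + pi[j] < 0:   # admissible arc
            delta = min(e[i], r[(i, j)])                                 # push
            r[(i, j)] -= delta
            r[(j, i)] = r.get((j, i), 0) + delta
            e[i] -= delta
            e[j] += delta
            return
    pi[i] += eps / 2                                                     # relabel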
Initialization • We first transform the input ε-optimal flow (x,π) into an ε/2-optimal pseudoflow (x',π). • This is easy: • Saturate the edges with negative reduced cost. • Remove the flow from edges with positive reduced cost.
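A sketch of this initialization over the original arcs (same illustrative representation as above); after it, every residual arc has non-negative reduced cost, so the resulting pseudoflow is in fact 0-optimal.

def init_half_eps_pseudoflow(arcs, cap, cost, pi, x, e):
    # Turn an eps-optimal flow x into an (eps/2-)optimal pseudoflow w.r.t. the same pi.
    for (i, j) in arcs:
        rc = cost[(i, j)] - pi[i] + pi[j]
        if rc < 0 and x[(i, j)] < cap[(i, j)]:
            delta = cap[(i, j)] - x[(i, j)]    # saturate: the forward arc leaves G(x)
            x[(i, j)] = cap[(i, j)]
            e[i] -= delta
            e[j] += delta
        elif rc > 0 and x[(i, j)] > 0:
            delta = x[(i, j)]                  # clear: the reverse arc (cost -c < 0) leaves G(x)
            x[(i, j)] = 0
            e[i] += delta
            e[j] -= delta
    return x, e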
Correctness • Lemma 1: Let x be a pseudoflow and x' a feasible flow. Then, for every node v with excess in x, there exists a path P in G(x) ending at a node w with deficit, whose reversal is a path in G(x'). • Proof: Look at the difference x' − x, and consider the underlying graph (edges with negative difference are reversed).
Lemma 1: Proof Cont. • Proof: Look at the difference x' − x, and consider the underlying graph (edges with negative difference are reversed). (Figure: the difference x' − x drawn on a small example network.)
Lemma 1: Proof Cont. • In this graph, a node with deficit must be reachable from v; otherwise the excess at v could not be absorbed and x' would not be feasible. (Figure: the set S of nodes reachable from v, containing a deficit node w.)
Correctness (cont.) • Corollary: There is an outgoing residual arc incident to every active vertex. • Corollary: So we can push/relabel as long as there is an active vertex.
Correctness – Cont. • Lemma 2: The algorithm maintains an ε/2-optimal pseudoflow (x',π). • Proof: By induction on the number of operations. • Initially, the pseudoflow is 0-optimal (hence ε/2-optimal).