Warshall’s and Floyd’s Algorithm
Source: http://www.aw-bc.com/info/levitin
Warshall’s algorithm: transitive closure (TC)
• Definition: computes the transitive closure of a relation (alternatively: all paths in a directed graph)
• Example of transitive closure, for the digraph on vertices 1–4 with edges 1→3, 2→1, 2→4, 4→2:

  Adjacency matrix A    Transitive closure T
  0 0 1 0               0 0 1 0
  1 0 0 1               1 1 1 1
  0 0 0 0               0 0 0 0
  0 1 0 0               1 1 1 1
Warshall’s algorithm
• Main idea: a path exists between two vertices i and j iff
• there is an edge from i to j; or
• there is a path from i to j going through vertex 1; or
• there is a path from i to j going through vertex 1 and/or 2; or
• …
• there is a path from i to j going through vertex 1, 2, … and/or k; or
• …
• there is a path from i to j going through any subset of the other vertices
Warshall’s algorithm
• Idea: dynamic programming
• Let V = {1, …, n} and, for k ≤ n, Vk = {1, …, k}
• For any pair of vertices i, j ∈ V, consider all paths from i to j whose intermediate vertices are all drawn from Vk: Pijk = {p1, p2, …}; if Pijk ≠ ∅ then Rk[i, j] = 1
• The answer for any pair i, j is Rn[i, j], i.e., the matrix Rn
• Starting with R0 = A, the adjacency matrix, how do we get from R0 to R1, …, from Rk-1 to Rk, …, up to Rn?
Warshall’s algorithm
• Idea: dynamic programming
• p ∈ Pijk: p is a path from i to j with all intermediate vertices in Vk
• If k is not on p, then p is also a path from i to j with all intermediate vertices in Vk-1: p ∈ Pijk-1
Warshall’s algorithm
• Idea: dynamic programming
• p ∈ Pijk: p is a path from i to j with all intermediate vertices in Vk
• If k is on p, then we break p down into p1 and p2
• What are p1 and p2?
Warshall’s algorithm
• Idea: dynamic programming
• p ∈ Pijk: p is a path from i to j with all intermediate vertices in Vk
• If k is on p, then we break p down into p1 and p2, where
• p1 is a path from i to k with all intermediate vertices in Vk-1
• p2 is a path from k to j with all intermediate vertices in Vk-1
Warshall’s algorithm
• In the kth stage, determine whether a path exists between vertices i and j using only vertices among 1, …, k:

  R(k)[i,j] = R(k-1)[i,j]                      (a path using only 1, …, k-1)
              OR
              (R(k-1)[i,k] AND R(k-1)[k,j])    (a path from i to k and a path from k to j, each using only 1, …, k-1)
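The recurrence above translates almost directly into code. Below is a minimal Python sketch (mine, not from the slides) that builds each R(k) from R(k-1) as a separate matrix, exactly as the recurrence reads:

```python
def warshall(adj):
    """Transitive closure via Warshall's recurrence.

    adj: n x n 0/1 adjacency matrix (list of lists). Returns R(n).
    """
    n = len(adj)
    r = [row[:] for row in adj]  # R(0) = A
    for k in range(n):           # now allow vertex k as an intermediate
        # R(k)[i,j] = R(k-1)[i,j] OR (R(k-1)[i,k] AND R(k-1)[k,j])
        r = [[1 if r[i][j] or (r[i][k] and r[k][j]) else 0
              for j in range(n)]
             for i in range(n)]
    return r
```

Keeping each R(k) as a fresh matrix mirrors the derivation; the slides later note that the update can also be done in place.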
Warshall’s algorithm: trace on the digraph with vertices 1(a), 2(b), 3(c), 4(d) and edges a→b, b→d, d→a, d→c.

  R0 = A:     R1 (k=1):   R2 (k=2):   R3 (k=3):   R4 (k=4):
  0 1 0 0     0 1 0 0     0 1 0 1     0 1 0 1     1 1 1 1
  0 0 0 1     0 0 0 1     0 0 0 1     0 0 0 1     1 1 1 1
  0 0 0 0     0 0 0 0     0 0 0 0     0 0 0 0     0 0 0 0
  1 0 1 0     1 1 1 0     1 1 1 1     1 1 1 1     1 1 1 1

WARSHALL(G)
  R ← A
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        r(k)[i,j] ← r(k-1)[i,j] OR (r(k-1)[i,k] AND r(k-1)[k,j])

Generating R1 (k = 1), row i = 4:
  r(1)[4,1] ← r(0)[4,1] or (r(0)[4,1] and r(0)[1,1])
  r(1)[4,2] ← r(0)[4,2] or (r(0)[4,1] and r(0)[1,2])
  r(1)[4,3] ← r(0)[4,3] or (r(0)[4,1] and r(0)[1,3])
  r(1)[4,4] ← r(0)[4,4] or (r(0)[4,1] and r(0)[1,4])

Generating R2 (k = 2), row i = 4:
  r(2)[4,1] ← r(1)[4,1] or (r(1)[4,2] and r(1)[2,1])
  r(2)[4,2] ← r(1)[4,2] or (r(1)[4,2] and r(1)[2,2])
  r(2)[4,3] ← r(1)[4,3] or (r(1)[4,2] and r(1)[2,3])
  r(2)[4,4] ← r(1)[4,4] or (r(1)[4,2] and r(1)[2,4])
Warshall’s algorithm: trace on the digraph with edges 1→3, 2→1, 2→4, 4→2.

  R0 = A:     R1:         R2:         R3:         R4:
  0 0 1 0     0 0 1 0     0 0 1 0     0 0 1 0     0 0 1 0
  1 0 0 1     1 0 1 1     1 0 1 1     1 0 1 1     1 1 1 1
  0 0 0 0     0 0 0 0     0 0 0 0     0 0 0 0     0 0 0 0
  0 1 0 0     0 1 0 0     1 1 1 1     1 1 1 1     1 1 1 1

Space-saving version: since a 1 is never turned back into a 0, R can be updated in place:

WARSHALL(G)
  R ← A
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        r[i,j] ← r[i,j] OR (r[i,k] AND r[k,j])
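The in-place variant is only a few lines of Python; here is a sketch (my own, assuming a 0/1 list-of-lists matrix), using the four-vertex digraph with edges 1→3, 2→1, 2→4, 4→2 traced on this slide:

```python
def warshall_in_place(adj):
    """In-place Warshall: safe because a 1 is never reset to 0."""
    n = len(adj)
    r = [row[:] for row in adj]  # copy so the caller's matrix survives
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    return r

# Digraph with edges 1->3, 2->1, 2->4, 4->2 (0-based indices)
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
TC = warshall_in_place(A)
```

Dropping the per-k matrices cuts the extra space from Θ(n²) per stage to a single n × n matrix.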
• a. Apply Warshall’s algorithm to the digraph whose adjacency matrix is given below.
• b. What is the time efficiency of Warshall’s algorithm?
• c. How can this “finding all paths in a directed graph” problem be solved by a traversal-based algorithm (BFS-based or DFS-based)?
• d. Explain why the time efficiency of Warshall’s algorithm is inferior to that of a traversal-based algorithm for sparse graphs represented by their adjacency lists.

  0 1 0 0
  0 0 1 0
  0 0 0 1
  0 0 0 0
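For part (c), one answer is to run a DFS (or BFS) from every vertex: row i of the closure is exactly the set of vertices reachable from i. A sketch in Python (my own, assuming an adjacency-list dict; a vertex reaches itself only if it lies on a cycle, matching the convention in the slides’ worked examples):

```python
def closure_by_dfs(adj_list, n):
    """Transitive closure by n depth-first traversals.

    adj_list: dict mapping vertex -> list of successors (vertices 0..n-1).
    Takes O(n * (n + m)) time when the graph has m edges.
    """
    tc = [[0] * n for _ in range(n)]
    for s in range(n):
        stack = list(adj_list.get(s, []))  # start from s's direct successors
        seen = set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            tc[s][u] = 1
            stack.extend(adj_list.get(u, []))
    return tc
```

This is the point of part (d): n traversals cost Θ(n(n + m)), which beats Warshall’s Θ(n³) when the graph is sparse (m much smaller than n²).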
Floyd’s algorithm: all pairs shortest paths
• In a weighted graph, find the shortest paths between every pair of vertices.
• Same idea: construct the solution through a series of matrices D(0), D(1), …, using a growing initial subset of the vertices as allowed intermediaries.
• In D(k), dij(k) is the weight of the shortest path from ui to uj with all intermediate vertices drawn from the initial subset {u1, u2, …, uk}.
Floyd’s algorithm: all pairs shortest paths
• Idea: dynamic programming
• Let V = {u1, …, un} and, for k ≤ n, Vk = {u1, …, uk}
• To construct D(k), we need to compute each dij(k)
• For any pair of vertices ui, uj ∈ V, consider all paths from ui to uj whose intermediate vertices are all drawn from Vk, and let p be the shortest among them; the weight of p is dij(k)
Floyd’s algorithm: all pairs shortest paths
• Idea: dynamic programming
• If uk is not in p, then a shortest path from ui to uj with all intermediate vertices in Vk-1 is also a shortest path in Vk, i.e., dij(k) = dij(k-1).
• If uk is in p, then we break p down into p1 and p2, where
• p1 is the shortest path from ui to uk with all intermediate vertices in Vk-1
• p2 is the shortest path from uk to uj with all intermediate vertices in Vk-1
• i.e., dij(k) = dik(k-1) + dkj(k-1).
Dynamic programming
• Construct matrices D(0), D(1), …, D(k-1), D(k), …, D(n)
• dij(k): weight of the shortest path from ui to uj with all intermediate vertices in Vk
• dij(0) = wij
• dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)) for k ≥ 1
• Dynamic programming is a technique for solving problems with overlapping subproblems. It suggests solving each smaller subproblem once and recording the results in a table from which a solution to the original problem can then be obtained.
• What are the overlapping subproblems in Floyd’s algorithm?
• General principle that underlies dynamic programming algorithms for optimization problems:
• Principle of optimality: an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
• The principle of optimality holds in most cases. (A rare exception: it fails for finding longest simple paths.)
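The recurrence dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)) gives Floyd's algorithm almost verbatim. A minimal Python sketch (my own, using float('inf') for missing edges and assuming no negative cycles):

```python
INF = float('inf')

def floyd(w):
    """All-pairs shortest paths (Floyd).

    w: n x n weight matrix, 0 on the diagonal, INF where there is no edge.
    """
    n = len(w)
    d = [row[:] for row in w]  # D(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # d_ij(k) = min(d_ij(k-1), d_ik(k-1) + d_kj(k-1)).
                # Updating in place is safe: row and column k do not
                # change during iteration k, because d[k][k] == 0.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Unlike Warshall’s bit matrices, a single distance matrix suffices here from the start, for the reason noted in the comment.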
Floyd’s algorithm: trace on the weighted digraph with vertices a(1), b(2), c(3), d(4) and edges a→c (3), b→a (2), c→b (7), c→d (1), d→a (6).

  D0 = W:     D1 (k=1):   D2 (k=2):   D3 (k=3):   D4 (k=4):
  0 ∞ 3 ∞     0 ∞ 3 ∞     0 ∞ 3 ∞     0 10 3 4    0 10 3 4
  2 0 ∞ ∞     2 0 5 ∞     2 0 5 ∞     2 0 5 6     2 0 5 6
  ∞ 7 0 1     ∞ 7 0 1     9 7 0 1     9 7 0 1     7 7 0 1
  6 ∞ ∞ 0     6 ∞ 9 0     6 ∞ 9 0     6 16 9 0    6 16 9 0

FLOYD(G)
  for i, j in [1..n]: d[i,j] ← w(ui, uj)   // D0 = W
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        d[i,j] ← min(d[i,j], d[i,k] + d[k,j])   // similar to edge relaxation

• Running time: Θ(|V|3) — better than running Bellman-Ford from every vertex, which would cost O(|V|2·|E|), i.e., O(|V|4) on dense graphs.
• Connection to matrix multiplication: with (min, +) in place of (+, ×), D(m) = D(m-1) “·” W computes shortest paths of at most m edges; Strassen-style speedups do not apply, because (min, +) lacks subtraction.

MATRIX-MULTIPLICATION(A, B)
  for p, q in [1..n]: c[p,q] ← 0
  for p in [1..n]
    for q in [1..n]
      for r in [1..n]
        c[p,q] ← c[p,q] + a[p,r] · b[r,q]
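The aside about matrix powers can be made concrete: replace (+, ×) with (min, +) in the matrix-multiplication pseudocode, and the m-th "power" of the weight matrix gives shortest paths using at most m edges, so the (n-1)-st power is the full answer. A sketch (my own), reusing the slide's four-vertex example:

```python
INF = float('inf')

def min_plus(a, b):
    """(min, +) matrix 'product': c[p][q] = min over r of a[p][r] + b[r][q]."""
    n = len(a)
    return [[min(a[p][r] + b[r][q] for r in range(n)) for q in range(n)]
            for p in range(n)]

# Weight matrix of the slide's example (INF = no edge)
W = [[0, INF, 3, INF],
     [2, 0, INF, INF],
     [INF, 7, 0, 1],
     [6, INF, INF, 0]]

# W "cubed" under (min, +): shortest paths of at most n-1 = 3 edges,
# which for this graph is the complete all-pairs answer
D = min_plus(min_plus(W, W), W)
```

Each (min, +) product costs Θ(n³), so naive repeated multiplication gives Θ(n⁴) overall (Θ(n³ log n) with repeated squaring), which is why Floyd's single Θ(n³) pass is preferred in practice.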