
CS200: Algorithm Analysis

Learn how to find the shortest paths between all pairs of vertices in a weighted graph. Understand the recursive structure of the dynamic-programming solution and its runtime, see how the Slow-APSP algorithm computes shortest paths by repeated path extension, and see how viewing the extension step as a matrix product reduces the runtime to O(n³ lg n).




  1. CS200: Algorithm Analysis

  2. ALL PAIRS SHORTEST PATHS Problem: find the shortest path between every pair of vertices u, v ∈ V of a weighted graph G = (V, E).

  3. ALL PAIRS SHORTEST PATHS We are interested in a directed graph G = (V, E) with weight function w: E → ℝ, |V| = n. The goal of the algorithm is to create an n × n matrix of shortest-path distances z(u, v): the matrix holds the shortest-path distance between each pair of vertices in the graph. How could we solve the problem using one of the single-source shortest-path algorithms we have looked at?

  4. ALL PAIRS SHORTEST PATHS Bellman-Ford could be run once from each vertex (that is, each vertex is treated in turn as the source). The runtime of this solution is O(V²E), which on a dense graph (E = Θ(V²)) is O(V⁴). Running Dijkstra's from each vertex costs O(V² lg V + VE) in total with a fancy data structure (a Fibonacci heap), but Dijkstra's cannot handle negative-weight edges. Perhaps we could do better with a different technique (negative-weight edges are allowed, but are not used in the following examples).
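The repeated-Dijkstra baseline mentioned above can be sketched in a few lines of Python. This is a minimal sketch, not the course's code, assuming non-negative edge weights and a graph stored as an adjacency dict mapping each vertex to a list of (neighbor, weight) pairs:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths with a binary heap: O(E log V)."""
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale heap entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:           # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def apsp_by_dijkstra(adj):
    """All pairs: run Dijkstra once per source vertex."""
    return {u: dijkstra(adj, u) for u in adj}
```

With a Fibonacci heap the per-source cost drops to O(V lg V + E), but negative-weight edges still rule Dijkstra's out.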

  5. ALL PAIRS SHORTEST PATHS The method assumes the following: 1. An adjacency-matrix representation of the graph: the n × n matrix W = (wij) of edge weights. 2. The algorithm produces a table of all-pairs shortest paths: an n × n matrix D = (dij), where dij = the weight of a shortest path from i to j.

  6. ALL PAIRS SHORTEST PATHS 1. An adjacency-matrix representation of the graph: the n × n matrix W = (wij) of edge weights. [Figure: the initial matrix W]

  7. ALL PAIRS SHORTEST PATHS The first algorithm we will look at is a dynamic-programming algorithm that is recursive in nature. Discussion: consider a shortest path p from vertex i to vertex j, and suppose that p contains at most m edges (with no negative-weight cycles, p is simple, so m ≤ n − 1 is finite). In the following discussion, ℓ replaces D.

  8. ALL PAIRS SHORTEST PATHS

  9. ALL PAIRS SHORTEST PATHS If i = j, then p has weight 0 and no edges. If i ≠ j, then decompose p into i ⇝ k → j, where the subpath p′ from i to k now contains at most m − 1 edges. By Lemma 25.1, p′ is a shortest path from i to k, and this implies z(i, j) = z(i, k) + w(k, j).

  10. Recursive description of the algorithm. Let d(m)ij = the weight of a shortest path from i to j that uses at most m edges. Thus

  d(0)ij = 0 if i = j, and ∞ if i ≠ j

  and, for m ≥ 1,

  d(m)ij = min over 1 ≤ k ≤ n of ( d(m−1)ik + wkj ),

  which is correct because taking k = j (with wjj = 0) keeps the best path of at most m − 1 edges in the minimum.

  11. The figure shows all shortest paths with ≤ m − 1 edges from i to j, and from i to each k that could precede j on a path.

  ExtendShortestPaths(D, W)     // D is an n × n distance matrix, W the edge-weight matrix
    for i = 1 to n              // n = |V|
      for j = 1 to n
        D′ij = ∞                // initialize table entry
        for k = 1 to n          // relax: extend path i ⇝ k by edge (k, j)
          D′ij = min(D′ij, Dik + wkj)
    return D′

  Runtime: O(n³) (three nested loops over n).
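The ExtendShortestPaths pass can be written directly in Python. This is a sketch, not the course's official code, representing D and W as lists of lists and ∞ as float('inf'):

```python
INF = float('inf')

def extend_shortest_paths(D, W):
    """One relaxation pass: D'[i][j] = min over k of (D[i][k] + W[k][j]).
    Extends every shortest path by at most one more edge; O(n^3)."""
    n = len(D)
    Dp = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D[i][k] + W[k][j] < Dp[i][j]:
                    Dp[i][j] = D[i][k] + W[k][j]
    return Dp
```

Because wjj = 0, taking k = j lets each entry keep its old value, so the distances never get worse from one pass to the next.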

  12. SLOW-APSP // compute each D(m)

  SLOW-APSP(W, n)               // n = |V|
    D(1) = W
    for m = 2 to n − 1
      D(m) = ExtendShortestPaths(D(m−1), W)
    return D(n−1)

  Runtime: O(n⁴) (n − 2 calls at O(n³) each).

  13. Slow-APSP Algorithm Idea: find all shortest paths that use at most two edges, D(2); save the matrix and use it to find all shortest paths with at most three edges, D(3); save that matrix and use it to find D(4); and so on, until we reach D(n−1). That matrix contains the shortest-path distance between every pair of vertices in the graph.
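The whole Slow-APSP loop fits in one self-contained Python sketch (again illustrative, with the three inner loops of ExtendShortestPaths inlined):

```python
INF = float('inf')

def slow_apsp(W):
    """All-pairs shortest paths by extending paths one edge at a time.
    D starts as W (paths of at most one edge) and is extended for
    m = 2 .. n-1, so the body runs n-2 times: Theta(n^4) overall."""
    n = len(W)
    D = [row[:] for row in W]             # D(1) = W
    for _ in range(2, n):                 # m = 2 .. n-1
        Dp = [[INF] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if D[i][k] + W[k][j] < Dp[i][j]:
                        Dp[i][j] = D[i][k] + W[k][j]
        D = Dp
    return D                              # D(n-1)
```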

  14. Ignore the diagonal entries in our trace (the algorithm still considers them). Paths through one intermediate vertex:
  • Row 1, col 2: 1-3-2, 1-4-2, 1-5-2
  • Row 1, col 3: 1-2-3, 1-4-3, 1-5-3
  • Row 1, col 4: 1-2-4, 1-3-4, 1-5-4
  • Row 1, col 5: 1-2-5, 1-3-5, 1-4-5

  15. Slow-APSP Algorithm The runtime of the ExtendShortestPaths (ESP) pseudocode is O(n³) because of its three nested for loops. Slow-APSP executes ESP n − 2 times, producing D(2) through D(n−1); each d′ij needs n − 1 passes in all to converge to z(i, j), just as in Bellman-Ford. Idea: start with the matrix D(1) = W, the initial values of dij. After one pass of ExtendShortestPaths we have D(2), which is fed back into ExtendShortestPaths to produce D(3), and so on, until we compute the shortest paths z(i, j) = d(n−1)ij, i.e. the matrix D(n−1).

  16. Instead of viewing the computation of shortest paths as a sequence of relaxation steps (as in the pseudocode), we can perform the computation as a series of matrix multiplications.

  17. Example: C = A·B for n × n matrices, where cij = Σ aik·bkj over 1 ≤ k ≤ n, which requires Θ(n³) operations. We can replace Σ (summation) with min, and · (multiplication) with addition, giving: cij = min over 1 ≤ k ≤ n of (aik + bkj).
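The substitution can be seen directly in code. A minimal Python sketch (not from the slides): the (min, +) product is the ordinary matrix-multiply triple loop with the two operations swapped.

```python
INF = float('inf')

def min_plus_product(A, B):
    """(min, +) 'multiplication': C[i][j] = min over k of (A[i][k] + B[k][j]).
    Same triple loop as ordinary matrix multiply, so also Theta(n^3)."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]     # INF is the identity for min
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] = min(C[i][j], A[i][k] + B[k][j])
    return C
```

Applied to a weight matrix W, min_plus_product(W, W) yields exactly the shortest paths of at most two edges, the same result as one ExtendShortestPaths pass.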

  18. Observation: ExtendShortestPaths is like matrix multiplication:
  D(m−1) = A, W = B, D(m) = C, with min in the role of +, + in the role of ·, and ∞ in the role of 0.

  create C, an n × n matrix
  for i = 1 to n
    for j = 1 to n
      cij = 0
      for k = 1 to n
        cij = cij + aik · bkj

  19. So, we can view ExtendShortestPaths as matrix multiplication! Why do we care? Because our goal is to compute D(n−1) as fast as we can, and we don't need to compute all of the intermediate D(1), D(2), D(3), …, D(n−1). Why?

  20. If we further substitute C(m−1) for A, W for B, and C(m) for C, we have C(m) = C(m−1) "*" W. The identity matrix for this new "multiplication" is

  I = | 0 ∞ ∞ … ∞ |
      | ∞ 0 ∞ … ∞ |
      | ∞ ∞ 0 … ∞ |
      | ⋮        ⋱ |

  (0 on the diagonal, ∞ everywhere else).

  21. This new multiplication, with min as + and + as ·, is associative, and the resulting algebraic structure is referred to as a closed semiring. Thus
  C(1) = C(0) "*" W = W, where C(0) = I
  C(2) = C(1) "*" W = W²
  C(3) = C(2) "*" W = W³
  …
  C(n−1) = C(n−2) "*" W = Wⁿ⁻¹, where C(n−1) holds z(i, j).

  22. The runtime of this MATRIX MULTIPLICATION formulation is also Θ(n⁴), which is no better than Bellman-Ford from every vertex. To improve performance, note that we don't have to compute all the C(m) matrices, since we are only interested in C(n−1). We can compute C(n−1) with only ⌈lg(n−1)⌉ matrix products by computing the sequence:
  C(1) = W
  C(2) = W²
  C(4) = W⁴
  …
  C(2^⌈lg(n−1)⌉) = W^(2^⌈lg(n−1)⌉)
  Since 2^⌈lg(n−1)⌉ ≥ n − 1, we need to perform only Θ(lg n) squarings.

  23. It is OK to overshoot C(n−1), since once the shortest-path values converge, C(n−1) = C(n) = C(n+1) = …. The runtime of this modified algorithm is O(n³ lg n): Θ(lg n) squarings at Θ(n³) each. We can do better still with the Floyd-Warshall algorithm, next lecture.
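The repeated-squaring idea can be sketched in Python under the same matrix conventions as before (a sketch, not the course's code). Each iteration "multiplies" D by itself with the (min, +) product, doubling the number of edges covered; overshooting n − 1 is harmless because the matrices have converged.

```python
INF = float('inf')

def faster_apsp(W):
    """All-pairs shortest paths by repeated squaring of the (min, +)
    product: D(1), D(2), D(4), ... until the power reaches n-1.
    Theta(lg n) squarings at Theta(n^3) each: O(n^3 lg n)."""
    n = len(W)
    D = [row[:] for row in W]             # D(1) = W
    m = 1
    while m < n - 1:                      # stop once 2^(#squarings) >= n-1
        Dp = [[INF] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if D[i][k] + D[k][j] < Dp[i][j]:
                        Dp[i][j] = D[i][k] + D[k][j]
        D = Dp                            # D(m) "squared" gives D(2m)
        m *= 2
    return D
```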
