
Dynamic Programming

Dynamic programming is a general algorithm design technique, invented by Richard Bellman in the 1950s to solve optimization problems. The main idea is to solve smaller, overlapping subproblems and record their solutions in a table so that each is solved only once. This text is from Design and Analysis of Algorithms, Chapter 8.


Presentation Transcript


1. Dynamic Programming
• Dynamic Programming is a general algorithm design technique
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems
• “Programming” here means “planning”
• Main idea:
  • solve several smaller (overlapping) subproblems
  • record solutions in a table so that each subproblem is only solved once
  • the final state of the table will be (or contain) the solution

2. Example: Fibonacci numbers
• Recall the definition of the Fibonacci numbers:
  • f(0) = 0
  • f(1) = 1
  • f(n) = f(n-1) + f(n-2)
• Compute the nth Fibonacci number recursively (top-down); the calls expand as a tree:

  f(n)
  f(n-1)           +  f(n-2)
  f(n-2) + f(n-3)     f(n-3) + f(n-4)
  ...
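
A minimal Python sketch of this top-down computation (illustrative, not from the slides):

def fib(n):
    # Direct recursion: overlapping subproblems such as f(n-3) are
    # recomputed many times, so the running time grows exponentially in n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)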

3. Example: Fibonacci numbers (2)
• Compute the nth Fibonacci number using bottom-up iteration:
  • f(0) = 0
  • f(1) = 1
  • f(2) = 0+1 = 1
  • f(3) = 1+1 = 2
  • f(4) = 1+2 = 3
  • f(n-2) = …
  • f(n-1) = …
  • f(n) = f(n-1) + f(n-2)
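
The same computation as a bottom-up table, sketched in Python (illustrative names):

def fib_bottom_up(n):
    # Fill the table from the base cases upward; each subproblem is
    # solved exactly once, so time and space are O(n).
    f = [0] * (n + 1)
    if n >= 1:
        f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]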

4. Examples of Dynamic Programming Algorithms
• Computing binomial coefficients
• Optimal chain matrix multiplication
• Constructing an optimal binary search tree
• Warshall’s algorithm for transitive closure
• Floyd’s algorithm for all-pairs shortest paths
• Some instances of difficult discrete optimization problems:
  • travelling salesman
  • knapsack

5. Binomial coefficients
• Algorithm based on the identity C(n,k) = C(n-1,k-1) + C(n-1,k), with C(n,0) = C(n,n) = 1
• Algorithm Binomial(n,k)
    for i ← 0 to n do
      for j ← 0 to min(i,k) do
        if j = 0 or j = i then C[i,j] ← 1
        else C[i,j] ← C[i-1,j-1] + C[i-1,j]
    return C[n,k]
• Pascal’s Triangle
• Example
• Space and time efficiency (see the sketch below)
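
A Python sketch of the table-filling algorithm above (illustrative, not from the slides):

def binomial(n, k):
    # Bottom-up table using Pascal's identity
    # C(i,j) = C(i-1,j-1) + C(i-1,j); time and space are Theta(n*k).
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

For example, binomial(4, 2) returns 6.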

6. Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: all paths in a directed graph
• Example of transitive closure (digraph on vertices 1-4 with edges 1→3, 2→1, 2→4, 4→2):

  Adjacency matrix    Transitive closure
  0 0 1 0             0 0 1 0
  1 0 0 1             1 1 1 1
  0 0 0 0             0 0 0 0
  0 1 0 0             1 1 1 1

7. Warshall’s Algorithm (2)
• Main idea: a path exists between two vertices i, j iff
  • there is an edge from i to j; or
  • there is a path from i to j going through vertex 1; or
  • there is a path from i to j going through vertex 1 and/or 2; or
  • ...
  • there is a path from i to j going through any of the other vertices
• On the example digraph:

  R(0)       R(1)       R(2)       R(3)       R(4)
  0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
  1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
  0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
  0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1

8. Warshall’s Algorithm (3)
• In the kth stage, determine whether a path exists between two vertices i, j using just the vertices among 1, …, k:

  R(k)[i,j] = R(k-1)[i,j]                        (path using just 1, …, k-1)
              or (R(k-1)[i,k] and R(k-1)[k,j])   (path from i to k and from k to j, each using just 1, …, k-1)

9. Warshall’s Algorithm (4)
Algorithm Warshall(A[1..n,1..n])
  R(0) ← A
  for k ← 1 to n do
    for i ← 1 to n do
      for j ← 1 to n do
        R(k)[i,j] ← R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
  return R(n)
• Space and time efficiency (see the sketch below)
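
A Python sketch of Warshall's algorithm; it updates a single matrix in place, a standard space optimization over storing the whole R(0), …, R(n) sequence:

def warshall(adjacency):
    # adjacency: n x n 0/1 matrix; returns the transitive closure.
    # Three nested loops give Theta(n^3) time; the in-place update
    # gives Theta(n^2) space instead of n+1 separate matrices.
    n = len(adjacency)
    R = [row[:] for row in adjacency]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

On the example digraph above, warshall([[0,0,1,0],[1,0,0,1],[0,0,0,0],[0,1,0,0]]) returns [[0,0,1,0],[1,1,1,1],[0,0,0,0],[1,1,1,1]].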

10. Floyd’s Algorithm: All pairs shortest paths
• In a weighted graph, find the shortest paths between every pair of vertices
• Same idea: construct the solution through a series of matrices D(0), D(1), …, using an initial subset of the vertices as intermediaries
• Example: [weighted digraph figure omitted]

11. Floyd’s Algorithm (2)
• Algorithm Floyd(W[1..n,1..n])
    D ← W
    for k ← 1 to n do
      for i ← 1 to n do
        for j ← 1 to n do
          D[i,j] ← min(D[i,j], D[i,k] + D[k,j])
    return D
• Space and time efficiency
• When does it not work?
• Principle of optimality
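
A corresponding Python sketch (illustrative; it assumes no negative-weight cycles, which is exactly the case where the algorithm does not work):

INF = float('inf')

def floyd(W):
    # W[i][j]: weight of edge i->j, INF if absent, 0 on the diagonal.
    # Returns all-pairs shortest path lengths in Theta(n^3) time.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D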

12. Optimal Binary Search Trees
• Keys are not searched with equal probability
• Suppose keys A, B, C, D have probabilities 0.1, 0.2, 0.3, 0.4
• What is the average search cost in each possible structure? (a worked example follows below)
• How many different structures can be produced with n nodes?
  • The Catalan number C(n) = C(2n,n)/(n+1)
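
For instance (an illustrative calculation, not from the slides): in the balanced tree with root B, left child A, right child C, and D as the right child of C, the average cost is 1(0.2) + 2(0.1) + 2(0.3) + 3(0.4) = 2.2 comparisons; the tree rooted at C, with B (and A below it) on the left and D on the right, costs 1(0.3) + 2(0.2) + 3(0.1) + 2(0.4) = 1.8, which turns out to be optimal for these probabilities.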

13. Optimal Binary Search Trees (2)
• C[i,j] denotes the smallest average search cost in a tree containing items i through j
• Based on the principle of optimality, the optimal search tree is constructed using the recurrence

  C[i,j] = min{ C[i,k-1] + C[k+1,j] : i ≤ k ≤ j } + Σs=i..j ps,  for 1 ≤ i ≤ j ≤ n
  C[i,i] = pi,  C[i,i-1] = 0 (empty tree)

14. Optimal Binary Search Trees (3)
Algorithm OptimalBST(P[1..n])
  for i ← 1 to n do
    C[i,i-1] ← 0;  C[i,i] ← P[i];  R[i,i] ← i
  C[n+1,n] ← 0
  for d ← 1 to n-1 do
    for i ← 1 to n-d do
      j ← i+d;  minval ← ∞
      for k ← i to j do
        if C[i,k-1] + C[k+1,j] < minval then
          minval ← C[i,k-1] + C[k+1,j];  kmin ← k
      R[i,j] ← kmin
      sum ← P[i]
      for s ← i+1 to j do sum ← sum + P[s]
      C[i,j] ← minval + sum
  return C[1,n], R
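
A runnable Python sketch of the algorithm above (array layout and names are illustrative):

def optimal_bst(P):
    # P[1..n] are the key probabilities (P[0] is unused padding).
    # C[i][j]: cost of an optimal tree on keys i..j; R[i][j]: its root.
    # Theta(n^3) time, Theta(n^2) space.
    n = len(P) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = P[i]
        R[i][i] = i
    for d in range(1, n):                 # d = j - i
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = float('inf'), i
            for k in range(i, j + 1):     # try each key as the root
                q = C[i][k - 1] + C[k + 1][j]
                if q < minval:
                    minval, kmin = q, k
            R[i][j] = kmin
            C[i][j] = minval + sum(P[i:j + 1])
    return C[1][n], R

With P = [0, 0.1, 0.2, 0.3, 0.4] (the probabilities from slide 12) this returns an average cost of about 1.8, with key 3 (C) at the root.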

15. The Knapsack problem
• Problem: given n items with weights w1, w2, …, wn, values v1, v2, …, vn, and a knapsack of capacity W, find the most valuable subset of items that fits into the knapsack
• Solution with exhaustive search
• Let V[i,j] denote the value of an optimal solution for the first i items and capacity j; the goal is to find V[n,W]
• Recurrence:

  V[i,j] = max{ V[i-1,j], vi + V[i-1,j-wi] }   if j - wi ≥ 0
  V[i,j] = V[i-1,j]                            if j - wi < 0
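
A bottom-up Python sketch of this recurrence (illustrative names; items are 1-indexed):

def knapsack(weights, values, W):
    # weights[1..n], values[1..n]; index 0 is unused padding.
    # V[i][j]: best value using the first i items with capacity j.
    # Time and space are Theta(n * W).
    n = len(weights) - 1
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j >= weights[i]:
                V[i][j] = max(V[i - 1][j],
                              values[i] + V[i - 1][j - weights[i]])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][W]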

16. The Knapsack problem (2)
• Example: W = 5
• Space and time efficiency

17. Memory functions (1)
• A disadvantage of the dynamic programming approach is that it solves subproblems whose solutions are not ultimately needed
• An alternative technique is to combine the top-down and the bottom-up approach (recursion plus a table for intermediate results)

18. Memory functions (2)
• V[0..n, 0..W] is global; row 0 and column 0 hold 0, all other entries are initialized to -1 (“not yet computed”)
Algorithm MFKnapsack(i, j)
  if V[i,j] < 0
    if j < Weights[i]
      value ← MFKnapsack(i-1, j)
    else
      value ← max(MFKnapsack(i-1, j), Values[i] + MFKnapsack(i-1, j - Weights[i]))
    V[i,j] ← value
  return V[i,j]
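
A Python sketch of the memory-function scheme (globals replaced by explicit parameters; the initialization is shown because the algorithm relies on it):

def mf_knapsack(i, j, weights, values, V):
    # Compute V[i][j] only if it is still unknown (-1); the recursion
    # touches only the subproblems that are actually needed.
    if V[i][j] < 0:
        if j < weights[i]:
            value = mf_knapsack(i - 1, j, weights, values, V)
        else:
            value = max(mf_knapsack(i - 1, j, weights, values, V),
                        values[i] + mf_knapsack(i - 1, j - weights[i],
                                                weights, values, V))
        V[i][j] = value
    return V[i][j]

def solve(weights, values, W):
    # Row 0 and column 0 are 0 (no items / no capacity); the rest start at -1.
    n = len(weights) - 1
    V = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]
    return mf_knapsack(n, W, weights, values, V)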

19. Memory functions (3)
• Example: W = 5
• Space and time efficiency
