
Chapter 6 Dynamic Programming



  1. Chapter 6 Dynamic Programming

  2. Algorithmic Paradigms • Greedy. Build up a solution incrementally, optimizing only some local criterion. • Divide-and-conquer. Break up a problem into two sub-problems, solve each sub-problem independently, and combine solutions to the sub-problems to form a solution to the original problem. • Dynamic programming. Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.

  3. Dynamic Programming History • Richard Bellman. Pioneered the systematic study of dynamic programming in the 1950s. • CHOICE OF THE NAME DYNAMIC PROGRAMMING. • Dynamic programming = planning over time. • Secretary of Defense was hostile to mathematical research. • Bellman sought an impressive name to avoid confrontation. • "it's impossible to use dynamic in a pejorative sense" • "something not even a Congressman could object to" Reference: Bellman, R. E. Eye of the Hurricane, An Autobiography.

  4. Dynamic Programming Applications • Areas. • Bioinformatics. • Control theory. • Information theory. • Operations research. • Computer science: theory, graphics, AI, systems, …. • Some famous dynamic programming algorithms. • Bellman-Ford for shortest path routing in networks. • Smith-Waterman for sequence alignment. • Viterbi for hidden Markov models. • Unix diff for comparing two files.

  5. Dynamic Programming • Dynamic programming is an algorithm design technique for optimization problems: often minimizing or maximizing. • Like divide and conquer, DP solves problems by combining solutions to subproblems. • Unlike divide and conquer, the subproblems are not independent: • Subproblems may share subsubproblems, • However, the solution to one subproblem does not affect the solutions to other subproblems of the same problem. • DP reduces computation by • Solving subproblems in a bottom-up fashion. • Storing the solution to a subproblem the first time it is solved. • Looking up the solution when the subproblem is encountered again. • Key: determine the structure of optimal solutions.

  6. Steps in Dynamic Programming • Characterize the structure of an optimal solution. • Define the value of an optimal solution recursively. • Compute optimal solution values either top-down with caching or bottom-up in a table. • Construct an optimal solution from computed values.
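The third step is the mechanical heart of DP, and both styles look alike in code. A minimal Python sketch of "top-down with caching" versus "bottom-up in a table", using the Fibonacci recurrence as a stand-in example (Fibonacci is not one of this chapter's problems; it just keeps the two styles easy to compare):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib_top_down(n):
        # Top-down with caching (memoization): recurse, store each result the first time.
        if n <= 1:
            return n
        return fib_top_down(n - 1) + fib_top_down(n - 2)

    def fib_bottom_up(n):
        # Bottom-up: fill a table from the smallest subproblem to the largest.
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    assert fib_top_down(30) == fib_bottom_up(30) == 832040

Either style visits each subproblem only once; the choice is mostly a matter of taste and of whether every subproblem is actually needed.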

  7. Examples • Longest increasing subsequence • Problem: • Given a sequence of numbers a1, a2, a3, …, an • Find the maximum value of k such that ai1 < ai2 < … < aik and i1 < i2 < … < ik. • Example: 0 5 2 8 6 3 6 9 7

  8. Naïve Algorithm • For every subsequence of the input sequence, check whether it's increasing. • Time: ? Θ(n 2^n). • 2^n subsequences to check. • Each subsequence takes Θ(n) time to check that it is increasing.

  9. Longest increasing subsequence • Scan the sequence 0 5 2 8 6 3 6 9 7 left to right (the leading 0 acts as a sentinel), recording for each position the length of the longest increasing subsequence that ends there:

      element:  0  5  2  8  6  3  6  9  7
      length:   0  1  1  2  2  2  3  4  4

• The answer is the maximum entry: 4, e.g. 2, 3, 6, 9.

  10. Longest increasing subsequence 5, 2, 8, 6, 3, 6, 9, 7
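A straightforward O(n²) bottom-up rendering of this idea in Python (the sequence and the answer 4 come from the slides; the function name is mine):

    def lis_length(a):
        # length[i] = length of the longest increasing subsequence ending at a[i]
        n = len(a)
        if n == 0:
            return 0
        length = [1] * n
        for i in range(n):
            for j in range(i):
                if a[j] < a[i]:
                    length[i] = max(length[i], length[j] + 1)
        return max(length)

    print(lis_length([5, 2, 8, 6, 3, 6, 9, 7]))  # 4, e.g. 2, 3, 6, 9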

  11. Longest Common Subsequence • Problem: Given 2 sequences, X = x1,...,xm and Y = y1,...,yn, find a common subsequence whose length is maximum. • The subsequence need not be consecutive, but must be in order. • Example strings: springtime, ncaa tournament, basketball, printing, akron, krzyzewski.

  12. Naïve Algorithm • For every subsequence of X, check whether it's a subsequence of Y. • Time: ? Θ(n 2^m). • 2^m subsequences of X to check. • Each subsequence takes Θ(n) time to check: scan Y for the first letter, then the second, and so on.

  13. Longest Common Subsequence • Problem: Given 2 sequences, X = x1,...,xm and Y = y1,...,yn, find a common subsequence whose length is maximum. • Example: springtime, printing

  14. Optimal Substructure • Theorem: Let Z = z1, . . . , zk be any LCS of X and Y. 1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1. 2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y, 3. or zk ≠ yn and Z is an LCS of X and Yn-1. • Notation: the prefix Xi = x1,...,xi is the first i letters of X.

  15. Optimal Substructure • Theorem: Let Z = z1, . . . , zk be any LCS of X and Y. 1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1. 2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y, 3. or zk ≠ yn and Z is an LCS of X and Yn-1. • Proof (case 1: xm = yn): Any sequence Z' that does not end in xm = yn can be made longer by adding xm = yn to the end. Therefore, • the longest common subsequence (LCS) Z must end in xm = yn, • Zk-1 is a common subsequence of Xm-1 and Yn-1, and • there is no longer CS of Xm-1 and Yn-1, or Z would not be an LCS.

  16. Optimal Substructure • Theorem: Let Z = z1, . . . , zk be any LCS of X and Y. 1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1. 2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y, 3. or zk ≠ yn and Z is an LCS of X and Yn-1. • Proof (case 2: xm ≠ yn and zk ≠ xm): Since Z does not end in xm, • Z is a common subsequence of Xm-1 and Y, and • there is no longer CS of Xm-1 and Y, or Z would not be an LCS.

  17. Recursive Solution • Define c[i, j] = length of an LCS of Xi and Yj, where Xi = x1,...,xi is the first i letters of X. • Then c[i, j] = 0 if i = 0 or j = 0; c[i, j] = c[i-1, j-1] + 1 if i, j > 0 and xi = yj; c[i, j] = max(c[i-1, j], c[i, j-1]) if i, j > 0 and xi ≠ yj. • We want c[m, n]. This gives a recursive algorithm and solves the problem. But does it solve it well?

  18. Recursive Solution • The recursion tree for c[springtime, printing] begins:

      c[springtime, printing]
        c[springtim, printing]
          c[springti, printing]   c[springtim, printin]
        c[springtime, printin]
          c[springtim, printin]   c[springtime, printi]
      ...

• Already at the third level the subproblem c[springtim, printin] appears twice, and the repetition compounds at every level. Cost: O(?)
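A small Python experiment (mine, not from the slides) makes the blow-up concrete: the plain recursion below counts its own calls, and the repeated subproblems show up as a call count far above the number of distinct subproblems:

    calls = 0

    def lcs_naive(x, y, i, j):
        # Plain recursive LCS length on prefixes x[:i], y[:j]; no caching.
        global calls
        calls += 1
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return lcs_naive(x, y, i - 1, j - 1) + 1
        return max(lcs_naive(x, y, i - 1, j), lcs_naive(x, y, i, j - 1))

    print(lcs_naive("springtime", "printing", 10, 8))  # 6
    print(calls)  # several thousand calls, versus only 11 * 9 = 99 distinct subproblems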

  19. Steps in Dynamic Programming • Characterize the structure of an optimal solution. • Define the value of an optimal solution recursively. • Compute optimal solution values either top-down with caching or bottom-up in a table. • Construct an optimal solution from computed values.

  20. Recursive Solution • Keep track of c[i, j] in a table of nm entries, filled either top-down or bottom-up:

              p  r  i  n  t  i  n  g
           0  0  0  0  0  0  0  0  0
        s  0  0  0  0  0  0  0  0  0
        p  0  1  1  1  1  1  1  1  1
        r  0  1  2  2  2  2  2  2  2
        i  0  1  2  3  3  3  3  3  3
        n  0  1  2  3  4  4  4  4  4
        g  0  1  2  3  4  4  4  4  5
        t  0  1  2  3  4  5  5  5  5
        i  0  1  2  3  4  5  6  6  6
        m  0  1  2  3  4  5  6  6  6
        e  0  1  2  3  4  5  6  6  6

• Cost: O(?)

  21. Computing the length of an LCS

    LCS-LENGTH (X, Y)
      m ← length[X]
      n ← length[Y]
      for i ← 1 to m
        do c[i, 0] ← 0
      for j ← 0 to n
        do c[0, j] ← 0
      for i ← 1 to m
        do for j ← 1 to n
          do if xi = yj
            then c[i, j] ← c[i-1, j-1] + 1
                 b[i, j] ← "↖"
            else if c[i-1, j] ≥ c[i, j-1]
              then c[i, j] ← c[i-1, j]
                   b[i, j] ← "↑"
              else c[i, j] ← c[i, j-1]
                   b[i, j] ← "←"
      return c and b

• b[i, j] points to the table entry whose subproblem we used in solving the LCS of Xi and Yj. • c[m, n] contains the length of an LCS of X and Y. • Time: O(mn)

  22. Constructing an LCS • Initial call is PRINT-LCS (b, X, m, n).

    PRINT-LCS (b, X, i, j)
      if i = 0 or j = 0
        then return
      if b[i, j] = "↖"
        then PRINT-LCS(b, X, i-1, j-1)
             print xi
      elseif b[i, j] = "↑"
        then PRINT-LCS(b, X, i-1, j)
      else PRINT-LCS(b, X, i, j-1)

• When b[i, j] = "↖", we have extended the LCS by one character. So the LCS consists of the entries with "↖" in them. • Time: O(m + n)
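A runnable Python version of slides 21-22 together (function and variable names are mine; the arrow values "diag", "up", "left" stand for ↖, ↑, ←):

    def lcs_length(x, y):
        # c[i][j] = length of an LCS of x[:i] and y[:j]; b[i][j] records the subproblem used.
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "diag"
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "up"
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "left"
        return c, b

    def print_lcs(b, x, i, j):
        # Walk the arrows back from (i, j), printing the LCS characters in order.
        if i == 0 or j == 0:
            return
        if b[i][j] == "diag":
            print_lcs(b, x, i - 1, j - 1)
            print(x[i - 1], end="")
        elif b[i][j] == "up":
            print_lcs(b, x, i - 1, j)
        else:
            print_lcs(b, x, i, j - 1)

    c, b = lcs_length("springtime", "printing")
    print(c[10][8])                     # 6
    print_lcs(b, "springtime", 10, 8)   # printi
    print()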

  23. Sequence Alignment

  24. String Similarity • How similar are two strings? ocurrance vs. occurrence

      o c u r r a n c e -
      o c c u r r e n c e      6 mismatches, 1 gap

      o c - u r r a n c e
      o c c u r r e n c e      1 mismatch, 1 gap

      o c - u r r - a n c e
      o c c u r r e - n c e    0 mismatches, 3 gaps

  25. Edit Distance • Applications. • Basis for Unix diff. • Speech recognition. • Computational biology. • Edit distance. [Levenshtein 1966, Needleman-Wunsch 1970] • Gap penalty δ; mismatch penalty αpq. • Cost = sum of gap and mismatch penalties. • Example: two alignments of CTGACCTACCT and CCTGACTACAT:

      C T G A C C T A C C T
      C C T G A C T A C A T      cost = αTC + αGT + αAG + 2αCA

      - C T G A C C T A C C T
      C C T G A C - T A C A T    cost = 2δ + αCA

  26. Scoring Matrices for Aligning DNA Sequences • Transition-transversion matrix:

           A   T   C   G
      A    1  -5  -5  -1
      T   -5   1  -1  -5
      C   -5  -1   1  -5
      G   -1  -5  -5   1

  27. Exercise • Align GCGC with AGC using the scoring matrix above and gap penalty = -8. • Fill in the DP table whose rows are labeled empty, G, C, G, C and whose columns are labeled empty, A, G, C.

  28. Sequence Alignment • Goal: Given two strings X = x1 x2 . . . xm and Y = y1 y2 . . . yn, find an alignment of minimum cost. • Def. An alignment M is a set of ordered pairs xi-yj such that each item occurs in at most one pair and there are no crossings. • Ex: CTACCG vs. TACATG. Sol: M = x2-y1, x3-y2, x4-y3, x5-y4, x6-y6:

      x: C T A C C - G
      y: - T A C A T G

  (x1 and y5 are left unmatched.)

  29. Sequence Alignment: Problem Structure • Let OPT(i, j) = min cost of aligning strings x1 x2 . . . xi and y1 y2 . . . yj. • Case 1: OPT matches xi-yj. • pay the mismatch cost for xi-yj + min cost of aligning the two strings x1 x2 . . . xi-1 and y1 y2 . . . yj-1 • Case 2a: OPT leaves xi unmatched. • pay the gap cost for xi and the min cost of aligning x1 x2 . . . xi-1 and y1 y2 . . . yj • Case 2b: OPT leaves yj unmatched. • pay the gap cost for yj and the min cost of aligning x1 x2 . . . xi and y1 y2 . . . yj-1

  30. Sequence Alignment: Algorithm

    Sequence-Alignment(m, n, x1x2...xm, y1y2...yn, δ, α) {
      for i = 0 to m
        M[i, 0] = iδ
      for j = 0 to n
        M[0, j] = jδ
      for i = 1 to m
        for j = 1 to n
          M[i, j] = min(α[xi, yj] + M[i-1, j-1],
                        δ + M[i-1, j],
                        δ + M[i, j-1])
      return M[m, n]
    }
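A small runnable Python version of this recurrence (the gap penalty delta = 2 and mismatch penalty alpha = 1 are illustrative values I chose, not from the slides; in general the mismatch cost would come from a matrix such as the one on slide 26):

    def alignment_cost(x, y, delta=2, alpha=1):
        # M[i][j] = min cost of aligning x[:i] and y[:j].
        m, n = len(x), len(y)
        M = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            M[i][0] = i * delta          # x[:i] aligned against the empty string
        for j in range(n + 1):
            M[0][j] = j * delta
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                mismatch = 0 if x[i - 1] == y[j - 1] else alpha
                M[i][j] = min(mismatch + M[i - 1][j - 1],   # match x_i with y_j
                              delta + M[i - 1][j],          # leave x_i unmatched
                              delta + M[i][j - 1])          # leave y_j unmatched
        return M[m][n]

    print(alignment_cost("ocurrance", "occurrence"))  # 3 = one gap (2) + one mismatch (1)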

  31. Coin-Changing: Dynamic Programming • Ex: make change for 140 when the denominations include 1, 37, 70, and 100. • Greedy (largest coin first): 100, 37, 1, 1, 1 (five coins). • Optimal: 70, 70 (two coins). • The greedy algorithm failed! • Can dynamic programming do better?
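A minimal bottom-up sketch of the standard minimum-coin dynamic program, assuming the denomination set {1, 37, 70, 100} suggested by the example (the slide does not list the full coin system):

    import math

    def min_coins(amount, coins=(1, 37, 70, 100)):
        # best[a] = fewest coins needed to make amount a (math.inf if unreachable).
        best = [0] + [math.inf] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and best[a - c] + 1 < best[a]:
                    best[a] = best[a - c] + 1
        return best[amount]

    print(min_coins(140))  # 2 (70 + 70), versus 5 coins for the greedy choice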

  32. Weighted Interval Scheduling

  33. Weighted Interval Scheduling • Weighted interval scheduling problem. • Job j starts at sj, finishes at fj, and has weight or value vj. • Two jobs are compatible if they don't overlap. • Goal: find a maximum-weight subset of mutually compatible jobs. • [Figure: jobs a through h drawn on a time axis from 0 to 11.]

  34. Unweighted Interval Scheduling • The greedy algorithm works if all weights are 1. • Consider jobs in ascending order of finish time. • Add a job to the subset if it is compatible with previously chosen jobs. • Observation. The greedy algorithm can fail spectacularly if arbitrary weights are allowed. • [Figure: two overlapping jobs on a time axis from 0 to 11, job a with weight 1 and job b with weight 999; greedy by finish time picks a and forgoes b.]

  35. Weighted Interval Scheduling • Notation. Label jobs by finish time: f1 ≤ f2 ≤ . . . ≤ fn. • Let p(j) = largest index i < j such that job i is compatible with j. • Ex (for the jobs 1-8 in the figure): p(8) = 5, p(7) = 3, p(2) = 0. • [Figure: jobs 1 through 8 on a time axis from 0 to 11.]

  36. Dynamic Programming: Binary Choice (optimal substructure) • Let OPT(j) = value of an optimal solution to the problem consisting of job requests 1, 2, ..., j. • Case 1: OPT selects job j. • can't use the incompatible jobs { p(j) + 1, p(j) + 2, ..., j - 1 } • must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., p(j) • Case 2: OPT does not select job j. • must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., j - 1 • Hence OPT(0) = 0 and OPT(j) = max( vj + OPT(p(j)), OPT(j-1) ) for j ≥ 1. • Cost?

  37. Weighted Interval Scheduling: Brute Force • Brute force algorithm.

    Input: n, s1,…,sn, f1,…,fn, v1,…,vn

    Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn.
    Compute p(1), p(2), …, p(n)

    Compute-Opt(j) {
      if (j = 0)
        return 0
      else
        return max(vj + Compute-Opt(p(j)), Compute-Opt(j-1))
    }

  38. Weighted Interval Scheduling: Brute Force • Ex. The number of recursive calls for the family of "layered" instances with p(1) = 0 and p(j) = j-2 grows like the Fibonacci sequence, i.e. like φ^n with φ = (1+√5)/2. • Observation. The recursive algorithm fails spectacularly because of redundant sub-problems ⇒ exponential algorithms. • [Figure: recursion tree for such an instance, with the same sub-problems recomputed many times.]

  39. Weighted Interval Scheduling: Memoization • Memoization. Store the result of each sub-problem in a cache (a global array M); look it up as needed.

    Input: n, s1,…,sn, f1,…,fn, v1,…,vn

    Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn.
    Compute p(1), p(2), …, p(n)

    M[0] = 0
    for j = 1 to n
      M[j] = empty

    M-Compute-Opt(j) {
      if (M[j] is empty)
        M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
      return M[j]
    }

  40. Weighted Interval Scheduling: Running Time • Claim. The memoized version of the algorithm takes O(n log n) time. • Sort by finish time: O(n log n). • Computing p(·): O(n) after sorting by start time. • M-Compute-Opt(j): each invocation takes O(1) time and either • (i) returns an existing value M[j], or • (ii) fills in one new entry M[j] and makes two recursive calls. • Progress measure Φ = # of nonempty entries of M[]. • initially Φ = 0; throughout, Φ ≤ n. • (ii) increases Φ by 1 ⇒ at most 2n recursive calls. • Overall running time of M-Compute-Opt(n) is O(n). ▪ • Note: O(n) overall if jobs are presorted by start and finish times.

  41. Weighted Interval Scheduling: Finding a Solution • Q. The dynamic programming algorithm computes the optimal value. What if we want the solution itself? • A. Do some post-processing. • # of recursive calls ≤ n ⇒ O(n).

    Run M-Compute-Opt(n)
    Run Find-Solution(n)

    Find-Solution(j) {
      if (j = 0)
        output nothing
      else if (vj + M[p(j)] > M[j-1])
        print j
        Find-Solution(p(j))
      else
        Find-Solution(j-1)
    }

  42. Weighted Interval Scheduling: Bottom-Up • Bottom-up dynamic programming. Unwind the recursion.

    Input: n, s1,…,sn, f1,…,fn, v1,…,vn

    Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn.
    Compute p(1), p(2), …, p(n)

    Iterative-Compute-Opt {
      M[0] = 0
      for j = 1 to n
        M[j] = max(vj + M[p(j)], M[j-1])
    }
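A self-contained Python sketch combining the bottom-up recurrence with the Find-Solution post-processing of slide 41 (the three-job instance at the end is illustrative, not from the slides; names are mine):

    def weighted_interval_scheduling(jobs):
        # jobs: list of (start, finish, value); returns (optimal value, chosen jobs).
        jobs = sorted(jobs, key=lambda job: job[1])       # sort by finish time
        n = len(jobs)
        s = [0] + [job[0] for job in jobs]                # 1-based arrays; index 0 is a dummy
        f = [0] + [job[1] for job in jobs]
        v = [0] + [job[2] for job in jobs]

        # p[j] = largest i < j with f[i] <= s[j] (0 if none); a simple O(n^2) version.
        # The slides note p can be computed in O(n) after sorting by start time.
        p = [0] * (n + 1)
        for j in range(1, n + 1):
            for i in range(j - 1, 0, -1):
                if f[i] <= s[j]:
                    p[j] = i
                    break

        M = [0] * (n + 1)
        for j in range(1, n + 1):                         # bottom-up table fill
            M[j] = max(v[j] + M[p[j]], M[j - 1])

        chosen, j = [], n                                 # Find-Solution: trace choices back
        while j > 0:
            if v[j] + M[p[j]] > M[j - 1]:
                chosen.append(jobs[j - 1])
                j = p[j]
            else:
                j -= 1
        return M[n], list(reversed(chosen))

    print(weighted_interval_scheduling([(0, 3, 1), (1, 10, 999), (4, 7, 5)]))
    # (999, [(1, 10, 999)]): the high-weight job beats the two jobs it overlaps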

  43. Exercise • Problem: Find mutually compatible jobs that maximize total value.

  44. Knapsack Problem

  45. Knapsack Problem • Knapsack problem. • Given n objects and a "knapsack." • Item i weighs wi > 0 kilograms and has value vi > 0. • The knapsack has a capacity of W kilograms. • Goal: fill the knapsack so as to maximize the total value. • Greedy method: repeatedly add the item with the maximum ratio vi/wi. • Ex (W = 11):

      Item  Value  Weight
        1      1      1
        2      6      2
        3     18      5
        4     22      6
        5     28      7

• Greedy picks {5, 2, 1} and achieves only value = 35 ⇒ not optimal. • Ex: {3, 4} has value 40.

  46. Dynamic Programming: False Start • Let OPT(i)=max profit for subset of items 1,…, i. • Case 1: OPT does not select item i. • OPT selects best of { 1, 2, …, i-1 } • Case 2: OPT selects item i. • accepting item i does not immediately imply that we will have to reject other items • without knowing what other items were selected before i, we don't even know if we have enough room for i • Conclusion: Need more info about sub-problems!

  47. Dynamic Programming: Adding a New Variable • Let OPT(i, w) = max profit subset of items 1, …, i with weight limit w. • Case 1: OPT does not select item i. • OPT selects the best of { 1, 2, …, i-1 } using weight limit w. • Case 2: OPT selects item i. • new weight limit = w – wi • OPT selects the best of { 1, …, i–1 } using the new weight limit. • Hence OPT(i, w) = 0 if i = 0; OPT(i, w) = OPT(i-1, w) if wi > w; otherwise OPT(i, w) = max( OPT(i-1, w), vi + OPT(i-1, w - wi) ).

  48. Knapsack Problem: Bottom-Up • Knapsack. Fill an n-by-W array.

    Input: n, w1,…,wN, v1,…,vN

    for w = 0 to W
      M[0, w] = 0

    for i = 1 to n
      for w = 1 to W
        if (wi > w)
          M[i, w] = M[i-1, w]
        else
          M[i, w] = max {M[i-1, w], vi + M[i-1, w-wi]}

    return M[n, W]
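A direct Python rendering of this table fill, run on the item set from slide 45 (function name mine):

    def knapsack(values, weights, W):
        # M[i][w] = best achievable value using items 1..i with weight limit w.
        n = len(values)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(1, W + 1):
                if weights[i - 1] > w:                    # item i does not fit
                    M[i][w] = M[i - 1][w]
                else:                                     # skip item i, or take it
                    M[i][w] = max(M[i - 1][w],
                                  values[i - 1] + M[i - 1][w - weights[i - 1]])
        return M[n][W]

    print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # 40, achieved by items {3, 4}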

  49. Knapsack Algorithm • Fill the (n+1)-by-(W+1) table row by row using M[i, w] = max { M[i-1, w], vi + M[i-1, w-wi] }. Items: values 1, 6, 18, 22, 28 with weights 1, 2, 5, 6, 7; W = 11.

                       w = 0  1  2  3  4   5   6   7   8   9  10  11
      ∅                    0  0  0  0  0   0   0   0   0   0   0   0
      { 1 }                0  1  1  1  1   1   1   1   1   1   1   1
      { 1, 2 }             0  1  6  7  7   7   7   7   7   7   7   7
      { 1, 2, 3 }          0  1  6  7  7  18  19  24  25  25  25  25
      { 1, 2, 3, 4 }       0  1  6  7  7  18  22  24  28  29  29  40
      { 1, 2, 3, 4, 5 }    0  1  6  7  7  18  22  28  29  34  35  40

• Example entry: M[3, 5] = max { M[2, 5], 18 + M[2, 5-5] } = max { 7, 18 } = 18. • OPT: { 4, 3 }, value = 22 + 18 = 40.

  50. Knapsack Problem: Running Time • Running time: Θ(n W). • Not polynomial in the input size! • "Pseudo-polynomial." • The decision version of Knapsack is NP-complete. • Knapsack approximation algorithm. There exists a polynomial-time algorithm that produces a feasible solution with value within 0.01% of the optimum.
