Dynamic Programming

Presentation Transcript


  1. Dynamic Programming Lecture 7 Prof. Dr. Aydın Öztürk

  2. Introduction • Dynamic Programming (DP) applies to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. • As choices are made, subproblems of the same form arise. • DP is effective when a given subproblem may arise from more than one partial set of choices. • The key technique is to store the solution to each subproblem in case it should appear again.

  3. Introduction (cont.) • Divide-and-conquer algorithms partition the problem into independent subproblems. • Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. (In this case a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subproblems.)

  4. Introduction (cont.) • A dynamic-programming algorithm solves every subproblem just once and then saves its answer in a table.

  5. Assembly-line Scheduling

  6. Assembly-line Scheduling • If we are given a list of which stations to use in line 1 and which to use in line 2, then the total time can be computed in T(n) = Θ(n). • There are 2^n possible ways to choose stations. Thus, obtaining the solution by enumerating all possible ways and computing how long each takes would require Ω(2^n) time.
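To make the contrast concrete, here is a small Python sketch of that brute-force enumeration (the function name brute_force and the representation, with a[i][j] for station times, t[i][j] for transfer times, and e and x for entry and exit times, are assumptions for illustration, not taken from the slides):

from itertools import product

def brute_force(a, t, e, x, n):
    best = float("inf")
    for lines in product((0, 1), repeat=n):      # one of the 2^n assignments
        time = e[lines[0]] + a[lines[0]][0]      # enter the factory
        for j in range(1, n):
            if lines[j] != lines[j - 1]:         # transfer between the lines
                time += t[lines[j - 1]][j - 1]
            time += a[lines[j]][j]               # process at station j
        best = min(best, time + x[lines[-1]])    # leave the factory
    return best

Each assignment is evaluated in Θ(n) time, and there are 2^n assignments, which is exactly the Ω(2^n) cost stated above.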

  7. The steps of a dynamic-programming algorithm • Characterize the structure of an optimal solution. • Recursively define the value of an optimal solution. • Compute the value of an optimal solution in a bottom-up fashion. • Construct an optimal solution from computed information.

  8. Step 1: The structure of the fastest way through the factory • Consider the fastest possible way (fpw) to get from the starting point through station S1,j. • If j = 1, there is only one way that the chassis could have gone. • For j = 2, 3, ..., n there are two choices: • The chassis could have come from S1,j-1 and then gone directly to station S1,j. • The chassis could have come from S2,j-1 and then transferred to station S1,j.

  9. Step 1: The structure of the fastest way through the factory (cont.) • First, suppose that the fastest way (fw) through station S1,j is through station S1,j-1. • The key observation is that the chassis must have taken a fw from the starting point through station S1,j-1.

  10. Step 1: The structure of the fastest way through the factory (cont.) • Similarly, suppose that the fw through station S1,j is through station S2,j-1. • We observe that the chassis must have taken a fw from the starting point through station S2,j-1.

  11. Step 1: The structure of the fastest way through the factory (cont.) • We can say that for assembly-line scheduling, an optimal solution to a problem contains within it an optimal solution to subproblems (finding the fw through either S1,j-1 or S2,j-1). • We refer to this property as optimal substructure.

  12. The fw through station S1,j • Either the fw through station S1,j-1 and then directly through station S1,j, or • the fw through station S2,j-1, a transfer from line 2 to line 1, and then through station S1,j.

  13. The fw through station S2,j • Either the fw through station S2,j-1 and then directly through station S2,j, or • the fw through station S1,j-1, a transfer from line 1 to line 2, and then through station S2,j.

  14. Step 1: The structure of the fastest way through the factory (cont.) • To solve the problem of finding the fw through station j of either line, we solve the subproblems of finding the fw through station j-1 on both lines. • Thus, we can build an optimal solution to an instance of the assembly-line scheduling problem by building on optimal solutions to subproblems.

  15. Step 2: A recursive solution • We define the value of an optimal solution recursively in terms of the optimal solutions to subproblems. • Let fi[j] denote the fastest possible time to get a chassis from the starting point through station Si,j. • Let f* denote the fastest time to get a chassis all the way through the factory.

  16. Step 2: A recursive solution (cont.)

  17. Step 2: A recursive solution (cont.)
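The equations that slides 16 and 17 displayed did not survive extraction. In the standard formulation (with ei denoting the entry time onto line i, ai,j the processing time at station Si,j, and ti,j the transfer time away from line i after station j; these symbols are assumed here, not taken from the transcript), the recurrences are:

f1[1] = e1 + a1,1
f1[j] = min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j )  for j ≥ 2

f2[1] = e2 + a2,1
f2[j] = min( f2[j-1] + a2,j , f1[j-1] + t1,j-1 + a2,j )  for j ≥ 2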

  18. Step 2: A recursive solution: An example
  Line 1: entry time e1 = 2, station times a1,1..a1,6 = 7, 9, 3, 4, 8, 4, transfer times t1,1..t1,5 = 2, 3, 1, 3, 4, exit time x1 = 3
  Line 2: entry time e2 = 4, station times a2,1..a2,6 = 8, 5, 6, 4, 5, 7, transfer times t2,1..t2,5 = 2, 1, 2, 2, 1, exit time x2 = 2

  j        1    2    3    4    5    6
  f1[j]    9   18   20   24   32   35
  f2[j]   12   16   22   25   30   37
  l1[j]    -    1    2    1    1    2
  l2[j]    -    1    2    1    2    2

  f* = 38, l* = 1

  19. Step 2: A recursive solution (cont.) • The fi[j] values give the values of optimal solutions to subproblems. • Let li[j] be the line number, 1 or 2, whose station j-1 is used in a fw through station Si,j. • We also define l* to be the line whose station n is used in a fw through the entire factory.

  20. Step 2: A recursive solution (cont.) The fw through the entire factory:
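The formula on this slide is missing from the transcript; with x1 and x2 denoting the exit times from lines 1 and 2 (assumed notation), the standard form is

f* = min( f1[n] + x1 , f2[n] + x2 )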

  21. Step 3: Computing the fastest times We could write a simple recursive algorithm based on the equations for f1[j] and f2[j] to compute the fw through the factory. However, its running time is exponential. Let ri(j) be the number of references made to fi[j] in such a recursive algorithm:
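The formula for ri(j) did not survive extraction. One can show that r1(j) = r2(j) = 2^(n-j): fi[n] is referenced once (by f*), and each fi[j] with j < n is referenced by both f1[j+1] and f2[j+1], so the total number of references, and hence the running time, is exponential in n.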

  22. Step 3: Computing the fastest times (cont.) • We observe that each value of fi[j] depends only on the values of f1[j-1] and f2[j-1]. • By computing the fi[j] values in order of increasing station number j, we can compute the fw through the factory in Θ(n) time.

  23. The fastest way procedure
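The slide's pseudocode for this procedure did not survive extraction. Below is a minimal Python sketch of the bottom-up computation described in Step 3; the function name fastest_way, the 0-based list representation, and the parameter names a, t, e, x, n are assumptions for illustration, not the lecture's own code.

def fastest_way(a, t, e, x, n):
    # a[i][j]: processing time at station j of line i (0-based indices)
    # t[i][j]: transfer time away from line i after station j
    # e[i], x[i]: entry and exit times for line i
    f1 = [0] * n; l1 = [0] * n   # f1[j], l1[j] as defined on slides 15 and 19
    f2 = [0] * n; l2 = [0] * n
    f1[0] = e[0] + a[0][0]
    f2[0] = e[1] + a[1][0]
    for j in range(1, n):
        # Fastest way into S1,j: stay on line 1, or transfer from line 2.
        if f1[j-1] + a[0][j] <= f2[j-1] + t[1][j-1] + a[0][j]:
            f1[j], l1[j] = f1[j-1] + a[0][j], 1
        else:
            f1[j], l1[j] = f2[j-1] + t[1][j-1] + a[0][j], 2
        # Fastest way into S2,j: stay on line 2, or transfer from line 1.
        if f2[j-1] + a[1][j] <= f1[j-1] + t[0][j-1] + a[1][j]:
            f2[j], l2[j] = f2[j-1] + a[1][j], 2
        else:
            f2[j], l2[j] = f1[j-1] + t[0][j-1] + a[1][j], 1
    # Fastest way through the entire factory, including exit times.
    if f1[n-1] + x[0] <= f2[n-1] + x[1]:
        return f1[n-1] + x[0], 1, l1, l2
    return f2[n-1] + x[1], 2, l1, l2

# With the data of the example on slide 18 this returns f* = 38 and l* = 1:
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
fstar, lstar, l1, l2 = fastest_way(a, t, e, x, 6)
print(fstar, lstar)   # 38 1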

  24. The fastest way procedure (cont.) The stations chosen for the example, printed from the last station back to the first:
  line 1, station 6
  line 2, station 5
  line 2, station 4
  line 1, station 3
  line 2, station 2
  line 1, station 1

  25. Matrix-Chain multiplication • We are given a sequence (chain) <A1, A2, ..., An> of n matrices, • and we wish to compute the product A1A2...An.

  26. Matrix-Chain multiplication (cont.) • Matrix multiplication is associative, and so all parenthesizations yield the same product. • For example, if the chain of matrices is <A1, A2, A3, A4>, then the product A1A2A3A4 can be fully parenthesized in five distinct ways: (A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

  27. Matrix-Chain multiplication
  MATRIX-MULTIPLY(A, B)
    if columns[A] ≠ rows[B]
      then error "incompatible dimensions"
      else for i ← 1 to rows[A]
             do for j ← 1 to columns[B]
                  do C[i,j] ← 0
                     for k ← 1 to columns[A]
                       do C[i,j] ← C[i,j] + A[i,k]·B[k,j]
    return C

  28. Matrix-Chain multiplication (cont.) Cost of the matrix multiplication: multiplying a p x q matrix by a q x r matrix with MATRIX-MULTIPLY takes p·q·r scalar multiplications. An example:
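The slide's own figures did not survive extraction; the following worked illustration assumes a three-matrix chain with dimensions 10 x 100, 100 x 5, and 5 x 50.

((A1A2)A3): 10·100·5 + 10·5·50 = 5000 + 2500 = 7500 scalar multiplications
(A1(A2A3)): 100·5·50 + 10·100·50 = 25000 + 50000 = 75000 scalar multiplications

The first parenthesization is 10 times faster, so the order in which the products are grouped matters.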

  29. Matrix-Chain multiplication (cont.)

  30. Matrix-Chain multiplication (cont.) • The problem: Given a chain <A1, A2, ..., An> of n matrices, where matrix Ai has dimension pi-1 x pi, fully parenthesize the product A1A2...An in a way that minimizes the number of scalar multiplications.

  31. Matrix-Chain multiplication (cont.) • Counting the number of parenthesizations:
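The formula on this slide is missing from the transcript; the standard recurrence for the number P(n) of full parenthesizations of a chain of n matrices is

P(1) = 1
P(n) = sum over k = 1, ..., n-1 of P(k)·P(n-k)  for n ≥ 2

(the outermost split falls between Ak and Ak+1, and each half is then parenthesized independently).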

  32. Matrix-Chain multiplication (cont.) • Based on the above definition we have

  33. Matrix-Chain multiplication (cont.) • Substituting the initial value for b we obtain

  34. Matrix-Chain multiplication (cont.) • Let B(x) be the generating function

  35. Matrix-Chain multiplication (cont.) • Substituting in B(x) we can write that

  36. Matrix-Chain multiplication (cont.) • Summing over both sides

  37. Matrix-Chain multiplication (cont.) • One way to express B(x) in closed form is Expanding into a Taylor series about x=0 we get Comparing the coefficients of x we finally show that
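The generating-function expressions on slides 33 to 37 did not survive extraction. The end result of this counting argument is the standard one: P(n) equals the (n-1)st Catalan number, P(n) = (1/n)·C(2(n-1), n-1), which grows as Ω(4^n / n^(3/2)), so checking all parenthesizations by brute force is hopeless.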

  38. Matrix-Chain multiplication (cont.) Step 1: The structure of an optimal parenthesization (op) • Find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems. • Let Ai...j, where i ≤ j, denote the matrix product Ai Ai+1 ... Aj. • Any parenthesization of Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some i ≤ k < j.

  39. Matrix-Chain multiplication (cont.) The optimal substructure of the problem: • Suppose that an op of Ai Ai+1 ... Aj splits the product between Ak and Ak+1. Then the parenthesization of the subchain Ai Ai+1 ... Ak within this parenthesization of Ai Ai+1 ... Aj must be an op of Ai Ai+1 ... Ak (and similarly for the subchain Ak+1 ... Aj).

  40. Matrix-Chain multiplication (cont.) Step 2: A recursive solution: • Let m[i,j] be the minimum number of scalar multiplications needed to compute the matrix Ai...j, where 1 ≤ i ≤ j ≤ n. • Thus, the cost of a cheapest way to compute A1...n would be m[1,n]. • Assume that the op splits the product Ai...j between Ak and Ak+1, where i ≤ k < j. • Then m[i,j] = the minimum cost for computing Ai...k and Ak+1...j, plus the cost of multiplying these two matrices together.

  41. Matrix-Chain multiplication (cont.) Recursive definition for the minimum cost of a parenthesization:
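The displayed recurrence did not survive extraction; from the definitions above, and consistent with the pseudocode on slide 45, it reads

m[i,j] = 0                                                        if i = j
m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k+1,j] + pi-1·pk·pj )  if i < j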

  42. Matrix-Chain multiplication (cont.) To help us keep track of how to construct an optimal solution, we define s[i,j] to be a value of k at which we can split the product Ai...j to obtain an optimal parenthesization. That is, s[i,j] equals a value k such that m[i,j] = m[i,k] + m[k+1,j] + pi-1·pk·pj.

  43. Matrix-Chain multiplication (cont.) Step 3: Computing the optimal costs It is easy to write a recursive algorithm based on the above recurrence for computing m[i,j]. But its running time would be exponential!

  44. Matrix-Chain multiplication (cont.) Step 3: Computing the optimal costs We compute the optimal cost by using a tabular, bottom-up approach.

  45. Matrix-Chain multiplication (cont.)
  MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
      do m[i,i] ← 0
    for l ← 2 to n
      do for i ← 1 to n - l + 1
           do j ← i + l - 1
              m[i,j] ← ∞
              for k ← i to j - 1
                do q ← m[i,k] + m[k+1,j] + pi-1·pk·pj
                   if q < m[i,j]
                     then m[i,j] ← q
                          s[i,j] ← k
    return m and s
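For completeness, here is a runnable Python sketch of the same bottom-up computation (the function name, the 1-based table layout, and the usage line are illustrative assumptions, not the lecture's own code):

import math

def matrix_chain_order(p):
    # p lists the dimensions: matrix Ai is p[i-1] x p[i], so a chain of
    # n matrices corresponds to len(p) == n + 1.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum cost of Ai...j
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point k
    for l in range(2, n + 1):                   # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# The six-matrix example from slide 46: p = [30, 35, 15, 5, 10, 20, 25].
m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])   # 15125, matching the m table on slide 47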

  46. Matrix-Chain multiplication (cont.) An example:
  matrix   dimension
  A1       30 x 35
  A2       35 x 15
  A3       15 x 5
  A4       5 x 10
  A5       10 x 20
  A6       20 x 25

  47. Matrix-Chain multiplication (cont.) The m and s tables computed for the example:

  m[i,j]   j=1     j=2     j=3     j=4     j=5     j=6
  i=1        0   15750    7875    9375   11875   15125
  i=2                0    2625    4375    7125   10500
  i=3                        0     750    2500    5375
  i=4                                0    1000    3500
  i=5                                        0    5000
  i=6                                               0

  s[i,j]   j=2   j=3   j=4   j=5   j=6
  i=1        1     1     3     3     3
  i=2              2     3     3     3
  i=3                    3     3     3
  i=4                          4     5
  i=5                                5

  48. Matrix-Chain multiplication (cont.) Step 4: Constructing an optimal solution An optimal solution can be constructed from the computed information stored in the table s[1...n, 1...n]. We know that the final matrix multiplication in computing A1...n optimally is A1...s[1,n] · As[1,n]+1...n. The earlier matrix multiplications can be computed recursively.

  49. Matrix-Chain multiplication (cont.)
  PRINT-OPTIMAL-PARENS(s, i, j)
    if i = j
      then print "Ai"
      else print "("
           PRINT-OPTIMAL-PARENS(s, i, s[i,j])
           PRINT-OPTIMAL-PARENS(s, s[i,j]+1, j)
           print ")"
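With the s table computed for the six-matrix example on slide 47, the call PRINT-OPTIMAL-PARENS(s, 1, 6) prints the optimal parenthesization ((A1(A2A3))((A4A5)A6)).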

  50. Matrix-Chain multiplication (cont.) RUNNING TIME: The recursive solution takes exponential time. MATRIX-CHAIN-ORDER yields a running time of O(n^3).
