Dynamic Programming
Dr. M. Sakalli, Marmara University
Matrix Chain Problem · Assembly-line scheduling · Elements of dynamic programming
Picture reference: http://www.flickr.com/photos/7271221@N04/3408234040/sizes/l/in/photostream/ (crane strokes)
Dynamic Programming (DP) Like Divide and Conquer (D&C), DP solves a problem by partitioning it into sub-problems and combining their solutions. The differences are that: D&C is top-down, while DP is bottom-up (although memoization permits a top-down formulation). In D&C the sub-problems are independent of each other; in DP they are not. Therefore a DP algorithm solves every sub-problem just ONCE, saves its answer in a TABLE, and then reuses it: memoization. Optimization problems: many solutions are possible and each has a value. A solution built from optimal sub-solutions is called an optimal solution to the problem — an optimal solution, not necessarily the unique optimum. Shortest path example. The development of a DP algorithm can be described in four steps: 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. Construct an optimal solution from computed information.
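A minimal sketch (not from the slides; the names are illustrative) contrasting plain divide-and-conquer recursion with a memoized, table-reusing version, using Fibonacci numbers as the smallest example of overlapping sub-problems:

```python
calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    # plain divide and conquer: the same sub-problems are recomputed over and over
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

memo = {}

def fib_memo(n):
    # DP via memoization: each sub-problem is solved ONCE, its answer kept in a table
    calls["memo"] += 1
    if n not in memo:
        memo[n] = n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
    return memo[n]

print(fib_naive(20), calls["naive"])   # 6765, computed with 21891 calls
print(fib_memo(20), calls["memo"])     # 6765, with far fewer calls
```

The naive call count grows exponentially while the memoized count is linear, which is exactly the distinction the slide draws between D&C and DP.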
Assembly-Line Scheduling • ei: time to enter assembly line i • xi: time to exit assembly line i • ti,j: time to transfer away from line i after station j • ai,j: processing time at station Si,j. • Brute-force approach • Enumerate all possible sequences through lines i ∈ {1, 2}. • For each sequence of n stations, compute the passing time (the computation takes Θ(n) time). • Record the sequence with the smallest passing time. • However, there are too many possible sequences: 2^n in total.
DP Step 1: Analyze the structure of the fastest way through the paths. Seeking an optimal substructure: the fastest way (min f*) through a station Si,j contains the fastest way from the start to that station through either S1,j-1 or S2,j-1. For j = 1, there is only one possibility. For j = 2, 3, …, n, two possibilities: arrive from S1,j-1 or from S2,j-1 — from S1,j-1 with additional time a1,j, or from S2,j-1 with additional time t2,j-1 + a1,j. Suppose the fastest way through S1,j is through S1,j-1; then the chassis must have taken a fastest way from the starting point through S1,j-1. Why? Otherwise, substituting a faster route to S1,j-1 would give a faster way through S1,j — a contradiction. A similar argument holds for S2,j-1. An optimal solution to a problem contains within it optimal solutions to sub-problems: the fastest way through station Si,j contains within it the fastest way through station S1,j-1 or S2,j-1. Thus we can construct an optimal solution to a problem from optimal solutions to sub-problems.
DP Step 2: A recursive solution. Let fi[j] be the fastest time from the start through station Si,j, and f* the fastest time through the whole factory:
f1[1] = e1 + a1,1, and for j ≥ 2: f1[j] = min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j ) (symmetrically for f2[j]);
f* = min( f1[n] + x1 , f2[n] + x2 ).
DP Step 3: Computing the fastest times in Θ(n) time. Problem with evaluating the recurrence directly: fi[j] is referenced ri(j) = 2^(n-j) times, so f1[1] alone is referenced 2^(n-1) times, and the total number of references to all fi[j] is Θ(2^n). Filling in the fi[j] values bottom-up, in order of increasing j, avoids this.
Running time: Θ(n). • Step 4: Construct the fastest way through the factory by recording, at each station, which line the previous station was on, and tracing back from the faster exit.
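Steps 2–4 can be sketched as a single bottom-up routine. This is an assumed Python rendering (0-indexed lines, illustrative instance data — not taken from the slides): `f[i][j]` holds the fastest time through station j on line i, and `line[i][j]` records the predecessor line for reconstructing the route.

```python
def fastest_way(a, t, e, x):
    """a[i][j]: station times; t[i][j]: transfer-away times after station j;
    e[i]/x[i]: entry/exit times for lines i = 0, 1."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]        # f[i][j]: fastest time through S_{i,j}
    line = [[0] * n for _ in range(2)]     # line[i][j]: line used at station j-1
    f[0][0], f[1][0] = e[0] + a[0][0], e[1] + a[1][0]
    for j in range(1, n):
        for i in (0, 1):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            f[i][j], line[i][j] = min((stay, i), (switch, 1 - i))
    # Step 4: trace the optimal route backwards from the faster exit.
    total, i = min((f[0][n - 1] + x[0], 0), (f[1][n - 1] + x[1], 1))
    route = [i]
    for j in range(n - 1, 0, -1):
        i = line[i][j]
        route.append(i)
    return total, route[::-1]

# illustrative 2-line, 6-station instance
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
print(fastest_way(a, t, e, x))   # fastest total time is 38 for this instance
```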
Matrix-chain Multiplication Problem definition: Given a chain of matrices A1, A2, ..., An, where matrix Ai has dimension pi-1×pi, find the order of matrix multiplications minimizing the total number of scalar multiplications needed to compute the final product. Let A be a [p, q] matrix and B a [q, r] matrix; then computing C = AB costs pqr scalar multiplications. In the matrix-chain multiplication problem, the matrices are not actually multiplied; the aim is to determine an order for multiplying them that has the lowest cost. The time invested in determining an optimal order is then more than paid for by the time saved later when the matrix multiplications are actually performed.
• C(p,r) = A(p,q) * B(q,r)
• for i ← 1 to p
•  for j ← 1 to r
•   C[i,j] ← 0
•   for k ← 1 to q
•    C[i,j] ← C[i,j] + A[i,k]*B[k,j]
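The pqr cost claim can be checked directly. A small sketch (hypothetical helper, not from the slides) that multiplies two list-of-list matrices while counting scalar multiplications:

```python
def multiply_counted(A, B):
    """Return (C, count): the product C = A*B and the number of scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert all(len(row) == q for row in A), "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    count = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                count += 1           # one scalar multiplication per innermost step
    return C, count

A = [[1, 2, 3], [4, 5, 6]]           # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]         # 3 x 2
C, count = multiply_counted(A, B)
print(C, count)                      # count = 2 * 3 * 2 = 12
```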
Example given in class. Suppose we want to multiply a sequence of matrices A1 A2 A3 A4 with dimensions 2×3, 3×5, 5×7, 7×2. Remember: matrix multiplication is associative but not commutative, so every parenthesization gives the same product, at different cost.
1. (A1A2)(A3A4): 2·3·5 + 5·7·2 + 2·5·2 = 30 + 70 + 20 = 120 scalar multiplications.
2. ((A1A2)A3)A4: 2·3·5 + 2·5·7 + 2·7·2 = 30 + 70 + 28 = 128.
3. A1(A2(A3A4)): 5·7·2 + 3·5·2 + 2·3·2 = 70 + 30 + 12 = 112.
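The three totals above can be recomputed mechanically. This is an assumed helper (not in the slides) that evaluates the cost of a given parenthesization, written as nested tuples where a leaf is a (rows, cols) dimension pair:

```python
def cost(expr):
    """Return ((rows, cols), scalar-multiplication cost) of a parenthesized chain."""
    if len(expr) == 2 and all(isinstance(v, int) for v in expr):
        return expr, 0                       # a single matrix costs nothing
    left, right = expr
    (p, q), cost_left = cost(left)
    (q2, r), cost_right = cost(right)
    assert q == q2, "inner dimensions must agree"
    return (p, r), cost_left + cost_right + p * q * r

A1, A2, A3, A4 = (2, 3), (3, 5), (5, 7), (7, 2)
print(cost(((A1, A2), (A3, A4)))[1])   # 120
print(cost((((A1, A2), A3), A4))[1])   # 128
print(cost((A1, (A2, (A3, A4))))[1])   # 112
```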
Parenthesization The aim is to fully parenthesize the product of matrices so as to minimize scalar multiplications. For example, for the product A1A2A3A4, one full parenthesization is ((A1A2)A3)A4. A product of matrices is fully parenthesized if it is either a single matrix, or the product of two fully parenthesized matrix products, surrounded by parentheses. Brute-force approach: Enumerate all possible parenthesizations. Compute the number of scalar multiplications of each parenthesization. Select the parenthesization needing the fewest scalar multiplications. The number of parenthesizations of a product of n matrices, denoted P(n), satisfies the recurrence
P(n) = 1 if n = 1; P(n) = Σ_{k=1..n-1} P(k)·P(n-k) if n ≥ 2.
This is the sequence of Catalan numbers, growing as Ω(4^n / n^(3/2)); even a cruder analysis of the recurrence gives Ω(2^n). The brute-force approach is therefore inefficient.
Catalan numbers: the number of ways in which parentheses can be placed in a sequence of numbers to be multiplied, two at a time • 3 numbers: • (1 (2 3)), ((1 2) 3) • 4 numbers: • (1 (2 (3 4))), (1 ((2 3) 4)), ((1 2) (3 4)), ((1 (2 3)) 4), (((1 2) 3) 4) • 5 numbers: • (1 (2 (3 (4 5)))), (1 (2 ((3 4) 5))), (1 ((2 3) (4 5))), (1 ((2 (3 4)) 5)), • (1 (((2 3) 4) 5)), ((1 2) (3 (4 5))), ((1 2) ((3 4) 5)), ((1 (2 3)) (4 5)), • ((1 (2 (3 4))) 5), ((1 ((2 3) 4)) 5), (((1 2) 3) (4 5)), (((1 2) (3 4)) 5), • (((1 (2 3)) 4) 5) ((((1 2) 3) 4) 5)
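The counts enumerated above (2 ways for 3 factors, 5 for 4, 14 for 5) follow directly from the recurrence P(1) = 1, P(n) = Σ P(k)·P(n−k). A short check, assumed code rather than anything from the slides:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of full parenthesizations of a product of n factors (Catalan numbers)."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 6)])   # [1, 1, 2, 5, 14]
```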
With DP. DP Step 1: structure of an optimal parenthesization. Let Ai..j (i ≤ j) denote the matrix resulting from AiAi+1…Aj. Any parenthesization of AiAi+1…Aj must split the product between Ak and Ak+1 for some k, i ≤ k < j:
(Ai Ai+1 … Ak)(Ak+1 … Aj)
The cost = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k by Ak+1..j. If k is the split position in an optimal parenthesization of AiAi+1…Aj, then the parenthesization of the “prefix” subchain AiAi+1…Ak within it must itself be an optimal parenthesization of AiAi+1…Ak — otherwise, substituting a cheaper one would improve the whole. Similarly for the suffix Ak+1…Aj.
DP Step 2: a recursive relation (recursively define the value of an optimal solution). Let m[i,j] be the minimum number of scalar multiplications needed to compute AiAi+1…Aj. The lowest cost to compute A1A2…An is then m[1,n]. Recurrence:
m[i,j] = 0 if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + pi-1·pk·pj } if i < j
( (Ai … Ak)(Ak+1 … Aj), split at k ). Reminder: the dimension of Ai is pi-1 × pi, so Ai..k is a pi-1 × pk matrix and Ak+1..j is a pk × pj matrix.
Recursive (top-down) solution using the formula for m[i,j]: RECURSIVE-MATRIX-CHAIN(p, i, j)
• if i = j then return 0
• m[i, j] ← ∞
• for k ← i to j − 1
•  q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p[i−1]·p[k]·p[j]
•  if q < m[i, j] then m[i, j] ← q
• return m[i, j]
Complexity: for n > 1, T(n) ≥ 1 + Σ_{k=1..n-1} ( T(k) + T(n−k) + 1 ).
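A direct Python transcription of RECURSIVE-MATRIX-CHAIN (assumed, not verbatim from the slides), with a call counter added to expose how often sub-problems are revisited; p[i−1] × p[i] is the dimension of Ai:

```python
calls = 0

def recursive_matrix_chain(p, i, j):
    """Minimum scalar multiplications for A_i ... A_j, by exhaustive recursion."""
    global calls
    calls += 1
    if i == j:
        return 0
    best = float("inf")
    for k in range(i, j):
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best

p = [2, 5, 3, 7, 2]                             # A1:2x5, A2:5x3, A3:3x7, A4:7x2
print(recursive_matrix_chain(p, 1, 4), calls)   # minimum cost 84
```

Even for four matrices the recursion makes many more calls than the Θ(n²) distinct sub-problems, which is the exponential behavior analyzed next.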
Complexity of the recursive solution. Use the substitution method: guess a solution, then prove it correct by mathematical induction. Claim: T(n) = Ω(2^n); specifically T(n) ≥ 2^(n-1) for all n ≥ 1.
Induction base: T(1) ≥ 1 = 2^0.
Induction assumption: T(k) ≥ 2^(k-1) for all 1 ≤ k < n.
Induction step: T(n) ≥ 1 + Σ_{k=1..n-1} ( T(k) + T(n−k) + 1 ) = 2 Σ_{k=1..n-1} T(k) + n ≥ 2 Σ_{k=1..n-1} 2^(k-1) + n = 2(2^(n-1) − 1) + n = 2^n + n − 2 ≥ 2^(n-1).
• Step 3, Computing the optimal cost
• The recursive algorithm is exponential in n, Ω(2^n): no better than brute force.
• But there are only n(n−1)/2 + n = Θ(n²) distinct subproblems, one per pair (i, j) with i ≤ j.
• The recursion revisits the same overlapping subproblems many times.
• If the answers to subproblems are tabled, each subproblem is solved only once.
• This is the second hallmark of DP: overlapping subproblems, each solved just once.
Step 3: Compute the value of an optimal solution bottom-up
Input: n; an array p[0…n] containing matrix dimensions
State: m[1..n, 1..n] for storing m[i, j]; s[1..n, 1..n] for storing the optimal k used to compute m[i, j]
Result: minimum-cost table m and split table s
MATRIX-CHAIN-TABLE(p, n)
 for i ← 1 to n
  m[i, i] ← 0
 for l ← 2 to n            (l = chain length)
  for i ← 1 to n − l + 1
   j ← i + l − 1
   m[i, j] ← ∞
   for k ← i to j − 1
    q ← m[i, k] + m[k+1, j] + p[i−1]·p[k]·p[j]
    if q < m[i, j]
     m[i, j] ← q
     s[i, j] ← k
 return m and s
Takes O(n³) time; requires Θ(n²) space.
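A runnable sketch of the bottom-up table computation (1-indexed internally to mirror the pseudocode; row 0 and column 0 of the tables are unused padding). It is checked against the example worked later in these slides, p = [2, 5, 3, 7, 2], for which m[1,4] = 84:

```python
def matrix_chain_table(p):
    """Bottom-up matrix-chain DP: returns cost table m and split table s (1-indexed)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_table([2, 5, 3, 7, 2])
print(m[1][4])   # 84
```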
Printing the solution (example call trace, for a split table s with s[1,4] = 1, s[2,4] = 3, s[2,3] = 2):
PRINT(s, 1, 4)
 PRINT(s, 1, 1)
 PRINT(s, 2, 4)
  PRINT(s, 2, 3)
   PRINT(s, 2, 2)
   PRINT(s, 3, 3)
  PRINT(s, 4, 4)
Output: (A1((A2A3)A4))
Step 4: Constructing an optimal solution • Each entry s[i, j] = k records where to split the product Ai Ai+1 … Aj for minimum cost; at the top level, A1 … An = ( (A1 … As[1,n]) (As[1,n]+1 … An) ). • To print the solution, invoke the following function with (s, 1, n) as the parameters: PRINT-OPTIMAL-PARENS(s, i, j)
• if i = j then print “A”i
• else print “(”
•  PRINT-OPTIMAL-PARENS(s, i, s[i, j])
•  PRINT-OPTIMAL-PARENS(s, s[i, j]+1, j)
•  print “)”
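A self-contained sketch of Step 4 (assumed Python, returning the string rather than printing). It rebuilds the split table with the bottom-up recurrence from Step 3 and then recurses on s; p = [2, 5, 3, 7, 2] is the slides' worked example:

```python
def matrix_chain_splits(p):
    """Build only the split table s for the chain with dimensions p (1-indexed)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return s

def optimal_parens(s, i, j):
    """String form of PRINT-OPTIMAL-PARENS."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

s = matrix_chain_splits([2, 5, 3, 7, 2])
print(optimal_parens(s, 1, 4))   # ((A1A2)(A3A4))
```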
Suppose A1 A2 … Ai … Ar with dimensions P1×P2, P2×P3, …, Pi×Pi+1, …, Pr×Pr+1 (in this slide's convention, Ai is Pi×Pi+1). Assume mij = the number of multiplications needed to multiply Ai Ai+1 … Aj. Initial values: mii = mjj = 0. Final value: m1r. Splitting A1 … Ai … Ak | Ak+1 … Aj gives mij = mik + mk+1,j + Pi Pk+1 Pj+1, where k could be anywhere in i ≤ k ≤ j−1. We know the range of k but not its exact value.
Thus mij = min( mik + mk+1,j + Pi Pk+1 Pj+1 ) for i ≤ k ≤ j−1.
Example: calculate m14 for A1 A2 A3 A4 with dimensions 2×5, 5×3, 3×7, 7×2, so P1 = 2, P2 = 5, P3 = 3, P4 = 7, P5 = 2. The table is filled diagonal by diagonal, in order of chain length j − i:
j−i = 0: m11 = 0, m22 = 0, m33 = 0, m44 = 0
j−i = 1: m12 = 30, m23 = 105, m34 = 42
j−i = 2: m13 = 72, m24 = 72
j−i = 3: m14 = 84
m12 = min( m11 + m22 + P1P2P3 ) for k = 1 only = min( 0 + 0 + 2·5·3 ) = 30
mij = min( mik + mk+1,j + Pi Pk+1 Pj+1 ) for i ≤ k ≤ j−1
m13 = min( m1k + mk+1,3 + P1Pk+1P4 ) for 1 ≤ k ≤ 2
 = min( m11 + m23 + P1P2P4 , m12 + m33 + P1P3P4 )
 = min( 0 + 105 + 2·5·7 , 30 + 0 + 2·3·7 ) = min( 105 + 70, 30 + 42 ) = 72
m24 = min( m2k + mk+1,4 + P2Pk+1P5 ) for 2 ≤ k ≤ 3
 = min( m22 + m34 + P2P3P5 , m23 + m44 + P2P4P5 )
 = min( 0 + 42 + 5·3·2 , 105 + 0 + 5·7·2 ) = min( 42 + 30, 105 + 70 ) = 72
m14 = min( m1k + mk+1,4 + P1Pk+1P5 ) for 1 ≤ k ≤ 3
 = min( m11 + m24 + P1P2P5 , m12 + m34 + P1P3P5 , m13 + m44 + P1P4P5 )
 = min( 0 + 72 + 2·5·2 , 30 + 42 + 2·3·2 , 72 + 0 + 2·7·2 )
 = min( 92, 84, 100 ) = 84
Memoized Matrix Chain (the recursion tree for 1..4 contains many repeated sub-trees, e.g. 1..1, 2..2, 3..3, 2..3, 3..4; memoization collapses them so each subchain is solved once). Initialize all m[i,j] ← ∞, then call LOOKUP-CHAIN(p, 1, n).
• LOOKUP-CHAIN(p, i, j)
• if m[i,j] < ∞ then return m[i,j]
• if i = j then m[i,j] ← 0
• else for k ← i to j−1
•  do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + pi-1·pk·pj
•   if q < m[i,j] then m[i,j] ← q
• return m[i,j]
For DP to be applicable, an optimization problem must have:
Optimal substructure An optimal solution to the problem contains within it optimal solutions to subproblems.
Overlapping (dependent) subproblems The space of subproblems must be small; i.e., the same subproblems are encountered over and over.
DP Step 3 via memoization: T(n) = O(n³), space Θ(n²).
• A top-down variation of dynamic programming.
• Idea: remember the solutions to subproblems as they are first solved by the simple recursive algorithm; the recursion overhead, however, can be costly.
• Bottom-up DP is considered better when all subproblems must be calculated anyway, because there is no overhead for recursion.
LOOKUP-TABLE(p, i, j)   (call with all m[i,j] initialized to ∞)
 if m[i,j] < ∞ then return m[i,j]
 if i = j then m[i,j] ← 0
 else for k ← i to j−1
  q ← LOOKUP-TABLE(p, i, k) + LOOKUP-TABLE(p, k+1, j) + p[i−1]·p[k]·p[j]
  if q < m[i, j] then m[i, j] ← q
 return m[i, j]
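A memoized top-down sketch matching LOOKUP-CHAIN, using a dictionary as the table instead of an ∞-filled array (an assumed implementation choice, not from the slides). On the worked example p = [2, 5, 3, 7, 2] it returns the same minimum cost, 84:

```python
def memoized_matrix_chain(p):
    """Top-down matrix-chain DP with memoization; p[i-1] x p[i] is A_i's dimension."""
    n = len(p) - 1
    memo = {}

    def lookup(i, j):
        if (i, j) in memo:                 # table hit: subproblem already solved
            return memo[(i, j)]
        if i == j:
            memo[(i, j)] = 0
        else:
            memo[(i, j)] = min(
                lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
        return memo[(i, j)]

    return lookup(1, n)

print(memoized_matrix_chain([2, 5, 3, 7, 2]))   # 84
```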
Elements of DP
Optimal substructure A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
Overlapping subproblems When a recursive algorithm revisits the same subproblem over and over again, the optimization problem has overlapping subproblems.
Subtleties It is better not to assume that optimal substructure applies in general. Two examples in a directed graph G = (V, E) with vertices u, v ∈ V:
Unweighted shortest path: find a path from u to v consisting of the fewest edges. Amenable to dynamic programming.
Unweighted longest simple path: find a simple path from u to v consisting of the most edges. Not amenable to dynamic programming.
The running time of a dynamic-programming algorithm depends on the product of two factors: the number of subproblems overall × the number of choices examined per subproblem.
Assembly-line scheduling: Θ(n) subproblems · 2 choices each = Θ(n) total.
Matrix-chain multiplication: Θ(n²) subproblems · at most (n−1) choices each = O(n³) total.
Principle of Optimality (Optimal Substructure) The principle of optimality applies to a problem (not an algorithm) A large number of optimization problems satisfy this principle. Principle of optimality: Given an optimal sequence of decisions or choices, each subsequence must also be optimal. Principle of optimality - shortest path problem Problem: Given a graph G and vertices s and t, find a shortest path in G from s to t Theorem: A subpath P’ (from s’ to t’) of a shortest path P is a shortest path from s’ to t’ of the subgraph G’ induced by P’. Subpaths are paths that start or end at an intermediate vertex of P. Proof: If P’ was not a shortest path from s’ to t’ in G’, we can substitute the subpath from s’ to t’ in P, by the shortest path in G’ from s’ to t’. The result is a shorter path from s to t than P. This contradicts our assumption that P is a shortest path from s to t.
[Figure: graph G with vertices a–f; P = {(a,b), (b,c), (c,d), (d,e)} is a shortest path from a to e, and P’ = {(c,d), (d,e)} is its subpath in the induced subgraph G’.]
Principle of Optimality P’ must be a shortest path from c to e in G’, otherwise P cannot be a shortest path from a to e in G.
• Problem: What is the longest simple route between City A and B? • Simple = never visit the same spot twice.
[Figure: the longest simple route (solid line) from A to B has city C as an intermediate city.]
• The longest simple route has C as an intermediate city, yet it does not consist of the longest simple route from A to C plus the longest simple route from C to B. Therefore the problem does not satisfy the Principle of Optimality.