
Dynamic Programming: A General Idea

Dynamic Programming is a problem-solving technique that divides a problem into stages, with a policy decision required at each stage. It utilizes the principle of optimality and recursive relationships to find the optimal solution.




Presentation Transcript


  1. Dynamic Programming General Idea • The problem can be divided into stages, with a policy decision required at each stage. (The solution is a sequence of decisions.) • Each stage has a number of states associated with it. • The effect of the policy decision at each stage is to transform the current state into a state in the next stage. • Given the current state, the optimal policy for the remaining stages is independent of the policy adopted in the previous stages. (The decision at the current stage is based on the results of the previous stage, but the decisions themselves are independent.) Dynamic Programming

  2. Dynamic Programming General Idea “Principle of optimality”: a decision sequence can be optimal only if the subsequence of decisions that follows the outcome of the initial decision is itself optimal. This eliminates non-optimal subsequences. • The solution procedure begins by finding the optimal policy for the last stage (the backward approach). Then a recursive relationship that represents the optimal policy for each stage can be generated.

  3. Greedy & Dynamic Programming Greedy: • Considers only one sequence of decisions. Dynamic Programming: • Considers many sequences of decisions. • Tries all possibilities.

  4. Divide and conquer: divide the problem into subproblems, solve each subproblem independently, and then combine the solutions. Ex: assume n = 5 — the recursion tree for F(5) solves the same subproblems repeatedly.
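The repeated work that plain divide and conquer does can be seen in a short sketch (a hypothetical Python illustration, assuming the n = 5 example refers to the Fibonacci computation used on the next slide; the function name is ours):

```python
def fib_recursive(n, calls=None):
    """Naive divide-and-conquer Fibonacci.
    `calls` counts how many times each subproblem F(n) is solved."""
    if calls is not None:
        calls[n] = calls.get(n, 0) + 1
    if n < 2:
        return n
    # Two independent subproblems, combined by addition
    return fib_recursive(n - 1, calls) + fib_recursive(n - 2, calls)

calls = {}
print(fib_recursive(5, calls))  # 5
print(calls)  # F(1) is solved 5 times, F(2) three times, ...
```

The overlap between subproblems is exactly what dynamic programming exploits on the next slide.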

  5. But Dynamic Programming works in phases: 0 1 1 2 3 5 8 . . . Based on the results of the previous phase, we make our decision for the current phase. It can be solved with a for-loop: F(0) = 0; F(1) = 1; for i ← 2 to n: F(i) = F(i-1) + F(i-2).
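The for-loop above can be written out directly (a minimal Python sketch; the function name is ours):

```python
def fib_dp(n):
    """Phase-by-phase Fibonacci: F(0)=0, F(1)=1, F(i)=F(i-1)+F(i-2)."""
    if n < 2:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        # Decision for phase i uses only the results of earlier phases
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

print([fib_dp(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Each subproblem is now solved exactly once, so the loop runs in O(n) time instead of the exponential time of the recursive version.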

  6. Multistage Single-source Single-destination Shortest Path The problem: Given m columns (or stages) of n nodes each; assume edges exist only between adjacent stages. Goal: find the shortest path from the source (0, 1) to the destination (m-1, 1). A node label (i, j) gives the stage number i and the node number j within that stage.

  7. The straightforward way: the time complexity is O(n^(m-2)), as there are that many different paths (each of the m-2 intermediate stages offers n choices). The Dynamic Programming way: Let m(i, j) = length of the shortest path from the source (0, 1) to (i, j), and let c(i, k, j) = distance from (i, k) to (i+1, j). We want to find m(m-1, 1). Define the functional equation: m(0, 1) = 0 and m(i, j) = min over k of { m(i-1, k) + c(i-1, k, j) }. This yields the shortest path from the source (0, 1) to the destination (m-1, 1).
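The functional equation m(i, j) = min over k of { m(i-1, k) + c(i-1, k, j) } can be sketched as below (a minimal Python version with 0-indexed stages and nodes, and a hypothetical edge-cost dictionary in place of the slide's figure):

```python
import math

def multistage_shortest_path(m_stages, n_nodes, cost):
    """cost[(i, k, j)] = length of the edge from node k of stage i
    to node j of stage i+1.  Returns the length of the shortest path
    from the source (stage 0, node 0) to the destination (last stage, node 0)."""
    INF = math.inf
    # m[i][j]: shortest distance from the source to node j of stage i
    m = [[INF] * n_nodes for _ in range(m_stages)]
    m[0][0] = 0  # source
    for i in range(1, m_stages):
        for j in range(n_nodes):
            for k in range(n_nodes):
                if (i - 1, k, j) in cost:
                    m[i][j] = min(m[i][j], m[i - 1][k] + cost[(i - 1, k, j)])
    return m[m_stages - 1][0]  # destination

# Hypothetical 3-stage example: two routes of total length 1+3=4 and 2+4=6
edges = {(0, 0, 0): 1, (0, 0, 1): 2, (1, 0, 0): 3, (1, 1, 0): 4}
print(multistage_shortest_path(3, 2, edges))  # 4
```

The triple loop makes the O(mn²) cost discussed on slide 10 visible directly: mn table entries, each minimized over n predecessors.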

  8. Ex: a 5-stage graph (stages 0 through 4) with source (0,1), destination (4,1), and three nodes in each intermediate stage; edge weights as shown in the figure (figure omitted in this transcript).

  9. Remember the previous node on this shortest path; it can be used for constructing the shortest path at the end.

  10. The shortest path can then be read off by following the remembered previous nodes back from the destination. Time complexity: there are O(mn) values m(i, j) to compute, and each m(i, j) takes O(n) comparisons, so the total complexity is O(mn²). Let N = m·n = the number of nodes in the graph, where m is the number of stages and n is the number of nodes in each stage. We have mn² = mn·n = N·n. Note: if we use Dijkstra's algorithm instead, the complexity is O(N²).

  11. Revisit Dynamic Programming • The problem can be divided into stages. • Decisions must be made at each stage. • The decision for the current stage (based on the results of the previous stage) is independent of the decisions of the previous stages. E.g., node (3,1) of the multistage shortest-path problem chooses among 3 decisions, coming from node (2,1), (2,2), or (2,3), but this choice is independent of how nodes (2,1), (2,2), and (2,3) made their own decisions. Note that we find shortest paths from the source to all three nodes (2,1), (2,2), and (2,3), even though some of them may not lie on the final shortest path at all. • Every decision must be optimal, so that the overall result is optimal. This is called the “Principle of Optimality”.

  12. 0/1 Knapsack The problem: Given n objects with profits p1, p2, …, pn and weights w1, w2, …, wn, and a knapsack of capacity M. Goal: find x1, x2, …, xn such that Σ pᵢxᵢ is maximized, subject to Σ wᵢxᵢ ≤ M, where each xᵢ = 0 or 1.

  13. The straightforward way: enumerate all 2^n choices of x1, …, xn, so the complexity is O(2^n). Example: (omitted in this transcript).

  14. Does Greedy work? This problem cannot be solved by Greedy (taking objects in pᵢ/wᵢ order). Counter-example: homework exercise. The Dynamic Programming way: define f_i(X) = max profit generated from objects i, i+1, …, n subject to the capacity X.

  15. Example: Question: to find f_1(M). Use the backward approach: f_n(X) = p_n if w_n ≤ X, and 0 otherwise; then f_i(X) = max{ f_{i+1}(X), f_{i+1}(X − w_i) + p_i }.

  16. The result: expanding this recurrence naively produces a tree of subproblems that doubles at each level (figure omitted). Therefore, the complexity of 0/1 Knapsack computed this way is still O(2^n).

  17. Instead, tabulate: build a table with M+1 columns (capacities 0 through M) and n+1 rows (objects), where each entry is obtained from earlier entries, possibly adding pᵢ. There are (M+1)(n+1) entries, each computed in constant time, so the complexity is O(Mn).
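The table filling described above can be sketched as follows (a Python sketch using a hypothetical instance, not the slide's data; it also traces which entries lead to the maximum, recovering the chosen objects):

```python
def knapsack(profits, weights, M):
    """Fill an (n+1) x (M+1) table: f[i][X] = best profit using objects 1..i
    under capacity X, via f[i][X] = max(f[i-1][X], f[i-1][X - w_i] + p_i)."""
    n = len(profits)
    f = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        p, w = profits[i - 1], weights[i - 1]
        for X in range(M + 1):
            f[i][X] = f[i - 1][X]          # skip object i
            if w <= X:
                f[i][X] = max(f[i][X], f[i - 1][X - w] + p)  # take object i
    # Trace back which entries produced the maximum to recover the objects
    chosen, X = [], M
    for i in range(n, 0, -1):
        if f[i][X] != f[i - 1][X]:
            chosen.append(i)               # object i (1-based) is in the knapsack
            X -= weights[i - 1]
    return f[n][M], sorted(chosen)

# Hypothetical instance: 3 objects, capacity 5
print(knapsack([6, 10, 12], [1, 2, 3], 5))  # (22, [2, 3])
```

The two nested loops touch each of the (M+1)(n+1) entries once, giving the O(Mn) bound; the traceback mirrors the entry-tracing step used in the example on slide 18.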

  18. Example: M = 6 (table omitted). The max profit is 12. By tracing which entries lead to this max-profit solution, we can obtain the optimal solution: objects 1, 2, and 4.

  19. Chained Matrix Multiplication The problem: Given M1 × M2 × … × Mr (r matrices), where M1 is d0 × d1, M2 is d1 × d2, …, Mr is d_{r-1} × d_r. Goal: find the minimum cost of computing M1 × M2 × … × Mr.

  20. The idea: Define C(i, j) = minimum cost of computing Mi × Mi+1 × … × Mj, measured as the total number of elementary multiplications. Goal: find C(1, r), the minimum cost of computing M1 × M2 × … × Mr. C(i, i) = 0 for all i. To multiply the sequence of matrices from i to j, choose a divide-point k with i ≤ k < j.

  21. We divide the product into two parts at divide-point k: A = Mi × … × Mk and B = Mk+1 × … × Mj. Define the functional equation: C(i, j) = min over i ≤ k < j of { C(i, k) + C(k+1, j) + d_{i-1}·d_k·d_j }, where the last term is the cost of multiplying A (a d_{i-1} × d_k matrix) and B (a d_k × d_j matrix).

  22. Example: compute the C(i, j) entries in order of chain size — first the entries for chains of size 1, then size 2 (table omitted).

  23. Continue with the smallest costs for chains of size 3, then size 4 (table omitted); the final entry has the smallest cost, 48 + 40 + 24, when k = 2. Remember this k-value; it can be used for obtaining the optimal solution.

  24. Algorithm: procedure MinMult(r, D, P) // r, D – inputs; P – output (divide-points) begin for i ← 1 to r do C(i, i) ← 0; for d ← 1 to (r − 1) do for i ← 1 to (r − d) do { j ← i + d; C(i, j) ← min over i ≤ k < j of { C(i, k) + C(k+1, j) + D(i−1)·D(k)·D(j) }; P(i, j) ← the k achieving this minimum } end;
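The MinMult procedure can be sketched in Python as follows (a minimal version; the function name and the example dimensions are ours, and both the cost table C and the divide-point table P are returned):

```python
import math

def min_mult(dims):
    """dims = [d0, d1, ..., dr]; matrix M_i is d_{i-1} x d_i.
    Returns (C, P): C[i][j] = min cost of computing M_i x ... x M_j,
    P[i][j] = best divide-point k, using
    C(i, j) = min over i <= k < j of C(i, k) + C(k+1, j) + d_{i-1}*d_k*d_j."""
    r = len(dims) - 1
    C = [[0] * (r + 1) for _ in range(r + 1)]   # C[i][i] = 0
    P = [[0] * (r + 1) for _ in range(r + 1)]
    for d in range(1, r):                        # chains of increasing size
        for i in range(1, r - d + 1):
            j = i + d
            C[i][j] = math.inf
            for k in range(i, j):                # try each divide-point
                cost = C[i][k] + C[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < C[i][j]:
                    C[i][j], P[i][j] = cost, k
    return C, P

# Hypothetical sizes: M1 is 10x20, M2 is 20x5, M3 is 5x30
C, P = min_mult([10, 20, 5, 30])
print(C[1][3], P[1][3])  # 2500 2  ->  parenthesize as (M1 M2) M3
```

The stored divide-points in P can then be followed recursively to print the optimal parenthesization, just as the remembered k-value on slide 23 is used.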

  25. Analysis: the number of C(i, j) entries is O(r²); computing each C(i, j) takes O(r) time. The total complexity is therefore O(r³).
