Module 2 Dynamic Programming


  1. Module 2 Dynamic Programming Prepared by Lee Revere and John Large M2-1

  2. Learning Objectives Students will be able to: • Understand the overall approach of dynamic programming. • Use dynamic programming to solve the shortest-route problem. • Develop dynamic programming stages. • Describe important dynamic programming terminology. • Describe the use of dynamic programming in solving knapsack problems. M2-2

  3. Module Outline M2.1 Introduction M2.2 Shortest-Route Problem Solved by Dynamic Programming M2.3 Dynamic Programming Terminology M2.4 Dynamic Programming Notation M2.5 Knapsack Problem M2-3

  4. Dynamic Programming • Dynamic programming is a quantitative analytic technique applied to large, complex problems that involve sequences of decisions. • Dynamic programming divides a problem into a number of decision stages; the outcome of a decision at one stage affects the decisions at each of the following stages. • The technique is useful in a large number of multi-period business problems, such as • smoothing production employment, • allocating capital funds, • allocating salespeople to marketing areas, and • evaluating investment opportunities. M2-4

  5. Dynamic Programming vs. Linear Programming Dynamic programming differs from linear programming in two ways: • First, there is no algorithm (like the simplex method) that can be programmed to solve all problems. • Instead, dynamic programming is a technique that allows a difficult problem to be broken down into a sequence of easier sub-problems, which are then evaluated by stages. M2-5

  6. Dynamic Programming vs. Linear Programming • Second, linear programming is a method that gives single-stage (i.e., one-time-period) solutions. • Dynamic programming has the power to determine the optimal solution over a one-year time horizon by breaking the problem into 12 smaller one-month-horizon problems and solving each of these optimally. Hence, it uses a multistage approach. M2-6

  7. Four Steps in Dynamic Programming • Divide the original problem into subproblems called stages. • Solve the last stage of the problem for all possible conditions or states. • Working backward from that last stage, solve each intermediate stage. • Obtain the optimal solution for the original problem by solving all stages sequentially. M2-7

  8. Solving Types of Dynamic Programming Problems The next slides will show how to solve two types of dynamic programming problems: • network • non-network • The Shortest-Route Problem is a network problem that can be solved by dynamic programming. • The Knapsack Problem is an example of a non-network problem that can be solved using dynamic programming. M2-8

  9. SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING George Yates is to travel from Rice, Georgia (1) to Dixieville, Georgia (7). • George wants to find the shortest route, but there are small towns between Rice and Dixieville. • The road map is on the next slide. • The circles (nodes) on the map represent cities such as Rice, Dixieville, Brown, and so on. The arrows (arcs) represent highways between the cities. M2-9

  10. Dynamic Programming: George Yates [Figure M2.1: Road map from Rice (node 1) to Dixieville (node 7) through the towns Hope, Brown, Athens, Lakecity, and Georgetown (nodes 2 through 6); the arc distances in miles are listed in Table M2.1.] M2-10

  11. SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING • The mileage is indicated along each arc. • We could solve this problem by inspection, but it is instructive to see dynamic programming used here, since it shows how to solve more complex problems. M2-11

  12. George Yates Dynamic Programming Step 1: • First, divide the problem into sub-problems or stages. • Figure M2.2 (next slide) reveals the stages of this problem. • In dynamic programming, we usually start with the last part of the problem, Stage 1, and work backward to the beginning of the problem or network, which is Stage 3 in this problem. • Table M2.1 (two slides ahead) summarizes the arcs and arc distances for each stage. M2-12

  13. George Yates: Stages [Figure M2.2: The road network divided into stages. Stage 3 contains the arcs leaving node 1, Stage 2 the arcs from nodes 2, 3, and 4 to nodes 5 and 6, and Stage 1 the arcs into node 7.] M2-13

  14. Table M2.1: Distance Along Each Arc

  STAGE   ARC   ARC DISTANCE
    1     5-7        14
    1     6-7         2
    2     4-5        10
    2     3-5        12
    2     3-6         6
    2     2-5         4
    2     2-6        10
    3     1-4         4
    3     1-3         5
    3     1-2         2

  M2-14

  15. Step 2: Solve The Last Stage – Stage 1 • Next, solve Stage 1, the last part of the network. This is usually trivial. • Find the shortest path to the end of the network: node 7 in this problem. • The objective is to find the shortest distance to node 7. M2-15

  16. Step 2: Solve The Last Stage – Stage 1 continued • At Stage 1, the paths from nodes 5 and 6 to node 7 are the only paths, so they are also the shortest paths. • Also note in Figure M2.3 (next slide) that the minimum distances are enclosed in boxes next to the nodes entering Stage 1, node 5 and node 6. M2-16

  17. George Yates: Stage 1 [Figure M2.3: The network with the Stage 1 results boxed: a minimum distance of 14 miles from node 5 to node 7 and 2 miles from node 6 to node 7.] M2-17

  18. Step 3: Moving Backwards Solving Intermediate Problems • Moving backward, now solve for Stages 2 and 3. • At Stage 2, use Figure M2.4 (next slide). M2-18

  19. George Yates: Stage 2 [Figure M2.4: The network with the Stage 2 minimum distances boxed: 24 miles at node 4, 8 miles at node 3, and 12 miles at node 2, together with the Stage 1 results of 14 miles at node 5 and 2 miles at node 6.] M2-19

  20. Fig M2.4 (previous slide) Analysis • If we are at node 4, the shortest and only route to node 7 is arcs 4–5 and 5–7. • At node 3, the shortest route is arcs 3–6 and 6–7 with a total minimum distance of 8 miles. • If we are at node 2, the shortest route is arcs 2–6 and 6–7 with a minimum total distance of 12 miles. • The solution to Stage 3 can be completed using the network on the following slide. M2-20

  21. George Yates: Stage 3 [Network with all stages solved: the minimum distance to node 7 from node 1 is 13 miles, boxed at node 1, alongside the earlier results of 24, 8, 12, 14, and 2 miles at nodes 4, 3, 2, 5, and 6.] M2-21

  22. Step 4: Final Step • The final step is to find the optimal solution after all stages have been solved. • To obtain the optimal solution at any stage, only consider the arcs to the next stage and the optimal solution at that next stage. • For Stage 3, we only have to consider the three arcs to Stage 2 (1–2, 1–3, and 1–4) and the optimal policies at Stage 2. • The best choice is arc 1–3 (5 miles) combined with the 8-mile optimal policy at node 3, giving the 13-mile route 1–3–6–7 boxed at node 1 on the previous slide; the sketch below traces the same calculation. M2-22
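The following short Python sketch (added here for illustration; it is not part of the original slides, and the variable names are assumptions) applies the same backward recursion to the arc distances in Table M2.1:

```python
# Backward dynamic programming on George Yates's road network.
# Arc distances are taken from Table M2.1; node 1 is Rice, node 7 is Dixieville.

arcs = {
    (1, 2): 2, (1, 3): 5, (1, 4): 4,     # Stage 3 arcs
    (2, 5): 4, (2, 6): 10,               # Stage 2 arcs
    (3, 5): 12, (3, 6): 6,
    (4, 5): 10,
    (5, 7): 14, (6, 7): 2,               # Stage 1 arcs
}

# best[node] = (minimum distance from this node to node 7, next node on that route)
best = {7: (0, None)}

# Steps 2 and 3: solve the last stage first, then work backward stage by stage.
for node in (6, 5, 4, 3, 2, 1):
    candidates = [(dist + best[j][0], j)
                  for (i, j), dist in arcs.items() if i == node]
    best[node] = min(candidates)

# Step 4: recover the optimal route by following the stored decisions forward.
route, node = [1], 1
while node != 7:
    node = best[node][1]
    route.append(node)

print("Shortest distance:", best[1][0])  # 13 miles
print("Route:", route)                   # [1, 3, 6, 7], i.e., Rice to Dixieville via nodes 3 and 6
```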

  23. DYNAMIC PROGRAMMING TERMINOLOGY • Stage: a period or a logical sub-problem. • State variables: possible beginning situations or conditions of a stage. These have also been called the input variables. • Decision variables: alternatives or possible decisions that exist at each stage. • Decision criterion: a statement concerning the objective of the problem. M2-23

  24. DYNAMIC PROGRAMMING TERMINOLOGY continued • Optimal policy: a set of decision rules, developed as a result of the decision criteria, that gives optimal decisions for any entering condition at any stage. • Transformation: normally, an algebraic statement that reveals the relationship between stages. M2-24

  25. Shortest Route Problem Transformation Calculation In the shortest-route problem, the transformation can be stated as: (distance from the beginning of a given stage to the last node) = (distance from the beginning of the previous stage to the last node) + (distance from the given stage to the previous stage) M2-25
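For example, using the values from the Stage 2 analysis: the minimum distance from node 3 to node 7 equals the minimum distance from node 6 to node 7 (2 miles) plus the distance along arc 3–6 (6 miles), giving the 8 miles boxed at node 3.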

  26. Dynamic Programming Notation • In addition to terminology, mathematical notation can also be used to describe any dynamic programming problem. • Here, an input, decision, output, and return are specified for each stage. • This helps to set up and solve the problem. • Consider Stage 2 in the George Yates dynamic programming problem first discussed in Section M2.2. This stage can be represented by the diagram shown two slides ahead in Figure M2.7 (as could any given stage of a given dynamic programming problem). M2-26

  27. Input, Decision, Output, and Return for Stage 2 in George Yates's Problem sn = input to stage n (M2-1) dn = decision at stage n (M2-2) rn = return at stage n (M2-3) • Please note that the input to one stage is also the output from another stage. e.g., the input to Stage 2, s2, is also the output from Stage 3 (see Figure M2.7 on the next slide). • This leads us to the following equation: sn-1 = output from stage n (M2-4) M2-27

  28. Input, Decision, Output, and Return for Stage 2 in George Yates's Problem [Figure M2.7: Stage 2 drawn as a box with input s2 entering, decision d2 applied, return r2 produced, and output s1 leaving.] M2-28

  29. Transformation Function • A transformation function allows us to go from one stage to another. • The total return function allows us to keep track of profits and costs. tn = transformation function at stage n (M2-5) sn-1 = tn(sn, dn) (M2-6) fn = total return at stage n (M2-7) M2-29

  30. Dynamic Programming Key Equations • sn = input to stage n • dn = decision at stage n • rn = return at stage n • sn-1 = input to stage n-1 • tn = transformation function at stage n • sn-1 = tn(sn, dn): the general relationship between stages • fn = total return at stage n M2-30
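As an added illustration (not from the original module; the function and variable names are assumptions), this notation maps onto the shortest-route problem as follows: the state sn is the node entering stage n, the decision dn is the node traveled to, the return rn is the arc distance, and the transformation simply outputs the chosen node.

```python
# Illustrative mapping of the dynamic programming notation onto the shortest-route
# problem (an added sketch, not the module's own code).

arcs = {(3, 5): 12, (3, 6): 6}   # Stage 2 arcs out of node 3, from Table M2.1

def transform(s_n, d_n):
    """t_n: the output state s_(n-1) is simply the node chosen by the decision."""
    return d_n

def arc_return(s_n, d_n):
    """r_n: the return at this stage is the distance of the arc traveled."""
    return arcs[(s_n, d_n)]

s2, d2 = 3, 6                    # enter Stage 2 at node 3 and decide to travel to node 6
s1 = transform(s2, d2)           # output state: we enter Stage 1 at node 6
r2 = arc_return(s2, d2)          # return: 6 miles
print(s1, r2)                    # prints: 6 6
```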

  31. KNAPSACK PROBLEM • The “knapsack problem” involves the maximization or minimization of a value, such as profits or costs. • As in a linear programming problem, there are restrictions. • Imagine a knapsack or pouch that can only hold a certain weight or volume. • We can place different types of items in the knapsack. • Our objective is to place items in the knapsack so as to maximize total value without breaking the knapsack because of too much weight or a similar restriction. M2-31

  32. Types of Knapsack Problems • There are many kinds of problems that can be classified as knapsack problems. • e.g., Choosing items to place in the cargo compartment of an airplane and • selecting which payloads to put on the next NASA space shuttle. • The restriction can be volume, weight, or both. • Some scheduling problems are also knapsack problems. • e.g., we may want to determine which jobs to complete in the next two weeks. • The two-week period is the knapsack, and we want to load it with jobs in such a way as to maximize profits or minimize costs. • The restriction is the number of days or hours during the two-week period. M2-32

  33. Examples of Knapsack Problems Traveling Salesman Problem: a salesman has to visit 16 locations without wasting extra time by traveling haphazardly. M2-33

  34. Example 2: Scenario • Consider packing a knapsack for a picnic in the country. There are many different items that you could bring, but the knapsack is not big enough to contain them all. • Decide what is appropriate for the trip and leave the less important things behind. • Evaluate the importance of the different items for this situation. • Bringing food may be deemed very important, while bringing a DVD may be deemed unimportant, even if it could make for a pleasant day. M2-34

  35. Scenario continued • In theory, values must be assigned to each item: items must be rated in terms of profit and cost. • Profit is the importance of the item, while cost is the amount of space it occupies in the knapsack. This makes it a multi-objective problem: we want to maximize the profit while minimizing the total cost. • To maximize profit would mean taking all the items; however, this would exceed the capacity of the backpack. • To minimize cost, we would take none of the items, but this would mean we have no profit. We have to find the best compromise. M2-35

  36. Considered Example Backpack capacity (max cost) = 10

  ITEM       COST   PROFIT
  Flan         5      12
  Gyros        3      10
  Flatware     2       7
  Blanket      4      10
  Cups         1       6
  DVD          4       2

  M2-36

  37. Examine Some Possible Solutions.. M2-37
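As one way to examine the possibilities, here is a minimal Python sketch (an added illustration, not part of the original slides; the variable names are assumptions) of the standard 0/1 knapsack recursion applied to the items above:

```python
# A minimal 0/1 knapsack sketch for the picnic example (capacity 10).
# Item names, costs, and profits come from the table on the previous slide.

items = [
    ("Flan", 5, 12),
    ("Gyros", 3, 10),
    ("Flatware", 2, 7),
    ("Blanket", 4, 10),
    ("Cups", 1, 6),
    ("DVD", 4, 2),
]
capacity = 10

# best[c] = maximum profit achievable with capacity c; chosen[c] = items producing it.
best = [0] * (capacity + 1)
chosen = [[] for _ in range(capacity + 1)]

# Treat each item as a stage; the state is the remaining knapsack capacity.
for name, cost, profit in items:
    for c in range(capacity, cost - 1, -1):   # go downward so each item is used at most once
        if best[c - cost] + profit > best[c]:
            best[c] = best[c - cost] + profit
            chosen[c] = chosen[c - cost] + [name]

print("Best profit:", best[capacity])  # 33
print("Items:", chosen[capacity])      # ['Gyros', 'Flatware', 'Blanket', 'Cups']
```

With these values the sketch reports a best profit of 33, obtained by packing the gyros, flatware, blanket, and cups, which together use exactly the capacity of 10.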

  38. More on Knapsack Problems • The knapsack problem arises in many situations of resource allocation with financial constraints, for instance, • selecting what things we should buy, given a fixed budget. • Everything has a cost and a profit, so we seek the most value for a given cost. • The term knapsack problem evokes the image of a backpacker who is constrained, by a fixed-size knapsack, to fill it only with the most useful items. M2-38

  39. Knapsack Problem in Graphics M2-39

  40. GLOSSARY • Decision Criterion. A statement concerning the objective of a dynamic programming problem. • Decision Variable. The alternatives or possible decisions that exist at each stage of a dynamic programming problem. • Dynamic Programming. A quantitative technique that works backward from the end of the problem to the beginning of the problem in determining the best decision for a number of interrelated decisions. M2-40

  41. Glossary continued • Optimal Policy. A set of decision rules, developed as a result of the decision criteria, that gives optimal decisions at any stage of a dynamic programming problem. • Stage. A logical sub-problem in a dynamic programming problem. • State Variable. A term used in dynamic programming to describe the possible beginning situations or conditions of a stage. • Transformation. An algebraic statement that shows the relationship between stages in a dynamic programming problem. M2-41
