
Introduction to Operations Research



Presentation Transcript


  1. Deterministic Dynamic Programming Introduction to Operations Research

  2. Dynamic Programming • Dynamic programming is a widely used mathematical technique for solving problems that can be divided into stages, with a decision required at each stage. • The goal of dynamic programming is to find the combination of decisions that optimizes an overall quantity associated with the system.

  3. Dynamic Programming (DP) • DP determines the optimum solution to an n-variable problem by decomposing it into n stages, each stage constituting a single-variable subproblem. • Recursive Nature of Computations in DP • Computations in DP are carried out recursively, in the sense that the optimum solution of one subproblem is used as an input to the next subproblem.

  4. By the time the last subproblem is solved, the optimum solution for the entire problem is at hand. The manner in which the recursive computations are carried out depends on how we decompose the original problem. • In particular, the subproblems are normally linked by common constraints. As we move from one subproblem to the next, the feasibility of these common constraints must be maintained.

  5. We illustrate with the famous STAGECOACH problem • It concerns a mythical fortune seeker in Missouri who decided to go west to join the gold rush in California during the mid-19th century. The journey would require traveling by stagecoach through different states.

  6. Traveling out west was dangerous during this time frame, so the stagecoach company offered life insurance to its passengers. • Since our fortune seeker was concerned about his safety, he decided that the safest route would be the one with the cheapest total life-insurance cost.

  7. STAGECOACH problem (road-network diagram: the possible routes from state A to state J, with the insurance cost shown on each leg)

  8. Four stages were required to travel from the point of embarkation in state A (Missouri) to his destination in state J (California). The insurance costs between the states are also shown in the diagram. • Thus the problem is to find the cheapest route the fortune seeker should take.

  9. STAGECOACH problem • A greedy approach, selecting the cheapest leg offered at each successive step, gives the route A → B → F → I → J, with cost 13. • Replacing A → B → F by A → D → F gives another route with cost only 11, so the greedy route is not the cheapest. • One possible approach is to enumerate all 18 possible routes; this is the so-called exhaustive enumeration method.
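The leg costs themselves are not reproduced in this transcript. The sketch below assumes the standard cost table from Hillier and Lieberman's stagecoach example, which is consistent with the two route costs quoted above (13 and 11), and enumerates all 3 × 3 × 2 = 18 routes by brute force.

    from itertools import product

    # Assumed leg costs (standard Hillier & Lieberman stagecoach data; treat as an
    # assumption -- only the 13 and 11 route costs above are confirmed by the slides).
    COSTS = {
        'A': {'B': 2, 'C': 4, 'D': 3},
        'B': {'E': 7, 'F': 4, 'G': 6},
        'C': {'E': 3, 'F': 2, 'G': 4},
        'D': {'E': 4, 'F': 1, 'G': 5},
        'E': {'H': 1, 'I': 4},
        'F': {'H': 6, 'I': 3},
        'G': {'H': 3, 'I': 3},
        'H': {'J': 3},
        'I': {'J': 4},
    }

    def enumerate_routes(costs):
        """Exhaustive enumeration: evaluate every combination of stage decisions."""
        routes = []
        for x1, x2, x3 in product('BCD', 'EFG', 'HI'):   # x4 is always J
            route = ('A', x1, x2, x3, 'J')
            cost = sum(costs[a][b] for a, b in zip(route, route[1:]))
            routes.append((cost, route))
        return sorted(routes)

    routes = enumerate_routes(COSTS)
    print(len(routes))    # 18 routes in total
    print(routes[0])      # cheapest route found by brute force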

  10. STAGECOACH problem • Now let's solve the same problem through dynamic programming, using the following notions: • Stage • State • Decision variable • Optimal policy (optimal solution)

  11. Dynamic Programming • There does not exist a standard mathematical formulation of “the” dynamic programming problem. Rather, dynamic programming is a general approach to problem solving, and the particular equations used must be developed to fit each situation.

  12. Dynamic Programming • Dynamic programming starts with a small portion of the original problem and finds the optimal solution for this smaller problem. It then gradually enlarges the problem, finding the current optimal solution from the preceding one, until the original problem is solved in its entirety.

  13. Formulation • Let the decision variable xn (n = 1, 2, 3, 4) be the immediate destination chosen at stage n. The route selected is A → x1 → x2 → x3 → x4, where x4 = J. • Let fn(s, xn) be the total cost of the best overall policy for the remaining stages, given that you are in state s, ready to start stage n, and select xn as the immediate destination. • Given s and n, let x*n denote any value of xn (not necessarily unique) that minimizes fn(s, xn), and let f*n(s) be the corresponding minimum value of fn(s, xn).

  14. Formulation • Thus f*n(s) = min over xn of fn(s, xn), where fn(s, xn) = immediate cost (at stage n) + minimum future cost (stages n + 1 onward) = C(s, xn) + f*n+1(xn). • The value of C(s, xn) is given by the cost table, with i = s (the current state) and j = xn (the immediate destination); the boundary condition is f*5(J) = 0. • The objective is to find f*1(A) and the corresponding route.
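A minimal sketch of this backward recursion in Python, written against the assumed COSTS dictionary from the enumeration sketch above (the state names and their grouping into stages are the same assumptions):

    def stagecoach_dp(costs, destination='J'):
        """Backward recursion: f*_n(s) = min over x_n of [ C(s, x_n) + f*_{n+1}(x_n) ]."""
        best = {destination: 0}   # boundary condition f*_5(J) = 0
        policy = {}               # x*_n(s): an optimal immediate destination from s
        # Process the states stage by stage, from stage 4 back to stage 1.
        for states in (('H', 'I'), ('E', 'F', 'G'), ('B', 'C', 'D'), ('A',)):
            for s in states:
                # f_n(s, x_n) for every feasible immediate destination x_n
                values = {x: costs[s][x] + best[x] for x in costs[s]}
                policy[s] = min(values, key=values.get)
                best[s] = values[policy[s]]
        return best, policy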

  15. Solution • Stage n = 4: table of f4(s, x4), with the minimum f*4(s) and minimizer x*4 for each stage-4 state s (table not preserved in this transcript).

  16.–17. Solution • Stage n = 3: table of f3(s, x3), rows indexed by state s and columns by decision x3, with f*3(s) and x*3 (entries not preserved).

  18.–19. Solution • Stage n = 2: table of f2(s, x2), rows indexed by state s and columns by decision x2, with f*2(s) and x*2 (entries not preserved).

  20.–21. Solution • Stage n = 1: table of f1(s, x1) for the initial state s = A, with f*1(A) and x*1 (entries not preserved).

  22. Optimal Solution • (summary table not preserved in this transcript)
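Since the stage tables did not survive in this transcript, the following usage sketch reproduces f*n(s) and x*n for each stage by running the recursion above on the assumed cost data; the resulting f*1(A) = 11 agrees with the cost-11 route noted on slide 9. Ties between decisions are possible, so the traced route is one of possibly several optimal routes.

    best, policy = stagecoach_dp(COSTS)

    # Reproduce the stage summaries: f*_n(s) and x*_n for each state of each stage.
    stage_states = {4: ('H', 'I'), 3: ('E', 'F', 'G'), 2: ('B', 'C', 'D'), 1: ('A',)}
    for n in (4, 3, 2, 1):
        row = ", ".join(f"f*({s}) = {best[s]}, x* = {policy[s]}" for s in stage_states[n])
        print(f"Stage n = {n}: {row}")

    # Trace one optimal route by following the optimal policy from A to J.
    route, s = ['A'], 'A'
    while s != 'J':
        s = policy[s]
        route.append(s)
    print("optimal cost:", best['A'], " route:", " -> ".join(route))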

  23. General characteristics of Dynamic Programming • The problem structure is divided into stages. • Each stage has a number of states associated with it. • Making a decision at one stage transforms one state of the current stage into a state in the next stage. • Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. • The principle of optimality allows us to solve the problem stage by stage, recursively.

  24. Division into stages The problem is divided into smaller subproblems, each of them represented by a stage. The stages are defined in many different ways depending on the context of the problem. If the problem concerns the development of a system over time, then the stages naturally correspond to time periods. If the goal of the problem is to move some objects from one location to another on a map, then partitioning the map into several geographical regions might be the natural division into stages. Generally, if the accomplishment of a certain task can be considered as a multi-step process, then each stage can be defined as a step in the process.

  25. States Each stage has a number of states associated with it. Depending on what decisions are made in one stage, the system might end up in different states in the next stage. If a geographical region corresponds to a stage, then the states associated with it could be particular locations (cities, warehouses, etc.) in that region. In other situations, a state might correspond to the amounts of certain resources that are essential for optimizing the system.

  26. Decisions Making a decision at one stage transforms one state of the current stage into a state in the next stage. In a geographical example, it could be a decision to go from one city to another. In resource allocation problems, it might be a decision to create or spend a certain amount of a resource. For example, in the shortest path problem three different decisions can be made at the state corresponding to Columbus; these decisions correspond to the three arrows going from Columbus to the three states (cities) of the next stage: Kansas City, Omaha, and Dallas.

  27. Principle of Optimality The goal of the solution procedure is to find an optimal policy for the overall problem, i.e., an optimal policy decision at each stage for each of the possible states. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. For example, in the geographical setting the principle works as follows: the optimal route from a current city to the final destination does not depend on the way we got to the city. A system can be formulated as a dynamic programming problem only if the principle of optimality holds for it.
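The same principle is what makes a memoized recursion valid: the cheapest remaining cost from any state is a function of that state alone, independent of how it was reached. A minimal sketch, again assuming the COSTS dictionary defined earlier:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def cheapest_from(s):
        """Cheapest cost from state s to the destination J; caching is sound because
        the value depends only on s, not on the path used to reach s."""
        if s == 'J':
            return 0
        return min(COSTS[s][x] + cheapest_from(x) for x in COSTS[s])

    print(cheapest_from('A'))   # 11 with the assumed cost data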
