
Mathematical Modeling and Optimization: Summary of “Big Ideas”


Presentation Transcript


  1. Mathematical Modeling and Optimization: Summary of “Big Ideas”

  2. A schematic view of the modeling/optimization process: a real-world problem is turned into a mathematical model through assumptions, abstraction, data, and simplifications; an optimization algorithm produces a solution to the model; interpretation translates it into a solution to the real-world problem; if the result does not make sense, change the model or the assumptions.

  3. Mathematical models in Optimization • The general form of an optimization model: min or max f(x1,…,xn) (objective function) subject to gi(x1,…,xn) ≥ 0 (functional constraints), (x1,…,xn) ∈ S (set constraints) • x1,…,xn are called decision variables • In words, the goal is to find x1,…,xn that • satisfy the constraints; • achieve the min (max) objective function value.
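
To make the generic form above concrete, here is a minimal sketch (not from the slides) that enumerates a small discrete set S by brute force; the objective f, the constraint g, and the set S are made-up examples.

```python
# Brute-force illustration of "decision variables + constraints + objective".
from itertools import product

def f(x1, x2):            # objective function (hypothetical)
    return 3 * x1 + 2 * x2

def g(x1, x2):            # functional constraint g(x) >= 0 (hypothetical)
    return 4 - (x1 + x2)  # i.e. x1 + x2 <= 4

S = range(0, 5)           # set constraint: each decision variable in {0,...,4}

best = max(
    (x for x in product(S, S) if g(*x) >= 0),  # feasible points only
    key=lambda x: f(*x),                       # maximize the objective
)
print(best, f(*best))     # -> (4, 0) 12
```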

  4. Types of Optimization Models • Stochastic (probabilistic information on data) vs. Deterministic (data are certain) • Discrete, Integer (S = Z^n) vs. Continuous (S = R^n) • Linear (f and g are linear) vs. Nonlinear (f and g are nonlinear)

  5. What is Discrete Optimization? Discrete Optimization is a field of applied mathematics, combining techniques from • combinatorics and graph theory, • linear programming, • theory of algorithms, to solve optimization problems over discrete structures.

  6. Solution Methods for Discrete Optimization Problems • Integer Programming • Network Algorithms • Dynamic Programming • Approximation Algorithms

  7. Integer Programming • Programming = Planning in this context • In a survey of Fortune 500 firms, 85% of those responding said that they had used linear or integer programming. • Why is it so popular? • Many different real-life situations can be modeled as IPs. • There are efficient algorithms to solve IPs.

  8. Topics in this class about Integer Programming • Modeling real-life situations as integer programs • Applications of integer programming • Solution methods (algorithms) for integer programs • (maybe) Using software (called AMPL) to solve integer programs

  9. IP modeling techniques • Using binary variables • Restrictions on the number of options • Contingent decisions • Variables (functions) with k possible values • Either-Or constraints • Big-M method • Balance constraints • Fixed charges • Making choices with non-binary variables • Piecewise linear functions
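
As a small illustration of two techniques from the list above (binary variables and the big-M method for either-or constraints), here is a sketch with hypothetical numbers; the binary variable is simply brute-forced to check feasibility, which is not how an IP solver works but shows what the constraints encode.

```python
# Either-or with big M (hypothetical data): at least one of
#   x1 + x2 <= 4   or   3*x1 + x2 <= 6
# must hold. A binary variable y and a large constant M encode this as
#   x1 + x2 <= 4 + M*y   and   3*x1 + x2 <= 6 + M*(1 - y).
M = 1000

def either_or_ok(x1, x2):
    # feasible iff some y in {0, 1} satisfies both big-M constraints
    return any(
        x1 + x2 <= 4 + M * y and 3 * x1 + x2 <= 6 + M * (1 - y)
        for y in (0, 1)
    )

print(either_or_ok(2, 2))   # True  (only the first constraint holds: 4 <= 4)
print(either_or_ok(0, 5))   # True  (only the second constraint holds: 5 <= 6)
print(either_or_ok(5, 5))   # False (neither constraint holds)
```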

  10. IP applications • Facility Location Problem • Knapsack Problem • Multi-period production planning • Inventory management • Fair representation in electoral systems • Consultant hiring • Bin Packing Problem • Pairing Problem • Traveling Salesman Problem
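
For one entry of this list, the Knapsack Problem, the following toy sketch with made-up data shows the binary decision variables and the capacity constraint explicitly; brute force is fine at this size, and later slides discuss what to do when it is not.

```python
# Knapsack with binary decision variables (made-up data): maximize total
# value subject to a capacity constraint, by brute force over all 2^n choices.
from itertools import product

values   = [10, 13, 7, 8]   # hypothetical item values
weights  = [ 4,  6, 3, 5]   # hypothetical item weights
capacity = 10

best_x, best_val = None, -1
for x in product((0, 1), repeat=len(values)):        # x[i] = 1 iff item i is taken
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        val = sum(v * xi for v, xi in zip(values, x))
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)   # -> (1, 1, 0, 0) 23
```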

  11. Difficulties of real-life modeling • The problems that you will encounter in the real world are always a lot messier than the “clean” problems we looked at in this class; there are always side constraints that complicate getting even a feasible solution. • Most real-world problems have multiple objectives, and it is hard to choose among them. • In the real world you must gain experience with how to adapt the idealized models of academia to each new problem you are asked to solve.

  12. Utilizing the relationship between problems • Important modeling skill: • Suppose you know how to model Problems A1,…,Ap; • You need to solve Problem B; • Notice the similarities between Problems Ai and B; • Build a model for Problem B, using the model for Problem Ai as a prototype. • Examples: • A version of the facility location problem was modeled as a special case of the knapsack problem. • The committee assignment problem was solved by graph coloring.

  13. Complexity of Solving Discrete Optimization Problems Two classes: • Class 1 problems have polynomial-time algorithms for solving them optimally. Examples: Minimum Spanning Tree Problem, Assignment Problem • Class 2 problems (NP-hard problems): • no polynomial-time algorithm is known; • and most likely none exists. Examples: Traveling Salesman Problem, Graph Coloring Problem • Most discrete optimization problems are in the second class.

  14. Three main directions to solve NP-hard discrete optimization problems: • Integer programming techniques • Approximation algorithms • Heuristics • Important observation: any solution method implies a tradeoff between time and accuracy. On the time-accuracy tradeoff scale, from most accuracy and worst time to least accuracy and best time: brute force, integer programming, approximation algorithms, heuristics.

  15. Solving Integer Programs (IP) vs. solving Linear Programs (LP) • LP algorithms • Simplex Method • Interior-point methods • IP algorithms use the above-mentioned LP algorithms as subroutines. • The algorithms for solving LPs are much more time-efficient than the algorithms for IPs. • Important modeling consideration: whenever possible, avoid integer variables in your model.
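
The course software mentioned later is AMPL; purely to illustrate the relationship between an IP and its LP-relaxation, the sketch below solves the LP relaxation of a small hypothetical IP with SciPy (an assumed, not course-specified, tool).

```python
# Solve the LP relaxation of the hypothetical IP
#   max 5*x1 + 4*x2  s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x >= 0 integer.
# linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

res = linprog(c=[-5, -4],
              A_ub=[[6, 4], [1, 2]],
              b_ub=[24, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # LP optimum x = (3.0, 1.5), value 21.0
# The IP optimum here is 20 (at x = (4, 0)); the LP relaxation value 21 is an
# upper bound on it, which is exactly what branch-and-bound exploits.
```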

  16. LP-relaxation-based solution methods for Integer Programs • Branch-and-Bound Technique • Cutting Plane Algorithms

  17. Basic Concepts of Branch-and-Bound The basic concept underlying the branch-and-bound technique is to divide and conquer. Since the original “large” problem is hard to solve directly, it is divided into smaller and smaller subproblems until these subproblems can be conquered. The dividing (branching) is done by partitioning the entire set of feasible solutions into smaller and smaller subsets. The conquering (fathoming) is done partially by (i) giving a bound for the best solution in the subset; (ii) discarding the subset if the bound indicates that it can’t contain an optimal solution.

  18. Summary of branch-and-bound Steps for each iteration: • Branching: Among the unfathomed subproblems, select the one that was created most recently. (Break ties according to which has the larger LP value.) Choose a variable xi which has a noninteger value xi* in the LP solution of the subproblem. Create two new subproblems by adding the respective constraints xi ≤ ⌊xi*⌋ and xi ≥ ⌈xi*⌉. • Bounding: Solve the new subproblems and record their LP solutions. Based on the LP values, update the incumbent and the lower and upper bounds for OPT(IP) if necessary. • Fathoming: For each new subproblem, apply the three fathoming tests. Discard the subproblems that are fathomed. • Optimality test: If there are no unfathomed subproblems left, then return the current incumbent as the optimal solution (if there is no incumbent, then the IP is infeasible). Otherwise, perform another iteration.
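
The loop above can be sketched in a few dozen lines. The following is an illustrative, simplified implementation (most-recent-first selection via a stack, fathoming by infeasibility, bound, and integrality, LP relaxations solved with SciPy), not the course's code; the instance at the bottom is made up.

```python
# Branch-and-bound sketch for a pure maximization IP:
#   max c@x  s.t.  A@x <= b,  x >= 0 integer.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, tol=1e-6):
    incumbent, best_val = None, -math.inf
    # each subproblem is described by per-variable (lower, upper) bounds
    stack = [[(0, None)] * len(c)]          # LIFO: most recently created first
    while stack:
        bounds = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A, b_ub=b, bounds=bounds)
        if not res.success:                 # fathomed: subproblem infeasible
            continue
        x, val = res.x, -res.fun
        if val <= best_val + tol:           # fathomed: bound can't beat incumbent
            continue
        frac = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > tol]
        if not frac:                        # fathomed: LP solution is integral
            incumbent, best_val = [int(round(xi)) for xi in x], val
            continue
        i = frac[0]                         # branch on a fractional variable
        lo, hi = bounds[i]
        down, up = list(bounds), list(bounds)
        down[i] = (lo, math.floor(x[i]))    # xi <= floor(xi*)
        up[i]   = (math.ceil(x[i]), hi)     # xi >= ceil(xi*)
        stack.extend([down, up])
    return incumbent, best_val

print(branch_and_bound(c=[5, 4], A=[[6, 4], [1, 2]], b=[24, 6]))
# -> ([4, 0], 20.0) for this toy instance
```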

  19. Importance of tight lower and upper bounds in branch-and-bound • Having tight lower and upper bounds on the IP optimal value might significantly reduce the number of branch-and-bound iterations. • For a maximization problem: • a lower bound can be found by applying a fast heuristic algorithm to the problem; • an upper bound can be found by solving a relaxation of the problem (e.g., the LP-relaxation). • If the current lower and upper bounds are close enough, we can stop the branch-and-bound algorithm and return the current incumbent solution (it can’t be too far from the optimum).
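
A toy illustration of these bounds, reusing the hypothetical knapsack data from the earlier sketch: a greedy packing supplies a lower bound, the fractional (LP-relaxation) packing supplies an upper bound, and the optimum is trapped between them.

```python
# Lower bound from a greedy heuristic, upper bound from the LP relaxation
# (fractional knapsack), for the made-up instance used earlier.
values, weights, capacity = [10, 13, 7, 8], [4, 6, 3, 5], 10
order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)

lb, room = 0, capacity                 # greedy integral packing -> lower bound
for i in order:
    if weights[i] <= room:
        lb, room = lb + values[i], room - weights[i]

ub, room = 0.0, capacity               # fractional packing -> upper bound
for i in order:
    take = min(1.0, room / weights[i])
    ub, room = ub + take * values[i], room - take * weights[i]

print(lb, ub)   # -> 17 and 23.5: the true optimum (23 here) lies in [LB, UB]
```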

  20. General Idea of Cutting Plane Technique • Add new constraints (cutting planes) to the problem such that (i) the set of feasible integer solutions remains the same, i.e., we still have the same integer program; (ii) the new constraints cut off some of the fractional solutions, making the feasible region of the LP-relaxation smaller. • A smaller feasible region might result in a better LP value (i.e., closer to the IP value), thus making the search for the optimal IP solution more efficient. • Each integer program might have many different formulations. Important modeling skill: give as tight a formulation as possible. How? Find cutting planes that make the formulation of the original IP tighter.
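
A tiny check of this idea, with made-up numbers: for the knapsack constraint 5x1 + 5x2 + 5x3 ≤ 12 with binary variables, the cover inequality x1 + x2 + x3 ≤ 2 is satisfied by every feasible integer point but cuts off fractional points that the original constraint allows.

```python
# Verify the two cutting-plane properties (i) and (ii) for a toy cover cut.
from itertools import product

original = lambda x: 5 * sum(x) <= 12   # original knapsack-type constraint
cut      = lambda x: sum(x) <= 2        # cover inequality (cutting plane)

# (i) every feasible binary point still satisfies the cut:
print(all(cut(x) for x in product((0, 1), repeat=3) if original(x)))   # True

# (ii) the cut removes a fractional point that the original constraint allows:
x_frac = (0.8, 0.8, 0.8)
print(original(x_frac), cut(x_frac))    # True False -> fractional point cut off
```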

  21. Methods of getting Cutting Planes • Exploit the special structure of the problem to get cutting planes (e.g., bin packing problem, pairing problem) • Often can be hard to get • Topic of intensive research • More general methods are also available • Can be used automatically for many problems (e.g., knapsack-type constraints)

  22. Branch-and-cut algorithms • Integer programs are rarely solved based solely on the cutting plane method. • More often, cutting planes and branch-and-bound are combined to provide a powerful algorithmic approach for solving integer programs. • Cutting planes are added to the subproblems created in branch-and-bound to achieve tighter bounds and thus to accelerate the solution process. • Methods of this kind are known as branch-and-cut algorithms.

  23. Network Models • Minimum Spanning Tree Problem • Assignment Problem • Traveling Salesman Problem • Coloring Problem • The Minimum Spanning Tree and Assignment Problems are in Class 1 (they have polynomial-time algorithms for solving them optimally). • The Traveling Salesman Problem and Graph Coloring are in Class 2 (NP-hard problems).
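
As an example of a Class 1 problem from this list, here is a short sketch of Kruskal's algorithm for the Minimum Spanning Tree Problem (one standard polynomial-time method; the example graph is made up).

```python
# Kruskal's MST algorithm with a simple union-find structure.
def kruskal(n, edges):                     # edges: list of (weight, u, v)
    parent = list(range(n))

    def find(a):                           # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree, total = [], 0
    for w, u, v in sorted(edges):          # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep edges that join two components
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # -> ([(0, 1), (1, 3), (1, 2)], 6)
```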

  24. Methods for solving NP-hard problems Three main directions to solve NP-hard discrete optimization problems: • Integer programming techniques • Heuristics • Approximation algorithms • We gave examples of all three methods for the TSP. • A 2-approximation algorithm for the TSP was given and analyzed in detail.

  25. Some Characteristics of Approximation Algorithms • Time-efficient (sometimes not as efficient as heuristics) • Don’t guarantee an optimal solution • Guarantee a good solution within some factor of the optimum • Rigorous mathematical analysis proves the approximation guarantee • Often use algorithms for related problems as subroutines: the 2-approximation algorithm for the TSP used the minimum-spanning-tree algorithm as a subroutine.
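
A sketch of that 2-approximation idea (build an MST, then visit the cities in a depth-first preorder of the tree and close the tour) on made-up Euclidean points; the guarantee relies on the triangle inequality, which Euclidean distances satisfy. This is an illustrative reconstruction, not the exact presentation from the course.

```python
# MST-based 2-approximation for metric TSP on hypothetical plane points.
from math import dist

points = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 1)]
n = len(points)
d = lambda i, j: dist(points[i], points[j])

# Prim's algorithm: grow an MST from city 0, remembering each city's parent.
in_tree, parent = {0}, {}
while len(in_tree) < n:
    u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
               key=lambda e: d(*e))
    parent[v] = u
    in_tree.add(v)

# A preorder walk of the MST gives the tour ("shortcutting" the doubled tree).
children = {i: [j for j in parent if parent[j] == i] for i in range(n)}
tour, stack = [], [0]
while stack:
    u = stack.pop()
    tour.append(u)
    stack.extend(reversed(children[u]))

length = sum(d(tour[k], tour[(k + 1) % n]) for k in range(n))
print(tour, round(length, 2))   # e.g. [0, 4, 3, 1, 2] with length about 18.47
```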

  26. Performance of TSP algorithms in practice • A more sophisticated algorithm (which again uses the MST algorithm as a subroutine) guarantees a solution within a factor of 1.5 of the optimum (Christofides). • For many discrete optimization problems, there are benchmarks of instances on which algorithms are tested. • For the TSP, such a benchmark is TSPLIB. • On TSPLIB instances, Christofides’ algorithm outputs solutions which are on average 1.09 times the optimum. For comparison, the nearest-neighbor algorithm outputs solutions which are on average 1.26 times the optimum. • A good approximation factor often leads to good performance in practice.

  27. Dynamic Programming • Dynamic programming is a widely used mathematical technique for solving problems that can be divided into stages and where decisions are required in each stage. • The goal of dynamic programming is to find a combination of decisions that optimizes a certain quantity associated with the system.

  28. General characteristics of Dynamic Programming • The problem structure is divided into stages. • Each stage has a number of states associated with it. • Making a decision at one stage transforms a state of the current stage into a state in the next stage. • Given the current state, the optimal decision for each of the remaining stages does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming. • The principle of optimality allows us to solve the problem stage by stage, recursively.
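
A minimal sketch of this stage/state recursion for a small staged shortest-path network with made-up costs: the value function is computed backwards, stage by stage, exactly as the principle of optimality allows.

```python
# Backward dynamic programming on a staged network (hypothetical costs).
# f[state] = cheapest cost from that state to the end.
# cost[s][i][j] = cost of going from state i in stage s to state j in stage s+1.
cost = [
    {"A": {"B1": 2, "B2": 4}},                                  # stage 0
    {"B1": {"C1": 7, "C2": 3}, "B2": {"C1": 1, "C2": 5}},       # stage 1
    {"C1": {"END": 6}, "C2": {"END": 4}},                       # stage 2
]

f = {"END": 0}                                   # value at the final state
for stage in reversed(cost):                     # backward recursion over stages
    f = {state: min(c + f[nxt] for nxt, c in arcs.items())
         for state, arcs in stage.items()}

print(f["A"])   # -> 9, the cheapest A -> END cost (path A - B1 - C2 - END)
```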

  29. Problems that can be solved by Dynamic Programming • Shortest path problems • Multi-period production planning • Resource allocation problems

  30. Advantages of Dynamic Programming • More time-efficient compared to integer programming (but large-scale real-life problems might require lots of states) • Can solve a variety of problems (however integer programming is a more universal technique) • Can handle stochastic data (for example, stochastic demand in multi-period production planning)
