Managerial Decision Making and Problem Solving Lecture Notes 2 Sensitivity Analysis
Introduction • Sensitivity analysis (or post-optimality analysis) is used to determine how the optimal solution is affected by changes, within specified ranges, in: • the objective function coefficients • the right-hand side (RHS) values
Introduction • Sensitivity analysis is important to a manager who must operate in a dynamic environment with imprecise estimates of the coefficients. • Sensitivity analysis allows a manager to ask certain what-if questions about the problem.
Sensitivity Analysis of Objective Function Coefficients • The range of values over which an objective function coefficient may vary without causing any change in the values of the decision variables in the optimal solution is called the range of optimality. • Managers should focus on objective coefficients that have a narrow range of optimality and on coefficients near the endpoints of their range.
Sensitivity Analysis of Objective Function Coefficients • The optimal solution will remain unchanged as long as: • An objective function coefficient lies within its range of optimality • There are no changes in any other input parameters. • The value of the objective function will change, however, if • the changed coefficient multiplies a variable that is non-zero in the optimal solution.
Example 1 • Changing the Slope of the Objective Function • [Graph: the objective function line for 5x1 + 7x2 is rotated over the feasible region; at one extreme it coincides with the x1 + x2 ≤ 8 constraint line, at the other with the 2x1 + 3x2 ≤ 19 constraint line.]
Sensitivity Analysis of Objective Function Coefficients • Graphically, the limits of a range of optimality are found by changing the slope of the objective function line within the limits of the slopes of the binding constraint lines. • Slope of an objective function line, Max c1x1 + c2x2, is -c1/c2, and the slope of a constraint, a1x1 + a2x2 = b, is -a1/a2.
Sensitivity Analysis of Objective Function Coefficients • The optimal solution will not change as long as the slope of the objective function line stays between the slopes of the two binding constraint lines; for Example 1 this means -1 ≤ -c1/c2 ≤ -2/3.
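A minimal sketch of this calculation for Example 1, in plain Python; the binding-constraint slopes -1 and -2/3 are read from the graph above, and the ranges below follow only from that assumption.

```python
# Range of optimality for Example 1 (Max 5x1 + 7x2), assuming the binding
# constraints are x1 + x2 <= 8 (slope -1) and 2x1 + 3x2 <= 19 (slope -2/3).
c1, c2 = 5.0, 7.0                              # current objective coefficients
slope_steep, slope_flat = -1.0, -2.0 / 3.0     # slopes of the binding constraint lines

# The current corner stays optimal while  slope_steep <= -c1/c2 <= slope_flat.
# Holding c2 fixed, solve the two inequalities for c1:
c1_min = -slope_flat * c2                      # (2/3)*7 = 4.67
c1_max = -slope_steep * c2                     # 1*7     = 7.00
print(f"Range of optimality for c1: {c1_min:.2f} <= c1 <= {c1_max:.2f}")

# Holding c1 fixed, the same condition gives the range for c2:
c2_min = c1 / -slope_steep                     # 5/1     = 5.00
c2_max = c1 / -slope_flat                      # 5/(2/3) = 7.50
print(f"Range of optimality for c2: {c2_min:.2f} <= c2 <= {c2_max:.2f}")
```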
Sensitivity Analysis of Objective Function Coefficients • Simultaneous Changes: • The range of optimality for an objective function coefficient applies only to changes made to one coefficient at a time. • If two or more objective function coefficients are changed simultaneously, further analysis is needed to determine whether the optimal solution will change.
The 100% Rule for Objective Function Coefficients • The 100% rule states that simultaneous changes in objective function coefficients will not change the optimal solution as long as the sum, over all changed coefficients, of each change expressed as a percentage of its maximum allowable change (the allowable increase or decrease in its range of optimality) does not exceed 100%.
The 100% Rule for Objective Function Coefficients • If the sum of the percentage changes does not exceed 100%, the optimal solution will not change. • The 100 percent rule does not, however, say that the optimal solution will change if the sum of the percentage changes exceeds 100%. It is possible that the optimal solution will not change even though the sum of the percentage changes exceeds 100%.
The 100% Rule for Objective Function Coefficients • If two objective function coefficients change simultaneously, both may move outside their respective ranges of optimality and not affect the optimal solution. For instance, in a two-variable linear program, the slope of the objective function will not change at all if both coefficients are changed by the same percentage. • When the 100 percent rule is not satisfied, we must re-solve the problem to determine what effect such changes will have on the optimal solution.
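As an illustration of the arithmetic behind the rule, here is a minimal sketch in Python; the proposed changes and the allowable increases/decreases are hypothetical values, not taken from a solved model. The same percentage calculation is used for the 100% rule on right-hand sides discussed later.

```python
# 100% rule check for simultaneous objective-coefficient changes.
# All numbers below are hypothetical illustrative values.
changes = [
    # (proposed change, allowable increase, allowable decrease)
    (+1.0, 2.0, 0.5),   # coefficient raised by 1, allowable increase 2   -> uses 50%
    (-0.2, 1.0, 0.5),   # coefficient cut by 0.2, allowable decrease 0.5  -> uses 40%
]

total_pct = 0.0
for delta, allow_inc, allow_dec in changes:
    if delta >= 0:
        total_pct += delta / allow_inc * 100
    else:
        total_pct += -delta / allow_dec * 100

print(f"Sum of percentages used: {total_pct:.0f}%")
if total_pct <= 100:
    print("The current optimal solution is guaranteed to remain optimal.")
else:
    print("No guarantee either way; re-solve the model to check.")
```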
Reduced Cost • The amount by which an objective function coefficient would have to improve (increase for a maximization problem, decrease for a minimization problem) before the corresponding variable could assume a positive value in the optimal solution is called the reduced cost.
Sensitivity Analysis of the Right-Hand Sides of the Constraints • In sensitivity analysis of the right-hand sides of constraints we are interested in the following questions: • Keeping all other factors the same, how much would the optimal value of the objective function (for example, the profit) change if the right-hand side of a constraint changed by one unit? • For how many additional or fewer units will this per-unit change be valid?
Sensitivity Analysis of the Right-Hand Sides of the Constraints • Any change to the right-hand side of a binding constraint will change the optimal solution. • Any change to the right-hand side of a non-binding constraint that is less than its slack or surplus will cause no change in the optimal solution.
Shadow Price • Assuming there are no other changes to the input parameters, the change in the objective function value per unit increase in the right-hand side of a constraint is called the shadow price.
Range of Feasibility • Assuming there are no other changes to the input parameters, the range of feasibility is • the range of values for the right-hand side of a constraint over which the shadow prices of the constraints remain unchanged. • Within the range of feasibility the objective function value changes as follows: Change in objective value = [Shadow price] × [Change in the right-hand side value]
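A minimal sketch of these ideas for the Example 1 data, assuming SciPy (version 1.7 or later) is available and that its HiGHS backend exposes the constraint duals as res.ineqlin.marginals; the sign is flipped because the maximization is passed to linprog as a minimization.

```python
# Shadow prices for Example 1 (Max 5x1 + 7x2) via SciPy's linprog.
from scipy.optimize import linprog

c = [-5, -7]                       # negated: linprog minimizes
A_ub = [[1, 1], [2, 3]]            # x1 + x2 <= 8,  2x1 + 3x2 <= 19
b_ub = [8, 19]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

shadow_prices = [-m for m in res.ineqlin.marginals]   # flip sign back for a max problem
print("Optimal value :", -res.fun)                    # expected 46 at x1 = 5, x2 = 3
print("Shadow prices :", shadow_prices)               # expected [1.0, 2.0]

# Within the range of feasibility, one extra unit of the first resource is
# predicted to raise the objective by its shadow price:
predicted = -res.fun + shadow_prices[0] * 1
print("Predicted objective if the first RHS becomes 9:", predicted)   # expected 47
```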
The 100% Rule for Constraint Right-Hand Sides • For all right-hand sides that are changed, sum the percentages of the allowable increases and allowable decreases used by the changes. If the sum of the percentages does not exceed 100%, the shadow prices will not change.
Dual Price • The concept of a dual price is closely related to the concept of a shadow price. • The dual price associated with a constraint is the improvement in the value of the optimal solution per unit increase in the right-hand side of the constraint. • For maximization linear programs, the dual price and the shadow price are the same.
Dual Price • In minimization linear programs, the shadow price is the negative of the corresponding dual price. • A negative dual price tells us that the objective function will not improve if the right-hand side is increased by one unit.
The correct interpretation of shadow prices • Sunk Cost – A cost that is not affected by the decision made. It will be incurred no matter what values the decision variables assume. • Since the cost of the resource is not included in the calculation of the objective function coefficients, the shadow price is the value of an extra unit of the resource.
The correct interpretation of shadow prices • Relevant Cost – A cost that depends on the decision made. The amount of a relevant cost varies with the values of the decision variables. • Since the cost of the resource is included in the calculation of the objective function coefficients, the shadow price is the premium above the existing unit cost of the resource.
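A short numeric illustration of the two interpretations; the resource price and shadow price below are hypothetical.

```python
# Hypothetical numbers illustrating the two shadow-price interpretations.
shadow_price = 2.0        # from the sensitivity report, per extra unit of the resource
current_unit_cost = 10.0  # price the firm already pays per unit of the resource

# Sunk cost: the resource cost was NOT built into the objective coefficients,
# so the shadow price is the full value of one extra unit.
max_price_if_sunk = shadow_price                          # pay up to 2

# Relevant cost: the resource cost WAS built into the objective coefficients,
# so the shadow price is a premium on top of the current price.
max_price_if_relevant = current_unit_cost + shadow_price  # pay up to 12

print(max_price_if_sunk, max_price_if_relevant)
```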
Range of Feasibility • Managers are frequently called on to provide an economic justification for new technology. Often the new technology is developed, or purchased, in order to conserve resources. Dual prices can be helpful in such cases because they can be used to determine the savings attributable to the new technology by showing the savings per unit of resource conserved.
The 100% Rule for Objective Function Coefficients • The 100 percent rule cannot be applied to changes in both objective function coefficients and right-hand sides at the same time. To consider simultaneous changes in both right-hand-side values and objective function coefficients, the problem must be re-solved; the 100 percent rule permits only limited analysis of simultaneous changes.
Post Optimality Changes • The addition or deletion of constraints or variables, and changes to the left-hand-side coefficients of a linear programming model, are additional post-optimality analyses that may be of interest to the decision maker.
Post Optimality Changes • Addition of a constraint • Is the new constraint satisfied by the current optimal solution? • If yes, the current solution will remain optimal. • If no, the problem must be re-solved. • The new optimal objective function value will not be better than the original value because the problem is now more constrained.
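A minimal sketch of that feasibility check; the solution vector and the new constraint below are hypothetical.

```python
# Does the current optimal solution satisfy a newly added <= constraint?
x_opt = [5.0, 3.0]     # current optimal solution (hypothetical)
a_new = [1.0, 2.0]     # coefficients of the new constraint (hypothetical)
b_new = 12.0           # its right-hand side (hypothetical)

lhs = sum(a * x for a, x in zip(a_new, x_opt))
if lhs <= b_new:
    print("New constraint satisfied; the current solution remains optimal.")
else:
    print("New constraint violated; re-solve the model.")
```

The same check applies when a left-hand-side coefficient of a non-binding constraint is changed, as discussed later.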
Post Optimality Changes • Deletion of a Constraint • Is the constraint nonbinding? • If yes, the current optimal solution will not change. • If no, the problem must be re-solved. • Since the problem is less restrictive, the new optimal solution will have an objective function value at least as good as that of the original model.
Post Optimality Changes • Deletion of a variable • If the variable to be deleted is zero in the optimal solution, deleting it will not affect the optimal solution. • If the value of the variable is not zero in the optimal solution, the problem must be re-solved. • Deleting a variable that is non-zero in the optimal solution yields an objective function value that is at best no better than the original.
Post Optimality Changes • Addition of a variable • The problem must be re-solved (in most cases). • The net marginal profit procedure can be used to determine whether the addition of the new variable will have any effect on the optimal solution. • Net marginal profit is the difference between the objective function coefficient and the total marginal cost of the resources the variable consumes (calculated using the current values of the dual prices).
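A minimal sketch of the net marginal profit calculation; the new variable's coefficient, its resource usage, and the dual prices are hypothetical.

```python
# Net marginal profit of a proposed new variable (hypothetical numbers).
c_new = 9.0                  # objective function coefficient of the new variable
a_new = [2.0, 3.0]           # resource usage per unit of the new variable
dual_prices = [1.0, 2.0]     # current dual prices of the two resource constraints

marginal_resource_cost = sum(a * y for a, y in zip(a_new, dual_prices))
net_marginal_profit = c_new - marginal_resource_cost   # 9 - (2*1 + 3*2) = 1
print("Net marginal profit:", net_marginal_profit)

# For a maximization problem: if the net marginal profit is <= 0, adding the
# variable cannot improve the solution; if it is positive, re-solve the model.
```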
Post Optimality Changes • Changes in a left-hand-side coefficient • If the change is to a non-binding constraint: • Does the current optimal solution satisfy the modified constraint? • If yes, it remains the optimal solution to the revised model. • If no, or if the change is to a binding constraint, the model must be re-solved.
Models Without Unique Optimal Solutions • Infeasibility: occurs when a model has no feasible point. • Unboundedness: occurs when the objective can become infinitely large (max) or infinitely small (min). • Alternative optimal solutions: occur when more than one point optimizes the objective function.
Example: Infeasible Model • [Graph: three constraint lines with no point lying simultaneously on the required side of all of them, so the model has no feasible region.]
Example: Unbounded Model • [Graph: Max 4x1 + 5x2 subject to 3x1 + x2 ≥ 8 and x1 + x2 ≥ 5; the feasible region extends without limit, so the objective can be increased indefinitely.]
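A minimal sketch that feeds this model to SciPy's linprog, assuming the HiGHS backend reports unboundedness with status code 3 (status 2 would indicate infeasibility, status 0 an optimum).

```python
# Unbounded example: Max 4x1 + 5x2  s.t.  3x1 + x2 >= 8,  x1 + x2 >= 5,  x >= 0.
# linprog minimizes and takes <= rows, so the objective and the >= rows are negated.
from scipy.optimize import linprog

c = [-4, -5]
A_ub = [[-3, -1], [-1, -1]]
b_ub = [-8, -5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.status, res.message)    # status 3 expected: the objective is unbounded
```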
Example: Alternative Optimal Model • [Graph: Max 4x1 + 6x2 subject to x1 + x2 ≤ 7, x1 ≤ 6, and 2x1 + 3x2 ≤ 18; every point on the segment between corner points A and B is optimal.]