Agenda • Decision Analysis • Problems
Decision Analysis (open MGS3100_06Decision_Making.xls)
Decision Alternatives • Your options – the factors you have control over • A set of alternative actions – we may choose whichever we please
States of Nature • Possible outcomes – not affected by the decision • Probabilities are assigned to each state of nature
Certainty • Only one possible state of nature • The Decision Maker (DM) knows with certainty what the state of nature will be
Ignorance • Several possible states of nature • The DM knows all possible states of nature, but does not know their probabilities of occurrence
Risk • Several possible states of nature, with an estimate of the probability of each • The DM knows all possible states of nature and can assign a probability of occurrence to each state
Decision Making Under Ignorance
• LaPlace-Bayes • Treat all states of nature as equally likely to occur • Select the alternative with the best average payoff
• Maximax • Evaluates each decision by the maximum possible return associated with that decision • The decision that yields the maximum of these maximum returns (maximax) is selected
• Maximin • Evaluates each decision by the minimum possible return associated with that decision • The decision that yields the maximum of these minimum returns (maximin) is selected
• Minimax Regret • The decision is based on the least regret for making that choice • Select the alternative that minimizes the maximum regret
Minimax Regret Table • Regret for each cell = (highest payoff for that state of nature) minus (payoff for this decision); applying this to every cell of the payoff table gives the regret table. A sketch of all four ignorance criteria follows.
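To make the four criteria concrete, here is a minimal Python/NumPy sketch applied to a made-up 3×3 payoff table; the numbers and variable names are illustrative, not taken from the slides or the workbook.

```python
import numpy as np

# Rows = decision alternatives D1..D3, columns = states of nature S1..S3.
payoff = np.array([
    [50, 30, -10],   # alternative D1
    [80, 20, -40],   # alternative D2
    [20, 20,  20],   # alternative D3
])

laplace = payoff.mean(axis=1).argmax()        # best average payoff
maximax = payoff.max(axis=1).argmax()         # best best-case payoff
maximin = payoff.min(axis=1).argmax()         # best worst-case payoff

# Regret table: best payoff in each column (state) minus each entry.
regret = payoff.max(axis=0) - payoff
minimax_regret = regret.max(axis=1).argmin()  # smallest worst-case regret

print(laplace, maximax, maximin, minimax_regret)  # -> 0 1 2 0 (D1, D2, D3, D1)
```

Note that the criteria can disagree on the same table (here maximax picks D2, maximin picks D3): under ignorance, the "best" choice depends on the attitude toward risk the criterion encodes.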
Decision Making Under Risk • Expected Return (ER), also called Expected Value (EV) or Expected Monetary Value (EMV) • Sj = the jth state of nature • Di = the ith decision • P(Sj) = the probability that Sj will occur • Rij = the return if Di is chosen and Sj occurs • ERi = the long-run average return for Di • ERi = Σj Rij P(Sj) • Variancei = Σj (Rij – ERi)² P(Sj) • The EMV criterion chooses the decision alternative with the highest EMV. We'll call this EMV the Expected Value Under Initial Information (EVUII) to distinguish it from what the EMV might become if we later get more information. Do not make the common student error of believing that the EMV is the payoff the decision maker will get: the actual payoff will be Rij, the return for the chosen alternative Di under the state of nature Sj that actually occurs.
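As a sketch (with made-up probabilities and payoffs, not the workbook's), the EMV computation is one probability-weighted sum per alternative:

```python
import numpy as np

p      = np.array([0.3, 0.5, 0.2])    # P(Sj) for states S1..S3
payoff = np.array([[100, 50, -20],    # Rij for decision D1
                   [ 60, 40,  10]])   # Rij for decision D2

emv = payoff @ p                      # ERi = sum_j Rij * P(Sj) -> [51., 40.]
print("choose D%d" % (emv.argmax() + 1))
```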
Decision Making Under Risk • One way to evaluate the risk associated with an Alternative Action is to calculate the variance of its payoffs. Depending on your willingness to accept risk, an Alternative Action with only a moderate EMV and a small variance may be superior to a choice that has a large EMV and also a large variance. The variance of the payoffs for an Alternative Action is defined as • Variancei = Σj (Rij – ERi)² P(Sj) • Most of the time we want to make the EMV as large as possible and the variance as small as possible. Unfortunately, the maximum-EMV alternative and the minimum-variance alternative are usually not the same, so in the end it comes down to an educated judgment call.
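Continuing the same hypothetical numbers, the variance per alternative is the probability-weighted squared deviation from that alternative's EMV:

```python
import numpy as np

p      = np.array([0.3, 0.5, 0.2])
payoff = np.array([[100, 50, -20],
                   [ 60, 40,  10]])

emv = payoff @ p                          # [51., 40.]
var = ((payoff - emv[:, None]) ** 2) @ p  # Var_i = sum_j (Rij - ERi)^2 P(Sj)
print(emv, var)                           # -> [51. 40.] [1729. 300.]
```

Here D1 has the higher EMV (51 vs 40) but a far larger variance (1729 vs 300), which is exactly the trade-off described above.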
Expected Value of Perfect Information • The Expected Value of Perfect Information (EVPI) measures how much better you could do on this decision, averaging over many repetitions of the decision situation, if you could always know which State of Nature would occur just in time to make the best decision for it. It does not imply control of the States of Nature, just perfect prediction, and it is a long-run average. • EVPI provides an absolute upper limit on the value of additional information (ignoring the value of reduced risk): it is the amount by which you could improve on your best EMV if you had perfect information, i.e. the difference between the Expected Value Under Perfect Information (EVUPI) and the EMV of the best action (EVUII).
Expected Value of Perfect Information • EVUPI = Expected Value Under Perfect Information = Σj P(Sj) maxi(Rij) • EVUII = EMV of the best action = maxi(EMVi) • EVPI = EVUPI – EVUII
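With the same hypothetical payoff table as above, EVUPI averages the best payoff per state and EVPI is the gap to the best unaided EMV:

```python
import numpy as np

p      = np.array([0.3, 0.5, 0.2])
payoff = np.array([[100, 50, -20],
                   [ 60, 40,  10]])

evupi = (payoff.max(axis=0) * p).sum()  # best payoff per state: 57.0
evuii = (payoff @ p).max()              # best unaided EMV:      51.0
evpi  = evupi - evuii                   # -> 6.0
print(evupi, evuii, evpi)
```

In this sketch EVPI = 6, so no forecast or consultant's report could be worth more than 6 per decision.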
Expected Value of Sample Information • EVSI = expected value of sample information • EVwSI = expected value with sample information • EVwoSI = expected value without sample information • EVSI = EVwSI – EVwoSI • Efficiency Index = (EVSI / EVPI) × 100
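As a quick worked example with made-up numbers (continuing the sketch above, where EVPI = 6): if EVwSI = 55 and EVwoSI = 51, then EVSI = 55 – 51 = 4, and the Efficiency Index = (4 / 6) × 100 ≈ 67, meaning the sample information delivers about two-thirds of the value of perfect information.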
Agenda • Decision Analysis • Problems
What kinds of problems? • Alternatives known • States of Nature and their probabilities known • Payoffs computable under different possible scenarios
Basic Terms • Decision Alternatives • States of Nature (e.g., condition of the economy) • Payoffs ($ outcome of a choice, assuming a given state of nature) • Criteria (e.g., Expected Value)
Example Problem 2 - Sequential Decisions • Would you hire a consultant (or a psychic) to get more information about the states of nature? • How would the additional information cause you to revise your probabilities of the states of nature occurring? • Draw a new tree depicting the complete problem. • Consultant's Track Record
Example Problem 2 - Sequential Decisions (Ans) (open MGS3100_06Joint_Probabilities_Table.xls) • The first thing you want to do is get the information (the track record) from the consultant in order to make a decision. • The track record can be written as conditional probabilities: P(F|S1) = 0.2, P(U|S1) = 0.8; P(F|S2) = 0.6, P(U|S2) = 0.4; P(F|S3) = 0.7, P(U|S3) = 0.3, where F = Favorable and U = Unfavorable. • Next, apply the prior probabilities to this information to build the Joint Probability Table (Bayes' Theorem).
Example Problem 2 - Sequential Decisions (Ans) (open MGS3100_06Joint_Probabilities_Table.xls) • The next step is to compute the posterior probabilities (you will need these to compute your expected values): P(S1|F) = 0.06/0.49 = 0.122, P(S2|F) = 0.36/0.49 = 0.735, P(S3|F) = 0.07/0.49 = 0.143; P(S1|U) = 0.24/0.51 = 0.471, P(S2|U) = 0.24/0.51 = 0.471, P(S3|U) = 0.03/0.51 = 0.059 • Solve the decision tree using the posterior probabilities just computed.
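The whole revision can be checked with a short sketch. Note that the priors P(S1) = 0.3, P(S2) = 0.6, P(S3) = 0.1 are not stated on the slide; they are implied by the joint probabilities above (e.g. 0.06 = 0.2 × 0.3).

```python
import numpy as np

prior      = np.array([0.3, 0.6, 0.1])    # P(Sj), implied by the slide
likelihood = np.array([[0.2, 0.6, 0.7],   # P(F|Sj)
                       [0.8, 0.4, 0.3]])  # P(U|Sj)

joint     = likelihood * prior            # joint table P(report and Sj)
marginal  = joint.sum(axis=1)             # P(F) = 0.49, P(U) = 0.51
posterior = joint / marginal[:, None]     # Bayes' rule: P(Sj | report)
print(posterior.round(3))
```

The two rows reproduce the table: (0.122, 0.735, 0.143) given a Favorable report and (0.471, 0.471, 0.059) given an Unfavorable one.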