Materials for Lecture 20
• Read Chapter 9
• Lecture 20 CV Stationarity.xls
• Lecture 20 Changing Risk Over Time.xls
• Lecture 20 VAR Analysis.xls
• Lecture 20 Simple VAR.xls
Value at Risk Analysis
• Value at Risk – VAR
• Originally, VAR was used to quantify market risk, but it considered only one source of risk
• By the year 2000, businesses were integrating their risk management systems across the whole enterprise
• The focus shifted to analyzing multiple sources of risk, including market risk
• Now market-based VAR analyses measure integrated market and credit risk
Value at Risk Model
• An intuitive definition: “VAR summarizes the worst loss over a target horizon with a given level of confidence”
• VAR is defined by a quantile of the projected distribution of gains and losses over the target horizon
Value At Risk Model
• If c is the selected confidence level, VAR corresponds to the 1 – c lower tail of the probability distribution (the quantile), as written out below
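Written as a formula, following this slide's convention that c is the confidence level and VAR is measured relative to the mean (some texts measure it relative to zero instead):

$$
\mathrm{VAR}_{c} \;=\; \bar{Y} - Q_{1-c},
\qquad \Pr\!\left(\tilde{Y} \le Q_{1-c}\right) \;=\; 1 - c
$$

where $\tilde{Y}$ is the simulated distribution of returns (the KOV), $\bar{Y}$ is its mean, and $Q_{1-c}$ is its $(1-c)$ quantile, e.g., the 5% quantile when c = 95%.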
Value At Risk (VAR) Model
To estimate the VAR quantile for a risky business, use these steps (a sketch follows this list):
• Develop a stochastic simulation model of the risky business decision
• Validate the stochastic variables and validate the model
• Pick a ‘c’ value, say, 5%, so 1 – c = 95%
• Simulate the model and analyze the KOV
• Calculate the quantile at the c level
• Calculate VAR = Mean – Quantile at the c level
• Report the results
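A minimal Python sketch of these steps, assuming a toy net-returns model in place of a full Simetar workbook; the revenue and cost distributions and the iteration count are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Steps 1-2: a toy stochastic model of annual net returns (the KOV),
# standing in for a validated simulation model of the business
iterations = 1000
revenue = rng.normal(loc=500.0, scale=80.0, size=iterations)   # illustrative
costs   = rng.normal(loc=420.0, scale=40.0, size=iterations)   # illustrative
net_returns = revenue - costs                                  # the KOV

# Step 3: pick c = 5%, so 1 - c = 95%
c = 0.05

# Steps 4-5: simulate and calculate the quantile at the c level
quantile_c = np.quantile(net_returns, c)

# Step 6: VAR = Mean - Quantile at the c level
var_c = net_returns.mean() - quantile_c

print(f"Mean KOV        : {net_returns.mean():.1f}")
print(f"{c:.0%} quantile      : {quantile_c:.1f}")
print(f"VAR (c = {c:.0%})    : {var_c:.1f}")
```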
Value At Risk (VAR) Model
• On selecting the ‘c’ value – the literature uses the 95% confidence level
• That is, we want to know the value of returns which the business will exceed 95% of the time
• If simulating 1,000 iterations, the quantile will be the 50th value, so we can sort the stochastic results and read the 50th value
• Or simply use the PDF chart in Simetar
VAR in Simetar
• Simulate the KOV and draw a PDF chart
• Change the confidence level to 0.90 for “c” = 5% (a two-sided 90% interval puts 5% in the lower tail)
• Edit the title of the chart
• The VAR value is the Lower Quantile
Valuation Models
• A variation on VAR is the traditional valuation model
• Valuation models focus on the mean and the variation below the mean
VAR as Risk Capital
• VAR is the equity capital that should be set aside to cover nearly all potential losses, at the chosen probability level “c”
• Thus, VAR is the amount of capital reserves that should be held to meet shortfalls
VAR for Comparing Risky Alternatives
• Simulate multiple scenarios and calculate VAR for each alternative (see the sketch below)
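A hedged sketch of the comparison: the two scenario distributions below are invented for illustration; in practice each would come from a separately simulated scenario of the business model.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.05
iterations = 1000

# Illustrative simulated net returns for two alternative strategies
scenarios = {
    "Alternative A": rng.normal(loc=100.0, scale=30.0, size=iterations),
    "Alternative B": rng.normal(loc=110.0, scale=55.0, size=iterations),
}

# VAR = Mean - Quantile at the c level, calculated per alternative
for name, returns in scenarios.items():
    var_c = returns.mean() - np.quantile(returns, c)
    print(f"{name}: mean = {returns.mean():6.1f}, VAR(5%) = {var_c:6.1f}")
```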
VAR Shortcomings
• VAR analyses as generally used in business give a false sense of security
• The literature assumes Normality for the random variables – why?
• The Normal distribution is easy to simulate
• The Quantile is easy to calculate if you know the mean and standard deviation:
  Q = Mean – (1.645 * Std Dev) for c = 5%
• The chance of a Black Swan is ignored
• This understates the Quantile and the equity capital reserve needed to cover cash flow deficits (see the sketch below)
• Contributed to the 2008 recession
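A hedged illustration of the Black Swan problem: the “true” process below mixes calm Normal returns with a rare, invented crash event, while the Normal fit is estimated from a history that never saw the crash, so its quantile understates the real tail. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# "True" process: normal returns most years, plus a rare Black Swan loss
calm  = rng.normal(100.0, 20.0, size=n)       # illustrative calm years
crash = rng.normal(-200.0, 30.0, size=n)      # illustrative crash outcome
is_crash = rng.random(n) < 0.02               # 2% crash probability
true_returns = np.where(is_crash, crash, calm)

# Normality assumption fitted to a history that never saw the Black Swan
mu, sigma = calm.mean(), calm.std()

for c, z in [(0.05, 1.645), (0.01, 2.326)]:
    q_normal = mu - z * sigma                 # Q = Mean - z * StdDev
    q_true   = np.quantile(true_returns, c)   # quantile of the true process
    print(f"c = {c:.0%}: Normal-fit quantile = {q_normal:7.1f}, "
          f"true quantile = {q_true:7.1f}")
```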
Overcoming VAR Shortcomings
• Modify the probability distributions for the random variables that affect the business
• Incorporate low-probability events that could cause major harm to the business
• Use an EMP distribution and adjust the Probabilities and Sorted Deviates as a Fraction for Black Swan events (a sketch follows this list)
• Change the F(X) values for the low-probability events
• Change the minimum Xs
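A hedged sketch of adjusting an EMP distribution for a Black Swan, using inverse-transform sampling from a piecewise-linear empirical CDF as a stand-in for Simetar's Empirical(Sj, F(Sj), USD); the deviates, probabilities, and forecast mean below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Historical sorted deviates as fractions of the mean, with their cumulative
# probabilities F(S) -- illustrative values, not from the lecture workbooks
S_hist = np.array([-0.30, -0.12,  0.00,  0.10,  0.25])
F_hist = np.array([ 0.00,  0.25,  0.50,  0.75,  1.00])

# Add a Black Swan: a more extreme minimum deviate with a small probability,
# by changing the minimum X and re-spacing the low-end F(X) values
S_adj = np.array([-0.70, -0.30, -0.12,  0.00,  0.10,  0.25])
F_adj = np.array([ 0.00,  0.02,  0.26,  0.51,  0.755, 1.00])

def emp_draw(S, F, n, rng):
    """Inverse-transform draw from a piecewise-linear empirical CDF
    (a stand-in for Simetar's Empirical(Sj, F(Sj), USD))."""
    usd = rng.random(n)              # uniform standard deviates
    return np.interp(usd, F, S)      # interpolate deviates from the CDF

mean_forecast = 100.0                # illustrative deterministic forecast
base = mean_forecast * (1 + emp_draw(S_hist, F_hist, 10_000, rng))
swan = mean_forecast * (1 + emp_draw(S_adj,  F_adj,  10_000, rng))

print(f"5% quantile without Black Swan: {np.quantile(base, 0.05):6.1f}")
print(f"5% quantile with Black Swan   : {np.quantile(swan, 0.05):6.1f}")
```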
Covariance Stationarity & Heteroskedasticity
• Part of validation is to test whether the standard deviation for each random variable matches the historical std dev
• This is referred to as “covariance stationary”
• Simulating outside the historical range causes a problem: the mean will likely differ from history, causing the simulated coefficient of variation, CVSim, to differ from the historical CVHist:
  CVHist = σH / ῩH   ≠   CVSim = σH / ῩS
Covariance Stationarity
• CV stationarity is likely a problem when simulating outside the sample period:
• If the Mean for X increases, CV declines, which implies less relative risk about the mean as time progresses: CVSim = σH / ῩS
• If the Mean for X decreases, CV increases, which implies more relative risk about the mean as we get farther out with the forecast: CVSim = σH / ῩS
• See Chapter 9
CV Stationarity
• The Normal distribution is covariance stationary BUT it is not CV stationary if the mean differs from the historical mean
• For example (worked out in the sketch after this list):
• Historical Mean of 2.74 and Historical Std Dev of 1.84
• Assume the deterministic forecast for the mean increases over time as: 2.73, 3.00, 3.25, 4.00, 4.50, and 5.00
• The CV decreases while the std dev is constant
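Working the slide's numbers: with the standard deviation held at its historical value, the CV falls as the forecast mean rises (the means and σ are from the slide; the code is just the arithmetic).

```python
hist_mean, hist_std = 2.74, 1.84
hist_cv = 100 * hist_std / hist_mean           # historical CV ≈ 67.2%

forecast_means = [2.73, 3.00, 3.25, 4.00, 4.50, 5.00]
for t, mean_t in enumerate(forecast_means, start=1):
    cv_t = 100 * hist_std / mean_t             # CVSim = σH / ῩS
    print(f"Year {t}: mean = {mean_t:.2f}, std dev = {hist_std:.2f}, CV = {cv_t:5.1f}%")
print(f"Historical CV = {hist_cv:.1f}%")
```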
CV Stationarity for the Normal Distribution
• An adjustment to the StdDev can make the simulation results CV stationary if you are simulating a Normal distribution (see the sketch after this list)
• Calculate a Jt+i value for each period (t+i) to simulate as:
  Jt+i = Ῡt+i / Ῡhistory
• The Jt+i value is then used to simulate the random variable in period t+i as:
  Ỹt+i = Ῡt+i + (StdDevhistory * Jt+i * SND)
  Ỹt+i = NORM(Ῡt+i, StdDevhistory * Jt+i)
• The resulting random values for all years t+i have the same CV but a different StdDev than the historical data
• This is the result desired when doing multiple-year simulations
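A minimal sketch of the J adjustment, reusing the means and standard deviation from the previous slide; the NumPy standard normal draw stands in for the SND in Simetar's NORM function.

```python
import numpy as np

rng = np.random.default_rng(11)
iterations = 10_000

hist_mean, hist_std = 2.74, 1.84
forecast_means = [2.73, 3.00, 3.25, 4.00, 4.50, 5.00]

for t, mean_t in enumerate(forecast_means, start=1):
    J = mean_t / hist_mean                     # Jt+i = Ῡt+i / Ῡhistory
    snd = rng.standard_normal(iterations)      # standard normal deviates
    y = mean_t + hist_std * J * snd            # Ỹt+i = Ῡt+i + (σhist * J * SND)
    cv = 100 * y.std() / y.mean()
    print(f"Year {t}: J = {J:.3f}, simulated std = {y.std():.2f}, CV = {cv:5.1f}%")

# Every year's CV should be close to the historical CV of about 67.2%
```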
CV Stationarity and the Empirical Distribution
• The Empirical distribution automatically adjusts so the simulated values are CV stationary if the distribution is expressed as deviations from the mean or trend (see the sketch below):
  Ỹt+i = Ῡt+i * [1 + Empirical(Sj, F(Sj), USD)]
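A sketch of why the deviates-as-a-fraction form is automatically CV stationary; the inverse-CDF interpolation again stands in for Simetar's Empirical(Sj, F(Sj), USD), and the deviate and probability values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(21)

# Sorted deviates as fractions of the mean and their cumulative probabilities
S = np.array([-0.40, -0.15,  0.00,  0.20,  0.35])      # illustrative
F = np.array([ 0.00,  0.25,  0.50,  0.75,  1.00])

usd = rng.random(10_000)                 # uniform standard deviates
deviates = np.interp(usd, F, S)          # Empirical(Sj, F(Sj), USD) stand-in

for mean_t in [2.74, 4.00, 5.00]:
    y = mean_t * (1 + deviates)          # Ỹt+i = Ῡt+i * [1 + Empirical(...)]
    print(f"mean = {mean_t:.2f}: simulated CV = {100 * y.std() / y.mean():5.1f}%")

# The CV is the same for every forecast mean because the deviates are fractions
```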
Empirical Distribution Validation
• The Empirical distribution automatically adjusts so the simulated values are CV stationary
• This is done by adjusting the standard deviation
• This poses a problem for validation
• The correct method for validating the Empirical distribution is:
• Set up the theoretical mean and standard deviation to test against:
  Mean = Historical mean * J
  Std Dev = Historical mean * J * (CV of the simulated values) / 100
• A worked sketch for J = 2.0 follows below
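A worked sketch of the validation targets for J = 2.0, reusing the historical mean and standard deviation from the CV Stationarity slide (pairing those numbers with this slide is an assumption for illustration).

```python
hist_mean, hist_std = 2.74, 1.84
hist_cv = 100 * hist_std / hist_mean        # ≈ 67.2%, equals the simulated CV

J = 2.0
test_mean = hist_mean * J                   # Mean = Historical mean * J
test_std  = hist_mean * J * hist_cv / 100   # Std Dev = Historical mean * J * CV / 100

print(f"Theoretical mean to validate against   : {test_mean:.2f}")
print(f"Theoretical std dev to validate against: {test_std:.2f}")
# Compare these targets to the mean and std dev of the simulated values
```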
Add Heteroskedasticity to the Simulation
• Sometimes we want the CV to change over time:
• A change in policy could increase the relative risk
• A change in management strategy could change the relative risk
• A change in technology can change the relative risk
• A change in market volatility can change the relative risk
• Create an Expansion factor, or Et+i value, for each year to simulate
• Et+i is a fractional adjustment to the relative risk:
• 0.0 results in no risk at all for the random variable
• 1.0 results in the same relative risk (CV) as the historical period
• 1.5 results in a 50% larger CV than the historical period
• 2.0 results in a 100% larger CV than the historical period
• See Chapter 9
Add Heteroskedasticity to the Simulation
• Simulate 5 years with no risk in the first year, historical risk in year 2, 15% greater risk in year 3, and 25% greater CV in years 4-5
• The Et+i values for years 1-5 are, respectively, 0.0, 1.0, 1.15, 1.25, 1.25
• Apply the Et+i expansion factors as follows (see the sketch after this list):
• Normal distribution:
  Ỹt+i = Ῡt+i + (Std Devhistory * Jt+i * Et+i * SND)
  Ỹt+i = NORM(Ῡt+i, StdDevhistory * Jt+i * Et+i)
• Empirical distribution, if the Sj are deviations from the mean:
  Ỹt+i = Ῡt+i * {1 + [Empirical(Sj, F(Sj), USD) * Et+i]}
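A sketch of the expansion factors applied to the Normal form, using the Et+i values from this slide; the historical mean and standard deviation are reused from the earlier CV Stationarity slide (an assumed pairing), and the forecast means are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
iterations = 10_000

hist_mean, hist_std = 2.74, 1.84
forecast_means = [3.00, 3.25, 4.00, 4.50, 5.00]      # illustrative forecasts
E = [0.0, 1.0, 1.15, 1.25, 1.25]                     # Et+i values from the slide

for t, (mean_t, e_t) in enumerate(zip(forecast_means, E), start=1):
    J = mean_t / hist_mean                           # Jt+i = Ῡt+i / Ῡhistory
    snd = rng.standard_normal(iterations)
    y = mean_t + hist_std * J * e_t * snd            # Ỹ = Ῡ + (σhist * J * E * SND)
    print(f"Year {t}: E = {e_t:4.2f}, CV = {100 * y.std() / y.mean():5.1f}%")

# Year 1 has zero CV, year 2 matches the historical CV, years 3-5 are 15-25% larger
```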