Risk Management Models - What to use and not to use Peter Luk April 2008
Historical impact of financial mathematics
• 1973 – Groundbreaking Black-Scholes formula
• 1979 – Actuarial paper on equity-linked guarantees
• 1987 – Portfolio insurance. DJIA down 22%
• 1997/98 – LTCM. DJIA down 18%
• 2007/08 – Sub-prime. DJIA down 17%
Typical Assumptions
• Risk-free rate of interest: constant or normally distributed
• Equity price change: normally distributed with constant volatility
By the Cornish-Fisher expansion, we have

$$w \approx z + \frac{1}{6}\left(z^{2}-1\right)\gamma_{1} + \frac{1}{24}\left(z^{3}-3z\right)\gamma_{2} - \frac{1}{36}\left(2z^{3}-5z\right)\gamma_{1}^{2} + \cdots$$

where $z$ is the corresponding quantile of the standard normal distribution, $\gamma_{1}$ is the coefficient of skewness, and $\gamma_{2}$ is the coefficient of (excess) kurtosis.
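As a minimal sketch of the expansion above (the standard first three correction terms; function name and test values are illustrative), one can adjust a normal quantile for skewness and excess kurtosis:

```python
from scipy.stats import norm

def cornish_fisher_quantile(p, skew, ex_kurt):
    """Approximate the p-quantile of a non-normal distribution
    (in standard-deviation units) from its skewness and excess
    kurtosis, using the Cornish-Fisher expansion shown above."""
    z = norm.ppf(p)  # quantile of the standard normal
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * ex_kurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)

# Negative skew and fat tails push the 1% quantile well beyond
# the normal value of about -2.33:
print(norm.ppf(0.01))                          # about -2.33
print(cornish_fisher_quantile(0.01, -0.5, 3))  # about -3.3, noticeably worse
```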
Are stock price changes distributed normally?
• Professor Eugene Fama says, "If the population of price changes is strictly normal … an observation that is more than five standard deviations from the mean should be observed once every 7,000 years. In fact, such observations seem to occur about once every three or four years."
• In August 2007, the Wall Street Journal reported that events (Lehman Brothers') models predicted would happen only once in 10,000 years happened every day for three days.
• The Financial Times reported that Goldman Sachs witnessed something that, according to their model, only happens once every 100,000 years.
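Fama's arithmetic is easy to verify under the normal assumption; the short check below assumes roughly 250 trading days a year:

```python
from scipy.stats import norm

# Probability of a daily move more than five standard deviations
# from the mean under a normal distribution:
p = 2 * norm.sf(5)           # two-sided tail, about 5.7e-7

trading_days_per_year = 250  # assumption: ~250 trading days a year
years_between_events = 1 / (p * trading_days_per_year)
print(round(years_between_events))  # ~7,000 years, matching Fama's figure
```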
Some empirical tests
• Five indices:
• Hang Seng Index
• Nikkei 225
• Dow Jones Industrial Average
• Standard & Poor's 500
• Financial Times Stock Exchange 100 Index
• Three different six-month periods:
• Jan. 2007 – Jun. 2007
• Jul. 2007 – Dec. 2007
• Oct. 2007 – Mar. 2008
• Three different normality tests:
• Chi-square
• Kolmogorov-Smirnov
• Anderson-Darling
• Half of the 15 index/period combinations failed the tests of normality.
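A sketch of how such tests can be run on daily log returns with scipy. The chi-square test is stood in for by scipy's omnibus normality test (whose statistic is chi-square distributed), and the data below are simulated fat-tailed returns, not the five indices:

```python
import numpy as np
from scipy import stats

def normality_tests(prices):
    """Run three normality tests on daily log returns.
    `prices` is a series of daily index closes."""
    r = np.diff(np.log(prices))            # daily log returns
    z = (r - r.mean()) / r.std(ddof=1)     # standardise for the KS test

    ks = stats.kstest(z, 'norm')           # Kolmogorov-Smirnov
    ad = stats.anderson(r, dist='norm')    # Anderson-Darling
    omni = stats.normaltest(r)             # chi-square-distributed omnibus test

    print('KS p-value:     ', ks.pvalue)
    print('AD statistic:   ', ad.statistic, ' 5% critical:', ad.critical_values[2])
    print('Omnibus p-value:', omni.pvalue)

# Simulated Student-t (df=3) returns typically fail all three tests:
rng = np.random.default_rng(0)
normality_tests(100 * np.exp(np.cumsum(0.01 * rng.standard_t(3, 500))))
```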
Implied volatility
• When the observed interest rate and historic volatility cannot reproduce the observed price, people work in reverse, using the observed interest rate and price to back out the so-called "implied volatility".
• The Cornish-Fisher expansion above gives the approximate relationship: implied volatility = historic volatility × adjustment factor.
• Implied volatility is guesswork, because it folds in anticipated future changes in volatility. This, together with the falsehood of the normal assumption, makes Black-Scholes and other derivative models unreliable.
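A minimal sketch of the inversion: price a European call with Black-Scholes, then solve for the volatility that reproduces an observed price (all figures illustrative):

```python
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Solve for the volatility that reproduces the observed price."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# If the market price exceeds the model price at historic volatility,
# the implied volatility comes out higher:
print(bs_call(100, 100, 0.5, 0.05, 0.20))      # ~6.89 at 20% historic vol
print(implied_vol(8.00, 100, 100, 0.5, 0.05))  # ~0.24, the "implied" vol
```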
Market liquidity
• An important (perhaps the most important) assumption often taken for granted in conventional modeling is market liquidity: we all assume that for every willing seller there is a ready buyer.
• LTCM (and its Nobel laureates) learned a painful lesson when they found there were no buyers when they were forced to sell.
• Similarly, many sub-prime failures are victims of illiquidity.
Conclusions
• Conventional models based on the normal distribution will continue to be used because of their popularity and simplicity (and the lack of alternatives).
• Occam's razor – other things being equal, always use the simpler model.
• Short-term models are acceptable; long-term models are more likely to be unreliable. Insurance models always have longer horizons than trading models.
• Avoid relying on tail expectations (our knowledge of the tail is very limited).
• Financial markets are subject to the influence of mass human psychology, and history has proven us wrong every time we thought we had got it right.
To use:
• Short-term models
• Simple models
• Constant risk-free yield curve
• Constant volatility
Not to use:
• Long-term models
• Complicated models (unless they have passed stress testing)
• Variable risk-free yield curve
• Fluctuating volatility
• Calculation of conditional tail expectations
There are six mortgages here, showing the mortgagor's income and the amount of the mortgage.
• The first three are fine, but the last three end up in default.
• A logit model is used here, and the maximum likelihood method is used to estimate the probability of default, as follows.
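Since the slide's table of six mortgages is not reproduced here, the sketch below uses hypothetical figures; it fits the logit model P(default) = 1/(1 + e^(-Xβ)) by maximizing the likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the six mortgages on the slide
# (income and loan in $000s; the original figures are not reproduced here):
income  = np.array([ 80.0,  60.0,  75.0,  40.0,  60.0,  30.0])
loan    = np.array([200.0, 150.0, 300.0, 180.0, 230.0, 160.0])
default = np.array([  0.0,   0.0,   0.0,   1.0,   1.0,   1.0])

# Intercept plus loan-to-income ratio as the single explanatory variable:
X = np.column_stack([np.ones(len(loan)), loan / income])

def neg_log_likelihood(beta):
    """Negative log-likelihood of the logit model P(default) = 1/(1+exp(-X.beta))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)   # guard against log(0)
    return -np.sum(default * np.log(p) + (1 - default) * np.log(1 - p))

beta_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
print(np.round(p_hat, 2))   # fitted probability of default for each mortgage
```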
Criteria for a good credit rating model:
• Size of past data
• Homogeneous data
• Criticism of rating agencies
• Sub-prime crisis due mainly to securitization (HSBC vs Bear Stearns)
Conclusions
• An in-house credit rating model (a logit or probit model) is a great help, even though it is not foolproof.
• While public ratings are useful, we should not have blind faith in them.
To use:
• In-house models
• Public ratings for long-established industries
• Public ratings for familiar financial instruments
Not to use:
• Public ratings for new industries
• Public ratings for newly developed economies
• Public ratings for unfamiliar financial instruments
Asset modeling is one-dimensional (you can have assets without liabilities). Liability modeling is multi-dimensional (you always have accompanying assets); it must address:
• Market-consistent assumptions
• Cash flows and due dates
• Worst possible scenarios
• Regulatory requirements
• Public confidence
1st – Market-consistent liability data do not exist.
• Example: the same $100m liability can be put at $100m / 1.05 = $95.24m (discounted at 5%) or at $100m × 0.7 = $70m (70 cents on the dollar) – which is "market consistent"?
• Rule-based? A shelf life of 15 years
2nd – A liability is not a liability until due.
• Positive cash flow
• Due dates can change: cross-default and chain reactions
3rd – Worst-case scenario.
• Actuaries know this from catastrophe insurance and reinsurance
• Only model natural events, not human behavior
• Do not model the tail of the distribution: we know too little about it
4th – Regulatory requirements.
• Whether reasonable or not is irrelevant
• Capital requirements are treated as a liability for modeling
5th – Public confidence.
• Difficult to measure
• A confidence crisis does not happen often, but is devastating when it does
Other things to note:
• No publicly recognized models
• Pricing models are far more important than valuation models
• Guarantees 1: guaranteed equity-linked insurance
• Guarantees 2: interest guarantee (a sketch follows below)
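As a minimal sketch of an interest guarantee (all figures illustrative; note that it leans on the very normal-return assumption criticized earlier): the insurer must top up the fund whenever the earned rate falls short of the guaranteed rate:

```python
import numpy as np

# Illustrative: cost of a one-year interest guarantee of 4% on a
# fund whose earned return is assumed normal (mean 5%, vol 10%).
rng = np.random.default_rng(1)
fund = 100.0
guaranteed = 0.04
earned = rng.normal(0.05, 0.10, 100_000)

# The insurer pays the shortfall whenever earned < guaranteed:
shortfall = np.maximum(guaranteed - earned, 0) * fund
print(round(shortfall.mean(), 2))  # expected guarantee cost per 100 of fund
```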
Conclusions
• While asset modeling is one-dimensional, liability modeling has at least five dimensions: market-consistent data, positive cash flows and due dates, worst-case scenarios, regulatory requirements, and public confidence.
• Pricing models are far more important than valuation models.
• Modeling of long-term liabilities is unreliable.
To use:
• Regulatory requirements, whether reasonable or not
• Always model cash flows under different scenarios (see the sketch below)
• Always consider the worst-case scenario
• Use stress testing or back testing
Not to use:
• Any pricing model that ignores the worst-case scenario
• Long-term models for contingent liabilities
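A minimal sketch of cash-flow testing under named scenarios, including a deliberate worst case with a fire-sale haircut (all figures and scenario parameters illustrative):

```python
import numpy as np

# Liability cash flows due in years 1-4, against level asset income:
liability_cashflows = np.array([30.0, 30.0, 40.0, 20.0])
asset_income        = np.array([35.0, 35.0, 35.0, 35.0])

scenarios = {
    'base':       {'reinvest': 0.05, 'asset_haircut': 0.00},
    'low_rates':  {'reinvest': 0.01, 'asset_haircut': 0.00},
    'worst_case': {'reinvest': 0.00, 'asset_haircut': 0.30},  # illiquid fire sale
}

for name, s in scenarios.items():
    balance = 0.0
    for cf_in, cf_out in zip(asset_income, liability_cashflows):
        balance *= 1 + s['reinvest']                    # roll forward surplus
        balance += cf_in * (1 - s['asset_haircut']) - cf_out
    # A negative end balance flags a scenario the book cannot survive:
    print(f'{name:10s} end balance: {balance:7.2f}')
```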
• 1881 – Frederick Taylor introduced time-and-motion studies and unit cost; the age of micro-modeling began.
• 1977 – The introduction of the home computer, followed by the internet in the 1990s, started the digital age.
• Business models changed drastically; unit cost started to become irrelevant.
• We are witnessing the onset of the age of macro-modeling.
• Only one actuarial paper (Australian) has dealt with macro-pricing.
An example – unit cost in insurance
• Cost per policy and cost per thousand sum insured: nobody has verified whether the assumptions made decades ago were correct.
• Cost as a percentage of premium has remained at around 20% for a hundred years.
• This relationship is due to the mutation of human behavior:
• Cost will not go lower
• Cost will not go higher
Conclusions
• Business models will change together with our IT infrastructure; income and outgo will have to be modeled separately in future.
• A good understanding of macro-modeling will improve our competitive edge vis-à-vis other actuaries and other financial professionals.
• Perhaps regulatory authorities should start looking at things at the macro level: instead of having triggers only for individual companies (such as a capital-adequacy trigger), there should be triggers at the industry level, such as the total liabilities of all financial institutions (including securitized instruments) versus total assets.
Thank you. Please refer to the hard copies for further references.