
Estimating Parameters for Basel Guidelines Compliance

Explore methods for estimating the PD, LIED, and EAD values that are critical for calculating EL and UL, and that feed into regulatory capital under the Basel guidelines. Learn expert credit grading and quantitative scoring techniques.


Presentation Transcript


  1. CHAPTER 19 Estimating Parameter Values for Single Facilities

  2. INTRODUCTION • In the previous chapter, we discussed the framework and equations to calculate the expected loss (EL) and unexpected loss (UL) for a single facility • These equations depended critically on three parameters: • the probability of default (PD) • the loss in the event of default (LIED) • the exposure at default (EAD)

  3. INTRODUCTION • As we will see later, these three parameters are also important for calculating regulatory capital under the new guidelines from the Basel Committee • In this chapter, we discuss the methods that banks use to find values for these three parameters, and we show example calculations of EL and UL • Most of the methods rely on the analysis of historical information • Therefore, at the end of the chapter there is a short discussion of the types of data that should be recorded by banks

  4. ESTIMATING THE PROBABILITY OF DEFAULT • The approaches to estimating the credit quality of a borrower can be grouped into four categories • expert credit grading • quantitative scores based on customer data • equity-based credit scoring • cash-flow simulation

  5. Expert Credit Grading • There are three steps to estimating the probability of default through expert grading • The first step is to define a series of buckets or grades into which customers of differing credit quality can be assigned • The second step is to assign each customer to one of the grades • The final step is to look at historical data for all the customers in each grade and calculate their average probability of default

  6. Expert Credit Grading • The most difficult of these three steps is assigning each customer to a grade • The highest grade may be defined to contain customers who are "exceptionally strong companies or individuals who are very unlikely to default" • The lower grades may contain customers who "have a significant chance of default" • Credit-rating agencies use around 20 grades, as shown in Table 19-1.

  7. Expert Credit Grading • From many years of experience, and by studying previous defaults, the experts have an intuitive sense of the quantitative and qualitative indicators of trouble • For example, they know that a company whose annual sales are less than its assets is likely to default

  8. Expert Credit Grading • For large, single transactions, such as loans to large corporations, banks will rely heavily on the opinion of experts. • For large-volume, small exposures, such as retail loans, the bank reduces costs by relying mostly on quantitative data, and only using expert opinion if the results of the quantitative analysis put the customer in the gray area between being accepted and rejected

  9. Quantitative Scores Based on Customer Data • Quantitative scoring seeks to assign grades based on the measurable characteristics of borrowers that, at times, may include some subjective variables such as the quality of the management team of a company • The quantitative rating models are often called score cards because they produce a score based on the given information • Table 19-2 shows the types of information typically used in a model to rate corporations, and Table 19-3 shows information used to rate individuals.

  10. Quantitative Scores Based on Customer Data • [Tables 19-2 and 19-3: typical rating-model inputs for corporations and for individuals]

  11. Quantitative Scores Based on Customer Data • The variables used should also be relatively independent from one another • An advanced technique to ensure this is to transform the variables into their principal components (the eigenvectors of the data's covariance matrix), as sketched below • The number of variables in the model should be limited to those that are strongly predictive • Also, there should be an intuitive explanation as to why each variable in the model is meaningful in predicting default
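
To illustrate, here is a minimal sketch of the principal-components transformation, assuming only numpy and a synthetic data matrix (a real scorecard would use actual customer variables such as those in Tables 19-2 and 19-3):

```python
# Decorrelating scorecard variables via principal components.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))           # hypothetical customer data: rows = customers
X[:, 1] += 0.8 * X[:, 0]                # make two raw variables correlated

Xc = X - X.mean(axis=0)                 # center each variable
cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigen-decomposition of the symmetric matrix

Z = Xc @ eigvecs                        # principal-component scores
# covariance of Z is (approximately) diagonal: the components are uncorrelated
print(np.round(np.cov(Z, rowvar=False), 2))
```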

  12. Quantitative Scores Based on Customer Data • For example, low profitability would intuitively signal a higher probability of default • If the variables used in the model are not intuitive, the model will probably not be accepted by the credit officers • There are two common approaches: • discriminant analysis • logistic regression

  13. Discriminant Analysis • Discriminant analysis attempts to classify customers into two groups: • those that will default • those that will not • It does this by assigning a score to each customer • The score is the weighted sum of the customer data:

  S_c = \sum_i w_i X_{i,c}

  14. Discriminant Analysis • Here, w_i is the weight on data type i, and X_{i,c} is one piece of customer data • The values for the weights are chosen to maximize the difference between the average score of the customers that later defaulted and the average score of the customers who did not default

  15. Discriminant Analysis • The actual optimization process to find the weights is quite complex • The most famous discriminant score card is Altman's Z Score. • For publicly owned manufacturing firms, the Z Score was found to be as follows:

  16. Discriminant Analysis

  Z = 1.2 X_1 + 1.4 X_2 + 3.3 X_3 + 0.6 X_4 + 1.0 X_5

  where X_1 = working capital / total assets, X_2 = retained earnings / total assets, X_3 = earnings before interest and taxes / total assets, X_4 = market value of equity / book value of total liabilities, and X_5 = sales / total assets

  17. Discriminant Analysis • Typical ratios for the bankrupt and nonbankrupt companies in the study were as follows: [Table: average ratio values for the bankrupt and nonbankrupt groups]

  18. Discriminant Analysis • A company scoring less than 1.81 was "very likely" to go bankrupt later • A company scoring more than 2.99 was "unlikely" to go bankrupt. • The scores in between were considered inconclusive
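
As an illustration, here is a small sketch of the Z-score calculation with the cut-offs above; the ratio values passed in are hypothetical:

```python
# Altman Z-score for a publicly owned manufacturing firm.
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

# hypothetical ratios for one company
z = altman_z(wc_ta=0.15, re_ta=0.20, ebit_ta=0.10, mve_tl=1.1, sales_ta=1.3)

if z < 1.81:
    zone = "very likely to go bankrupt"
elif z > 2.99:
    zone = "unlikely to go bankrupt"
else:
    zone = "inconclusive"
print(f"Z = {z:.2f}: {zone}")
```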

  19. Discriminant Analysis • This approach has been adopted by many banks • Some banks use the equation exactly as it was created by Altman • But most use Altman's approach on their own customer data to get scoring models that are tailored to the bank • To obtain the probability of default from the scores, we group companies according to their scores at the beginning of a year, and then calculate the percentage of companies within each group that defaulted by the end of the year
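
A sketch of that calibration step, assuming synthetic scores and default flags in place of a bank's historical records:

```python
# Empirical probability of default per score band.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(2.5, 1.0, size=2000)            # start-of-year scores
pd_true = 1.0 / (1.0 + np.exp(2.0 * scores - 3.0))  # lower score = riskier (synthetic)
defaults = rng.random(2000) < pd_true               # defaults observed by year end

bands = np.quantile(scores, np.linspace(0, 1, 6))   # edges of five score bands
for lo, hi in zip(bands[:-1], bands[1:]):
    in_band = (scores >= lo) & (scores < hi)
    n = in_band.sum()
    print(f"score [{lo:5.2f}, {hi:5.2f}): n={n:4d}  PD = {defaults[in_band].mean():.1%}")
```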

  20. Limitations of the Z-score model • The past performance reflected in a firm's accounting statements may not be informative in predicting the future • A lack of theoretical underpinning • Vulnerability to accounting fraud

  21. Logistic Regression • Logistic regression is very similar to discriminant analysis except that it goes one step further by relating the score directly to the probability of default • Logistic regression uses a logit function as follows:

  P_c = \frac{1}{1 + e^{Y_c}}

  22. Logistic Regression • Here, P_c is the customer's probability of default • Y_c is a single number describing the credit quality of the customer

  23. Logistic Regression • Y_c (a single number describing the credit quality of the customer) is a constant plus a weighted sum of the observable customer data:

  Y_c = a + \sum_i w_i X_{i,c}

  24. Logistic Regression • When Y_c is a large negative number, the probability of default is close to 100% • As Y_c becomes a large positive number, the probability drops towards 0 • The probability transitions from 1 to 0 along an "S-curve," as in Figure 19-1.

  25. Logistic Regression • [Figure 19-1: the logistic S-curve relating Y_c to the probability of default P_c]

  26. Logistic Regression • To create the best model, we want to find the set of weights that produces the best fit between P_c and the observed defaults • We would like P_c to be close to 100% for a customer that later defaults and close to 0 if the customer does not default • This can be accomplished using maximum likelihood estimation (MLE)

  27. Logistic Regression • In MLE, we define the likelihood function L_c for the customer • to be equal to P_c if the customer did default • to be 1 − P_c if the customer did not default

  28. Logistic Regression • We then create a single number, J, that is the product of the likelihood functions for all customers:

  J = \prod_c L_c

  29. Logistic Regression • J will be maximized if we choose the weights in Y_c such that for every company, whenever • there is a default, P_c is close to 1 • there is no default, P_c is close to 0 • If we could choose the weights such that J equals 1, we would have a perfect model that predicts with 100% accuracy whether or not a customer will default • In reality, it is very unlikely that we will achieve a perfect model, and we settle for the set of weights that makes J as close as possible to 1

  30. Logistic Regression • The final result is a model of the form:

  P_c = \frac{1}{1 + e^{a + \sum_i w_i X_{i,c}}}

  • where the values for all the weights are fixed • Now, given the data (X_i) for any new company, we can estimate its probability of default
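
A minimal sketch of the whole fitting procedure with scipy, using the sign convention above (P_c = 1/(1 + e^{Y_c}), so a negative Y_c means PD near 100%); the customer data and default flags are synthetic:

```python
# Fitting score-card weights by maximum likelihood estimation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))                   # three customer variables (synthetic)
Yc_true = -1.0 + X @ np.array([2.0, -1.0, 0.5])  # hypothetical "true" credit quality
defaults = rng.random(1000) < 1.0 / (1.0 + np.exp(Yc_true))

def neg_log_likelihood(params):
    a, w = params[0], params[1:]
    p = 1.0 / (1.0 + np.exp(a + X @ w))          # P_c for every customer
    p = np.clip(p, 1e-12, 1 - 1e-12)             # numerical safety
    # log J = sum of log P_c over defaulters and log(1 - P_c) over the rest
    return -np.sum(np.where(defaults, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_likelihood, x0=np.zeros(4), method="BFGS")
a_hat, w_hat = fit.x[0], fit.x[1:]

x_new = np.array([0.2, -0.5, 1.0])               # data for a new company (hypothetical)
pd_new = 1.0 / (1.0 + np.exp(a_hat + x_new @ w_hat))
print(f"estimated PD for the new company: {pd_new:.1%}")
```

Maximizing J directly would underflow for large samples, so the sketch maximizes log J (equivalently, minimizes the negative log-likelihood), which gives the same weights.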

  31. Testing Quantitative Scorecards • An important final step in building quantitative models is testing • The models should be tested to see if they work reliably • One way to do this is to use them in practice and see if they are useful in predicting default

  32. Testing Quantitative Scorecards • The usual testing procedure is to use hold-out samples • Before building the models, the historical customer data is separated randomly into two sets: • the model set • the test set • The model set is used to calculate the weights • The final model is then run on the data in the test set to see whether it can predict defaults
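
A sketch of the hold-out procedure, with synthetic stand-ins for the historical customer records:

```python
# Randomly splitting historical records into a model set and a test set.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))            # historical customer data (synthetic)
defaults = rng.random(1000) < 0.1         # observed default flags (synthetic)

idx = rng.permutation(len(X))
cut = int(0.7 * len(X))                   # e.g., 70% model set, 30% test set
model_idx, test_idx = idx[:cut], idx[cut:]

X_model, d_model = X[model_idx], defaults[model_idx]
X_test, d_test = X[test_idx], defaults[test_idx]
# the weights are calculated on (X_model, d_model) only; the final model is
# then scored on (X_test, d_test) to see whether it can predict defaults
```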

  33. Testing Quantitative Scorecards • The results of the test can be presented as a power curve • The power curve is constructed by sorting the customers according to their scores, and then constructing a graph with the percentage of all the customers on the x-axis and the percentage of all the defaults on the y-axis

  34. Testing Quantitative Scorecards • For this graph, x and y are given by the following equations:

  x_k = \frac{k}{N}, \qquad y_k = \frac{1}{N_D} \sum_{c=1}^{k} I_c

  • where k is the cumulative number of customers counted so far, N is the total number of customers, I_c is an indicator that equals 1 if company c failed and 0 otherwise, and N_D is the total number of defaulted customers in the sample
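
A sketch of the power-curve construction from these equations, with synthetic test-set scores and default flags:

```python
# Building the power (CAP) curve: sort from worst score to best, then track
# the cumulative share of defaults captured.
import numpy as np

rng = np.random.default_rng(4)
scores = rng.normal(size=500)                               # low score = risky (synthetic)
defaults = rng.random(500) < 1.0 / (1.0 + np.exp(4 * scores))

order = np.argsort(scores)                  # riskiest (lowest) scores first
I = defaults[order].astype(float)           # indicator I_c per sorted customer
N, ND = len(I), I.sum()

x = np.arange(1, N + 1) / N                 # x_k = k / N
y = np.cumsum(I) / ND                       # y_k = (1/N_D) * sum of I_c for c <= k

for q in (0.1, 0.25, 0.5):
    k = int(q * N)
    print(f"worst {q:.0%} of scores capture {y[k - 1]:.0%} of defaults")
```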

  35. Testing Quantitative Scorecards A perfect model is one in which the scores are perfectly correlated with default, and the power curve rises quickly to 100%, as in Figure 19-2.

  36. Testing Quantitative Scorecards A completely random model will not predict default, giving a 45-degree line, as in Figure 19-3. Most models will have a power curve somewhere between Figure 19-2 and Figure 19-3.

  37. Testing Quantitative Scorecards • Accuracy ratio (from the CAP curve) = (area under the model's CAP) / (area under the ideal CAP) • Type I vs. Type II errors: • type I error: classifying a subsequently failing firm as non-failed • type II error: classifying a subsequently non-failing firm as failed
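
A small sketch of counting type I and type II errors for a given score cut-off, again on synthetic data:

```python
# Type I / type II error counts at a hypothetical score cut-off.
import numpy as np

rng = np.random.default_rng(5)
scores = rng.normal(size=500)                                 # low score = risky
failed = rng.random(500) < 1.0 / (1.0 + np.exp(4 * scores))   # synthetic outcomes

cutoff = 0.0                                  # classify below the cut-off as "failed"
predicted_failed = scores < cutoff

type_i = np.sum(failed & ~predicted_failed)   # failing firm passed as non-failed
type_ii = np.sum(~failed & predicted_failed)  # non-failing firm flagged as failed
print(f"type I errors: {type_i}, type II errors: {type_ii}")
```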

  38. Testing Quantitative Scorecards • The costs of misclassifying a firm that subsequently fails are much more serious than the costs of misclassifying a firm that does not fail • In particular, in the first case, the lender can lose up to 100% of the loan amount while, in the latter case, the loss is just the opportunity cost of not lending to that firm. • Accordingly, in assessing the practical utility of failure prediction models, banks pay more attention to the misclassification costs involved in type I rather than type II errors.

  39. Equity-Based Credit Scoring • The scoring methods described above relied mostly on examination of the inner workings of the company and its balance sheet • A completely different approach is based on work by Merton, which has been enhanced by the company KMV • Merton observed that holding the debt of a risky company is equivalent to holding the debt of a risk-free company plus being short a put option on the assets of the company • Simply speaking, the shareholders have the right to sell the company to the debt holders

  40. Equity-Based Credit Scoring • When will the shareholders sell the company to the debt holders? • If the value of the assets falls below the value of the debt, the shareholders can put the assets to the debt holders • In return, they receive the right not to repay the full amount of the debt

  41. Equity-Based Credit Scoring • In this analogy, the underlying for the put option is the company's assets, and the strike price is the amount of debt • This observation led Merton to develop a pricing model for risky debt and allowed the calculation of the probability of default • This calculation is illustrated in Figure 19-4.

  42. Equity-Based Credit Scoring • [Figure 19-4: the distribution of asset values relative to the debt level, illustrating the probability-of-default calculation]

  43. Distance to Default • [Figure: illustration of the distance to default]

  44. Equity-Based Credit Scoring • It is relatively difficult to observe directly the total value of a company's assets, but it is reasonable to assume that the value of the assets equals the value of the debt plus the equity, and that the value of the debt is approximately stable • This assumption allows us to say that changes in the asset value equal changes in the equity price • This approach is attractive because equity information is readily available for publicly traded companies, and it reflects the market's collective opinion on the strength of the company

  45. Equity-Based Credit Scoring • We can then use the volatility of the equity price to predict the probability that the asset value will fall below the debt value, causing the company to default • If we assume that the equity value (E) has a Normal probability distribution with standard deviation \sigma_E, the probability that the equity value will be less than zero is given by the following:

  P(E < 0) = \Phi\left(-\frac{E}{\sigma_E}\right)

  46. Equity-Based Credit Scoring • E/\sigma_E is the critical value, or the distance to default • It is the number of standard deviations between the current equity value and zero

  47. Equity-Based Credit Scoring • With these simplifying assumptions, for any given distance to default we can calculate the probability of default, as in Table 19-4 • This table also shows the ratings that correspond to each distance to default.
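
A minimal sketch of this simplified calculation, assuming scipy and hypothetical values for the equity level and its volatility:

```python
# Distance to default and the implied probability of default, under the
# simplifying assumption that the equity value is Normally distributed.
from scipy.stats import norm

E = 400.0        # current equity value, e.g. $400M (hypothetical)
sigma_E = 100.0  # one-year standard deviation of the equity value (hypothetical)

dd = E / sigma_E         # distance to default, in standard deviations
pd = norm.cdf(-dd)       # probability that the equity value falls below zero
print(f"distance to default = {dd:.1f} sigma, PD = {pd:.4%}")
```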
