
Presentation Transcript


  1. Comptroller of the Currency Administrator of National Banks Managing Model Risk in Retail Scoring Dennis Glennon Credit Risk Analysis Division Office of the Comptroller of the Currency September 28, 2012 The opinions expressed in this paper are those of the authors and do not necessarily reflect those of the Office of the Comptroller of the Currency. All errors are the responsibility of the authors.

  2. Agenda • Introduction to Model Risk • What is it? • Why is it relevant? • Managing Model Risk • Overview of Sound Model Development and Validation Procedures • Emerging Issues Related to Model Risk

  3. Model Risk: What is it? • Model Risk – Potential for adverse consequences from decisions based on incorrect or misused model outputs • Model errors that produce inaccurate outputs • Model may be used incorrectly or inappropriately (i.e., using a model outside the environment for which it was designed) • Model risk emerges from the process used to develop models for measuring credit risk. • The process introduces a secondary loss exposure beyond that of credit risk alone • e.g., poor underwriting decisions based on erroneous models or overly broad interpretations of model results.

  4. Model Risk: What is it? • Credit Risk: The risk to earnings or capital from an obligor's failure to meet the terms of any contract with the bank or to otherwise perform as agreed. • A conceptually distinct exposure to loss. • There are many reasons for poor model-based results, including: • Poor modeling (i.e., inadequate understanding of the business) • Poor model selection (i.e., overfitting) • Inadequate understanding of model use • Changing conditions in the market

  5. Managing Model Risk • The goal of model-risk analysis is to isolate the effects of a bank's choice of risk-management strategies from those associated with incorrect or misused model output. • Model validation is an essential component of a sound model-risk management process. • Validate at time of model development/implementation • Ongoing monitoring • Re-validate

  6. Model Risk • Model validation can be costly. • However, using unvalidated models to underwrite, price, and/or manage risk is potentially an unsafe and unsound practice. • The best defense against model risk is the implementation of formal, prudent, and comprehensive model-validation procedures.

  7. Model Risk: Sound Modeling Practices • Sound modeling practices • In many cases, there are generally accepted methods of building and validating models. • These methods incorporate procedures developed in the finance, statistics, econometrics, and information theory literature. • Although these methods are valid, they may not be appropriate in all applications. • A model selected for its ability to discriminate between high and low risk may perform poorly at predicting the likelihood of default.

  8. Models as Decision Tools • Two primary modeling objectives • Classification: The model is used to rank credits by their expected relative performance • Prediction: The model is used to accurately predict the probability of the outcome • Modelers typically have one of these objectives in mind when developing and validating their models

  9. Model Selection: Which model is better? [Figure: side-by-side bar charts of observed goods (y=0) and bads (y=1) by score quintile for Model 1 and Model 2, with the bad rate per quintile computed as #B / (#G + #B)]
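As a rough illustration of the quintile comparison on this slide, the sketch below tabulates the bad rate, #B / (#G + #B), within each score quintile. It is a minimal sketch assuming Python with numpy/pandas; the simulated scores, outcome model, and function names are illustrative and not from the presentation.

```python
import numpy as np
import pandas as pd

def bad_rate_by_quintile(score, y):
    """Bad rate, #B / (#G + #B), within each score quintile (1 = lowest scores)."""
    df = pd.DataFrame({"score": score, "y": y})
    df["quintile"] = pd.qcut(df["score"], q=5, labels=False) + 1
    g = df.groupby("quintile")["y"]
    return pd.DataFrame({"n_bad": g.sum(), "n_total": g.size(), "bad_rate": g.mean()})

# Simulated portfolio: higher score = lower risk, so the bad rate falls by quintile.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 5000)
y = rng.binomial(1, 0.5 - 0.004 * score)  # bad indicator (y=1) with rate declining in score
print(bad_rate_by_quintile(score, y))
```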

  10. Models as Decision Tools • A comparison of models: visual summary [Figure: two panels, "Reliable and Accurate" and "Reliable, but not Accurate," contrasting predicted odds of 33:1 (bad rate 3.0%) with observed odds of 12.2:1 (bad rate 7.6%) at a score of 253]

  11. Illustrative Example • A 5% bad rate corresponds to good:bad odds of roughly 20:1, i.e., a log-odds of ln(20/1) ≈ 3.0 • A 20% bad rate corresponds to odds of 4:1, i.e., ln(4/1) ≈ 1.4
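These log-odds conversions are simple to verify; a minimal sketch in Python (the function names are illustrative):

```python
import math

def log_odds(bad_rate):
    """Natural log of the good-to-bad odds implied by a bad rate."""
    return math.log((1 - bad_rate) / bad_rate)

def implied_bad_rate(lo):
    """Invert: the bad rate implied by a log-odds value."""
    return 1 / (1 + math.exp(lo))

print(log_odds(0.05))  # ~2.94; the slide rounds 19:1 odds to 20:1, giving ln(20/1) ≈ 3.0
print(log_odds(0.20))  # ~1.39, i.e., ln(4/1) ≈ 1.4
```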

  12. Models as Decision Tools • The model design should reflect how the model will be used. • As such, the choices of: • sample design • modeling technique • validation procedures should reflect the intended purpose for which the model will ultimately be used. • To effectively manage model risk, the right tools must be used.

  13. Models as Decision Tools • Models are developed for different purposes – i.e., classification or prediction. As such, the choices of: • sample design • modeling technique • validation procedures are driven by the intended purpose for which the model will ultimately be used.

  14. Model Validation • The classification objective is the weaker of the two conditions. • There are well-developed methods outlined in the literature and accepted by the industry that are used to assess the validity of models developed under that objective. • In practice, we see: • Development • KS / Gini used as the primary model selection tool • These are evaluated on the development, holdout, and out-of-time samples • Validation • KS / ΔKS • Stability tests (e.g., PSI, characteristic analysis, etc.) • Backtesting analysis
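A minimal sketch of two of the checks named above, the two-sample KS statistic and the Population Stability Index (PSI), assuming Python with numpy/scipy. The bin count, the 1e-6 floor on empty bins, and the usual rule-of-thumb reading of PSI (below ~0.10 stable, above ~0.25 a material shift) are common industry conventions rather than anything specified in the slides.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_statistic(scores_good, scores_bad):
    """Two-sample KS: maximum separation between the good and bad score CDFs."""
    return ks_2samp(scores_good, scores_bad).statistic

def psi(expected, actual, bins=10):
    """Population Stability Index between development (expected) and recent (actual) scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # keep out-of-range scores in the end bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```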

  15. Model Validation • Almost all scoring models generate KS values that reject the null that the distribution of good accounts is equal to the distribution of bads. • KS is also used to identify the specific model with the maximum separation across alternative models. • In practice, however, the difference between the maximum KS and those of alternative models is never tested using statistical methods (although there are tests outlined in the literature – e.g., Krzanowski and Hand, 2011). • More importantly, once a model is selected, few modelers apply a statistical test to determine if the KS has changed significantly over time to conclude the model is no longer working as expected.

  16. Model Validation • The tests that have been developed, however, tend to be sensitive to sample size. Given the size of development and validation samples, very small changes may be statistically significant. • OPEN ISSUE 1: Are there tests banks can use to test for statistical significance that are not overly sensitive to sample size?

  17. Model Validation • Predictive models are developed under a model accuracy objective. • As a result, a goodness-of-fit test is required for model selection. • Common performance measures used to evaluate predictive models: • Interval Test • Chi-Square Test • Hosmer-Lemeshow (H-L) Test • Unfortunately, these goodness-of-fit tests assume defaults are independent events. If the events are dependent, the tests will reject the null too frequently.
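A minimal sketch of the H-L test as it is usually implemented, assuming Python with pandas/scipy; the decile grouping and the g − 2 degrees of freedom are the textbook conventions. As the slide notes, the p-value is only trustworthy if defaults are independent.

```python
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """H-L goodness-of-fit: observed vs. expected defaults across predicted-risk deciles.

    Assumes independent default events; under dependence it rejects too frequently.
    """
    d = pd.DataFrame({"y": y, "p": p})
    d["grp"] = pd.qcut(d["p"], q=groups, labels=False, duplicates="drop")
    g = d.groupby("grp").agg(obs=("y", "sum"), exp=("p", "sum"), n=("y", "size"))
    # Chi-square contributions from the default and non-default cells of each group
    stat = ((g["obs"] - g["exp"]) ** 2 * (1 / g["exp"] + 1 / (g["n"] - g["exp"]))).sum()
    return stat, chi2.sf(stat, df=len(g) - 2)
```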

  18. Model Validation • The Vasicek Test is an alternative test of accuracy that allows for dependence. • The Vasicek Test is designed to capture the effect of dependence on the size of the confidence bands. • The confidence bands follow from the Vasicek one-factor model: the band endpoints are Φ((Φ⁻¹(PD) ± √ρ · Z.95) / √(1 − ρ)), where Vint is the width of the interval between them; the systematic factor is distributed N(0,1); Z.95 = 1.64; and ρ is the correlation.
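Assuming the band formula reconstructed above, a minimal Python sketch; the function name and the example PD and ρ values are illustrative, not from the presentation.

```python
from scipy.stats import norm

def vasicek_band(pd_est, rho, z=1.64):
    """90% confidence band for the realized default rate under the Vasicek one-factor model."""
    k = norm.ppf(pd_est)
    lower = norm.cdf((k - rho**0.5 * z) / (1 - rho) ** 0.5)
    upper = norm.cdf((k + rho**0.5 * z) / (1 - rho) ** 0.5)
    return lower, upper

# A 2% PD grade: small rho (point-in-time) gives tight bands; larger rho
# (through-the-cycle) widens them, so more models pass the test.
for rho in (0.01, 0.05, 0.15):
    lo, hi = vasicek_band(0.02, rho)
    print(f"rho={rho:.2f}: band = [{lo:.4f}, {hi:.4f}]")
```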

  19. Vasicek Test: An Example

  20. Model Validation: Vasicek Test • If ρ is too high, the bands are too wide: too many models would pass the test • ρ is not known and has to be estimated. • For point-in-time based models, ρ can be very small • For through-the-cycle based models, ρ can be large • In practice, we often see models fail the interval/Chi-square test, but pass the Vasicek test (especially when samples are large). • Open Issue 2: How do we resolve the inconsistency?

  21. Sensitivity of Validation Tests to Sample Size • Accuracy tests tend to reject models that • discriminate well • are consistent with the expectations of the line of business (LOB) • Measurement can be so precise that even a small, non-relevant difference in point estimates can be considered statistically significant.

  22. Illustrative Example

  23. Illustrative Example

  24. Illustrative Example

  25. Interval Tests with Large Samples • Conclusion: • Statistical difference: significant • Economic difference: insignificant • Solutions? • Reduce the number of observations by sampling: less powerful test • Redefine the test • Interval test • Focus on capital

  26. Interval Tests with Large Samples [Figure: confidence intervals (1)–(5) plotted against an economically acceptable range of −1% to +1%]

  27. Interval Test • Restate the null as an interval defined over an economically acceptable range • If the CI1-α around the point estimate is within the interval, conclude no economically significant difference • May want to reformulate the interval test in terms of an acceptable economic bias in the calculation of regulatory capital • Open Issue 3: How do we reconcile business and statistical significance?
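One way to operationalize the restated null, sketched under the assumption of a normal approximation for an observed-vs.-predicted default rate; the ±1% tolerance echoes the range on the previous slide, and all names and inputs are illustrative.

```python
import math
from scipy.stats import norm

def interval_test(obs_rate, pred_rate, n, tol=0.01, alpha=0.10):
    """Equivalence-style interval test: conclude 'no economically significant
    difference' only if the whole (1 - alpha) CI for obs - pred lies in [-tol, +tol]."""
    diff = obs_rate - pred_rate
    se = math.sqrt(obs_rate * (1 - obs_rate) / n)  # normal approximation for a rate
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff - z * se, diff + z * se
    return (-tol <= lo) and (hi <= tol), (lo, hi)

# With n = 500,000 a 0.2% miss is statistically significant (z ≈ 6.4) yet passes
# the economic interval test, since the CI sits well inside ±1%.
ok, ci = interval_test(obs_rate=0.052, pred_rate=0.050, n=500_000)
print(ok, ci)
```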

  28. Conclusion • Active management of model risk • Sound development, implementation, and use of models are vital elements, and • Rigorous model validation is critical to effective model risk management. • Model risk should be managed like other risks • Identify the source • Manage it properly
