Results of BBA/ISDA/RMA IRB Validation Study BBA/ISDA/RMA Advanced IRB Forum Monika Mars London - June 23, 2003
Agenda • Survey Methodology & Participants • Background – Use of Ratings • Survey Findings • Conclusions and Implications
Survey Approach • Survey research and design – 4th Quarter 2002 • Data collection and analysis – Jan – Feb 2003 • Interviews – Feb – Mar 2003 • 1st Draft – Mid March 2003 • Report preparation – Final Report Draft early May • Report presentation – June 19/23
Survey responses covered all asset classes and came from a diverse group of institutions
Agenda • Survey Methodology & Participants • Background – Use of Ratings • Survey Findings • Conclusions and Implications
Internal ratings are key to managing the business at most firms
Most banks use “Master Scales” to compare ratings information across portfolios
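A minimal sketch of what such a master scale mapping might look like in practice: PD estimates from different portfolio models are placed onto a single grade scale. The PD band boundaries and grade labels below are hypothetical, not taken from any surveyed bank.

```python
# Hypothetical master scale: upper PD bound of each grade.
import bisect

PD_UPPER_BOUNDS = [0.0003, 0.0010, 0.0025, 0.0075, 0.0200, 0.0600, 0.2000, 1.0]
GRADES = ["1", "2", "3", "4", "5", "6", "7", "8"]

def master_scale_grade(pd_estimate: float) -> str:
    """Map a PD estimate from any portfolio model onto the common master scale."""
    if not 0.0 <= pd_estimate <= 1.0:
        raise ValueError("PD must lie in [0, 1]")
    return GRADES[bisect.bisect_left(PD_UPPER_BOUNDS, pd_estimate)]

print(master_scale_grade(0.004))   # -> "4"
print(master_scale_grade(0.15))    # -> "7"
```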
Default definitions, time horizons and alignment to external sources vary among institutions • The definition of default is not always in line with the Basel II definition – this is particularly the case for retail portfolios • A one-year time horizon is most common, although the 1-year PD estimate may be based on a multi-year sample • Some banks use a time horizon of more than one year, while a few use less than one year to estimate PD • A small number of banks estimate PDs over the life of the loan • Most participants align a “majority” of their ratings in the corporate asset class to an external source, while most do not do so in the retail asset class
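One simple way a 1-year PD can be derived from a multi-year sample is to assume a constant annual default rate across the observation window. The sketch below illustrates this; the figures are hypothetical and the constant-hazard assumption is only one of several possible approaches.

```python
# Converting a cumulative multi-year default rate into a one-year PD under a
# constant annual default rate assumption. Sample figures are hypothetical.

def annual_pd_from_cumulative(cum_default_rate: float, horizon_years: float) -> float:
    """Return the constant annual PD that reproduces the observed
    T-year cumulative default rate: 1 - (1 - cumulative rate)^(1/T)."""
    return 1.0 - (1.0 - cum_default_rate) ** (1.0 / horizon_years)

# e.g. 30 defaults observed among 1,000 obligors tracked over 5 years
pd_1y = annual_pd_from_cumulative(30 / 1000, horizon_years=5)
print(f"implied one-year PD: {pd_1y:.4%}")   # ~0.61%
```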
Agenda • Survey Methodology & Participants • Background – Use of Ratings • Survey Findings • Conclusions and Implications
Key Findings • Banks employ a wide range of techniques for internal ratings validation • Ratings validation is not an exact science • Expert judgement is of critical importance in the process • Data issues centre on quantity, not quality • Regional differences exist with respect to the validation of internal ratings • Defining standards for stress testing requires additional work
Banks employ a wide range of techniques to validate internal ratings - key differences exist between corporate and retail ratings • Corporate Asset Class • Statistical models where the quantity of default data allows for robust estimation (particularly in the middle market) • Expert judgement models for portfolios where default data is limited • Hybrid and/or vendor models to complete the picture • Retail Asset Class • Statistical models are heavily relied upon due to the greater availability of internal data history
A variety of model types are employed within each asset class
Models for bank and sovereign exposures rely extensively on external information and expert judgement • Ratings for bank exposures are mostly derived by benchmarking against external ratings, together with expert judgement or hybrid models • Ratings for sovereign exposures are similarly derived by benchmarking against external ratings combined with expert judgement • Published default statistics are used for PD estimation for both bank and sovereign exposures
Most banks surveyed have a rating system for specialised lending in place but face major issues in its validation • A common theme is the lack of default data • Validation issues specific to specialised lending include: • differentiation of borrower and transaction • definition of default (particularly the restructuring clause) • inconsistent data history • time horizon of the model
Rating validation is not an exact science • Even where statistical techniques are used to assess model performance, absolute triggers and thresholds are not applied • There is no absolute KS statistic, Gini coefficient, COC or ROC measure that models need to reach to be considered adequate • Default statistics published by the major rating agencies are used differently from bank to bank, depending on each bank’s assessment of the most appropriate use of the external data • Benchmarking against external ratings raises many issues, including the “unknown” quality of external ratings, methodology differences, and the like
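For reference, a minimal sketch of how the discriminatory-power statistics named above (ROC area, Gini/accuracy ratio, KS) can be computed from model scores and observed default flags. The data are hypothetical, and, as the slide notes, no absolute pass/fail threshold is attached to the resulting numbers.

```python
# Discriminatory-power statistics from scores (higher = riskier) and default flags.

def roc_auc(scores, defaults):
    """Probability that a randomly chosen defaulter has a higher score than a
    randomly chosen non-defaulter (ties count half)."""
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bad for g in good)
    return wins / (len(bad) * len(good))

def gini(scores, defaults):
    """Gini / accuracy ratio, here taken as 2*AUC - 1."""
    return 2.0 * roc_auc(scores, defaults) - 1.0

def ks_statistic(scores, defaults):
    """Maximum gap between the cumulative score distributions of defaulters
    and non-defaulters."""
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    def cdf(sample, x):
        return sum(s <= x for s in sample) / len(sample)
    return max(abs(cdf(bad, c) - cdf(good, c)) for c in set(scores))

# Hypothetical portfolio of ten obligors
scores = [0.9, 0.8, 0.75, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
flags  = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(roc_auc(scores, flags), gini(scores, flags), ks_statistic(scores, flags))
```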
The performance of statistical rating models is assessed through a number of different techniques
Different triggers are used to evaluate the overall performance of expert judgement rating models
A variety of techniques are employed for evaluating vendor models
Expert judgement is essential in the validation process • Data scarcity prevents the use of statistical models for some asset classes: corporate, bank, sovereign, and specialised lending • Most respondents use a judgemental overlay by rating experts (account officer, credit analyst) to confirm or modify the risk rating output of their assessment model (statistical, hybrid, vendor) • Large proportions of banks’ exposures are covered by expert-judgement-based rating systems
Most data issues centre on the quantity of data available, not its quality • Most banks surveyed have initiated projects to collect the necessary data in a consistent manner across the institution to allow for statistical modelling in the future • The quantity of default data for the large corporate, bank, sovereign, and specialised lending exposure classes is a real problem for most institutions • Institutions have begun data pooling initiatives for PD and LGD data; however, there is scepticism as to whether these measures will solve the data quantity problem
Clear regional differences exist with regard to internal ratings for corporate assets and their validation • Expert judgement models are used for large corporate portfolios; however, the structure of the ratings differs significantly • In North America, fixed weightings are not assigned to the factors to be assessed by the experts • In Europe, specific weights for each factor are often set • Models based on equity market information (KMV) or balance sheet information (Moody’s RiskCalc) are used for corporate and middle market portfolios • In North America, these models tend to be an integral part of the rating and are used in conjunction with expert judgement in a hybrid approach • In Europe, these models are more likely to be used as a benchmark or a validation of the internal rating model
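A minimal sketch of the European-style fixed-weight expert rating template described above: experts score each factor and the scores are combined with preset weights. The factor names, weights, and scale are hypothetical, not drawn from any surveyed bank.

```python
# Hypothetical fixed-weight expert-judgement rating template.
FACTOR_WEIGHTS = {          # weights must sum to 1.0
    "financial_strength": 0.40,
    "industry_outlook":   0.20,
    "management_quality": 0.20,
    "country_risk":       0.20,
}

def combined_score(expert_scores: dict) -> float:
    """Weighted average of expert factor scores (each on a 1-10 scale, 10 = weakest)."""
    assert abs(sum(FACTOR_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(FACTOR_WEIGHTS[f] * expert_scores[f] for f in FACTOR_WEIGHTS)

print(combined_score({
    "financial_strength": 4, "industry_outlook": 6,
    "management_quality": 3, "country_risk": 5,
}))   # -> 4.4
```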
Similar differences can be observed for the retail asset class • Statistical (scorecard) techniques for retail exposures tend to be product-specific in the US and UK, while in Continental Europe the focus is on customer scores/ratings • US and UK scorecards are redeveloped more often than those in Continental Europe, where robustness of ratings and long-term stability are of higher priority • This often has direct implications for validation, as longer-term, more stable models tend to show, for example, lower Gini coefficients than models using the latest available data
More work needs to be done in defining standards for stress testing • There is currently no uniform approach regarding the type of stress testing undertaken, its frequency, or the actions taken in response to stress testing results • At the moment, stress testing is performed at the portfolio level, with risk ratings being a key input in stress testing scenarios for economic capital requirements • There is uncertainty around the Basel II requirements with respect to stress testing of rating model inputs – and also considerable debate as to its usefulness
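To illustrate how ratings feed portfolio-level stress scenarios, the sketch below shifts grade-level PDs to a stressed systematic scenario using the Vasicek single-factor formula that also underlies the Basel II risk-weight function. The portfolio, asset correlation, and scenario severity are hypothetical, and this is only one of many possible stress-testing designs.

```python
# Rating-driven portfolio stress sketch: condition each grade's PD on an
# adverse systematic factor realisation and compare expected losses.
from scipy.stats import norm

def stressed_pd(pd: float, rho: float, z_stress: float) -> float:
    """PD conditional on an adverse systematic scenario z_stress
    (Vasicek single-factor formula)."""
    return norm.cdf((norm.ppf(pd) + rho**0.5 * z_stress) / (1.0 - rho) ** 0.5)

# Hypothetical portfolio: (grade, PD, EAD, LGD)
portfolio = [("3", 0.0020, 500e6, 0.45),
             ("5", 0.0150, 300e6, 0.45),
             ("7", 0.0800, 100e6, 0.55)]

z = norm.ppf(0.999)   # severe downturn scenario
for grade, pd, ead, lgd in portfolio:
    el_base = pd * lgd * ead
    el_stress = stressed_pd(pd, rho=0.15, z_stress=z) * lgd * ead
    print(f"grade {grade}: expected loss {el_base:,.0f} -> stressed {el_stress:,.0f}")
```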
Agenda • Survey Methodology & Participants • Background – Use of Ratings • Survey Findings • Conclusions and Implications
The industry, regulators and other stakeholders must continue a dialogue to address Basel II implementation issues • Recognition of different techniques for validating internal rating systems – no one “right” method • Increased debate and guidance with respect to validation of expert judgement based rating systems • Recognition of regional / cultural differences as they impact internal ratings and the consequences for validation • Guidance on requirements for the use of pooled data • Additional discussion and clarification with respect to stress testing requirements