Rational Calculation and Trust: A Comparative Institutional Analysis of Emerging Credit Card Markets in Transition Economies Akos Rona-Tas University of California, San Diego
Project • How do banks decide on creditworthiness of individual applicants? • 3 Central European countries: • Czech Republic, Hungary, Poland • 3 East European Countries: • Bulgaria, Russia, Ukraine • 3 Asian countries: • China, Vietnam, South Korea Total of 95 banks – most issue credit cards • Project site: www.socsci2.ucsd.edu/~aronatas/project/project.html
Different strategies of market initiation
Strategy 1: “Build from your corporate customer base”
• Technique: Offer credit cards to employees of corporate clients
• Advantage: Corporation screens customers and bears some of the risk
• Precondition: Stable corporate clients with a large employee base
Strategy 2: “Build from your retail customer base”
• Technique: Offer credit cards to good customers already using other services (deposits, checking, bill payment, etc.)
• Advantage: Information already exists on future card holders
• Precondition: Large retail customer base
Strategy 3: “Build a customer base by offering credit cards”
• Technique: Offer cards to customers from the street and use cards to recruit new customers and cross-sell them other services (deposits, checking, bill payment, etc.)
• Advantage: Pool is not restricted
• Precondition: Some form of pre-screening must be found (membership in high-status groups, e.g., Academy of Sciences, Association of Industrialists, etc.)
Prediction vs. Control
• Prediction: screening of applicants (credit assessment)
• Control: monitoring and sanctioning
Methods of assessing creditworthiness, by increasing formalization:
Expert judgment → Rules of thumb → Point system → Statistical scoring
Credit scoring
• It is designed to separate “Goods” from “Bads”
• Link function
• the discrete outcome variable (e.g., default/no default) is linked to a set of predictors combined with the help of a set of weights; this mapping is called the link function
• the weights are calculated on the basis of earlier applicants
• it turns the discrete binary outcome into a continuous probability distribution; the scores predict the place of the new applicant in this probability distribution
• Cut-off rule
• once the continuous scores are created, they must be translated back into decisions, i.e., into a discrete variable
Credit scoring schema:
Predictor information (Xki) → Link function (f) → Probability/Score (Y*i) → Cut-off rule → Decision (Yes/No)
Predictors: Income (X1i), Age (X2i), #Dependents (X3i), Education (X4i), Residence (X5i)
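The schema above can be sketched in code. A minimal illustration with a logistic link function and a cut-off rule; the predictor names, weights, and cut-off value are all hypothetical (in practice the weights would be estimated from earlier applicants):

```python
import math

# Hypothetical weights -- in a real scoring model these are estimated
# from the bank's records of earlier applicants.
WEIGHTS = {"intercept": -2.0, "income": 0.00004, "age": 0.03, "dependents": -0.4}

def score(applicant):
    """Logistic link function: map predictors to a continuous score in (0, 1)."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["income"] * applicant["income"]
         + WEIGHTS["age"] * applicant["age"]
         + WEIGHTS["dependents"] * applicant["dependents"])
    return 1.0 / (1.0 + math.exp(-z))

def decide(applicant, cutoff=0.5):
    """Cut-off rule: translate the continuous score back into a discrete decision."""
    return "approve" if score(applicant) >= cutoff else "reject"
```

The two functions mirror the two steps on the slide: the link function produces the continuous probability/score, and the cut-off rule turns it back into the discrete yes/no variable.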
Choice of outcome
• Banks should model profitability
• But banks model default (or even a missed payment), not factoring in collection
• Why?
• Banks take a moral stance: they supply a public good
• Banks cannot easily break down profitability to each loan
• Banks’ reputation depends on a low default rate
Choice of the link function
Yi = f(Xki)
1. Discriminant function
2. Regression-based functions
• Linear regression (OLS)
• Logistic regression (logit)
• Probit
3. Functions based on biological models
• Neural network models
• Genetic algorithms (GA)
4. Other functions
• Linear programming (LP)
• Classification trees or recursive partitioning algorithms (RPA)
• Nearest neighbors
Comparing link functions (cont.)
• Correct by all three methods:
• correct “Goods”: 588 (48%)
• correct “Bads”: 24 (2%)
• Error by at least one method: 613 (50%)
• Error by one or two (but not all three) methods: 464 (38%)
• Agreement by all three methods: 761 (62%)
• of those, correct: 612 (79%)
• of those, incorrect: 159 (21%)
No Best Statistical Technique
• 1. Models fit poorly, and model fit is mainly driven by the variation in the sample, not by the link function used. (The less variation, the better the fit.)
• 2. Even when two models fit equally well, they select different individuals.
• 3. No link function is overall superior to any other.
• 4. Some functions will do better at avoiding false negatives, others at avoiding false positives in a particular data set, but no model is overall better at either.
• 5. Agreement of multiple functions is no guarantee of correct prediction.
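Point 2 above can be made concrete with a toy example. Two hypothetical “link functions” (one scoring on income, one on age, with made-up thresholds and data) reach identical accuracy yet approve different individuals:

```python
# Hypothetical applicants: (income, age, actually_good)
data = [(50, 20, True), (30, 40, True), (20, 25, False), (60, 50, True)]

def model_a(income, age):
    """Toy model A: approve on income alone."""
    return income >= 40

def model_b(income, age):
    """Toy model B: approve on age alone."""
    return age >= 35

acc_a = sum(model_a(i, a) == g for i, a, g in data) / len(data)
acc_b = sum(model_b(i, a) == g for i, a, g in data) / len(data)

approved_a = {idx for idx, (i, a, g) in enumerate(data) if model_a(i, a)}
approved_b = {idx for idx, (i, a, g) in enumerate(data) if model_b(i, a)}
# Both models score 75% accuracy, yet they approve different applicants.
```

Equal fit, different selection: this is why picking a link function by fit alone does not settle who gets a card.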
Selection bias
• The sample must be representative of applicants,
• but the sample is representative of clients, not applicants;
• therefore the model tells us the probability of bad behavior given that the person got the loan,
• yet the relevant information is the probability of bad behavior given that the person applied.
• Paradox:
• to find a good model, banks need both bad and good debtors who got credit
• to make money, banks need to eliminate bad debtors
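A small simulation illustrates the bias. Assume (hypothetically) that default risk falls with income and the bank approves only higher-income applicants; the default rate observed among approved clients then understates the risk in the full applicant pool the model is supposed to describe:

```python
import random

random.seed(0)

# Hypothetical applicant pool: default probability falls with income.
applicants = []
for _ in range(10000):
    income = random.uniform(0, 100)
    defaulted = random.random() < (0.4 - 0.003 * income)
    applicants.append((income, defaulted))

# The bank only observes repayment behavior for approved applicants
# (here: a made-up income cut-off of 50).
approved = [(i, d) for i, d in applicants if i >= 50]

rate_all = sum(d for _, d in applicants) / len(applicants)
rate_approved = sum(d for _, d in approved) / len(approved)
# rate_approved < rate_all: a model trained on clients sees a safer
# population than the one that actually applies.
```

This is the paradox on the slide: the data needed for a good model (outcomes for risky applicants) are exactly the data a profit-seeking bank avoids generating.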
Misidentified models
• The model must include all relevant predictors
• What are the good predictors to use?
• How many predictors to use?
• Power of prediction vs. client convenience
• How to measure those variables?
• Categorization and validity of indicators
• Garbage in, garbage out
Learning
Countering selection bias:
• Allowing documented manual overrides
• Experimenting with random decisions
• Experimenting with cut-off rules
Countering misidentification:
• Analyzing reasons for manual overrides
• Targeting special groups and developing predictors for them (e.g., student cards, those working abroad, etc.)
• Adding publicly available data (aggregate or individual level, such as criminal records, business records, court records, residential records, telephone book, etc.)
• Data mining (e.g., using own records)
• Verification (random, triggered, or without exception)
Non-prediction Related Benefits of Scoring Models
• Cheaper, faster
• Legitimacy
• legal cover against rejected customers
• professional legitimacy
• Less skill is required from loan officers
• More control over loan officers and the lending process
Scoring as Control • Pre-application scoring • of potential customers to select likely applicants • Post-approval scoring • to predict problems with existing customers • Post-default scoring • to predict likelihood of collecting on debt
Information sharing among banks
• Purpose:
• Prediction
• Control
• Types of credit registries:
• Black list (only bad information)
• no record = good record/history
• bad incentive: to switch identity
• Full record (both good and bad information)
• informs banks about exposure and the context of bad information
• good record = good history
• incentive: to build history, keep identity
• Why is creating credit bureaus hard?
• How far back to remember?
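The incentive difference between the two registry types can be sketched as a toy data structure (the class and method names are hypothetical, purely for illustration):

```python
class BlackList:
    """Stores only bad information: the absence of a record looks 'good'."""
    def __init__(self):
        self.bad = set()

    def report_default(self, person):
        self.bad.add(person)

    def looks_good(self, person):
        # A fresh identity has no record, so it passes -- hence the
        # bad incentive to switch identity after a default.
        return person not in self.bad


class FullRecord:
    """Stores good and bad events: a good record must be built over time."""
    def __init__(self):
        self.history = {}

    def report(self, person, repaid):
        self.history.setdefault(person, []).append(repaid)

    def looks_good(self, person):
        # No history is NOT a good record -- hence the incentive to
        # keep one identity and build a history under it.
        hist = self.history.get(person, [])
        return bool(hist) and all(hist)
```

Under the black list, a defaulter who switches identity looks good again; under the full record, a fresh identity looks worse than a kept one with a clean history.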
Some lessons
• 1. Good strategies of market initiation are built more on control than on predicting applicants’ behavior, but prediction then becomes increasingly important.
• 2. Market expansion forces banks to shift to prediction and formalization.
• 3. The best strategy in market expansion is prudent learning. Bad decisions are necessary, and some of their cost should be counted as a market cost, like the cost of market research or introductory discounts.
• 4. Credit scoring is always used along with more flexible methods of assessment.