Data Mining
A special presentation for BCIS4660 (BA 356), Fall 2008
Dr. Nick Evangelopoulos, ITDS Dept.
Some slide material taken from: Groth, Han and Kamber, SAS Institute

Overview of this Presentation
• Introduction to Data Mining
• Regression/Logistic Regression
• Decision Trees
• SAS EM Demo: The Home Equity Loan Case
Important DM techniques not covered today:
• Neural Networks
• Market Basket Analysis
• Memory-Based Reasoning
• Web Link Analysis
Introduction to DM “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” (Sir Arthur Conan Doyle: Sherlock Holmes, "A Scandal in Bohemia")
What Is Data Mining?
• Data mining (knowledge discovery in databases): a process of identifying hidden patterns and relationships within data (Groth)
• Data mining: extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) information or patterns from data in large databases
Data Deluge
Hospital patient registries, electronic point-of-sale data, remote sensing images, tax returns, stock trades, OLTP, telephone calls, airline reservations, credit card charges, catalog orders, bank transactions
Multidisciplinary Scope
Data mining (KDD) draws on several overlapping fields: statistics, pattern recognition, neurocomputing, machine learning, AI, and databases.
Data Mining: A KDD Process
• Data mining: the core of the knowledge discovery process.
• The stages of the process: Databases → Data Cleaning & Data Integration → Data Warehouse → Task-relevant Data Selection → Data Mining → Pattern Evaluation → Knowledge
Data Mining and Business Intelligence
Increasing potential to support business decisions, from bottom to top:
• Data Sources: paper, files, information providers, database systems, OLTP (DBA)
• Data Warehouses / Data Marts: OLAP, MDA (DBA)
• Data Exploration: statistical analysis, querying and reporting (Data Analyst)
• Information Discovery: data mining (Data Analyst)
• Data Presentation: visualization techniques (Business Analyst)
• Making Decisions (End User / Manager)
Architecture of a Typical Data Mining System
From bottom to top:
• Databases and Data Warehouse, populated through data cleaning, data integration, and filtering
• Database or data warehouse server
• Data mining engine, supported by a knowledge base
• Pattern evaluation
• Graphical user interface
DATA MINING AT WORK: Detecting Credit Card Fraud
• Credit card companies want a way to monitor new transactions and detect those made on stolen credit cards. Their goal is to detect the fraud while it is taking place.
• Within a few weeks after each transaction, they will know which transactions were fraudulent and which were not, and they can then use this data to validate their fraud detection and prediction scheme.
DATA MINING AT WORK: Strategic Pricing Solutions at MCI
MCI now has a solution for making strategic pricing decisions, driving effective network analysis, enhancing segment reporting and creating data for sales leader compensation. Before implementing SAS, the process of inventorying MCI's thousands of network platforms and IT systems – determining what each one does, who runs them, how they help business and which products they support – was completely manual. The model created with SAS has helped MCI to catalog all that information and map the details to products, customer segments and business processes. "That's something everyone is excited about," says Leslie Mote, director of MCI corporate business analysis. "Looking at the cost of a system and what it relates to helps you see the revenue you're generating from particular products or customers. I can see what I'm doing better."
Our own example: The Home Equity Loan Case
• HMEQ Overview
• Determine who should be approved for a home equity loan.
• The target variable is a binary variable that indicates whether an applicant eventually defaulted on the loan.
• The input variables are variables such as the amount of the loan, the amount due on the existing mortgage, the value of the property, and the number of recent credit inquiries.
HMEQ case overview
• The consumer credit department of a bank wants to automate the decision-making process for approval of home equity lines of credit. To do this, they will follow the recommendations of the Equal Credit Opportunity Act to create an empirically derived and statistically sound credit scoring model. The model will be based on data collected from recent applicants granted credit through the current process of loan underwriting. The model will be built using predictive modeling tools, but it must be sufficiently interpretable to provide a reason for any adverse action (rejection).
• The HMEQ data set contains baseline and loan performance information for 5,960 recent home equity loans. The target (BAD) is a binary variable that indicates whether an applicant eventually defaulted or was seriously delinquent. This adverse outcome occurred in 1,189 cases (20%). For each applicant, 12 input variables were recorded.
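A quick way to get a feel for data like this is to load it and check the adverse-outcome rate. The sketch below uses a tiny made-up sample (the column names follow the slides, but the rows and values are hypothetical, not taken from the real 5,960-case HMEQ file):

```python
import pandas as pd

# Hypothetical miniature of the HMEQ data set. The real file has
# 5,960 rows and 12 inputs plus the binary target BAD; only a few
# columns and five invented rows are shown here for illustration.
hmeq = pd.DataFrame({
    "BAD":     [0, 1, 0, 0, 1],        # 1 = defaulted / seriously delinquent
    "LOAN":    [11000, 13000, 15000, 17000, 18000],
    "MORTDUE": [25860, 70053, 13500, 97800, 30548],
    "VALUE":   [39025, 68400, 16700, 112000, 40320],
    "DEBTINC": [34.8, None, 29.1, 41.0, 48.2],  # missing values occur
})

bad_rate = hmeq["BAD"].mean()          # proportion of adverse outcomes
print(f"Adverse outcome rate: {bad_rate:.0%}")
```

On the full data set the same computation would reproduce the 1,189/5,960 ≈ 20% rate quoted above.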
The HMEQ Loan process • An applicant comes forward with a specific property and a reason for the loan (Home-Improvement, Debt-Consolidation) • Background info related to job and credit history is collected • The loan gets approved or rejected • Upon approval, the Applicant becomes a Customer • Information related to how the loan is serviced is maintained, including the Status of the loan (Current, Delinquent, Defaulted, Paid-Off)
The HMEQ Loan Transactional Database
• Entity Relationship Diagram (ERD), Logical Design:
• An APPLICANT applies for an HMEQ loan (Loan, Reason, Approval, Date) using a PROPERTY, handled by an OFFICER
• Upon approval, the APPLICANT becomes a CUSTOMER with an ACCOUNT (Status, Balance, MonthlyPayment); the ACCOUNT has a payment HISTORY
HMEQ Transactional database: the relations
• Entity Relationship Diagram (ERD), Physical Design:
• HMEQLoanApplication (OFFICERID, APPLICANTID, PROPERTYID, LOAN, REASON, DATE, APPROVAL)
• Applicant (APPLICANTID, NAME, JOB, DEBTINC, YOJ, DEROG, CLNO, DELINQ, CLAGE, NINQ)
• Property (PROPERTYID, ADDRESS, VALUE, MORTDUE)
• Account (ACCOUNTID, CUSTOMERID, PROPERTYID, ADDRESS, BALANCE, MONTHLYPAYMENT, STATUS)
• History (HISTORYID, ACCOUNTID, PAYMENT, DATE)
• Customer (CUSTOMERID, APPLICANTID, NAME, ADDRESS)
• Officer (OFFICERID, OFFICERNAME, PHONE, FAX)
The HMEQ Loan Data Warehouse Design
• We have some slowly changing attributes:
• HMEQLoanApplication: Loan, Reason, Date
• Applicant: Job and credit-score-related attributes
• Property: Value, Mortgage, Balance
• An applicant may reapply for a loan; by then, some of these attributes may have changed.
• We therefore introduce surrogate "Key" attributes and make them the primary keys.
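The surrogate-key idea above can be sketched in a few lines. This is a hypothetical helper (the function name and row layout are invented for illustration): when a reapplying applicant's tracked attributes have changed, the dimension gets a new APPLICANTKEY while the natural APPLICANTID stays the same, preserving history.

```python
# Minimal sketch of surrogate-key handling for the slowly changing
# Applicant dimension. Rows: (APPLICANTKEY, APPLICANTID, JOB, DEBTINC).
applicant_dim = []
next_key = 1

def upsert(applicant_id, job, debtinc):
    """Add a new dimension row whenever tracked attributes change."""
    global next_key
    versions = [r for r in applicant_dim if r[1] == applicant_id]
    if versions and versions[-1][2:] == (job, debtinc):
        return versions[-1][0]        # unchanged attributes: reuse the key
    key = next_key
    next_key += 1
    applicant_dim.append((key, applicant_id, job, debtinc))
    return key

k1 = upsert("A42", "Office", 34.8)    # first application -> new key
k2 = upsert("A42", "Office", 34.8)    # same attributes   -> same key
k3 = upsert("A42", "Mgr", 29.5)       # reapplies changed -> new key
print(k1, k2, k3)
```

This corresponds to what dimensional modeling calls a Type 2 slowly changing dimension: fact rows reference the key that was current at the time of the event.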
The HMEQ LoanData Warehouse Design STAR 1 – Loan Application facts • Fact Table: HMEQApplicationFact • Dimensions: Applicant, Property, Officer, Time STAR 2 – Loan Payment facts • Fact Table: HMEQPaymentFact • Dimensions: Customer, Property, Account, Time
Two Star Schemas for HMEQ Loans
• HMEQApplicationFact (APPLICANTKEY, PROPERTYKEY, OFFICERKEY, TIMEKEY, LOAN, REASON, APPROVAL)
• HMEQPaymentFact (CUSTOMERKEY, PROPERTYKEY, ACCOUNTKEY, TIMEKEY, BALANCE, PAYMENT, STATUS)
• Dimension tables:
• Applicant (APPLICANTKEY, APPLICANTID, NAME, JOB, DEBTINC, YOJ, DEROG, CLNO, DELINQ, CLAGE, NINQ)
• Property (PROPERTYKEY, PROPERTYID, ADDRESS, VALUE, MORTDUE)
• Account (ACCOUNTKEY, LOAN, MATURITYDATE, MONTHLYPAYMENT)
• Customer (CUSTOMERKEY, CUSTOMERID, APPLICANTID, NAME, ADDRESS)
• Officer (OFFICERKEY, OFFICERID, OFFICERNAME, PHONE, FAX)
• Time (TIMEKEY, DATE, MONTH, YEAR)
The HMEQ Loan DW:Questions asked by management • How many applications were filed each month during the last year? What percentage of them were approved each month? • How has the monthly average loan amount been fluctuating during the last year? Is there a trend? • Which customers were delinquent in their loan payment during the month of September? • How many loans have defaulted each month during the last year? Is there an increasing or decreasing trend? • How many defaulting loans were approved last year by each loan officer? Who are the officers with the largest number of defaulting loans?
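The first management question above (monthly application counts and approval rates) can be answered with a single join-and-aggregate query against the application star. The sketch below builds a throwaway in-memory database; the table and column names follow the slides, but the sample rows are invented:

```python
import sqlite3

# In-memory sketch of the application star schema (assumed sample data).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Time (TIMEKEY INTEGER PRIMARY KEY,
                   DATE TEXT, MONTH INTEGER, YEAR INTEGER);
CREATE TABLE HMEQApplicationFact (
    APPLICANTKEY INTEGER, PROPERTYKEY INTEGER, OFFICERKEY INTEGER,
    TIMEKEY INTEGER, LOAN REAL, REASON TEXT, APPROVAL INTEGER);
INSERT INTO Time VALUES (1,'2008-09-15',9,2008),(2,'2008-10-02',10,2008);
INSERT INTO HMEQApplicationFact VALUES
    (1,1,1,1,11000,'DebtCon',1),
    (2,2,1,1,15000,'HomeImp',0),
    (3,3,2,2, 8000,'DebtCon',1);
""")

# Applications per month and the share approved (APPROVAL is 0/1,
# so its average is the approval rate).
rows = con.execute("""
    SELECT t.YEAR, t.MONTH, COUNT(*) AS applications,
           AVG(f.APPROVAL) AS approval_rate
    FROM HMEQApplicationFact f JOIN Time t ON f.TIMEKEY = t.TIMEKEY
    GROUP BY t.YEAR, t.MONTH
    ORDER BY t.YEAR, t.MONTH
""").fetchall()
for year, month, n, rate in rows:
    print(f"{year}-{month:02d}: {n} applications, {rate:.0%} approved")
```

The other questions on the slide follow the same pattern: join a fact table to the relevant dimensions, then group by the attribute of interest (officer, customer, status, month).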
The HMEQ Loan DW:Some more involved questions • Are there any patterns suggesting which applicants are more likely to default on their loan after it is approved? • Can we relate loan defaults to applicant job and credit history? Can we estimate probabilities to default based on applicant attributes at the time of application? Are there applicant segments with higher probability? • Can we look at relevant data and build a predictive model that will estimate such probability to default on the HMEQ loan? If we make such a model part of our business policy, can we decrease the percentage of loans that eventually default by applying more stringent loan approval criteria?
Selecting Task-relevant Attributes
• From the two star schemas, select the attributes relevant to the modeling task: the target (eventual loan default) plus the 12 inputs recorded at application time, LOAN, MORTDUE, VALUE, REASON, JOB, YOJ, DEROG, DELINQ, CLAGE, NINQ, CLNO, and DEBTINC.
HMEQ: Modeling Goal
• The credit scoring model should compute the probability that a given loan applicant will default on loan repayment. A threshold is then selected such that all applicants whose probability of default exceeds the threshold are recommended for rejection.
• Using the HMEQ task-relevant data file, two competing models will be built: a Logistic Regression model and a Decision Tree.
• Model assessment will allow us to select the better of the two alternative models.
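The thresholding step described above is simple enough to spell out. This is a sketch with invented names and scores (`recommend`, the applicant IDs, and the 0.5 cutoff are all hypothetical; in practice the cutoff would be tuned during model assessment):

```python
# Turn a model's estimated default probability into a recommendation.
def recommend(prob_default, threshold=0.5):
    """Recommend rejection when the estimated probability of default
    exceeds the chosen threshold; otherwise recommend approval."""
    return "reject" if prob_default > threshold else "approve"

# Hypothetical scored applicants (probabilities from a fitted model).
scored = {"A101": 0.12, "A102": 0.63, "A103": 0.48}
decisions = {app: recommend(p) for app, p in scored.items()}
print(decisions)
```

Raising the threshold approves more applicants (and accepts more defaults); lowering it rejects more. That trade-off is exactly what the model assessment step has to balance.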
Predictive Modeling Inputs Target ... ... ... ... ... ... Cases ... ... ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...
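The logit and its inverse can be checked numerically. A minimal sketch (the function names `logit` and `g_inv` are ours, matching the g⁻¹ notation on the slide):

```python
import math

def logit(p):
    """Log-odds: log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def g_inv(x):
    """Inverse link (logistic / sigmoid): maps log-odds back to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

p = 0.8
x = logit(p)                      # log(0.8 / 0.2) = log(4)
print(f"p = {p}, log(odds) = {x:.4f}, recovered p = {g_inv(x):.4f}")
```

Whatever value the linear predictor w0 + w1x1 + … + wpxp takes, g⁻¹ squashes it into a valid probability, which is why logistic regression suits a binary target like BAD.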
Decision Trees: Divide and Conquer the HMEQ data
Root node: n = 5,000, 20% BAD. Split on Debt-to-Income Ratio < 45: "yes" branch n = 3,350, 10% BAD; "no" branch n = 1,650, 41% BAD.
The tree is fitted to the data by recursive partitioning. Partitioning refers to segmenting the data into subgroups that are as homogeneous as possible with respect to the target. In this case, the binary split (Debt-to-Income Ratio < 45) was chosen. The 5,000 cases were split into two groups, one with a 10% BAD rate and the other with a 41% BAD rate. The method is recursive because each subgroup results from splitting a subgroup from a previous split. Thus, the 3,350 cases in the left child node and the 1,650 cases in the right child node are split again in similar fashion.
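One common way to score how good such a split is (an assumption here; the slides do not say which criterion their tree uses) is Gini impurity: a pure node scores 0, a 50/50 node scores 0.5. Plugging in the counts from the example:

```python
# Gini impurity for a binary node with BAD proportion p_bad.
def gini(p_bad):
    return 2.0 * p_bad * (1.0 - p_bad)

parent = gini(0.20)                          # root: 5,000 cases, 20% BAD
left, right = gini(0.10), gini(0.41)         # children: 10% and 41% BAD
# Weighted child impurity, using the child node sizes from the slide.
weighted = (3350 * left + 1650 * right) / 5000
print(f"impurity: parent {parent:.3f} -> after split {weighted:.3f}")
```

Because the weighted child impurity is lower than the parent's, the split is an improvement; recursive partitioning keeps choosing, at each node, the split that most reduces this kind of impurity measure.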
Introducing SAS Enterprise Miner v.5.3
• Implemented Methodology: Sample-Explore-Modify-Model-Assess (SEMMA)
• Available Modeling Tools: Logistic Regression, Decision Trees
• Analysis diagram: (shown in the demo)
Summary
• Data Mining provides new opportunities to discover previously unknown relationships
• SAS EM employs the SEMMA Methodology to extract "knowledge"
• Alternative modeling approaches for knowledge discovery include:
• Regression/Logistic Regression
• Decision Trees