Adventures in Segmentation: Using Applied Data Mining to Add Business Value • Drew Minkin
Agenda • The Value Add of Data Mining • Segmentation 101 • Segmentation Tools in Analysis Services • Methodology for Segmentation Analysis • Building Confidence in your Model
Value Add - What is Data Mining? • Statistics for the Computer Age • An evolution of, not a revolution against, traditional statistics • Statistics enriched with the brute-force capabilities of modern computing • Associated with industrial-sized data sets
Value Add - Data Mining in the BI Spectrum • [Chart] The SQL Server 2008 BI spectrum runs from static reports to ad hoc reports, OLAP, and data mining; relative business value and business knowledge increase as the techniques move from easy to difficult
Value Add – Data Mining and Democracy • VoterVault • Dates from the mid-1990s • Used for a massive get-out-the-vote drive aimed at those expected to vote Republican • Demzilla • Names typically carry 200 to 400 information items
Value Add – The Promise of Data Mining • “The quiet statisticians have changed our world; not by discovering new facts or technical developments, but by changing the ways that we reason, experiment and form our opinions.” -- Ian Hacking
Value Add – Operational Benefits • Improved efficiency • Inventory management • Risk management
Value Add – Strategic Benefits • The Bottom Line • Increased agility • Brand building • Differentiate message • “Relationship” building
Value Add – Tactical Benefits • Reduction of costs • Transactional leakage • Outlier analysis
Value Add - Customer Attrition Analysis • Identify a group of customers who are expected to attrite • Conduct marketing campaigns to change their behavior in the desired direction • If the campaigns change their behavior, the attrition rate falls
Value Add - Target Result • Slow attriters: customers who slowly pay down their outstanding balance until they become inactive • Fast attriters: customers who quickly pay down their balance and either let the account lapse or close it by phone or in writing
Value Add - Sample Applications • Credit models • Retention models • Elasticity models • Cross-sell models • Lifetime Value models • Agent/agency monitoring • Target marketing • Fraud detection
Segmentation – Machine Learning • Unsupervised learning • Finds associations and patterns among many entities • No target information • Market basket analysis (“diapers and beer”) • Supervised learning • Predicts the value of a target variable from well-defined predictive variables • Credit / non-credit scoring engines
Segmentation – Sample Data Sources • Data Warehouse: credit card data warehouse containing about 200 product-specific fields • Third-Party Data: a set of account-related demographic and credit bureau information • Segmentation Files: account-related segmentation values based on our client's segmentation scheme, which combines risk, profitability, and external potential • Payment Database: stores all checks processed and can categorize the source of each check
Methodology – Acquiring Raw Data • Research/Evaluate possible data sources • Availability • Hit rate • Implementability • Cost-effectiveness • Extract/purchase data • Check data for quality (QA) • At this stage, data is still in a “raw” form • Often start with voluminous transactional data • Much of the data mining process is “messy”
Methodology – Goals of Refinement • Reflect data changes over time • Recognize and remove statistically insignificant fields • Define and introduce the "target" field • Allow for second-stage preprocessing and statistical analysis
Methodology - Scoring Engines • A scoring engine is a formula that classifies or separates policies (or risks, accounts, agents…) into • profitable vs. unprofitable • retaining vs. non-retaining… • A (non-)linear function f() of several predictive variables • Produces a continuous range of scores: score = f(X1, X2, …, XN)
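A minimal sketch of such a scoring engine, assuming a logistic functional form and made-up coefficients purely for illustration; a real engine would be fit to historical data.

```python
import numpy as np

def score(x, weights, bias):
    """Logistic scoring function: maps predictive variables X1..XN
    to a continuous score between 0 and 1 (e.g. a retention probability)."""
    z = np.dot(x, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical account: [balance utilization, months on book, payments per year]
account = np.array([0.85, 24, 11])
weights = np.array([-2.1, 0.03, 0.15])   # illustrative coefficients only
print(round(score(account, weights, bias=-0.5), 3))
```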
Methodology – Deployed Model • [Diagram] Training data (DB data, client data, application logs) is fed to the DM engine to train the mining model; each new entry or transaction ("just one row") is then passed through the DM engine and the trained mining model to produce predicted data
Methodology - Testing • Randomly divide data into 3 pieces • Training data • Test data • Validation data • Use Training data to fit models • Score the Test data to create a lift curve • Perform the train/test steps iteratively until you have a model you’re happy with • During this iterative phase, validation data is set aside in a “lock box” • Score the Validation data and produce a lift curve • Unbiased estimate of future performance
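A minimal sketch of the three-way split and a top-decile lift measure, assuming scikit-learn and a synthetic dataset; the validation "lock box" is scored only once, after the train/test iterations are finished.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)

# Three-way split: 60% train, 20% test, 20% validation ("lock box")
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_valid, y_test, y_valid = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def lift(model, X, y, top_fraction=0.1):
    """Lift in the top decile: response rate among the highest-scored
    cases divided by the overall response rate."""
    scores = model.predict_proba(X)[:, 1]
    top_n = int(len(y) * top_fraction)
    top_idx = np.argsort(scores)[::-1][:top_n]
    return y[top_idx].mean() / y.mean()

print("Test lift @ top decile:", round(lift(model, X_test, y_test), 2))
# Only after the model is final: score the lock-box validation set once
print("Validation lift @ top decile:", round(lift(model, X_valid, y_valid), 2))
```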
Methodology - Multivariate Analysis • Examine correlations among the variables • Weed out redundant, weak, poorly distributed variables • Model design • Build candidate models • Regression/GLM • Decision Trees/MARS • Neural Networks • Select final model
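A small sketch of the correlation screen described above, using pandas on a hypothetical dataset with a deliberately redundant field; the 0.95 threshold is an assumption, not a rule from the deck.

```python
import numpy as np
import pandas as pd

# Hypothetical modelling dataset with a nearly redundant field
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "balance": rng.gamma(2.0, 500, 1000),
    "utilization": rng.uniform(0, 1, 1000),
})
df["balance_dupe"] = df["balance"] * 1.01 + rng.normal(0, 5, 1000)

# Flag variables that are highly correlated with an earlier variable
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [col for col in upper.columns if (upper[col] > 0.95).any()]
print("Candidates to drop:", redundant)   # ['balance_dupe']
```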
Data Mining - Algorithm Matrix • [Matrix] maps tasks to algorithms • Tasks: Segmentation, Advanced Data Exploration, Classification, Forecasting, Association, Text Analysis, Estimation • Algorithms: Association Rules, Clustering, Decision Trees, Linear Regression, Logistic Regression, Naïve Bayes, Neural Nets, Sequence Clustering, Time Series
Data Mining - SQL-Server Algorithms • Decision Trees • Clustering • Time Series • Neural Net • Sequence Clustering • Association • Naïve Bayes • Linear and Logistic Regression
Data Mining - Blueprint for Toolset • Offline and online modes • Online: everything you do is saved directly on the server • Offline: requires server admin privileges to deploy • Define Data Sources and Data Source Views • Define Mining Structures and Models • Train (process) the structures • Verify accuracy • Explore and visualise • Perform predictions • Deploy for other users • Regularly update and re-validate the model
Data Mining - Cross-Validation • SQL Server 2008 • X iterations of retraining and retesting the model • Results from each test statistically collated • Model deemed accurate (and perhaps reliable) when variance is low and results meet expectations
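SQL Server 2008 exposes cross-validation directly in Analysis Services; as a conceptual stand-in, here is a minimal scikit-learn sketch of the same retrain/retest loop on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=12, random_state=1)

# 10 iterations of retraining and retesting; collate the per-fold accuracies
scores = cross_val_score(DecisionTreeClassifier(max_depth=5, random_state=1), X, y, cv=10)
print("Mean accuracy:", scores.mean().round(3), "Std dev:", scores.std().round(3))
# Low variance across folds plus an acceptable mean builds confidence in the model
```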
Data Mining - Microsoft Decision Trees • Use for: • Classification: churn and risk analysis • Regression: predict profit or income • Association analysis based on multiple predictable variables • Builds one tree for each predictable attribute • Fast
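A minimal churn-style classification sketch with a single decision tree, using scikit-learn and made-up feature names rather than the Microsoft algorithm; the printed rules are the same kind of splits the SSAS tree viewer exposes.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=2)
feature_names = ["tenure_months", "late_payments", "balance_util", "num_products"]

tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, y)
# Print the learned splits, the rule-like structure a tree model exposes
print(export_text(tree, feature_names=feature_names))
```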
Data Mining - Microsoft Naïve Bayes • Use for: • Classification • Association with multiple predictable attributes • Assumes all inputs are independent • Simple classification technique based on conditional probability
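A small Naïve Bayes sketch on synthetic data, assuming scikit-learn; it shows class probabilities computed from per-input conditional probabilities under the independence assumption.

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=5, random_state=3)
nb = GaussianNB().fit(X, y)

# Class probabilities for one new case, assuming all inputs are independent
print(nb.predict_proba(X[:1]).round(3))
```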
Data Mining - Clustering • Applied to • Segmentation: customer grouping, mailing campaigns • Also: classification and regression • Anomaly detection • Handles discrete and continuous attributes • Note: "Predict Only" attributes are not used for clustering
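A minimal customer-segmentation sketch with k-means on a hypothetical profile (age, income, purchases per year), assuming scikit-learn; the Microsoft Clustering algorithm defaults to EM rather than plain k-means, so this is only an analogue.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical customer profile: [age, income, purchases per year]
customers = np.column_stack([
    rng.normal(45, 12, 600),
    rng.lognormal(10.5, 0.4, 600),
    rng.poisson(8, 600),
])

segments = KMeans(n_clusters=4, n_init=10, random_state=4).fit_predict(
    StandardScaler().fit_transform(customers))
print("Customers per segment:", np.bincount(segments))
```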
Data Mining - Neural Network • [Diagram] Input layer (Age, Education, Sex, Income) → hidden layers → output layer (Loyalty) • Applied to • Classification • Regression • Great for finding complicated relationships among attributes • Difficult to interpret results • Trained with a gradient descent method
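A minimal sketch mirroring the diagram's inputs and output, using scikit-learn's MLPClassifier on made-up data; the "loyalty" target and the (8, 4) hidden-layer sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 800
# Inputs mirroring the diagram: age, education (years), sex, income
X = np.column_stack([
    rng.integers(18, 80, n),
    rng.integers(8, 20, n),
    rng.integers(0, 2, n),
    rng.lognormal(10.5, 0.5, n),
])
# Made-up loyalty target with a nonlinear dependence on age and income
y = ((X[:, 0] > 40) & (X[:, 3] > np.median(X[:, 3]))).astype(int)

Xs = StandardScaler().fit_transform(X)
net = MLPClassifier(hidden_layer_sizes=(8, 4), max_iter=2000, random_state=5).fit(Xs, y)
print("Training accuracy:", round(net.score(Xs, y), 3))
```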
Data Mining - Sequence Clustering • Analysis of: • Customer behaviour • Transaction patterns • Click stream • Customer segmentation • Sequence prediction • Mix of clustering and sequence technologies • Groups individuals based on their profiles including sequence data
Data Mining - What is a Sequence? • To discover the most likely beginnings, paths, and ends of a customer's journey through your domain, consider using: • Association Rules • Sequence Clustering
Data Mining – Minor Introduction to DMX • Your "if" statement will test the value returned from a prediction – typically a predicted probability or outcome • Steps: • Build a case (set of attributes) representing the transaction you are processing at the moment • E.g. a customer's shopping basket plus their shipping info • Execute a "SELECT ... PREDICTION JOIN" against the pre-loaded mining model • Read the returned attributes, especially the case probability for some outcome • E.g. probability > 50% that "TransactionOutcome=ShippingDeliveryFailure" • Your application has just made an intelligent decision! • Remember to refresh and retest the model regularly – daily?
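A minimal sketch that assembles the case and the DMX PREDICTION JOIN described above; the model name, column names, outcome state, and 50% threshold are hypothetical, and the statement would actually be submitted through an ADOMD.NET or XMLA client, which is stubbed out here.

```python
def build_dmx(case: dict, model: str = "[TransactionOutcomeModel]") -> str:
    """Assemble a DMX PREDICTION JOIN for a single 'just one row' case.
    Model and column names here are hypothetical."""
    cols = ", ".join(
        f"'{v}' AS [{k}]" if isinstance(v, str) else f"{v} AS [{k}]"
        for k, v in case.items()
    )
    return (
        "SELECT Predict([TransactionOutcome]), "
        "PredictProbability([TransactionOutcome], 'ShippingDeliveryFailure') "
        f"FROM {model} NATURAL PREDICTION JOIN (SELECT {cols}) AS t"
    )

# Case built from the current transaction (shopping basket + shipping info)
case = {"BasketValue": 129.90, "ItemCount": 3, "ShippingCountry": "DE"}
print(build_dmx(case))           # submit via an ADOMD.NET / XMLA client (not shown)

failure_probability = 0.62       # value the mining model would return (stubbed)
if failure_probability > 0.5:    # the "if" statement from the slide
    print("Route the order for manual address verification")
```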
Building Confidence - Model Design • Which target variable to use? • Frequency & severity • Loss ratio, other profitability measures • Binary targets: defection, cross-sell • …etc • How to prepare the target variable? • Period - 1-year or multi-year? • Losses evaluated as of when? • Cap large losses? • Include catastrophe losses? • How / whether to re-rate or adjust premium? • What counts as a "retaining" policy? • …etc
Building Confidence - Improving Models • Approaches • Change the algorithm • Change model parameters • Change inputs/outputs to avoid bad correlations • Clean the data set • Perhaps there are no good patterns in the data • Verify statistics (Data Explorer)
Building Confidence – Alternate Methods • Capping • Outliers are capped to reduce their influence and produce better estimates • Binning • Small and insignificant levels of character variables are regrouped • Box-Cox Transformations • Commonly included power transformations, especially the square root and logarithm • Johnson Transformations • Performed on numeric variables to make them more ‘normal’ • Weight of Evidence • Created for character variables and binned numeric variables
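A small sketch of capping, Box-Cox, and binning on a hypothetical heavy-tailed loss variable, assuming NumPy and SciPy; the 99th-percentile cap and quartile bins are illustrative choices, not recommendations from the deck.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
losses = rng.lognormal(mean=8, sigma=1.5, size=1000)   # heavy-tailed, with outliers

# Capping: limit extreme values to the 99th percentile
capped = np.minimum(losses, np.percentile(losses, 99))

# Box-Cox: find the power transform that makes the variable most "normal"
transformed, lam = stats.boxcox(capped)
print("Box-Cox lambda:", round(lam, 2))   # near 0 for data this skewed, i.e. roughly a log

# Binning: regroup a numeric variable into a handful of levels
bins = np.quantile(capped, [0, 0.25, 0.5, 0.75, 1.0])
levels = np.digitize(capped, bins[1:-1])
print("Cases per bin:", np.bincount(levels))
```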
Building Confidence - Confusion Matrix • 1241 correct predictions (516 + 725) • 35 incorrect predictions (25 + 10) • The model scored 1276 cases (1241 + 35) • The error rate is 35/1276 = 0.0274 • The accuracy rate is 1241/1276 = 0.9726
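A tiny sketch reproducing the arithmetic above:

```python
correct = 516 + 725          # predictions on the diagonal of the confusion matrix
incorrect = 25 + 10          # off-diagonal predictions
total = correct + incorrect  # 1276 scored cases

error_rate = incorrect / total
accuracy = correct / total
print(f"Error rate: {error_rate:.4f}")   # 0.0274
print(f"Accuracy:   {accuracy:.4f}")     # 0.9726
```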
Building Confidence – Warning Signs • “All models are wrong, but some are useful.” -- George Box
Building Confidence – Li's Revenge • Extrapolation: applying models from unrelated disciplines • Equality: the real world contains a surprising amount of uncertainty, fuzziness, and precariousness • Copula: binding probabilities together can mask errors • Distribution functions: small miscalculations can make coincidences look like certainties • Gamma: human behavior is difficult to quantify as a linear parameter