Topics in Business Intelligence K-NN & Naive Bayes – GROUP 1 Isabel van der Lijke, Nathan Bok, Gökhan Korkmaz
Introduction K-NN • k-NN Classifier (Categorical Outcome) • Determining Neighbors • Classification Rule • Example: Riding Mowers • Choosing k • Setting the Cutoff Value • Advantages and shortcomings of k-NN algorithms
Introduction Naive Bayes • Basic Classification Procedure • Cutoff Probability Method • Conditional Probability • Naive Bayes • Advantages and shortcomings of the naive Bayes classifier
Simple Case Application • Depression
Simple Case Application • Fruits Example: P(Banana) = 500 / 1000 = 0.5, so P(Not Banana) = 1 − 0.5 = 0.5. For a new fruit, compute the probability of each class and assign the fruit to the most probable one.
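The fruit example above can be sketched in code. This is a minimal illustration, not the presenters' actual computation: the counts for the "long" and "yellow" features are hypothetical, and only serve to show how naive Bayes multiplies the class prior by per-feature conditional probabilities under the independence assumption.

```python
# Hypothetical fruit counts: 500 bananas out of 1000 fruits total.
total_fruits = 1000
bananas = 500

p_banana = bananas / total_fruits   # prior P(Banana) = 0.5
p_not_banana = 1 - p_banana         # P(Not Banana) = 0.5

# Naive Bayes scores a new fruit with features (long, yellow) as
#   P(Banana | long, yellow) ∝ P(Banana) * P(long | Banana) * P(yellow | Banana),
# treating the features as independent given the class.
p_long_given_banana = 400 / 500     # assumed count: 400 of 500 bananas are long
p_yellow_given_banana = 350 / 500   # assumed count: 350 of 500 bananas are yellow

score_banana = p_banana * p_long_given_banana * p_yellow_given_banana
print(round(score_banana, 3))       # 0.28
```

The same score is computed for every class; the new fruit is assigned to the class with the highest score.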
Real-Life Application Naive Bayes • Medical Data Classification with the Naive Bayes Approach • Introduction • Requirements for systems dealing with medical data • An empirical comparison • Tables • Conclusion
Table 3: Comparative analysis based on area under the ROC curve (AUC)
Real-Life Application K-NN • Used to help health care professionals diagnose heart disease. • Useful for pattern recognition and classification. • Euclidean distance: d(x, y) = √Σᵢ(xᵢ − yᵢ)² • Data is often normalized because the variables come in different formats and units.
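The classification step described above can be sketched as follows. This is a generic k-NN illustration with made-up, already-normalized feature vectors and labels, not the actual medical data from the study: the query point's label is decided by a majority vote among its k nearest training points under Euclidean distance.

```python
import math
from collections import Counter

def euclidean(x, y):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbors = sorted(train, key=lambda pair: euclidean(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Tiny illustrative dataset (hypothetical values, already normalized):
train = [([0.0, 0.0], "healthy"), ([0.2, 0.1], "healthy"),
         ([1.0, 1.1], "disease"), ([0.9, 1.0], "disease")]
print(knn_classify(train, [0.1, 0.1], k=3))  # healthy
```

Because the distance sums squared differences across all variables, a variable measured on a large scale would dominate the vote, which is why the slides note that the data is usually normalized first.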
Case Study • “Our customer is a Dutch charity organization that wants to be able to classify its supporters into donators and non-donators. The non-donators are sent a single marketing mail a year, whereas the donators receive multiple ones (up to 4).” • Who are the donators? • Who are the non-donators? • Application of K-NN & Naive Bayes to training and test dataset. • 4000 customers. • SPSS, Excel, XLMiner
Clean-up • No missing values • 1-dimensional outliers removed by sorting on annual and average donation • 2-dimensional outliers removed via scatterplot
Variables kept • Average donation • Frequency of response • Median time of response • Time as client
Variables removed • Annual donation • Last donation • Time since last response
Normalization of scores into z-scores. • Nominal categorization of the data • Classification through z-score percentiles and by manually grouping values within the variables.
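The z-score normalization step can be sketched as below. The donation amounts are hypothetical placeholders, not values from the charity's dataset; the point is only that each raw value is rescaled to (value − mean) / standard deviation, giving every variable mean 0 and standard deviation 1.

```python
import math

def z_scores(values):
    """Convert raw values to z-scores: (x - mean) / std."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

# Hypothetical average-donation amounts (in euros):
avg_donation = [10.0, 20.0, 30.0, 40.0]
print([round(z, 2) for z in z_scores(avg_donation)])  # [-1.34, -0.45, 0.45, 1.34]
```

After this step, variables measured in euros, days, and counts are all on a comparable scale, so no single variable dominates the k-NN distance.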
Analysis of Case Study – K-NN • XLMiner: partition the data • Models created: • M1 = Zavgdon & Zfrqres • M2 = ZtimeCl, Zfrqres & Zavgdon • M3 = Zmedtor, Zfrqres & Zavgdon • M4 = ZtimeCl, Zfrqres, Zmedtor & Zavgdon
Choosing a Model for K-NN • Accuracy: proportion of correctly classified instances. • Error rate: 1 − accuracy. • Sensitivity: proportion of actual positives correctly identified as positives by the classifier. • Specificity: proportion of actual negatives correctly identified as negatives.
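The four model-selection metrics above follow directly from a confusion matrix. A minimal sketch, using hypothetical validation counts rather than the study's actual results:

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, error rate, sensitivity, specificity from confusion counts.

    tp/fp = true/false positives, tn/fn = true/false negatives.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    return {
        "accuracy": accuracy,
        "error_rate": 1 - accuracy,
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Hypothetical validation counts for one candidate model:
m = metrics(tp=300, fp=100, tn=500, fn=100)
print(m["accuracy"], m["sensitivity"], round(m["specificity"], 2))  # 0.8 0.75 0.83
```

Comparing these four numbers across M1–M4 is what drives the model choice: accuracy alone can mislead when donators and non-donators are imbalanced, so sensitivity and specificity are reported alongside it.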
Analysis of the Case Study – Naive Bayes • M1 = Cfrqres & Cavgdon • M2 = Cfrqresp, Cavgdon & Cmedtor