ROC curve estimation
Index • Introduction to ROC • ROC curve • Area under ROC curve • Visualization using ROC curve
ROC curve • ROC stands for Receiver Operating Characteristic. • ROC curves are widely used in biomedical applications such as radiology and imaging. • They are also an important tool for assessing classifiers in machine learning.
Example situation • Consider a diagnostic test for a disease. • The test has two possible outcomes: positive or negative. • This setting is used on the next slides to explain the notation behind ROC curves.
Data distribution available • [Figure: two overlapping distributions of the test result, one for patients without the disease and one for patients with the disease]
Threshold • [Figure: the same two distributions with a decision threshold marked on the test result; patients below the threshold are called "negative", patients above it are called "positive"]
Some definitions • True positives: patients with the disease who are called "positive". • False positives: patients without the disease who are called "positive". • True negatives: patients without the disease who are called "negative". • False negatives: patients with the disease who are called "negative".
Confusion Matrix • A confusion matrix is a matrix with two rows and two columns. • With the orientation used here (call the matrix CMat), rows index the predicted class and columns the actual class: • CMat[1][1] = True Positives, CMat[1][2] = False Positives • CMat[2][1] = False Negatives, CMat[2][2] = True Negatives • A minimal code sketch follows.
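As an illustration (not part of the original slides), here is a minimal Python sketch of this layout. The 1/0 label encoding and the example data are assumptions, and Python's 0-based indexing shifts the CMat[1][1] notation above to cmat[0][0]:

```python
# A minimal sketch (assumed encoding: 1 = positive, 0 = negative).
# Python indexing is 0-based, so CMat[1][1] above becomes cmat[0][0] here.

def confusion_matrix(actual, predicted):
    cmat = [[0, 0], [0, 0]]  # [[TP, FP], [FN, TN]]
    for a, p in zip(actual, predicted):
        if p == 1 and a == 1:
            cmat[0][0] += 1  # true positive
        elif p == 1 and a == 0:
            cmat[0][1] += 1  # false positive
        elif p == 0 and a == 1:
            cmat[1][0] += 1  # false negative
        else:
            cmat[1][1] += 1  # true negative
    return cmat

print(confusion_matrix([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # [[2, 1], [1, 1]]
```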
2-class Confusion Matrix • Reduce the four numbers to two rates: • true positive rate TPR = #TP / #P • false positive rate FPR = #FP / #N • where #P and #N are the numbers of actually positive and actually negative cases. • These rates are independent of the class ratio.
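A small sketch of the two rates, reusing the counts from the matrix example above (the values are illustrative):

```python
# TPR and FPR from the four counts; #P = TP + FN, #N = FP + TN.
def rates(tp, fp, fn, tn):
    tpr = tp / (tp + fn)  # true positive rate = #TP / #P
    fpr = fp / (fp + tn)  # false positive rate = #FP / #N
    return tpr, fpr

print(rates(tp=2, fp=1, fn=1, tn=1))  # (0.666..., 0.5)
```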
Comparing classifiers using the Confusion Matrix • Classifier 1: TPR = 0.4, FPR = 0.3 • Classifier 2: TPR = 0.7, FPR = 0.5 • Classifier 3: TPR = 0.6, FPR = 0.2 • Classifier 3 dominates Classifier 1 (higher TPR and lower FPR); Classifier 2 trades a higher FPR for a higher TPR, so the better choice depends on the relative cost of the two error types.
Interpretations from the Confusion Matrix • The following metrics can be calculated from the confusion matrix and used to evaluate a classifier: • Accuracy = (TP + TN) / (TP + TN + FP + FN) • Precision = TP / (TP + FP) • Recall = TP / (TP + FN) • F-Score = 2 * Precision * Recall / (Precision + Recall)
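A brief sketch computing these four metrics from raw counts (the example counts are made up; zero denominators are not handled):

```python
# Accuracy, precision, recall and F-score from the confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

print(metrics(tp=2, fp=1, fn=1, tn=1))  # (0.6, 0.666..., 0.666..., 0.666...)
```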
ROC curve • [Figure: the ROC curve plots the True Positive Rate (sensitivity) on the y-axis against the False Positive Rate (1 - specificity) on the x-axis, each ranging from 0% to 100%] • Each choice of threshold yields one (FPR, TPR) point; sweeping the threshold traces out the curve.
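To make the threshold picture concrete, here is a minimal Python sketch that traces ROC points by sweeping the threshold over continuous test scores; the scores and labels are made up:

```python
# Trace ROC points by sweeping the decision threshold over the scores.
def roc_points(scores, labels):
    pos = sum(labels)        # number of actual positives
    neg = len(labels) - pos  # number of actual negatives
    points = []
    for thr in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))  # (FPR, TPR) at this threshold
    return sorted(points)

# Hypothetical scores (higher = more likely diseased) and true labels.
print(roc_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1]))
```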
ROC curve comparison • [Figure: two ROC curves side by side; a good test bows up toward the top-left corner, while a poor test stays close to the 45° diagonal]
Area under ROC curve (AUC) • Overall measure of test performance. • Two tests can be compared via the difference between their (estimated) AUCs. • For continuous data, AUC is equivalent to the Mann-Whitney U-statistic (a nonparametric test of difference in location between two populations): it equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. • Widely used to summarize the accuracy of a classifier in machine learning.
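The Mann-Whitney connection gives a direct way to compute AUC without drawing the curve: over all positive-negative pairs, count how often the positive scores higher, with ties counting one half. A sketch with illustrative data:

```python
# AUC as the Mann-Whitney statistic: the probability that a randomly
# chosen positive scores above a randomly chosen negative (ties count 1/2).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1]))  # 0.666...
```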
AUC for ROC curves • [Figure: four ROC curves with AUC = 100% (a perfect test), AUC = 90%, AUC = 65%, and AUC = 50% (no better than chance)]
Further evaluation methods • Visualizing the ROC curve is an effective way to evaluate a classifier. • Tools such as Matlab, Weka, and Orange provide facilities for ROC curve visualization. • ROCR, an R package, is another tool that provides effective ROC visualization.
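For Python users, scikit-learn and matplotlib offer comparable facilities. A minimal sketch, with hypothetical labels and scores:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and classifier scores.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_true, y_score):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")  # diagonal baseline
plt.xlabel("False Positive Rate (1 - specificity)")
plt.ylabel("True Positive Rate (sensitivity)")
plt.legend()
plt.show()
```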