Evaluating Hypotheses
Reading: Coursepack: Learning From Examples, Section 4 (pp. 16-21)
Evaluating Hypotheses
• What we want: a hypothesis that best predicts unseen data
• Assumption: the data is "iid" (independently and identically distributed)
Accuracy and Error
• Accuracy = fraction of correct classifications on unseen data (the test set)
• Error rate = 1 − Accuracy
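As a quick sketch (not from the coursepack), these two quantities might be computed from a classifier's test-set output as follows; `predictions` and `true_labels` are assumed to be parallel lists of predicted and actual class labels.

```python
def accuracy(predictions, true_labels):
    """Fraction of test examples the classifier labeled correctly."""
    correct = sum(1 for p, y in zip(predictions, true_labels) if p == y)
    return correct / len(true_labels)

def error_rate(predictions, true_labels):
    """Error rate is simply 1 - accuracy."""
    return 1.0 - accuracy(predictions, true_labels)

# Example: 4 of 5 test examples correct -> accuracy 0.8, error rate 0.2
print(accuracy(["+", "-", "+", "+", "-"], ["+", "-", "+", "-", "-"]))
```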
How to use available data to best measure accuracy?
Split the data into training and test sets. But how to split?
• Too little training data: we don't get the optimal classifier
• Too little test data: the measured accuracy is not a reliable estimate
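A hypothetical sketch of such a split; the shuffling step, the 30% test fraction, and the function name are illustrative assumptions, not prescribed by the readings.

```python
import random

def train_test_split(examples, test_fraction=0.3, seed=0):
    """Shuffle the data, then hold out a fraction of it as the test set."""
    data = list(examples)
    random.Random(seed).shuffle(data)        # shuffle so the split is not order-dependent
    n_test = int(len(data) * test_fraction)
    return data[n_test:], data[:n_test]      # (training set, test set)
```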
One solution: "k-fold cross-validation"
• Each example is used both as a training instance and as a test instance.
• Split the data into k disjoint parts: S_1, S_2, ..., S_k.
• For i = 1 to k: select S_i to be the test set, train on the remaining data, and test on S_i to obtain accuracy A_i.
• Report the average, (A_1 + A_2 + ... + A_k) / k, as the final accuracy.
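A minimal sketch of this procedure; `train(training_data)` and `accuracy_on(model, test_data)` are hypothetical placeholders for whichever learner and evaluation routine you are using.

```python
def k_fold_cross_validation(examples, k, train, accuracy_on):
    """Estimate accuracy by averaging over k disjoint test folds."""
    folds = [examples[i::k] for i in range(k)]      # k disjoint parts S_1 ... S_k
    accuracies = []
    for i in range(k):
        test_set = folds[i]                         # S_i is the test set this round
        training_set = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train(training_set)                 # train on everything except S_i
        accuracies.append(accuracy_on(model, test_set))
    return sum(accuracies) / k                      # (A_1 + ... + A_k) / k
```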
Avoid "peeking" at test data when training
Example from readings:
• Split the data into training and test sets.
• Train a model with one learning parameter (e.g., "gain" vs. "gain ratio").
• Test on the test set.
• Repeat with the other learning parameter. Test on the test set.
• Return the accuracy of the model with the best performance.
What's wrong with this procedure?
Avoid "peeking" at test data when training
Problem: You used the test set to select the best model, but model selection is part of the learning process! This risks overfitting to that particular test set. The final learned model must be evaluated on previously unseen data.
This problem can also be solved by using k-fold cross-validation on the training data to select model parameters, and then evaluating the resulting model on unseen test data that was set aside before training.
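One hedged way to realize this, reusing the `k_fold_cross_validation` helper sketched above (the function and parameter names here are illustrative assumptions): select the parameter by cross-validation on the training set only, then measure accuracy once on the held-out test set.

```python
def select_and_evaluate(training_set, test_set, parameter_values,
                        train_with, accuracy_on, k=5):
    """Pick a learning parameter by k-fold CV on the training data only,
    then report the chosen model's accuracy on the untouched test set."""
    best_param, best_cv_accuracy = None, -1.0
    for param in parameter_values:                       # e.g., ["gain", "gain ratio"]
        cv_acc = k_fold_cross_validation(
            training_set, k,
            train=lambda data: train_with(data, param),
            accuracy_on=accuracy_on)
        if cv_acc > best_cv_accuracy:
            best_param, best_cv_accuracy = param, cv_acc
    final_model = train_with(training_set, best_param)   # retrain on all training data
    return best_param, accuracy_on(final_model, test_set)
```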
Evaluating classification algorithms
"Confusion matrix" for a given class c

                                   Predicted True          Predicted False
                                   (in class c)            (not in class c)
  Actual True  (in class c)        True Positive           False Negative (Type 2 error)
  Actual False (not in class c)    False Positive          True Negative
                                   (Type 1 error)
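A minimal sketch, assuming `predictions` and `true_labels` are parallel lists of class labels, of how the four cells of this matrix could be counted for a class c (the function name is illustrative, not from the readings).

```python
def confusion_counts(predictions, true_labels, c):
    """Count TP, FN, FP, TN for a single class c."""
    tp = fn = fp = tn = 0
    for pred, actual in zip(predictions, true_labels):
        if actual == c and pred == c:
            tp += 1      # actually in c, predicted in c
        elif actual == c and pred != c:
            fn += 1      # actually in c, predicted not in c (Type 2 error)
        elif actual != c and pred == c:
            fp += 1      # not in c, but predicted in c (Type 1 error)
        else:
            tn += 1      # not in c, predicted not in c
    return tp, fn, fp, tn
```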
• Precision: fraction of true positives out of all predicted positives: Precision = TP / (TP + FP)
• Recall: fraction of true positives out of all actual positives: Recall = TP / (TP + FN)
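These two formulas translate directly into code; the guard against dividing by zero is an added assumption for the case where a class is never predicted or never occurs.

```python
def precision(tp, fp):
    """Fraction of predicted positives that are actually positive: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def recall(tp, fn):
    """Fraction of actual positives that were predicted positive: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

# Example, using the counts from confusion_counts above:
# tp, fn, fp, tn = confusion_counts(predictions, true_labels, c)
# print(precision(tp, fp), recall(tp, fn))
```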
Error vs. Loss
• Loss functions