Machine Learning in Practice, Lecture 9
Carolyn Penstein Rosé, Language Technologies Institute / Human-Computer Interaction Institute
Plan for the Day
• Announcements
  • Questions?
  • Assignment 4
  • Quiz
• Today’s Data Set: Speaker Identification
• Weka helpful hints
  • Visualizing Errors for Regression Problems
  • Alternative forms of cross-validation
  • Creating Train/Test Pairs
• Intro to Evaluation
Preprocessing Speech
• Record speech to WAV files
• Extract a variety of acoustic and prosodic features (see the sketch below)
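The lecture doesn't name a feature-extraction tool, so the following is only an illustrative Python sketch of the kind of acoustic/prosodic features involved; the use of the librosa library and the specific feature names are assumptions, not the course's actual pipeline.

```python
# Illustrative only: the slides do not specify a feature-extraction tool.
# This sketch uses librosa to pull a few simple acoustic/prosodic features
# (pitch and energy) from one WAV file.
import numpy as np
import librosa

def extract_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)            # load at native sample rate
    f0, voiced_flag, _ = librosa.pyin(                 # frame-level pitch track
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))
    rms = librosa.feature.rms(y=y)[0]                  # frame-level energy
    return {
        "mean_pitch": float(np.nanmean(f0)),           # prosodic: average F0
        "pitch_range": float(np.nanmax(f0) - np.nanmin(f0)),
        "mean_energy": float(rms.mean()),
        "voiced_ratio": float(np.mean(voiced_flag)),   # fraction of voiced frames
    }
```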
Predictions: which algorithm will perform better?
• What previous data set does this remind you of?
• J48: .53 Kappa
• SMO: .37 Kappa
• Naïve Bayes: .16 Kappa
What would 1R do? .16 Kappa
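The Kappa values above measure agreement with the true labels beyond what chance agreement would give. As a reminder of how the statistic is computed, here is a minimal Python sketch; the confusion matrix in it is invented for illustration, not taken from the speaker-identification data.

```python
# Minimal sketch of Cohen's kappa, the statistic quoted on the slides above.
# The 2x2 confusion matrix is made up, not from the speaker-identification data.
import numpy as np

def cohens_kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total                  # raw agreement
    p_expected = (confusion.sum(axis=0) *                     # agreement expected by chance
                  confusion.sum(axis=1)).sum() / total ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

print(cohens_kappa([[40, 10],
                    [ 5, 45]]))   # 0.70 for this made-up matrix
```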
Creating Train/Test Pairs First click here
Creating Train/Test Pairs If you pick unsupervised, you’ll get non-stratified folds; otherwise you’ll get stratified folds.
Stratified versus Non-Stratified
• Weka’s standard cross-validation is stratified
  • Data is randomized before dividing it into folds
  • Preserves the distribution of class values across folds
  • Reduces variance in performance
• Unstratified cross-validation means there is no randomization
  • Order is preserved
  • Advantage for matching predictions with instances in Weka
Stratified versus Non-Stratified
• Leave-one-out cross-validation
  • Train on all but one instance
  • Iterate over all instances
  • Extreme version of unstratified cross-validation
  • If the test set has only one instance, the distribution of class values cannot be preserved
  • Maximizes the amount of data used for training on each fold
Stratified versus Non-Stratified
• Leave-one-subpopulation-out
  • Use it when you have several data points from the same subpopulation, e.g., speech data from the same speaker
  • Otherwise you may have data from the same subpopulation in both train and test, which over-estimates performance because of the overlap between them
• When is this not a problem?
  • When you can manually make sure that won’t happen
  • You have to do that by hand (see the sketch below)
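Weka's cross-validation won't group instances by speaker for you, so the grouping has to be arranged by hand. As an illustration of what that grouping looks like outside Weka, here is a Python sketch using scikit-learn's LeaveOneGroupOut; the feature array and speaker IDs are placeholders, not the lecture's data.

```python
# A sketch of leave-one-subpopulation-out splitting using scikit-learn's
# LeaveOneGroupOut, standing in for doing the grouping "by hand" in Weka.
# X, y, and speakers are placeholders, not the actual speaker-ID data set.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.rand(12, 4)                        # 12 instances, 4 acoustic features
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
speakers = np.array(["s1", "s1", "s1", "s2", "s2", "s2",
                     "s3", "s3", "s3", "s4", "s4", "s4"])

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
    # All instances from the held-out speaker go into the test set, so the
    # train and test sets never share a subpopulation.
    print("held-out speaker:", set(speakers[test_idx]),
          "training instances:", len(train_idx))
```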
Creating Train/Test Pairs Now click here
You’re going to run this filter 20 times altogether: twice for every fold. Creating Train/Test Pairs
True for Train, false for Test Creating Train/Test Pairs
Creating Train/Test Pairs If you’re doing Stratified, make sure you have the class attribute selected here.
1. Click Apply Creating Train/Test Pairs
2. Save the file Creating Train/Test Pairs
3. Undo before you create the next file Creating Train/Test Pairs
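Generating those 20 files by hand is tedious; as a rough equivalent outside Weka, the sketch below uses scikit-learn's StratifiedKFold to write one train/test pair per fold. The file names and CSV layout (a "speaker" class column) are assumptions for illustration.

```python
# A rough stand-in for the Weka filter workflow above: write stratified
# train/test pairs for 10 folds. File names and the "speaker" column are
# assumptions, not the course's actual files.
import pandas as pd
from sklearn.model_selection import StratifiedKFold

data = pd.read_csv("speakers.csv")               # hypothetical export of the ARFF data
y = data["speaker"]                              # class attribute used for stratification

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for fold, (train_idx, test_idx) in enumerate(skf.split(data, y), start=1):
    # Two files per fold -- the same 20 files the filter run produces.
    data.iloc[train_idx].to_csv(f"train{fold}.csv", index=False)
    data.iloc[test_idx].to_csv(f"test{fold}.csv", index=False)
```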
Doing Manual Train/Test * First load the training data on the Preprocess tab
Doing Manual Train/Test * Now select Supplied Test Set as the Test Option
Doing Manual Train/Test Then Click Set
Doing Manual Train/Test * Next Load the Test set
Doing Manual Train/Test * Then you’re all set, so click on Start
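As a rough analogue of the "Supplied Test Set" run above, the Python sketch below fits a model on the training file and evaluates it once on the held-out test file. The file names come from the earlier sketch and are hypothetical, and GaussianNB is only a stand-in for a Weka classifier (it assumes all feature columns are numeric).

```python
# What the "Supplied test set" option does, sketched outside Weka: train on one
# file, evaluate once on the other. File names are hypothetical; GaussianNB is
# only a stand-in for a Weka classifier and assumes numeric features.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, cohen_kappa_score

train = pd.read_csv("train1.csv")
test = pd.read_csv("test1.csv")

X_train, y_train = train.drop(columns=["speaker"]), train["speaker"]
X_test, y_test = test.drop(columns=["speaker"]), test["speaker"]

model = GaussianNB().fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("kappa:   ", cohen_kappa_score(y_test, pred))
```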
Intro to Chapter 5
• Many techniques illustrated in Chapter 5 (ROC curves, recall-precision curves) don’t show up in applied papers
  • They are useful for showing trade-offs between properties of different algorithms
  • You see them in theoretical machine learning papers
Intro to Chapter 5
• Still important to understand what they represent
  • The thinking behind the techniques will show up in your papers
  • You need to know what your numbers do and don’t demonstrate
  • They give you a unified framework for thinking about machine learning techniques
• There is no cookie cutter for a good evaluation
Confidence Intervals
• Mainly important if there is some question about whether your data set is big enough
• You average your performance over 10 folds, but how certain can you be that the number you got is correct?
• We saw before that performance varies from fold to fold
[Figure: a confidence interval marked on a 0–40 number line]
Confidence Intervals
• We know that the distribution of categories found in the training set and in the testing set affects the performance
• Performance on two different sets will not be the same
• Confidence intervals allow us to say that the probability of the real performance value being within a certain range of the observed value is, say, 90%
Confidence Intervals
• Confidence limits come from the normal distribution
• Computed in terms of the number of standard deviations from the mean
• If the data is normally distributed, there is about a 16% chance of the real value being more than 1 standard deviation above the mean
What is a significance test?
• How likely is it that the difference you see occurred by chance?
• How could the difference occur by chance?
[Figure: two overlapping confidence intervals on a 0–40 number line]
If the mean of one distribution is within the confidence interval of the other, the difference you observe could be due to chance. If you want p < .05, you need the 90% confidence intervals; find the corresponding z scores in a standard normal distribution table.
Computing Confidence Intervals
• A 90% confidence interval corresponds to z = 1.65
  • 5% chance that a data point will occur to the right of the rightmost edge of the interval
• f = percentage of successes, N = number of trials
• $p = \dfrac{f + \frac{z^2}{2N} \pm z\sqrt{\frac{f}{N} - \frac{f^2}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}}$
• f = 75%, N = 1000, c = 90% → [0.727, 0.773]
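The formula above gives a confidence interval around an observed success rate f over N trials. As a quick check, the sketch below is just that formula transcribed into Python and evaluated on the slide's worked example; nothing here is Weka-specific.

```python
# The slide's confidence-interval formula, evaluated on its worked example.
from math import sqrt

def confidence_interval(f, N, z):
    """f = observed success rate, N = number of trials, z = normal quantile."""
    center = f + z**2 / (2 * N)
    spread = z * sqrt(f / N - f**2 / N + z**2 / (4 * N**2))
    denom = 1 + z**2 / N
    return (center - spread) / denom, (center + spread) / denom

# z = 1.65 leaves 5% of the normal distribution above the upper limit,
# giving a 90% two-sided interval.
low, high = confidence_interval(0.75, 1000, 1.65)
print(round(low, 3), round(high, 3))   # roughly 0.727 and 0.772, matching the slide to within rounding
```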
Significance Tests
• If you want to know whether the difference in performance between Approach A and Approach B is significant:
  • Get performance numbers for A and B on each fold of a 10-fold cross-validation
  • You can use the Experimenter, or you can do the computation in Excel or Minitab
  • If you use exactly the same “folds” across approaches, you can use a paired t-test rather than an unpaired t-test (see the sketch below)
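For those doing the computation outside the Experimenter, here is a minimal sketch of the paired t-test on per-fold results; the accuracy numbers are invented for illustration. The point of the pairing is that each position in the two lists comes from the same fold.

```python
# Paired t-test on per-fold results, as described above, done outside Weka.
# The per-fold accuracies are invented; each position pairs the two approaches
# on the same train/test split.
from scipy import stats

approach_a = [0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.84, 0.77, 0.80, 0.81]
approach_b = [0.78, 0.76, 0.80, 0.79, 0.75, 0.79, 0.81, 0.76, 0.77, 0.78]

t_stat, p_value = stats.ttest_rel(approach_a, approach_b)   # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```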
Significance Tests
• Don’t forget that you can get a significant result by chance!
  • The Experimenter corrects for multiple comparisons
• Significance tests are less important if you have a large amount of data and the difference in performance between approaches is large
* First click New Using the Experimenter
Make sure Simple is selected Using the Experimenter
Select .csv as the output file format and click on Browse. Click on Add New. Enter the file name. Using the Experimenter
Load data set Using the Experimenter
10 repetitions is better than 1, but 1 is faster. Using the Experimenter
Click on Add New to add algorithms Using the Experimenter
Click Choose to select algorithm Using the Experimenter