
Machine Learning in Practice Lecture 9


Presentation Transcript


  1. Machine Learning in Practice, Lecture 9 Carolyn Penstein Rosé Language Technologies Institute / Human-Computer Interaction Institute

  2. Plan for the Day • Announcements • Questions? • Assignment 4 • Quiz • Today’s Data Set: Speaker Identification • Weka helpful hints • Visualizing Errors for Regression Problems • Alternative forms of cross-validation • Creating Train/Test Pairs • Intro to Evaluation

  3. Speaker Identification

  4. Today’s Data Set – Speaker Identification

  5. Preprocessing Speech • Record speech to WAV files • Extract a variety of acoustic and prosodic features

  6. Predictions: which algorithm will perform better? • What previous data set does this remind you of? • J48 .53 Kappa • SMO .37 Kappa • Naïve Bayes .16 Kappa
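A minimal sketch of how a comparison like this can be run through Weka's Java API rather than the Explorer, assuming the speaker data lives in an ARFF file named speakers.arff with the class as the last attribute (both names are assumptions, not from the slides):

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareClassifiers {
    public static void main(String[] args) throws Exception {
        // speakers.arff is a placeholder name; class assumed to be the last attribute
        Instances data = DataSource.read("speakers.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] learners = { new J48(), new SMO(), new NaiveBayes() };
        for (Classifier c : learners) {
            Evaluation eval = new Evaluation(data);
            // stratified 10-fold cross-validation
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%s kappa = %.2f%n",
                    c.getClass().getSimpleName(), eval.kappa());
        }
    }
}
```

Whether the printed kappas match the numbers on the slide depends on the exact data, random seed, and classifier settings.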

  7. Notice Ranges and Contingencies

  8. Most Predictive Feature

  9. Least Predictive Feature

  10. What would 1R do?

  11. What would 1R do? .16 Kappa

  12. Weka Helpful Hints

  13. Evaluating Numeric Prediction: CPU data

  14. Visualizing Classifier Errors for Numeric Prediction
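Slides 13-14 concern numeric prediction on the CPU data and visualizing the errors. This is a hedged sketch of the same evaluation through the Java API, assuming Weka's bundled cpu.arff with the class as the last attribute; the actual-versus-predicted pairs printed at the end are the raw material that the Explorer's right-click "Visualize classifier errors" option plots:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.LinearRegression;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CpuRegression {
    public static void main(String[] args) throws Exception {
        Instances cpu = DataSource.read("cpu.arff");
        cpu.setClassIndex(cpu.numAttributes() - 1);

        // Cross-validated error measures for numeric prediction
        Evaluation eval = new Evaluation(cpu);
        eval.crossValidateModel(new LinearRegression(), cpu, 10, new Random(1));
        System.out.println("Correlation: " + eval.correlationCoefficient());
        System.out.println("MAE:         " + eval.meanAbsoluteError());
        System.out.println("RMSE:        " + eval.rootMeanSquaredError());

        // Actual vs. predicted for the first few instances, trained on the full set
        LinearRegression lr = new LinearRegression();
        lr.buildClassifier(cpu);
        for (int i = 0; i < Math.min(5, cpu.numInstances()); i++) {
            System.out.println(cpu.instance(i).classValue() + " -> "
                    + lr.classifyInstance(cpu.instance(i)));
        }
    }
}
```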

  15. Creating Train/Test Pairs First click here

  16. Creating Train/Test Pairs If you pick the unsupervised filter, you'll get non-stratified folds; otherwise you'll get stratified folds.

  17. Stratified versus Non-Stratified • Weka’s standard cross-validation is stratified • Data is randomized before dividing it into folds • Preserves distribution of class values across folds • Reduces variance in performance • Unstratified cross-validation means there is no randomization • Order is preserved • Advantage for matching predictions with instances in Weka
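A minimal sketch of how the stratified versus non-stratified distinction looks in Weka's Java API (the file name and class position are assumptions). Evaluation.crossValidateModel() does the stratified version for you; the calls below make the two variants explicit:

```java
import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FoldDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("speakers.arff");   // placeholder file name
        data.setClassIndex(data.numAttributes() - 1);

        // Stratified: randomize, then spread class values evenly across folds
        Instances stratified = new Instances(data);
        stratified.randomize(new Random(1));
        stratified.stratify(10);
        Instances trainS = stratified.trainCV(10, 0);
        Instances testS  = stratified.testCV(10, 0);

        // Non-stratified: no randomization, so the original instance order is
        // preserved and predictions can be matched back to instances
        Instances trainU = data.trainCV(10, 0);
        Instances testU  = data.testCV(10, 0);

        System.out.println(trainS.numInstances() + " / " + testS.numInstances());
        System.out.println(trainU.numInstances() + " / " + testU.numInstances());
    }
}
```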

  18. Stratified versus Non-Stratified • Leave-one-out cross validation • Train on all but one instance • Iterate over all instances • Extreme version of unstratified cross-validation • If test set only has one instance, the distribution of class values cannot be preserved • Maximizes amount of data used for training on each fold

  19. Stratified versus Non-Stratified • Leave-one-subpopulation-out • If you have several data points from the same subpopulation • e.g., speech data from the same speaker • You may have data from the same subpopulation in both train and test • This over-estimates performance, because train and test overlap • When is this not a problem? • When you can manually make sure that won't happen • You have to do that by hand
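A hedged sketch of leave-one-subpopulation-out, for a task where the subpopulation is not itself the class; the nominal "speaker" attribute and the file name are hypothetical, and in practice you would also remove the subpopulation attribute from the features (for example with the Remove filter) before training:

```java
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Attribute;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LeaveOneSpeakerOut {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("speech.arff");     // placeholder file name
        data.setClassIndex(data.numAttributes() - 1);
        Attribute speaker = data.attribute("speaker");       // hypothetical subpopulation attribute

        // For each speaker: train on all other speakers, test on this one
        for (int s = 0; s < speaker.numValues(); s++) {
            Instances train = new Instances(data, data.numInstances());
            Instances test  = new Instances(data, data.numInstances());
            for (int i = 0; i < data.numInstances(); i++) {
                Instance inst = data.instance(i);
                if ((int) inst.value(speaker) == s) test.add(inst); else train.add(inst);
            }
            J48 tree = new J48();
            tree.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(tree, test);
            System.out.println(speaker.value(s) + " kappa: " + eval.kappa());
        }
    }
}
```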

  20. Creating Train/Test Pairs If you pick the unsupervised filter, you'll get non-stratified folds; otherwise you'll get stratified folds.

  21. Creating Train/Test Pairs Now click here

  22. Creating Train/Test Pairs

  23. You're going to run this filter 20 times altogether, twice for every fold. Creating Train/Test Pairs

  24. Choose True for the Train set, False for the Test set. Creating Train/Test Pairs

  25. Creating Train/Test Pairs If you're doing stratified folds, make sure you have the class attribute selected here.

  26. 1. Click Apply Creating Train/Test Pairs

  27. 2. Save the file Creating Train/Test Pairs

  28. 3. Undo before you create the next file Creating Train/Test Pairs
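The same train/test pairs from slides 15-28 can also be produced programmatically. This is a hedged sketch using the supervised StratifiedRemoveFolds filter (swap in weka.filters.unsupervised.instance.RemoveFolds for non-stratified folds); the file names are placeholders:

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSink;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.instance.StratifiedRemoveFolds;

public class MakeTrainTestPairs {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("speakers.arff");   // placeholder file name
        data.setClassIndex(data.numAttributes() - 1);         // class must be set for the supervised filter

        int numFolds = 10;
        for (int fold = 1; fold <= numFolds; fold++) {
            // Two runs per fold, 20 runs in all: invertSelection true keeps the
            // training portion, false keeps the held-out test fold
            for (boolean train : new boolean[] { true, false }) {
                StratifiedRemoveFolds filter = new StratifiedRemoveFolds();
                filter.setNumFolds(numFolds);
                filter.setFold(fold);
                filter.setSeed(1);
                filter.setInvertSelection(train);
                // Each run starts from the untouched in-memory data, so the
                // GUI's "Undo" step has no equivalent here
                filter.setInputFormat(data);
                Instances subset = Filter.useFilter(data, filter);
                DataSink.write((train ? "train" : "test") + fold + ".arff", subset);
            }
        }
    }
}
```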

  29. Doing Manual Train/Test * First load the training data on the Preprocess tab

  30. Doing Manual Train/Test * Now select Supplied Test Set as the Test Option

  31. Doing Manual Train/Test Then Click Set

  32. Doing Manual Train/Test * Next Load the Test set

  33. Doing Manual Train/Test * Then you’re all set, so click on Start
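A minimal sketch of the same manual train/test run (slides 29-33) through the Java API instead of the Explorer's Supplied Test Set option; the train/test file names are placeholders for the pairs created earlier:

```java
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ManualTrainTest {
    public static void main(String[] args) throws Exception {
        Instances train = DataSource.read("train1.arff");    // placeholder file names
        Instances test  = DataSource.read("test1.arff");
        train.setClassIndex(train.numAttributes() - 1);
        test.setClassIndex(test.numAttributes() - 1);

        J48 tree = new J48();
        tree.buildClassifier(train);                  // train only on the training file

        Evaluation eval = new Evaluation(train);      // priors come from the training data
        eval.evaluateModel(tree, test);               // score on the supplied test set
        System.out.println(eval.toSummaryString());
        System.out.println("Kappa: " + eval.kappa());
    }
}
```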

  34. Evaluation Methodology

  35. Intro to Chapter 5 • Many techniques illustrated in Chapter 5 (ROC curves, recall-precision curves) don’t show up in applied papers • They are useful for showing trade-offs between properties of different algorithms • You see them in theoretical machine learning papers

  36. Intro to Chapter 5 • Still important to understand what they represent • The thinking behind the techniques will show up in your papers • You need to know what your numbers do and don’t demonstrate • They give you a unified framework for thinking about machine learning techniques • There is no cookie cutter for a good evaluation
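As a concrete reference point, the quantities that ROC and recall-precision curves summarize are available directly from Weka's Evaluation class. This is a hedged sketch assuming a reasonably recent Weka release (areaUnderPRC is not in the oldest 3.x versions) and the same hypothetical speakers.arff:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CurveAreas {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("speakers.arff");   // placeholder file name
        data.setClassIndex(data.numAttributes() - 1);

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new NaiveBayes(), data, 10, new Random(1));
        // Area under the ROC and recall-precision curves, one value per class
        for (int c = 0; c < data.numClasses(); c++) {
            System.out.printf("%s  AUC-ROC = %.3f  AUC-PR = %.3f%n",
                    data.classAttribute().value(c),
                    eval.areaUnderROC(c), eval.areaUnderPRC(c));
        }
    }
}
```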

  37. Confidence Intervals • Mainly important if there is some question about whether your data set is big enough • You average your performance over 10 folds, but how certain can you be that the number you got is correct? • We saw before that performance varies from fold to fold [figure: a confidence interval drawn on a 0-40 scale]

  38. Confidence Intervals • We know that the distribution of categories found in the training set and in the testing set affects the performance • Performance on two different sets will not be the same • Confidence intervals allow us to say that the probability of the real performance value being within a certain range of the observed value is 90% [figure: a confidence interval drawn on a 0-40 scale]

  39. Confidence Intervals • Confidence limits come from the normal distribution • Computed in terms of number of standard deviations from the mean • If the data is normally distributed, there is about a 16% chance of the real value being more than 1 standard deviation above the mean

  40. What is a significance test? • How likely is it that the difference you see occurred by chance? • How could the difference occur by chance? [figure: two overlapping confidence intervals drawn on a 0-40 scale] If the mean of one distribution is within the confidence interval of another, the difference you observe could be by chance. If you want p<.05, you need the 90% confidence intervals. Find the corresponding Z scores from a standard normal distribution table.

  41. Computing Confidence Intervals • 90% confidence interval corresponds to z = 1.65 • 5% chance that a data point will occur to the right of the rightmost edge of the interval • f = percentage of successes • N = number of trials • p = (f + z²/2N ± z·√(f/N − f²/N + z²/4N²)) / (1 + z²/N) • f = 75%, N = 1000, c = 90% → [0.727, 0.773]
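A small self-contained sketch of the interval formula above, plugged into the slide's worked example (f = 0.75, N = 1000, z = 1.65 for a 90% interval):

```java
public class ConfidenceInterval {
    // Interval for an observed success rate f over n trials at z standard deviations
    public static double[] interval(double f, double n, double z) {
        double center = f + z * z / (2 * n);
        double spread = z * Math.sqrt(f / n - f * f / n + z * z / (4 * n * n));
        double denom  = 1 + z * z / n;
        return new double[] { (center - spread) / denom, (center + spread) / denom };
    }

    public static void main(String[] args) {
        double[] ci = interval(0.75, 1000, 1.65);
        // Prints roughly [0.727, 0.772], essentially the interval quoted on the slide
        System.out.printf("[%.3f, %.3f]%n", ci[0], ci[1]);
    }
}
```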

  42. Significance Tests • If you want to know whether the difference in performance between Approach A and Approach B is significant • Get performance numbers for A and B on each fold of a 10-fold cross validation • You can use the Experimenter or you can do the computation in Excel or Minitab • If you use exactly the same “folds” across approaches you can use a paired t-test rather than an unpaired t-test

  43. Significance Tests • Don’t forget that you can get a significant result by chance! • The Experimenter corrects for multiple comparisons • Significance tests are less important if you have a large amount of data and the difference in performance between approaches is large
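A hedged sketch of the paired t-test over per-fold results described above, assuming both approaches were evaluated on exactly the same folds; the two arrays of per-fold kappa values are made-up placeholders, not real results:

```java
public class PairedTTest {
    // Paired t statistic over matched per-fold scores (n - 1 degrees of freedom)
    public static double pairedT(double[] a, double[] b) {
        int n = a.length;
        double mean = 0;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) { d[i] = a[i] - b[i]; mean += d[i]; }
        mean /= n;
        double var = 0;
        for (int i = 0; i < n; i++) var += (d[i] - mean) * (d[i] - mean);
        var /= (n - 1);                       // sample variance of the differences
        return mean / Math.sqrt(var / n);
    }

    public static void main(String[] args) {
        // Placeholder per-fold kappas for Approach A and Approach B
        double[] approachA = {0.52, 0.55, 0.50, 0.53, 0.56, 0.51, 0.54, 0.52, 0.55, 0.53};
        double[] approachB = {0.48, 0.50, 0.47, 0.49, 0.52, 0.46, 0.50, 0.47, 0.51, 0.49};
        System.out.println("t = " + pairedT(approachA, approachB));
        // Compare against the critical value for 9 degrees of freedom
        // (about 2.26 for a two-tailed test at p < .05)
    }
}
```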

  44. * First click New Using the Experimenter

  45. Make sure Simple is selected Using the Experimenter

  46. Select .csv as the output file format and click on Browse. Click on Add New. Enter the file name. Using the Experimenter

  47. Load data set Using the Experimenter

  48. 10 repetitions is better than 1, but 1 is faster. Using the Experimenter

  49. Click on Add New to add algorithms Using the Experimenter

  50. Click Choose to select algorithm Using the Experimenter
