
Carolyn Penstein Rosé Language Technologies Institute Human-Computer Interaction Institute School of Computer Science


Presentation Transcript


  1. LightSIDE Carolyn Penstein Rosé Language Technologies Institute Human-Computer Interaction Institute School of Computer Science With funding from the National Science Foundation and the Office of Naval Research

  2. lightsidelabs.com/research/

  3. Click here to load a file

  4. Select Heteroglossia as the predicted category

  5. Make sure the text field to extract text features from is selected

  6. Feature Space Customizations • Punctuation can be a “stand-in” for mood • “you think the answer is 9?” • “you think the answer is 9.” • Bigrams capture simple lexical patterns • “common denominator” versus “common multiple” • Trigrams (just like bigrams, but with 3 words next to each other) • “Carnegie Mellon University” • POS bigrams capture syntactic or stylistic information • “the answer which is …” vs. “which is the answer” • Line length can be a proxy for explanation depth
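The feature types on this slide can be sketched in a few lines. This is an illustrative example, not LightSIDE's actual extraction code; the helper name `ngrams` and the sample line are made up here.

```python
# Illustrative sketch of the n-gram and surface features described above.
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "you think the answer is 9 ?".split()

unigrams = ngrams(tokens, 1)
bigrams = ngrams(tokens, 2)   # e.g. ("common", "denominator")
trigrams = ngrams(tokens, 3)  # e.g. ("Carnegie", "Mellon", "University")

# Punctuation as a stand-in for mood: the final "?" becomes a feature.
has_question_mark = "?" in tokens
# Line length as a proxy for explanation depth.
line_length = len(tokens)
```

POS bigrams work the same way, just run over part-of-speech tags instead of words.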

  7. Feature Space Customizations • “Contains non-stop word” can be a predictor of whether a conversational contribution is contentful • “ok sure” versus “the common denominator” • “Remove stop words” removes some distracting features • Stemming allows some generalization • multiple, multiply, multiplication • Removing rare features is a cheap form of feature selection • Features that only occur once or twice in the corpus won’t generalize, so they are a waste of time to include in the vector space

  8. Feature Space Customizations: Think like a computer! • Machine learning algorithms look for features that are good predictors, not features that are necessarily meaningful • Look for approximations • If you want to find questions, you don’t need to do a complete syntactic analysis • Look for question marks • Look for wh-terms that occur immediately before an auxiliary verb
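The "approximation" idea can be shown concretely: flag likely questions with no parsing at all, using just the two cues named above. The word lists here are illustrative, not exhaustive.

```python
import re

WH = r"(?:who|what|when|where|why|which|how)"
AUX = r"(?:is|are|was|were|do|does|did|can|could|will|would|should)"

def looks_like_question(line):
    """Cheap question detector: question mark, or wh-term + auxiliary."""
    line = line.lower()
    if "?" in line:
        return True
    # A wh-term immediately before an auxiliary verb, e.g. "what is ...".
    return re.search(rf"\b{WH}\s+{AUX}\b", line) is not None
```

This will misfire sometimes, but a machine learner only needs the feature to be a good predictor, not a perfect one.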

  9. Click to extract text features

  10. Select Logistic Regression as the Learner

  11. Evaluate result by cross validation over sessions

  12. Run the experiment
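The key point of "cross validation over sessions" (as opposed to random folds) is that every instance from one session lands in the same fold, so the learner — logistic regression in this walkthrough — is always evaluated on sessions it never saw in training. A minimal sketch of that fold assignment, with made-up session ids:

```python
def folds_by_session(session_ids, n_folds):
    """Assign each distinct session to one fold; return a fold per instance."""
    sessions = sorted(set(session_ids))
    fold_of = {s: i % n_folds for i, s in enumerate(sessions)}
    return [fold_of[s] for s in session_ids]

# Two instances per session; instances from one session never split folds.
session_ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = folds_by_session(session_ids, n_folds=3)
```

Random instance-level folds would leak session-specific vocabulary between train and test and inflate the cross-validation estimate.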

  13. Stretchy Patterns (Gianfortoni, Adamson, & Rosé, 2011) • A sequence of 1 to 6 categories • May include GAPs • A GAP can cover any symbol • GAP+ may cover any number of symbols • Must not begin or end with a GAP
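An illustrative matcher for the rules on this slide — not the authors' implementation. Here a pattern is a list of categories, GAP is taken to match exactly one token, and GAP+ one or more tokens (an assumption; the slide says "any number").

```python
import re

def compile_stretchy(pattern):
    """Turn a pattern like ['common', 'GAP+', 'denominator'] into a regex."""
    assert 1 <= len(pattern) <= 6, "a sequence of 1 to 6 categories"
    assert pattern[0] not in ("GAP", "GAP+"), "must not begin with a GAP"
    assert pattern[-1] not in ("GAP", "GAP+"), "must not end with a GAP"
    parts = []
    for cat in pattern:
        if cat == "GAP":
            parts.append(r"\S+")           # any single token
        elif cat == "GAP+":
            parts.append(r"\S+(?: \S+)*")  # a run of one or more tokens
        else:
            parts.append(re.escape(cat))
    return re.compile(" ".join(parts))

pat = compile_stretchy(["common", "GAP+", "denominator"])
hit = pat.search("find the common or lowest denominator")
```

The gaps are what make the patterns "stretchy": one pattern generalizes over many surface realizations of the same construction.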

  14. Now it’s your turn! We’ll explore some advanced features and error analysis after the break!

  15. Error Analysis Process • Identify large error cells • Make comparisons • Ask yourself how an instance is similar to the instances that were correctly classified with the same class (vertical comparison) • Ask how it is different from the instances it was incorrectly not classified as (horizontal comparison) [Figure: confusion matrix with Positive and Negative cells]
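Step 1 of the process — identifying the large error cells — can be sketched as follows. The gold/predicted labels here are toy data for illustration:

```python
from collections import Counter

gold = ["pos", "pos", "neg", "neg", "neg", "pos"]
pred = ["pos", "neg", "neg", "pos", "neg", "neg"]

# Confusion matrix as (gold, predicted) -> count.
confusion = Counter(zip(gold, pred))

# Error cells are the off-diagonal entries; start with the largest one.
errors = {cell: n for cell, n in confusion.items() if cell[0] != cell[1]}
largest_error_cell = max(errors, key=errors.get)
```

Once the biggest cell is found, pull out its instances and make the vertical and horizontal comparisons the slide describes.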

  16. Error Analysis on Development Set

  17. Error Analysis on Development Set

  18. Error Analysis on Development Set

  19. Error Analysis on Development Set

  20. Error Analysis on Development Set

  21. What’s different? • Positive: “is interesting”, “an interesting scene” • Negative: “would have been more interesting”, “potentially interesting”, etc.

  22. * Note that in this case we get no benefit if we use feature selection over the original feature space.

  23. Feature Splitting (Daumé III, 2007) [Figure: each feature copied into General, Domain A, and Domain B versions] Why is this nonlinear? It represents the interaction between each feature and the domain variable. Now that the feature space represents the nonlinearity, the algorithm that trains the weights can be linear.
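The feature splitting above is simple to sketch: every feature is copied into a general version plus a domain-specific version, so a linear learner can give them separate weights (the feature-by-domain interaction). The representation below — tuple keys on a dict — is one way to do it, chosen here for illustration:

```python
def augment(features, domain):
    """Copy each feature into a general and a domain-specific version."""
    out = {}
    for f, v in features.items():
        out[("general", f)] = v  # shared across all domains
        out[(domain, f)] = v     # weight specific to this domain
    return out

x = augment({"interesting": 1}, domain="movie_reviews")
```

A word like "interesting" can then carry one weight that transfers across domains and another that is tuned to movie reviews alone.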

  24. Healthcare Bill Dataset

  25. Healthcare Bill Dataset

  26. Healthcare Bill Dataset

  27. Healthcare Bill Dataset

  28. Healthcare Bill Dataset

  29. Healthcare Bill Dataset

  30. Healthcare Bill Dataset

  31. Healthcare Bill Dataset

  32. Healthcare Bill Dataset
