
Linguistic Regularities in Sparse and Explicit Word Representations



Presentation Transcript


  1. Linguistic Regularities in Sparse and Explicit Word Representations. Omer Levy, Yoav Goldberg. Bar-Ilan University, Israel

  2. Papers in ACL 2014* * Sampling error: +/- 100%

  3. Neural Embeddings

  4. Representing words as vectors is not new!

  5. Explicit Representations (Distributional)
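The body of this slide is image-only in the transcript. As a hedged sketch of the idea (not the authors' code): explicit distributional vectors are sparse, with one dimension per actual context word, weighted here by positive PMI as in the paper; the toy corpus and the ±2-word window are illustrative choices only.

```python
from collections import Counter
from math import log

# Minimal sketch, not the authors' pipeline: explicit (distributional)
# vectors are sparse, one dimension per context word, weighted by
# positive PMI. Toy corpus and window size are illustrative only.
corpus = [["the", "king", "wore", "a", "crown"],
          ["the", "queen", "wore", "a", "crown"]]

pair_cnt, word_cnt, ctx_cnt, total = Counter(), Counter(), Counter(), 0
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):  # +/-2 window
            if i != j:
                pair_cnt[w, sent[j]] += 1
                word_cnt[w] += 1
                ctx_cnt[sent[j]] += 1
                total += 1

def ppmi(w, c):
    """Positive pointwise mutual information of a (word, context) pair."""
    pmi = log(pair_cnt[w, c] * total / (word_cnt[w] * ctx_cnt[c]))
    return max(pmi, 0.0)  # negative associations are clipped to zero

# The explicit vector for "king": a sparse {context word: PPMI weight} map.
king = {c: ppmi(w, c) for (w, c) in pair_cnt if w == "king"}
print({c: round(v, 2) for c, v in king.items() if v > 0})
```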

  6. Questions
  • Are analogies unique to neural embeddings? Compare neural embeddings with explicit representations.
  • Why does vector arithmetic reveal analogies? Unravel the mystery behind neural embeddings and their “magic”.

  7. Background

  8.–9. Mikolov et al. (2013a,b,c)
  • Neural embeddings have interesting geometries
  • These patterns capture “relational similarities”
  • Can be used to solve analogies: man is to woman as king is to queen
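To make the mechanism concrete, here is a minimal sketch of analogy solving by vector arithmetic. The three-dimensional vectors are hand-built, hypothetical values (real embeddings are learned from a corpus); the objective is the one used by Mikolov et al.: return the word closest to b − a + a*.

```python
import numpy as np

# Hand-built toy vectors (hypothetical values; the dimensions roughly
# mean [male, female, royal]). Real embeddings are learned, not built.
vecs = {
    "man":    np.array([1.0, 0.0, 0.0]),
    "woman":  np.array([0.0, 1.0, 0.0]),
    "king":   np.array([1.0, 0.0, 1.0]),
    "queen":  np.array([0.0, 1.0, 1.0]),
    "prince": np.array([1.0, 0.0, 1.0]),  # distractor candidate
}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, a_star, b):
    """Solve 'a : a_star :: b : ?' by maximizing cos(x, b - a + a_star),
    excluding the three query words themselves, as Mikolov et al. do."""
    target = vecs[b] - vecs[a] + vecs[a_star]
    candidates = (w for w in vecs if w not in (a, a_star, b))
    return max(candidates, key=lambda w: cos(vecs[w], target))

print(analogy("man", "woman", "king"))  # -> queen
```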

  10.–17. Mikolov et al. (2013a,b,c) (image-only slides)

  18.–22. Are analogies unique to neural embeddings?
  • Experiment: compare embeddings to explicit representations
  • Learn different representations from the same corpus: neural embeddings vs. sparse explicit vectors

  23. Analogy Datasets

  24.–25. Embedding vs Explicit (Round 1)
  • Many analogies are recovered by the explicit representation, but many more by the embedding.

  26.–34. Why does vector arithmetic reveal analogies?
  • Solving king − man + woman means finding the word x that maximizes cos(x, king − man + woman)
  • For normalized vectors, this is the same as maximizing cos(x, king) − cos(x, man) + cos(x, woman): a sum of three similarity terms, formalized below
  • Each term asks something about the candidate: royal? female?
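In symbols, a reconstruction of the derivation behind those image-only slides, matching the paper's additive objective (3CosAdd), under the assumption that all word vectors are normalized to unit length:

```latex
\arg\max_{x}\; \cos\!\left(x,\ \mathit{king} - \mathit{man} + \mathit{woman}\right)
\;=\;
\arg\max_{x}\; \left[\, \cos(x, \mathit{king}) - \cos(x, \mathit{man}) + \cos(x, \mathit{woman}) \,\right]
```

The first term pulls toward royal words, the third toward female words, and the second pushes away from male words; hence the slide's “royal? female?”.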

  35. What does each similarity term mean? • Observe the joint features with explicit representations!
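With sparse explicit vectors this inspection is direct: each dimension is an actual context word, so the contributors to any one similarity term can be read off. A toy sketch (the PPMI values below are made up for illustration):

```python
# Hypothetical sparse PPMI vectors: word -> {context word: weight}.
explicit = {
    "queen": {"crown": 2.1, "her": 1.5, "royal": 1.9, "elizabeth": 2.4},
    "woman": {"her": 1.7, "dress": 1.2},
    "king":  {"crown": 2.0, "his": 1.6, "royal": 1.8},
}

def top_joint_features(w1, w2, k=3):
    """Context features active in BOTH sparse vectors, ranked by how
    much they contribute to the dot product between the two words."""
    u, v = explicit[w1], explicit[w2]
    shared = {c: u[c] * v[c] for c in u.keys() & v.keys()}
    return sorted(shared, key=shared.get, reverse=True)[:k]

print(top_joint_features("queen", "king"))   # ['crown', 'royal']
print(top_joint_features("queen", "woman"))  # ['her']
```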

  36. Can we do better?

  37.–40. Let’s look at some mistakes… (image-only slides)

  41.–46. The Additive Objective
  • Problem: one similarity term might dominate the rest (see the sketch below)
  • This is much more prevalent in the explicit representation
  • It might explain why the explicit representation underperformed
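A numeric sketch of the domination problem (the similarity values are invented): 3CosAdd can prefer a candidate that is merely very close to b over one that is moderately similar to all the right words.

```python
def cos_add(sim_b, sim_a, sim_a_star):
    """The additive analogy objective: cos(x,b) - cos(x,a) + cos(x,a*)."""
    return sim_b - sim_a + sim_a_star

# x1 is balanced: moderately similar to b and to a*.
# x2 is a near-duplicate of b and ignores a* entirely.
x1 = cos_add(sim_b=0.5, sim_a=0.2, sim_a_star=0.5)  # 0.8
x2 = cos_add(sim_b=0.9, sim_a=0.1, sim_a_star=0.1)  # 0.9
print(x1, x2)  # the dominating term wins: x2 beats x1
```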

  47.–49. How can we do better?
  • Instead of adding the similarities, multiply them!
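A sketch of the paper's multiplicative objective (3CosMul), using the same invented candidates as above: multiplication makes each similarity act as a gate, so no single term can dominate. The paper shifts cosines to be nonnegative and adds a small ε to avoid division by zero; ε = 0.001 below.

```python
def cos_mul(sim_b, sim_a, sim_a_star, eps=0.001):
    """3CosMul: cos(x,b) * cos(x,a*) / (cos(x,a) + eps), with all
    similarities assumed shifted to be nonnegative."""
    return sim_b * sim_a_star / (sim_a + eps)

x1 = cos_mul(0.5, 0.2, 0.5)  # ~1.24  (the balanced candidate)
x2 = cos_mul(0.9, 0.1, 0.1)  # ~0.89  (the near-duplicate of b)
print(x1, x2)  # multiplication restores the balance: x1 now wins
```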
