Linguistic Regularities in Sparse and Explicit Word Representations
Omer Levy and Yoav Goldberg
Bar-Ilan University, Israel
Papers in ACL 2014*
* Sampling error: +/- 100%
Questions
• Are analogies unique to neural embeddings? Compare neural embeddings with explicit representations.
• Why does vector arithmetic reveal analogies? Unravel the mystery behind neural embeddings and their “magic”.
Mikolov et al. (2013a,b,c)
• Neural embeddings have interesting geometries
• These patterns capture “relational similarities”
• Can be used to solve analogies: man is to woman as king is to queen (see the sketch below)
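A minimal sketch of how the vector-arithmetic analogy recovery works. The toy vocabulary and random vectors here are assumptions purely for illustration; with real trained embeddings the returned word would be “queen”.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical unit-normalized word vectors; random values stand in
# for trained embeddings, so the output below is only mechanical.
rng = np.random.default_rng(0)
vocab = ["man", "woman", "king", "queen", "apple"]
vectors = {w: normalize(rng.normal(size=50)) for w in vocab}

def analogy(a, a_star, b, vectors):
    # argmax over x of cos(x, b - a + a*), excluding the query words
    target = normalize(vectors[b] - vectors[a] + vectors[a_star])
    scores = {w: float(v @ target) for w, v in vectors.items()
              if w not in (a, a_star, b)}
    return max(scores, key=scores.get)

# "man is to woman as king is to ?"
print(analogy("man", "woman", "king", vectors))
```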
Are analogies unique to neural embeddings?
• Experiment: compare embeddings to explicit representations
• Learn different representations from the same corpus (a sketch of the explicit representation follows)
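A hedged sketch of an explicit (distributional) representation: each word becomes a sparse vector of PPMI-weighted context words. The two-sentence corpus, window size, and probability estimates are simplified assumptions, not the paper's exact preprocessing.

```python
from collections import Counter
import math

corpus = [["the", "king", "rules", "the", "land"],
          ["the", "queen", "rules", "the", "land"]]

window = 2
word_counts, pair_counts = Counter(), Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        word_counts[w] += 1
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                pair_counts[(w, sent[j])] += 1

total_pairs = sum(pair_counts.values())
total_words = sum(word_counts.values())

def ppmi(w, c):
    # Positive pointwise mutual information of word w with context c.
    p_wc = pair_counts[(w, c)] / total_pairs
    p_w = word_counts[w] / total_words
    p_c = word_counts[c] / total_words
    return max(0.0, math.log(p_wc / (p_w * p_c))) if p_wc > 0 else 0.0

# Explicit vector for "king": sparse mapping from context to weight.
king_vec = {c: ppmi("king", c) for (w, c) in pair_counts if w == "king"}
print(king_vec)
```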
Embedding vs. Explicit (Round 1)
• Many analogies recovered by explicit, but many more by embedding
Why does vector arithmetic reveal analogies?
• queen ≈ king − man + woman: is the offset from man to king “royal”? from man to woman “female”?
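The key observation behind this question: when all word vectors are unit-normalized, the norm of king − man + woman is constant over candidates x, so maximizing similarity to the arithmetic result is the same as maximizing a sum of three separate similarity terms:

\[
\arg\max_{x \in V} \cos\big(x,\ \mathit{king} - \mathit{man} + \mathit{woman}\big)
= \arg\max_{x \in V} \big(\cos(x, \mathit{king}) - \cos(x, \mathit{man}) + \cos(x, \mathit{woman})\big)
\]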
What does each similarity term mean?
• Observe the joint features with explicit representations!
The Additive Objective
• Problem: one similarity term might dominate the rest
• Much more prevalent in explicit representations
• Might explain why explicit underperformed (toy illustration below)
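A toy illustration of domination, using the paper's example query “London is to England as Baghdad is to ?” (the paper reports the additive objective answering Mosul instead of Iraq); the cosine values below are made up for exposition:

• Mosul: 0.92 − 0.20 + 0.10 = 0.82 (cos(x, Baghdad) dominates everything else)
• Iraq: 0.60 − 0.25 + 0.45 = 0.80

The additive objective picks Mosul, even though Iraq balances all three terms far better.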
How can we do better?
• Instead of adding similarities, multiply them!
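A minimal sketch of the multiplicative objective (3CosMul) next to the additive one (3CosAdd), reusing the toy numbers above. The small ε and the shift of similarities into a non-negative range follow the paper's description; the numeric values remain illustrative assumptions.

```python
EPS = 0.001  # guards against division by zero

def cos_add(sim_b, sim_a, sim_a_star):
    # 3CosAdd: cos(x, b) - cos(x, a) + cos(x, a*)
    return sim_b - sim_a + sim_a_star

def cos_mul(sim_b, sim_a, sim_a_star):
    # 3CosMul: cos(x, b) * cos(x, a*) / (cos(x, a) + EPS)
    # Assumes similarities were shifted to be non-negative,
    # e.g. from [-1, 1] to [0, 1] via (x + 1) / 2.
    return sim_b * sim_a_star / (sim_a + EPS)

# "London is to England as Baghdad is to ?"
# per candidate x: (cos(x, Baghdad), cos(x, London), cos(x, England))
sims = {"Mosul": (0.92, 0.20, 0.10), "Iraq": (0.60, 0.25, 0.45)}

for name, obj in (("3CosAdd", cos_add), ("3CosMul", cos_mul)):
    best = max(sims, key=lambda w: obj(*sims[w]))
    print(name, "->", best)  # 3CosAdd -> Mosul, 3CosMul -> Iraq
```

Multiplication amplifies differences among small similarities: a candidate that is weakly similar to even one of the query terms is heavily penalized, so no single large term can dominate the way it can in the sum.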