
Hierarchical Verb Direct-Object Selectional Preferences in Lexical Acquisition

This research by Emily Shen and Sushant Prakash models verb direct-object selectional preferences over the WordNet hierarchy: the general noun classes a verb tends to take as its object, which are useful for word sense disambiguation, choosing among parses, and capturing some essence of semantics. The study estimates joint verb-class probabilities and propagates them up the hierarchy for better coverage. Results include the most and least selective verbs and the top noun classes for individual verbs, and word sense disambiguation experiments on WSJ and BLLIP data show the hierarchical model substantially improving precision and F1 over a flat-class baseline. Proposed future work includes feeding disambiguated nouns back into training and modeling class-to-class relationships.


Presentation Transcript


  1. Lexical Acquisition of Verb Direct-Object Selectional Preferences Based on the WordNet Hierarchy
  Emily Shen and Sushant Prakash

  2. Selectional Preferences: V-DO
  • Eat a carrot, drive a truck: natural
  • Eat a truck, drive a carrot: anomalous
  • Goal: find the general classes that a verb takes as arguments
  • Useful for word sense disambiguation, choosing among parses, capturing some essence of semantics, etc.

  3. Strategy
  • P(v,c) = \frac{1}{N} \sum_{n \in words(c)} \frac{1}{|classes(n)|} C(v,n)
  • S(v) = D(P(C|v) \,\|\, P(C)) = \sum_c P(c|v) \log \frac{P(c|v)}{P(c)}
  • A(v,c) = \frac{P(c|v) \log [P(c|v)/P(c)]}{S(v)}
  • A(v,n) = \max_{c \in classes(n)} A(v,c)
  • But this assumes a flat set of classes; we wanted to exploit the hierarchy:
  • Propagate probability counts to hypernyms: P_{mod}(v,c) = P_{orig}(v,c) + \sum_{c_k \in des(c)} P_{orig}(v, c_k)
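For concreteness, here is a minimal Python sketch of these formulas on toy data. The counts, class memberships, and mini-hierarchy below are invented for illustration, not the paper's data, and the marginals P(c) = \sum_v P(v,c) and P(c|v) = P(v,c)/P(v) are the standard choices the slide leaves implicit.

```python
import math

# Toy stand-ins for the model's inputs (hypothetical, not from the study):
# C[(v, n)]: corpus count of verb v taking noun n as direct object
# classes_of[n]: the WordNet classes noun n belongs to
C = {("eat", "carrot"): 10, ("eat", "apple"): 8,
     ("drive", "truck"): 9, ("drive", "car"): 12}
classes_of = {"carrot": {"vegetable"}, "apple": {"fruit"},
              "truck": {"vehicle"}, "car": {"vehicle"}}

N = sum(C.values())                                  # total (v, n) observations
verbs = {v for v, _ in C}
flat_classes = {c for cs in classes_of.values() for c in cs}

def P(v, c):
    """P(v,c) = (1/N) * sum over n in words(c) of C(v,n) / |classes(n)|."""
    return sum(C.get((v, n), 0) / len(cs)
               for n, cs in classes_of.items() if c in cs) / N

def P_class(c):
    """Marginal P(c), summing the joint over verbs (implicit on the slide)."""
    return sum(P(v, c) for v in verbs)

def P_class_given_verb(c, v):
    """P(c|v) = P(v,c) / sum over c' of P(v,c')."""
    return P(v, c) / sum(P(v, c2) for c2 in flat_classes)

def S(v):
    """Selectional preference strength: D(P(C|v) || P(C))."""
    return sum(p * math.log(p / P_class(c))
               for c in flat_classes
               if (p := P_class_given_verb(c, v)) > 0)

def A(v, c):
    """Selectional association: class c's share of S(v)."""
    p = P_class_given_verb(c, v)
    return p * math.log(p / P_class(c)) / S(v) if p > 0 else 0.0

def A_noun(v, n):
    """A(v,n) = max over the noun's classes of A(v,c)."""
    return max(A(v, c) for c in classes_of[n])

# Hierarchy step: propagate joint probability mass up to hypernyms.
# hyponyms[c] gives c's direct children in a toy slice of WordNet; the slide
# then substitutes Pmod for P in the S and A computations (normalization
# details are left open there).
hyponyms = {"food": {"vegetable", "fruit"}, "entity": {"food", "vehicle"}}

def descendants(c):
    """All classes strictly below c in the hierarchy."""
    out = set()
    for child in hyponyms.get(c, ()):
        out |= {child} | descendants(child)
    return out

def P_mod(v, c):
    """Pmod(v,c) = Porig(v,c) + sum over descendants c_k of Porig(v,c_k)."""
    return P(v, c) + sum(P(v, ck) for ck in descendants(c))

print(round(S("eat"), 3))               # 0.773 on the toy counts
print(round(A_noun("eat", "carrot"), 3))  # association via the "vegetable" class
print(round(P_mod("eat", "food"), 3))   # mass from vegetable + fruit flows up
```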

  4. This may seem a little screwy…
  • No discount factor for each step up
  • No splitting the count for branches
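A hypothetical variant addressing the first point, reusing P and hyponyms from the sketch under slide 3: damp the mass arriving from each level below by a factor 0 < alpha < 1. The paper does not specify this; alpha is an assumed knob, and the second point could similarly be handled by dividing each child's contribution by the branching factor.

```python
def P_mod_discounted(v, c, alpha=0.5):
    # Hypothetical fix: each extra step up the hierarchy multiplies the
    # propagated probability by another factor of alpha.
    return P(v, c) + alpha * sum(P_mod_discounted(v, ck, alpha)
                                 for ck in hyponyms.get(c, ()))
```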

  5. Results
  • Most selective verbs: discipline, sigh, slice, shoot down, elongate
  • Least selective verbs: make, have, see, get, include
  • Top noun classes:
    plant: plant, explosive device
    transplant: kidney, internal organ, body part
  • Tested WSD on WSJ and BLLIP:
    Random baseline: 26.39% P, 100% R, 41.76% F1
    Flat WSJ: 28.39% P, 71.67% R, 40.67% F1
    Hyper WSJ: 51.44% P, 65.21% R, 57.51% F1
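The F1 scores above are the harmonic mean of precision and recall; a quick check in Python reproduces them from the reported P and R:

```python
def f1(p, r):
    """F1 = 2PR / (P + R), the harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

print(f"{f1(0.2639, 1.0):.4f}")     # 0.4176: random baseline
print(f"{f1(0.2839, 0.7167):.4f}")  # 0.4067: flat classes on WSJ
print(f"{f1(0.5144, 0.6521):.4f}")  # 0.5751: hierarchical model on WSJ
```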

  6. Future Work
  • Feed disambiguated nouns into the model for training
  • Model class-to-class relationships
  • Also take the subject into account
