Lexical Acquisition of Verb Direct-Object Selectional Preferences Based on the WordNet Hierarchy
Emily Shen and Sushant Prakash
Selectional Preferences: V-DO
• Eat a carrot (natural)
• Drive a truck (natural)
• Eat a truck (semantically odd)
• Drive a carrot (semantically odd)
• Goal: find the general classes of nouns that a verb takes as arguments
• Useful for word sense disambiguation, choosing among parses, capturing some essence of semantics, etc.
Strategy
• $P(v,c) = \frac{1}{N} \sum_{n \in \mathrm{words}(c)} \frac{1}{|\mathrm{classes}(n)|}\, C(v,n)$
• $S(v) = D\big(P(C \mid v) \,\|\, P(C)\big) = \sum_c P(c \mid v) \log \frac{P(c \mid v)}{P(c)}$
• $A(v,c) = \dfrac{P(c \mid v) \log \frac{P(c \mid v)}{P(c)}}{S(v)}$
• $A(v,n) = \max_{c \in \mathrm{classes}(n)} A(v,c)$
• But this assumes a flat set of classes – we wanted to exploit the hierarchy:
• Propagate probability counts to hypernyms:
• $P_{\mathrm{mod}}(v,c) = P_{\mathrm{orig}}(v,c) + \sum_{c_k \in \mathrm{des}(c)} P_{\mathrm{orig}}(v,c_k)$
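A minimal sketch of this pipeline, assuming NLTK's WordNet interface is available and `pairs` is a list of (verb, direct-object) tuples already extracted from parsed text; normalization details (the $1/N$ factor) are simplified, and edge cases such as unseen verbs are not guarded.

```python
import math
from collections import defaultdict

from nltk.corpus import wordnet as wn

def class_counts(pairs):
    """Distribute each verb-noun count over the noun's senses
    (the 1/|classes(n)| factor), then propagate each class's count
    up to all of its hypernyms (the P_mod step)."""
    counts = defaultdict(float)          # (verb, synset) -> fractional count
    for verb, noun in pairs:
        senses = wn.synsets(noun, pos=wn.NOUN)
        if not senses:
            continue
        w = 1.0 / len(senses)
        for s in senses:
            counts[(verb, s)] += w
            # propagate to every ancestor: no discount, no branch splitting
            for anc in s.closure(lambda x: x.hypernyms()):
                counts[(verb, anc)] += w
    return counts

def association(counts, verb):
    """S(v) as the KL divergence of P(C|v) from P(C), plus the
    normalized per-class association scores A(v,c)."""
    total = sum(counts.values())
    v_total = sum(n for (v, _), n in counts.items() if v == verb)
    p_c = defaultdict(float)
    for (_, c), n in counts.items():
        p_c[c] += n / total              # marginal P(c)
    terms = {}
    for (v, c), n in counts.items():
        if v == verb:
            p_cv = n / v_total           # P(c|v)
            terms[c] = p_cv * math.log(p_cv / p_c[c])
    strength = sum(terms.values())       # S(v)
    return {c: t / strength for c, t in terms.items()}, strength
```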
This may seem a little screwy…
• No discount factor for each step up the hierarchy
• No splitting of the count among multiple hypernym branches
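For contrast, a hedged sketch of what a discounted, branch-splitting propagation could look like; the `decay` and `split` parameters are hypothetical and not part of the model above, which effectively uses decay = 1 with no splitting.

```python
def propagate(counts, verb, synset, w, decay=1.0, split=False):
    """Recursively add weight w to synset and a (possibly discounted,
    possibly split) share to each hypernym. Note: unlike closure() in
    the earlier sketch, this counts an ancestor once per path to it."""
    counts[(verb, synset)] += w
    parents = synset.hypernyms()
    if not parents:
        return
    share = w * decay / (len(parents) if split else 1)
    for p in parents:
        propagate(counts, verb, p, share, decay=decay, split=split)
```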
Results
• Most selective verbs: discipline, sigh, slice, shoot down, elongate
• Least selective verbs: make, have, see, get, include
• Top noun classes per verb:
  • plant – plant, explosive device
  • transplant – kidney, internal organ, body part
• Tested WSD on WSJ and BLLIP:
  • Random baseline: 26.39% P, 100% R, 41.76% F1
  • Flat WSJ: 28.39% P, 71.67% R, 40.67% F1
  • Hyper WSJ: 51.44% P, 65.21% R, 57.51% F1
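A hedged sketch of how the WSD test could use these scores: for each verb-object pair, pick the object sense whose class, anywhere on its hypernym chain, has the highest $A(v,c)$. This reconstructs only the scoring rule, not the WSJ/BLLIP evaluation setup; `disambiguate` and its return convention are illustrative names.

```python
def disambiguate(assoc, noun):
    """Pick the WordNet sense of `noun` maximizing A(v,c) over the
    sense itself and all of its hypernyms; `assoc` is the A(v,.)
    mapping returned by association() above. Returns None when no
    class of any sense was observed with the verb (lowering recall)."""
    best_sense, best_score = None, float("-inf")
    for s in wn.synsets(noun, pos=wn.NOUN):
        chain = [s, *s.closure(lambda x: x.hypernyms())]
        score = max((assoc[c] for c in chain if c in assoc),
                    default=float("-inf"))
        if score > best_score:
            best_sense, best_score = s, score
    return best_sense

# usage, given `pairs` as above:
# assoc, _ = association(class_counts(pairs), "eat")
# print(disambiguate(assoc, "carrot"))   # ideally a food-related sense
```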
Future Work
• Feed disambiguated nouns back into the model for training
• Model class-to-class relationships
• Also take the subject into account