The Impact of Grammar Enhancement on Semantic Resources Induction Luca Dini (dini@celi.it) Giampaolo Mazzini (mazzini@celi.it)
Objectives • Bridging from dependency parsing to knowledge representation; • Need of an intermediate level • Semantic Role Labelling • Easily configurable; • Rule based; • Moderately learning based (MLN) • Production of a reasonably large repository of lexical units with assigned frames and mappings to syntax. • Objective of this presentation: to measure the impact of grammar enhancement on the derivation of semantic resources.
Plan of This Talk • Architecture and Methodology; • First Evaluation; • The Effect of Grammar Improvement;
Architecture • Source annotation: annotated example with <LU, FRAME>; • Machine translation: source example → target example; • Parsing of both the source and the target example; • Target LU identification in the parsed target example; • FE alignment between source and target; • Dependency extraction → <tLU, FRAME, VALENCE>.
Example • …foreign policy dispute → …disputa di politica straniera • <dispute.n, Quarreling> → <disputa.n, Quarreling, <Issue, Prep[di]>>
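The induced entries can be pictured as simple records pairing a lexical unit with its frame and, on the target side, a valence pattern. A minimal sketch (the tuple-based representation and field layout are ours, not the authors'):

```python
# Illustrative representation of the induced records; the layout is an
# assumption, not the authors' actual data structure.
source_entry = ("dispute.n", "Quarreling")
target_entry = ("disputa.n", "Quarreling", [("Issue", "Prep[di]")])

# A target entry pairs the translated LU with its frame and a valence:
# each frame element together with its syntactic realization.
lu, frame, valence = target_entry
print(lu, frame, valence[0])
```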
Ingredients • Bilingual MT system (Systran) • Comparable parsers for Italian and English (XIP, Xerox Incremental Parser) • Lexicon look-up module (350,000 it <-> en entries) • Word sense disambiguation and clustering • Semantic vectors for source and target
Challenges • Ambiguity of translation: • Write.v -> {scrivere, fare lo scrittore, scolpire, vergare, documentare, comporre, scrivere una lettera, cantare, trascrivere}. • Lack of translation. • Identification of the semantic head of the Frame Element. • Grammatical transformations. • Grammar errors.
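Translation ambiguity of the kind above can be resolved with the semantic vectors listed among the ingredients, by picking the candidate closest to the source word. A toy sketch with invented vectors (the vectors and the cosine-based selection are illustrative assumptions, not the system's actual disambiguation module):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented toy vectors for the source word and two candidate translations.
source_vec = [1.0, 0.2, 0.0]
candidates = {"scrivere": [0.9, 0.3, 0.1], "cantare": [0.1, 0.9, 0.4]}

# Choose the translation whose vector is most similar to the source.
best = max(candidates, key=lambda w: cosine(source_vec, candidates[w]))
print(best)  # → scrivere
```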
Evaluation (1): SRL (1) • Manual annotation of the TUT corpus (Lesmo et al., 2002): • 1000 sentences • Corpus annotated only with frame-bearing induced LUs; • Selection of the correct frame (if any) • FE annotation of all dependents • Export in CoNLL format
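CoNLL-style export is a one-token-per-line, tab-separated format with a blank line between sentences. A minimal writer sketch (the column layout here is illustrative, not the authors' exact schema):

```python
# Minimal CoNLL-style export: one token per line, tab-separated columns,
# blank line terminating the sentence.  Column choice is an assumption.
tokens = [
    ("La", "O"), ("disputa", "Quarreling"), ("continua", "O"), (".", "O"),
]
lines = [f"{i + 1}\t{form}\t{label}" for i, (form, label) in enumerate(tokens)]
conll = "\n".join(lines) + "\n\n"
print(conll)
```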
Evaluation (1): SRL (2) • Second step: "parse" the corpus for SRL: • No real parser; • Very simple algorithm for assignment; • Random choice in case of ambiguity; • Results, according to the F-measure metrics of Toutanova et al. (2008): • precision of 0.53, recall of 0.33 and a consequent F-measure of 0.41. • Compares poorly with state-of-the-art SRL.
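The reported 0.41 is consistent with the standard F-measure, i.e. the harmonic mean of precision and recall:

```python
# Harmonic mean of the reported precision and recall.
precision, recall = 0.53, 0.33
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 2))  # → 0.41
```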
Evaluation (2) • "Standard" corpus annotation: • 20 sentences × 20 lexical units (no ambiguity). • Creation of a DB of <LUnit, frame, Valence> triples. • Comparison with the induced resources based on standard precision and recall metrics. • A hit counts as positive if part of speech, grammatical function and frame element all match. • A "boost" was assigned on the basis of the importance of valence population (based on both the number and variety of realizations). • Global precision and recall are the arithmetic mean of all weights: • Precision: 0.65 • Recall: 0.41
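One possible reading of the boosted scoring above: each induced valence receives a weight in [0, 1] (zero on a failed POS/function/FE match, boosted toward one for well-populated valences), and the global figure is the arithmetic mean of those weights. The weights below are invented for illustration:

```python
# Hypothetical sketch of the boosted scoring: 0.0 marks a failed match,
# positive values are boost weights for matched, well-populated valences.
# These numbers are invented, not the paper's data.
weights = [1.0, 0.9, 0.0, 0.7, 0.0, 1.0]
precision = sum(weights) / len(weights)
print(round(precision, 2))  # → 0.6
```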
Errors • No translation for a lexical unit (7,815); • Absence of examples in the source FrameNet (4,922); • No translated example contains the candidate translation(s) of the lexical unit (1,736); • No head could be identified for the English frame element realization (parse error or difficult structure, e.g. coordination) (6,191); • The translation of the semantic head of the frame element or of the frame-bearing head could not be matched in the Italian example (99,808); • The semantic heads of both the lexical unit and the frame element are found in the Italian example, but the parser could not find any dependency between them (94,004).
The Enhancement Phase • Improvements concerned only one side of the parsing mechanism, i.e. the Italian dependency grammar; • Development: • Using the XIP IDE (Mokhtar et al., 2001). • The development period lasted about six months (Testa et al., 2009). • It was based on iterative verification on different corpora (TUT/ISST). • Improvement in LAS: 40% -> 70%
Consequences • The architecture was kept exactly the same and the source code "frozen" during the six-month period. • Results
Comments • Both evaluation types show an increase in precision of about 6%; • Surprisingly, recall stays almost constant in ev1, while it increases considerably in ev2. • Possible explanations: • Unmapped phenomena; • "Random" effect due to the small evaluation set.
Issues & Conclusions • Was it worth six months' labour? • Probably not, if grammar enhancement is aimed solely at the acquisition of the resources. • Probably yes, if it is independently motivated. • In general, evaluating the impact of lower-level modules on high-level applications is crucial for strategic choices, and a rather neglected aspect. • We need to understand the correct trade-off. • Convergence: IFRAME project (http://sag.art.uniroma2.it/iframe/doku.php)