Error Analysis of Two Types of Grammar for the Purpose of Automatic Rule Refinement
Ariadna Font Llitjós, Katharina Probst, Jaime Carbonell
Language Technologies Institute, Carnegie Mellon University
AMTA 2004
Outline
• Automatic Rule Refinement
• AVENUE and resource-poor scenarios
• Experiment
  • Data (eng2spa)
  • Two types of grammar
  • Evaluation results
• Error analysis
• RR required for each type
• Conclusions and future work
Motivation for Automatic RR
General:
• MT output still requires post-editing.
• Current systems do not recycle post-editing efforts back into the system, beyond adding them as new training data.
Within AVENUE:
• Resource-poor scenarios: no manual grammar, or only a very small initial grammar.
• Need to validate the elicitation corpus and the automatically learned translation rules.
AVENUE and resource-poor scenarios
What do we usually have available in resource-poor scenarios? Bilingual users.
• No electronic data available (often a spoken tradition), so no SMT or EBMT.
• No computational linguists available to write a grammar.
So how can we even start to think about MT? That is what AVENUE is all about:
Elicitation Corpus + Automatic Rule Learning + Rule Refinement
AVENUE overview
(Architecture diagram: the Elicitation Tool and Elicitation Corpus yield a word-aligned parallel corpus; a morphological analyzer and lexical resources feed the Learning Module, which, alongside handcrafted rules, produces transfer rules; the Run-Time Transfer System builds a translation lattice; the Rule Refinement Module, driven by the Translation Correction Tool, feeds corrections back into the transfer rules.)
Automatic and Interactive RLR
• 1st step: from aligned sentence pairs (SLSentence1 – TLSentence1, SLSentence2 – TLSentence2, ...), the system automatically learns rule R.
• 2nd step: R translates SLS3 into TLS3; the user's minimal correction TLS3' is passed to the RR module, which outputs R' (R refined), so that SLS3 now translates to TLS3'. (A sketch of this loop follows.)
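A minimal sketch of the two-step loop, with the AVENUE components passed in as placeholder functions; the names learn_rule, translate, get_user_correction, and refine_rule are ours, not AVENUE's:

    # Hypothetical sketch of the learn-then-refine loop described above.
    def learn_and_refine(pairs, sls3, learn_rule, translate,
                         get_user_correction, refine_rule):
        r = learn_rule(pairs)                         # 1st step: learn R from SL-TL pairs
        tls3 = translate(r, sls3)                     # 2nd step: translate a new sentence
        tls3_prime = get_user_correction(sls3, tls3)  # minimal post-edit TLS3'
        r_prime = refine_rule(r, sls3, tls3, tls3_prime)  # RR module yields R'
        return r_prime                                # SLS3 should now map to TLS3'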
Interactive elicitation of MT errors
Assumption: non-expert bilingual users can reliably detect and minimally correct MT errors, given:
• the SL sentence (I saw you)
• up to 5 TL sentences (Yo vi tú, ...)
• word-to-word alignments (I–yo, saw–vi, you–tú)
• (context)
using an online GUI: the Translation Correction Tool (TCTool).
Goal: simplify the MT correction task as much as possible.
User studies: 90% error detection accuracy and 73% error classification accuracy [LREC 2004]
TCTool v0.1
Actions (see the sketch after this list):
• Add a word
• Delete a word
• Modify a word
• Change word order
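Purely as an illustration (this is not the TCTool's actual data model), the four actions could be recorded as edit records like these:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Action(Enum):          # the four TCTool actions
        ADD = "add word"
        DELETE = "delete word"
        MODIFY = "modify word"
        REORDER = "change word order"

    @dataclass
    class Edit:
        action: Action
        word: Optional[str] = None      # word added, deleted, or its new form
        old_word: Optional[str] = None  # previous form (MODIFY only)
        from_pos: Optional[int] = None  # original index in the TL sentence
        to_pos: Optional[int] = None    # new index (ADD and REORDER)

    # "Yo vi tú" -> "Yo te vi": the object pronoun is corrected and moved.
    log = [Edit(Action.MODIFY, word="te", old_word="tú", from_pos=2),
           Edit(Action.REORDER, word="te", from_pos=2, to_pos=1)]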
RR Framework
Find the best RR operations given:
• a grammar (G),
• a lexicon (L),
• a (set of) source language sentence(s) (SL),
• a (set of) target language sentence(s) (TL),
• its parse tree (P), and
• a minimal correction of TL (TL'), such that TQ2 > TQ1.
This can also be expressed as: max TQ(TL | TL', P, SL, RR(G, L))
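In LaTeX notation, the same objective (with TQ_1 and TQ_2 the translation quality before and after refinement):

    \max_{RR}\; TQ\bigl(TL \mid TL',\, P,\, SL,\, RR(G, L)\bigr)
    \quad \text{such that } TQ_2 > TQ_1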
Types of RR operations
Grammar:
• Bifurcate: R0 → R0 + R1 [= R0' + constr]; Cov[R0] → Cov[R0, R1]
• Refine: R0 → R1 [= R0 + constr]; Cov[R0] → Cov[R1]
• Bifurcate and refine: R0 → R1 [= R0 + constr c = −] + R2 [= R0' + constr c = +]; Cov[R0] → Cov[R1, R2]
Lexicon:
• Lex0 → Lex0 + Lex1 [= Lex0 + constr]
• Lex0 → Lex1 [= Lex0 + constr]
• Lex0 → Lex1 [= Lex0 + TLword]
• → Lex1 (adding a lexical item)
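A toy sketch of the two grammar operations under an assumed set-of-constraints rule representation (the real rules are feature-structure transfer rules; Rule, refine, and bifurcate here are ours):

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Rule:
        name: str
        constraints: frozenset = field(default_factory=frozenset)

    def refine(r0, constr):
        # R0 -> R1 [= R0 + constr]: coverage shrinks from Cov[R0] to Cov[R1].
        return Rule(r0.name + "'", r0.constraints | {constr})

    def bifurcate(grammar, r0, constr):
        # R0 -> R0 + R1: keep R0 and add a constrained copy for the exception,
        # so coverage grows from Cov[R0] to Cov[R0, R1].
        return grammar + [refine(r0, constr)]

    g = [Rule("NP,1")]
    g = bifurcate(g, g[0], ("agr num", "pl"))  # hypothetical triggering constraint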
Data: English - Spanish
Training:
• First 200 sentences from the AVENUE Elicitation Corpus
• Lexicon: extracted semi-automatically from the first 400 sentences (442 entries)
Test:
• 32 sentences manually selected from the next 200 sentences in the EC, to showcase a variety of MT errors
Manual grammar
• 12 rules (2 S, 7 NP, 3 VP)
• Produces 1.6 different translations on average
Learned grammar + feature constraints
• 316 rules (194 S, 43 NP, 78 VP, 1 PP)
• Decoder emulated by reordering 3 rules
• Produces 18.6 different translations on average
Comparing grammar output: results
• Manual evaluation: (results table not preserved)
• Automatic MT evaluation: (results table not preserved)
Error analysis
Most of the errors produced by the manual grammar fall into these categories (illustrated below):
• lack of subject–predicate agreement
• wrong word order of object pronouns (clitics)
• wrong preposition
• wrong form (case)
• out-of-vocabulary (OOV) words
On top of these, the learned grammar output exhibited errors of the following types:
• lack of agreement constraints
• missing preposition
• over-generalization
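A few illustrative wrong-to-corrected Spanish pairs for three of the manual-grammar categories; apart from "Yo vi tú" (the TCTool example above), these sentences are invented for illustration and do not come from the paper:

    # Invented illustrations (not from the paper) of three error categories.
    examples = {
        "subj-pred agreement": ("*Juan y María habla", "Juan y María hablan"),
        "clitic word order":   ("*Yo vi tú",           "Yo te vi"),
        "wrong preposition":   ("*depender en",        "depender de"),
    }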
Examples
• Same output (both good)
• Manual grammar better
• Learned grammar better
• Different output (both bad)
Types of RR required
Manual grammar: bifurcate a rule to encode an exception.
• R0 → R0 + R1 [= R0' + constr]; Cov[R0] → Cov[R0, R1]
• R0 → R1 [= R0 + constr c = −] + R2 [= R0' + constr c = +]; Cov[R0] → Cov[R1, R2]
Learned grammar: adjust feature constraints, such as agreement (see the rule sketch below).
• R0 → R1 [= R0 +|− constr]; Cov[R0] → Cov[R1]
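A sketch of the learned-grammar case, adding determiner–noun agreement constraints to an NP rule; this is written in the general style of AVENUE transfer rules, but the rule itself is invented for illustration:

    ; R0: NP rule with no agreement constraints (over-generates)
    {NP,1}
    NP::NP [DET N] -> [DET N]
    ((X1::Y1) (X2::Y2))

    ; R1 = R0 + constr: enforce number and gender agreement on the TL side
    {NP,1'}
    NP::NP [DET N] -> [DET N]
    ((X1::Y1) (X2::Y2)
     ((Y1 num) = (Y2 num))
     ((Y1 gen) = (Y2 gen)))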
Conclusions
• TCTool + RR can improve both hand-crafted and automatically learned grammars.
• In the current experiment, the MT errors differ almost 50% of the time, depending on the type of grammar.
• The manual grammar will need to be refined to encode exceptions, whereas the learned grammar will need to be refined to achieve the right level of generalization.
• We expect RR to give the most leverage when combined with the learned grammar.
Future Work
• An experiment where user corrections are used both as new training examples for rule learning and to refine the existing grammar with the RR module.
• Investigate using reference translations to refine MT grammars automatically; this is much harder, since reference translations are not minimal post-edits.
Questions? Thank you!
RR Framework
• Types of operations: bifurcate, make rules more specific/general, add blocking constraints, etc.
• Formalizing error information (clue words)
• Finding triggering features (sketched below)
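One plausible reading of "finding triggering features", sketched under an assumed dict-based feature representation (not AVENUE's actual code): the triggering features are those whose values differ between the original TL word and the user's correction.

    def triggering_features(original, corrected):
        # Keep the features whose values differ between the original TL word
        # and the user's corrected word, e.g. {'num': ('sg', 'pl')}.
        return {f: (original[f], corrected.get(f))
                for f in original if original[f] != corrected.get(f)}

    # "niño" corrected to "niños": number is the triggering feature.
    print(triggering_features({"pos": "N", "num": "sg"},
                              {"pos": "N", "num": "pl"}))  # {'num': ('sg', 'pl')}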