
Nobody is Perfect: ATR’s Hybrid Approach to Spoken Language Translation


Presentation Transcript


  1. Nobody is Perfect: ATR’s Hybrid Approach to Spoken Language Translation. October 25, 2005. Michael Paul, Takao Doi, Youngsook Hwang, Kenji Imamura, Hideo Okuma, Eiichiro Sumita. ATR Spoken Language Communication Research Laboratories, Kyoto, Japan.

  2. Translation Output Examples (Japanese-to-English)

  3. Translation Output Examples (Japanese-to-English)

  4. Translation Output Examples (Japanese-to-English)

  5. Select the Best Translation (Japanese-to-English)

  6. Talk Outline
     - ATR’s hybrid approach to speech translation
       - C3 (Corpus Centered Computation) project
       - MT engines
       - method to select the best translation
     - application to the IWSLT05 translation task
       - goals
       - track participation
       - discussion of evaluation results
     - conclusion

  7. C3 = C-cube (Corpus Centered Computation)
     C3 places corpora at the center of translation technology:
     - translation knowledge is extracted from corpora
     - translation quality is improved by referring to corpora
     - selection of the best translation is based on corpora
     Example-based Machine Translation (EBMT):
     - uses corpora directly
     - retrieves translation examples that closely match the input
     - adjusts the examples to obtain the translation (see the retrieval sketch below)
     Statistical Machine Translation (SMT):
     - learns statistical models for language and translation from corpora and dictionaries
     - searches for the best translation at run-time according to its models
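As an illustration of the EBMT idea, here is a minimal sketch of nearest-example retrieval. The example pairs, the romanized Japanese strings, and the edit-distance matcher are assumptions for illustration only; ATR's actual engines use much richer matching (chunks, thesaurus-based generalization) and adapt the retrieved example rather than returning it verbatim.

```python
# Minimal EBMT retrieval sketch (illustrative only, not ATR's code):
# return the target side of the stored example closest to the input.
from difflib import SequenceMatcher

# Hypothetical Japanese-English example pairs (romanized for readability).
EXAMPLES = [
    ("mooningu kooru o onegai shimasu", "could you give me a wake-up call"),
    ("chekku in o onegai shimasu", "i would like to check in"),
]

def ebmt_translate(source: str) -> str:
    """Retrieve the closest source-side example and return its translation."""
    best = max(EXAMPLES,
               key=lambda ex: SequenceMatcher(None, source, ex[0]).ratio())
    return best[1]

print(ebmt_translate("mooningu kooru o onegai dekimasu ka"))
# -> "could you give me a wake-up call"
```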

  8. System Outline  [Diagram: the input is preprocessed by a tagger, chunker, and parser, then translated by the element engines MT1 ... MTm into hypotheses Hypo1 ... Hypom; the SELECTOR picks the output using the statistical models TM·LM1 ... TM·LMn. Resources: statistical models, translation examples, dictionary, thesaurus, monolingual corpus, bilingual corpus. A pipeline sketch follows below.]
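A minimal sketch of the pipeline in the diagram, under the assumption that the preprocessing step, the element engines, and the selector are plain callables; all names here are illustrative, not ATR's code.

```python
# Illustrative multi-engine pipeline for the system outline above.
def translate(source, preprocess, engines, selector):
    analyzed = preprocess(source)                  # tagger / chunker / parser
    hypotheses = [mt(analyzed) for mt in engines]  # MT1 ... MTm -> Hypo1 ... Hypom
    return selector(hypotheses)                    # statistical selection
```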

  9. Element MT Engines

  10. Features of Element MT Engines  [Table comparing the element MT engines by resources, translation unit, coverage, quality, and speed.]

  11. Selection of the Best Translation
     - calculate scores based on language and translation models
     - apply a multiple comparison test
     - check the significance of score differences
     [Diagram: the parallel text corpus is split into multiple subsets, on which n pairs of translation and language models (TM1/LM1 ... TMn/LMn) are trained; each engine output (outputa from MTa ... outputz from MTz) is scored with TMi·LMi, and the multiple comparison test selects the best translation candidate as the final output. A scoring sketch follows below.]
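A sketch of the scoring step just described, assuming hypothetical model objects with a log-probability interface (tm.logprob and lm.logprob are not real APIs). Because the n TM/LM pairs are trained on different corpus subsets, each hypothesis receives n independent scores.

```python
# Scoring sketch (hypothetical interfaces): each hypothesis gets one
# TM_i*LM_i score per model pair, computed here in log space so the
# product of probabilities becomes a sum of log-probabilities.
def score_hypotheses(source, hypotheses, tm_lm_pairs):
    """Return scores[k] = the n statistical scores of hypothesis k."""
    return [
        [tm.logprob(source, hyp) + lm.logprob(hyp) for tm, lm in tm_lm_pairs]
        for hyp in hypotheses
    ]
```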

  12. Selection of the Best Translation
     1. determine the priority order of the element MT engines: translate the development set and evaluate the MT outputs (WER)
     2. calculate and assign multiple statistical scores (TMi·LMi, 1 <= i <= n) to each translation hypothesis of the given test sentence
     3. apply a pair-wise comparison test (Kruskal-Wallis test) to check whether the MT output score of the first engine is significantly better than that of the second engine
     4. if a significantly better MT output is found, use it for the comparison with the remaining MT outputs; otherwise, select the best MT output according to the priority order
     5. continue the significance test for the remaining MT engines and output the selected translation (see the sketch below)
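A minimal sketch of this selection loop, assuming scores[k] holds the n statistical scores of engine k's hypothesis (as in the scoring sketch above) and priority lists engine indices from best to worst development-set WER. The slide names the Kruskal-Wallis test, which scipy provides as scipy.stats.kruskal; using the median to decide which of two significantly different score samples is better is our assumption, since the slide does not spell out the direction check.

```python
from statistics import median
from scipy.stats import kruskal  # Kruskal-Wallis test, as named on the slide

def select_best(scores, priority, alpha=0.05):
    """Return the index of the engine whose hypothesis is selected."""
    best = priority[0]                    # start with the top-priority engine
    for k in priority[1:]:
        _, pvalue = kruskal(scores[best], scores[k])
        # Switch engines only if the challenger's scores are significantly
        # better; on a non-significant difference the priority order wins.
        if pvalue < alpha and median(scores[k]) > median(scores[best]):
            best = k
    return best
```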

  13. Talk Outline
     - ATR’s hybrid approach to speech translation
       - C3 (Corpus Centered Computation)
       - MT engines
       - method to select the best translation
     - application to the IWSLT05 translation task
       - goals
       - track participation
       - discussion of evaluation results
     - conclusion

  14. Our Goals for IWSLT 2005

  15. Track Participation

  16. Priority Order of Element MT Engines
     - large differences between the languages and data tracks make the selection of an optimal combination difficult
     - the highest priority was given to EM; the order of the remaining MT engines was optimized on the development set

  17. Evaluation Results (official run submissions)
     - better performance for the JE data tracks than for the CE data tracks
     - large gain for JE-T (vs. JE-S) due to word normalization
     - side-effects of the NLP tools for CE-T

  18. Effects of Training Data Size (Japanese-to-English)  [Chart: WER of JE-BEST-SINGLE vs. JE-SELECTOR at training sizes of 20K, 170K, and 540K sentence pairs; WER drops as the training data grows, with JE-SELECTOR below JE-BEST-SINGLE throughout.]

  19. Effects of Training Data Size (Chinese-to-English)  [Chart: WER of the best single engine vs. the SELECTOR at training sizes of 20K, 170K, and 540K sentence pairs.]

  20. Effects of NLP Tools (3MT = SAT, PBHMTM, EM)

  21. Effects of NLP Tools
     - comparison of JE-S vs. JE-T and CE-S vs. CE-T using the three element MT engines of the Supplied Track (SAT, PBHMTM, EM)
     - moderate improvement of 3.5% in WER for JE
     - degradation in performance for CE due to word segmentation differences and the lower coverage of our in-house tagging tool

  22. Effects of Multi-Engine Approach
     - the SMT engines outperformed the EBMT engines
     - the best-performing system for JE is PBHMTM

  23. Effects of Multi-Engine Approach
     - SELECTOR outperforms all element MT engines: a 5% gain in WER for JE-C and up to 10% for JE-T
     - SELECTOR does not yet tap the full potential of the element MT engines

  24. Distribution of Selected JE Hypotheses
     - SELECTOR is biased toward the SMT engines
     - features beyond the statistical TM·LM scores are required to improve system performance

  25. Upper Boundary (Japanese-to-English)  [Chart: WER of JE-BEST-SINGLE, JE-SELECTOR, and JE-ORACLE at training sizes of 20K, 170K, and 540K sentence pairs; the oracle selection stays well below the SELECTOR at every size.]

  26. Upper Boundary (Chinese-to-English)  [Chart: WER of the best single engine, the SELECTOR, and the oracle selection at training sizes of 20K, 170K, and 540K sentence pairs.]

  27. Lessons learned from IWSLT 2005

  28. Conclusion
     - the proposed hybrid approach was successful on the IWSLT05 translation task
     - the proposed selection method outperformed all element MT engines, gaining 4-5% in WER over the best single MT engine
     - the SMT-based selection among multiple MT outputs underachieved its task
     Future Work:
     - incorporate additional features besides the utilized statistical model scores into the selection process in order to tap the full potential of the element MT engines
     - improve the performance of the element MT engines in order to raise the upper boundary
