
Handling Uncertain Observations in Unsupervised Topic-Mixture Language Model Adaptation

Ekapol Chuangsuwanich1, Shinji Watanabe2, Takaaki Hori2, Tomoharu Iwata2, James Glass1.


Presentation Transcript


  1. Handling Uncertain Observations in Unsupervised Topic-Mixture Language Model Adaptation Ekapol Chuangsuwanich1, Shinji Watanabe2, Takaaki Hori2, Tomoharu Iwata2, James Glass1 1MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA 2NTT Communication Science Laboratories, NTT Corporation, Japan ICASSP 2012 Presenter: Po-Han Hao (郝柏翰)

  2. Outline • Introduction • Topic Tracking Language Model (TTLM) • TTLM Using Confusion Network Inputs (TTLMCN) • Experiments • Conclusion

  3. Introduction • In a real environment, acoustic and language features often vary depending on the speakers, speaking styles and topic changes. • To accommodate these changes, speech recognition approaches that include the incremental tracking of changing environments have attracted attention. • This paper proposes a topic tracking language model that can adaptively track changes in topics based on current text information and previously estimated topic models in an on-line manner.

  4. TTLM • Tracking temporal changes in language environments

  5. TTLM • A long session of speech input is divided into chunks • Each chunk is modeled by different topic distributions • The current topic distribution depends on the topic distribution of the past H chunks and precision parameters α as follows:
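The equation itself did not survive the transcript. Based on the slide's description (a Dirichlet-style prior over the current topic distribution, parameterized by the past H estimated distributions weighted by precision parameters α), a plausible reconstruction in the style of topic tracking models is the following; the symbols θ_{t,k} (probability of topic k in chunk t) and K (number of topics) are assumptions, and the paper's exact form may differ:

```latex
p\bigl(\boldsymbol{\theta}_t \mid \{\hat{\boldsymbol{\theta}}_{t-h}\}_{h=1}^{H}, \{\alpha_{t-h}\}_{h=1}^{H}\bigr)
\;\propto\; \prod_{k=1}^{K} \theta_{t,k}^{\left(\sum_{h=1}^{H} \alpha_{t-h}\, \hat{\theta}_{t-h,k}\right) - 1}
```

Under this prior, a large precision α keeps the current topic distribution close to the recent estimates, while a small α lets it drift with new evidence.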

  6. TTLM • With the topic distribution, the unigram probability of a word wm in the chunk can be recovered using the topic and word probabilities • where θ is the unigram probability of word wm in topic k • The adapted n-gram can be used for a 2nd-pass recognition to obtain better results.
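As a concrete sketch of the mixture above: the adapted unigram probability of a word is the sum over topics of the chunk's topic weight times that topic's unigram probability. The variable names `theta` (topic weights) and `phi` (per-topic unigrams) and all numeric values below are illustrative assumptions, not the paper's code or data.

```python
# Toy sketch (assumed names/values): P(w) = sum_k theta[k] * phi[k][w].

def adapted_unigram(word, theta, phi):
    """Mix per-topic unigram probabilities by the chunk's topic weights."""
    return sum(theta[k] * phi[k].get(word, 0.0) for k in range(len(theta)))

# Toy example: 2 topics over a 3-word vocabulary.
theta = [0.7, 0.3]                                  # chunk-level topic distribution
phi = [
    {"speech": 0.5, "model": 0.3, "data": 0.2},     # topic 0 unigrams
    {"speech": 0.1, "model": 0.2, "data": 0.7},     # topic 1 unigrams
]

p = adapted_unigram("speech", theta, phi)           # 0.7*0.5 + 0.3*0.1 = 0.38
```

In the full system this adapted unigram would be interpolated with the baseline n-gram before the 2nd-pass recognition.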

  7. TTLMCN • Consider a confusion network with M word slots. • Each word slot m can contain a different number of arcs Am • with each arc containing a word wma and a corresponding arc posterior dma. • sm is a binary selection parameter, where sm = 1 indicates that the arc is selected. (Figure: a confusion network divided into chunks, each chunk containing word slots with competing arcs; e.g., slot 1 has A1 = 3 arcs.)
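A minimal sketch of this input structure, with invented words and posteriors (the class and variable names are assumptions for illustration):

```python
# Hypothetical sketch of a confusion-network chunk: M word slots,
# slot m holding A_m arcs, each arc a (word, posterior) pair.

from dataclasses import dataclass

@dataclass
class Arc:
    word: str
    posterior: float  # arc posterior d_ma

# One chunk = a list of slots; each slot = a list of competing arcs.
chunk = [
    [Arc("the", 0.6), Arc("a", 0.3), Arc("<eps>", 0.1)],  # slot 1, A_1 = 3
    [Arc("topic", 0.8), Arc("top", 0.2)],                  # slot 2, A_2 = 2
]

# A 1-best decode corresponds to selecting the highest-posterior arc
# in every slot (the role the binary selection parameter s plays).
one_best = [max(slot, key=lambda arc: arc.posterior).word for slot in chunk]
```

Keeping all arcs with their posteriors, rather than only the 1-best path, is what lets TTLMCN account for recognition uncertainty.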

  8. TTLMCN • For each chunk t, we can write the joint distribution of words, latent topics and arc selections conditioned on the topic probabilities, unigram probabilities, and arc posteriors as follows:
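The joint distribution on the slide is missing from the transcript. A reconstruction consistent with the description above (arc selection driven by the arc posteriors d, and each selected word generated from the topic mixture) might look like the following; this is a hedged sketch, and the paper's exact factorization may differ:

```latex
p(\mathbf{w}, \mathbf{z}, \mathbf{s} \mid \boldsymbol{\theta}_t, \boldsymbol{\phi}, \mathbf{d})
= \prod_{m=1}^{M} \prod_{a=1}^{A_m}
\bigl( d_{ma}\, \theta_{t, z_{ma}}\, \phi_{z_{ma}, w_{ma}} \bigr)^{s_{ma}}
```

Here the exponent s_{ma} switches each arc's contribution on or off, so only the selected arcs in each slot contribute to the likelihood.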

  9. TTLMCN • Graphical representation of TTLMCN

  10. Experiments (MIT-OCW) • MIT-OCW is mainly composed of lectures given at MIT. Each lecture is typically two hours long. We segmented the lectures with a voice activity detector into utterances averaging two seconds each.

  11. Comparison of TTLM and TTLMCN • We can see that the topic probability of TTLMCN is more similar to the oracle experiment than that of TTLM, especially in the low-probability regions. • The KL divergence between TTLM and the oracle was 3.3, while for TTLMCN it was 1.3.
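The KL comparison above can be sketched as follows; the distributions are toy values chosen for illustration, not the paper's data:

```python
# Illustrative only: KL(oracle || model) between discrete topic distributions.

import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

oracle  = [0.70, 0.20, 0.10]
model_a = [0.40, 0.35, 0.25]   # further from the oracle
model_b = [0.65, 0.22, 0.13]   # closer to the oracle

assert kl_divergence(oracle, model_b) < kl_divergence(oracle, model_a)
```

A lower KL against the oracle, as TTLMCN achieves, means the estimated topic distribution tracks the true topics more closely.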

  12. Conclusion • We described an extension for the TTLM in order to handle errors in speech recognition. The proposed model used a confusion network as input instead of just one ASR hypothesis which improved performance even in high WER situations. • The gain in word error rate was not very large since the LM typically contributed little to the performance of LVCSR.

  13. Significance Test (T-Test) H0: the normal distributions of the experimental and control groups are identical H1: the normal distributions of the experimental and control groups differ

  14. Significance Test (T-Test) • Example
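Since the example on this slide is not in the transcript, here is a minimal worked sketch of a two-sample t-test (Welch's form, which does not assume equal variances). The two samples are invented numbers standing in for, e.g., per-lecture WERs of two systems:

```python
# Hypothetical example: Welch's t statistic for two independent samples.

import math

def welch_t(sample1, sample2):
    """Welch's t statistic for two independent samples."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

baseline = [23.1, 24.5, 22.8, 25.0, 23.7]   # e.g. WER (%) per lecture
proposed = [22.0, 23.4, 21.9, 24.1, 22.6]

t = welch_t(baseline, proposed)
# A large |t| relative to the critical value of the t distribution at the
# chosen significance level lets us reject H0 (that the two means agree).
```

In practice one would look up (or compute) the p-value for t, e.g. with scipy.stats.ttest_ind, and reject H0 when it falls below the significance level.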
