
Large Vocabulary Continuous Speech Recognition



Presentation Transcript


  1. Large Vocabulary Continuous Speech Recognition

  2. Subword Speech Units

  3. HMM-Based Subword Speech Units

  4. Training of Subword Units

  5. Training of Subword Units

  6. Training Procedure

  7. Errors and performance evaluation in PLU recognition • Substitution error (s) • Deletion error (d) • Insertion error (i) • Performance evaluation: • If the total number of PLUs is N, we define: • Correctness rate: (N – s – d) / N • Accuracy rate: (N – s – d – i) / N
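
A minimal sketch of how the two rates could be computed once the substitution, deletion, and insertion counts are available from the alignment of the recognized PLU string against the reference; the function name and the example counts are illustrative.

```python
def plu_scores(n_ref, s, d, i):
    """Correctness and accuracy rates for PLU recognition.

    n_ref : total number of PLUs in the reference transcription (N)
    s, d, i : substitution, deletion and insertion counts from the
              alignment of the recognized string against the reference.
    """
    correctness = (n_ref - s - d) / n_ref        # insertions are not penalized
    accuracy = (n_ref - s - d - i) / n_ref       # insertions are penalized
    return correctness, accuracy

# Illustrative counts: 1000 reference PLUs, 80 substitutions,
# 30 deletions, 40 insertions.
print(plu_scores(1000, 80, 30, 40))   # (0.89, 0.85)
```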

  8. Language Models for LVCSR Word Pair Model: Specify which word pairs are valid
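
A word pair model can be stored as a table of allowed successor words; the sketch below uses a made-up fragment of the vocabulary purely to illustrate the validity check, not the actual Resource Management word pairs.

```python
# Hypothetical word-pair table: for each word, the set of words allowed to follow it.
WORD_PAIRS = {
    "show": {"all", "the"},
    "all": {"ships", "alerts"},
    "the": {"ships"},
}

def is_valid_sentence(words):
    """True if every adjacent word pair is licensed by the grammar."""
    return all(nxt in WORD_PAIRS.get(cur, set())
               for cur, nxt in zip(words, words[1:]))

print(is_valid_sentence(["show", "all", "ships"]))   # True
print(is_valid_sentence(["show", "ships", "all"]))   # False
```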

  9. Statistical Language Modeling

  10. Perplexity of the Language Model • Entropy of the source: H = −lim_{Q→∞} (1/Q) E[log₂ P(w₁, w₂, …, w_Q)] • First-order entropy of the source: H₁ = −Σ_v P(w_v) log₂ P(w_v), summed over the vocabulary • If the source is ergodic, meaning its statistical properties can be completely characterized by a sufficiently long sequence that the source puts out, the expectation can be dropped: H = −lim_{Q→∞} (1/Q) log₂ P(w₁, w₂, …, w_Q)

  11. We often compute H based on a finite but sufficiently large Q: Ĥ = −(1/Q) log₂ P(w₁, w₂, …, w_Q). H is the degree of difficulty that the recognizer encounters, on average, when it has to determine a word from the same source. If the N-gram language model P_N(W) is used, an estimate of H is: Ĥ = −(1/Q) log₂ P_N(w₁, w₂, …, w_Q). In general this model-based estimate satisfies Ĥ ≥ H. Perplexity is defined as: PP = 2^H, the average word branching factor of the language.
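
A small sketch of the finite-Q estimate Ĥ = −(1/Q) log₂ P_N(w₁ … w_Q) and PP = 2^Ĥ, assuming a bigram language model stored as nested dictionaries; the words and probabilities are invented for illustration.

```python
import math

# Hypothetical bigram probabilities P(w_q | w_{q-1}); "<s>" marks the sentence start.
BIGRAM = {
    "<s>":  {"show": 0.5, "list": 0.5},
    "show": {"all": 0.7, "the": 0.3},
    "all":  {"ships": 0.9, "alerts": 0.1},
}

def perplexity(words, bigram):
    """PP = 2**H with H = -(1/Q) * sum_q log2 P(w_q | w_{q-1})."""
    log_prob = 0.0
    prev = "<s>"
    for w in words:
        log_prob += math.log2(bigram[prev][w])
        prev = w
    H = -log_prob / len(words)
    return 2 ** H

print(perplexity(["show", "all", "ships"], BIGRAM))   # ~1.47 for these toy numbers
```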

  12. Overall recognition system based on subword units

  13. Naval Resource (Battleship) Management Task: • 991-word vocabulary • NG (no grammar): perplexity = 991

  14. Word pair grammar We can partition the vocabulary into four nonoverlapping sets of words: The overall FSN allows recognition of sentences of the form:

  15. • WP (word pair) grammar: perplexity = 60 • FSN based on the partitioning scheme: 995 real arcs and 18 null arcs • WB (word bigram) grammar: perplexity = 20

  16. Control of word insertion/word deletion rate • In the discussed structure, there is no control over the sentence length • We introduce a word insertion penalty into the Viterbi decoding • For this, a fixed negative quantity is added to the likelihood score at the end of each word arc, as sketched below
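
A sketch of where such a penalty would enter a word-level Viterbi search: a fixed negative constant is added to the path score whenever a word arc is left. The constant, function name, and example scores are illustrative, not taken from the system described in the slides.

```python
# Hypothetical word-end update inside a word-level Viterbi search.
# Whenever a path exits a word arc, a fixed negative penalty is added to its
# log-likelihood, so paths built from many short words are discouraged.
WORD_INSERTION_PENALTY = -10.0   # log-likelihood units; tuned empirically

def word_end_score(path_score, word_acoustic_score, word_lm_score):
    """Accumulated score of a path as it leaves a word arc."""
    return (path_score + word_acoustic_score + word_lm_score
            + WORD_INSERTION_PENALTY)

# A larger (more negative) penalty lowers the insertion rate but can raise the
# deletion rate, so it trades insertions against deletions.
print(word_end_score(-120.0, -35.5, -2.3))   # -167.8
```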

  17. Context-dependent subword units Creation of context-dependent diphones and triphones

  18. If c(·) is the occurrence count for a given unit, we can use a unit reduction rule such as: CD units using only intraword units for “show all ships”: CD units using both intraword and interword units:
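
A sketch, under the assumption that a context-dependent unit is kept only when its training count c(·) reaches a threshold and is otherwise backed off to the context-independent phone; the phone strings, the left−center+right naming convention, and the threshold are illustrative.

```python
def intraword_triphones(phones):
    """Map a word's phone string to context-dependent units.
    Interior phones become triphones left-center+right; the first and last
    phones keep only the context available inside the word."""
    units = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else None
        right = phones[i + 1] if i < len(phones) - 1 else None
        if left and right:
            units.append(f"{left}-{p}+{right}")
        elif right:
            units.append(f"{p}+{right}")
        elif left:
            units.append(f"{left}-{p}")
        else:
            units.append(p)
    return units

def reduce_unit(unit, counts, threshold=50):
    """Back off a CD unit to its context-independent phone if it was seen
    fewer than `threshold` times in training, i.e. c(unit) < threshold."""
    if counts.get(unit, 0) >= threshold:
        return unit
    # strip the left/right context to recover the center phone
    return unit.split("-")[-1].split("+")[0]

# Illustrative phone string for "show" (sh ow); the counts are made up.
units = intraword_triphones(["sh", "ow"])
print(units)                                                 # ['sh+ow', 'sh-ow']
print([reduce_unit(u, {"sh+ow": 120}, 50) for u in units])   # ['sh+ow', 'ow']
```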

  19. Smoothing and interpolation of CD PLU models

  20. Implementation issues using CD units

  21. Word junction effects To handle known phonological changes, a set of phonological rules is superimposed on both the training and recognition networks. Some typical phonological rules include:
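
A toy illustration of one commonly cited word-junction rule, geminate reduction, in which identical phones meeting at a word boundary collapse into one; the rule set actually used by the system is not specified here, and the example phone strings are illustrative.

```python
# Hypothetical word-junction processing: geminate reduction collapses
# identical phones that meet at a word boundary into a single phone.
def apply_geminate_reduction(word_phones):
    """word_phones: list of per-word phone lists.
    Returns one phone sequence with boundary geminates collapsed."""
    out = []
    for phones in word_phones:
        # drop the first phone of this word if it repeats the last phone emitted
        start = 1 if out and phones and out[-1] == phones[0] else 0
        out.extend(phones[start:])
    return out

# "gas station": the /s/ ending "gas" and the /s/ starting "station" merge.
print(apply_geminate_reduction([["g", "ae", "s"], ["s", "t", "ey", "sh", "ax", "n"]]))
# ['g', 'ae', 's', 't', 'ey', 'sh', 'ax', 'n']
```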

  22. Recognition results using CD units

  23. Position dependent units

  24. Unit splitting and clustering

  25. A key source of difficulty in continuous speech recognition is the so-called function words, which include words like a, and, for, in, and is. The function words have the following properties:

  26. Creation of vocabulary-independent units

  27. Semantic Postprocessor For Recognition
