A PLSA-based Language Model for Conversational Telephone Speech
David Mrva and Philip C. Woodland
2004/12/08 邱炫盛
Outline • Language Model • PLSA Model • Experimental Results • Conclusion
Language Model • The task of a language model is to assign a probability to a word sequence • n-gram model • The range of dependencies is limited to the n−1 preceding words • Longer-range information is ignored
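A minimal sketch of the n-gram limitation, assuming a toy maximum-likelihood bigram model (the function name and data are illustrative, not from the paper): each word's probability is conditioned only on the single preceding word, so any information further back in the history is discarded.

```python
from collections import Counter

def bigram_lm(corpus):
    """MLE bigram model: P(w|h) depends only on the previous word h,
    so longer-range information in the history is ignored."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent
        unigrams.update(toks[:-1])          # count histories
        bigrams.update(zip(toks[:-1], toks[1:]))  # count (history, word) pairs
    return lambda w, h: bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0
```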
Language Model (cont.) • Models that exploit longer-range, topic-level information: • Latent Semantic Analysis • Topic-based language model • PLSA-based language model
PLSA Model • PLSA is a general machine-learning technique for modelling the co-occurrence of events • Here: the co-occurrence of words and documents • Hidden variable = aspect (topic) • The PLSA model in this paper is a mixture of unigram distributions
PLSA Model (cont.) • Graphical model representation: (left) the document–word model d → w with parameters P(d) and P(w|d); (right) the aspect model d → t → w with parameters P(d), P(t|d) and P(w|t)
PLSA Model (cont.) • Each word wj of document di is generated from the topic mixture: P(wj|di) = Σk P(zk|di) P(wj|zk)
PLSA Model (cont.) • M: number of words in the vocabulary • N: number of documents in the training collection • K: number of aspects (topics)
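Given these definitions, PLSA training by EM can be sketched as follows (a dense NumPy implementation for small count matrices; `train_plsa` and its defaults are illustrative, not taken from the paper):

```python
import numpy as np

def train_plsa(counts, K, iters=50, seed=0):
    """EM training for PLSA on a word-document count matrix.

    counts: (N, M) array, counts[d, w] = count of word w in document d
    Returns P(z|d) of shape (N, K) and P(w|z) of shape (K, M).
    """
    rng = np.random.default_rng(seed)
    N, M = counts.shape
    p_z_d = rng.random((N, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((K, M)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: P(z|d,w) ∝ P(z|d) P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]        # (N, K, M)
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from expected counts
        weighted = counts[:, None, :] * post                 # (N, K, M)
        p_z_d = weighted.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
        p_w_z = weighted.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```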
PLSA Model (cont.) • Use of PLSA in a language model: the P(zk|di) serve as mixture weights when calculating the word probability • On the test set, the history hi is used instead of di to re-estimate these weights
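The test-time re-estimation of the mixture weights can be sketched as "folding-in" EM: the topic distributions P(w|z) stay fixed and only P(z|h) is updated from the history word counts (function names here are illustrative):

```python
import numpy as np

def fold_in(history_counts, p_w_z, iters=20):
    """Re-estimate the mixture weights P(z|h) for a test history h,
    keeping the trained topic distributions P(w|z) fixed."""
    K, M = p_w_z.shape
    p_z_h = np.full(K, 1.0 / K)                  # start from uniform weights
    for _ in range(iters):
        post = p_z_h[:, None] * p_w_z            # (K, M): ∝ P(z|h) P(w|z)
        post /= post.sum(0, keepdims=True) + 1e-12
        p_z_h = (post * history_counts[None, :]).sum(1)
        p_z_h /= p_z_h.sum()
    return p_z_h

def plsa_word_prob(w, p_z_h, p_w_z):
    """P(w|h) = sum_z P(z|h) P(w|z)."""
    return float(p_z_h @ p_w_z[:, w])
```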
PLSA Model (cont.) • PLSA accounts for the whole document history of a word, irrespective of the document length • It has no means of representing word order, because it is a mixture of unigram distributions • Combine the n-gram with PLSA: a Viterbi-based decoder is not suitable when PLSA is used in decoding, so a two-pass decoder is used: • First pass: • n-gram, output a confidence score • Second pass: • PLSA, rescoring the lattices
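A sketch of the second pass, assuming a simple linear interpolation of n-gram and PLSA word probabilities over an n-best list (the paper's exact combination scheme may differ; `rescore` and the weight `lam` are assumptions for illustration):

```python
import math

def rescore(nbest, plsa_prob, lam=0.8):
    """Second-pass rescoring: interpolate first-pass n-gram probabilities
    with PLSA mixture probabilities and pick the best hypothesis.

    nbest: list of (words, [P_ngram per word]) pairs from the first pass
    plsa_prob(w): PLSA mixture probability P(w|h) for word w
    lam: assumed interpolation weight, tuned on held-out data
    """
    rescored = []
    for words, ngram_probs in nbest:
        logp = sum(math.log(lam * p + (1.0 - lam) * plsa_prob(w))
                   for w, p in zip(words, ngram_probs))
        rescored.append((logp, words))
    return max(rescored)[1]  # hypothesis with the highest rescored log prob
```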
PLSA Model (cont.) • During re-scoring, the PLSA history comprises all segments in a document except the current segment • The PLSA history is fixed for all words in a given segment • This "history" is referred to as the "context" (ctx); it contains both past and future words
Experimental Results • Two test sets • NIST's Hub5 speech-to-text evaluation 2002 (eval02) • Switchboard I and II • 62k words, 19k from Switchboard I • NIST's Rich Transcription Spring 2003 CTS speech-to-text evaluation (eval03) • Switchboard II phase 5 and Fisher • 74k words, 36k from Fisher
Experimental Results (cont.) • The perplexity reduction is greater when PLSA's training text is related to the test set • PP of (ref. ctx, b=10) < PP of (rec. ctx, b=10) • b=10 is the best value • Use of confidence scores makes the PLSA model less sensitive to b
Experimental Results (cont.) • Baseline: n-gram trained on 20M words of Fisher transcripts, increased to 500 classes • PLSA: 750 aspects, 100 EM iterations • The data were separated into eval03dev and eval03tst • The interpolation weights of the word- and class-based n-grams were set to minimize perplexity • A slight improvement was obtained when side-based documents were used
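Setting interpolation weights to minimize perplexity on held-out data (e.g. eval03dev) can be sketched as a simple grid search over a single weight (a minimal illustration; the optimization actually used in the paper may differ):

```python
import math

def perplexity(probs):
    """Perplexity of a sequence of per-word probabilities."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

def tune_weight(word_probs_a, word_probs_b, grid=None):
    """Pick the interpolation weight lam that minimizes held-out
    perplexity of lam * P_a(w) + (1 - lam) * P_b(w)."""
    grid = grid or [i / 100 for i in range(1, 100)]
    return min(grid, key=lambda lam: perplexity(
        [lam * a + (1.0 - lam) * b
         for a, b in zip(word_probs_a, word_probs_b)]))
```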
Experimental Results (cont.) • b=100 is the best value • The PLSA model needs much more data to estimate the topics of Fisher than of Switchboard I • Having a long context is very important
Conclusion • PLSA with the suggested modifications reduces the perplexity of the language model • Future work: • Re-score lattices to calculate WERs • Combine the semantics-oriented model with a syntax-based language model