Recent Papers of Md. Akmal Haidar
Meeting before ICASSP 2013
Presenter: 郝柏翰
Outline
• “Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation”, 2010
• “Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals”, 2011
• “Topic N-gram Count Language Model Adaptation for Speech Recognition”, 2012
• “Comparison of a Bigram PLSA and a Novel Context-Based PLSA Language Model for Speech Recognition”, 2013
Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation
Md. Akmal Haidar and Douglas O’Shaughnessy
INTERSPEECH 2010
Introduction
• Adaptation is required when the style, domain, or topic of the test data is mismatched with the training data. It is also important because natural language is highly variable: topic information is highly non-stationary.
• The idea of unsupervised LM adaptation is to extract latent topics from the training set, adapt the topic-specific LMs with proper mixture weights, and finally interpolate them with the generic n-gram LM.
• In this paper, we propose generating the weights of the topic models from the word counts of the topics formed by a hard-clustering method.
Proposed Method
• We have used the MATLAB topic modeling toolbox to get the word-topic matrix, WP, and the document-topic matrix, DP, using LDA.
• Hard clustering: for training, topic clusters are formed by assigning a topic to each document, as sketched below.
• Adaptation: we can create a dynamically adapted topic model by using a mixture of LMs from different topics, as sketched below.
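The assignment rule and the mixture model appeared as equation images in the original slide. A plausible reconstruction from the surrounding prose (the topic index k, document d, history h, topic LMs P_k, and weights γ_k are notation introduced here, not taken from the paper):

k^{*}(d) = \arg\max_{k} \mathrm{DP}(d, k)

P_{\mathrm{adapt}}(w \mid h) = \sum_{k=1}^{K} \gamma_k \, P_k(w \mid h), \qquad \sum_{k=1}^{K} \gamma_k = 1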
Proposed Method
• To find the topic mixture weights, we propose a scheme based on the unigram counts of the topics.
• TF(w, k) represents the number of times word w is drawn from topic k.
• c(w, d) is the frequency of word w in document d.
• The adapted topic model is then interpolated with the generic LM, as sketched below.
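The weighting scheme and the final interpolation were also equation images. A hedged sketch, assuming the weight of topic k accumulates the topic unigram counts TF(w, k) over the words of document d, with λ as the interpolation weight (both assumptions, not confirmed by the slide):

\gamma_k(d) = \frac{\sum_{w} \mathrm{TF}(w, k)\, c(w, d)}{\sum_{k'} \sum_{w} \mathrm{TF}(w, k')\, c(w, d)}

P(w \mid h) = \lambda\, P_{\mathrm{background}}(w \mid h) + (1 - \lambda)\, P_{\mathrm{adapt}}(w \mid h)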
Experiment Setup
• We evaluated the LM adaptation approach using the Brown corpus and the WSJ1 corpus transcription text data.
Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals
Md. Akmal Haidar and Douglas O’Shaughnessy
INTERSPEECH 2011
Introduction
• To overcome the mismatch problem, we introduce an unsupervised language model adaptation approach using latent Dirichlet allocation (LDA) and dynamic marginals: locally estimated (smoothed) unigram probabilities from in-domain text data.
• We extend our previous work to find an adapted model by using minimum discriminant information (MDI), which uses KL divergence as the distance measure between probability distributions.
• The final adapted model is obtained by minimizing the KL divergence to the background model, subject to unigram marginal constraints taken from the LDA-adapted topic model.
Proposed Method
• Topic clustering
• LDA-adapted topic mixture model generation
Proposed Method
• Adaptation using dynamic marginals
• The adapted model using dynamic marginals is obtained by minimizing the KL divergence between the adapted model and the background model, subject to the marginalization constraint for each word w in the vocabulary (see the sketch below).
• This constrained optimization problem has a close connection to the maximum entropy approach, which shows that the adapted model is a rescaled version of the background model (see the sketch below).
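The marginalization constraint and the rescaled model were equation images in the slide. A plausible reconstruction of the standard MDI/dynamic-marginals formulation (P_B is the background LM, P_LDA(w) the LDA-adapted unigram marginal, β a tuning exponent, and Z(h) a normalizer; all of this notation is introduced here):

\sum_{h} P_{\mathrm{adapt}}(h)\, P_{\mathrm{adapt}}(w \mid h) = P_{\mathrm{LDA}}(w) \quad \text{for each word } w \text{ in the vocabulary}

P_{\mathrm{adapt}}(w \mid h) = \frac{\alpha(w)}{Z(h)}\, P_{B}(w \mid h), \qquad \alpha(w) = \left(\frac{P_{\mathrm{LDA}}(w)}{P_{B}(w)}\right)^{\beta}, \qquad Z(h) = \sum_{w'} \alpha(w')\, P_{B}(w' \mid h)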
Proposed Method
• The background model and the LDA-adapted topic model have a standard back-off structure and satisfy the above constraint, so the adapted LM has the recursive formula sketched below.
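The recursive formula was an equation image. A hedged sketch of an MDI-adapted back-off LM (h' is the reduced history and bo(h) the recomputed back-off weight, both introduced here):

P_{\mathrm{adapt}}(w \mid h) =
\begin{cases}
\dfrac{\alpha(w)}{Z(h)}\, P_{B}(w \mid h) & \text{if } (h, w) \text{ is seen in the background model} \\[4pt]
bo(h)\, P_{\mathrm{adapt}}(w \mid h') & \text{otherwise}
\end{cases}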
Topic N-gram Count Language Model Adaptation for Speech Recognition
Md. Akmal Haidar and Douglas O’Shaughnessy
2012
Introduction
• In this paper, we propose a new LM adaptation approach that exploits the features of the LDA model.
• Here, we compute the topic mixture weights of each n-gram in the training set using the probability of a word given a topic as a confidence measure for the words of the n-gram under the different LDA latent topics.
• The normalized topic mixture weights are then multiplied by the global count of the n-gram to determine the topic n-gram count for the respective topics.
• The topic n-gram count LMs are then created from the respective topic n-gram counts and adapted using the topic mixture weights obtained by averaging the confidence measures over the seen words of a development test set.
Proposed Method
• The probability of word w under LDA latent topic k is computed as sketched below.
• Using these features of the LDA model, we propose two confidence measures to compute the topic mixture weights for each n-gram (see the sketch below).
• The topic n-gram language models are then generated using the topic n-gram counts and are denoted TNCLM.
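These equations were images in the slide. A hedged reconstruction (WP is the LDA word-topic count matrix; the two confidence measures are assumed here to be the average and the product of the per-word topic probabilities of the n-gram w_{i-n+1}^{i}, which is not confirmed by the slide):

P(w \mid k) = \frac{\mathrm{WP}(w, k)}{\sum_{w'} \mathrm{WP}(w', k)}

\gamma_k^{(1)}(w_{i-n+1}^{i}) \propto \frac{1}{n} \sum_{j=i-n+1}^{i} P(w_j \mid k), \qquad
\gamma_k^{(2)}(w_{i-n+1}^{i}) \propto \prod_{j=i-n+1}^{i} P(w_j \mid k)

C_k(w_{i-n+1}^{i}) = \hat{\gamma}_k(w_{i-n+1}^{i})\, C(w_{i-n+1}^{i}), \qquad \text{where } \hat{\gamma}_k \text{ is the normalized weight and } C \text{ is the global n-gram count}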
Proposed Method
• In the LDA model, a document can be generated by a mixture of topics. So, for a test document, the dynamically adapted topic model using a mixture of LMs from different topics is computed as sketched below.
• The ANCLM is then interpolated with the background n-gram model to capture the local constraints using linear interpolation, as sketched below.
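The mixture and the interpolation were equation images. A plausible sketch (γ_k(d) are the topic weights estimated on the development test set, P_k^{TNC} is the k-th topic n-gram count LM, and λ is the interpolation weight; all notation introduced here):

P_{\mathrm{ANCLM}}(w \mid h, d) = \sum_{k=1}^{K} \gamma_k(d)\, P_k^{\mathrm{TNC}}(w \mid h)

P(w \mid h) = \lambda\, P_{\mathrm{background}}(w \mid h) + (1 - \lambda)\, P_{\mathrm{ANCLM}}(w \mid h, d)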