
Recent Paper of Md. Akmal Haidar



  1. Recent Paper of Md. Akmal Haidar, Meeting before ICASSP 2013, Presenter: 郝柏翰

  2. Outline
  • “Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation”, 2010
  • “Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals”, 2011
  • “Topic N-gram Count Language Model Adaptation for Speech Recognition”, 2012
  • “Comparison of a Bigram PLSA and a Novel Context-Based PLSA Language Model for Speech Recognition”, 2013

  3. Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation
  Md. Akmal Haidar and Douglas O’Shaughnessy, INTERSPEECH 2010

  4. Introduction
  • Adaptation is required when the styles, domains, or topics of the test data are mismatched with the training data. It is also important because natural language is highly variable and topic information is highly non-stationary.
  • The idea of unsupervised LM adaptation is to extract latent topics from the training set, adapt the topic-specific LMs with proper mixture weights, and finally interpolate them with the generic n-gram LM.
  • In this paper, we propose generating the weights of the topic models from the word counts of the topics produced by a hard-clustering method.

  5. Proposed Method
  • We used the MATLAB Topic Modeling Toolbox to obtain the word-topic matrix, WP, and the document-topic matrix, DP, using LDA.
  • Hard clustering: for training, topic clusters are formed by assigning each document to its most likely topic (see the reconstruction after this slide).
  • Adaptation: we create a dynamically adapted topic model from a mixture of the LMs of the different topics (see the reconstruction after this slide).
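  The assignment and mixture equations on this slide were images and did not survive transcription. Below is a hedged reconstruction in standard LDA notation; the symbols z_d, P_k, \lambda_k, and K (the number of topics) are assumptions rather than the slide's own labels:

    z_d = \arg\max_{k \in \{1,\dots,K\}} DP(d, k)

    P_{\mathrm{topic}}(w \mid h) = \sum_{k=1}^{K} \lambda_k \, P_k(w \mid h)

  Here P_k is the n-gram LM trained on topic cluster k, and the \lambda_k are mixture weights with \sum_k \lambda_k = 1.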

  6. Proposed Method
  • To find the topic mixture weights, we propose a scheme based on the unigram counts of the topics (see the reconstruction after this slide).
  • TF represents the number of times a word is drawn from a topic.
  • The other factor is the frequency of the word in the document.
  • The adapted topic model is then interpolated with the generic LM.
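  The weight and interpolation formulas were likewise slide images. The following is a plausible reconstruction consistent with the bullet descriptions; the symbols TF(w, t_k) (count of word w drawn from topic t_k), c(w, d) (frequency of w in document d), \lambda_k, and the interpolation weight \mu are assumptions:

    \lambda_k = \frac{\sum_{w} TF(w, t_k)\, c(w, d)}{\sum_{j=1}^{K} \sum_{w} TF(w, t_j)\, c(w, d)}

    P_{\mathrm{adapted}}(w \mid h) = \mu \, P_{\mathrm{topic}}(w \mid h) + (1 - \mu) \, P_{\mathrm{background}}(w \mid h)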

  7. Experiment Setup
  • We evaluated the LM adaptation approach using the Brown corpus and the WSJ1 transcription text data.

  8. Experiments

  9. Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals
  Md. Akmal Haidar and Douglas O’Shaughnessy, INTERSPEECH 2011

  10. Introduction
  • To overcome the mismatch problem, we introduce an unsupervised language model adaptation approach using latent Dirichlet allocation (LDA) and dynamic marginals: locally estimated (smoothed) unigram probabilities from in-domain text data.
  • We extend our previous work to find an adapted model by using minimum discrimination information (MDI), which uses the KL divergence as the distance measure between probability distributions.
  • The final adapted model is formed by minimizing the KL divergence between it and the LDA adapted topic model.

  11. Proposed Method
  • Topic clustering
  • LDA adapted topic mixture model generation

  12. Proposed Method
  • Adaptation using dynamic marginals: the adapted model is obtained by minimizing the KL divergence between the adapted model and the background model, subject to a marginalization constraint for each word w in the vocabulary (see the reconstruction after this slide).
  • This constrained optimization problem has a close connection to the maximum entropy approach, which shows that the adapted model is a rescaled version of the background model.
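  The constraint and the rescaling solution were slide images. A hedged reconstruction following the standard MDI / unigram-rescaling form from the LM-adaptation literature, where P_B is the background model, P_{\mathrm{in}}(w) is the dynamic marginal, and the scaling exponent \beta and normalizer Z(h) are assumptions:

    \text{constraint:} \quad \sum_{h} P_A(h)\, P_A(w \mid h) = P_{\mathrm{in}}(w) \quad \forall w \in V

    \text{solution:} \quad P_A(w \mid h) = \frac{\alpha(w)\, P_B(w \mid h)}{Z(h)}, \qquad \alpha(w) = \left( \frac{P_{\mathrm{in}}(w)}{P_B(w)} \right)^{\beta}, \qquad Z(h) = \sum_{w'} \alpha(w')\, P_B(w' \mid h)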

  13. Proposed Method
  • The background model and the LDA adapted topic model have a standard back-off structure and satisfy the above constraint, so the adapted LM can be written with a recursive formula (see the reconstruction after this slide).
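  The recursive formula was also an image. The sketch below shows the usual recursive MDI form for a back-off LM; \bar{h} denotes the reduced (backed-off) history and \gamma(h) the back-off weight, both of which are assumed symbol names:

    P_A(w \mid h) =
    \begin{cases}
      \dfrac{\alpha(w)\, P_B(w \mid h)}{Z(h)} & \text{if } (h, w) \text{ is seen in the background model} \\
      \gamma(h)\, P_A(w \mid \bar{h}) & \text{otherwise}
    \end{cases}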

  14. Experiments

  15. Topic N-gram Count Language Model Adaptation for Speech Recognition
  Md. Akmal Haidar and Douglas O’Shaughnessy, IEEE SLT 2012

  16. Introduction
  • In this paper, we propose a new LM adaptation approach using the features of the LDA model.
  • Here, we compute the topic mixture weights of each n-gram in the training set using the probability of a word given a topic as a confidence measure for the words in the n-gram under different LDA latent topics.
  • The normalized topic mixture weights are then multiplied by the global count of the n-gram to determine the topic n-gram count for the respective topics.
  • The topic n-gram count LMs are then created from the respective topic n-gram counts and adapted using topic mixture weights obtained by averaging the confidence measures over the seen words of a development test set.

  17. Proposed Method
  • The probability of a word under an LDA latent topic is computed from the LDA model (see the reconstruction after this slide).
  • Using these features of the LDA model, we propose two confidence measures to compute the topic mixture weights for each n-gram.
  • The topic n-gram language models are then generated from the topic n-gram counts; we call these TNCLM.
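  The equations on this slide were images. As an illustration only, one plausible form consistent with the description: the word-given-topic probability can be read off the LDA word-topic matrix, and a product-style confidence measure can split the global n-gram count into per-topic counts. The symbols WP(w, k), the product measure C_k, and the count notation c(\cdot) are assumptions, and the slide's second confidence measure is not reconstructed here:

    P(w \mid t_k) = \frac{WP(w, k)}{\sum_{w'} WP(w', k)}

    C_k(w_1 \dots w_n) = \prod_{i=1}^{n} P(w_i \mid t_k), \qquad c_k(w_1 \dots w_n) = c(w_1 \dots w_n)\, \frac{C_k(w_1 \dots w_n)}{\sum_{j=1}^{K} C_j(w_1 \dots w_n)}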

  18. Proposed Method
  • In the LDA model, a document can be generated by a mixture of topics. So, for a test document, the dynamically adapted topic model is computed as a mixture of the LMs from different topics (see the reconstruction after this slide).
  • The ANCLM (adapted n-gram count LMs) are then interpolated with the background n-gram model, using linear interpolation, to capture the local constraints.
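  These two equations were images as well. A hedged reconstruction mirroring the mixture-and-interpolation pattern of the earlier slides, with the mixture weights \lambda_k obtained by averaging the confidence measures over the development set; \lambda_k, \mu, and the model subscripts are assumptions:

    P_{\mathrm{ANCLM}}(w \mid h) = \sum_{k=1}^{K} \lambda_k \, P_{\mathrm{TNCLM}_k}(w \mid h)

    P_{\mathrm{adapted}}(w \mid h) = \mu \, P_{\mathrm{ANCLM}}(w \mid h) + (1 - \mu) \, P_{\mathrm{background}}(w \mid h)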

  19. Experiments
