Hidden Topic Markov Models
Amit Gruber, Michal Rosen-Zvi and Yair Weiss, in AISTATS 2007
Discussion led by Chunping Wang, ECE, Duke University, March 2, 2009
Outline • Motivations • Related Topic Models • Hidden Topic Markov Models • Inference • Experiments • Conclusions
Motivations • Feature reduction: represent extensively large text corpora with a small number of variables • Topical segmentation: segment a document according to its hidden topics • Word sense disambiguation: distinguish between different instances of the same word according to the context
Related Topic Models • LDA (JMLR 2003) 1. For each topic k = 1, ..., K, draw β_k ~ Dir(η) 2. For each document d, (a) draw θ_d ~ Dir(α) (b) for each word n, draw z_n ~ Mult(θ_d) (c) for each word n, draw w_n ~ Mult(β_{z_n}) Words in a document are exchangeable; documents are also exchangeable.
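For concreteness, here is a minimal sketch of this generative process in Python. The function name, default hyperparameter values, and corpus sizes are illustrative choices, not from the paper.

```python
import numpy as np

def sample_lda_corpus(D=3, K=5, V=1000, N=200, alpha=0.1, eta=0.01, seed=0):
    """Toy sampler for the smoothed LDA generative process (illustrative only)."""
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(V, eta), size=K)            # 1. beta_k ~ Dir(eta), shared across documents
    docs = []
    for _ in range(D):
        theta = rng.dirichlet(np.full(K, alpha))             # 2(a). theta_d ~ Dir(alpha)
        z = rng.choice(K, size=N, p=theta)                   # 2(b). z_n ~ Mult(theta_d)
        w = np.array([rng.choice(V, p=beta[k]) for k in z])  # 2(c). w_n ~ Mult(beta_{z_n})
        docs.append((w, z))
    return docs, beta
```

Because each z_n is drawn independently given θ_d, permuting the words of a sampled document leaves its probability unchanged, which is the exchangeability stated above.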
Related Topic Models • Dynamic Topic Models (ICML 2006) Words in a document are exchangeable; documents are not exchangeable.
Related Topic Models • Topic Modeling: Beyond Bag of Words (ICML 2006) Words in a document are not exchangeable; documents are exchangeable.
Related Topic Models • Integrating Topics and Syntax (NIPS 2005) LDA models the semantic words; an HMM models the non-semantic (syntactic) words. Words in a document are not exchangeable; documents are exchangeable.
Hidden Topic Markov Models No topic transition is allowed within a sentence. Whenever a new sentence starts, either the previous topic is kept (with probability 1−ε) or a new topic is drawn from θ_d (with probability ε).
Hidden Topic Markov Models HTMM viewed as an HMM over topics: • Transition matrices: within a sentence, or between two sentences when no transition occurs (probability 1−ε), the topic stays the same (identity transition); when a transition occurs between two sentences (probability ε), the new topic is drawn from θ_d • Emission matrix: β, the topic-word distributions • Initial state distribution: θ_d
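A minimal sketch of this generative process for a single document, with β, θ_d and ε taken as given; the function and variable names are mine, not the authors':

```python
import numpy as np

def sample_htmm_document(sentence_lengths, beta, theta, epsilon, seed=0):
    """Toy sampler for one HTMM document (illustrative only).

    sentence_lengths: list of sentence lengths in words
    beta:    (K, V) topic-word probabilities
    theta:   (K,)   document-topic mixture theta_d
    epsilon: probability of drawing a new topic at a sentence boundary
    """
    rng = np.random.default_rng(seed)
    K, V = beta.shape
    words, topics = [], []
    z = rng.choice(K, p=theta)                    # first sentence always draws a fresh topic
    for s, length in enumerate(sentence_lengths):
        if s > 0 and rng.random() < epsilon:      # psi = 1: redraw the topic from theta_d
            z = rng.choice(K, p=theta)
        for _ in range(length):                   # no topic transition within a sentence
            words.append(rng.choice(V, p=beta[z]))
            topics.append(z)
    return np.array(words), np.array(topics)
```

Setting epsilon = 0 reduces the document to a single topic; setting epsilon = 1 gives the "bag of sentences" variant used for comparison later.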
Inference EM algorithm: • E-step: compute the posterior over the hidden topics and transition indicators, p(z_n, ψ_n | w_1, ..., w_N), using the forward-backward algorithm • M-step: update θ_d, β and ε from the expected sufficient statistics
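Below is a rough sketch of the E-step's forward-backward pass. It exploits the fact that the topic is constant within a sentence, so the chain runs over sentences rather than words; for simplicity it returns only per-sentence topic posteriors and omits the ψ bookkeeping that the paper needs for updating ε. All names are illustrative.

```python
import numpy as np

def htmm_e_step(sentences, beta, theta, epsilon):
    """Forward-backward over sentences for one document (minimal sketch).

    sentences: list of word-id arrays, one per sentence
    beta:      (K, V) topic-word probabilities
    theta:     (K,)   document-topic mixture
    epsilon:   probability of a topic transition at a sentence boundary
    Returns gamma, the (S, K) per-sentence topic posteriors.
    """
    K = beta.shape[0]
    S = len(sentences)

    # Emission probability of each whole sentence under each topic,
    # rescaled per sentence for numerical stability (constants cancel in gamma).
    log_emit = np.array([[np.log(beta[k, sent]).sum() for k in range(K)]
                         for sent in sentences])
    emit = np.exp(log_emit - log_emit.max(axis=1, keepdims=True))

    # Row k of A: stay in topic k with prob 1 - epsilon, redraw from theta with prob epsilon.
    A = (1.0 - epsilon) * np.eye(K) + epsilon * np.tile(theta, (K, 1))

    fwd = np.zeros((S, K))
    bwd = np.ones((S, K))
    fwd[0] = theta * emit[0]
    fwd[0] /= fwd[0].sum()
    for s in range(1, S):                         # forward pass (normalized)
        fwd[s] = (fwd[s - 1] @ A) * emit[s]
        fwd[s] /= fwd[s].sum()
    for s in range(S - 2, -1, -1):                # backward pass (normalized)
        bwd[s] = A @ (bwd[s + 1] * emit[s + 1])
        bwd[s] /= bwd[s].sum()

    gamma = fwd * bwd
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma
```

The cost per document is linear in the number of sentences, which is why the whole document must be held in memory during inference.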
Experiments • NIPS dataset (1740 documents: 1557 for training, 183 for testing) • Data preprocessing: extract words in the vocabulary (J = 12113, no stop words); divide the text into sentences according to ".?!;" • Compare LDA, HTMM and VHTMM1 in terms of perplexity VHTMM1: a variant of HTMM with ε = 1, i.e. a "bag of sentences" Ntest: the total length of the test document; N: the first N words of the document are observed. Average Ntest = 1300
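For reference, the per-word predictive perplexity used in this comparison can be sketched as follows, assuming the per-document predictive log-likelihoods of the held-out words have already been computed (e.g. by a forward pass such as the one above):

```python
import numpy as np

def per_word_perplexity(pred_logliks, pred_word_counts):
    """Per-word predictive perplexity over a set of test documents.

    pred_logliks[d]:     log p(w_{N+1:Ntest} | w_{1:N}) for test document d
    pred_word_counts[d]: Ntest_d - N, the number of words being predicted
    """
    return float(np.exp(-np.sum(pred_logliks) / np.sum(pred_word_counts)))
```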
Experiments Perplexity results (K = 100; N = 10). The lower the perplexity, the better the model is at predicting unseen words.
Experiments • Topical segmentation: topic assignments by HTMM vs. LDA (figure)
Experiments • Top words of topics (acknowledgments, math, reference) under HTMM and LDA (figure)
Experiments As more topics are available, the topics become more specific and topic transitions are more frequent.
Experiments • Two toy datasets, generated using HTMM and LDA. Goal: to rule out the possibility that HTMM's perplexity is lower than LDA's only because HTMM has fewer degrees of freedom. With toy datasets, other criteria can be used for comparison.
Conclusions • HTMM is another extension of LDA; it relaxes the "bag-of-words" assumption by modeling topic dynamics with a Markov chain. • This extension leads to a significant improvement in perplexity and makes additional inferences possible, such as topical segmentation and word sense disambiguation. • It requires more storage, since the entire document has to be the input of the algorithm. • It only applies to structured data in which sentences are well defined.