
TnT: A Statistical POS Tagger by Thorsten Brants

Presentation on the paper "TnT -- A Statistical Part-of-Speech Tagger" by Thorsten Brants


Presentation Transcript


  1. TnT Part of Speech Tagger

  2. Outline This paper presents an evaluation of a statistical part-of-speech tagging method based on a second-order Hidden Markov Model. A POS tagger is a piece of software that reads text in some language and assigns a part-of-speech tag to each word. Predicting the part-of-speech (POS) tag of an unknown word in a sentence is a significant challenge. We will get to know the TnT tagger through the following questions: • Why? • What is it? • How well? • Conclusion

  3. Why Then? • One of the first taggers to show that a Markov-model approach can yield state-of-the-art results. • In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging or word-category disambiguation, is the task of marking up each word with its part of speech. • Taggers fall into two distinctive groups: • Rule-based • Stochastic

  4. Why Now? • Tested across different languages. • Tested across different domains. • The current citation count is 2133.

  5. What is it? • A second-order Hidden Markov Model. • Equipped with careful handling of several design decisions: • handling of start- and end-of-sequence • smoothing • capitalization • handling of unknown words • improving the speed of tagging

  6. Second-Order Hidden Markov Model The TnT tagger uses a second-order hidden Markov model. The states in an HMM are hidden. Let's explain the working of this model: • A transition probability looks like P(VP | NP): the probability that the current word has the tag Verb Phrase given that the previous tag was Noun Phrase. • An emission probability looks like P(John | NP) or P(will | VP): the probability that the word is, say, John given that the tag is Noun Phrase. • A second-order HMM keeps one observation per current state, but its transition probability function conditions on the two previous states.
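
As a toy illustration (invented numbers, not taken from the paper), these tables can be written down as plain Python dictionaries:

      # Toy probability tables (invented numbers). In a second-order HMM the
      # transition conditions on the two previous tags, while the emission
      # conditions on the current tag alone.
      transition = {("NP", "NP", "VP"): 0.4}    # P(VP | NP, NP)
      emission = {("John", "NP"): 0.01,         # P(John | NP)
                  ("will", "VP"): 0.03}         # P(will | VP)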

  7. Second-Order Hidden Markov Model • Given a word sequence w1, w2, ..., wT, find the tag sequence t1, t2, ..., tT where every ti ∈ Tag Set. • Specifically, we need beginning-of-sequence and end-of-sequence markers. NOTE: If sentence boundaries are not marked in the input, TnT adds these tags when it encounters one of [.!?;] as a token.

  8. Working of TnT POS • If a sentence of length T contains words/tokens w1, w2, ..., wT and their POS tags are t1, t2, ..., tT, then according to the topology of the HMM the tagger searches for the tag sequence maximizing the joint probability: argmax over t1 ... tT of [ ∏(i=1..T) P(ti | ti−1, ti−2) · P(wi | ti) ] · P(tT+1 | tT) • The first term, P(ti | ti−1, ti−2), says that each word's tag depends on the two previous tags. • The second term, P(wi | ti), determines the word probability distribution given a POS tag; we refer to it from now on as the word conditional probability. • The third term, P(tT+1 | tT), models the end of the sequence; the tags t−1, t0, and tT+1 are beginning-of-sequence and end-of-sequence markers.
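
A minimal sketch (a toy model, not TnT's implementation) of evaluating this joint probability, assuming dictionary tables in the style of the toy example above plus a separate end_bigram table for the end-of-sequence term:

      # Joint probability of a tagged sentence under the trigram HMM.
      # 'trigram' maps (t_{i-2}, t_{i-1}, t_i) to P(t_i | t_{i-1}, t_{i-2}),
      # 'emission' maps (w_i, t_i) to P(w_i | t_i), and 'end_bigram' maps the
      # final tag t_T to P(t_{T+1} | t_T).
      def joint_probability(words, tags, trigram, emission, end_bigram, bos="<s>"):
          padded = [bos, bos] + tags                 # adds the t-1 and t0 markers
          p = 1.0
          for i, w in enumerate(words):
              p *= trigram.get((padded[i], padded[i + 1], padded[i + 2]), 0.0)
              p *= emission.get((w, padded[i + 2]), 0.0)
          return p * end_bigram.get(tags[-1], 0.0)   # end-of-sequence term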

  9. Defining the Models Let P̂ denote a maximum likelihood estimate and N the total number of tokens in the training corpus. Unigrams: P̂(t3) = f(t3) / N Bigrams: P̂(t3|t2) = f(t2,t3) / f(t2) Trigrams: P̂(t3|t1,t2) = f(t1,t2,t3) / f(t1,t2) Lexical: P̂(w3|t3) = f(w3,t3) / f(t3) where t1, t2, t3 are in the tag set and w3 is in the lexicon. NOTE: P̂ is defined as 0 if its denominator is 0.
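
A minimal sketch of these estimates, assuming the training corpus is a flat list of (word, tag) pairs:

      # Maximum likelihood estimates from raw frequency counts.
      from collections import Counter

      def ml_estimates(corpus):                         # corpus: [(word, tag), ...]
          tags = [t for _, t in corpus]
          N = len(tags)                                 # total number of tokens
          uni = Counter(tags)                           # f(t)
          bi = Counter(zip(tags, tags[1:]))             # f(t1, t2)
          tri = Counter(zip(tags, tags[1:], tags[2:]))  # f(t1, t2, t3)
          lex = Counter(corpus)                         # f(w, t)

          def p_uni(t3): return uni[t3] / N
          def p_bi(t2, t3): return bi[t2, t3] / uni[t2] if uni[t2] else 0.0
          def p_tri(t1, t2, t3): return tri[t1, t2, t3] / bi[t1, t2] if bi[t1, t2] else 0.0
          def p_lex(w3, t3): return lex[w3, t3] / uni[t3] if uni[t3] else 0.0
          return p_uni, p_bi, p_tri, p_lex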

  10. Smoothing • The smoothing paradigm that produced the best results in TnT is linear interpolation. In mathematics, linear interpolation is a method of curve fitting that uses linear polynomials to construct new data points within the range of a discrete set of known data points. • P(t3|t1, t2) = λ1 P̂(t3) + λ2 P̂(t3|t2) + λ3 P̂(t3|t1, t2), where 0 ≤ λi ≤ 1, i ∈ {1, 2, 3}, such that λ1 + λ2 + λ3 = 1. • The values of the λs are estimated by deleted interpolation.
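
A minimal sketch of the interpolated estimate, reusing the ML estimators from the previous sketch; the λ values come from the deleted interpolation procedure shown on the next slides:

      # Linear interpolation of the unigram, bigram, and trigram ML estimates.
      def smoothed_trigram(p_uni, p_bi, p_tri, lambdas):
          l1, l2, l3 = lambdas                  # must satisfy l1 + l2 + l3 = 1
          def p(t1, t2, t3):
              return l1 * p_uni(t3) + l2 * p_bi(t2, t3) + l3 * p_tri(t1, t2, t3)
          return p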

  11. Smoothing • Different frequency groups were also tested, each assigned its own set of λs, according to the following rules: • one set of λs for each frequency value • two classes for the low and high frequency values at the two ends of the scale • several frequency groupings in between them, with several settings for partitioning the classes.

  12. Procedure to Calculate the λi • set λ1 = λ2 = λ3 = 0 • for each trigram t1, t2, t3 with f(t1, t2, t3) > 0 do • depending on which of the following three values is the maximum: case (f(t1,t2,t3) − 1) / (f(t1,t2) − 1): λ3 = λ3 + f(t1, t2, t3) case (f(t2,t3) − 1) / (f(t2) − 1): λ2 = λ2 + f(t1, t2, t3) case (f(t3) − 1) / (N − 1): λ1 = λ1 + f(t1, t2, t3) • end for • normalize λ1, λ2, λ3
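
A minimal sketch of this procedure, assuming the same uni, bi, and tri frequency counters as in the ML-estimate sketch; a quotient with a zero denominator is treated as 0, matching the convention used for the ML estimates:

      # Deleted interpolation: each trigram's count is credited to whichever
      # estimator (unigram, bigram, or trigram) predicts it best once that
      # trigram occurrence is removed from the counts (hence the "- 1" terms).
      def deleted_interpolation(uni, bi, tri, N):
          l1 = l2 = l3 = 0.0
          for (t1, t2, t3), f in tri.items():
              q3 = (f - 1) / (bi[t1, t2] - 1) if bi[t1, t2] > 1 else 0.0
              q2 = (bi[t2, t3] - 1) / (uni[t2] - 1) if uni[t2] > 1 else 0.0
              q1 = (uni[t3] - 1) / (N - 1) if N > 1 else 0.0
              best = max(q1, q2, q3)
              if best == q3:
                  l3 += f
              elif best == q2:
                  l2 += f
              else:
                  l1 += f
          total = l1 + l2 + l3                  # normalize so the lambdas sum to 1
          return l1 / total, l2 / total, l3 / total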

  13. Capitalization • Capitalization plays a vital role: in English, proper nouns are capitalized; in German, all nouns are. • The probability distribution of tags around capitalized words differs from the rest.

  14. Capitalization • So we use P(t3, c3 | t1, c1, t2, c2) instead of P(t3 | t1, t2), where ci is the capitalization flag. The trigram model equations need to be changed accordingly. • This implies that, in order to estimate the MLE / probability of capitalized tokens, we have to update equations (3) to (5) to incorporate the flag.
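
One way to realize this (an assumption about the implementation, not TnT's actual code) is to fold the capitalization flag into the tag itself, so the trigram machinery from the earlier sketches can be reused unchanged:

      # Extend each tag with a capitalization flag c_i; retraining the counters
      # on these extended tags then estimates P(t3, c3 | t1, c1, t2, c2).
      def tag_with_cap(word, tag):
          return (tag, word[:1].isupper())

      # e.g. rebuild the training corpus with the extended tags:
      # corpus = [(w, tag_with_cap(w, t)) for w, t in corpus]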

  15. Handling of Unknown Words • Handled best by suffix analysis (proposed by Samuelsson in 1993) for inflected languages. • Tag probabilities of an unknown word are estimated from the letters the word ends with; in English, the suffix is a strong predictor of the class of the word. • In a study of the Wall Street Journal portion of the Penn Treebank, 98% of the words ending in "able" are adjectives (JJ), e.g. fashionable, whereas only 2% are nouns (e.g. cable, variable). • What is meant by "suffix"? The final sequence of characters of a word, which is not necessarily a linguistically meaningful suffix. • e.g. for smoothing: g, ng, ing, hing, thing, othing, oothing, moothing, smoothing
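
For illustration, the suffixes of the "smoothing" example can be enumerated like this:

      # All character suffixes of a word, shortest first (illustration only).
      def suffixes(word, max_len=10):
          return [word[-i:] for i in range(1, min(max_len, len(word)) + 1)]

      suffixes("smoothing")   # ['g', 'ng', 'ing', ..., 'smoothing']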

  16. Handling of Unknown Words • The probability distribution for a particular suffix is generated from all the words in the training set that share the same suffix, up to some predefined maximum length. • This gives the probability of a tag t given the last m letters li of an n-letter word: P(t | ln−m+1, ..., ln). The sequence of increasingly more general contexts omits more and more characters of the suffix, so that P(t | ln−m+2, ..., ln), P(t | ln−m+3, ..., ln), ..., P(t) are used for smoothing, as in the trigram smoothing shown above. • A problem remains in the choice of tokens/words from the lexicon: should we use all words, or only some, for suffix handling? • The author assumes that unknown words behave like words that appear infrequently in the lexicon, and therefore uses the suffixes of infrequent words only. • A frequency threshold of at most 10 was used; this turned out to be a good choice.

  17. Handling of Unknown Words • For the Markov model we need P(ln−i+1, ..., ln | t), which Bayesian inversion gives for suffix lengths i = m down to 0: P(ln−i+1, ..., ln | t) ∝ P(t | ln−i+1, ..., ln) / P(t) • Define P̂ as the ML estimate obtained from frequencies in the lexicon, start from P(t) = P̂(t), and smooth recursively with successively longer suffixes: P(t | ln−i+1, ..., ln) = ( P̂(t | ln−i+1, ..., ln) + θi · P(t | ln−i+2, ..., ln) ) / (1 + θi) • The weights θi are set to the standard deviation of the unconditioned ML tag probabilities in the training corpus; the maximum suffix length here is m = 10.
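
A minimal sketch of this recursion, assuming p_hat maps (suffix, tag) to the ML estimate P̂(t | suffix) gathered from the infrequent words in the lexicon, and p_tag maps each tag to P̂(t):

      # Recursive suffix smoothing: start from P(t) = P̂(t) and mix in the ML
      # estimate for each successively longer suffix, weighted by theta.
      def suffix_prob(tag, word, p_hat, p_tag, theta, m=10):
          p = p_tag[tag]                        # base case: P(t) = P̂(t)
          for i in range(1, min(m, len(word)) + 1):
              suffix = word[-i:]                # one more letter of context
              p = (p_hat.get((suffix, tag), 0.0) + theta * p) / (1.0 + theta)
          return p                              # P(t | last letters of the word)

      # Bayesian inversion for the Markov model: P(suffix | t) is proportional
      # to suffix_prob(tag, word, ...) / p_tag[tag].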

  18. Viterbi Algorithm • For every possible state in the HMM (tags, in our case), it records the highest-probability path that ends in that state; this is known as the Viterbi algorithm. • While the Viterbi algorithm is linear in the length of the input, it is more accurate to say that its complexity is proportional to N²·T, where N is the number of states and T the input length.
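
A minimal first-order Viterbi sketch; TnT itself runs the same dynamic program over pairs of tags to realize the trigram model. Here trans maps (previous tag, tag) to a probability and emit maps (word, tag) likewise:

      # Viterbi decoding: keep, for every state, the probability of the best
      # path that ends there, plus a backpointer to reconstruct that path.
      def viterbi(words, tagset, trans, emit, bos="<s>"):
          best = {bos: 1.0}                     # best path probability per state
          back = []                             # backpointers, one dict per position
          for w in words:
              new_best, ptrs = {}, {}
              for t in tagset:
                  # predecessor state that maximizes the path probability
                  s, p = max(((s, p * trans.get((s, t), 0.0))
                              for s, p in best.items()), key=lambda x: x[1])
                  new_best[t] = p * emit.get((w, t), 0.0)
                  ptrs[t] = s
              back.append(ptrs)
              best = new_best
          tag = max(best, key=best.get)         # best final state
          path = [tag]
          for ptrs in reversed(back[1:]):       # follow the backpointers
              tag = ptrs[tag]
              path.append(tag)
          return path[::-1]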

  19. Beam Search • The simplest idea is to rank all the alternatives produced so far by their probabilities and always expand only the most likely sequences (those with the highest probability). This is called a beam search. • A beam search prunes out the unpromising states at every step, leaving only the best to be used in future calculations. Several different methods can decide how many to keep. • A faster, approximate version of the Viterbi algorithm. • Only states above a certain threshold are explored. • Does not guarantee finding the correct path, but performs well.

  20. Beam Search • The processing time of the Viterbi algorithm can be reduced by introducing a beam search. • Each state that receives a value smaller than the largest value divided by some threshold is excluded from further processing.
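
A minimal sketch of the pruning step, which would be applied to the state table after each word of the Viterbi sketch above (the default threshold here is illustrative, not the paper's value):

      # Beam pruning: drop every state whose path probability falls below the
      # largest one divided by the threshold.
      def prune(best, threshold=1000.0):
          cutoff = max(best.values()) / threshold
          return {t: p for t, p in best.items() if p >= cutoff}

      # inside the Viterbi loop:  best = prune(new_best)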

  21. Model Evaluation and Experiment Strategy • The tagger's accuracy was evaluated over ten iterations and averaged into the overall accuracy. In addition, separate accuracies for known and unknown words were measured. • The tagger was trained on corpora of different sizes, ranging from 1,000 tokens to the entire corpus. • The author used a 90/10 train/test ratio. • The tagger was trained with 10-fold cross-validation and the results were averaged into a single outcome. • Accuracy is calculated by dividing the number of correctly assigned tags by the total number of tokens processed.
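
The accuracy measure, spelled out for completeness:

      # Fraction of tokens whose predicted tag matches the gold-standard tag.
      def accuracy(predicted, gold):
          return sum(p == g for p, g in zip(predicted, gold)) / len(gold)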

  22. Evaluation Setting by Datasets • Negra Corpus: German newspaper corpus • Penn Treebank: the Wall Street Journal portion of the Penn Treebank corpus

  23. Evaluation Setting: Dataset Split • Contiguous • Round-Robin

  24. Evaluation Setting: Performance Metrics • Tagging accuracy for known and, more importantly, unknown words. • Effect of the amount of training data on accuracy. • Accuracy of reliable tag assignments.

  25. The German Negra Corpus

  26. The German Negra Corpus • The German Negra corpus consists of 20,000 sentences and approximately 355,000 tokens of newspaper text. • It was developed at Saarland University. The author used only its part-of-speech annotation in this evaluation.

  27. Learning Curve of the German Negra Corpus

  28. Learning Curve of the German Negra Corpus • The learning curve shows accuracy as a function of the training data: accuracy grows with the amount of training data. • Training length is the number of tokens; each training length was tested ten times and the results were averaged. • The author observed that tagging accuracy is very high for known words, even when the tagger had encountered a token only once during training, which considerably improves the chances of correctly tagging unknown words.

  29. Accuracy Of Reliable Assignments

  30. Accuracy Of Reliable Assignments • Shows the accuracy of the tagger when assignments are separated into reliable and unreliable ones by comparing the probability quotient against a threshold. • As expected, accuracy on reliable assignments is much higher than on unreliable ones. • This is quite useful for corpus annotation projects: if the best tag is classified as unreliable, it can be passed to a human annotator.

  31. The Penn Treebank

  32. The Penn Treebank • An important tagset for English is the 45-tag Penn Treebank tagset, which has been used to label many corpora. The author used the Wall Street Journal (WSJ) portion as contained in the Penn Treebank. • The WSJ consists of approx. 50,000 sentences and 1.2 million tokens. The corpus annotation consists of four parts; the author used only the part-of-speech annotation.

  33. Learning Curve of the Penn Treebank

  34. Learning Curve of the Penn Treebank • The learning curve shows accuracy as a function of the training data: accuracy grows with the amount of training data. • Training length is the number of tokens; each training length was tested ten times and the results were averaged. • The author observed that tagging accuracy is very high for known words, even when the tagger had encountered a token only once during training, which considerably improves the chances of correctly tagging unknown words.

  35. Evaluation by others

  36. Different Languages • Does not work well for morphologically complex languages (e.g. Icelandic). • Solution: fill the gaps in the lexicon using language-specific morphological analyzers [Lof07]. • Worked well for German, though.

  37. Different Domains • Works well for domain-specific POS tasks: • if trained using a large domain-specific corpus, or • if trained using a large generic corpus together with an additional small domain-specific corpus [CPA+05].

  38. Accuracy Issue • The author has not explained anything about sentence-level accuracy. • At first glance, current part-of-speech taggers work rapidly and reliably, with per-token accuracies of slightly over 97% [1–4].

  39. Different POS Tagging Error types

  40. Morphologically Rich Languages • The TnT statistical part-of-speech tagger did not perform well on morphologically rich languages like Icelandic. • A solution was proposed by Loftsson (2007): filling in the gaps in the lexicon using a morphological analyzer for the language.

  41. TnT Proprietary Issue • The code is not open source. • A ubiquitous problem in HMM tagging originates from the standard way of calculating lexical probabilities by means of a lexicon generated during training.

  42. Conclusion • A significant milestone in the history of part-of-speech tagging. • A good point of entry into statistical NLP. • The author clearly explained and outlined the procedure for handling the start and end of a sequence and the exact smoothing technique. • The author not only achieved good results for the German and English corpora; the model also performed well for other languages.
