PART-OF-SPEECH TAGGING
Topics of the next three lectures • Tagsets • Rule-based tagging • Brill tagger • Tagging with Markov models • The Viterbi algorithm
POS tagging: the problem • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN • Problem: assign a tag to race • Requires: tagged corpus
Why is POS tagging useful? • Makes search of patterns of interest to linguists in a corpus much easier (original motivation!) • Useful as a basis for parsing • For applications such as IR, provides some degree of meaning distinction • In ASR, helps selection of next word
Ambiguity in POS tagging
The    AT
man    NN  VB
still  NN  VB  RB
saw    NN  VBD
her    PPO PP$
How hard is POS tagging? In the Brown corpus: • 11.5% of word types are ambiguous • about 40% of word TOKENS are ambiguous
Frequency + Context • Both the Brill tagger and HMM-based taggers achieve good results by combining • FREQUENCY • I poured FLOUR/NN into the bowl. • Peter should FLOUR/VB the baking tray. • Information about CONTEXT • I saw the new/JJ PLAY/NN in the theater. • The boy will/MD PLAY/VB in the garden.
The importance of context • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
Choosing a tagset • The choice of tagset greatly affects the difficulty of the problem • Need to strike a balance between • getting better information about context (best: introduce more distinctions) • making it possible for classifiers to do their job (need to minimize distinctions)
Some of the best-known Tagsets • Brown corpus: 87 tags • Penn Treebank: 45 tags • Lancaster UCREL C5 (used to tag the BNC): 61 tags • Lancaster C7: 145 tags
POS tags in the Brown corpus Television/NN has/HVZ yet/RB to/TO work/VB out/RP a/AT living/VBG arrangement/NN with/IN jazz/NN ,/, which/WDT comes/VBZ to/IN the/AT medium/NN more/QL as/CS an/AT uneasy/JJ guest/NN than/CS as/CS a/AT relaxed/VBN member/NN of/IN the/AT family/NN ./.
SGML-based POS in the BNC
<div1 complete=y org=seq> <head> <s n=00040> <w NN2>TROUSERS <w VVB>SUIT </head>
<caption> <s n=00041> <w EX0>There <w VBZ>is <w PNI>nothing <w AJ0>masculine <w PRP>about <w DT0>these <w AJ0>new <w NN1>trouser <w NN2-VVZ>suits <w PRP>in <w NN1>summer<w POS>'s <w AJ0>soft <w NN2>pastels<c PUN>.
<s n=00042> <w NP0>Smart <w CJC>and <w AJ0>acceptable <w PRP>for <w NN1>city <w NN1-VVB>wear <w CJC>but <w AJ0>soft <w AV0>enough <w PRP>for <w AJ0>relaxed <w NN2>days </caption>
Quick test DoCoMo and Sony are to develop a chip that would let people pay for goods through their mobiles.
Tagging methods • Hand-coded • Brill tagger • Statistical (Markov) taggers
Hand-coded POS tagging: the two-stage architecture • Early POS taggers were all hand-coded • Most of these (Harris, 1962; Greene and Rubin, 1971) and the best of the recent ones, ENGTWOL (Voutilainen, 1995), are based on a two-stage architecture
Hand-coded rules (ENGTWOL) STEP 1: assign to each word a list of potential parts of speech • in ENGTWOL, this is done by a two-level morphological analyzer (a finite-state transducer) STEP 2: use about 1000 hand-coded CONSTRAINTS (if-then rules) to choose a tag using contextual information • the constraints act as FILTERS
Example Pavlov had shown that salivation ….
A constraint
ADVERBIAL-THAT RULE
Given input: "that"
if
  (+1 A/ADV/QUANT);   /* next word is adj, adv, or quantifier */
  (+2 SENT-LIM);      /* and following that there is a sentence boundary */
  (NOT -1 SVOC/A);    /* and the previous word is not a verb like `consider' */
then eliminate non-ADV tags
else eliminate ADV tag.
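To make the filtering idea concrete, here is a minimal Python sketch of how such a constraint could be applied to the tag lists produced by step 1; the tag names, the simplified rule, and the toy sentence are illustrative assumptions, not ENGTWOL's actual rule formalism or output.

# A minimal, illustrative sketch of constraint-based tag filtering.
# The tag inventory and this simplified ADVERBIAL-THAT rule are assumptions.

def adverbial_that_rule(tags_per_word, position):
    """If 'that' is followed by an adj/adv/quantifier and then a sentence
    boundary, and is not preceded by a verb like 'consider', keep only the
    ADV reading; otherwise discard the ADV reading."""
    candidates = tags_per_word[position]
    if "ADV" not in candidates:
        return candidates
    next_ok = position + 1 < len(tags_per_word) and \
        tags_per_word[position + 1] & {"A", "ADV", "QUANT"}
    boundary = position + 2 >= len(tags_per_word) or \
        "SENT-LIM" in tags_per_word[position + 2]
    prev_ok = position == 0 or "SVOC/A" not in tags_per_word[position - 1]
    if next_ok and boundary and prev_ok:
        return {"ADV"}                     # eliminate non-ADV tags
    return candidates - {"ADV"}            # eliminate the ADV tag

# Step 1 output for "I was not that happy ." : each word keeps all possible tags
sentence = [{"PRON"}, {"V"}, {"NEG"}, {"ADV", "DET", "CS"},
            {"A"}, {"SENT-LIM"}]
print(adverbial_that_rule(sentence, 3))    # -> {'ADV'}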
Tagging with lexical frequencies • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN • Problem: assign a tag to race using only its lexical frequency • Solution: choose the tag with the greater of these probabilities: • P(race|VB) • P(race|NN) • Actual estimates from the Switchboard corpus: • P(race|NN) = .00041 • P(race|VB) = .00003
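As a sketch, the frequency-only decision reduces to comparing these two numbers (values copied from the slide above; the variable names are just for illustration):

# Frequency-only tagging sketch: pick the tag t maximizing P(race | t).
p_race_given = {"NN": 0.00041, "VB": 0.00003}
best_tag = max(p_race_given, key=p_race_given.get)
print(best_tag)   # -> NN, so a purely lexical tagger always tags "race" as NN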
The Brill tagger • An example of TRANSFORMATION-BASED LEARNING • Very popular (freely available, works fairly well) • A SUPERVISED method: requires a tagged corpus • Basic idea: do a quick job first (using frequency), then revise it using contextual rules
An example • Examples: • It is expected to race tomorrow. • The race for outer space. • Tagging algorithm: • Tag all uses of “race” as NN (most likely tag in the Brown corpus) • It is expected to race/NN tomorrow • the race/NN for outer space • Use a transformation rule to replace the tag NN with VB for all uses of “race” preceded by the tag TO: • It is expected to race/VB tomorrow • the race/NN for outer space
Transformation-based learning in the Brill tagger • Tag the corpus with the most likely tag for each word • Choose a TRANSFORMATION that deterministically replaces an existing tag with a new one such that the resulting tagged corpus has the lowest error rate • Apply that transformation to the training corpus • Repeat • Return a tagger that • first tags using unigrams • then applies the learned transformations in order
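A minimal sketch of this learning loop in Python follows; the single rule template (change tag a to b when the previous tag is z), the helper functions, and the tiny corpus are illustrative assumptions, not Brill's full template set.

from collections import Counter
from itertools import product

def unigram_tagger(corpus):
    """Most-likely-tag-per-word baseline, learned from a tagged corpus."""
    counts = {}
    for word, tag in (pair for sent in corpus for pair in sent):
        counts.setdefault(word, Counter())[tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def errors(pred, gold):
    return sum(p != g for p, g in zip(pred, gold))

def learn_transformations(corpus, n_rules=5):
    # The sketch flattens sentences, so "previous tag" can cross a boundary,
    # and assumes every word was seen in training.
    gold = [t for sent in corpus for _, t in sent]
    words = [w for sent in corpus for w, _ in sent]
    lexicon = unigram_tagger(corpus)
    current = [lexicon[w] for w in words]            # initial (unigram) tagging
    tagset = set(gold)
    rules = []
    for _ in range(n_rules):
        best = None
        # Template: "change tag a to tag b when the previous tag is z"
        for a, b, z in product(tagset, tagset, tagset):
            if a == b:
                continue
            proposal = [b if t == a and i > 0 and current[i - 1] == z else t
                        for i, t in enumerate(current)]
            gain = errors(current, gold) - errors(proposal, gold)
            if best is None or gain > best[0]:
                best = (gain, (a, b, z), proposal)
        if best[0] <= 0:                             # no rule reduces the error
            break
        rules.append(best[1])
        current = best[2]                            # apply it and repeat
    return lexicon, rules

# Tiny illustrative corpus: "race" is NN by frequency, and the learned
# rule should turn NN into VB after TO.
corpus = [[("to", "TO"), ("race", "VB")],
          [("the", "DT"), ("race", "NN")],
          [("the", "DT"), ("race", "NN")]]
lexicon, rules = learn_transformations(corpus)
print(lexicon["race"], rules)   # -> NN [('NN', 'VB', 'TO')]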
Markov Model POS tagging • Again, the problem is to find an `explanation' with the highest probability: the tag sequence t_1 .. t_n that maximizes P(t_1 .. t_n | w_1 .. w_n) • As in yesterday's case, this can be `turned around' using Bayes' Rule: P(t_1 .. t_n | w_1 .. w_n) = P(w_1 .. w_n | t_1 .. t_n) P(t_1 .. t_n) / P(w_1 .. w_n)
Combining frequency and contextual information • As in the case of spelling, this equation can be simplified: the denominator P(w_1 .. w_n) is the same for every tag sequence, so it is enough to maximize P(w_1 .. w_n | t_1 .. t_n) P(t_1 .. t_n) • As we will see, once further simplifications are applied, this equation will encode both FREQUENCY and CONTEXT INFORMATION
Three further assumptions • MARKOV assumption: a tag only depends on a FIXED NUMBER of previous tags (here, assume bigrams) • This simplifies the second factor: P(t_1 .. t_n) ≈ ∏_i P(t_i | t_{i-1}) • INDEPENDENCE assumption: words are independent of each other, and a word's identity only depends on its own tag • This simplifies the first factor: P(w_1 .. w_n | t_1 .. t_n) ≈ ∏_i P(w_i | t_i)
The final equation
t̂_1 .. t̂_n = argmax over t_1 .. t_n of ∏_i P(w_i | t_i) P(t_i | t_{i-1})
• P(w_i | t_i) is the FREQUENCY term
• P(t_i | t_{i-1}) is the CONTEXT term
Estimating the probabilities Can be done using Maximum Likelihood Estimation as usual, for BOTH probabilities: • P(t_i | t_{i-1}) = C(t_{i-1}, t_i) / C(t_{i-1}) • P(w_i | t_i) = C(t_i, w_i) / C(t_i)
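A minimal sketch of these counts in Python, assuming a toy corpus of (word, tag) pairs; no smoothing is applied:

from collections import Counter

def mle_estimates(corpus):
    """Relative-frequency estimates of P(tag | previous tag) and P(word | tag)
    from a list of tagged sentences; a sketch without smoothing."""
    tag_count, bigram_count, emit_count = Counter(), Counter(), Counter()
    for sent in corpus:
        prev = "<s>"                        # sentence-initial pseudo-tag
        for word, tag in sent:
            tag_count[tag] += 1
            bigram_count[(prev, tag)] += 1
            emit_count[(tag, word)] += 1
            prev = tag
        tag_count["<s>"] += 1               # one <s> per sentence
    p_trans = {(t1, t2): c / tag_count[t1] for (t1, t2), c in bigram_count.items()}
    p_emit = {(t, w): c / tag_count[t] for (t, w), c in emit_count.items()}
    return p_trans, p_emit

corpus = [[("to", "TO"), ("race", "VB")], [("the", "DT"), ("race", "NN")]]
p_trans, p_emit = mle_estimates(corpus)
print(p_trans[("TO", "VB")], p_emit[("NN", "race")])   # -> 1.0 1.0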
An example of tagging with Markov Models : • Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN • People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN • Problem: assign a tag to race given the subsequences • to/TO race/??? • the/DT race/??? • Solution: we choose the tag that has the greater of these probabilities: • P(VB|TO) P(race|VB) • P(NN|TO)P(race|NN)
Tagging with MMs (2) • Actual estimates from the Switchboard corpus: • LEXICAL FREQUENCIES: • P(race|NN) = .00041 • P(race|VB) = .00003 • CONTEXT: • P(NN|TO) = .021 • P(VB|TO) = .34 • The probabilities: • P(VB|TO) P(race|VB) = .00001 • P(NN|TO)P(race|NN) = .000007
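Putting the two together, the decision for this example is just a comparison of two products (a sketch using the estimates quoted above):

# Combine lexical frequency and context: pick the tag t maximizing
# P(t | TO) * P(race | t), using the Switchboard estimates above.
p_context = {"NN": 0.021, "VB": 0.34}       # P(t | TO)
p_lexical = {"NN": 0.00041, "VB": 0.00003}  # P(race | t)
scores = {t: p_context[t] * p_lexical[t] for t in ("NN", "VB")}
best = max(scores, key=scores.get)
print(best, scores)   # -> VB wins, even though NN is the more frequent tag for "race"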
Computing the most likely sequence of tags • In general, the problem of computing the most likely sequence t1 .. tn could have exponential complexity • It can however be solved in polynomial time using an example of DYNAMIC PROGRAMMING: the VITERBI ALGORITHM (Viterbi, 1967) • (Also called TRELLIS ALGORITHMs)
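A minimal Viterbi sketch for a bigram tagger follows, assuming transition and emission dictionaries in the same format as the MLE sketch above; the toy numbers are illustrative, and there are no log probabilities, no smoothing, and no unknown-word handling.

def viterbi(words, tags, p_trans, p_emit, start="<s>"):
    """Most likely tag sequence under a bigram HMM: dynamic programming
    over a trellis of (position, tag) cells."""
    # best[i][t] = probability of the best path for words[:i+1] ending in tag t
    # back[i][t] = previous tag on that best path
    best = [{} for _ in words]
    back = [{} for _ in words]
    for t in tags:
        best[0][t] = p_trans.get((start, t), 0.0) * p_emit.get((t, words[0]), 0.0)
        back[0][t] = start
    for i in range(1, len(words)):
        for t in tags:
            emit = p_emit.get((t, words[i]), 0.0)
            prev, score = max(
                ((p, best[i - 1][p] * p_trans.get((p, t), 0.0) * emit) for p in tags),
                key=lambda pair: pair[1])
            best[i][t], back[i][t] = score, prev
    # follow backpointers from the best final cell
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

# Toy model (hypothetical numbers, just to exercise the code)
tags = ["TO", "VB", "NN", "DT"]
p_trans = {("<s>", "TO"): 0.1, ("TO", "VB"): 0.8, ("TO", "NN"): 0.2,
           ("<s>", "DT"): 0.5, ("DT", "NN"): 0.7}
p_emit = {("TO", "to"): 1.0, ("VB", "race"): 0.001, ("NN", "race"): 0.002,
          ("DT", "the"): 0.6}
print(viterbi(["to", "race"], tags, p_trans, p_emit))   # -> ['TO', 'VB']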
Markov chains and Hidden Markov Models • Markov chain: only transition probabilities; each node is associated with a single OUTPUT • Hidden Markov Model: nodes may have more than one output, with a probability P(w|t) of outputting word w from state t
Training HMMs • One reason HMMs are so popular is that they come with a LEARNING ALGORITHM: the FORWARD-BACKWARD algorithm (an instance of the class of algorithms called EM algorithms) • Basic idea of the forward-backward algorithm: start by assigning random transition and emission probabilities, then iterate, re-estimating the probabilities from the training data until they converge
Evaluation of POS taggers Can reach up to 96.7% correct on Penn Treebank (see Brants, 2000) (But see next lecture)
Additional issues • Most of the difference in performance between POS tagging algorithms depends on their treatment of UNKNOWN WORDS • Multiple-token words (`Penn Treebank') • Class-based N-grams