Lecture 3



    1. Lecture 3 Finish language models; take a look at some NYT articles; get started with part-of-speech tagging

    2. Estimating the probability of words Training a (language) model: unigram (word) probabilities, bigram (word) probabilities, trigram (word) probabilities. Issues: the longer the context, the more accurate the prediction, but only if we have seen the context.

    3. Calculating the probabilities of sentences This is really the valuable application of the “huge probability tables”. Under a unigram LM: P(w1w2…wn) = P(w1)P(w2)…P(wn). Under a bigram LM: P(w1w2…wn) = P(w1|<s>)P(w2|w1)…P(wn|wn-1)P(</s>|wn). A small sketch of this computation follows below.
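A minimal sketch of the bigram computation above (the table values and function name are invented for illustration; real systems work in log space to avoid underflow):

```python
import math

# Toy bigram table P(w_i | w_{i-1}); the values are made up for illustration.
bigram_prob = {
    ("<s>", "i"): 0.25, ("i", "want"): 0.33,
    ("want", "food"): 0.05, ("food", "</s>"): 0.40,
}

def sentence_logprob(words, probs):
    """Sum log P(w_i | w_{i-1}) over the sentence padded with <s> and </s>."""
    padded = ["<s>"] + words + ["</s>"]
    total = 0.0
    for prev, cur in zip(padded, padded[1:]):
        total += math.log(probs[(prev, cur)])  # KeyError = unseen bigram (needs smoothing)
    return total

logp = sentence_logprob(["i", "want", "food"], bigram_prob)
print(math.exp(logp))  # 0.25 * 0.33 * 0.05 * 0.40
```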

    4. Maximum likelihood estimate The simple count-and-divide method was just one way to estimate the probabilities of words. It is easy, and it also gives the highest possible probability to the training data. But it is not necessarily the best estimate of what we will see in new texts.
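For reference, the count-and-divide estimate for a bigram is just a relative frequency over the training corpus:

```latex
P_{\mathrm{MLE}}(w_i \mid w_{i-1}) \;=\; \frac{C(w_{i-1}\,w_i)}{C(w_{i-1})}
```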

    5. Smoothing Smoothing helps us adjust these MLEs in a way that makes the training data less likely, but may be more predictive of new texts we see. Allowing for unseen words (= giving them non-zero probabilities). Estimates of probabilities we have seen just a few times are not that accurate.

    6. Smoothing methods Add one (Laplace): start counting from 1 instead of 0. Good-Turing: deal with classes of words (words with the same frequency) rather than individual words. Interpolation: combine evidence from unigram, bigram and trigram. Backoff: favor higher-order n-grams, but if they are not available, use shorter contexts. A sketch of add-one smoothing follows below.
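A minimal sketch of add-one (Laplace) smoothing for bigrams (the toy corpus and names are invented for illustration):

```python
from collections import Counter

tokens = "<s> the cat sat on the mat </s>".split()  # toy training corpus
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
V = len(set(tokens))  # vocabulary size

def laplace_bigram(prev, cur):
    """Add-one smoothed P(cur | prev): every bigram count is incremented by 1."""
    return (bigram_counts[(prev, cur)] + 1) / (unigram_counts[prev] + V)

print(laplace_bigram("the", "cat"))  # a seen bigram
print(laplace_bigram("the", "dog"))  # an unseen bigram still gets non-zero probability
```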

    7. Applications Language recognition Predictive text input on cell phones/assistive technology Machine translation Speech recognition Spell checking Authorship identification

    8. Evaluation Standard method: train the parameters of our model on a training set, then look at the model's performance on some new data. This is exactly what happens in the real world; we want to know how our model performs on data we haven't seen. So use a test set: a dataset which is different from our training set, but is drawn from the same source. Then we need an evaluation metric to tell us how well our model is doing on the test set. One such metric is perplexity (introduced below).

    9. Perplexity Perplexity is the inverse probability of the test set (assigned by the language model), normalized by the number of words: $PP(W) = P(w_1 w_2 \ldots w_N)^{-1/N}$. By the chain rule: $PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})}}$. For bigrams: $PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-1})}}$
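A minimal sketch of the computation (function names are mine, continuing the toy example from slide 3):

```python
import math

def perplexity(test_sentences, sentence_logprob_fn):
    """Perplexity = exp of the negative average per-word log probability."""
    total_logprob, total_words = 0.0, 0
    for words in test_sentences:
        total_logprob += sentence_logprob_fn(words)
        total_words += len(words) + 1  # +1 for the </s> prediction in each sentence
    return math.exp(-total_logprob / total_words)

# With the sentence_logprob sketch from slide 3:
# pp = perplexity([["i", "want", "food"]],
#                 lambda ws: sentence_logprob(ws, bigram_prob))
```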

    10. Lower perplexity means a better model Training on 38 million words and testing on 1.5 million words of WSJ text, J&M report perplexities of 962 for a unigram model, 170 for a bigram model, and 109 for a trigram model.

    11. Entropy (see J&M, p. 114)

    12. How different are probability distributions? Compare probability distributions (tables) derived from two different corpora, for example Shakespeare and the NYT. KL divergence is one way to measure the difference: the more the probabilities differ, the larger the divergence. The standard formula is given below.
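For reference, the KL divergence between distributions p and q over the same vocabulary is:

```latex
D_{\mathrm{KL}}(p \parallel q) \;=\; \sum_{w \in V} p(w) \log \frac{p(w)}{q(w)}
```

It is zero exactly when the two distributions agree, and it is asymmetric: in general D(p||q) ≠ D(q||p).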

    13. Starting a new topic today (will continue over the next two lectures): parts of speech (POS), tagsets, POS tagging, rule-based tagging, HMMs and the Viterbi algorithm.

    14. Parts of Speech 8 (ish) traditional parts of speech Noun, verb, adjective, preposition, adverb, article, interjection, pronoun, conjunction, etc Called: parts-of-speech, lexical categories, word classes, morphological classes, lexical tags... Lots of debate within linguistics about the number, nature, and universality of these We’ll completely ignore this debate.

    15. POS examples
    N    noun         chair, bandwidth, pacing
    V    verb         study, debate, munch
    ADJ  adjective    purple, tall, ridiculous
    ADV  adverb       unfortunately, slowly
    P    preposition  of, by, to
    PRO  pronoun      I, me, mine
    DET  determiner   the, a, that, those

    16. POS Tagging The process of assigning a part-of-speech or lexical class marker to each word in a collection.

    17. Why is POS Tagging Useful? First step of a vast number of practical tasks. Speech synthesis: how to pronounce “lead”? INsult vs. inSULT, OBject vs. obJECT, OVERflow vs. overFLOW, DIScount vs. disCOUNT, CONtent vs. conTENT. Parsing: need to know if a word is an N or V before you can parse. Information extraction: finding names, relations, etc. Machine translation.

    18. Open and Closed Classes Closed class: a small, fixed membership. Prepositions: of, in, by, … Auxiliaries: may, can, will, had, been, … Pronouns: I, you, she, mine, his, them, … Usually function words (short common words which play a role in grammar). Open class: new ones can be created all the time. English has 4: nouns, verbs, adjectives, adverbs. Many languages have these 4, but not all!

    19. Open Class Words Nouns: proper nouns (Boulder, Granby, Eli Manning); English capitalizes these. Common nouns (the rest): count nouns and mass nouns. Count: have plurals, get counted: goat/goats, one goat, two goats. Mass: don't get counted (snow, salt, communism) (*two snows). Adverbs: tend to modify things: Unfortunately, John walked home extremely slowly yesterday. Directional/locative adverbs (here, home, downhill). Degree adverbs (extremely, very, somewhat). Manner adverbs (slowly, slinkily, delicately). Verbs: in English, have morphological affixes (eat/eats/eaten).

    20. Closed Class Words Examples: prepositions: on, under, over, … particles: up, down, on, off, … determiners: a, an, the, … pronouns: she, who, I, … conjunctions: and, but, or, … auxiliary verbs: can, may, should, … numerals: one, two, three, third, …

    21. Prepositions from CELEX

    22. English Particles

    23. Conjunctions

    24. POS Tagging: Choosing a Tagset There are many potential distinctions we can draw among parts of speech. To do POS tagging, we need to choose a standard set of tags to work with. We could pick a very coarse tagset: N, V, Adj, Adv. A more commonly used set is finer grained: the “Penn TreeBank tagset”, with 45 tags (PRP$, WRB, WP$, VBG, …). Even more fine-grained tagsets exist.

    25. Penn TreeBank POS Tagset

    26. Using the Penn Tagset The/DT grand/JJ jury/NN commented/VBD on/IN a/DT number/NN of/IN other/JJ topics/NNS ./. Prepositions and subordinating conjunctions are marked IN (“although/IN I/PRP…”), except the preposition/complementizer “to”, which is just marked TO.

    27. POS Tagging Words often have more than one POS; consider “back”: The back door = JJ; On my back = NN; Win the voters back = RB; Promised to back the bill = VB. The POS tagging problem is to determine the POS tag for a particular instance of a word.

    28. How Hard is POS Tagging? Measuring Ambiguity

    29. Two Methods for POS Tagging Rule-based tagging (ENGTWOL). Stochastic: probabilistic sequence models, e.g. HMM (Hidden Markov Model) tagging and MEMMs (Maximum Entropy Markov Models).

    30. Rule-Based Tagging Start with a dictionary. Assign all possible tags to words from the dictionary. Write rules by hand to selectively remove tags, leaving the correct tag for each word.

    31. Start With a Dictionary
    she: PRP
    promised: VBN, VBD
    to: TO
    back: VB, JJ, RB, NN
    the: DT
    bill: NN, VB
    Etc… for the ~100,000 words of English with more than one tag

    32. Assign Every Possible Tag She/PRP promised/{VBN, VBD} to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}

    33. Write Rules to Eliminate Tags Eliminate VBN if VBD is an option when VBN|VBD follows “<start> PRP”: She/PRP promised/VBD to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}. A sketch of the whole dictionary-plus-rules pipeline follows below.
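A minimal sketch of the dictionary-plus-elimination pipeline (the helper names and rule encoding are invented for illustration; ENGTWOL's actual formalism is richer):

```python
# Possible tags per word, from slide 31.
lexicon = {
    "she": {"PRP"}, "promised": {"VBN", "VBD"}, "to": {"TO"},
    "back": {"VB", "JJ", "RB", "NN"}, "the": {"DT"}, "bill": {"NN", "VB"},
}

def rule_based_tag(words):
    # Stage 1: assign every tag the dictionary allows.
    candidates = [set(lexicon[w.lower()]) for w in words]
    # Stage 2: hand-written elimination rules. Here, only the rule from
    # slide 33: drop VBN when VBD is also possible for the word right
    # after a sentence-initial PRP.
    for i, tags in enumerate(candidates):
        if {"VBN", "VBD"} <= tags and i == 1 and "PRP" in candidates[0]:
            tags.discard("VBN")
    return candidates

print(rule_based_tag("She promised to back the bill".split()))
# [{'PRP'}, {'VBD'}, {'TO'}, {'VB', 'JJ', 'RB', 'NN'}, {'DT'}, {'NN', 'VB'}]
```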

    34. Stage 1 of ENGTWOL Tagging First Stage: Run words through an FST morphological analyzer to get all parts of speech. Example: “Pavlov had shown that salivation …”
    Pavlov      PAVLOV N NOM SG PROPER
    had         HAVE V PAST VFIN SVO
                HAVE PCP2 SVO
    shown       SHOW PCP2 SVOO SVO SV
    that        ADV
                PRON DEM SG
                DET CENTRAL DEM SG
                CS
    salivation  N NOM SG

    35. Stage 2 of ENGTWOL Tagging Second Stage: Apply NEGATIVE constraints. Example: the adverbial “that” rule eliminates all readings of “that” except the one in “It isn't that odd”.
    Given input: “that”
    If   (+1 A/ADV/QUANT)   ; if the next word is an adjective, adverb, or quantifier
         (+2 SENT-LIM)      ; following which is the end of the sentence
         (NOT -1 SVOC/A)    ; and the previous word is not a verb like “consider”
                            ; which allows adjective complements, as in “I consider that odd”
    then eliminate non-ADV tags
    else eliminate ADV

    36. Hidden Markov Model Tagging Using an HMM to do POS tagging is a special case of Bayesian inference. Foundational work in computational linguistics: Bledsoe 1959 (OCR); Mosteller and Wallace 1964 (authorship identification). It is also related to the “noisy channel” model that is the basis for ASR, OCR and MT.

    37. POS Tagging as Sequence Classification We are given a sentence (an “observation” or “sequence of observations”) Secretariat is expected to race tomorrow What is the best sequence of tags that corresponds to this sequence of observations? Probabilistic view: Consider all possible sequences of tags Out of this universe of sequences, choose the tag sequence which is most probable given the observation sequence of n words w1…wn.

    38. Getting to HMMs We want, out of all sequences of n tags t1…tn, the single tag sequence such that P(t1…tn|w1…wn) is highest: $\hat{t}_1^n = \operatorname{argmax}_{t_1^n} P(t_1^n \mid w_1^n)$. The hat ^ means “our estimate of the best one”; argmax_x f(x) means “the x such that f(x) is maximized”.

    39. Getting to HMMs This equation is guaranteed to give us the best tag sequence. But how do we make it operational? How do we compute this value? Intuition of Bayesian classification: use Bayes' rule to transform the equation into a set of other probabilities that are easier to compute.

    40. Using Bayes Rule Bayes' rule transforms the conditional we want into quantities we can estimate: $P(t_1^n \mid w_1^n) = \frac{P(w_1^n \mid t_1^n)\, P(t_1^n)}{P(w_1^n)}$. Since the denominator is the same for every candidate tag sequence, we can drop it: $\hat{t}_1^n = \operatorname{argmax}_{t_1^n} P(w_1^n \mid t_1^n)\, P(t_1^n)$

    41. Likelihood and Prior Two simplifying assumptions make these terms computable: each word depends only on its own tag, and each tag depends only on the previous tag: $P(w_1^n \mid t_1^n) \approx \prod_{i=1}^{n} P(w_i \mid t_i)$ and $P(t_1^n) \approx \prod_{i=1}^{n} P(t_i \mid t_{i-1})$, giving $\hat{t}_1^n = \operatorname{argmax}_{t_1^n} \prod_{i=1}^{n} P(w_i \mid t_i)\, P(t_i \mid t_{i-1})$

    42. Two Kinds of Probabilities Tag transition probabilities p(ti|ti-1). Determiners are likely to precede adjectives and nouns: That/DT flight/NN, The/DT yellow/JJ hat/NN. So we expect P(NN|DT) and P(JJ|DT) to be high, but P(DT|JJ) to be low. Compute P(NN|DT) by counting in a labeled corpus: $P(NN \mid DT) = \frac{C(DT, NN)}{C(DT)}$

    43. Two Kinds of Probabilities Word likelihood probabilities p(wi|ti). VBZ (3sg present verb) is likely to be “is”. Compute P(is|VBZ) by counting in a labeled corpus: $P(is \mid VBZ) = \frac{C(VBZ, is)}{C(VBZ)}$. A counting sketch for both kinds of probabilities follows below.
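A minimal counting sketch for both kinds of probabilities (the corpus format and names are invented for illustration):

```python
from collections import Counter

# A tiny labeled corpus: one list of (word, tag) pairs per sentence.
corpus = [
    [("the", "DT"), ("grand", "JJ"), ("jury", "NN")],
    [("the", "DT"), ("yellow", "JJ"), ("hat", "NN")],
]

tag_counts, trans_counts, emit_counts = Counter(), Counter(), Counter()
for sent in corpus:
    tags = ["<s>"] + [t for _, t in sent]
    tag_counts.update(tags)
    trans_counts.update(zip(tags, tags[1:]))  # (prev_tag, tag) pairs
    emit_counts.update(sent)                  # (word, tag) pairs

def p_trans(prev, tag):   # P(tag | prev) = C(prev, tag) / C(prev)
    return trans_counts[(prev, tag)] / tag_counts[prev]

def p_emit(word, tag):    # P(word | tag) = C(tag, word) / C(tag)
    return emit_counts[(word, tag)] / tag_counts[tag]

print(p_trans("DT", "JJ"))   # 1.0 in this toy corpus
print(p_emit("jury", "NN"))  # 0.5
```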

    44. Example: The Verb “race” Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NR People/NNS continue/VB to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN How do we pick the right tag?

    45. Disambiguating “race”

    46. Example P(NN|TO) = .00047, P(VB|TO) = .83, P(race|NN) = .00057, P(race|VB) = .00012, P(NR|VB) = .0027, P(NR|NN) = .0012. P(VB|TO)P(NR|VB)P(race|VB) = .00000027; P(NN|TO)P(NR|NN)P(race|NN) = .00000000032. So we (correctly) choose the verb reading.

    47. Hidden Markov Models What we’ve described with these two kinds of probabilities is a Hidden Markov Model (HMM)

    48. Definitions A weighted finite-state automaton (WFSA) adds probabilities to the arcs; the probabilities on the arcs leaving any state must sum to one. A Markov chain is a special case of a WFSA in which the input sequence uniquely determines which states the automaton will go through. Markov chains can't represent inherently ambiguous problems; they are useful for assigning probabilities to unambiguous sequences.

    49. Markov Chain for Weather

    50. Markov Chain for Words

    51. Markov Chain: “First-order observable Markov Model” A set of states Q = q1, q2 … qN; the state at time t is qt. Transition probabilities: a set of probabilities A = a01, a02, …, an1, …, ann, where each aij represents the probability of transitioning from state i to state j. The set of these is the transition probability matrix A. The current state depends only on the previous state.

    52. Markov Chain for Weather What is the probability of 4 consecutive rainy days? The sequence is rainy-rainy-rainy-rainy, i.e., the state sequence is 3-3-3-3. $P(3,3,3,3) = \pi_3\, a_{33}\, a_{33}\, a_{33} = 0.2 \times (0.6)^3 = 0.0432$

    53. HMM for Ice Cream You are a climatologist in the year 2799 studying global warming. You can't find any records of the weather in Baltimore, MD for the summer of 2007, but you find Jason Eisner's diary, which lists how many ice creams Jason ate every day that summer. Our job: figure out how hot it was.

    54. Hidden Markov Model For Markov chains, the output symbols are the same as the states: see hot weather, and we're in state hot. But in part-of-speech tagging (and other things) the output symbols are words while the hidden states are part-of-speech tags. So we need an extension! A Hidden Markov Model is an extension of a Markov chain in which the output symbols are not the same as the states: we don't know which state we are in.

    55. Hidden Markov Models
    States Q = q1, q2 … qN
    Observations O = o1, o2 … oN; each observation is a symbol drawn from a vocabulary V = {v1, v2, … vV}
    Transition probabilities: transition probability matrix A = {aij}
    Observation likelihoods: output probability matrix B = {bi(k)}
    Special initial probability vector π
    A concrete toy instance for the ice-cream task is sketched below.
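To make the components concrete, here is a minimal encoding for the ice-cream task. The particular numbers are assumptions for the sketch (the slide with the actual figures is not reproduced here):

```python
# Hot/Cold weather HMM for the ice-cream task; the probabilities are illustrative.
states = ["HOT", "COLD"]
pi = {"HOT": 0.8, "COLD": 0.2}          # initial probability vector
A = {                                   # transition probability matrix
    "HOT":  {"HOT": 0.7, "COLD": 0.3},
    "COLD": {"HOT": 0.4, "COLD": 0.6},
}
B = {                                   # observation likelihoods: P(ice creams | weather)
    "HOT":  {1: 0.2, 2: 0.4, 3: 0.4},
    "COLD": {1: 0.5, 2: 0.4, 3: 0.1},
}
```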

    56. Eisner Task Given Ice Cream Observation Sequence: 1,2,3,2,2,2,3… Produce: Weather Sequence: H,C,H,H,H,C…

    57. HMM for Ice Cream

    58. Transition Probabilities

    59. Observation Likelihoods

    60. Decoding OK, now we have a complete model that can give us what we need. Recall that we need to compute $\operatorname{argmax}_{t_1^n} \prod_{i=1}^{n} P(w_i \mid t_i)\, P(t_i \mid t_{i-1})$. We could just enumerate all paths given the input and use the model to assign probabilities to each, but that is not a good idea: the number of paths grows exponentially with the sequence length. Luckily, dynamic programming (last seen in Ch. 3 with minimum edit distance) helps us here.

    61. The Viterbi Algorithm

    62. Viterbi Example

    63. Viterbi Summary Create an array with columns corresponding to inputs and rows corresponding to possible states. Sweep through the array in one pass, filling the columns left to right using our transition probs and observation probs. The dynamic programming key is that we need only store the MAX prob path to each cell (not all paths). A runnable sketch follows below.
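A minimal Viterbi sketch over the toy HMM tables from slide 55 (an illustration, not the textbook's exact pseudocode):

```python
def viterbi(obs, states, pi, A, B):
    """Return the most probable hidden state sequence for the observations."""
    # Each trellis column maps state -> (best path probability, back-pointer).
    trellis = [{s: (pi[s] * B[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        column = {}
        for s in states:
            prev, prob = max(
                ((p, trellis[-1][p][0] * A[p][s] * B[s][o]) for p in states),
                key=lambda pair: pair[1],
            )
            column[s] = (prob, prev)   # store only the MAX prob path to this cell
        trellis.append(column)
    # Pick the best final state and follow the back-pointers.
    best = max(states, key=lambda s: trellis[-1][s][0])
    path = [best]
    for column in reversed(trellis[1:]):
        path.append(column[path[-1]][1])
    return list(reversed(path))

print(viterbi([3, 1, 3], states, pi, A, B))  # ['HOT', 'HOT', 'HOT'] with the toy numbers
```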

    64. Evaluation So once you have your POS tagger running, how do you evaluate it? Overall error rate with respect to a gold-standard test set. Error rates on particular tags. Error rates on particular words. Tag confusions...

    65. Error Analysis Look at a confusion matrix to see what errors are causing problems: noun (NN) vs. proper noun (NNP) vs. adjective (JJ); preterite (VBD) vs. participle (VBN) vs. adjective (JJ). A small evaluation sketch follows below.
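A minimal sketch of the evaluation loop, including confusion counts for error analysis (names invented for the example):

```python
from collections import Counter

def evaluate(predicted_tags, gold_tags):
    """Overall accuracy plus a Counter of (gold, predicted) confusions."""
    confusion = Counter(
        (g, p) for g, p in zip(gold_tags, predicted_tags) if g != p
    )
    correct = sum(g == p for g, p in zip(gold_tags, predicted_tags))
    return correct / len(gold_tags), confusion

acc, confusion = evaluate(["NN", "VB", "NN", "NN"], ["NN", "VB", "JJ", "NN"])
print(acc)                       # 0.75
print(confusion.most_common(3))  # which (gold, predicted) pairs dominate the errors
```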

    66. Evaluation The result is compared with a manually coded “Gold Standard”. Typically accuracy reaches 96-97%. This may be compared with the result for a baseline tagger (one that uses no context). Important: 100% is impossible even for human annotators.

    67. Summary Parts of speech Tagsets Part of speech tagging HMM Tagging Markov Chains Hidden Markov Models
