
  1. Introduction to Natural Language Processing (600.465): Language Modeling (and the Noisy Channel). Dr. Jan Hajič, CS Dept., Johns Hopkins Univ. hajic@cs.jhu.edu www.cs.jhu.edu/~hajic

  2. The Noisy Channel • Prototypical case: Input → The channel (adds noise) → Output (noisy); e.g., input 0,1,1,1,0,1,0,1,... comes out as 0,1,1,0,0,1,1,0,... • Model: probability of error (noise), e.g., p(0|1) = .3, p(1|1) = .7, p(1|0) = .4, p(0|0) = .6 • The task: known: the noisy output; want to know: the input (decoding)

  3. Noisy Channel Applications • OCR • straightforward: text → print (adds noise), scan →image • Handwriting recognition • text → neurons, muscles (“noise”), scan/digitize → image • Speech recognition (dictation, commands, etc.) • text → conversion to acoustic signal (“noise”) → acoustic waves • Machine Translation • text in target language → translation (“noise”) → source language • Also: Part of Speech Tagging • sequence of tags → selection of word forms → text

  4. Noisy Channel: The Golden Rule of ... OCR, ASR, HR, MT, ... • Recall: p(A|B) = p(B|A) p(A) / p(B) (Bayes formula) • Abest = argmaxA p(A|B) = argmaxA p(B|A) p(A) (The Golden Rule; p(B) is constant for a fixed observation B, so it can be dropped) • p(B|A): the acoustic/image/translation/lexical model • application-specific name • will explore later • p(A): the language model
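A minimal sketch of the Golden Rule as runnable code, assuming toy probability tables (the candidate words, `channel_model`, and `language_model` values below are illustrative, not from the lecture):

```python
# Noisy-channel decoding sketch: pick the input A maximizing p(B|A) * p(A).
def decode(observed, candidates, channel_model, language_model):
    return max(candidates,
               key=lambda a: channel_model(observed, a) * language_model(a))

# Toy spelling-correction flavour: observed B = "minuets", candidate inputs A.
# All probabilities are made up for illustration.
language_model = {"minutes": 1e-4, "minuets": 1e-8}.get
channel_model = lambda b, a: {("minuets", "minutes"): 0.001,   # typo probability
                              ("minuets", "minuets"): 0.95}.get((b, a), 0.0)

print(decode("minuets", ["minutes", "minuets"],
             channel_model, language_model))   # -> "minutes"
```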

  5. Probabilistic Language Models • Today’s goal: assign a probability to a sentence • Machine Translation: • P(high winds tonite) > P(large winds tonite) • Spell Correction • The office is about fifteen minuets from my house • P(about fifteen minutes from) > P(about fifteen minuets from) • Speech Recognition • P(I saw a van) >> P(eyes awe of an) • + Summarization, question-answering, etc., etc.!! Why?

  6. Probabilistic Language Modeling • Goal: compute the probability of a sentence or sequence of words: • P(W) = P(w1,w2,w3,w4,w5…wn) • Related task: probability of an upcoming word: • P(w5|w1,w2,w3,w4) • A model that computes either of these: • P(W) or P(wn|w1,w2…wn-1) is called a language model. • A better name would be “the grammar”, but “language model” or LM is standard

  7. The Perfect Language Model • Sequence of word forms [forget about tagging for the moment] • Notation: A ~ W = (w1,w2,w3,...,wd) • The big (modeling) question: p(W) = ? • Well, we know (Bayes/chain rule →): p(W) = p(w1,w2,w3,...,wd) = p(w1) × p(w2|w1) × p(w3|w1,w2) × ... × p(wd|w1,w2,...,wd-1) • Not practical (even short W → too many parameters)

  8. Markov Chain • Unlimited memory (cf. previous foil): • for wi, we know all its predecessors w1,w2,w3,...,wi-1 • Limited memory: • we disregard “too old” predecessors • remember only k previous words: wi-k,wi-k+1,...,wi-1 • called “kth order Markov approximation” • + stationary character (no change over time): p(W) ≅ ∏i=1..d p(wi|wi-k,wi-k+1,...,wi-1), d = |W|

  9. n-gram Language Models • (n-1)th order Markov approximation → n-gram LM: p(W) =df ∏i=1..d p(wi|wi-n+1,wi-n+2,...,wi-1), where wi is the prediction and wi-n+1,...,wi-1 the history • In particular (assume vocabulary |V| = 60k): • 0-gram LM: uniform model, p(w) = 1/|V|, 1 parameter • 1-gram LM: unigram model, p(w), 6×10^4 parameters • 2-gram LM: bigram model, p(wi|wi-1), 3.6×10^9 parameters • 3-gram LM: trigram model, p(wi|wi-2,wi-1), 2.16×10^14 parameters
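The parameter counts on this slide (and the 4-gram figure on the next one) follow directly from |V|^n; a quick check in Python:

```python
# Number of parameters of an n-gram model: one probability per n-word sequence,
# i.e. |V|^n (|V|^(n-1) histories times |V| predicted words).
V = 60_000
for n in range(1, 5):
    print(f"{n}-gram model: {float(V ** n):.3e} parameters")
# 1-gram: 6.000e+04, 2-gram: 3.600e+09, 3-gram: 2.160e+14, 4-gram: 1.296e+19
```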

  10. LM: Observations • How large n? • nothing is enough (theoretically) • but anyway: as much as possible (→ close to “perfect” model) • empirically: 3 • parameter estimation? (reliability, data availability, storage space, ...) • 4 is too much: |V| = 60k → 1.296×10^19 parameters • but: 6-7 would be (almost) ideal (having enough data): in fact, one can recover the original text from its 7-grams! • Reliability ~ 1 / Detail (→ need a compromise; more detail means higher-order n-grams) • For now, keep word forms (no “linguistic” processing)

  11. Parameter Estimation • Parameter: numerical value needed to compute p(w|h) • From data (how else?) • Data preparation: • get rid of formatting etc. (“text cleaning”) • define words (separate but include punctuation, call it “word”) • define sentence boundaries (insert “words” <s> and </s>) • letter case: keep, discard, or be smart: • name recognition • number type identification [these are huge problems per se!] • numbers: keep, replace by <num>, or be smart (form ~ pronunciation)

  12. Maximum Likelihood Estimate • MLE: Relative Frequency... • ...best predicts the data at hand (the “training data”) • Trigrams from Training Data T: • count sequences of three words in T: c3(wi-2,wi-1,wi) • [NB: notation: just saying that the three words follow each other] • count sequences of two words in T: c2(wi-1,wi): • either use c2(y,z) = Σw c3(y,z,w) • or count differently at the beginning (& end) of data! • p(wi|wi-2,wi-1) =est. c3(wi-2,wi-1,wi) / c2(wi-2,wi-1)
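A short sketch of the MLE trigram estimate using the slide's c3/c2 counts; the toy sentence is the one from slide 14, and only the "sum over the last word" variant of c2 is shown:

```python
from collections import Counter

def mle_trigram_model(tokens):
    # c3: counts of three-word sequences; c2: counts of two-word histories,
    # obtained here as c2(y, z) = sum_w c3(y, z, w), as on the slide.
    c3 = Counter(zip(tokens, tokens[1:], tokens[2:]))
    c2 = Counter()
    for (u, v, _), c in c3.items():
        c2[(u, v)] += c
    # p(w | u, v) = c3(u, v, w) / c2(u, v)
    return {(u, v, w): c / c2[(u, v)] for (u, v, w), c in c3.items()}

tokens = "<s> <s> He can buy the can of soda .".split()
p3 = mle_trigram_model(tokens)
print(p3[("He", "can", "buy")])   # 1.0 on this toy data
```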

  13. Character Language Model • Use individual characters instead of words: p(W) =df ∏i=1..d p(ci|ci-n+1,ci-n+2,...,ci-1) • Same formulas etc. • Might consider 4-grams, 5-grams or even more • Good only for language comparison • Transform cross-entropy between letter- and word-based models: HS(pc) = HS(pw) / avg. # of characters per word in S

  14. LM: an Example • Training data: <s> <s> He can buy the can of soda. • Unigram: p1(He) = p1(buy) = p1(the) = p1(of) = p1(soda) = p1(.) = .125 p1(can) = .25 • Bigram: p2(He|<s>) = 1, p2(can|He) = 1, p2(buy|can) = .5, p2(of|can) = .5, p2(the|buy) = 1,... • Trigram: p3(He|<s>,<s>) = 1, p3(can|<s>,He) = 1, p3(buy|He,can) = 1, p3(of|the,can) = 1, ..., p3(.|of,soda) = 1. • (normalized for all n-grams) Entropy: H(p1) = 2.75, H(p2) = .25, H(p3) = 0 ← Great?!
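The unigram/bigram numbers and the entropies above can be verified mechanically; a small sketch (treating <s> as context only, as in the slide):

```python
import math
from collections import Counter

tokens = "<s> <s> He can buy the can of soda .".split()
words = tokens[2:]                                   # the 8 real tokens

# Unigram MLE and its entropy
c1 = Counter(words)
p1 = {w: c / len(words) for w, c in c1.items()}
H1 = -sum(p * math.log2(p) for p in p1.values())
print(p1["can"], p1["He"], H1)                       # 0.25 0.125 2.75

# Bigram MLE (conditioning on the previous token, including <s>)
c2 = Counter(zip(tokens[1:], tokens[2:]))
ctx = Counter(tokens[1:-1])
p2 = {(h, w): c / ctx[h] for (h, w), c in c2.items()}
print(p2[("can", "buy")], p2[("can", "of")])         # 0.5 0.5

# Per-word entropy of the bigram model on its own training data
H2 = -sum(math.log2(p2[(h, w)])
          for h, w in zip(tokens[1:], tokens[2:])) / len(words)
print(H2)                                            # 0.25, as on the slide
```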

  15. Language Modeling Toolkits • SRILM • http://www.speech.sri.com/projects/srilm/

  16. Google N-Gram Release, August 2006

  17. Google N-Gram Release http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html • serve as the incoming 92 • serve as the incubator 99 • serve as the independent 794 • serve as the index 223 • serve as the indication 72 • serve as the indicator 120 • serve as the indicators 45 • serve as the indispensable 111 • serve as the indispensible 40 • serve as the individual 234

  18. Google Book N-grams http://ngrams.googlelabs.com/

  19. Evaluation: How good is our model? • Does our language model prefer good sentences to bad ones? • Assign higher probability to “real” or “frequently observed” sentences • Than “ungrammatical” or “rarely observed” sentences? • We train parameters of our model on a training set. • We test the model’s performance on data we haven’t seen. • A test set is an unseen dataset that is different from our training set, totally unused. • An evaluation metric tells us how well our model does on the test set.

  20. Extrinsic evaluation of N-gram models • Best evaluation for comparing models A and B • Put each model in a task • spelling corrector, speech recognizer, MT system • Run the task, get an accuracy for A and for B • How many misspelled words corrected properly • How many words translated correctly • Compare accuracy for A and B

  21. Difficulty of extrinsic (in-vivo) evaluation of N-gram models • Extrinsic evaluation • Time-consuming; can take days or weeks • So • Sometimes use intrinsic evaluation: perplexity • Bad approximation • unless the test data looks just like the training data • So generally only useful in pilot experiments • But is helpful to think about.

  22. Intuition of Perplexity • The Shannon Game: How well can we predict the next word? • “I always order pizza with cheese and ____” (mushrooms 0.1, pepperoni 0.1, anchovies 0.01, ..., fried rice 0.0001, ..., and 1e-100) • “The 33rd President of the US was ____” • “I saw a ____” • Unigrams are terrible at this game. (Why?) • A better model of a text is one which assigns a higher probability to the word that actually occurs. [photo: Claude Shannon]

  23. Perplexity • The best language model is one that best predicts an unseen test set • Gives the highest P(sentence) • Perplexity is the probability of the test set, normalized by the number of words: PP(W) = P(w1 w2 ... wN)^(-1/N) • Chain rule: PP(W) = (∏i=1..N 1/P(wi|w1...wi-1))^(1/N) • For bigrams: PP(W) = (∏i=1..N 1/P(wi|wi-1))^(1/N) • Minimizing perplexity is the same as maximizing probability
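A minimal sketch of the perplexity computation for a bigram model; the helper function and the toy probability table are assumptions for illustration, and zero probabilities are not handled:

```python
import math

def perplexity(test_tokens, bigram_prob):
    # PP(W) = P(w1..wN)^(-1/N), computed in log space; the leading <s> is context only.
    n = len(test_tokens) - 1
    log_prob = sum(math.log2(bigram_prob(w, h))
                   for h, w in zip(test_tokens, test_tokens[1:]))
    return 2 ** (-log_prob / n)

# Hypothetical bigram probabilities, just to exercise the formula.
table = {("<s>", "i"): 0.25, ("i", "saw"): 0.1, ("saw", "a"): 0.3, ("a", "van"): 0.05}
print(perplexity(["<s>", "i", "saw", "a", "van"],
                 lambda w, h: table[(h, w)]))   # ~7.19
```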

  24. The Shannon Game intuition for perplexity • From Josh Goodman • How hard is the task of recognizing digits ‘0,1,2,3,4,5,6,7,8,9’ • Perplexity 10 • How hard is recognizing (30,000) names at Microsoft. • Perplexity = 30,000 • If a system has to recognize • Operator (1 in 4) • Sales (1 in 4) • Technical Support (1 in 4) • 30,000 names (1 in 120,000 each) • Perplexity is 53 • Perplexity is weighted equivalent branching factor
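The "53" above is just the exponentiated entropy (the weighted branching factor) of that outcome distribution; a quick check:

```python
import math

# Operator, sales, technical support at 1/4 each, plus 30,000 names
# at 1/120,000 each (3/4 + 30,000/120,000 = 1).
probs = [0.25, 0.25, 0.25] + [1 / 120_000] * 30_000
entropy = -sum(p * math.log2(p) for p in probs)
print(2 ** entropy)   # ~52.6, i.e. the "perplexity 53" on the slide
```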

  25. Perplexity as branching factor • Suppose a sentence consists of random digits • What is the perplexity of this sentence according to a model that assigns P = 1/10 to each digit?
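A short worked answer using the definition from the previous slide:

$$
PP(W) \;=\; P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} \;=\; \left(\left(\tfrac{1}{10}\right)^{N}\right)^{-\frac{1}{N}} \;=\; 10
$$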

  26. Lower perplexity = better model • Training 38 million words, test 1.5 million words, WSJ

  27. The Wall Street Journal

  28. LM: an Example • Training data: <s> <s> He can buy the can of soda. • Unigram: p1(He) = p1(buy) = p1(the) = p1(of) = p1(soda) = p1(.) = .125 p1(can) = .25 • Bigram: p2(He|<s>) = 1, p2(can|He) = 1, p2(buy|can) = .5, p2(of|can) = .5, p2(the|buy) = 1,... • Trigram: p3(He|<s>,<s>) = 1, p3(can|<s>,He) = 1, p3(buy|He,can) = 1, p3(of|the,can) = 1, ..., p3(.|of,soda) = 1. • (normalized for all n-grams) Entropy: H(p1) = 2.75, H(p2) = .25, H(p3) = 0 ← Great?!

  29. LM: an Example (The Problem) • Cross-entropy: • S = <s> <s> It was the greatest buy of all. (test data) • Even HS(p1) fails (= HS(p2) = HS(p3) = ∞), because: • all unigrams but p1(the), p1(buy), p1(of) and p1(.) are 0. • all bigram probabilities are 0. • all trigram probabilities are 0. • We want: to make all probabilities non-zero → data sparseness handling

  30. The Zero Problem • “Raw” n-gram language model estimate: • necessarily, some zeros • many: trigram model → 2.16×10^14 parameters, data ~ 10^9 words • which are true 0? • optimal situation: even the least frequent trigram would be seen several times, in order to distinguish its probability from other trigrams • the optimal situation cannot happen, unfortunately (open question: how much data would we need?) • → we don’t know; we must eliminate the zeros • Two kinds of zeros: p(w|h) = 0, or even p(h) = 0!

  31. Why do we need Nonzero Probs? • To avoid infinite Cross Entropy: • happens when an event is found in test data which has not been seen in training data • H(p) = ∞ prevents comparing data with > 0 “errors” • To make the system more robust • low count estimates: • they typically happen for “detailed” but relatively rare appearances • high count estimates: reliable but less “detailed”

  32. Eliminating the Zero Probabilities: Smoothing • Get new p’(w) (same W): almost p(w), but no zeros • Discount w for (some) p(w) > 0: new p’(w) < p(w), Σw∈discounted (p(w) - p’(w)) = D • Distribute D to all w with p(w) = 0: new p’(w) > p(w) • possibly also to other w with low p(w) • For some w (possibly): p’(w) = p(w) • Make sure Σw∈W p’(w) = 1 • There are many ways of smoothing

  33. Smoothing by Adding 1 (Laplace) • Simplest but not really usable: • Predicting words w from a vocabulary V, training data T: p’(w|h) = (c(h,w) + 1) / (c(h) + |V|) • for non-conditional distributions: p’(w) = (c(w) + 1) / (|T| + |V|) • Problem if |V| > c(h) (as is often the case; even >> c(h)!) • Example: Training data: <s> what is it what is small ?  |T| = 8 • V = { what, is, it, small, ?, <s>, flying, birds, are, a, bird, . }, |V| = 12 • p(it) = .125, p(what) = .25, p(.) = 0; p(what is it?) = .25² × .125² ≈ .001; p(it is flying.) = .125 × .25 × 0² = 0 • p’(it) = .1, p’(what) = .15, p’(.) = .05; p’(what is it?) = .15² × .1² ≈ .0002; p’(it is flying.) = .1 × .15 × .05² ≈ .00004 (assume word independence!)

  34. Adding less than 1 • Equally simple: • Predicting words w from a vocabulary V, training data T: p’(w|h) = (c(h,w) + λ) / (c(h) + λ|V|), λ < 1 • for non-conditional distributions: p’(w) = (c(w) + λ) / (|T| + λ|V|) • Example: Training data: <s> what is it what is small ?  |T| = 8 • V = { what, is, it, small, ?, <s>, flying, birds, are, a, bird, . }, |V| = 12 • p(it) = .125, p(what) = .25, p(.) = 0; p(what is it?) = .25² × .125² ≈ .001; p(it is flying.) = .125 × .25 × 0² = 0 • Use λ = .1: • p’(it) ≈ .12, p’(what) ≈ .23, p’(.) ≈ .01; p’(what is it?) = .23² × .12² ≈ .0007; p’(it is flying.) = .12 × .23 × .01² ≈ .000003
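Both the add-one and add-λ estimates from these two slides come out of the same formula (add-one is λ = 1); a sketch over the slide's toy data:

```python
from collections import Counter

def add_lambda_unigram(tokens, vocab, lam=1.0):
    # p'(w) = (c(w) + lambda) / (|T| + lambda * |V|); lam = 1.0 gives Laplace (add-1).
    c = Counter(tokens)
    denom = len(tokens) + lam * len(vocab)
    return {w: (c[w] + lam) / denom for w in vocab}

tokens = "<s> what is it what is small ?".split()                 # |T| = 8
vocab = ["what", "is", "it", "small", "?", "<s>",
         "flying", "birds", "are", "a", "bird", "."]              # |V| = 12

p1 = add_lambda_unigram(tokens, vocab, lam=1.0)
print(round(p1["it"], 3), round(p1["what"], 3), round(p1["."], 3))     # 0.1 0.15 0.05

p01 = add_lambda_unigram(tokens, vocab, lam=0.1)
print(round(p01["it"], 2), round(p01["what"], 2), round(p01["."], 2))  # 0.12 0.23 0.01
```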

  35. Language Modeling Advanced: Good Turing Smoothing

  36. Reminder: Add-1 (Laplace) Smoothing • PAdd-1(wi|wi-1) = (c(wi-1,wi) + 1) / (c(wi-1) + |V|)

  37. More general formulations: Add-k • PAdd-k(wi|wi-1) = (c(wi-1,wi) + k) / (c(wi-1) + k|V|), typically 0 < k < 1

  38. Unigram prior smoothing

  39. Advanced smoothing algorithms • Intuition used by many smoothing algorithms • Good-Turing • Kneser-Ney • Witten-Bell • Use the count of things we’ve seen once • to help estimate the count of things we’ve never seen

  40. Notation: Nc = Frequency of frequency c • Nc = the count of things we’ve seen c times • Sam I am I am Sam I do not eat • Counts: I 3, sam 2, am 2, do 1, not 1, eat 1 • N1 = 3, N2 = 2, N3 = 1
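The Nc counts on this slide can be computed with two nested Counters; a minimal sketch:

```python
from collections import Counter

tokens = "Sam I am I am Sam I do not eat".split()
word_counts = Counter(tokens)                  # I: 3, Sam: 2, am: 2, do/not/eat: 1
freq_of_freq = Counter(word_counts.values())   # N_c = number of word types seen c times
print(freq_of_freq)                            # Counter({1: 3, 2: 2, 3: 1})
```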

  41. Good-Turing smoothing intuition • You are fishing (a scenario from Josh Goodman), and caught: • 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel = 18 fish • How likely is it that next species is trout? • 1/18 • How likely is it that next species is new (i.e. catfish or bass) • Let’s use our estimate of things-we-saw-once to estimate the new things. • 3/18 (because N1=3) • Assuming so, how likely is it that next species is trout? • Must be less than 1/18 – discounted by 3/18!! • How to estimate?

  42. Good Turing calculations • Seen once (trout) • c = 1 • MLE p = 1/18 • C*(trout) = 2 * N2/N1 = 2 * 1/3 = 2/3 • P*GT(trout) = 2/3 / 18 = 1/27 • Unseen (bass or catfish) • c = 0: • MLE p = 0/18 = 0 • P*GT (unseen) = N1/N = 3/18
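A sketch reproducing the fishing numbers with the Good-Turing formulas c* = (c+1) N_{c+1} / N_c and P*GT(unseen) = N1 / N:

```python
from collections import Counter

catch = (["carp"] * 10 + ["perch"] * 3 + ["whitefish"] * 2
         + ["trout", "salmon", "eel"])                 # 18 fish
N = len(catch)
Nc = Counter(Counter(catch).values())                  # N1=3, N2=1, N3=1, N10=1

def gt_count(c):
    # Good-Turing adjusted count c* = (c + 1) * N_{c+1} / N_c
    return (c + 1) * Nc[c + 1] / Nc[c]

print(Nc[1] / N)          # P*GT(next species is new) = 3/18
print(gt_count(1))        # c*(trout) = 2 * N2/N1 = 2/3
print(gt_count(1) / N)    # P*GT(trout) = (2/3)/18 = 1/27 ≈ 0.037
```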

  43. Ney et al.’s Good-Turing Intuition • H. Ney, U. Essen, and R. Kneser, 1995. On the estimation of ‘small’ probabilities by leaving-one-out. IEEE Trans. PAMI 17(12):1202-1212. • [figure: training vs. held-out words]

  44. Held-out Training: Ney et al.’s Good-Turing Intuition (slide from Dan Klein) • Intuition from leave-one-out validation • Take each of the c training words out in turn • c training sets of size c-1, held-out of size 1 • What fraction of held-out words are unseen in training? • N1/c • What fraction of held-out words are seen k times in training? • (k+1)Nk+1/c • So in the future we expect (k+1)Nk+1/c of the words to be those with training count k • There are Nk words with training count k • Each should occur with probability: • (k+1)Nk+1 / (c Nk) • ...or expected count: k* = (k+1)Nk+1/Nk

  45. Good-Turing complications (slide from Dan Klein) • Problem: what about “the”? (say c = 4417) • For small k, Nk > Nk+1 • For large k, too jumpy, zeros wreck estimates • Simple Good-Turing [Gale and Sampson]: replace empirical Nk with a best-fit power law once counts get unreliable

  46. Resulting Good-Turing numbers • Numbers from Church and Gale (1991) • 22 million words of AP Newswire

  47. Language Modeling Advanced: Kneser-Ney Smoothing

  48. Resulting Good-Turing numbers • Numbers from Church and Gale (1991) • 22 million words of AP Newswire • It sure looks like c* = (c - .75)

  49. Absolute Discounting Interpolation • Save ourselves some time and just subtract 0.75 (or some d)! • PAbsoluteDiscounting(wi|wi-1) = (c(wi-1,wi) - d) / c(wi-1)  [discounted bigram]  + λ(wi-1) P(wi)  [interpolation weight × unigram] • (Maybe keeping a couple extra values of d for counts 1 and 2) • But should we really just use the regular unigram P(w)?
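A minimal sketch of interpolated absolute discounting with the ordinary unigram as the lower-order model (the next slide replaces that unigram with a continuation probability); the training sentence is the earlier toy example, and the normalization choice for λ(h) is an assumption of this sketch:

```python
from collections import Counter

def absolute_discounting(tokens, d=0.75):
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    followers = Counter(h for h, _ in bigram)   # number of bigram types starting with h

    def p(w, h):
        # discounted bigram + lambda(h) * unigram, with
        # lambda(h) = d * |{w : c(h, w) > 0}| / c(h) so the discounted mass is redistributed
        discounted = max(bigram[(h, w)] - d, 0) / unigram[h]
        lam = d * followers[h] / unigram[h]
        return discounted + lam * unigram[w] / total
    return p

p = absolute_discounting("<s> He can buy the can of soda .".split())
print(p("buy", "can"))   # seen bigram: discounted mass plus unigram back-off
print(p("the", "can"))   # unseen bigram: unigram back-off only
```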

  50. Kneser-Ney Smoothing I • Better estimate for probabilities of lower-order unigrams! • Shannon game: I can’t see without my reading ___________? (“glasses” vs. “Francisco”) • “Francisco” is more common than “glasses” • ... but “Francisco” always follows “San” • The unigram is useful exactly when we haven’t seen this bigram! • Instead of P(w): “How likely is w?” • Pcontinuation(w): “How likely is w to appear as a novel continuation?” • For each word, count the number of bigram types it completes • Every bigram type was a novel continuation the first time it was seen
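A sketch of the continuation probability described above: for each word, count the distinct bigram types it completes, normalized by the total number of bigram types (the toy corpus below is made up to mimic the "Francisco" effect):

```python
from collections import Counter

def continuation_prob(tokens):
    # P_continuation(w) = |{w' : c(w', w) > 0}| / (number of distinct bigram types)
    bigram_types = set(zip(tokens, tokens[1:]))
    completes = Counter(w for _, w in bigram_types)
    return {w: c / len(bigram_types) for w, c in completes.items()}

# "francisco" is more frequent than "glasses" here, but only ever follows "san".
tokens = ("i lost my glasses . she wears glasses . "
          "san francisco , san francisco , san francisco is sunny .").split()
pc = continuation_prob(tokens)
print(Counter(tokens)["francisco"], Counter(tokens)["glasses"])   # 3 2
print(pc["francisco"], pc["glasses"])   # continuation probability favours "glasses"
```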
