
The Expectation Maximization (EM) Algorithm


Presentation Transcript


  1. The Expectation Maximization (EM) Algorithm … continued! 600.465 - Intro to NLP - J. Eisner

  2. General Idea • Start by devising a noisy channel • Any model that predicts the corpus observations via some hidden structure (tags, parses, …) • Initially guess the parameters of the model! • Educated guess is best, but random can work • Expectation step: Use current parameters (and observations) to reconstruct hidden structure • Maximization step: Use that hidden structure (and observations) to reestimate parameters • Repeat the E and M steps until convergence! 600.465 - Intro to NLP - J. Eisner
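
To make the loop concrete, here is a minimal sketch of that outer EM loop in Python. The e_step and m_step functions and the convergence test are placeholders for whatever model is being trained (HMM, PCFG, …), not something given on the slides.

```python
def run_em(observations, init_params, e_step, m_step, tol=1e-6, max_iters=100):
    """Generic EM outer loop: alternate E and M steps, repeat until convergence."""
    params = init_params                       # initial guess (educated is best, random can work)
    prev_ll = float("-inf")
    for _ in range(max_iters):
        # E step: use current parameters (and observations) to reconstruct hidden structure
        hidden, log_likelihood = e_step(observations, params)
        # M step: use that hidden structure (and observations) to reestimate parameters
        params = m_step(observations, hidden)
        if log_likelihood - prev_ll < tol:     # stop once the likelihood stops improving
            break
        prev_ll = log_likelihood
    return params
```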

  3. General Idea [diagram: an initial guess of the unknown parameters (probabilities) feeds the E step, which combines it with the observed structure (words, ice cream) to produce a guess of the unknown hidden structure (tags, parses, weather); the M step then uses that guess to reestimate the parameters, and the cycle repeats] 600.465 - Intro to NLP - J. Eisner

  4.–6. For Hidden Markov Models [the slide-3 diagram, stepped through over three animation frames: the E step uses the current guess of the unknown parameters (probabilities) and the observed structure (words, ice cream) to guess the unknown hidden structure (tags, parses, weather); the M step then uses that guess to reestimate the parameters] 600.465 - Intro to NLP - J. Eisner

  7. Grammar Reestimation [diagram: a LEARNER is trained on training trees (expensive and/or the wrong sublanguage) to produce a Grammar; a PARSER applies that grammar to test sentences and a scorer compares the output against correct test trees to measure accuracy. To exploit raw sentences that are cheap, plentiful, and appropriate, the E step has the parser guess trees for them and the M step has the learner reestimate the grammar from those guesses] 600.465 - Intro to NLP - J. Eisner

  8. EM by Dynamic Programming: Two Versions • The Viterbi approximation • Expectation: pick the best parse of each sentence • Maximization: retrain on this best-parsed corpus • Advantage: speed! • Real EM (why slower?) • Expectation: find all parses of each sentence • Maximization: retrain on all parses in proportion to their probability (as if we observed fractional counts) • Advantage: p(training corpus) is guaranteed to increase • There are exponentially many parses, so don’t extract them from the chart – we need some kind of clever counting 600.465 - Intro to NLP - J. Eisner
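
As a rough illustration of the two E-step variants, here is a hypothetical sketch; best_parse, all_parses, and count_rules are assumed helpers, not part of the slides, and real EM would obtain these expectations from the chart rather than by enumerating parses.

```python
def e_step_viterbi(corpus, grammar):
    """Viterbi approximation: charge each sentence's rule counts to its single best parse."""
    counts = {}
    for sentence in corpus:
        tree = best_parse(sentence, grammar)              # hypothetical 1-best parser
        for rule, c in count_rules(tree).items():
            counts[rule] = counts.get(rule, 0.0) + c
    return counts

def e_step_real(corpus, grammar):
    """Real EM: charge fractional counts to every parse, in proportion to its probability."""
    counts = {}
    for sentence in corpus:
        parses = all_parses(sentence, grammar)            # hypothetical; exponentially many in general!
        z = sum(p for p, tree in parses)
        for p, tree in parses:
            for rule, c in count_rules(tree).items():
                counts[rule] = counts.get(rule, 0.0) + (p / z) * c
    return counts
```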

  9. Examples of EM • Finite-State case: Hidden Markov Models • “forward-backward” or “Baum-Welch” algorithm • Applications: • explain ice cream in terms of an underlying weather sequence • explain words in terms of an underlying tag sequence • explain a phoneme sequence in terms of an underlying word • explain a sound sequence in terms of an underlying phoneme sequence • (compose these?) • Context-Free case: Probabilistic CFGs • “inside-outside” algorithm: unsupervised grammar learning! • explain raw text in terms of an underlying context-free parse • in practice, the local-maximum problem gets in the way • but it can improve a good starting grammar via raw text • Clustering case: explain points via clusters 600.465 - Intro to NLP - J. Eisner

  10. Our old friend PCFG • For the parse tree (S (NP time) (VP (V flies) (PP (P like) (NP (Det an) (N arrow))))): p(tree | S) = p(S → NP VP | S) * p(NP → time | NP) * p(VP → V PP | VP) * p(V → flies | V) * … 600.465 - Intro to NLP - J. Eisner

  11. Viterbi reestimation for parsing • Start with a “pretty good” grammar • e.g., it was trained on supervised data (a treebank) that is small, imperfectly annotated, or has sentences in a different style from what you want to parse • Parse a corpus of unparsed sentences: [the slide shows the 1-best parse tree of “Today stocks were up”, a sentence with 12 copies in the corpus] • Reestimate: • Collect counts: …; c(S → NP VP) += 12; c(S) += 2*12; … • Divide: p(S → NP VP | S) = c(S → NP VP) / c(S) • May be wise to smooth 600.465 - Intro to NLP - J. Eisner
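
The collect-and-divide reestimation can be written out directly. In this sketch (my own layout, with a hypothetical count_rules helper), best_parses pairs each distinct sentence's 1-best tree with its number of copies in the corpus, so the tree for the 12 copies of "Today stocks were up" contributes c(S → NP VP) += 12 as on the slide.

```python
from collections import Counter

def viterbi_reestimate(best_parses):
    """best_parses: list of (tree, copies) pairs. Collect rule counts from the 1-best
    trees, then divide each rule's count by its parent nonterminal's total count."""
    rule_count, parent_count = Counter(), Counter()
    for tree, copies in best_parses:
        for (parent, rhs), c in count_rules(tree).items():   # hypothetical helper
            rule_count[(parent, rhs)] += copies * c
            parent_count[parent] += copies * c                # summed over rules: c(S) += 2*12, etc.
    # p(S -> NP VP | S) = c(S -> NP VP) / c(S); smoothing could be added here
    return {rule: n / parent_count[rule[0]] for rule, n in rule_count.items()}
```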

  12. True EM for parsing • Similar, but now we consider all parses of each sentence • Parse our corpus of unparsed sentences: [the slide shows two candidate parses of “Today stocks were up” (12 copies in the corpus), one receiving 10.8 expected copies and the other 1.2] • Collect counts fractionally: …; c(S → NP VP) += 10.8; c(S) += 2*10.8; … …; c(S → NP VP) += 1.2; c(S) += 1*1.2; … 600.465 - Intro to NLP - J. Eisner

  13. Where are the constituents? [one bracketing of “coal energy witness expert”, p=0.5]

  14. Where are the constituents? [a second bracketing, p=0.1]

  15. Where are the constituents? [a third bracketing, p=0.1]

  16. Where are the constituents? [a fourth bracketing, p=0.1]

  17. Where are the constituents? [a fifth bracketing, p=0.2]

  18. Where are the constituents? [all five bracketings of “coal energy witness expert” overlaid, with probabilities summing to 0.5 + 0.1 + 0.1 + 0.1 + 0.2 = 1]

  19. Where are NPs, VPs, … ? [two charts over “Time flies like an arrow”, one marking NP locations and one marking VP locations, alongside an example parse tree with nodes S, NP, VP, V, PP, P, Det, N]

  20. Where are NPs, VPs, … ? [NP-location and VP-location charts updated for the parse] (S (NP Time) (VP flies (PP like (NP an arrow)))) p=0.5

  21. Where are NPs, VPs, … ? [charts updated for the parse] (S (NP Time flies) (VP like (NP an arrow))) p=0.3

  22. Where are NPs, VPs, … ? [charts updated for the parse] (S (VP Time (NP (NP flies) (PP like (NP an arrow))))) p=0.1

  23. Where are NPs, VPs, … ? [charts updated for the parse] (S (VP (VP Time (NP flies)) (PP like (NP an arrow)))) p=0.1

  24. Where are NPs, VPs, … ? [the NP-location and VP-location charts with all four parses accumulated; their probabilities sum to 0.5 + 0.3 + 0.1 + 0.1 = 1]

  25. How many NPs, VPs, … in all? [the same accumulated charts; 0.5 + 0.3 + 0.1 + 0.1 = 1]

  26. How many NPs, VPs, … in all? [summing the accumulated chart entries:] 2.1 NPs (expected), 1.1 VPs (expected)
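
A quick check of the 2.1 and 1.1 figures: weight each parse's NP and VP node counts by its probability from slides 20–23. The snippet below just redoes that arithmetic.

```python
# (probability, #NP nodes, #VP nodes) for the four parses on slides 20-23
parses = [(0.5, 2, 1),   # (S (NP Time) (VP flies (PP like (NP an arrow))))
          (0.3, 2, 1),   # (S (NP Time flies) (VP like (NP an arrow)))
          (0.1, 3, 1),   # (S (VP Time (NP (NP flies) (PP like (NP an arrow)))))
          (0.1, 2, 2)]   # (S (VP (VP Time (NP flies)) (PP like (NP an arrow))))

expected_nps = sum(p * nps for p, nps, _ in parses)    # 0.5*2 + 0.3*2 + 0.1*3 + 0.1*2 = 2.1
expected_vps = sum(p * vps for p, _, vps in parses)    # 0.5*1 + 0.3*1 + 0.1*1 + 0.1*2 = 1.1
print(round(expected_nps, 1), round(expected_vps, 1))  # 2.1 1.1
```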

  27. Where did the rules apply? [two charts over “Time flies like an arrow”, one marking S → NP VP locations and one marking NP → Det N locations]

  28. Where did the rules apply? [S → NP VP and NP → Det N location charts updated for the parse] (S (NP Time) (VP flies (PP like (NP an arrow)))) p=0.5

  29. Where is S → NP VP substructure? [charts updated for the parse] (S (NP Time flies) (VP like (NP an arrow))) p=0.3

  30. Where is S → NP VP substructure? [charts updated for the parse] (S (VP Time (NP (NP flies) (PP like (NP an arrow))))) p=0.1

  31. Where is S → NP VP substructure? [charts updated for the parse] (S (VP (VP Time (NP flies)) (PP like (NP an arrow)))) p=0.1

  32. Why do we want this info? • Grammar reestimation by EM method • E step collects those expected counts • M step sets the rule probabilities from them, e.g., p(S → NP VP | S) = c(S → NP VP) / c(S) • Minimum Bayes Risk decoding • Find a tree that maximizes expected reward, e.g., expected total # of correct constituents • CKY-like dynamic programming algorithm (sketched below) • The input specifies the probability of correctness for each possible constituent (e.g., VP from 1 to 5)
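
The MBR decoder itself is not spelled out on the slide; the sketch below shows the kind of CKY-like dynamic program it describes, simplified to unlabeled spans (the labeled version would score each (label, span) pair separately). span_reward[(i, k)] stands for the input the slide mentions: the probability that a constituent covering words i..k is correct.

```python
def mbr_bracketing(n, span_reward):
    """Choose the binary bracketing of words 0..n that maximizes the total expected
    number of correct constituents; span_reward[(i, k)] is each span's posterior probability."""
    best = {}                                   # best[(i, k)] = (score, split point)
    for width in range(1, n + 1):
        for i in range(0, n - width + 1):
            k = i + width
            if width == 1:
                best[(i, k)] = (span_reward.get((i, k), 0.0), None)
            else:
                score, split = max((best[(i, j)][0] + best[(j, k)][0], j)
                                   for j in range(i + 1, k))
                best[(i, k)] = (score + span_reward.get((i, k), 0.0), split)
    return best                                 # follow the split points from (0, n) to read off the tree
```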

  33. Why do we want this info? • Soft features of a sentence for other tasks • An NER system asks: “Is there an NP from 0 to 2?” • The true answer is 1 (true) or 0 (false) • But we return 0.3, averaging over all parses • That’s a perfectly good feature value – it can be fed to a CRF or a neural network as an input feature (see the sketch below) • A writing tutor system asks: “How many times did the student use S → NP[singular] VP[plural]?” • The true answer is in {0, 1, 2, …} • But we return 1.8, averaging over all parses
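
For example, the 0.3 for “an NP from 0 to 2” is just the total probability of the parses that contain that constituent (among slides 20–23, only the p=0.3 parse does). A minimal sketch of that averaging, with a hypothetical spans(tree) helper that returns (label, start, end) triples:

```python
def soft_feature(parses, constituent):
    """Expected value of a 0/1 constituent feature, averaged over the parse distribution.
    parses: list of (probability, tree) pairs; spans(tree) is an assumed helper."""
    return sum(p for p, tree in parses if constituent in spans(tree))

# soft_feature(parses, ("NP", 0, 2)) would return 0.3 for the slide's example
```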

  34. True EM for parsing • Similar, but now we consider all parses of each sentence • Parse our corpus of unparsed sentences: [the same two parses of “Today stocks were up” as on slide 12, one with 10.8 and one with 1.2 expected copies out of the 12 in the corpus] • Collect counts fractionally: …; c(S → NP VP) += 10.8; c(S) += 2*10.8; … …; c(S → NP VP) += 1.2; c(S) += 1*1.2; … • But there may be exponentially many parses of a length-n sentence! How can we stay fast? Similar to taggings… 600.465 - Intro to NLP - J. Eisner

  35. The dynamic programming computation of α. (β is similar but works back from Stop.) [trellis with states C and H over Day 1: 2 cones, Day 2: 3 cones, Day 3: 3 cones]
  • Arc weights: p(C|Start)*p(2|C) = 0.5*0.2 = 0.1; p(H|Start)*p(2|H) = 0.5*0.2 = 0.1; p(C|C)*p(3|C) = 0.8*0.1 = 0.08; p(H|H)*p(3|H) = 0.8*0.7 = 0.56; p(C|H)*p(3|C) = 0.1*0.1 = 0.01; p(H|C)*p(3|H) = 0.1*0.7 = 0.07
  • Day 1: α = 0.1 at C and 0.1 at H
  • Day 2: α = 0.1*0.08 + 0.1*0.01 = 0.009 at C; α = 0.1*0.07 + 0.1*0.56 = 0.063 at H (call these αH(2) and βH(2))
  • Day 3: α = 0.009*0.08 + 0.063*0.01 = 0.00135 at C; α = 0.009*0.07 + 0.063*0.56 = 0.03591 at H (αH(3) and βH(3))
  • Analogies to α, β in a PCFG? 600.465 - Intro to NLP - J. Eisner
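
To make the trellis arithmetic concrete, here is a small forward-pass sketch using the transition and emission probabilities labeled on the slide's arcs; the dictionaries and variable names are mine, not from the slides.

```python
# p(next state | state) and p(# cones | state) for the slide's ice-cream HMM
trans = {("Start", "C"): 0.5, ("Start", "H"): 0.5,
         ("C", "C"): 0.8, ("C", "H"): 0.1,
         ("H", "H"): 0.8, ("H", "C"): 0.1}
emit = {("C", 2): 0.2, ("C", 3): 0.1,
        ("H", 2): 0.2, ("H", 3): 0.7}

obs = [2, 3, 3]   # Day 1: 2 cones, Day 2: 3 cones, Day 3: 3 cones
alpha = [{"C": trans[("Start", "C")] * emit[("C", obs[0])],    # Day 1: 0.1
          "H": trans[("Start", "H")] * emit[("H", obs[0])]}]   # Day 1: 0.1

for cones in obs[1:]:   # forward recurrence: sum over the predecessor state
    prev = alpha[-1]
    alpha.append({s: sum(prev[r] * trans[(r, s)] for r in "CH") * emit[(s, cones)]
                  for s in "CH"})

print(alpha)   # Day 2: C=0.009, H=0.063; Day 3: C=0.00135, H=0.03591 (up to float rounding)
```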

  36. “Inside Probabilities” • For the parse tree (S (NP time) (VP (V flies) (PP (P like) (NP (Det an) (N arrow))))): p(tree | S) = p(S → NP VP | S) * p(NP → time | NP) * p(VP → V PP | VP) * p(V → flies | V) * … • Sum over all VP parses of “flies like an arrow”: βVP(1,5) = p(flies like an arrow | VP) • Sum over all S parses of “time flies like an arrow”: βS(0,5) = p(time flies like an arrow | S) 600.465 - Intro to NLP - J. Eisner

  37. Compute β Bottom-Up by CKY • [the same example tree and rule probabilities as the previous slide] • βVP(1,5) = p(flies like an arrow | VP) = … • βS(0,5) = p(time flies like an arrow | S) = βNP(0,1) * βVP(1,5) * p(S → NP VP | S) + … 600.465 - Intro to NLP - J. Eisner

  38. Compute β Bottom-Up by CKY • Rule weights: 1 S → NP VP; 6 S → Vst NP; 2 S → S PP; 1 VP → V NP; 2 VP → VP PP; 1 NP → Det N; 2 NP → NP PP; 3 NP → NP NP; 0 PP → P NP

  39. Compute β Bottom-Up by CKY • Rule probabilities (2^-weight): 2^-1 S → NP VP; 2^-6 S → Vst NP; 2^-2 S → S PP; 2^-1 VP → V NP; 2^-2 VP → VP PP; 2^-1 NP → Det N; 2^-2 NP → NP PP; 2^-3 NP → NP NP; 2^-0 PP → P NP • [in the chart, one S parse of the full sentence has probability 2^-22]

  40. Compute β Bottom-Up by CKY • Same rule probabilities • [a second S parse of the full sentence has probability 2^-27]

  41. The Efficient Version: Add as we go • Same rule probabilities: 2^-1 S → NP VP; 2^-6 S → Vst NP; 2^-2 S → S PP; 2^-1 VP → V NP; 2^-2 VP → VP PP; 2^-1 NP → Det N; 2^-2 NP → NP PP; 2^-3 NP → NP NP; 2^-0 PP → P NP

  42. The Efficient Version: Add as we go • The chart cell for S spanning the whole sentence accumulates both parses: +2^-22 +2^-27 • e.g., βS(0,5) += βS(0,2) * βPP(2,5) * p(S → S PP | S)

  43. Compute β probs bottom-up (CKY) • need some initialization for the width-1 case
  for width := 2 to n (* build smallest first *)
    for i := 0 to n-width (* start *)
      let k := i + width (* end *)
      for j := i+1 to k-1 (* middle *)
        for all grammar rules X → Y Z
          βX(i,k) += p(X → Y Z | X) * βY(i,j) * βZ(j,k)
  • What if you changed + to max? What if you replaced all rule probabilities by 1? 600.465 - Intro to NLP - J. Eisner
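
A direct Python rendering of that pseudocode, with data structures of my own choosing (not from the slides): rules maps a binary rule (X, Y, Z) to p(X → Y Z | X), lexical probabilities handle the width-1 initialization, and beta[(i, k)][X] is the inside probability βX(i,k).

```python
from collections import defaultdict

def inside_probs(words, lexical, rules):
    """Inside (CKY) pass. lexical[(X, w)] = p(X -> w | X); rules[(X, Y, Z)] = p(X -> Y Z | X)."""
    n = len(words)
    beta = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):                        # initialization for the width-1 case
        for (X, word), p in lexical.items():
            if word == w:
                beta[(i, i + 1)][X] += p
    for width in range(2, n + 1):                        # build smallest spans first
        for i in range(0, n - width + 1):                # start
            k = i + width                                # end
            for j in range(i + 1, k):                    # middle
                for (X, Y, Z), p in rules.items():
                    if Y in beta[(i, j)] and Z in beta[(j, k)]:
                        beta[(i, k)][X] += p * beta[(i, j)][Y] * beta[(j, k)][Z]
    return beta
```

As for the slide's two questions: replacing the += sum with a max turns this into the Viterbi chart (best-parse probabilities), and setting every rule probability to 1 makes the chart count the number of parses of each span instead.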

  44. Inside & Outside Probabilities • [the example tree for “time flies like an arrow today”, with the VP “flies like an arrow” spanning positions 1–5] • “outside” the VP: αVP(1,5) = p(time VP today | S) • “inside” the VP: βVP(1,5) = p(flies like an arrow | VP) • αVP(1,5) * βVP(1,5) = p(time [VP flies like an arrow] today | S) 600.465 - Intro to NLP - J. Eisner

  45. Inside & Outside Probabilities • αVP(1,5) = p(time VP today | S); βVP(1,5) = p(flies like an arrow | VP) • αVP(1,5) * βVP(1,5) = p(time flies like an arrow today & VP(1,5) | S) • Dividing by βS(0,6) = p(time flies like an arrow today | S) gives p(VP(1,5) | time flies like an arrow today, S) 600.465 - Intro to NLP - J. Eisner

  46. Inside & Outside Probabilities • αVP(1,5) = p(time VP today | S); βVP(1,5) = p(flies like an arrow | VP) • So αVP(1,5) * βVP(1,5) / βS(0,6) is the probability that there is a VP here, given all of the observed data (words) • strictly analogous to forward-backward in the finite-state case! [the ice-cream trellis appears alongside the tree] 600.465 - Intro to NLP - J. Eisner

  47. Inside & Outside Probabilities • αVP(1,5) = p(time VP today | S); βV(1,2) = p(flies | V); βPP(2,5) = p(like an arrow | PP) • So αVP(1,5) * βV(1,2) * βPP(2,5) / βS(0,6) is the probability that there is a VP → V PP here, given all of the observed data (words) … or is it? 600.465 - Intro to NLP - J. Eisner

  48. Inside & Outside Probabilities • αVP(1,5) = p(time VP today | S); βV(1,2) = p(flies | V); βPP(2,5) = p(like an arrow | PP) • So αVP(1,5) * p(VP → V PP) * βV(1,2) * βPP(2,5) / βS(0,6) is the probability that there is a VP → V PP here (at 1-2-5), given all of the observed data (words) • strictly analogous to forward-backward in the finite-state case! • Sum this probability over all position triples like (1,2,5) to get the expected c(VP → V PP); reestimate the PCFG! 600.465 - Intro to NLP - J. Eisner
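
A sketch of that summation, assuming alpha (outside) and beta (inside) tables shaped like the ones in the other sketches in this transcript; nothing here is notation from the slides themselves.

```python
def expected_rule_count(X, Y, Z, rule_prob, alpha, beta, n, sentence_prob):
    """Expected number of times X -> Y Z is used in the sentence, summed over all
    position triples (i, j, k); sentence_prob is the inside probability of the whole
    sentence under the start symbol, e.g. beta[(0, n)]["S"]."""
    total = 0.0
    for i in range(n):
        for k in range(i + 2, n + 1):        # a binary rule needs a span of width >= 2
            for j in range(i + 1, k):        # e.g. the triple (1, 2, 5) for VP -> V PP
                total += (alpha[(i, k)][X] * rule_prob
                          * beta[(i, j)][Y] * beta[(j, k)][Z]) / sentence_prob
    return total
```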

  49. Compute β probs bottom-up (gradually build up larger “inside” regions) • Summary: βVP(1,5) += p(V PP | VP) * βV(1,2) * βPP(2,5) • inside(1,5) accumulates rule probability * inside(1,2) * inside(2,5) • in words: p(flies like an arrow | VP) += p(V PP | VP) * p(flies | V) * p(like an arrow | PP) 600.465 - Intro to NLP - J. Eisner

  50. Compute α probs top-down (uses β probs as well) (gradually build up larger “outside” regions) • Summary: αV(1,2) += p(V PP | VP) * αVP(1,5) * βPP(2,5) • outside(1,2) accumulates outside(1,5) * rule probability * inside(2,5) • in words: p(time V like an arrow today | S) += p(time VP today | S) * p(V PP | VP) * p(like an arrow | PP) 600.465 - Intro to NLP - J. Eisner
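
For completeness, a hedged sketch of this outside (top-down) pass, in the same shape as the inside sketch earlier; the grammar format and data structures are again my own assumptions. For each parent X spanning (i, k) rewritten as Y Z with split point j, the left child's outside score picks up outside(parent) * rule probability * inside(right child), and symmetrically for the right child.

```python
from collections import defaultdict

def outside_probs(n, start_symbol, rules, beta):
    """Outside pass: alpha[(i, k)][X] = p(words outside i..k and X spanning i..k | start_symbol).
    rules[(X, Y, Z)] = p(X -> Y Z | X); beta comes from the inside pass."""
    alpha = defaultdict(lambda: defaultdict(float))
    alpha[(0, n)][start_symbol] = 1.0
    for width in range(n, 1, -1):                        # largest spans first (top-down)
        for i in range(0, n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for (X, Y, Z), p in rules.items():
                    out = alpha[(i, k)][X]
                    if out == 0.0:
                        continue
                    # left child Y spans (i, j), right child Z spans (j, k), e.g.
                    # alpha_V(1,2) += p(V PP | VP) * alpha_VP(1,5) * beta_PP(2,5)
                    alpha[(i, j)][Y] += out * p * beta[(j, k)][Z]
                    alpha[(j, k)][Z] += out * p * beta[(i, j)][Y]
    return alpha
```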
