Basic Parsing with Context-Free Grammars Some slides adapted from Julia Hirschberg and Dan Jurafsky
Announcements • To view past videos: • http://globe.cvn.columbia.edu:8080/oncampus.php?c=133ae14752e27fde909fdbd64c06b337 • Usually available only for 1 week. Right now, available for all previous lectures
Earley Parsing • Allows arbitrary CFGs • Fills a table in a single sweep over the input words • Table is length N+1; N is number of words • Table entries represent • Completed constituents and their locations • In-progress constituents • Predicted constituents
States/Locations • It would be nice to know where these things are in the input so…
• S -> · VP [0,0]: a VP is predicted at the start of the sentence
• NP -> Det · Nominal [1,2]: an NP is in progress; the Det goes from 1 to 2
• VP -> V NP · [0,3]: a VP has been found starting at 0 and ending at 3
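A dotted rule plus its span can be represented directly as a small record. A minimal sketch in Python (the State class and its field names are illustrative, not from the lecture):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        lhs: str      # left-hand side of the rule, e.g. "VP"
        rhs: tuple    # right-hand side symbols, e.g. ("V", "NP")
        dot: int      # position of the dot within rhs
        start: int    # where the constituent begins in the input
        end: int      # where the dot currently is in the input

        def complete(self):
            return self.dot == len(self.rhs)

        def next_symbol(self):
            return None if self.complete() else self.rhs[self.dot]

    # The three example states from the slide:
    s1 = State("S", ("VP",), 0, 0, 0)               # S -> . VP [0,0]
    s2 = State("NP", ("Det", "Nominal"), 1, 1, 2)   # NP -> Det . Nominal [1,2]
    s3 = State("VP", ("V", "NP"), 2, 0, 3)          # VP -> V NP . [0,3]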
Earley Algorithm • March through chart left-to-right. • At each step, apply 1 of 3 operators • Predictor • Create new states representing top-down expectations • Scanner • Match word predictions (rule with word after dot) to words • Completer • When a state is complete, see what rules were looking for that completed constituent • Done when an S spans from 0 to n
Predictor • Given a state • With a non-terminal to the right of the dot (not a part-of-speech category) • Create a new state for each expansion of that non-terminal • Place these new states into the chart entry where the generating state ends; the new states begin and end at that position • So the predictor looking at
• S -> . VP [0,0]
results in
• VP -> . Verb [0,0]
• VP -> . Verb NP [0,0]
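A sketch of the Predictor, continuing the State record above; the toy grammar and the POS set are made up for illustration:

    GRAMMAR = {                          # toy grammar, illustrative only
        "S":       [("VP",), ("NP", "VP")],
        "VP":      [("Verb",), ("Verb", "NP")],
        "NP":      [("Det", "Nominal")],
        "Nominal": [("Noun",)],
    }
    POS = {"Verb", "Det", "Noun"}        # part-of-speech categories are never predicted

    def predictor(state, chart):
        """Expand the non-terminal to the right of the dot."""
        nt = state.rhs[state.dot]
        for rhs in GRAMMAR[nt]:
            # New states begin and end where the generating state ends.
            new = State(nt, rhs, 0, state.end, state.end)
            if new not in chart[state.end]:
                chart[state.end].append(new)

    # Applied to  S -> . VP [0,0]  this adds
    #   VP -> . Verb [0,0]   and   VP -> . Verb NP [0,0]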
Scanner • Given a state • With a non-terminal to right of dot that is a part-of-speech category • If the next word in the input matches this POS • Create a new state with dot moved over the non-terminal • So scanner looking at VP -> . Verb NP [0,0] • If the next word, “book”, can be a verb, add new state: • VP -> Verb . NP [0,1] • Add this state to chart entry following current one • Note: Earley algorithm uses top-down input to disambiguate POS! Only POS predicted by some state can get added to chart!
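Continuing the same sketch, a possible Scanner; the lexicon is a hypothetical dict mapping words to their possible parts of speech:

    LEXICON = {"book": {"Verb", "Noun"}, "that": {"Det"}, "flight": {"Noun"}}

    def scanner(state, words, chart):
        """If the next input word can have the predicted POS, move the dot over it."""
        pos = state.rhs[state.dot]         # POS category to the right of the dot
        j = state.end
        if j < len(words) and pos in LEXICON.get(words[j], set()):
            new = State(state.lhs, state.rhs, state.dot + 1, state.start, j + 1)
            chart[j + 1].append(new)       # goes in the chart entry after the current one

    # Applied to  VP -> . Verb NP [0,0]  with input "book that flight",
    # this adds  VP -> Verb . NP [0,1]  because "book" can be a Verb.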
Completer • Applied to a state when its dot has reached the right end of the rule • The parser has discovered a category over some span of the input • Find and advance all previous states that were looking for this category • Copy the state, move the dot, insert into the current chart entry • Given:
• NP -> Det Nominal . [1,3]
• VP -> Verb . NP [0,1]
Add
• VP -> Verb NP . [0,3]
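And a possible Completer, again under the same assumptions as the sketches above:

    def completer(state, chart):
        """A constituent is finished; advance every state that was waiting for it."""
        for waiting in list(chart[state.start]):
            if not waiting.complete() and waiting.rhs[waiting.dot] == state.lhs:
                new = State(waiting.lhs, waiting.rhs, waiting.dot + 1,
                            waiting.start, state.end)
                if new not in chart[state.end]:
                    chart[state.end].append(new)

    # Given  NP -> Det Nominal . [1,3]  and  VP -> Verb . NP [0,1],
    # the Completer adds  VP -> Verb NP . [0,3].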
How do we know we are done? • Find an S state in the final column that spans from 0 to n and is complete. • If that’s the case you’re done. • S -> α · [0,n]
Earley • More specifically…
1. Predict all the states you can upfront
2. Read a word
3. Extend states based on matches
4. Add new predictions
5. Go to 2
6. Look at N to see if you have a winner
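A minimal driver for this loop, as a sketch; it assumes the State record, the toy GRAMMAR/POS/LEXICON, and the predictor/scanner/completer functions sketched above, and "GAMMA" is a hypothetical dummy start symbol:

    def earley_recognize(words):
        n = len(words)
        chart = [[] for _ in range(n + 1)]                 # one entry per position 0..n
        chart[0].append(State("GAMMA", ("S",), 0, 0, 0))   # dummy start state
        for i in range(n + 1):                             # sweep left to right
            for state in chart[i]:                         # chart[i] may grow while we iterate
                if state.complete():
                    completer(state, chart)
                elif state.next_symbol() in POS:
                    scanner(state, words, chart)           # may add to chart[i + 1]
                else:
                    predictor(state, chart)
        # Done if a complete S spanning 0..n sits in the last chart entry.
        return any(s.lhs == "S" and s.complete() and s.start == 0 for s in chart[n])

    print(earley_recognize(["book", "that", "flight"]))    # True with the toy grammar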
Example • Book that flight • We should find… an S from 0 to 3 that is a completed state…
Details • What kind of algorithms did we just describe? • Not parsers – recognizers • The presence of an S state with the right attributes in the right place indicates a successful recognition. • But no parse tree… no parser • That’s how we solve (not) an exponential problem in polynomial time
Converting Earley from Recognizer to Parser • With the addition of a few pointers we have a parser • Augment the “Completer” to point to where we came from.
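One possible way to add those pointers, as a sketch: give each state a tuple of back-pointers and have the Completer record which completed constituent it used (this redefines the State and Completer from the earlier sketches; field names are illustrative):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:                       # as before, plus one extra field
        lhs: str
        rhs: tuple
        dot: int
        start: int
        end: int
        pointers: tuple = ()           # completed child states, left to right

    def completer(state, chart):
        for waiting in list(chart[state.start]):
            if waiting.dot < len(waiting.rhs) and waiting.rhs[waiting.dot] == state.lhs:
                chart[state.end].append(
                    State(waiting.lhs, waiting.rhs, waiting.dot + 1,
                          waiting.start, state.end,
                          waiting.pointers + (state,)))    # point to where we came from
    # (The Scanner would need the same small change, carrying state.pointers forward.)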
Augmenting the chart with structural information • [Figure: chart entries S8–S13, with the Completer’s backpointers drawn between states]
Retrieving Parse Trees from Chart • All the possible parses for an input are in the table • We just need to read off all the backpointers from every complete S in the last column of the table • Find all the S -> X . [0,N] • Follow the structural traces from the Completer • Of course, this won’t be polynomial time, since there could be an exponential number of trees • We can at least represent ambiguity efficiently
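Reading a tree off the back-pointers is then a small recursion. A sketch, assuming the pointer-carrying State above, the toy GRAMMAR, and a chart built with the augmented Completer:

    def build_tree(state, words):
        """Turn a complete state plus its back-pointers into a bracketed tree."""
        subtrees, pos = [], state.start
        children = iter(state.pointers)
        for sym in state.rhs:
            if sym in GRAMMAR:                    # non-terminal: recurse into its back-pointer
                child = next(children)
                subtrees.append(build_tree(child, words))
                pos = child.end
            else:                                 # POS category: the leaf is the input word
                subtrees.append((sym, words[pos]))
                pos += 1
        return (state.lhs, subtrees)

    # Read off every complete S covering the whole input from the last chart entry:
    # parses = [build_tree(s, words) for s in chart[len(words)]
    #           if s.lhs == "S" and s.dot == len(s.rhs) and s.start == 0]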
Left Recursion vs. Right Recursion • Depth-first search will never terminate if grammar is left recursive (e.g. NP --> NP PP)
Solutions: • Rewrite the grammar (automatically?) to a weakly equivalent one which is not left-recursive, e.g. "The man {on the hill with the telescope…}":
NP -> NP PP (wanted: Nom plus a sequence of PPs)
NP -> Nom PP
NP -> Nom
Nom -> Det N
…becomes…
NP -> Nom NP'
Nom -> Det N
NP' -> PP NP' (wanted: a sequence of PPs)
NP' -> ε
• Not so obvious what these rules mean…
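In general form, immediate left recursion A -> A α | β (where β does not begin with A) is rewritten as A -> β A', A' -> α A' | ε, which derives the same strings right-recursively.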
Harder to detect and eliminate non-immediate left recursion • NP --> Nom PP • Nom --> NP • Fix depth of search explicitly • Rule ordering: non-recursive rules first • NP --> Det Nom • NP --> NP PP
Another Problem: Structural ambiguity • Multiple legal structures • Attachment (e.g. I saw a man on a hill with a telescope) • Coordination (e.g. younger cats and dogs) • NP bracketing (e.g. Spanish language teachers)
Solution? • Return all possible parses and disambiguate using “other methods”
Summing Up • Parsing is a search problem which may be implemented with many control strategies • Top-Down or Bottom-Up approaches each have problems • Combining the two solves some but not all issues • Left recursion • Syntactic ambiguity • Rest of today (and next time): Making use of statistical information about syntactic constituents • Read Ch 14
How to do parse disambiguation • Probabilistic methods • Augment the grammar with probabilities • Then modify the parser to keep only most probable parses • And at the end, return the most probable parse
Probabilistic CFGs • The probabilistic model • Assigning probabilities to parse trees • Getting the probabilities for the model • Parsing with probabilities • Slight modification to dynamic programming approach • Task is to find the max probability tree for an input
Probability Model • Attach probabilities to grammar rules • The expansions for a given non-terminal sum to 1:
VP -> Verb .55
VP -> Verb NP .40
VP -> Verb NP NP .05
• Read this as P(specific rule | LHS)
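As a quick sketch of what this looks like as data (the probabilities are the ones on the slide; the dict structure is just one possible representation):

    from collections import defaultdict

    # P(rule | LHS): for each left-hand side, the probabilities sum to 1.
    PCFG = {
        ("VP", ("Verb",)):            0.55,
        ("VP", ("Verb", "NP")):       0.40,
        ("VP", ("Verb", "NP", "NP")): 0.05,
    }

    totals = defaultdict(float)
    for (lhs, rhs), p in PCFG.items():
        totals[lhs] += p
    assert abs(totals["VP"] - 1.0) < 1e-9   # expansions of VP sum to 1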
Probability Model (1) • A derivation (tree) consists of the set of grammar rules that are in the tree • The probability of a tree is just the product of the probabilities of the rules in the derivation.
Probability model P(T,S) = P(T)P(S|T) = P(T); since P(S|T)=1
Probability Model (1.1) • The probability of a word sequence P(S) is the probability of its tree in the unambiguous case. • It’s the sum of the probabilities of the trees in the ambiguous case.
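A tiny sketch of both statements, with made-up rule probabilities: P(T) is the product of the rule probabilities in the derivation, and P(S) for an ambiguous sentence is the sum of P(T) over its trees:

    from math import prod

    def tree_prob(rules_used, pcfg):
        """P(T): product of the probabilities of the rules in the derivation."""
        return prod(pcfg[r] for r in rules_used)

    pcfg = {                                   # toy probabilities, illustrative only
        ("S", ("NP", "VP")):          1.0,
        ("VP", ("Verb", "NP")):       0.4,
        ("VP", ("Verb", "NP", "NP")): 0.05,
        ("NP", ("Det", "Noun")):      0.6,
    }
    tree1 = [("S", ("NP", "VP")), ("NP", ("Det", "Noun")),
             ("VP", ("Verb", "NP")), ("NP", ("Det", "Noun"))]
    print(tree_prob(tree1, pcfg))              # 1.0 * 0.6 * 0.4 * 0.6 = 0.144
    # For an ambiguous sentence:
    # p_sentence = tree_prob(tree1, pcfg) + tree_prob(tree2, pcfg) + ...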
Getting the Probabilities • From an annotated database (a treebank) • So for example, to get the probability for a particular VP rule just count all the times the rule is used and divide by the number of VPs overall.
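A sketch of that estimate, count(LHS -> RHS) / count(LHS), over a hypothetical list of rule occurrences read off treebank trees:

    from collections import Counter

    def estimate_pcfg(rule_occurrences):
        """Maximum-likelihood rule probabilities from treebank rule counts."""
        rule_counts = Counter(rule_occurrences)                    # count of each rule
        lhs_counts = Counter(lhs for lhs, _ in rule_occurrences)   # count of each LHS
        return {(lhs, rhs): c / lhs_counts[lhs]
                for (lhs, rhs), c in rule_counts.items()}

    # Toy "treebank": every VP rule occurrence found in the trees
    observed = ([("VP", ("Verb",))] * 11 + [("VP", ("Verb", "NP"))] * 8
                + [("VP", ("Verb", "NP", "NP"))] * 1)
    print(estimate_pcfg(observed)[("VP", ("Verb",))])              # 11/20 = 0.55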
Example sentences from those rules • Total: over 17,000 different grammar rules in the 1-million word Treebank corpus
Probabilistic Grammar Assumptions • We’re assuming that there is a grammar to be used to parse with. • We’re assuming the existence of a large robust dictionary with parts of speech • We’re assuming the ability to parse (i.e. a parser) • Given all that… we can parse probabilistically
Typical Approach • Bottom-up (CKY) dynamic programming approach • Assign probabilities to constituents as they are completed and placed in the table • Use the max probability for each constituent going up
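A compact sketch of that bottom-up approach, assuming a toy PCFG already in Chomsky Normal Form (binary rules plus lexical rules); the grammar and numbers are made up, and this is not the lecture's exact pseudocode:

    from collections import defaultdict

    BINARY = {("S", ("Verb", "NP")): 1.0,      # imperative S, to keep the toy grammar binary
              ("NP", ("Det", "Noun")): 1.0}
    LEXICAL = {("Verb", "book"): 0.3, ("Det", "that"): 0.6, ("Noun", "flight"): 0.2}

    def pcky(words):
        n = len(words)
        table = defaultdict(dict)              # table[(i, j)][A] = best probability of A over words[i:j]
        for i, w in enumerate(words):          # fill in the POS cells
            for (pos, word), p in LEXICAL.items():
                if word == w:
                    table[(i, i + 1)][pos] = max(p, table[(i, i + 1)].get(pos, 0.0))
        for span in range(2, n + 1):           # build larger constituents bottom-up
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):      # split point
                    for (a, (b, c)), p_rule in BINARY.items():
                        if b in table[(i, k)] and c in table[(k, j)]:
                            p = p_rule * table[(i, k)][b] * table[(k, j)][c]
                            if p > table[(i, j)].get(a, 0.0):   # keep only the max probability
                                table[(i, j)][a] = p
        return table[(0, n)].get("S", 0.0)

    print(pcky(["book", "that", "flight"]))    # roughly 0.036 (0.3 * 0.6 * 0.2)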
What’s that last bullet mean? • Say we’re talking about a final part of a parse: S -> NP VP, where the NP spans positions 0 to i and the VP spans i to j • The probability of the S is… P(S -> NP VP) * P(NP) * P(VP) • P(NP) and P(VP) are already known, because we’re doing bottom-up parsing
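For example, with made-up numbers: if P(S -> NP VP) = 0.8, the best NP over [0,i] has probability 0.05, and the best VP over [i,j] has probability 0.01, then this candidate S gets 0.8 * 0.05 * 0.01 = 0.0004.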
Max • I said the P(NP) is known. • What if there are multiple NPs for the span of text in question (0 to i)? • Take the max (where?)
Problems with PCFGs • The probability model we’re using is just based on the rules in the derivation… • Doesn’t use the words in any real way • Doesn’t take into account where in the derivation a rule is used
Solution • Add lexical dependencies to the scheme… • Infiltrate the predilections of particular words into the probabilities in the derivation • I.e. Condition the rule probabilities on the actual words
Heads • To do that we’re going to make use of the notion of the head of a phrase • The head of an NP is its noun • The head of a VP is its verb • The head of a PP is its preposition (It’s really more complicated than that but this will do.)
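A sketch of that simplification as a simple lookup (real head-finding rules, e.g. Collins-style head tables, are considerably more involved; the function and data here are illustrative only):

    # Simplified head rules: which child category supplies the head word of each phrase.
    HEAD_CHILD = {"NP": "Noun", "VP": "Verb", "PP": "Preposition"}

    def head_word(label, children):
        """children: list of (category, head word) pairs; return the phrase's head word."""
        target = HEAD_CHILD.get(label)
        for cat, word in children:
            if cat == target:
                return word
        return None

    print(head_word("VP", [("Verb", "book"), ("NP", "flight")]))   # "book"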