Review of “The Necessity of Parsing for Predicate Argument Recognition” (Gildea and Palmer, 2002)
The Focus of the Paper: parsing and its effect on semantic interpretation

Pipeline: TAGGER → PARSER → SEM. INTERP. → Auto Summary

• Tagger output (parts of speech): Big/JJ Investment/NN Banks/NNS …
• Parser output: parse tree
• Semantic interpreter output (semantic roles): Predicate: support; Theme: …banks; Object: …floor traders; Manner: by buying…
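To make the first pipeline stage concrete, here is a minimal sketch of POS tagging with NLTK. This is purely illustrative; the paper does not specify this toolkit, and the tagger models must be downloaded once.

```python
# Sketch of the TAGGER stage using NLTK (illustrative only; not the
# paper's setup). One-time downloads required:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

tokens = nltk.word_tokenize("Big investment banks refused to step up")
print(nltk.pos_tag(tokens))
# e.g. [('Big', 'JJ'), ('investment', 'NN'), ('banks', 'NNS'), ...]
```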
Two Kinds of Parsers

Statistical Parsers
• Older methodology
• Computationally expensive
• Generate a full parse tree with attachments (output: Parse Tree)

Finite-State Recognizers ("Chunking Parsers")
• Newer methodology
• Faster
• Output "chunks" with no attachments (output: Chunks, e.g. [NP Big investment banks] [VP refused to step] [ADVP up] [PP to] …)
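A minimal sketch of what a chunking parser produces, using NLTK's regular-expression chunker; the one-rule grammar is our own toy example, not the recognizer evaluated in the paper.

```python
# Toy finite-state ("chunking") parse with NLTK's RegexpParser.
import nltk

tagged = [("Big", "JJ"), ("investment", "NN"), ("banks", "NNS"),
          ("refused", "VBD"), ("to", "TO"), ("step", "VB"), ("up", "RP")]
chunker = nltk.RegexpParser("NP: {<JJ>*<NN.*>+}")  # flat NP chunks only
print(chunker.parse(tagged))
# (S (NP Big/JJ investment/NN banks/NNS) refused/VBD to/TO step/VB up/RP)
```

Note the output: the NP is grouped, but nothing above it is attached anywhere, which is exactly the "chunks with no attachments" contrast drawn above.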
Scientific Method
• Problem
• Hypothesis
• Materials
• Procedure
• Results
• Conclusion / Discussion
Problem & Hypothesis Problem: Do modern ‘chunking parsers’ deliver inferior input to semantic interpreter? Hypothesis: Yes, a semantic interpreter should perform worse when given chunked input (versus a full parse tree)
Materials & Procedure Materials: A semantic interpreter (‘SEM’), and a semantic interpreter programmed to accept chunked input (‘SEM-lite’) Procedure: Compare SEM’s output to a gold standard. Compare SEM-lite’s output the same gold standard. Then compare their respective accuracy (or precision/recall)
The Experiments

Experiment 1: Known Boundary Condition
• Constituents already identified in the sentence; find the correct role.
• Measure: Accuracy

Experiment 2: Unknown Boundary Condition
• Find and label the semantic arguments.
• Measure: Precision, Recall
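In the unknown-boundary condition a prediction only counts if both the constituent span and the role label match, which is why precision and recall replace plain accuracy. A sketch with invented spans:

```python
# Precision/recall over labeled spans (start, end, role); data invented.
def precision_recall(predicted, gold):
    tp = len(predicted & gold)                  # exact span + label matches
    return tp / len(predicted), tp / len(gold)

gold = {(0, 3, "Theme"), (10, 14, "Object"), (15, 20, "Manner")}
pred = {(0, 3, "Theme"), (10, 13, "Object")}    # one wrong boundary
p, r = precision_recall(pred, gold)
print(p, r)  # 0.5 0.3333...
```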
The Results

Experiment 1
• Best SEM: 82.3% accuracy
• Best SEM-lite: 77.0% accuracy
• 6.4% relative decrease in accuracy

Experiment 2
• Best SEM: 71.1% precision
• Best SEM-lite: 49.5% precision
• 30.4% relative decrease in precision
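Note the reported decreases are relative, not absolute (82.3 to 77.0 is only a 5.3-point drop); a quick check of the arithmetic:

```python
# Relative decreases implied by the reported numbers.
print(round((82.3 - 77.0) / 82.3 * 100, 1))  # 6.4  (Experiment 1)
print(round((71.1 - 49.5) / 71.1 * 100, 1))  # 30.4 (Experiment 2)
```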
The Experiment - Conclusion

Recap: statistical parsers generate a full parse tree with attachments; finite-state recognizers are faster but output only flat chunks with no attachments ([NP Big investment banks] [VP refused to step] [ADVP up] [PP to] …).
Why? Full statistical parsers provide three pieces of information that chunking parsers do not:
• Constituent boundaries
• Grammatical relationships (paths)
• Head words
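To illustrate the second item, here is a sketch of extracting a parse-tree path from an NLTK constituency tree. The tree and the ^/! notation (standing in for the up/down arrows used in the paper) are our own.

```python
# Sketch: the "path" feature from a predicate node to an argument node,
# written with ^ for "up" and ! for "down" (the paper draws arrows).
from nltk import Tree

tree = Tree.fromstring(
    "(S (NP (JJ Big) (NN investment) (NNS banks)) (VP (VBD refused)))")

def tree_path(tree, pred_pos, arg_pos):
    """Node labels from pred_pos up to the lowest common ancestor,
    then down to arg_pos, e.g. 'VBD^VP^S!NP'."""
    i = 0  # length of the shared prefix = lowest common ancestor
    while i < min(len(pred_pos), len(arg_pos)) and pred_pos[i] == arg_pos[i]:
        i += 1
    up = [tree[pred_pos[:j]].label() for j in range(len(pred_pos), i - 1, -1)]
    down = [tree[arg_pos[:j]].label() for j in range(i + 1, len(arg_pos) + 1)]
    return "^".join(up) + "!" + "!".join(down)

print(tree_path(tree, (1, 0), (0,)))  # VBD^VP^S!NP
```

A chunker cannot produce this feature at all, since its output has no structure above the chunk level to walk through.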
Why? The parse tree provides:
• Constituent boundaries
• Grammatical relationships (paths)
• Head words

The semantic interpreter chooses the role r that maximizes P(r | pt, path, position, voice, hw, p), where the parse-derived values (printed in red on the original slide) are pt, path, and hw.

KEY
r: semantic role
pt: phrase type (e.g. NP, VP, S, ...)
path: parse tree path (e.g. VB↑VP↓NP)
position: position with respect to the predicate
voice: active/passive
hw: head word in an NP
p: predicate
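A toy sketch of that decision rule: among candidate roles, pick the r with the highest conditional probability. The probability table below is invented for illustration; in the paper such probabilities are estimated from annotated training data.

```python
# Toy decision rule: argmax over roles of
# P(r | pt, path, position, voice, hw, p). Table values are invented.
toy_table = {
    ("NP", "VB^VP!NP", "after", "active", "traders", "support"):
        {"Object": 0.7, "Theme": 0.2, "Manner": 0.1},
}

def best_role(features):
    dist = toy_table.get(features)
    return max(dist, key=dist.get) if dist else None

print(best_role(("NP", "VB^VP!NP", "after", "active", "traders", "support")))
# Object
```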
Problems with the Experiment
• What if SEM-lite is broken? (not the same software under test)
• Should find the average decrease across multiple semantic interpreters, not just their own
Review

I feel they did support their claim: chunking-parser output leads to lower-quality output from the semantic interpreter.
Awful Sentence

Big investment banks refused to step up to the plate to support the beleaguered floor traders by buying big blocks of stock, traders say.
References

Daniel Gildea and Martha Palmer. "The Necessity of Parsing for Predicate Argument Recognition." In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pp. 239–246, Philadelphia, PA, 2002.

Definition of normative: http://en.wikipedia.org/wiki/Normative
Loser graphic: http://fingers.typepad.com/photos/uncategorized/loser.jpg
Google logo: http://www.google.com/