
Reference Resolution



Presentation Transcript


  1. Reference Resolution CMSC 35900-1 Discourse and Dialogue September 30, 2004

  2. Agenda • Coherence: Holding discourse together • Coherence types and relations • Reference resolution • Knowledge-rich, deep analysis approaches • Lappin & Leass, Centering, Hobbs • Knowledge-based, shallow analysis: CogNIAC (‘95) • Learning approaches: Fully, Weakly Supervised • Cardie & Ng ’02, ’03, ’04

  3. Coherence: Holding Discourse Together • Cohesion: • Necessary to make discourse a semantic unit • All utterances linked to some preceding utterance • Expresses continuity • Key: Enables hearers to interpret missing elements, through textual and environmental context links

  4. Cohesive Ties (Halliday & Hasan, 1976) • “Reference”: e.g. “he”, “she”, “it”, “that” • Relate utterances by referring to same entities • “Substitution”/“Ellipsis”: e.g. Jack fell. Jill did too. • Relate utterances by repeated partial structure w/ contrast • “Lexical Cohesion”: e.g. fell, fall, fall…, trip… • Relate utterances by repeated/related words • “Conjunction”: e.g. and, or, then • Relate continuous text by logical, semantic, interpersonal relations. Interpretation of 2nd utterance depends on first

  5. Reference Resolution • Match referring expressions to referents • Syntactic & semantic constraints • Syntactic & semantic preferences • Reference resolution algorithms

  6. Reference Resolution Approaches • Common features • “Discourse Model” • Referents evoked in discourse, available for reference • Structure indicating relative salience • Syntactic & Semantic Constraints • Syntactic & Semantic Preferences • Differences: • Which constraints/preferences? How combine? Rank?

  7. A Resolution Algorithm (Lappin & Leass) • Discourse model update: • Evoked entities: • Equivalence classes: Coreferent referring expressions • Salience value update: • Weighted sum of salience values: • Based on syntactic preferences • Pronoun resolution: • Exclude referents that violate syntactic constraints • Select referent with highest salience value
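The salience update above can be sketched as follows. The factor weights are those reported by Lappin & Leass (1994); the data model (a mapping from candidate to sentence distance and factor list) and the helper names are illustrative simplifications, not their full implementation.

```python
# Lappin & Leass-style salience scoring (weights from the 1994 paper).
SALIENCE_WEIGHTS = {
    "sentence_recency": 100,
    "subject": 80,
    "existential": 70,
    "accusative": 50,
    "indirect_object": 40,
    "non_adverbial": 50,
    "head_noun": 80,
}

def salience(factors):
    """Sum the weights of the syntactic factors a mention exhibits."""
    return sum(SALIENCE_WEIGHTS[f] for f in factors)

def resolve_pronoun(candidates):
    """Pick the candidate referent with the highest salience value.

    `candidates` maps entity -> (sentence distance, factor list).
    Salience is halved for each intervening sentence.
    """
    def score(item):
        distance, factors = item[1]
        return salience(factors) / (2 ** distance)
    return max(candidates.items(), key=score)[0]
```

On the Acura Integra example, "John" (subject of the prior sentence) outscores "Integra" (direct object), so "He" resolves to John.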

  8. Example • John saw a beautiful Acura Integra in the dealership. • He showed it to Bob. • He bought it.

  9. Hobbs’ Tree-based Resolution • Uses full syntactic analyses as structure • Ranking of possible antecedents based on: • Breadth-first left-to-right tree traversal • Moving backward through sentences
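The candidate-ordering idea can be sketched as a toy traversal: breadth-first, left-to-right within each parse tree, moving backward through preceding sentences. This shows only the search order; Hobbs' full algorithm also walks specific paths inside the current sentence and applies agreement checks. Trees here are nested tuples `(label, child, ...)`, an illustrative stand-in for real parser output.

```python
from collections import deque

def leaves(tree):
    """Collect the leaf word strings of a (label, child, ...) tuple tree."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for child in tree[1:]:
        out.extend(leaves(child))
    return out

def np_candidates(sentences):
    """Rank antecedent candidates: breadth-first, left-to-right within each
    tree, moving backward through earlier sentences."""
    ranked = []
    for tree in reversed(sentences):          # most recent sentence first
        queue = deque([tree])
        while queue:
            node = queue.popleft()
            if isinstance(node, str):
                continue
            if node[0] == "NP":
                ranked.append(" ".join(leaves(node)))
            queue.extend(node[1:])
    return ranked
```

Because the traversal is breadth-first and left-to-right, subjects (high and leftmost in the tree) are ranked before objects, which encodes the grammatical-role preference.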

  10. Centering • Identify the local “center” of attention • Pronominalization focuses attention, appropriate use establishes coherence • Identify entities available for reference • Describe shifts in what discourse is about • Prefer different types for coherence
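The "shifts in what discourse is about" can be made concrete with the standard centering transition table (Grosz, Joshi & Weinstein 1995): given the backward-looking center of the previous and current utterance and the current preferred center, classify the transition. Coherence preference is Continue > Retain > Smooth-Shift > Rough-Shift.

```python
def transition(cb_prev, cb_cur, cp_cur):
    """Classify a centering transition between adjacent utterances.

    cb_prev: backward-looking center of the previous utterance (None if undefined)
    cb_cur:  backward-looking center of the current utterance
    cp_cur:  preferred (highest-ranked forward-looking) center of the current utterance
    """
    if cb_cur == cb_prev or cb_prev is None:
        return "CONTINUE" if cb_cur == cp_cur else "RETAIN"
    return "SMOOTH-SHIFT" if cb_cur == cp_cur else "ROUGH-SHIFT"
```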

  11. Reference Resolution: Differences • Different structures to capture focus • Different assumptions about: • # of foci, ambiguity of reference • Different combinations of features

  12. Reference Resolution: Agreements • Knowledge-based • Deep analysis: full parsing, semantic analysis • Enforce syntactic/semantic constraints • Preferences: • Recency • Grammatical Role Parallelism (ex. Hobbs) • Role ranking • Frequency of mention • Local reference resolution • Little/No world knowledge • Similar levels of effectiveness

  13. Alternative Strategies • Knowledge-based, but • Shallow processing, simple rules! • CogNIAC (Baldwin ’95) • Data-driven • Fully or weakly supervised learning • Cardie & Ng ( ’02-’04)

  14. CogNIAC • Goal: Resolve with high precision • Identify where ambiguous, use no world knowledge, simple syntactic analysis • Precision: # correct labelings/# of labelings • Recall: # correct labelings/# of anaphors • Uses simple set of ranked rules • Applied incrementally left-to-right • Designed to work on newspaper articles • Tune/rank rules
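The precision/recall definitions above, as code: a high-precision resolver may decline to label ambiguous anaphors, which lowers recall (denominator: all anaphors) but not precision (denominator: labeled anaphors only).

```python
def precision(correct, labeled):
    """Fraction of emitted labelings that are correct."""
    return correct / labeled if labeled else 1.0

def recall(correct, total_anaphors):
    """Fraction of all anaphors that were correctly resolved."""
    return correct / total_anaphors
```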

  15. CogNIAC: Rules • Only resolve reference if unique antecedent • 1) Unique in discourse • 2) Reflexive: nearest legal in same sentence • 3) Unique in current & prior: • 4) Possessive Pro: single exact poss in prior • 5) Unique in current • 6) Unique subj/subj pronoun
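The control structure of the ranked rules can be sketched as below: rules fire in order, and a pronoun is resolved only when the first applicable rule returns exactly one antecedent; otherwise CogNIAC leaves it unresolved. The two rule bodies are illustrative stand-ins (an `agrees` flag standing in for gender/number agreement), not Baldwin's full conditions.

```python
def unique_in_discourse(pronoun, context):
    # Rule 1 sketch: exactly one agreeing candidate in the whole discourse.
    matches = [c for c in context["all_candidates"] if c["agrees"]]
    return matches if len(matches) == 1 else []

def unique_current_prior(pronoun, context):
    # Rule 3 sketch: exactly one agreeing candidate in current + prior sentence.
    matches = [c for c in context["recent_candidates"] if c["agrees"]]
    return matches if len(matches) == 1 else []

RULES = [unique_in_discourse, unique_current_prior]

def resolve(pronoun, context):
    """Apply ranked rules; resolve only on a unique antecedent."""
    for rule in RULES:
        found = rule(pronoun, context)
        if len(found) == 1:
            return found[0]["name"]
    return None   # ambiguous: decline to resolve (preserves precision)
```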

  16. CogNIAC: Example • John saw a beautiful Acura Integra in the dealership. • He showed it to Bob. • He = John: Rule 1; it -> ambiguous (Integra) • He bought it. • He = John: Rule 6; it = Integra: Rule 3

  17. Data-driven Reference Resolution • Prior approaches • Knowledge-based, hand-crafted • Data-driven machine learning approach • Cast coreference as classification problem • For each pair NPi,NPj, do they corefer? • Cluster to form equivalence classes

  18. NP Coreference Examples • Link all NPs that refer to the same entity Queen Elizabeth set about transforming her husband, King George VI, into a viable monarch. Logue, a renowned speech therapist, was summoned to help the King overcome his speech impediment...

  19. Training Instances • 25 features per instance: 2 NPs, features, class • lexical (3) • string matching for pronouns, proper names, common nouns • grammatical (18) • pronoun_1, pronoun_2, demonstrative_2, indefinite_2, … • number, gender, animacy • appositive, predicate nominative • binding constraints, simple contra-indexing constraints, … • span, maximalnp, … • semantic (2) • same WordNet class • alias • positional (1) • distance between the NPs in terms of # of sentences • knowledge-based (1) • naïve pronoun resolution algorithm
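An illustrative extraction of a few of these pairwise features for an (NP_i, NP_j) instance; the feature names mirror the slide, but the extraction logic (and the dict-based NP representation) is a deliberately simplified sketch.

```python
def features(np_i, np_j):
    """Build a small pairwise feature dict for a candidate (NP_i, NP_j) link."""
    return {
        "string_match": np_i["text"].lower() == np_j["text"].lower(),
        "pronoun_2": np_j["pos"] == "PRP",          # is the second NP a pronoun?
        "number_agree": np_i["number"] == np_j["number"],
        "gender_agree": np_i["gender"] == np_j["gender"],
        "sentence_distance": np_j["sent"] - np_i["sent"],
        "alias": np_j["text"] in np_i["text"],       # crude proper-name alias test
    }
```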

  20. Classification & Clustering • Classifiers: • C4.5 (Decision Trees), RIPPER • Cluster: Best-first, single link clustering • Each NP in own class • Test preceding NPs • Select highest confidence coref, merge classes • Tune: Training sample skew: class, type
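The best-first, single-link clustering step can be sketched with union-find: each NP starts in its own class, and scanning left to right, each NP merges with the class of its highest-confidence preceding antecedent. Here `scores[(i, j)]` stands in for the trained classifier's coreference confidence.

```python
def cluster(n, scores, threshold=0.5):
    """Best-first single-link clustering over n NPs.

    scores: dict mapping (i, j) with i < j to classifier confidence.
    Returns a class id (root index) for each NP.
    """
    parent = list(range(n))            # union-find over NP indices

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for j in range(n):
        # best-first: among preceding NPs, take the most confident antecedent
        best = max(range(j), key=lambda i: scores.get((i, j), 0.0), default=None)
        if best is not None and scores.get((best, j), 0.0) >= threshold:
            parent[find(j)] = find(best)
    return [find(i) for i in range(n)]
```

Single-link means classes merge transitively: if NP2 links to NP0, anything later linking to NP2 joins NP0's class as well.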

  21. Weakly Supervised Learning • Exploit small pool of labeled training data • Larger pool unlabeled • Single-View Multi-Learner Co-training • 2 different learning algorithms, same feature set • each classifier labels unlabeled instances for the other classifier • data pool is flushed after each iteration
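The loop structure above can be sketched with two toy learners on a one-dimensional feature. This is schematic scaffolding for single-view, multi-learner co-training (two different algorithms, one feature set, pool flushed each iteration), not the exact Ng & Cardie setup: the learner classes and data are invented for illustration.

```python
class MeanThreshold:
    """Toy learner 1: threshold at the midpoint of the two class means."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, x):
        return 1 if x > self.cut else 0

    def confidence(self, x):
        return abs(x - self.cut)

class NearestLabeled:
    """Toy learner 2: label of the nearest labeled example (1-NN)."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))

    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

    def confidence(self, x):
        return -min(abs(p[0] - x) for p in self.data)

def co_train(labeled, unlabeled, learner_a, learner_b, rounds=3, grow=1):
    """Each round: refit both learners on their own data, draw a fresh pool
    (flushed every iteration), and let each learner hand its most confident
    pool labels to the *other* learner's training set."""
    data_a, data_b = list(labeled), list(labeled)
    for _ in range(rounds):
        learner_a.fit([x for x, _ in data_a], [y for _, y in data_a])
        learner_b.fit([x for x, _ in data_b], [y for _, y in data_b])
        if not unlabeled:
            break
        pool, unlabeled = unlabeled[:10], unlabeled[10:]
        for src, dst in ((learner_a, data_b), (learner_b, data_a)):
            best = sorted(pool, key=src.confidence, reverse=True)[:grow]
            dst.extend((x, src.predict(x)) for x in best)
    return learner_a, learner_b
```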

  22. Effectiveness • Supervised learning approaches • Comparable performance to knowledge-based • Weakly supervised approaches • Decent effectiveness, still lags supervised • Dramatically less labeled training data • 1K vs 500K

  23. Reference Resolution: Extensions • Cross-document co-reference • (Bagga & Baldwin, 1998) • Break “the document boundary” • Question: “John Smith” in A = “John Smith” in B? • Approach: • Integrate: • Within-document co-reference • with • Vector Space Model similarity

  24. Cross-document Co-reference • Run within-document co-reference (CAMP) • Produce chains of all terms used to refer to entity • Extract all sentences with reference to entity • Pseudo per-entity summary for each document • Use Vector Space Model (VSM) distance to compute similarity between summaries
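The VSM comparison step can be sketched as follows: represent each per-entity pseudo-summary as a term-frequency vector and compare summaries with cosine similarity; two "John Smith" chains are linked when similarity exceeds a threshold. (Bag-of-words term frequency is a simplification; the tokenization and any weighting scheme here are assumptions.)

```python
import math
from collections import Counter

def cosine(summary_a, summary_b):
    """Cosine similarity between term-frequency vectors of two summaries."""
    va = Counter(summary_a.lower().split())
    vb = Counter(summary_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```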

  25. Cross-document Co-reference • Experiments: • 197 NYT articles referring to “John Smith” • 35 different people, 24: 1 article each • With CAMP: Precision 92%; Recall 78% • Without CAMP: Precision 90%; Recall 76% • Pure Named Entity: Precision 23%; Recall 100%

  26. Conclusions • Co-reference establishes coherence • Reference resolution depends on coherence • Variety of approaches: • Syntactic constraints, Recency, Frequency,Role • Similar effectiveness - different requirements • Co-reference can enable summarization within and across documents (and languages!)

  27. Coherence & Coreference • Cohesion: Establishes semantic unity of discourse • Necessary condition • Different types of cohesive forms and relations • Enables interpretation of referring expressions • Reference resolution • Syntactic/Semantic Constraints/Preferences • Discourse, Task/Domain, World knowledge • Structure and semantic constraints

  28. Challenges • Alternative approaches to reference resolution • Different constraints, rankings, combination • Different types of referent • Speech acts, propositions, actions, events • “Inferrables” - e.g. car -> door, hood, trunk,.. • Discontinuous sets • Generics • Time
