
Course 5: Anaphora Resolution - continued



Presentation Transcript


  1. Course 5: Anaphora Resolution - continued

  2. Designing Test-Beds for General Anaphora Resolution. Work done in collaboration with: Oana Postolache (oana@coli.uni-saarland.de), University of Saarland, Germany. DAARC'04.

  3. Evaluation of a minimal AR system. [Diagram: input → RE-extractor → markables → AR-engine → output]

  4. Evaluation of a minimal AR system. [Diagram: the pipeline input → RE-extractor → AR-engine, with three evaluation settings, each scored by P, R, F:
  • test the whole system globally: re-test-coref-test against re-gold-coref-gold
  • test the RE-extractor: re-test against re-gold
  • test only the AR-engine: re-gold-coref-test against re-gold-coref-gold]

  5. The Orwell corpus
  • Chapters 1, 2, 3 and 5 from George Orwell's "Nineteen Eighty-Four"
  • Automatic detection of markables: POS-tagging; FDG parser
  • markable = any construction dominated by a noun/pronoun
  • detection of head and lemma (given)
  • deletion of relative clauses

  6. The Penn Treebank corpus
  • 7 files from WSJ
  • Extraction of markables from the PTB-style constituency trees
  • Collins' rules to extract the head
  • WordNet script for the lemma
  • dependency links between words

  7. Dimensions of corpora

  8. Markables
  • Generally conformant with the MUC-7 and ACE criteria
  • Differences:
    • relative clauses are not included
    • each term of an apposition is taken separately ([Big Brother], [the primal traitor])
    • conjoined expressions are annotated individually ([John] and [Mary], [hills] and [mountains])
    • modifying nouns appearing in noun-noun modification are not marked separately ([glass doors], [prison food], [the junk bond market])

  9. Markables. What do we mark?
  • noun phrases
    • definite (the principle, the flying object)
    • indefinite (a book, a future star)
    • undetermined (sole guardian of truth)
    • names (Winston Smith, The Ministry of Love)
    • dates (April)
    • currency expressions ($40)
    • percentages (48%)
  • pronouns
    • personal (I, you, he, him, she, her, it, they, them)
    • possessive (his, her, hers, its, their, theirs)
    • reflexive (himself, herself, itself, themselves)
    • demonstrative (this, that, these, those)
    • wh-pronouns, when they replace an entity (which, who, whom, whose, that)
  • numerals, when they refer to entities (four of them, the first, the second)

  10. Our model: primary attributes (see the sketch below)
  • Lexical & morphological: lemma, number, POS, headForm
  • Syntactic: synt-role, dependency-link, npText, includedNPs, isDefinite, isIndefinite, predNameOf
  • Semantic: isMaleName, isFemaleName, isFamilyName, isPerson, HeSheItThey
  • Positional: offset, sentenceID
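For concreteness, here is a sketch of the markable record implied by the attribute list above. The field names follow the slide, but the Python types and default values are illustrative assumptions, not the authors' data structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Markable:
    # lexical & morphological
    lemma: str
    number: str                 # 'sg' or 'pl' (assumed encoding)
    pos: str
    head_form: str
    # syntactic
    synt_role: str = ''
    dependency_link: int = -1   # index of the governing word (assumed)
    np_text: str = ''
    included_nps: List[int] = field(default_factory=list)
    is_definite: bool = False
    is_indefinite: bool = False
    pred_name_of: int = -1      # markable this one is a predicative name of
    # semantic
    is_male_name: bool = False
    is_female_name: bool = False
    is_family_name: bool = False
    is_person: bool = False
    he_she_it_they: List[float] = field(default_factory=lambda: [0, 0, 0, 0])
    # positional
    offset: int = 0
    sentence_id: int = 0
```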

  11. Our model: knowledge sources
  • For each attribute there is a knowledge source that fetches its value using:
    • the POS tagger output
    • the FDG structure
    • large name databases
    • the WordNet hierarchy
    • punctuation

  12. Knowledge sources - HeSheItThey (a sketch follows below)
  • HeSheItThey = [Phe, Pshe, Pit, Pthey]
  • for pronouns: straightforward
  • for NPs, with:
    • n = # synsets of the head
    • f = # synsets which are hyponyms of <female>
    • m = # synsets which are hyponyms of <male>
    • p = # synsets which are hyponyms of <person>
  • If the NP is plural: Phe = 0, Pshe = 0, Pit = 0, Pthey = 1
  • Else: Phe = m/n, Pshe = f/n, Pit = (n - p)/n, Pthey = 0
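A minimal sketch of this knowledge source using NLTK's WordNet interface. The slide's fractions are garbled in the transcript; the reconstruction Phe = m/n, Pshe = f/n, Pit = (n - p)/n follows from the definitions of n, m, f, p. The synset identifiers (person.n.01, female.n.02, male.n.02) and the uniform back-off for unknown heads are assumptions, not the authors' configuration.

```python
from nltk.corpus import wordnet as wn

PERSON = wn.synset('person.n.01')
FEMALE = wn.synset('female.n.02')   # assumed sense: "a female person"
MALE = wn.synset('male.n.02')       # assumed sense: "a male person"

def is_hyponym_of(synset, ancestor):
    """True if `ancestor` is among the transitive hypernyms of `synset`."""
    return ancestor in synset.closure(lambda s: s.hypernyms())

def he_she_it_they(head_lemma, plural=False):
    """Return [P_he, P_she, P_it, P_they] for an NP head, per the slide."""
    if plural:
        return [0.0, 0.0, 0.0, 1.0]
    synsets = wn.synsets(head_lemma, pos=wn.NOUN)
    n = len(synsets)
    if n == 0:
        return [0.25, 0.25, 0.25, 0.25]  # assumed back-off to uniform
    m = sum(is_hyponym_of(s, MALE) for s in synsets)
    f = sum(is_hyponym_of(s, FEMALE) for s in synsets)
    p = sum(is_hyponym_of(s, PERSON) for s in synsets)
    return [m / n, f / n, (n - p) / n, 0.0]
```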

  13. Our model: rules (a combination sketch follows below)
  • Demolishing rules:
    • IncludingRule: prohibits coreference between nested REs
  • Certifying rules: PredNameRule, ProperNameRule, whRule
  • Promoting/demoting rules: HeSheItTheyRule, RoleRule, NumberRule, LemmaRule, PersonRule, SynonymyRule, HypernymyRule, WordnetChainRule
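A minimal sketch of how the three rule classes could combine when scoring a candidate antecedent. The method names (prohibits, certifies, delta) and the additive scoring scheme are assumptions for illustration, not the AR-engine's actual interface.

```python
def score_candidate(anaphor, candidate, demolishing, certifying, promoting):
    """Combine the three rule classes into one antecedent score."""
    for rule in demolishing:        # e.g. IncludingRule: hard prohibition
        if rule.prohibits(anaphor, candidate):
            return float('-inf')
    for rule in certifying:         # e.g. PredNameRule: certain coreference
        if rule.certifies(anaphor, candidate):
            return float('inf')
    # promoting/demoting rules each push a running score up or down
    return sum(rule.delta(anaphor, candidate) for rule in promoting)
```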

  14. Our model: the whRules (example)
  • Rules for detecting the antecedent of a wh-pronoun:
    • Case 1: I saw [a blond boy] who was playing in the garden.
    • Case 2: [The colour of the chair] which was underneath the table… [The atmosphere of happiness] which she carried with her.

  15. Our model: domain of referential accessibility • Linear

  16. Evaluation of the RE-extractor. Test the RE-extractor. [Diagram: the pipeline input → RE-extractor → AR-engine; the extractor output re-test is scored with P, R, F against re-gold.] When does a gold-test pair of markables match?

  17. Evaluation of the RE-extractor. When does a gold-test pair of markables match?
  • head matching (HM): a gold and a test markable match if they have the same head
  Results: Orwell-HM: P = 0.85, R = 0.94, F = 0.89; PTB-HM: P = 0.90, R = 0.95, F = 0.92.

  18. Evaluation of the RE-extractor. When does a gold-test pair of markables match? (See the sketch below.)
  • partial matching (PM): a gold and a test markable match if they have the same head and the mutual overlap is higher than 50% of the longest span (l / l1 > 0.5, where l is the overlap and l1 the longer of the two spans)
  Results: Orwell-PM: P = 0.74, R = 0.80, F = 0.76; PTB-PM: P = 0.78, R = 0.82, F = 0.79.
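A sketch of the two matching criteria and the resulting P/R/F computation. Markables are assumed to be (head, start, end) triples with half-open token spans; this illustrates the criteria, it is not the evaluation script used in the paper.

```python
def head_match(gold, test):
    """HM: same head."""
    return gold[0] == test[0]

def partial_match(gold, test):
    """PM: same head and overlap above 50% of the longest span."""
    (gh, gs, ge), (th, ts, te) = gold, test
    if gh != th:
        return False
    overlap = min(ge, te) - max(gs, ts)
    longest = max(ge - gs, te - ts)
    return overlap / longest > 0.5

def prf(gold_set, test_set, match):
    """Precision over test markables, recall over gold markables."""
    correct_test = sum(any(match(g, t) for g in gold_set) for t in test_set)
    correct_gold = sum(any(match(g, t) for t in test_set) for g in gold_set)
    p = correct_test / len(test_set)
    r = correct_gold / len(gold_set)
    return p, r, 2 * p * r / (p + r)
```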

  19. Evaluation of the AR-engine
  • Same set of markables (on the "identity of head" criterion)
  • For each anaphor in the gold:
    • If it belongs to a chain that doesn't contain any other anaphor, we look in the test set to see whether it belongs to a similar trivial chain
    • if yes, it gets the value 1 (diagram: ci = 1)

  20. Evaluation of the AR-engine
  • Same set of markables (on the "identity of head" criterion)
  • For each anaphor in the gold:
    • If it belongs to a chain that doesn't contain any other anaphor, we look in the test set to see whether it belongs to a similar trivial chain
    • if yes, it gets the value 1; otherwise it gets the value 0 (diagram: ci = 0)

  21. Evaluation of the AR-engine
  • Same set of markables (on the "identity of head" criterion)
  • For each anaphor in the gold:
    • If the anaphor belongs to a chain containing n other anaphors, we look in the test set and count how many of these n anaphors belong to the test chain of the current anaphor (call this number m). The ratio m/n is the value assigned to the current anaphor (diagram example: ci = 2/3).

  22. Evaluation of the AR-engine (see the sketch below)
  • Same set of markables (on the "identity of head" criterion)
  • For each anaphor in the gold:
    • If the anaphor belongs to a chain containing n other anaphors, we look in the test set and count how many of these n anaphors belong to the test chain of the current anaphor (call this number m). The ratio m/n is the value assigned to the current anaphor (diagram example: ci = 2/3).
  • Then we sum this value over all anaphors and divide by the number of anaphors: SR = (Σi ci) / N
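A sketch of the success-rate metric just described, assuming chains are represented as sets of markable ids. Iterating over every chain member rather than over anaphors only is a simplification of the slide's "for each anaphor in the gold".

```python
def success_rate(gold_chains, test_chain_of):
    """gold_chains: iterable of sets of markable ids;
    test_chain_of: dict mapping a markable id to its test chain (a set)."""
    scores = []
    for chain in gold_chains:
        for anaphor in chain:
            partners = chain - {anaphor}
            test_chain = test_chain_of.get(anaphor, {anaphor})
            if not partners:
                # trivial chain: score 1 iff the test chain is trivial too
                scores.append(1.0 if test_chain == {anaphor} else 0.0)
            else:
                # c_i = m / n: fraction of the n gold partners that sit in
                # the test chain of the current anaphor
                scores.append(len(partners & test_chain) / len(partners))
    return sum(scores) / len(scores)   # SR = (sum of c_i) / N
```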

  23. Evaluation of the AR-engine working on coreferences. [Diagram: the AR-engine run on re-gold; re-gold-coref-test scored against re-gold-coref-gold.] Results: Orwell: SR = 0.66, MUC F = 0.72; PTB: SR = 0.69, MUC F = 0.75.

  24. Evaluation of the whole system
  • Possibly different sets of markables, identified on the "identity of head" criterion, as found in gold and test, with possibly different spans
  • same global formula, but the contribution of each markable is weighted by the mutual overlap score, showing the overlap of the test markable against the gold one (diagram: gold span of length a, overlapping portion of length b, mosi = b/a)

  25. Evaluation of the whole system (see the sketch below)
  • Possibly different sets of markables, identified on the "identity of head" criterion, as found in gold and test, with possibly different spans
  • same global formula, but the contribution of each markable is weighted by the mutual overlap score
  Diagram example: three gold anaphors with values 1, 1, 1; test mos values 0.7, 0, 0.5; ci = (0.7 + 0.5) / 3 = 0.4. Globally: R = (Σi ci) / Ng.
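A sketch of the mutual overlap score that weights each markable's contribution in the whole-system setting. Half-open (start, end) token spans are an assumed representation.

```python
def mos(gold_span, test_span):
    """Overlap of the test span with the gold span, relative to gold length."""
    overlap = min(gold_span[1], test_span[1]) - max(gold_span[0], test_span[0])
    return max(overlap, 0) / (gold_span[1] - gold_span[0])

# e.g. mos((0, 10), (3, 10)) == 0.7; a missed markable contributes 0
```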

  26. Evaluation of the whole system
  • Possibly different sets of markables, identified on the "identity of head" criterion, as found in gold and test, with possibly different spans
  • "misses" (failures to find certain markables) lower R
  • "false alarms" (markables erroneously introduced in the test) lower P
  [Diagram: a gold markable with no test counterpart (miss) and a test markable with no gold counterpart (false alarm)]

  27. Evaluation of the whole system. [Diagram: the full pipeline; re-test-coref-test scored against re-gold-coref-gold.] Results: Orwell: SR = 0.55; PTB: SR = 0.61.

  28. Commentaries
  • The RE-extractor module gives better results on PTB than on Orwell:
    • human syntactic annotation versus automatic FDG structure detection
  • The AR module gives slightly better results on PTB than on Orwell:
    • news (finance) versus belles-lettres
    • heads: in PTB extracted by rules relying on the human syntactic annotation; in Orwell extracted by rules relying on the FDG output
  • Difficult to compare with other approaches/authors:
    • apparently our results sit in the upper range
    • BUT: not the same corpus, not the same evaluation metric

  29. Transferring Coreference Chains through Word Alignment. In collaboration with Oana Postolache (oana@coli.uni-saarland.de, University of Saarland, Germany) and Constantin Orăsan (C.Orasan@wlv.ac.uk, University of Wolverhampton, United Kingdom). LREC'06.

  30. The goal
  • Automatic annotation of coreference chains for languages with sparser resources (Romanian)
  • Difficulties, illustrated by the example:
    En: John broke his arm recently. It hurts badly.
    Ro: Ion şi-a rupt braţul recent. Îl doare rău.

  31. English-Romanian parallel corpus
  • George Orwell's novel "1984"
  • 6,411 sentences
  • the English version is in the process of being manually annotated with coreference chains (half of the corpus is done at present)
  • Experimental data: three parts from the first chapter
    • ~13K words
    • 638 sentences
    • the Romanian version manually annotated for evaluation purposes

  32. Coreference information annotated
  • Conformant with MUC-7 and ACE 2003
  • Referential expressions are:
    • noun phrases: definite, indefinite & undetermined
    • proper names
    • pronouns and wh-pronouns
    • numerals
  • The REs include only restrictive clauses
  • Each term of an apposition is taken separately
  • Conjoined expressions are taken individually
  • Noun premodifiers are not marked

  33. Experiment: automatic word alignment
  • We used the Romanian-English aligner COWAL (Tufiş et al., 2006)
  • Performance: 83.30% F-measure
  • The first-ranked system out of 37 at the ACL'05 shared task on word alignment

  34. Experiment: extraction of the Romanian REs (see the sketch below)
  • For an English RE with words e1, e2, …, en, we extract the Romanian set of words r1, r2, …, rm, in surface order
  • Heads are transferred through the alignment from English to Romanian (1:n)
  • We take the Romanian RE to be the span of words between r1 and rm
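A sketch of this projection step, assuming the word alignment is given as a set of (English index, Romanian index) pairs; the function and variable names are illustrative, not the authors' code.

```python
def project_re(english_indices, alignment):
    """Map an English RE (a set of word indices) to a Romanian span.

    alignment: set of (en_index, ro_index) pairs produced by the aligner.
    Returns the (r1, rm) span boundaries, or None when no Romanian word is
    aligned to the English RE (situation 4 on the next slide).
    """
    romanian = sorted({ro for en, ro in alignment if en in english_indices})
    if not romanian:
        return None
    return romanian[0], romanian[-1]   # span of words between r1 and rm
```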

  35. Experiment 1: extraction of the Romanian REs
  Four situations:
  1. An English RE has a corresponding Romanian RE with ONE head.
  2. An English RE has a corresponding Romanian RE with MORE THAN ONE head.
  3. An English RE has a corresponding Romanian RE with NO head.
  4. An English RE has NO corresponding Romanian RE.
  Only REs conforming to situations 1 and 2 are considered. The head of the Romanian RE is taken as the leftmost head whose POS is noun, pronoun, or numeral.

  36. Experiment 2: coreference chain transfer
  • As the English REs are clustered in chains referring to the same entity, and we have the corresponding Romanian REs, we simply "import" the clustering.
  • As not all English REs have a corresponding Romanian RE, the number of clusters may differ between English and Romanian.
  • There are also differences between the lengths of corresponding clusters.

  37. Evaluation
  • The transferred data (system) is compared against gold-standard data (manually annotated) for Romanian.

  38. Evaluation of the RE heads • We only consider the heads of the system REs and the heads of the gold standard REs.

  39. Evaluation of the RE spans (1/2)
  • All REs. The overlap between a system RE and a gold-standard RE (implemented below):
  Overlap = 2 × #(wordsSystemRE ∩ wordsGoldRE) / (#wordsSystemRE + #wordsGoldRE)
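The same overlap measure as a runnable function (a Dice coefficient over the two word sets); representing REs as sets of word tokens is an assumption for illustration.

```python
def span_overlap(system_words, gold_words):
    """Dice overlap between a system RE and a gold RE, both as word sets."""
    system, gold = set(system_words), set(gold_words)
    return 2 * len(system & gold) / (len(system) + len(gold))
```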

  40. Evaluation of the RE spans (2/2)
  • The previous numbers also reflect the penalties for missing certain REs in the system, or for having wrong REs (errors also contained in the heads evaluation).
  • Only correct system REs: system REs with a correct head, scored against the corresponding gold REs.

  41. Evaluation of coreference chains
  • All system REs against the gold REs
  • The correct system REs against the gold REs

  42. Error analysis (1/3): incorrect detection of Romanian REs
  • Wrong alignment
  • English adjectives/adverbs/verbs translated into Romanian by nouns
    En: naturally sanguine face
    Ro: faţă sangvină de la natură (face sanguine from the nature)
  • Choices of the Romanian translator
    En: The actual writing would be easy
    Ro: Scrisul în sine era o treabă uşoară (The writing itself was an easy job)
  • English noun premodifiers translated into Romanian as prepositional-phrase postmodifiers or possessives
    En: a forced labour camp
    Ro: un lagăr de muncă silnică (a camp of forced labour)

  43. Error analysis (2/3): errors in the span overlap
  • Wrong alignment
  • Triggered by the choice of translation:
    En: [Someone with a comb and a piece of toilet paper] was trying to keep tune with the music.
    Ro: [Cineva se străduia, cu un pieptene şi o bucată de hârtie igienică], să ţină isonul muzicii. ([Someone was trying, with a comb and a piece of toilet paper], to keep tune with the music.)

  44. Error analysis (3/3): incorrect detection of coreference chains
  • Errors due to translation choice, e.g. a predicative noun - subject relationship:
    En: The sky was a harsh blue.
    Ro: Cerul era de un albastru strident. (The sky was of a harsh blue.)

  45. Conclusions
  • What and why?
    • An automatic method for projecting coreference chains in parallel corpora
    • To augment the scarce resources of coreference information
    • A preprocessing step prior to manual correction in the annotation effort
  • How good?
    • References: high precision (> 95%) but lower recall (~70%)
    • Coreference chains: relatively high F-measure (> 90%) for correct REs

  46. Our anaphoric… references
  • Cristea, D., Dima, G.E. (2001): An integrating framework for anaphora resolution. In Information Science and Technology, Romanian Academy Publishing House, Bucharest, vol. 4, no. 3-4, pp. 273-291.
  • Cristea, D., Postolache, O.-D., Dima, G.E., Barbu, C. (2002): AR-Engine - a framework for unrestricted co-reference resolution. In Proceedings of the Third International Conference on Language Resources and Evaluation, LREC-2002, Las Palmas, Spain.
  • Cristea, D., Dima, G.E., Postolache, O.-D., Mitkov, R. (2002): Handling complex anaphora resolution cases. In Proceedings of the Discourse Anaphora and Anaphor Resolution Colloquium, Lisbon, September 18-20, 2002.
  • Postolache, O., Cristea, D. (2004): Designing Test-beds for General Anaphora Resolution. In Proceedings of the Discourse Anaphora and Anaphor Resolution Colloquium - DAARC, St. Miguel, Portugal.
  • Cristea, D., Postolache, O.-D. (2005): How to deal with wicked anaphora. In António Branco, Tony McEnery and Ruslan Mitkov (eds.): Anaphora Processing: Linguistic, Cognitive and Computational Modelling, Current Issues in Linguistic Theory, John Benjamins Publishing, ISBN 90-272-4777-3 (Eur) / 1-58811-621-2 (USA).
  • Postolache, O., Cristea, D., Orasan, C. (2006): Transferring Coreference Chains through Word Alignment. In Proceedings of LREC-2006, Genoa, May 2006.

  47. Contest on AR @ 6th DAARC, March 2007, Lagos, Portugal
  • On English only, 4 tracks:
    1. with markables given, resolve only pronouns
    2. with markables given, resolve all anaphors
    3. no markables given, resolve only pronouns
    4. no markables given, resolve all anaphors
  Call to be issued soon…
