National University of Singapore at the TREC-13 Question Answering Main Task
Hang Cui, Keya Li, Renxu Sun, Tat-Seng Chua, Min-Yen Kan
{cuihang, likeya, sunrenxu, chuats, kanmy}@comp.nus.edu.sg
System Architecture
• Question Analysis
• Topic Analysis and Document Retrieval
• Passage Retrieval Using Query Expansion with Google Snippets
• Answer Extraction Using Approximate Dependency Relation Matching
• Definition Generation with Soft Patterns
What’s New This Year
• Approximate matching of grammatical dependency relations for answer extraction
• Soft matching patterns in identifying definition sentences
  • See [Cui et al., 2004a] and [Cui et al., 2004b]
• Exploiting definitions to answer factoid and list questions
Outline
• System architecture
• New features in the TREC-13 QA main task
  • Approximate dependency relation matching for answer extraction
  • Soft matching patterns for definition generation
  • Definition sentences in answering topically-related factoid/list questions
• Conclusion
Dependency Relation Matching in QA
• Why consider dependency relations?
  • An upper bound of 70% for answer extraction (Light et al., 2001)
  • Many NEs of the same type often appear close to each other
  • Some questions do not have NE-type targets
    • E.g., what does AARP stand for?
• Tried before
  • The PIQASso and MIT systems have applied dependency relations in QA
  • However, they suffered poor performance due to low recall
  • They used exact matching of relations to extract answers directly
Extracting Dependency Relation Triples
• Minipar-based dependency parsing (Lin, 1998)
• Relation triple: two anchor words and their relationship
  • E.g., <“desk”, complement, “on”> for “on the desk”
• Relation path: the path of relations between two words
  • E.g., <“desk”, mod, complement, “floor”> for “on the desk at the fourth floor”
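The triple and path representations can be sketched directly on top of a dependency parser. This is a minimal sketch, not the authors' code: Minipar is assumed unavailable, so spaCy stands in for it (its relation labels differ from Minipar's "mod"/"pcomp-n" vocabulary), and the helper names are illustrative assumptions.

import spacy

nlp = spacy.load("en_core_web_sm")

def relation_triples(doc):
    """<head word, relation, dependent word> for every dependency edge."""
    return [(t.head.text, t.dep_, t.text) for t in doc if t.head.i != t.i]

def relation_path(doc, start, end):
    """List of dependency labels on the tree path from `start` to `end`."""
    def chain(tok):                        # token, its head, ... up to the root
        path = [tok]
        while path[-1].head.i != path[-1].i:
            path.append(path[-1].head)
        return path

    up, down = chain(start), chain(end)
    down_pos = {t.i: k for k, t in enumerate(down)}
    k_up = next(k for k, t in enumerate(up) if t.i in down_pos)   # lowest common ancestor
    labels = [t.dep_ for t in up[:k_up]]                          # start up to the ancestor
    labels += [t.dep_ for t in reversed(down[:down_pos[up[k_up].i]])]  # ancestor down to end
    return labels

doc = nlp("Benedict Arnold's plot to surrender West Point to the British")
print(relation_triples(doc)[:3])
print(relation_path(doc, doc[1], doc[7]))   # path between "Arnold" and "Point"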
Examples of Relation Triples
Q: What American revolutionary general turned over West Point to the British?
  q1) General [sub, obj] West Point
  q2) West Point [mod, pcomp-n] British
A: “… Benedict Arnold’s plot to surrender West Point to the British …”
  s1) Benedict Arnold [poss, s, sobj] West Point
  s2) West Point [mod, pcomp-n] British
• The question path q1 and the sentence path s1 connect the same anchor words through different relations, so in most cases correct answers cannot be extracted by exact matching of relations.
Learning Relation Similarity
• We need a measure of the similarity between two different relation paths
• We adopt a statistical method to learn the similarity from past QA pairs
• Training data preparation
  • Around 1,000 factoid question-answer pairs from the past two years’ TREC QA tasks
  • Extract all relation paths between all non-trivial words: 2,557 path pairs
  • Align the paths according to identical anchor nodes
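A counting sketch of the training step (an assumption for illustration, not the released code): for each training QA pair, the question path and answer path that share an anchor word are aligned, and the relations that co-occur in the aligned paths are counted. The alignment criterion here keys on the shared question term; the paper's exact criterion may be stricter.

from collections import Counter

pair_counts, q_counts, a_counts = Counter(), Counter(), Counter()

def add_training_pair(q_paths, a_paths):
    """q_paths: {question term -> relation path from the question target to that term}
       a_paths: {question term -> relation path from the known answer to that term}"""
    for term, q_path in q_paths.items():
        a_path = a_paths.get(term)
        if a_path is None:               # only aligned paths contribute counts
            continue
        for r_q in q_path:
            q_counts[r_q] += 1
            for r_a in a_path:
                pair_counts[(r_q, r_a)] += 1
        for r_a in a_path:
            a_counts[r_a] += 1

# The earlier West Point example as one training observation:
add_training_pair({"West Point": ["sub", "obj"]},          # General -> West Point (question)
                  {"West Point": ["poss", "s", "sobj"]})    # Benedict Arnold -> West Point (answer)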
Using Mutual Information to Measure Relation Co-occurrence
• The similarity of two relations is measured by their co-occurrence in aligned question and answer paths
• A variation of mutual information (MI) is used
  • α: the reciprocal of the summed lengths of the two relation paths
  • It discounts the score of two relations appearing in long paths

Examples of learned relation similarities:

  Relation 1   Relation 2   Similarity
  whn          pcomp-n      0.43
  whn          i            0.42
  i            pcomp-n      0.39
  i            s            0.37
  pred         mod          0.37
  appo         vrel         0.35
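The MI variant itself appeared only as an equation image on the original slide and is not preserved in the text. A plausible reconstruction, consistent with the two bullets above (a co-occurrence ratio scaled by α, the reciprocal of the summed path lengths), is:

\[
\mathrm{sim}(rel_Q, rel_S) \;=\; \alpha \times
  \log\frac{f_{QA}(rel_Q, rel_S)}{f_Q(rel_Q) \times f_A(rel_S)},
\qquad
\alpha \;=\; \frac{1}{|P_Q| + |P_S|}
\]

where f_QA counts how often the two relations co-occur in aligned question/answer paths, f_Q and f_A are their individual counts, and |P_Q|, |P_S| are the lengths of the two paths.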
Measuring Path Similarity – 1
• We adopt two methods to compute path similarity, using different relation alignment strategies
• Option 1: ignore the anchor words of the relations along the given paths – Total Path Matching
  • A path consists of only a list of relations: no relation context (anchor words) is considered
  • Relations are aligned by considering all possible permutations
  • We adopt IBM’s Model 1 for statistical translation to score the alignment
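The Model 1 scoring formula was likewise shown as an image on the slide. One plausible form, in which each relation in the sentence path is treated as a "translation" of the question path's relations, is:

\[
\mathrm{Sim}(P_Q, P_S) \;=\; \prod_{j=1}^{|P_S|} \frac{1}{|P_Q|}
  \sum_{i=1}^{|P_Q|} \mathrm{sim}\bigl(rel_Q^{(i)},\, rel_S^{(j)}\bigr)
\]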
Measuring Path Similarity – 2
• Option 2: consider the anchor words of the relations along a path – Triple Matching
  • A path consists of a list of relations and their anchor words
  • Requires matching of the relation context (anchor words)
  • Only relations whose anchor words match are counted
  • A stricter match in relation alignment
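The two options can be sketched compactly on top of a learned relation-similarity table such as the one shown two slides back. This is an illustrative sketch, not the authors' code: the per-relation normalisation and the requirement that both anchor words match in Triple Matching are assumptions.

rel_sim = {("whn", "pcomp-n"): 0.43, ("whn", "i"): 0.42, ("i", "pcomp-n"): 0.39,
           ("i", "s"): 0.37, ("pred", "mod"): 0.37, ("appo", "vrel"): 0.35}

def sim(r_q, r_s):
    if r_q == r_s:
        return 1.0
    return rel_sim.get((r_q, r_s), rel_sim.get((r_s, r_q), 0.0))

def total_path_match(q_path, s_path):
    """Option 1: relations only; Model 1 style score over all alignments."""
    score = 1.0
    for r_s in s_path:
        score *= sum(sim(r_q, r_s) for r_q in q_path) / len(q_path)
    return score

def triple_match(q_triples, s_triples):
    """Option 2: a relation pair only counts when its anchor words match."""
    score, matched = 1.0, 0
    for (h_q, r_q, d_q) in q_triples:
        for (h_s, r_s, d_s) in s_triples:
            if h_q == h_s and d_q == d_s:
                score *= max(sim(r_q, r_s), 1e-6)   # floor avoids zeroing the product
                matched += 1
    return score if matched else 0.0

print(total_path_match(["whn", "i"], ["pcomp-n", "s"]))      # relations from the toy table
print(triple_match([("turned", "obj", "West Point")],
                   [("surrender", "sobj", "West Point")]))   # anchor words differ, so 0.0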
Selecting Answer Strings Statistically
• Use the top 50 ranked sentences from the passage retrieval module for answer extraction
• For each candidate, evaluate the path similarity between the relation paths linking the question target or answer candidate to the other question terms
• For non-NE questions, all noun/verb phrases are evaluated as candidates
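Putting the pieces together, candidates can be ranked by how well their relation paths to matched question terms mirror the question's own paths. The helpers are the sketches from the earlier slides, and combining per-term scores by summation is an assumption, not the paper's exact formula.

def score_candidate(cand_tok, q_target_tok, term_pairs):
    """term_pairs: (question-term token, matching token in the answer sentence) pairs."""
    score = 0.0
    for q_term, s_term in term_pairs:
        q_path = relation_path(q_term.doc, q_target_tok, q_term)   # target -> term in the question
        s_path = relation_path(s_term.doc, cand_tok, s_term)       # candidate -> term in the sentence
        if q_path and s_path:
            score += total_path_match(q_path, s_path)
    return score

def best_answer(candidates, q_target_tok, term_pairs_for):
    """candidates: NE phrases, or all noun/verb phrases for non-NE questions."""
    return max(candidates,
               key=lambda c: score_candidate(c, q_target_tok, term_pairs_for(c)))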
Discussion of Evaluation Results
• Approximate relation matching outperforms our previous answer extraction technique
  • 22% improvement over all questions
  • 45% improvement on non-NE questions (69 out of 230 questions)
• The two path similarity measures make no obvious difference
  • Total Path Matching performs slightly better than Triple Matching
  • Minipar cannot resolve long-distance dependencies well, which hurts the stricter Triple Matching
Outline
• System architecture
• New experiments in the TREC-13 QA main task
  • Approximate dependency relation matching for answer extraction
  • Soft matching patterns for definition generation
  • Definition sentences in answering topically-related factoid/list questions
• Conclusion
Question Typing and Passage Retrieval for Factoid/List Questions
• Question typing
  • Leverages our past question typology and rule-based question typing module
  • The whole TREC corpus is tagged offline using our rule-based named entity tagger
• Passage retrieval on two sources:
  • The topic-relevant document set from the document retrieval module: runs NUSCHUA1 and NUSCHUA2
  • Definition sentences generated for the topic by the definition generation module: run NUSCHUA3
    • Question-specific wrappers applied to the definitions
Exploiting Definition Sentences to Answer Factoid/List Questions
• Passage retrieval for factoid/list questions is conducted over the definition sentences about the topic
  • Much more efficient due to the smaller search space
• Average accuracy of 0.50, lower than retrieval over all topic-related documents
  • Due to low recall – a cut-off is imposed when selecting definition sentences (a naïve use of definitions)
  • Some sentences needed to answer factoid/list questions are not definition sentences
Exploiting Definitions from External Knowledge
• Pre-compiled wrappers extract specific fields of information for list questions
  • Works, product names and person titles
  • Applied to both generated definition sentences and existing definitions, with cross validation
• Achieves an F-measure of 0.81 for the 8 list questions about works
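A hypothetical sketch of a wrapper in the above sense: a hand-written pattern that pulls one field (here, titles of works) out of definition-style sentences. The pattern, field name, and example sentences are illustrative assumptions, not the actual NUS wrappers.

import re

WORKS_PATTERN = re.compile(
    r'(?:wrote|authored|directed|composed|recorded)\s+"?([A-Z][^",.]+)"?')

def extract_works(definition_sentences):
    """Collect candidate titles of works mentioned in definition sentences."""
    works = set()
    for sent in definition_sentences:
        works.update(m.group(1).strip() for m in WORKS_PATTERN.finditer(sent))
    return works

# Hypothetical definition sentences for a person topic:
print(extract_works(['The director directed "A Sample Film" in 1998.',
                     'He also wrote Another Sample Title.']))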
Outline
• System architecture
• New experiments in the TREC-13 QA main task
  • Approximate dependency relation matching for answer extraction
  • Soft matching patterns for definition generation
  • Definition sentences in answering topically-related factoid/list questions
• Conclusion
Conclusion
• Approximate relation matching for answer extraction
  • We still have a hard time with difficult questions
  • Dependency relation alignment remains a problem – words often cannot be matched due to linguistic variations
  • Semantic matching of words/phrases is needed alongside relation matching
• More effective use of topic-related sentences is needed for answering factoid/list questions
Q & A
Thanks!
A Question Example
• Topic #14: Horus
• Q1: Horus is the god of what?
  • “Osiris, the god of the underworld, his wife, Isis, the goddess of fertility, and their son, Horus, were worshiped by ancient Egyptians.”
  • “The mummified hawk probably was dedicated to one of several gods associated with falcons, such as the sky god Horus, the war god Montu and the sun god Re.”
  • “The stolen pieces included stones from the entrances of tombs and a statue of the god Horus, who was half-man, half-falcon.”
• No explicit question target
• Relying on keyword matching or density-based answer extraction may lead to a wrong answer