
Automatic Set Expansion for List Question Answering

Richard C. Wang, Nico Schlaefer, William W. Cohen, and Eric Nyberg
Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213 USA


Presentation Transcript


1. Automatic Set Expansion for List Question Answering
Richard C. Wang, Nico Schlaefer, William W. Cohen, and Eric Nyberg
Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213 USA

2. Task
• Automatically improve the answers generated by Question Answering systems for list questions, using a Set Expansion system.
• Example: "Name cities that have Starbucks." (The expanded answer list is better than the original QA output.)

3. Outline
• Introduction
  • Question Answering
  • Set Expansion
• Proposed Approach
  • Aggressive Fetcher
  • Lenient Extractor
  • Hinted Expander
• Experimental Results
  • QA System: Ephyra
  • Other QA Systems
• Conclusion

4. Question Answering (QA)
• Question Answering task: retrieve answers to natural language questions
• Different question types: factoid, list, definitional, opinion
• Major QA evaluations:
  • Text REtrieval Conference (TREC): English
  • NTCIR: Japanese, Chinese
  • CLEF: European languages

5. Typical QA Pipeline
• Question string: "Who invented the smiley?"
• Question Analysis → analyzed question (answer type: Person; keywords: invented, smiley, ...)
• Query Generation & Search against knowledge sources → search results ("The two original text smileys were invented on September 19, 1982 by Scott E. Fahlman ...")
• Candidate Generation → candidate answers (smileys; September 19, 1982; Scott E. Fahlman)
• Answer Scoring → scored answers
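The four stages on this slide can be sketched as a chain of functions. This is a minimal illustration only: the heuristics below (keyword length cutoff, capitalized-token runs as candidates, alphabetical scoring) are toy stand-ins of my own, not Ephyra's actual components, and `search_fn` stands in for retrieval against real knowledge sources.

```python
def qa_pipeline(question, search_fn):
    """Toy sketch of the slide's four-stage QA pipeline."""
    # 1. Question analysis: derive keywords (a real system also predicts an answer type)
    keywords = [w for w in question.rstrip("?").split() if len(w) > 4]
    # 2. Query generation & search
    snippets = search_fn(keywords)
    # 3. Candidate generation: toy heuristic -- runs of capitalized tokens
    candidates = set()
    for snippet in snippets:
        run = []
        for tok in snippet.split() + [""]:  # sentinel flushes the final run
            if tok and tok[0].isupper():
                run.append(tok)
            else:
                if len(run) > 1:
                    candidates.add(" ".join(run))
                run = []
    # 4. Answer scoring: here, simply rank alphabetically
    return sorted(candidates)

snippet = "The smiley was invented by Scott E. Fahlman at Carnegie Mellon"
print(qa_pipeline("Who invented the smiley?", lambda kw: [snippet]))
# ['Carnegie Mellon', 'Scott E. Fahlman']
```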

6. QA System: Ephyra (Schlaefer et al., TREC 2007)
• History:
  • Developed at the University of Karlsruhe, Germany, and Carnegie Mellon University, USA
  • TREC participations in 2006 (13th out of 27 teams) and 2007 (7th out of 21 teams)
  • Released as open source in 2008
• Different candidate generators: answer type classification, regular expression matching, semantic parsing
• Available for download at: http://www.ephyra.info/


8. Set Expansion (SE)
• For example:
  • Given a query: {"survivor", "amazing race"}
  • Answer: {"american idol", "big brother", ...}
• More formally:
  • Given a small number of seeds x1, x2, ..., xk, where each xi ∈ St (the target set)
  • Answer is a listing of other probable elements e1, e2, ..., en, where each ei ∈ St
• A well-known example of a web-based set expansion system is Google Sets™: http://labs.google.com/sets
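The definition above can be made concrete with a toy expander. This sketch (my own illustration, not any real system's algorithm) scores candidate items by how many documents they share with the seeds, a crude stand-in for the ranking methods real SE systems use:

```python
from collections import Counter

def expand(seeds, corpus):
    """Toy set expansion: rank non-seed items by how many
    seed-containing documents they co-occur in."""
    scores = Counter()
    for doc in corpus:                      # each doc is a set of items
        if any(s in doc for s in seeds):    # doc mentions at least one seed
            for item in doc:
                if item not in seeds:
                    scores[item] += 1
    return [item for item, _ in scores.most_common()]

corpus = [
    {"survivor", "amazing race", "american idol"},
    {"survivor", "big brother"},
    {"python", "java"},                     # off-topic doc, never matched
]
print(expand({"survivor", "amazing race"}, corpus))
```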

9. SE System: SEAL (Wang & Cohen, ICDM 2007)
• Features:
  • Independent of human/markup language
    • Supports seeds in English, Chinese, Japanese, Korean, ...
    • Accepts documents in HTML, XML, SGML, TeX, WikiML, ...
  • Does not require pre-annotated training data
  • Utilizes a readily available corpus: the World Wide Web
• Based on two research contributions:
  • Automatically constructs wrappers for extracting candidate items
  • Ranks extracted items using a random graph walk
• Try it out for yourself: http://rcwang.com/seal

10. SEAL's SE Pipeline
• Example: seeds {Canon, Nikon, Olympus} expand to {Pentax, Sony, Kodak, Minolta, Panasonic, Casio, Leica, Fuji, Samsung, ...}
• Fetcher: downloads web pages from the Web
• Extractor: learns wrappers from web pages
• Ranker: ranks entities extracted by wrappers

11. Challenge
• SE systems require relevant (non-noisy) seeds, but answers produced by QA systems are often noisy.
• How can we integrate these two systems?
• We propose three extensions to SEAL:
  • Aggressive Fetcher
  • Lenient Extractor
  • Hinted Expander


13. Original Fetcher
Procedure:
• Compose a search query by concatenating all seeds
• Use Google to request the top 100 web pages
• Fetch the web pages and send them to the Extractor

14. Proposed Fetcher: Aggressive Fetcher (AF)
• Sends a two-seed query for every possible pair of seeds to the search engines
• More likely to compose queries containing only relevant seeds
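The pair-generation step is straightforward. A minimal sketch (function name is mine, not SEAL's): with all-pairs queries, a single noisy seed contaminates only the queries it appears in, rather than the one concatenated query the original fetcher would issue.

```python
from itertools import combinations

def pairwise_queries(seeds):
    """Aggressive Fetcher sketch: one quoted two-seed query per seed pair."""
    return [f'"{a}" "{b}"' for a, b in combinations(seeds, 2)]

# With one noisy seed among three, 1 of the 3 queries is still fully clean,
# whereas a single all-seed query would always contain the noise.
print(pairwise_queries(["boston", "seattle", "quxnoise"]))
# ['"boston" "seattle"', '"boston" "quxnoise"', '"seattle" "quxnoise"']
```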


16. Original Extractor
• A wrapper is a pair of left (L) and right (R) context strings
  • Maximally long contextual strings that bracket at least one instance of every seed
  • Extracts strings between L and R
• Learns wrappers from web pages and seeds on the fly
  • Utilizes semi-structured documents
• Wrappers are defined at the character level
  • No tokenization required (language-independent)
  • However, very page-specific (page-dependent)

17. (figure-only slide; image not included in transcript)

18. Proposed Extractor: Lenient Extractor (LE)
• Maximally long contextual strings that bracket at least one instance of a minimum of two seeds
• More likely to find useful contexts that bracket only relevant seeds
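The character-level wrapper idea can be sketched as follows. This is a simplified illustration of my own (it considers only the first occurrence of each seed and exactly two seeds, the lenient minimum); SEAL's actual wrapper induction is more general:

```python
def common_suffix(a, b):
    """Longest shared suffix of two strings, compared character by character."""
    i = 0
    while i < min(len(a), len(b)) and a[-1 - i] == b[-1 - i]:
        i += 1
    return a[len(a) - i:] if i else ""

def common_prefix(a, b):
    """Longest shared prefix of two strings."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

def learn_wrapper(page, seed1, seed2):
    """Lenient Extractor sketch: L is the longest string ending where both
    seeds begin; R is the longest string starting where both seeds end."""
    i, j = page.find(seed1), page.find(seed2)
    if i < 0 or j < 0:
        return None
    left = common_suffix(page[:i], page[:j])
    right = common_prefix(page[i + len(seed1):], page[j + len(seed2):])
    return left, right

def extract(page, left, right):
    """Return every string bracketed by the learned (left, right) pair."""
    items, pos = [], 0
    while True:
        s = page.find(left, pos)
        if s < 0:
            break
        s += len(left)
        e = page.find(right, s)
        if e < 0:
            break
        items.append(page[s:e])
        pos = e
    return items

page = "<li>canon</li><li>nikon</li><li>pentax</li><li>"
left, right = learn_wrapper(page, "canon", "nikon")
print((left, right))                 # learned from only two seeds
print(extract(page, left, right))    # yet extracts the unseen item too
```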


20. Hinted Expander (HE)
• Utilizes contexts in the question to constrain SEAL's search space on the Web
• Extracts up to three keywords from the question using Ephyra's keyword extractor
• Appends the keywords to the search query
• Example: "Name cities that have Starbucks."
• More likely to find documents containing the desired set of answers
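Query construction for the Hinted Expander reduces to string assembly. A minimal sketch, with the function name and query format being my assumptions rather than SEAL's exact syntax:

```python
def hinted_query(seed_pair, question_keywords, max_hints=3):
    """Hinted Expander sketch: append up to three question keywords
    to a two-seed query to bias retrieval toward on-topic pages."""
    hints = question_keywords[:max_hints]
    seeds = " ".join(f'"{s}"' for s in seed_pair)
    return f"{seeds} {' '.join(hints)}"

# For "Name cities that have Starbucks.", hint keywords might be:
print(hinted_query(("london", "tokyo"), ["cities", "Starbucks"]))
# '"london" "tokyo" cities Starbucks'
```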


22. Experiment #1: Ephyra
• Evaluate on the TREC 13, 14, and 15 datasets (55, 93, and 89 list questions, respectively)
• Use SEAL to expand the top four answers from Ephyra
  • Outputs a list of answers ranked by confidence scores
• For each dataset, we report:
  • Mean Average Precision (MAP): the mean of the average precision of each ranked list
  • Average F1 with optimal per-question threshold: for each question, cut off the list at the threshold that maximizes the F1 score for that particular question
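For readers unfamiliar with the MAP metric, it can be computed per question as below (a standard definition, not code from the paper); MAP is then the mean of this value over all questions:

```python
def average_precision(ranked, gold):
    """Precision averaged over the rank positions of correct answers,
    normalized by the number of gold answers."""
    hits, total = 0, 0.0
    for rank, answer in enumerate(ranked, start=1):
        if answer in gold:
            hits += 1
            total += hits / rank   # precision at this correct answer's rank
    return total / len(gold) if gold else 0.0

# Correct answers at ranks 1 and 3: (1/1 + 2/3) / 2 = 5/6
print(average_precision(["seattle", "paris", "london"], {"seattle", "london"}))
```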

23. Experiment #1: Ephyra (results figure)

24. Experiment #2: Ephyra
• In practice, thresholds are unknown
• For each dataset, perform 5-fold cross-validation:
  • Train: find one optimal threshold on four folds
  • Test: use that threshold to evaluate the fifth fold
• Introduce a fourth dataset, All: the union of TREC 13, 14, and 15
• Introduce another system, Hybrid: the intersection of the original answers from Ephyra and the expanded answers from SEAL
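The threshold-training step inside each fold amounts to a small grid search. A sketch under my own assumptions about data layout (each training question as a scored answer list plus a gold set); the slide does not specify the candidate threshold grid:

```python
def f1(predicted, gold):
    """Standard F1 between a predicted answer list and a gold set."""
    tp = len(set(predicted) & set(gold))
    if tp == 0 or not predicted or not gold:
        return 0.0
    p, r = tp / len(predicted), tp / len(gold)
    return 2 * p * r / (p + r)

def best_threshold(questions, thresholds):
    """Pick the single confidence cutoff maximizing mean F1 on the
    training folds; questions = [(scored_answers, gold_set), ...]."""
    def mean_f1(t):
        return sum(
            f1([a for a, score in answers if score >= t], gold)
            for answers, gold in questions
        ) / len(questions)
    return max(thresholds, key=mean_f1)

train = [
    ([("seattle", 0.9), ("paris", 0.4), ("london", 0.8)], {"seattle", "london"}),
]
# 0.5 cuts off the wrong answer "paris" while keeping both correct ones
print(best_threshold(train, [0.3, 0.5, 0.7]))  # 0.5
```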

25. Experiment #2: Ephyra (results figure)


27. Experiment: Other QA Systems
• The top five QA systems on list questions in the TREC 15 evaluation:
  • Language Computer Corporation (lccPA06)
  • The Chinese University of Hong Kong (cuhkqaepisto)
  • National University of Singapore (NUSCHUAQA1)
  • Fudan University (FDUQAT15A)
  • National Security Agency (QACTIS06C)
• For each QA system, train thresholds for SEAL and Hybrid on the union of TREC 13 and 14
• Expand the top four answers from each QA system on TREC 15, and apply the trained threshold

28. Experiment: Top QA Systems (results figure)

29. Conclusion
• A feasible method for integrating an SE approach into any QA system
• The proposed SE approach is effective: it improves QA systems on list questions using only a few top answers as seeds
• The proposed hybrid system is effective: it improves Ephyra and most of the top five QA systems

30. Thank You!
