Automatic Acquisition of Synonyms Using the Web as a Corpus
Svetlin Nakov, Sofia University "St. Kliment Ohridski"
nakov@fmi-uni-sofia.bg
3rd Annual South East European Doctoral Student Conference (DSC2008): Infusing Knowledge and Research in South East Europe
DSC 2008 – 26-27 June 2008, Thessaloniki, Greece
Introduction
• We want to automatically extract all pairs of synonyms from a given text
• Our goal:
  • Design an algorithm that can distinguish between synonyms and non-synonyms
• Our approach:
  • Measure semantic similarity using the Web as a corpus
  • Synonyms are expected to have higher semantic similarity than non-synonyms
The Paper in One Slide
• Measuring semantic similarity
  • Analyze the words' local contexts
  • Use the Web as a corpus
  • Similar contexts → similar words
  • TF.IDF weighting & reverse context lookup
• Evaluation
  • 94 words (Russian fine arts terminology)
  • 50 synonym pairs to be found
  • 11pt average precision: 63.16%
Contextual Web Similarity
• What is a local context?
  • A few words before and after the target word
  • The words in the local context of a given word are semantically related to it
• Need to exclude the stop words: prepositions, pronouns, conjunctions, etc.
  • Stop words appear in all contexts
• A sufficiently big corpus is needed
• Example snippet for the target word "flowers" (context extraction is sketched below):
  Same day delivery of fresh flowers, roses, and unique gift baskets from our online boutique. Flower delivery online by local florists for birthday flowers.
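A minimal sketch of this context-extraction step, assuming a symmetric window of a few words and a small illustrative stop-word list (not the authors' actual implementation; their experiments use a list of 507 Russian stop words and lemmatization, which are omitted here):

    import re

    # Illustrative English stop words only, for the example snippet above.
    STOP_WORDS = {"of", "and", "for", "by", "from", "our", "the"}

    def local_context(snippet, target, window=3):
        """Collect non-stop words within `window` positions of each occurrence of `target`."""
        words = re.findall(r"[a-z]+", snippet.lower())
        context = []
        for i, word in enumerate(words):
            if word == target:
                neighbours = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
                context.extend(w for w in neighbours if w != target and w not in STOP_WORDS)
        return context

    snippet = ("Same day delivery of fresh flowers, roses, and unique gift baskets "
               "from our online boutique. Flower delivery online by local florists "
               "for birthday flowers.")
    # ['delivery', 'fresh', 'roses', 'unique', 'florists', 'birthday']
    # (no lemmatization here, so the singular "flower" is not matched)
    print(local_context(snippet, "flowers"))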
Contextual Web Similarity
• Web as a corpus
  • The Web can be used as a corpus to extract the local context of a given word
  • The Web is the largest possible corpus
  • It contains large corpora in any language
  • Searching for a word in Google can return up to 1 000 text snippets
  • The target word comes with its local context: a few words before and after it
  • The target language can be specified
Contextual Web Similarity
• Web as a corpus
  • Example: Google query for "flower" (screenshot of the returned result snippets)
Contextual Web Similarity
• Measuring semantic similarity
  • For two given words, their local contexts are extracted from the Web
  • A local context is a set of words and their frequencies
  • Semantic similarity is measured as the similarity between these local contexts
  • Local contexts are represented as frequency vectors over a given set of words
  • The cosine between the frequency vectors in Euclidean space is calculated (see the sketch below)
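A minimal sketch of the similarity computation, assuming the two local contexts are plain lists of context words such as those produced by the hypothetical local_context() helper above (not the authors' code):

    import math
    from collections import Counter

    def cosine_similarity(context1, context2):
        """Cosine between the frequency vectors of two local contexts."""
        v1, v2 = Counter(context1), Counter(context2)
        dot = sum(v1[w] * v2[w] for w in set(v1) | set(v2))
        norm1 = math.sqrt(sum(f * f for f in v1.values()))
        norm2 = math.sqrt(sum(f * f for f in v2.values()))
        return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

    # Contexts gathered from Web snippets; synonyms are expected to score
    # noticeably higher than unrelated words.
    print(cosine_similarity(["delivery", "roses", "gift"],
                            ["delivery", "florists", "roses"]))  # ~0.67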
Contextual Web Similarity
• Example of context word frequencies (tables of the most frequent context words for "flower" and for "computer")
Contextual Web Similarity
• Example of frequency vectors v1 ("flower") and v2 ("computer")
• Similarity = cosine(v1, v2) (a worked example follows)
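For illustration only (the counts below are invented, not the ones from the slide's table), a cosine computation over a shared context-word list (delivery, online, software, screen):

    v1 ("flower")   = (12, 7, 0, 1)
    v2 ("computer") = (0, 5, 9, 8)
    cosine(v1, v2) = (12·0 + 7·5 + 0·9 + 1·8) / (√(144+49+0+1) · √(0+25+81+64))
                   = 43 / (13.93 · 13.04) ≈ 0.24

The low value reflects that the two contexts share few frequent words, as expected for non-synonyms.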
TF.IDF Weighting
• TF.IDF (term frequency times inverse document frequency)
  • A statistical measure used in information retrieval
  • Shows how important a certain word is for a given document within a set of documents
  • Increases proportionally with the number of the word's occurrences in the document
  • Decreases proportionally with the total number of documents containing the word (a sketch follows below)
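A sketch of one common tf.idf variant; the slides do not spell out the exact weighting formula used, so the specific form below is an assumption:

    import math

    def tf_idf(term_count, doc_length, num_docs, docs_with_term):
        """Raw term frequency scaled by log inverse document frequency."""
        tf = term_count / doc_length
        idf = math.log(num_docs / (1 + docs_with_term))  # +1 avoids division by zero
        return tf * idf

    # A word occurring 5 times in a 100-word context, but present in only 10 of
    # 1 000 documents, gets a much higher weight than one found almost everywhere.
    print(tf_idf(5, 100, 1000, 10))   # ~0.23
    print(tf_idf(5, 100, 1000, 900))  # ~0.005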
Reverse Context Lookup
• The local context extracted from the Web can contain arbitrary "parasite" words like "online", "home", "search", "click", etc.
  • Internet terms appear in almost any Web page
  • Such words are not likely to be truly associated with the target word
• Example (for the word "flowers"):
  • "send flowers online", "flowers here", "order flowers here"
  • Will the word "flowers" appear in the local contexts of "send", "online" and "here"?
Reverse Context Lookup
• If two words are semantically related, then both of them should appear in the local context of the other
• Let #(x, y) = the number of occurrences of x in the local context of y
• For any word w and a word wc from its local context, we define their strength of semantic association p(w, wc) as follows:
  • p(w, wc) = min{ #(w, wc), #(wc, w) }
• We use p(w, wc) as the vector coordinates
• We introduce a minimal occurrence threshold (e.g. 5) to filter out words that appear just by chance (see the sketch below)
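A minimal sketch of this filtering step, assuming the directed context counts have already been collected into a nested dictionary (a hypothetical data layout, not the authors' code):

    def reverse_context_weights(word, context_counts, threshold=5):
        """context_counts[x][y] = number of times y occurs in the local context of x.
        Returns the p(w, wc) weights for `word`, dropping weak, likely accidental
        co-occurrences that fall below the threshold."""
        weights = {}
        for wc, forward in context_counts.get(word, {}).items():
            reverse = context_counts.get(wc, {}).get(word, 0)
            p = min(forward, reverse)  # p(w, wc) = min{ #(w, wc), #(wc, w) }
            if p >= threshold:
                weights[wc] = p
        return weights

A parasite word like "online" may occur often in the context of "flowers", but "flowers" rarely occurs in the context of "online", so the minimum stays low and the pair is filtered out.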
Data Set
• We use a list of 94 Russian words:
  • Terms extracted from texts on the subject of fine arts
  • Limited to nouns only
• The data set:
  • There are 50 synonym pairs among these words
  • We expect our algorithms to find them
Experiments
• We tested a few modifications of our contextual Web similarity algorithm:
  • The basic algorithm (without modifications)
  • TF.IDF weighting
  • Reverse context lookup with different frequency thresholds
Experiments
• RAND – random ordering of all the pairs
• SIM – the basic algorithm for extraction of semantic similarity from the Web
  • Context size of 3 words
  • Without analyzing the reverse context
  • With lemmatization
• SIM+TFIDF – modification of the SIM algorithm with TF.IDF weighting
• REV2, REV3, REV4, REV5, REV6, REV7 – the SIM algorithm + "reverse context lookup" with frequency thresholds of 2, 3, 4, 5, 6 and 7
Resources Used
• We used the following resources:
  • Google Web search engine: extracted the first 1 000 results for 82 645 Russian words
  • Russian lemma dictionary: 1 500 000 wordforms and 100 000 lemmata
  • A list of 507 Russian stop words
Evaluation
• Our algorithms arrange all pairs of words according to their semantic similarity
• We expect the 50 synonym pairs to be at the top of the result list
• We count how many synonyms are found in the top N results (e.g. top 5, top 10, etc.)
• We measure precision and recall
• We measure 11pt average precision to evaluate the results (see the sketch below)
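A sketch of these metrics, assuming the ranked candidate pairs and the gold set of 50 synonym pairs are available; the authors' exact evaluation script is not given, so 11pt average precision is computed here in its standard interpolated form:

    def precision_recall_at_n(ranked_pairs, gold_pairs, n):
        """Precision and recall over the top-n ranked candidate pairs."""
        found = sum(1 for pair in ranked_pairs[:n] if pair in gold_pairs)
        return found / n, found / len(gold_pairs)

    def eleven_point_average_precision(ranked_pairs, gold_pairs):
        """Average of interpolated precision at recall levels 0.0, 0.1, ..., 1.0."""
        points, found = [], 0
        for i, pair in enumerate(ranked_pairs, start=1):
            if pair in gold_pairs:
                found += 1
            points.append((found / len(gold_pairs), found / i))  # (recall, precision)
        levels = [i / 10 for i in range(11)]
        return sum(max((p for r, p in points if r >= level), default=0.0)
                   for level in levels) / len(levels)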
SIM Algorithm – Results
• Table: precision and recall obtained by the SIM algorithm
Comparison of the Algorithms
• Table: number of synonyms found in the top results by each algorithm
11pt Average Precision 70,00% 63,16% 58,98% 60,00% 50,00% 40,00% 30,00% 20,00% 10,00% 1,15% n/a n/a n/a n/a n/a n/a 0,00% RAND SIM SIM+TFIDF REV2 REV3 REV4 REV5 REV6 REV7 Comparison of the Algorithms(11pt Average Precision) Comparing RAND, SIM, SIM+TDIDF and REV2 … REV7 DSC 2008 – 26-27 June 2008, Thessaloniki, Greece
Results (Precision-Recall Graph)
• Graph: comparing the recall-precision curves of the evaluated algorithms
Discussion
• Our approach is original because it:
  • Measures semantic similarity automatically
  • Uses the Web as a corpus
  • Does not rely on any preexisting corpora
  • Does not require semantic resources like WordNet and EuroWordNet
  • Works for any language
    • Tested for Bulgarian and Russian
  • Uses reverse context lookup and TF.IDF
    • Significant improvement in quality
Discussion
• Good accuracy, but far from 100%
• Known problems of the proposed algorithms:
  • Semantically related words are not always synonyms
    • red – blue
    • wood – pine
    • apple – computer
  • Similar contexts do not always mean similar words (the distributional hypothesis)
  • The Web as a corpus introduces noise
    • Google returns only the first 1 000 results
Discussion
• Known problems of the proposed algorithms:
  • Google ranks news portals, travel agencies and retail sites higher than books, articles and forum messages
  • The local context always contains noise
  • We work with single words and do not capture phrases
Conclusion and Future Work
• Conclusion
  • Our algorithms can distinguish between synonyms and non-synonyms
  • Accuracy should be improved
• Future Work
  • Additional techniques to distinguish between synonyms and merely semantically related words
  • Improve the semantic similarity measurement algorithm
Questions?
Automatic Acquisition of Synonyms Using the Web as a Corpus
DSC 2008 – 26-27 June 2008, Thessaloniki, Greece