An overview of the technology used in Information Retrieval. Louise Guthrie, University of Sheffield
What is Information Retrieval (IR)? • Retrieval of unstructured data • Most often - Retrieval of Text • Retrieval of Videos • Retrieval of Images
Retrieval of Text Documents • Searching for precedent in legal cases • Searching files on your computer • Searching on the web • Siri
[Figure: the natural-language request "Give me all documents where Enron executives discuss the company stock" is put to an Information Retrieval system, which must search a large collection of documents.]
[Figure: a QUERY goes into the Information Retrieval system, which matches it against a collection of DOCUMENTS and returns the matching ones.]
Concerns of an IR system • How do you represent the text? • How do you represent the query? • How do you decide which documents to return? • How do you find them efficiently? • How do you decide what is presented first to the user? • How do you evaluate the system?
Finding Relevant Documents • All systems want to return documents that satisfy the query • To satisfy a natural language query perfectly requires understanding • Understanding is still a research topic
How is the text represented? • Bag of words approach • Pay no heed to inter-word relations: • syntax, semantics • The bag does still characterise the document • Not perfect, since words are • ambiguous • used in different forms or synonymously
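To make the idea concrete, here is a minimal bag-of-words sketch in Python; the tokenizer (lowercase, split on runs of letters) is an assumption for illustration only:

```python
# A bag of words: word order and syntax are discarded, only counts remain.
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Multiset of lowercased word tokens (simple illustrative tokenizer)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

print(bag_of_words("The forest, the rain, the destruction."))
# Counter({'the': 3, 'forest': 1, 'rain': 1, 'destruction': 1})
```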
Still work to be done in IR. The query “The destruction of the Amazon rain forests” would not lead a system to find an article about “Brazilian jungles being destroyed”
Forms of query/retrieval system • Boolean • Rooted in commercial systems from the 1970s - Spotlight on the Mac - Westlaw, the system used to find legal cases • Ranked retrieval • Long championed by academics
Boolean searching • To find articles about the destruction of the Amazon rain forest • “amazon” & “rain forest*” & (“destroy” | “destruction”) • Breaks the collection into two unordered sets • Documents that match the query • Documents that don’t • User has complete control but… • …not easy to use • Often results in too many or too few results – AND gives too few; OR too many
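As a hedged illustration (the toy documents and tokenizer are invented for the example, and the wildcard and phrase operators of a real Boolean engine are simplified to single-word terms):

```python
# Boolean retrieval splits the collection into matching / non-matching sets.
docs = {
    1: "the amazon rain forest faces destruction",
    2: "amazon reports record book sales",
    3: "rain expected over the forest this week",
}

def terms(text: str) -> set[str]:
    return set(text.split())

# amazon AND rain AND forest AND (destroy OR destruction)
matching = {
    doc_id for doc_id, text in docs.items()
    if {"amazon", "rain", "forest"} <= terms(text)
    and {"destroy", "destruction"} & terms(text)
}
print(matching)  # {1}; everything else falls into the non-matching set
```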
Considerations in matching the query • Should the system match upper and lower case letters – amazon and Amazon? • Should destroying match destroy?
Matching different forms of a word • Matching the query term “forests” • to “forest” and “forested” • Stemmers remove affixes • removal of suffixes - worker • prefixes? - megavolt • infixes? - un-bloody-likely • Stick with suffixes
Plural stemmer • Plurals in English • If word ends in “ies” but not “eies”, “aies” • “ies” -> “y” • if word ends in “es” but not “aes”, “ees”, “oes” • “es” -> “e” • if word ends in “s” but not “us” or “ss” • “s” -> “” • First applicable rule is the one used
Plural stemmer examples Forests - ? Statistics - ? Queries - ? Foes - ? Does - ? Is - ? Plus - ? Plusses - ?
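A minimal Python sketch of the plural stemmer above. One interpretation of "first applicable rule" is assumed here: if a rule's ending matches but an exception blocks it, we fall through to the next rule. The example words show both sensible results and quirks (e.g. "is" becomes "i"):

```python
def plural_stem(word: str) -> str:
    # Rule 1: "ies" -> "y", unless the word ends in "eies" or "aies"
    if word.endswith("ies") and not word.endswith(("eies", "aies")):
        return word[:-3] + "y"
    # Rule 2: "es" -> "e", unless the word ends in "aes", "ees" or "oes"
    if word.endswith("es") and not word.endswith(("aes", "ees", "oes")):
        return word[:-2] + "e"
    # Rule 3: "s" -> "", unless the word ends in "us" or "ss"
    if word.endswith("s") and not word.endswith(("us", "ss")):
        return word[:-1]
    return word

for w in ["forests", "statistics", "queries", "foes", "does", "is", "plus", "plusses"]:
    print(f"{w} -> {plural_stem(w)}")
# forests -> forest, statistics -> statistic, queries -> query, foes -> foe,
# does -> doe, is -> i, plus -> plus, plusses -> pluss
```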
For other endings - When to strip, when to stop? • “ed”, “ing”, “ational”, “ation”, “able”, “ism”, etc, etc. • What about • “bring”, “table”, “prism”, “bed”, “thing”? • Use a dictionary as well? • “Buttered”
Is stemming used? • Research says it is useful • Web search engines hardly use it • Why? • Unexpected results • computer, computation, computing, computational, etc. • Foreign languages?
white sand vs. white sands
http://www.google.com/search?client=safari&rls=en&q=white+sand&ie=UTF-8&oe=UTF-8
http://www.google.com/search?client=safari&rls=en&q="white+sand"&ie=UTF-8&oe=UTF-8
Ranked retrieval • The user’s query is one or more words in natural language • A similarity score is calculated between the query and every document • Sort documents by their score • Present the top-scoring documents to the user
Common techniques • Stop word removal • From fixed list • “destruction amazon rain forests” • Stemming • Match lower and upper case
Stop Word Examples • a, about, above, across, after, again, all, always, am, among, amongst, amount, and, another, are, around, as, at, back, because, become, being, below, beside, besides, between, both, bottom, co, con, could, couldn’t, cry, detail, done
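Putting the common techniques together, a sketch of query preprocessing; the stop list is a tiny illustrative excerpt (plus a couple of obvious additions like "the" and "of"), and the stemming step is reduced to the simplest plural rule:

```python
STOP_WORDS = {"a", "about", "above", "after", "and", "are", "as", "at",
              "of", "the"}

def stem_plural(tok: str) -> str:
    # Minimal stemming: just the "s but not us/ss" rule, for illustration.
    return tok[:-1] if tok.endswith("s") and not tok.endswith(("us", "ss")) else tok

def preprocess(query: str) -> list[str]:
    """Lowercase (so upper/lower case match), drop stop words, stem."""
    return [stem_plural(tok) for tok in query.lower().split()
            if tok not in STOP_WORDS]

print(preprocess("The destruction of the Amazon rain forests"))
# ['destruction', 'amazon', 'rain', 'forest']
```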
Measuring Similarity • In place of understanding, we try to measure the similarity of a document to the query and we return to the user the most similar ones • There are MANY different measures of similarity
How do we assign a score? • A first try - the Jaccard Coefficient • jaccard(A,B) = |A ∩ B| / |A ∪ B| • jaccard(A,A) = 1 • jaccard(A,B) = 0 if A ∩ B = ∅ • Doesn’t consider how many times a word occurs • Doesn’t take into account that rare terms are more informative
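A direct sketch of the coefficient (the toy query and document are invented); because it works on sets alone, both shortcomings above are visible: frequency and rarity play no role:

```python
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|; by convention two empty sets score 1."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

query = {"amazon", "rain", "forest"}
doc = {"amazon", "rain", "forest", "brazil", "river"}
print(jaccard(query, doc))    # 0.6
print(jaccard(query, query))  # 1.0
```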
TF • The more often a term is used in a document • the more likely the document is about that term • Depends on document length • Harman, D. (1992): Ranking algorithms, in Frakes, W. & Baeza-Yates, R. (eds.), Information Retrieval: Data Structures & Algorithms: 363-392 • Watch out for a common mistake: count all occurrences, not just unique terms • Open to spamming
IDF • Some query terms are better than others? • In general, fair to say that… • “amazon” > “forest”, “destruction” > “rain” • Inverse document frequency (idf): idf = log(N / n) • n: Number of documents the term occurs in • N: Number of documents in the collection
The scoring • For each document • Term frequency (tf), normalised by length: tf = t / dl • t: Number of times the term occurs in the document • dl: Length of the document (number of terms)
Very successful • Simple, but effective • Core of most weighting functions • tf (term frequency) • idf (inverse document frequency)
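A hedged sketch of a tf·idf score under the forms given above (tf = t/dl, idf = log(N/n)); real systems use many variants of these weights, so this is one plausible instance, not the definitive one:

```python
import math

def tf_idf(term: str, doc: list[str], collection: list[list[str]]) -> float:
    t = doc.count(term)                      # occurrences in this document
    dl = len(doc)                            # document length in terms
    n = sum(term in d for d in collection)   # documents containing the term
    N = len(collection)                      # documents in the collection
    if t == 0 or n == 0:
        return 0.0
    return (t / dl) * math.log(N / n)

docs = [["amazon", "forest", "forest"],
        ["amazon", "sales"],
        ["rain", "rain", "forest"]]
print(tf_idf("forest", docs[0], docs))  # 2 occurrences, in 2 of 3 documents
print(tf_idf("amazon", docs[0], docs))  # 1 occurrence, in 2 of 3 documents
```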
Robertson’s BM25 • Q is a query containing terms T • The score (reconstructed standard form): score(d, Q) = Σ_{T in Q} w · ((k1 + 1)·tf) / (K + tf) · ((k3 + 1)·qtf) / (k3 + qtf) + k2·|Q|·(avdl − dl) / (avdl + dl), where K = k1·((1 − b) + b·dl / avdl) • w is a form of IDF • k1, b, k2, k3 are parameters • tf is the document term frequency • qtf is the query term frequency • dl is the document length (arbitrary units) • avdl is the average document length
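A sketch of the formula as reconstructed above. Two assumptions: w is taken to be the Robertson/Sparck Jones idf weight, and k1 = 1.2, b = 0.75, k2 = 0, k3 = 7 are typical published values, not values from the slide:

```python
import math

def bm25(query: list[str], doc: list[str], collection: list[list[str]],
         k1=1.2, b=0.75, k2=0.0, k3=7.0) -> float:
    N = len(collection)
    dl = len(doc)
    avdl = sum(len(d) for d in collection) / N
    K = k1 * ((1 - b) + b * dl / avdl)            # length-normalised k1
    score = 0.0
    for term in set(query):
        n = sum(term in d for d in collection)
        if n == 0:
            continue
        w = math.log((N - n + 0.5) / (n + 0.5))   # RSJ-style idf weight
        tf = doc.count(term)
        qtf = query.count(term)
        score += w * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    # document-length correction term; zero when k2 = 0
    return score + k2 * len(query) * (avdl - dl) / (avdl + dl)

docs = [["amazon", "forest", "destruction"], ["amazon", "sales"], ["rain"]]
print(bm25(["forest", "destruction"], docs[0], docs))
```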
Getting the balance • Documents with all the query terms? • Just those with high tf•idf terms? • Just nouns? • Just noun phrases? • With ancestor or children terms?
Other considerations • Should spelling be corrected? • Should we remove punctuation? • Should Feb. 20, 1980 match 2/20/80?
Spamming the tf weight • A white font can be used to spam the tf weight: SEX SEXY MONICA LEWINSKY JENNIFER LOPEZ CLAUDIA SCHIFFER CINDY CRAWFORD JENNIFER ANNISTON GILLIAN ANDERSON MADONNA NIKI TAYLOR ELLE MACPHERSON KATE MOSS CAROL ALT TYRA BANKS FREDERIQUE KATHY IRELAND PAM ANDERSON KAREN MULDER VALERIA MAZZA SHALOM HARLOW AMBER VALLETTA LAETITA CASTA BETTIE PAGE HEIDI KLUM PATRICIA FORD DAISY FUENTES KELLY BROOK (…the same list repeated over and over, invisible to the reader but inflating the term frequencies)
IDF and collection context • IDF is sensitive to the document collection’s content • General newspapers • “amazon” > “forest”, “destruction” > “rain” • Amazon book store press releases • “forest”, “destruction” > “rain” > “amazon”
Models • Mathematically modelling the retrieval process • So as to better understand it • Draw on work of others • Vector space • Probabilistic
A vector space model • In a vector space model, similarity is measured as the cosine of the angle between two vectors (the document vector and the query vector) • The value of each component of the vector might be • 0 or 1 • TF • TF × IDF • some other weight?
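A sketch of the cosine measure over sparse vectors represented as dicts; the component weights here are raw term frequencies (an invented toy example), but TF × IDF weights would plug in the same way:

```python
import math

def cosine(u: dict[str, float], v: dict[str, float]) -> float:
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

query = {"amazon": 1, "forest": 1}
doc = {"amazon": 2, "forest": 3, "river": 1}
print(cosine(query, doc))  # ~0.94: small angle, high similarity
```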
A possible Anthony and Cleopatra vector: (0, . . . 0, 157, 0, . . . 0, 4, 0, . . . 0, 235, 0, . . . 0, 57, 0, . . . 0, 2, 0, . . . 0, 2)
Authority • In classic IR • authority not so important • On the web • very important • Query “Harvard” • Dwane’s Harvard home page • The Harvard University home page
Simple methods • URL length • Domain name
(Example web page, tiled many times as filler across the slide; one copy retained:) Research interests/publications/lecturing/supervising My publications list probably does a reasonable job of ostensively defining my interests and past activities. For those with a preference for more explicit definitions, see below. I'm now working at the CIIR with an interest in automatically constructed categorisations and means of explaining these constructions to users. I also plan to work some more on a couple aspects of my thesis that look promising. Supervised by Keith van Rijsbergen in the Glasgow IR group, I finished my Ph.D. in 1997 looking at the issues surrounding the use of Word Sense Disambiguation applied to IR: a number of publications have resulted from this work. While doing my Ph.D., I was fortunate enough to apply for and get a small grant which enabled Ross Purves and I to investigate the use of IR ranked retrieval in the field of avalanche forecasting (snow avalanches that is), this resulted in a paper in JDoc. At Glasgow, I also worked on a number of TREC submissions and also co-wrote the guidelines for creating the very short queries introduced in TREC-6. Finally, I was involved in lecturing work on the AIS Advanced MSc course writing and presenting two short courses on Implementation and NLP applied to IR. I also supervised/co-supervised four MSc students. The work of three of these bright young things have been published in a number of good conferences. My first introduction to IR was the building (with Iain Campbell) of an interface to a probabilistic IR system.

Hubs and Authorities • Authorities - sites that other web pages link to frequently on a particular topic • Hubs - sites that tend to cite authorities
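The slide names the concepts but not the computation; hubs and authorities are usually computed with Kleinberg's HITS algorithm, sketched here on an invented toy link graph:

```python
# links maps each page to the pages it points to (toy graph).
links = {"a": ["c", "d"], "b": ["c", "d"], "c": ["d"], "d": []}

hub = {p: 1.0 for p in links}
auth = {p: 1.0 for p in links}
for _ in range(20):
    # A page's authority grows with the hub scores of the pages linking to it...
    auth = {p: sum(hub[q] for q in links if p in links[q]) for p in links}
    # ...and its hub score grows with the authority of the pages it cites.
    hub = {p: sum(auth[q] for q in links[p]) for p in links}
    for scores in (auth, hub):                 # normalise each iteration
        norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
        for p in scores:
            scores[p] /= norm

print(max(auth, key=auth.get))  # 'd': the most linked-to page is the top authority
print(max(hub, key=hub.get))    # 'a' (or 'b'): pages that point at the authorities
```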
Evaluation • Measure how well an IR system is doing • Effectiveness • Number of relevant documents retrieved • Also • Speed • Storage requirements • Usability
Search Engine Technology - What we get • Recall = 3 / 11 (of the 11 relevant documents in the collection, 3 were retrieved)
Search Engine Technology • Precision = 3 / 9 (of the 9 documents retrieved, 3 are relevant)
How do we tell what is good? • In small, closed collections, human judgments are used • On large collections, the overlap between systems is used • Search engines don’t say very much about how they work - the number of click-throughs may well be a factor
Effectiveness • Precision is easy • P at rank 10. • Recall is hard • Total number of relevant documents?
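A sketch matching the numbers from the earlier slides (11 relevant documents in the collection, 3 of them among 9 retrieved); the document ids are invented:

```python
def precision_at(k: int, ranked: list[int], relevant: set[int]) -> float:
    """Fraction of the top k retrieved documents that are relevant (easy)."""
    return sum(d in relevant for d in ranked[:k]) / k

def recall(ranked: list[int], relevant: set[int]) -> float:
    """Fraction of all relevant documents retrieved; hard, because it needs
    the total number of relevant documents in the collection."""
    return sum(d in relevant for d in ranked) / len(relevant)

relevant = set(range(1, 12))                    # 11 relevant documents
retrieved = [1, 20, 2, 21, 22, 3, 23, 24, 25]   # 9 retrieved, 3 relevant

print(precision_at(9, retrieved, relevant))  # 3/9 ≈ 0.33
print(recall(retrieved, relevant))           # 3/11 ≈ 0.27
```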
Test collections • Test collection • Set of documents (a few thousand to a few million) • Set of queries (50-400) • Set of relevance judgements • Humans can’t check all documents! • Use pooling • Take the top 100 from every submission • Remove duplicates • Manually assess these only
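A sketch of the pooling step described above (run depth 100 as on the slide; the run contents below are invented):

```python
def build_pool(runs: list[list[str]], depth: int = 100) -> set[str]:
    """Union of the top `depth` documents from each submitted run, with
    duplicates removed; only this pool goes to the human assessors."""
    pool: set[str] = set()
    for run in runs:                  # each run is a ranked list of doc ids
        pool.update(run[:depth])
    return pool

runs = [["d1", "d2", "d3"], ["d2", "d4", "d5"]]
print(build_pool(runs, depth=2))      # {'d1', 'd2', 'd4'}
```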
Test collections • Small collections (~3Mb) • Cranfield, NPL, CACM - title (& abstract) • Medium (~4 Gb) • TREC - full text • Large (~100Gb) • VLC track of TREC • Compare with reality (~40Tb) • CIA, GCHQ, Large search services
It is the end of the world… Index the documents • Document 1: It’s the end of the world as we know it. • Document 2: The end of the world is near. • Normalisation: It’s -> it is • The index stores unique terms for fast retrieval - don’t duplicate terms • Each unique term points to the documents containing it, with its term frequency in each (document ID : term frequency):
it -> 1:2
is -> 1:1, 2:1
the -> 1:2, 2:2
end -> 1:1, 2:1
of -> 1:1, 2:1
world -> 1:1, 2:1
as -> 1:1
we -> 1:1
know -> 1:1
near -> 2:1
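A sketch of building that index; the normalisation of "It's" to "it is" follows the slide, while the crude tokenizer (lowercase, strip trailing punctuation) is an assumption for illustration:

```python
from collections import Counter

docs = {
    1: "It's the end of the world as we know it.",
    2: "The end of the world is near.",
}

def tokenize(text: str) -> list[str]:
    text = text.lower().replace("it's", "it is")   # normalise: it's -> it is
    return [w.strip(".,") for w in text.split()]

# term -> {document id: term frequency}
index: dict[str, dict[int, int]] = {}
for doc_id, text in docs.items():
    for term, tf in Counter(tokenize(text)).items():
        index.setdefault(term, {})[doc_id] = tf

print(index["the"])   # {1: 2, 2: 2}
print(index["it"])    # {1: 2}
print(index["near"])  # {2: 1}
```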