Information Retrieval CSE 8337 (Part D) Spring 2009 Some material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze http://informationretrieval.org
CSE 8337 Outline • Introduction • Simple Text Processing • Boolean Queries • Web Searching/Crawling • Indexes • Vector Space Model • Matching • Evaluation
Why System Evaluation? • There are many retrieval models/ algorithms/ systems, which one is the best? • What does best mean? • IR evaluation may not actually look at traditional CS metrics of space/time. • What is the best component for: • Ranking function (dot-product, cosine, …) • Term selection (stopword removal, stemming…) • Term weighting (TF, TF-IDF,…) • How far down the ranked list will a user need to look to find some/all relevant documents?
Measures for a search engine • How fast does it index • Number of documents/hour • (Average document size) • How fast does it search • Latency as a function of index size • Expressiveness of query language • Ability to express complex information needs • Speed on complex queries • Uncluttered UI • Is it free?
Measures for a search engine • All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise • The key measure: user happiness • What is this? • Speed of response/size of index are factors • But blindingly fast, useless answers won’t make a user happy • Need a way of quantifying user happiness
Happiness: elusive to measure • Most common proxy: relevance of search results • But how do you measure relevance? • We will detail a methodology here, then examine its issues • Relevance measurement requires 3 elements: • A benchmark document collection • A benchmark suite of queries • A usually binary assessment of either Relevant or Nonrelevant for each query and each document
Difficulties in Evaluating IR Systems • Effectiveness is related to the relevancy of retrieved items. • Relevancy is not typically binary but continuous. • Even if relevancy is binary, it can be a difficult judgment to make. • Relevancy, from a human standpoint, is: • Subjective: Depends upon a specific user’s judgment. • Situational: Relates to user’s current needs. • Cognitive: Depends on human perception and behavior. • Dynamic: Changes over time.
How to perform evaluation • Start with a corpus of documents. • Collect a set of queries for this corpus. • Have one or more human experts exhaustively label the relevant documents for each query. • Typically assumes binary relevance judgments. • Requires considerable human effort for large document/query corpora.
IR Evaluation Metrics • Precision/Recall • P/R graph • Regular • Smoothing • Interpolating • Averaging • ROC Curve • MAP • R-Precision • P/R points • F-Measure • E-Measure • Fallout • Novelty • Coverage • Utility • ….
Precision and Recall • [Venn diagram: the entire document collection is partitioned into retrieved & relevant, retrieved & irrelevant, not retrieved but relevant, and not retrieved & irrelevant documents.]
Precision and Recall • Precision • The ability to retrieve top-ranked documents that are mostly relevant. • Recall • The ability of the search to find all of the relevant items in the corpus.
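A minimal Python sketch (not from the slides; the function and variable names are illustrative) of these two definitions for a single query, assuming binary relevance judgments and sets of document ids:

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / retrieved; Recall = relevant retrieved / all relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)                   # retrieved AND relevant
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 of the 6 retrieved docs are relevant, and 10 relevant docs exist in total.
print(precision_recall({1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 100, 101, 102, 103, 104, 105}))
# -> (0.666..., 0.4)
```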
Determining Recall is Difficult • Total number of relevant items is sometimes not available: • Sample across the database and perform relevance judgment on these items. • Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.
Trade-off between Recall and Precision • [Plot: precision (vertical axis, 0 to 1) versus recall (horizontal axis, 0 to 1). The ideal system sits at the upper right. One desired region returns most relevant documents but includes lots of junk (high recall, lower precision); the other returns relevant documents but misses many useful ones (high precision, lower recall).]
Recall-Precision Graph Smoothing • Avoid sawtooth lines by smoothing • Interpolate for one query • Average across queries
Interpolating a Recall/Precision Curve • Interpolate a precision value for each standard recall level: • rj ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} • r0 = 0.0, r1 = 0.1, …, r10 = 1.0 • The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j + 1)-th level: P(rj) = max { P(r) : rj ≤ r ≤ rj+1 }
Interpolated precision • Idea: If locally precision increases with increasing recall, then you should get to count that… • So you take the maximum of the precisions to the right of that recall value (this need not be done only at the standard levels).
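A hedged sketch of this interpolation rule in Python (the input format is an assumption, not something prescribed by the slides): take a ranked list of doc ids plus the gold-standard relevant set and report interpolated precision at the 11 standard recall levels.

```python
def interpolated_11pt(ranking, relevant):
    """Interpolated precision at recall levels 0.0, 0.1, ..., 1.0 for one query."""
    relevant = set(relevant)
    points, hits = [], 0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))   # (recall, precision)
    # Interpolated precision at level r = max precision at any recall >= r.
    return [max((p for r, p in points if r >= j / 10), default=0.0) for j in range(11)]

print(interpolated_11pt(["d3", "d1", "d7", "d5"], {"d1", "d5"}))  # -> eleven values of 0.5
```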
Precision across queries • Recall and Precision are calculated for a specific query. • Generally we want a value over many queries. • Calculate the average precision at each recall level over a set of queries. • Average precision at recall level r: Pavg(r) = (1/Nq) Σ(i=1..Nq) Pi(r) • Nq – number of queries • Pi(r) – precision at recall level r for the i-th query
Average Recall/Precision Curve • Typically average performance over a large set of queries. • Compute average precision at each standard recall level across all queries. • Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.
Compare Two or More Systems • The curve closest to the upper right-hand corner of the graph indicates the best performance
ROC Curve Data • False Positive Rate vs. True Positive Rate • True Positive Rate • Sensitivity • Recall • tp/(tp+fn) • False Positive Rate • fp/(fp+tn) • 1 − Specificity • Specificity = tn/(fp+tn)
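A small sketch of how one ROC point is computed from the four confusion counts (binary relevance assumed; the example numbers are illustrative, not from the slides):

```python
def roc_point(tp, fp, tn, fn):
    tpr = tp / (tp + fn)   # true positive rate = recall = sensitivity
    fpr = fp / (fp + tn)   # false positive rate = 1 - specificity
    return fpr, tpr

print(roc_point(tp=40, fp=10, tn=930, fn=20))   # -> (0.0106..., 0.666...)
```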
Yet more evaluation measures… • Mean average precision (MAP) • Average of the precision values obtained for the top k documents, each time a relevant doc is retrieved • Avoids interpolation and the use of fixed recall levels • MAP for a query collection is the arithmetic average • Macro-averaging: each query counts equally • R-precision • If we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate precision of the top Rel docs returned • A perfect system could score 1.0.
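A hedged sketch of MAP and R-precision (Python; `rankings` and `qrels` are illustrative names for per-query ranked lists and relevance judgments, not names used in the slides):

```python
def average_precision(ranking, relevant):
    """Average of the precision values taken at the rank of each relevant document retrieved."""
    relevant = set(relevant)
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0   # unretrieved relevant docs count as 0

def mean_average_precision(rankings, qrels):
    """Macro-average of per-query average precision (each query counts equally)."""
    return sum(average_precision(rankings[q], qrels[q]) for q in qrels) / len(qrels)

def r_precision(ranking, relevant):
    """Precision of the top |Rel| documents returned."""
    r = len(set(relevant))
    return len(set(ranking[:r]) & set(relevant)) / r if r else 0.0
```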
Variance • For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7) • Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. • That is, there are easy information needs and hard ones!
Evaluation • Graphs are good, but people want summary measures! • Precision at fixed retrieval level • Precision-at-k: Precision of the top k results • Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages • But: averages badly and has an arbitrary parameter k • 11-point interpolated average precision • The standard measure in the early TREC competitions: take the precision at 11 recall levels varying from 0 to 1 in steps of 0.1, using interpolation (the value for recall 0 is always interpolated!), and average them • Evaluates performance at all recall levels
Typical (good) 11 point precisions • SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)
Computing Recall/Precision Points • For a given query, produce the ranked list of retrievals. • Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures. • Mark each document in the ranked list that is relevant according to the gold standard. • Compute a recall/precision pair for each position in the ranked list that contains a relevant document.
Computing Recall/Precision Points: An Example (modified from [Salton83]) Let total # of relevant docs = 6. Check each new recall point: R=1/6=0.167; P=1/1=1 R=2/6=0.333; P=2/2=1 R=3/6=0.5; P=3/4=0.75 R=4/6=0.667; P=4/6=0.667 R=5/6=0.833; P=5/13=0.385 One relevant document is never retrieved, so 100% recall is never reached.
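The points above can be reproduced with a short Python sketch (the ranked list is encoded here as True/False relevance flags by rank, which is an assumption about representation, not part of the original example):

```python
def rp_points(is_relevant_at_rank, total_relevant):
    points, hits = [], 0
    for rank, rel in enumerate(is_relevant_at_rank, start=1):
        if rel:
            hits += 1
            points.append((hits / total_relevant, hits / rank))
    return points

# Relevant documents appear at ranks 1, 2, 4, 6, and 13; one of the 6 is never retrieved.
ranked = [True, True, False, True, False, True,
          False, False, False, False, False, False, True]
for r, p in rp_points(ranked, total_relevant=6):
    print(f"R={r:.3f}  P={p:.3f}")
# R=0.167 P=1.000, R=0.333 P=1.000, R=0.500 P=0.750, R=0.667 P=0.667, R=0.833 P=0.385
```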
F-Measure • One measure of performance that takes into account both recall and precision. • Harmonic mean of recall and precision: F = 2PR / (P + R) • Calculated at a specific document in the ranking. • Compared to the arithmetic mean, both precision and recall need to be high for the harmonic mean to be high. • Compromise between precision and recall.
A combined measure: F • The combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean): F = 1 / (α/P + (1−α)/R) = (β² + 1)PR / (β²P + R), where β² = (1−α)/α • People usually use the balanced F1 measure • i.e., with β = 1 or α = ½
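A minimal sketch of the weighted F computation (Python; the β parameterization follows the formula above):

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall; beta=1 gives the balanced F1."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_measure(0.75, 0.5))            # balanced F1 = 0.6
print(f_measure(0.75, 0.5, beta=3.0))  # larger beta leans toward recall
```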
E Measure (parameterized F Measure) • A variant of the F measure that allows weighting emphasis on precision or recall: E = 1 − (1 + b²) / (b²/R + 1/P) • The value of b controls the trade-off: • b = 1: Equally weight precision and recall (E = 1 − F). • b > 1: Weight precision more. • b < 1: Weight recall more.
Fallout Rate • Problems with both precision and recall: • Number of irrelevant documents in the collection is not taken into account. • Recall is undefined when there is no relevant document in the collection. • Precision is undefined when no document is retrieved.
Fallout • Fallout = number of nonrelevant documents retrieved / total number of nonrelevant documents in the collection. • Want fallout to be close to 0. • In general, want to maximize recall and minimize fallout. • Examine the fallout-recall graph. More systems oriented than recall-precision.
Subjective Relevance Measure • Novelty Ratio: The proportion of items retrieved and judged relevant by the user and of which they were previously unaware. • Ability to find new information on a topic. • Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search. • Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).
Utility • Subjective measure • Cost-Benefit Analysis for retrieved documents • Cr – Benefit of retrieving relevant document • Cnr – Cost of retrieving a nonrelevant document • Crn – Cost of not retrieving a relevant document • Nr – Number of relevant documents retrieved • Nnr – Number of nonrelevant documents retrieved • Nrn – Number of relevant documents not retrieved
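The slide lists the benefit/cost weights and the counts but not how to combine them; one natural linear cost-benefit score (an assumption, not given on the slide) would be U = Cr·Nr − Cnr·Nnr − Crn·Nrn, where a larger U indicates a more useful retrieval result.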
Other Factors to Consider • User effort: Work required from the user in formulating queries, conducting the search, and screening the output. • Response time: Time interval between receipt of a user query and the presentation of system responses. • Form of presentation: Influence of search output format on the user’s ability to utilize the retrieved materials. • Collection coverage: Extent to which any/all relevant items are included in the document corpus.
Experimental Setup for Benchmarking • Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision. • Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments. • Performance data is valid only for the environment under which the system is evaluated.
Benchmarks • A benchmark collection contains: • A set of standard documents and queries/topics. • A list of relevant documents for each query. • Standard collections for traditional IR: TREC: http://trec.nist.gov/ • [Diagram: the standard document collection and standard queries are fed to the evaluation algorithm under test; its retrieved result is compared with the standard result to compute precision and recall.]
Benchmarking The Problems • Performance data is valid only for a particular benchmark. • Building a benchmark corpus is a difficult task. • Benchmark web corpora are just starting to be developed. • Benchmark foreign-language corpora are just starting to be developed.
The TREC Benchmark • TREC: Text REtrieval Conference (http://trec.nist.gov/) • Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA). • Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA. • Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing. • Participants submit the P/R values for the final document and query corpus and present their results at the conference.
The TREC Objectives • Provide a common ground for comparing different IR techniques. • Same set of documents and queries, and same evaluation method. • Sharing of resources and experiences in developing the benchmark. • With major sponsorship from government to develop large benchmark collections. • Encourage participation from industry and academia. • Development of new evaluation techniques, particularly for new applications. • Retrieval, routing/filtering, non-English collections, web-based collections, question answering.
From document collections to test collections • Still need • Test queries • Relevance assessments • Test queries • Must be germane to docs available • Best designed by domain experts • Random query terms generally not a good idea • Relevance assessments • Human judges, time-consuming • Are human panels perfect?
Unit of Evaluation • We can compute precision, recall, F, and ROC curve for different units. • Possible units • Documents (most common) • Facts (used in some TREC evaluations) • Entities (e.g., car companies) • May produce different results. Why?
Kappa measure for inter-judge (dis)agreement • Kappa measure • Agreement measure among judges • Designed for categorical judgments • Corrects for chance agreement • Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ] • P(A) – proportion of time judges agree • P(E) – what agreement would be by chance • Kappa = 0 for chance agreement, 1 for total agreement.
Kappa Measure: Example • Two judges assess the same 400 documents: 300 are judged relevant by both, 70 nonrelevant by both, and they disagree on the remaining 30 (20 + 10). • P(A)? P(E)?
Kappa Example • P(A) = 370/400 = 0.925 • P(nonrelevant) = (10+20+70+70)/800 = 0.2125 • P(relevant) = (10+20+300+300)/800 = 0.7875 • P(E) = 0.2125² + 0.7875² = 0.665 • Kappa = (0.925 − 0.665)/(1 − 0.665) = 0.776 • Kappa > 0.8: good agreement • 0.67 < Kappa < 0.8: “tentative conclusions” (Carletta ’96) • Depends on purpose of study • For > 2 judges: average pairwise kappas
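A sketch that reproduces the computation above from the 2×2 agreement counts (Python; the argument names are illustrative):

```python
def kappa(both_rel, both_nonrel, only_j1_rel, only_j2_rel):
    n = both_rel + both_nonrel + only_j1_rel + only_j2_rel          # documents judged
    p_agree = (both_rel + both_nonrel) / n                          # P(A)
    # Pooled marginals over the 2n individual judgments.
    p_rel = (2 * both_rel + only_j1_rel + only_j2_rel) / (2 * n)
    p_chance = p_rel ** 2 + (1 - p_rel) ** 2                        # P(E)
    return (p_agree - p_chance) / (1 - p_chance)

print(kappa(300, 70, 20, 10))   # -> about 0.776
```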