
Information Retrieval






Presentation Transcript


  1. Information Retrieval CSE 8337 (Part IV) Spring 2011 Some material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze http://informationretrieval.org

  2. CSE 8337 Outline • Introduction • Text Processing • Indexes • Boolean Queries • Web Searching/Crawling • Vector Space Model • Matching • Evaluation • Feedback/Expansion

  3. Why System Evaluation? • There are many retrieval models/algorithms/systems; which one is the best? • What does best mean? • IR evaluation may not actually look at traditional CS metrics of space/time. • What is the best component for: • Ranking function (dot-product, cosine, …) • Term selection (stopword removal, stemming, …) • Term weighting (TF, TF-IDF, …) • How far down the ranked list will a user need to look to find some/all relevant documents?

  4. Measures for a search engine • How fast does it index • Number of documents/hour • (Average document size) • How fast does it search • Latency as a function of index size • Expressiveness of query language • Ability to express complex information needs • Speed on complex queries • Uncluttered UI • Is it free?

  5. Measures for a search engine • All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise • The key measure: user happiness • What is this? • Speed of response/size of index are factors • But blindingly fast, useless answers won’t make a user happy • Need a way of quantifying user happiness

  6. Happiness: elusive to measure • Most common proxy: relevance of search results • But how do you measure relevance? • We will detail a methodology here, then examine its issues • Relevant measurement requires 3 elements: • A benchmark document collection • A benchmark suite of queries • A usually binary assessment of either Relevant or Nonrelevant for each query and each document

  7. Difficulties in Evaluating IR Systems • Effectiveness is related to the relevancy of retrieved items. • Relevancy is not typically binary but continuous. • Even if relevancy is binary, it can be a difficult judgment to make. • Relevancy, from a human standpoint, is: • Subjective: Depends upon a specific user’s judgment. • Situational: Relates to user’s current needs. • Cognitive: Depends on human perception and behavior. • Dynamic: Changes over time.

  8. How to perform evaluation • Start with a corpus of documents. • Collect a set of queries for this corpus. • Have one or more human experts exhaustively label the relevant documents for each query. • Typically assumes binary relevance judgments. • Requires considerable human effort for large document/query corpora.

  9. IR Evaluation Metrics • Precision/Recall • P/R graph • Regular • Smoothing • Interpolating • Averaging • ROC Curve • MAP • R-Precision • P/R points • F-Measure • E-Measure • Fallout • Novelty • Coverage • Utility • ….

  10. retrieved & irrelevant Not retrieved & irrelevant Entire document collection irrelevant Relevant documents Retrieved documents retrieved & relevant not retrieved but relevant relevant retrieved not retrieved Precision and Recall

  11. Determining Recall is Difficult • Total number of relevant items is sometimes not available: • Sample across the database and perform relevance judgment on these items. • Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.

  12. Trade-off between Recall and Precision [Figure: precision (vertical axis, 0 to 1) vs. recall (horizontal axis, 0 to 1). The ideal sits at the top right; systems that return most relevant documents but include lots of junk sit at high recall / low precision, while systems that return relevant documents but miss many useful ones sit at high precision / low recall. The desired operating areas lie toward the upper right.]

  13. Recall-Precision Graph Example

  14. A precision-recall curve

  15. Recall-Precision Graph Smoothing • Avoid sawtooth lines by smoothing • Interpolate for one query • Average across queries

  16. Interpolating a Recall/Precision Curve • Interpolate a precision value for each standard recall level: • rj ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} • r0 = 0.0, r1 = 0.1, …, r10 = 1.0 • The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j + 1)-th level: P(rj) = max { P(r) : rj ≤ r ≤ rj+1 }

  17. Interpolated precision • Idea: If locally precision increases with increasing recall, then you should get to count that… • So you take the max of the precisions to the right of that recall value (which need not lie only at the standard levels), as sketched below.
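
A small sketch of this interpolation at the 11 standard recall levels, using the "max precision at any recall at or above rj" reading described above. The input format (a list of (recall, precision) points per query) is an assumption:

```python
def interpolate_11pt(pr_points):
    """pr_points: list of (recall, precision) pairs for one query."""
    levels = [j / 10 for j in range(11)]
    interpolated = []
    for r_j in levels:
        # Max precision among all observed points whose recall is >= r_j.
        candidates = [p for r, p in pr_points if r >= r_j]
        interpolated.append(max(candidates) if candidates else 0.0)
    return interpolated

# Illustrative points (they match the worked example a few slides below):
points = [(0.167, 1.0), (0.333, 1.0), (0.5, 0.75), (0.667, 0.667), (0.833, 0.385)]
print(interpolate_11pt(points))
```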

  18. Precision across queries • Recall and Precision are calculated for a specific query. • Generally we want a value over many queries. • Calculate average precision at each recall level over a set of queries. • Average precision at recall level r: P(r) = (1/Nq) · Σ Pi(r), summed over i = 1 … Nq • Nq – number of queries • Pi(r) – precision at recall level r for the ith query

  19. Average Recall/Precision Curve • Typically average performance over a large set of queries. • Compute average precision at each standard recall level across all queries. • Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.
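
A sketch of the averaging step described on the two slides above, assuming each query has already been reduced to an 11-point interpolated precision curve; the example curves q1 and q2 are made-up illustrations:

```python
def average_curve(curves):
    """curves: one 11-value interpolated precision list per query."""
    n_q = len(curves)
    return [sum(c[j] for c in curves) / n_q for j in range(11)]

# Two made-up per-query curves, just to show the shape of the computation:
q1 = [1.0, 1.0, 1.0, 0.8, 0.8, 0.6, 0.6, 0.4, 0.4, 0.2, 0.2]
q2 = [1.0, 0.9, 0.7, 0.7, 0.5, 0.5, 0.5, 0.3, 0.3, 0.1, 0.1]
print(average_curve([q1, q2]))
```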

  20. Compare Two or More Systems • The curve closest to the upper right-hand corner of the graph indicates the best performance

  21. Receiver Operating Characteristic Curve (ROC Curve)

  22. ROC Curve Data • Plots True Positive Rate (y-axis) against False Positive Rate (x-axis) • True Positive Rate = tp/(tp+fn) • Sensitivity – proportion of positive results received • Same as Recall • False Positive Rate = fp/(fp+tn) • Equals 1 – Specificity • Specificity = tn/(tn+fp) – proportion of negative results not received
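
A minimal sketch of the two rates, assuming the usual confusion-matrix counts tp, fp, tn, fn:

```python
def tpr(tp, fn):
    """True positive rate (sensitivity, recall) = tp / (tp + fn)."""
    return tp / (tp + fn)

def fpr(fp, tn):
    """False positive rate = fp / (fp + tn) = 1 - specificity."""
    return fp / (fp + tn)

print(tpr(tp=40, fn=10))  # 0.8  (illustrative counts)
print(fpr(fp=30, tn=70))  # 0.3
```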

  23. Yet more evaluation measures… • Mean average precision (MAP) • Average of the precision values obtained for the top k documents, taken each time a relevant doc is retrieved • Avoids interpolation and the use of fixed recall levels • MAP for a query collection is the arithmetic average over queries • Macro-averaging: each query counts equally • R-precision • If we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate the precision of the top Rel docs returned • A perfect system could score 1.0.
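
A hedged sketch of MAP and R-precision, assuming each query supplies a ranked list of document IDs plus a set of known relevant documents. Dividing average precision by the full relevant-set size follows the common TREC convention, not necessarily the slide's exact formula:

```python
def average_precision(ranking, relevant):
    """Mean of the precision values taken at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranking, relevant_set) pairs, one per query (macro-average)."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def r_precision(ranking, relevant):
    """Precision within the top |Rel| positions of the ranking."""
    r = len(relevant)
    return len([d for d in ranking[:r] if d in relevant]) / r if r else 0.0
```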

  24. Variance • For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and well on others (e.g., MAP = 0.7). • Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. • That is, there are easy information needs and hard ones!

  25. Evaluation • Graphs are good, but people want summary measures! • Precision at fixed retrieval level • Precision-at-k: Precision of the top k results • Perhaps appropriate for most web search: all people want are good matches on the first one or two results pages • But: averages badly and has an arbitrary parameter k • 11-point interpolated average precision • The standard measure in the early TREC competitions: you take the precision at 11 recall levels varying from 0 to 1 in tenths, using interpolation (the value for recall 0 is always interpolated!), and average them • Evaluates performance at all recall levels

  26. Typical (good) 11 point precisions • SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)

  27. Computing Recall/Precision Points • For a given query, produce the ranked list of retrievals. • Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures. • Mark each document in the ranked list that is relevant according to the gold standard. • Compute a recall/precision pair for each position in the ranked list that contains a relevant document.

  28. Computing Recall/Precision Points: An Example (modified from [Salton83]) Let the total # of relevant docs = 6. Check each new recall point: R=1/6=0.167; P=1/1=1 R=2/6=0.333; P=2/2=1 R=3/6=0.5; P=3/4=0.75 R=4/6=0.667; P=4/6=0.667 R=5/6=0.833; P=5/13=0.385
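
A sketch that reproduces these points. The ranks of the relevant documents (1, 2, 4, 6, 13) are inferred from the slide's numbers, and the sixth relevant document is assumed not to have been retrieved in the portion of the ranking shown:

```python
def recall_precision_points(relevant_ranks, total_relevant):
    """One (recall, precision) pair per rank at which a relevant doc appears."""
    points = []
    for i, rank in enumerate(sorted(relevant_ranks), start=1):
        points.append((i / total_relevant, i / rank))
    return points

for r, p in recall_precision_points([1, 2, 4, 6, 13], total_relevant=6):
    print(f"R={r:.3f}  P={p:.3f}")
# R=0.167 P=1.000, R=0.333 P=1.000, R=0.500 P=0.750,
# R=0.667 P=0.667, R=0.833 P=0.385
```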

  29. F-Measure • One measure of performance that takes into account both recall and precision. • Harmonic mean of recall and precision: F = 2PR / (P + R) • Calculated at a specific document in the ranking. • Compared to the arithmetic mean, both P and R need to be high for the harmonic mean to be high. • A compromise between precision and recall.

  30. A combined measure: F • The combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean): F = 1 / (α/P + (1 − α)/R) = (β² + 1)PR / (β²P + R), where β² = (1 − α)/α • People usually use the balanced F1 measure • i.e., with β = 1 or α = ½
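
A short sketch of the balanced and β-weighted F-measure under the parameterization above (the slide's own formula image did not survive extraction):

```python
def f_measure(p, r, beta=1.0):
    """Weighted harmonic mean of precision p and recall r."""
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

print(f_measure(0.75, 0.5))            # balanced F1 = 0.6
print(f_measure(0.75, 0.5, beta=2.0))  # ≈ 0.54; emphasis shifts toward recall as beta grows here
```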

  31. E Measure (parameterized F Measure) • A variant of F measure that allows weighting emphasis on precision or recall: • Value of β controls the trade-off: • β = 1: Equally weight precision and recall (E = F). • β > 1: Weight precision more. • β < 1: Weight recall more.

  32. Fallout Rate • Problems with both precision and recall: • Number of irrelevant documents in the collection is not taken into account. • Recall is undefined when there is no relevant document in the collection. • Precision is undefined when no document is retrieved.

  33. Fallout • Want fallout to be close to 0. • In general, want to maximize recall and minimize fallout. • Examine the fallout-recall graph; it is more systems-oriented than recall-precision.
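
A sketch of fallout, assuming the usual definition (the fraction of the collection's nonrelevant documents that were retrieved); the slide's own formula image did not survive:

```python
def fallout(retrieved: set, relevant: set, collection: set) -> float:
    """Proportion of nonrelevant documents in the collection that were retrieved."""
    nonrelevant = collection - relevant
    return len(retrieved & nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
```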

  34. Subjective Relevance Measures • Novelty Ratio: The proportion of items retrieved and judged relevant by the user and of which they were previously unaware. • Ability to find new information on a topic. • Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search. • Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).

  35. Utility • Subjective measure • Cost-Benefit Analysis for retrieved documents • Cr – Benefit of retrieving relevant document • Cnr – Cost of retrieving a nonrelevant document • Crn – Cost of not retrieving a relevant document • Nr – Number of relevant documents retrieved • Nnr – Number of nonrelevant documents retrieved • Nrn – Number of relevant documents not retrieved
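
The slide lists the cost/benefit quantities, but the combining formula did not survive extraction. A plausible linear form is sketched below; the exact weights and sign convention are an assumption:

```python
def utility(n_r, n_nr, n_rn, c_r=1.0, c_nr=1.0, c_rn=1.0):
    """Assumed linear utility: benefit of relevant hits minus costs of
    retrieving nonrelevant documents and of missing relevant ones:
    U = Cr*Nr - Cnr*Nnr - Crn*Nrn."""
    return c_r * n_r - c_nr * n_nr - c_rn * n_rn

print(utility(n_r=20, n_nr=5, n_rn=3))  # illustrative counts
```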

  36. Other Factors to Consider • User effort: Work required from the user in formulating queries, conducting the search, and screening the output. • Response time: Time interval between receipt of a user query and the presentation of system responses. • Form of presentation: Influence of search output format on the user’s ability to utilize the retrieved materials. • Collection coverage: Extent to which any/all relevant items are included in the document corpus.

  37. Experimental Setup for Benchmarking • Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision. • Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments. • Performance data is valid only for the environment under which the system is evaluated.

  38. Benchmarks • A benchmark collection contains: • A set of standard documents and queries/topics. • A list of relevant documents for each query. • Standard collections for traditional IR: TREC: http://trec.nist.gov/ [Figure: the algorithm under test takes the standard document collection and standard queries, its retrieved result is compared against the standard result, and evaluation is reported as precision and recall.]

  39. Benchmarking: The Problems • Performance data is valid only for a particular benchmark. • Building a benchmark corpus is a difficult task. • Benchmark web corpora are just starting to be developed. • Benchmark foreign-language corpora are just starting to be developed.

  40. The TREC Benchmark • TREC: Text REtrieval Conference (http://trec.nist.gov/) • Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA). • Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA. • Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing. • Participants submit the P/R values for the final document and query corpus and present their results at the conference.

  41. The TREC Objectives • Provide a common ground for comparing different IR techniques. • Same set of documents and queries, and same evaluation method. • Sharing of resources and experiences in developing the benchmark. • With major sponsorship from government to develop large benchmark collections. • Encourage participation from industry and academia. • Development of new evaluation techniques, particularly for new applications. • Retrieval, routing/filtering, non-English collections, web-based collections, question answering.

  42. From document collections to test collections • Still need • Test queries • Relevance assessments • Test queries • Must be germane to docs available • Best designed by domain experts • Random query terms generally not a good idea • Relevance assessments • Human judges, time-consuming • Are human panels perfect?

  43. Kappa measure for inter-judge (dis)agreement • Kappa measure • Agreement measure among judges • Designed for categorical judgments • Corrects for chance agreement • Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ] • P(A) – proportion of time judges agree • P(E) – what agreement would be by chance • Kappa = 0 for chance agreement, 1 for total agreement.

  44. Kappa Measure: Example [Table: two judges each assess the same 400 documents; 300 are judged relevant by both, 70 nonrelevant by both, and the remaining 30 are split decisions (20 one way, 10 the other).] P(A)? P(E)?

  45. Kappa Example • P(A) = 370/400 = 0.925 • P(nonrelevant) = (10+20+70+70)/800 = 0.2125 • P(relevant) = (10+20+300+300)/800 = 0.7875 • P(E) = 0.2125² + 0.7875² = 0.665 • Kappa = (0.925 – 0.665)/(1 – 0.665) = 0.776 • Kappa > 0.8 indicates good agreement • 0.67 < Kappa < 0.8 → “tentative conclusions” (Carletta ’96) • Depends on the purpose of the study • For > 2 judges: average pairwise kappas
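
A sketch that reproduces this computation from the 2x2 judgment counts implied by the numbers above (300 both-relevant, 70 both-nonrelevant, 20 and 10 disagreements); the counts are inferred, not given verbatim on the slide:

```python
def kappa(both_rel, both_nonrel, j1_only, j2_only):
    """Cohen's kappa for two judges making binary relevance judgments."""
    n = both_rel + both_nonrel + j1_only + j2_only
    p_agree = (both_rel + both_nonrel) / n
    # Marginal probabilities of "relevant" / "nonrelevant", pooled over both judges.
    p_rel = (2 * both_rel + j1_only + j2_only) / (2 * n)
    p_nonrel = (2 * both_nonrel + j1_only + j2_only) / (2 * n)
    p_chance = p_rel ** 2 + p_nonrel ** 2
    return (p_agree - p_chance) / (1 - p_chance)

print(kappa(300, 70, 20, 10))  # ≈ 0.776
```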

  46. Interjudge Agreement: TREC 3

  47. Impact of Inter-judge Agreement • Impact on absolute performance measure can be significant (0.32 vs 0.39) • Little impact on ranking of different systems or relative performance • Suppose we want to know if algorithm A is better than algorithm B • A standard information retrieval experiment will give us a reliable answer to this question.

  48. Critique of pure relevance • Relevance vs. Marginal Relevance • A document can be redundant even if it is highly relevant • Duplicates • The same information from different sources • Marginal relevance is a better measure of utility for the user. • Using facts/entities as evaluation units more directly measures true relevance. • But it is harder to create the evaluation set.

  49. Can we avoid human judgment? • No • Makes experimental work hard • Especially on a large scale • In some very specific settings, can use proxies • E.g.: for approximate vector space retrieval, we can compare the cosine distance closeness of the closest docs to those found by an approximate retrieval algorithm • But once we have test collections, we can reuse them (so long as we don’t overtrain too badly)

  50. Evaluation at large search engines • Search engines have test collections of queries and hand-ranked results • Recall is difficult to measure on the web • Search engines often use precision at top k, e.g., k = 10 • . . . or measures that reward you more for getting rank 1 right than for getting rank 10 right • NDCG (Normalized Discounted Cumulative Gain) • Search engines also use non-relevance-based measures • Clickthrough on first result • Not very reliable if you look at a single clickthrough … but pretty reliable in the aggregate • Studies of user behavior in the lab • A/B testing
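
A hedged sketch of NDCG@k with graded relevance labels and the common log2 discount; production systems use varying gain and discount functions, so this is one standard formulation rather than any particular engine's:

```python
import math

def dcg(gains, k):
    """Discounted cumulative gain over the top k graded relevance labels."""
    return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], start=1))

def ndcg(gains, k):
    """DCG normalized by the DCG of the ideal (sorted) ordering."""
    ideal = dcg(sorted(gains, reverse=True), k)
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1, 2], k=6))  # graded labels in ranked order (illustrative)
```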
