Document ranking Paolo Ferragina Dipartimento di Informatica Università di Pisa
Document ranking Text-based Ranking (1st generation) Reading 6.2 and 6.3
Similarity between binary vectors • Each document is a binary vector X, Y in {0,1}^D (D = #distinct terms) • Score: the overlap measure |X ∩ Y| What's wrong?
Normalization • Dice coefficient (wrt avg #terms): Dice(X,Y) = 2|X ∩ Y| / (|X| + |Y|) → NO, not triangular • Jaccard coefficient (wrt possible terms): Jaccard(X,Y) = |X ∩ Y| / |X ∪ Y| → OK, triangular (1 − Jaccard is a distance)
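A minimal Python sketch of the three set-based scores above (illustrative code, not from the slides); documents are represented as sets of terms, i.e. binary vectors over the vocabulary:

```python
def overlap(x: set, y: set) -> int:
    """Plain overlap measure: |X ∩ Y|."""
    return len(x & y)

def dice(x: set, y: set) -> float:
    """Dice coefficient: 2|X ∩ Y| / (|X| + |Y|)."""
    return 2 * len(x & y) / (len(x) + len(y))

def jaccard(x: set, y: set) -> float:
    """Jaccard coefficient: |X ∩ Y| / |X ∪ Y|."""
    return len(x & y) / len(x | y)

doc1 = {"the", "cat", "sat", "on", "the", "mat"}
doc2 = {"the", "cat", "ate", "the", "mouse"}
print(overlap(doc1, doc2), dice(doc1, doc2), jaccard(doc1, doc2))
```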
What’s wrong in doc-similarity? Overlap matching doesn’t consider: • Term frequency in a document • A doc that talks more about t? Then t should be weighted more. • Term scarcity in the collection • e.g., “of” is far more common than “baby bed” • Length of documents • scores should be normalized by document length
A famous “weight”: tf-idf • tf-idf_{t,d} = tf_{t,d} × idf_t, where idf_t = log(n / n_t) • tf_{t,d} = number of occurrences of term t in doc d • n_t = #docs containing term t, n = #docs in the indexed collection
Vector Space model
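A small illustrative sketch of the tf-idf weight just defined (the base of the logarithm is a convention; function and variable names are ours):

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per doc,
    with tf-idf_{t,d} = tf_{t,d} * log(n / n_t)."""
    n = len(docs)
    n_t = Counter()                        # n_t = #docs containing term t
    for d in docs:
        n_t.update(set(d))
    weights = []
    for d in docs:
        tf = Counter(d)                    # tf_{t,d} = #occurrences of t in d
        weights.append({t: f * math.log(n / n_t[t]) for t, f in tf.items()})
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ate", "the", "dog"]]
print(tf_idf(docs)[0])   # terms occurring in every doc get weight 0
```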
Sec. 6.3 Why distance is a bad idea
Postulate: Documents that are “close together” in the vector space talk about the same things. Euclidean distance is sensitive to vector length!! Easy to spam. A graphical example: [figure: docs d1…d5 and the query plotted as vectors over terms t1, t2, t3; the angle a between two vectors gives cos(a) = v·w / (||v||·||w||)] Sophisticated algorithms are needed to find the top-k docs for a query Q. The user query is a very short doc.
Sec. 6.3 cosine(query,document) • cos(q,d) = q·d / (||q||·||d||) = Σ_i q_i·d_i / ( √(Σ_i q_i²) · √(Σ_i d_i²) ), i.e. the dot product of the length-normalized vectors • q_i is the tf-idf weight of term i in the query • d_i is the tf-idf weight of term i in the document • cos(q,d) is the cosine similarity of q and d … or, equivalently, the cosine of the angle between q and d.
Cos for length-normalized vectors • For length-normalized vectors, cosine similarity is simply the dot product (or scalar product): cos(q,d) = q·d = Σ_i q_i·d_i, for q, d length-normalized.
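A possible sketch of cosine scoring via length normalization, on sparse {term: weight} dictionaries (illustrative names):

```python
import math

def normalize(vec):
    """Length-normalize a sparse {term: weight} vector."""
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()} if norm else dict(vec)

def cosine(q, d):
    """For length-normalized vectors the cosine is just the dot product."""
    qn, dn = normalize(q), normalize(d)
    return sum(w * dn.get(t, 0.0) for t, w in qn.items())

print(cosine({"jealous": 2.0, "gossip": 1.0}, {"jealous": 1.0, "affection": 3.0}))
```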
Cosine similarity among 3 docs How similar are the novels SaS: Sense and Sensibility, PaP: Pride and Prejudice, and WH: Wuthering Heights? Term frequencies (counts):
            SaS   PaP   WH
affection   115    58   20
jealous      10     7   11
gossip        2     0    6
wuthering     0     0   38
Note: To simplify this example, we don’t do idf weighting.
3 documents example contd. Log frequency weighting (w = 1 + log10 tf, or 0 if tf = 0):
            SaS   PaP    WH
affection  3.06  2.76  2.30
jealous    2.00  1.85  2.04
gossip     1.30     0  1.78
wuthering     0     0  2.58
After length normalization:
            SaS    PaP     WH
affection  0.789  0.832  0.524
jealous    0.515  0.555  0.465
gossip     0.335  0.0    0.405
wuthering  0.0    0.0    0.588
cos(SaS,PaP) ≈ 0.789 × 0.832 + 0.515 × 0.555 + 0.335 × 0.0 + 0.0 × 0.0 ≈ 0.94
cos(SaS,WH) ≈ 0.79   cos(PaP,WH) ≈ 0.69
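The same worked example in code, assuming the term counts shown above; it reproduces the three cosine values:

```python
import math

# Raw term counts from the table above (SaS / PaP / WH example).
tf = {
    "SaS": {"affection": 115, "jealous": 10, "gossip": 2, "wuthering": 0},
    "PaP": {"affection": 58,  "jealous": 7,  "gossip": 0, "wuthering": 0},
    "WH":  {"affection": 20,  "jealous": 11, "gossip": 6, "wuthering": 38},
}

def log_weight(count):
    """w = 1 + log10(tf) if tf > 0, else 0 (no idf, as in the slides)."""
    return 1 + math.log10(count) if count > 0 else 0.0

def normalized(doc):
    w = {t: log_weight(c) for t, c in doc.items()}
    norm = math.sqrt(sum(x * x for x in w.values()))
    return {t: x / norm for t, x in w.items()}

vecs = {name: normalized(doc) for name, doc in tf.items()}

def cos(a, b):
    return sum(vecs[a][t] * vecs[b][t] for t in vecs[a])

print(round(cos("SaS", "PaP"), 2), round(cos("SaS", "WH"), 2), round(cos("PaP", "WH"), 2))
# -> 0.94 0.79 0.69
```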
Sec. 7.1.2 Storage • For every term t, we store its IDF in memory in the form of n_t, which is just the length of t’s posting list (needed anyway). • For every docID d in the posting list of term t, we store its frequency tf_{t,d}, which is typically small and thus stored with unary/gamma codes.
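One common way to code small tf values is Elias gamma, shown here in the unary-length-plus-offset form; an illustrative sketch, not necessarily the exact coder used in the course:

```python
def gamma_encode(x: int) -> str:
    """Elias gamma code of x >= 1: unary code of len(offset), then the
    offset = binary(x) without its leading 1-bit."""
    assert x >= 1
    offset = bin(x)[3:]               # binary of x, leading '1' dropped
    unary = "1" * len(offset) + "0"   # unary code of the offset length
    return unary + offset

# tf values are typically small, so their gamma codes are short:
print([gamma_encode(v) for v in (1, 2, 3, 13)])   # ['0', '100', '101', '1110101']
```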
Vector spaces and other operators • Vector space OK for bag-of-words queries • Clean metaphor for similar-document queries • Not a good combination with operators: Boolean, wild-card, positional, proximity • First generation of search engines • Invented before “spamming” web search
Document ranking Top-k retrieval Reading 7
Sec. 7.1.1 Speed-up top-k retrieval • The costly step is the computation of the cosine scores • Find a set A of contenders, with K < |A| << N • A does not necessarily contain the top K, but has many docs from among the top K • Return the top K docs in A, according to the score • The same approach is also used for other (non-cosine) scoring functions • Will look at several schemes following this approach
Sec. 7.1.2 How to select A’s docs • Consider only docs containing at least one query term (this is what the inverted index already gives us). • Take this further: • Only consider high-idf query terms • Champion lists: keep only the top-scoring docs per term • Only consider docs containing many query terms • Fancy hits: for complex ranking functions • Clustering
Sec. 7.1.2 Approach #1: High-idf query terms only • For a query such as catcher in the rye • Only accumulate scores from catcher and rye • Intuition: in and the contribute little to the scores and so don’t alter rank-ordering much • Benefit: postings of low-idf terms have many docs → these (many) docs get eliminated from the set A of contenders
Approach #2: Champion Lists • Preprocess: assign to each term its m best documents • Search: • If |Q| = q terms, merge their champion lists (≤ mq candidates). • Compute COS between Q and these docs, and choose the top k. Need to pick m > k for this to work well empirically. Nowadays search engines use tf-idf PLUS PageRank (PLUS other weights)
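A rough sketch of champion lists, assuming an `index` mapping term → {docID: tf-idf weight} and a caller-supplied `score` function (both hypothetical names):

```python
import heapq

def build_champion_lists(index, m):
    """index: {term: {docID: tf-idf weight}} -> {term: set of its m best docIDs}."""
    return {t: set(heapq.nlargest(m, postings, key=postings.get))
            for t, postings in index.items()}

def query_champions(query_terms, champions, score, k):
    """score(docID, query_terms) -> float (e.g. cosine); returns the top-k docIDs."""
    candidates = set().union(*(champions.get(t, set()) for t in query_terms))
    return heapq.nlargest(k, candidates, key=lambda d: score(d, query_terms))
```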
Approach #3: Docs containing many query terms • For multi-term queries, compute scores for docs containing several of the query terms • Say, at least 3 out of 4 • Imposes a “soft conjunction” on queries seen on web search engines (early Google) • Easy to implement in postings traversal
Sec. 7.1.2 3 of 4 query terms — postings lists:
Antony    → 3 4 8 16 32 64 128
Brutus    → 2 4 8 16 32 64 128
Caesar    → 1 2 3 5 8 13 21 34
Calpurnia → 13 16 32
Scores only computed for docs 8, 16 and 32 (the docs containing at least 3 of the 4 query terms).
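A small sketch reproducing the example above: via the postings lists, count which docs contain at least 3 of the 4 query terms:

```python
from collections import Counter

# Postings lists from the figure above (sorted docIDs).
postings = {
    "Antony":    [3, 4, 8, 16, 32, 64, 128],
    "Brutus":    [2, 4, 8, 16, 32, 64, 128],
    "Caesar":    [1, 2, 3, 5, 8, 13, 21, 34],
    "Calpurnia": [13, 16, 32],
}

def docs_with_at_least(query_terms, postings, threshold):
    """Return docIDs appearing in at least `threshold` of the query terms' postings."""
    counts = Counter()
    for t in query_terms:
        counts.update(postings.get(t, []))
    return sorted(d for d, c in counts.items() if c >= threshold)

print(docs_with_at_least(["Antony", "Brutus", "Caesar", "Calpurnia"], postings, 3))
# -> [8, 16, 32]
```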
Sec. 7.1.4 Complex scores • Consider a simple total score combining cosine relevance and authority • net-score(q,d) = PR(d) + cosine(q,d) • Can use some other linear combination than an equal weighting • Now we seek the top K docs by net score
Approach #4: Fancy-hits heuristic • Preprocess: • Assign docIDs by decreasing PR weight • Define FH(t) = the m docs for t with highest tf-idf weight • Define IL(t) = the rest (i.e. increasing docID = decreasing PR weight) • Idea: a document that scores high should be in FH or in the front of IL • Search for a t-term query: • First FH: take the docs common to their FHs • compute the score of these docs and keep the top-k. • Then IL: scan the ILs and check the common docs • compute their score and possibly insert them into the top-k. • Stop when M docs have been checked or the PR score becomes smaller than some threshold.
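A simplified sketch of the fancy-hits search phase; it caps the IL scan at M candidates instead of also checking a PR threshold, and assumes FH, IL and score have been precomputed (illustrative names):

```python
def fancy_hits_search(query_terms, FH, IL, score, k, M):
    """FH[t]: set of the m docs with highest tf-idf for t; IL[t]: list of the
    remaining docIDs in increasing docID order (= decreasing PageRank);
    score(docID) -> float."""
    # Phase 1: docs appearing in every query term's fancy-hit list.
    candidates = set.intersection(*(FH[t] for t in query_terms))
    # Phase 2: docs common to the IL lists, taken from the front (best PR first),
    # checking at most M extra candidates.
    extra = sorted(set.intersection(*(set(IL[t]) for t in query_terms)))
    candidates |= set(extra[:M])
    return sorted(candidates, key=score, reverse=True)[:k]
```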
Sec. 7.1.6 Approach #5: Clustering [figure: the query is compared against a few leaders; each leader stands for its followers]
Sec. 7.1.6 Cluster pruning: preprocessing • Pick √N docs at random: call these leaders • For every other doc, pre-compute its nearest leader • Docs attached to a leader: its followers • Likely: each leader has ~ √N followers.
Sec. 7.1.6 Cluster pruning: query processing • Process a query as follows: • Given query Q, find its nearest leader L. • Seek K nearest docs from among L’s followers.
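A compact sketch of cluster pruning, preprocessing plus query processing, on sparse {term: weight} vectors (illustrative, with ~√N leaders chosen at random):

```python
import math, random

def cosine(u, v):
    """Cosine similarity of two sparse {term: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_preprocess(docs):
    """docs: {docID: vector}. Pick ~sqrt(N) random leaders and attach every
    doc to its nearest leader."""
    ids = list(docs)
    leaders = random.sample(ids, max(1, round(math.sqrt(len(ids)))))
    followers = {l: [] for l in leaders}
    for d in ids:
        nearest = max(leaders, key=lambda l: cosine(docs[d], docs[l]))
        followers[nearest].append(d)
    return followers

def cluster_query(q, docs, followers, k):
    """Find the nearest leader, then the top-k among its followers."""
    leader = max(followers, key=lambda l: cosine(q, docs[l]))
    return sorted(followers[leader], key=lambda d: cosine(q, docs[d]), reverse=True)[:k]
```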
Sec. 7.1.6 Why use random sampling • Fast • Leaders reflect data distribution
Sec. 7.1.6 General variants • Have each follower attached to b1=3 (say) nearest leaders. • From query, find b2=4 (say) nearest leaders and their followers. • Can recur on leader/follower construction.
Document ranking Relevance feedback Reading 9
Sec. 9.1 Relevance Feedback • Relevance feedback: user feedback on relevance of docs in initial set of results • User issues a (short, simple) query • The user marks some results as relevant or non-relevant. • The system computes a better representation of the information need based on feedback. • Relevance feedback can go through one or more iterations.
Sec. 9.1.1 Rocchio (SMART) • Used in practice: q_m = α·q_0 + β·(1/|D_r|)·Σ_{d ∈ D_r} d − γ·(1/|D_nr|)·Σ_{d ∈ D_nr} d • D_r = set of known relevant doc vectors • D_nr = set of known irrelevant doc vectors • q_m = modified query vector; q_0 = original query vector; α, β, γ: weights (hand-chosen or set empirically) • The new query moves toward relevant documents and away from irrelevant documents
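A sketch of the Rocchio update on sparse query/document vectors; the default α, β, γ values below are common illustrative choices, not prescribed by the slides:

```python
def rocchio(q0, Dr, Dnr, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update on sparse {term: weight} vectors.
    Dr / Dnr: lists of relevant / non-relevant doc vectors."""
    def centroid(docs):
        c = {}
        for d in docs:
            for t, w in d.items():
                c[t] = c.get(t, 0.0) + w / len(docs)
        return c
    cr = centroid(Dr) if Dr else {}
    cnr = centroid(Dnr) if Dnr else {}
    terms = set(q0) | set(cr) | set(cnr)
    qm = {t: alpha * q0.get(t, 0.0) + beta * cr.get(t, 0.0) - gamma * cnr.get(t, 0.0)
          for t in terms}
    return {t: w for t, w in qm.items() if w > 0}   # clip negative weights to 0
```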
Relevance Feedback: Problems • Users are often reluctant to provide explicit feedback • It’s often harder to understand why a particular document was retrieved after applying relevance feedback • There is no clear evidence that relevance feedback is the “best use” of the user’s time.
Sec. 9.1.6 Pseudo relevance feedback • Pseudo-relevance feedback automates the “manual” part of true relevance feedback. • Retrieve a list of hits for the user’s query • Assume that the top k are relevant. • Do relevance feedback (e.g., Rocchio) • Works very well on average • But can go horribly wrong for some queries. • Several iterations can cause query drift.
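A sketch of pseudo-relevance feedback, reusing the rocchio() sketch above and assuming hypothetical search() and doc_vector() helpers:

```python
def pseudo_relevance_feedback(q0, search, doc_vector, k=10, rounds=1):
    """search(q) -> ranked list of docIDs; doc_vector(docID) -> sparse vector.
    Assume the top-k results are relevant and apply Rocchio with no
    non-relevant set; several rounds may cause query drift."""
    q = q0
    for _ in range(rounds):
        top = search(q)[:k]
        q = rocchio(q, [doc_vector(d) for d in top], Dnr=[])
    return q
```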
Sec. 9.2.2 Query Expansion • In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents • In query expansion, users give additional input (good/bad search term) on words or phrases
Sec. 9.2.2 How to augment the user query? • Manual thesaurus (costly to generate) • E.g. MedLine: physician, syn: doc, doctor, MD • Global Analysis (static; all docs in the collection) • Automatically derived thesaurus (co-occurrence statistics) • Refinements based on query-log mining • Common on the web • Local Analysis (dynamic) • Analysis of documents in the result set
Query assist Would you expect such a feature to increase the query volume at a search engine?