New Concept - Associative Arrays • Combiner functionality can be incorporated inside the mapper • How to aggregate for word count in the Mapper? • Could compute a partial count of the words processed by the Mapper • How to do this? • Include an associative array • Tallies up counts within a single document • Emit (k, v) for each unique word, not just (word, 1) for every occurrence
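As a minimal sketch (not from the slides), the in-mapper combining idea above can be written in plain Python: a per-document associative array tallies counts, and the mapper emits one (word, count) pair per unique word instead of (word, 1) per occurrence. The function name and the naive whitespace tokenizer are assumptions for illustration.

```python
from collections import defaultdict

def wordcount_mapper(docid, text):
    """Word-count mapper with in-mapper combining (illustrative sketch).

    Tallies counts in an associative array and emits one (word, partial_count)
    pair per unique word in the document, not (word, 1) for every occurrence.
    """
    counts = defaultdict(int)           # the associative array
    for word in text.lower().split():   # assumed: naive whitespace tokenizer
        counts[word] += 1
    for word, count in counts.items():
        yield word, count

# Example: list(wordcount_mapper("d1", "to be or not to be"))
# -> [('to', 2), ('be', 2), ('or', 1), ('not', 1)]
```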
Information Retrieval with MapReduce • Chapter 4 • Some slides from Jimmy Lin
Topics • Introduction to IR • Inverted Indexing • Retrieval
Web search problems • Web search decomposes into three components • Gather web content (crawling) • Construct an inverted index from the gathered content (indexing) • Rank documents for a query using the inverted index (retrieval)
Web search problems • Crawling and indexing share similar characteristics and requirements • Both are offline problems with no need for real-time operation • A delay of a few minutes before new content becomes searchable is tolerable • OK to run smaller-scale index updates frequently • Querying is an online problem • Demands sub-second response time • Low latency, high throughput • Loads can vary greatly
Web Crawler • To acquire the document collection over which indexes are built • Acquiring web content requires crawling • Traverse web by repeatedly following hyperlinks and storing downloaded pages • Start by populating a queue with seed pages
Web Crawler Issues • Shouldn't overload web servers with crawling • Prioritize the order in which unvisited pages are downloaded • Avoid downloading a page multiple times – requires coordination and load balancing • Must be robust to failures • Learn update patterns so content stays current • Identify near-duplicates and select the best version for the index • Identify the dominant language on a page
How do we represent text? • Remember: computers don’t “understand” anything! • “Bag of words” • Treat all the words in a document as index terms for that document • Assign a “weight” to each term based on “importance” • Disregard order, structure, meaning, etc. of the words • Simple, yet effective! • Assumptions • Term occurrence is independent • Document relevance is independent • “Words” are well-defined
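The bullet on assigning a "weight" to each term can be made concrete with a short Python sketch. The tokenizer and the choice of tf-idf as the "importance" weight are assumptions for illustration; the slides do not prescribe a particular weighting scheme.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Term -> raw count, disregarding order, structure, and meaning."""
    return Counter(text.lower().split())     # assumed: naive whitespace tokenizer

def tf_idf_weights(doc_bags):
    """One common 'importance' weight (tf-idf), computed over a small collection.

    doc_bags: dict mapping docid -> bag_of_words(text of that document)
    """
    n_docs = len(doc_bags)
    df = Counter()                            # document frequency of each term
    for bag in doc_bags.values():
        df.update(bag.keys())
    return {docid: {t: tf * math.log(n_docs / df[t]) for t, tf in bag.items()}
            for docid, bag in doc_bags.items()}
```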
What’s a word?
• Chinese: 天主教教宗若望保祿二世因感冒再度住進醫院。這是他今年第二度因同樣的病因住院。
• Arabic: وقال مارك ريجيف - الناطق باسم الخارجية الإسرائيلية - إن شارون قبل الدعوة وسيقوم للمرة الأولى بزيارة تونس، التي كانت لفترة طويلة المقر الرسمي لمنظمة التحرير الفلسطينية بعد خروجها من لبنان عام 1982.
• Russian: Выступая в Мещанском суде Москвы экс-глава ЮКОСа заявил не совершал ничего противозаконного, в чем обвиняет его генпрокуратура России.
• Hindi: भारत सरकार ने आर्थिक सर्वेक्षण में वित्तीय वर्ष 2005-06 में सात फ़ीसदी विकास दर हासिल करने का आकलन किया है और कर सुधार पर ज़ोर दिया है
• Japanese: 日米連合で台頭中国に対処…アーミテージ前副長官提言
• Korean: 조재영기자= 서울시는 25일이명박시장이 `행정중심복합도시'' 건설안에대해 `군대라도동원해막고싶은심정''이라고말했다는일부언론의보도를부인했다.
Sample Document: “Bag of Words”

McDonald's slims down spuds. Fast-food chain to reduce certain types of fat in its french fries with new cooking oil. NEW YORK (CNN/Money) - McDonald's Corp. is cutting the amount of "bad" fat in its french fries nearly in half, the fast-food chain said Tuesday as it moves to make all its fried menu items healthier. But does that mean the popular shoestring fries won't taste the same? The company says no. "It's a win-win for our customers because they are getting the same great french-fry taste along with an even healthier nutrition profile," said Mike Roberts, president of McDonald's USA. But others are not so sure. McDonald's will not specifically discuss the kind of oil it plans to use, but at least one nutrition expert says playing with the formula could mean a different taste. Shares of Oak Brook, Ill.-based McDonald's (MCD: down $0.54 to $23.22, Research, Estimates) were lower Tuesday afternoon. It was unclear Tuesday whether competitors Burger King and Wendy's International (WEN: down $0.80 to $34.91, Research, Estimates) would follow suit. Neither company could immediately be reached for comment. …

Bag of words (term frequencies): 16 × said; 14 × McDonalds; 12 × fat; 11 × fries; 8 × new; 6 × company, french, nutrition; 5 × food, oil, percent, reduce, taste, Tuesday; …
Representing Documents

Document 1: "The quick brown fox jumped over the lazy dog's back."
Document 2: "Now is the time for all good men to come to the aid of their party."
Stopword list: for, is, of, the, to

Term-document incidence (after stopword removal and stemming; 1 = term occurs in the document):

Term    Doc 1  Doc 2
aid       0      1
all       0      1
back      1      0
brown     1      0
come      0      1
dog       1      0
fox       1      0
good      0      1
jump      1      0
lazy      1      0
men       0      1
now       0      1
over      1      0
party     0      1
quick     1      0
their     0      1
time      0      1
Inverted Index • Inverted indexing is fundamental to all IR models • Consists of postings lists, one for each term in the collection • A posting is a document id plus a payload • The payload can be the term frequency (number of times the term occurs in the document), the positions of occurrence, other properties, etc. • Postings can be ordered by document id, PageRank, etc. • A separate data structure is needed to map from a document id to, for example, its URL
Term-document incidence matrix (1 = term occurs in the document):

Term    Doc 1  Doc 2  Doc 3  Doc 4  Doc 5  Doc 6  Doc 7  Doc 8
aid       0      0      0      1      0      0      0      1
all       0      1      0      1      0      1      0      0
back      1      0      1      0      0      0      1      0
brown     1      0      1      0      1      0      1      0
come      0      1      0      1      0      1      0      1
dog       0      0      1      0      1      0      0      0
fox       0      0      1      0      1      0      1      0
good      0      1      0      1      0      1      0      1
jump      0      0      1      0      0      0      0      0
lazy      1      0      1      0      1      0      1      0
men       0      1      0      1      0      0      0      1
now       0      1      0      0      0      1      0      1
over      1      0      1      0      1      0      1      1
party     0      0      0      0      0      1      0      1
quick     1      0      1      0      0      0      0      0
their     1      0      0      0      1      0      1      0
time      0      1      0      1      0      1      0      0

Inverted Index (term → postings list of docids):

aid:    4, 8
all:    2, 4, 6
back:   1, 3, 7
brown:  1, 3, 5, 7
come:   2, 4, 6, 8
dog:    3, 5
fox:    3, 5, 7
good:   2, 4, 6, 8
jump:   3
lazy:   1, 3, 5, 7
men:    2, 4, 8
now:    2, 6, 8
over:   1, 3, 5, 7, 8
party:  6, 8
quick:  1, 3
their:  1, 5, 7
time:   2, 4, 6
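As an illustrative sketch (not from the slides; names are assumed), the inverted index above can be represented as a dictionary that maps each term to a postings list of (docid, payload) pairs sorted by docid, with the term frequency as the payload:

```python
from collections import defaultdict, Counter

def build_inverted_index(docs):
    """docs: dict of docid -> text. Returns term -> [(docid, tf), ...], sorted by docid."""
    index = defaultdict(list)
    for docid in sorted(docs):                        # visit documents in docid order
        for term, tf in Counter(docs[docid].lower().split()).items():
            index[term].append((docid, tf))           # postings therefore stay sorted
    return dict(index)

# For the toy two-document collection above (no stopword removal or stemming in this sketch):
# idx = build_inverted_index({1: "the quick brown fox jumped over the lazy dog's back",
#                             2: "now is the time for all good men to come to the aid of their party"})
# idx["the"]  ->  [(1, 2), (2, 2)]
```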
Process query - retrieval • Given a query, fetch the postings lists associated with the query terms and traverse the postings to compute the result set • Query-document scores must be computed • The top k documents are extracted • Optimization strategies exist to reduce the number of postings that must be examined
Index • The size of the index depends on the payload • A well-optimized inverted index can be 1/10 the size of the original document collection • If position information is stored, the index can be several times larger than the collection • The entire vocabulary can usually be held in memory (using front-coding) • Postings lists are usually too large to store in memory • Query evaluation involves random disk access and decoding of postings • Try to minimize random seeks
Indexing: Performance Analysis • The indexing problem • Must be relatively fast, but need not be real time • For Web, incremental updates are important • How large is the inverted index? • Size of vocabulary • Size of postings • Fundamentally, a large sorting problem • Terms usually fit in memory • Postings usually don’t
Vocabulary Size: Heaps’ Law • Heaps' law: V = K · n^β, where V is the vocabulary size, n is the corpus size (number of tokens), and K and β are constants determined empirically • Typically, K is between 10 and 100 and β is between 0.4 and 0.6 • Heaps' law means that as more text is gathered, there are diminishing returns in terms of discovering the full vocabulary from which the distinct terms are drawn • When adding new documents, the system has likely already seen most terms… but the postings keep growing
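A purely illustrative back-of-the-envelope calculation (the constants K = 30 and β = 0.5 are assumed mid-range values, not taken from the slides) shows how slowly the vocabulary grows:

```latex
V = K \cdot n^{\beta}, \quad K = 30,\ \beta = 0.5:
\qquad n = 10^{6}\ \text{tokens} \;\Rightarrow\; V \approx 30 \cdot 10^{3} = 30{,}000\ \text{terms};
\qquad n = 10^{9}\ \text{tokens} \;\Rightarrow\; V \approx 30 \cdot 3.16 \times 10^{4} \approx 950{,}000\ \text{terms}.
```

A thousandfold increase in collection size yields only about a 32-fold increase in vocabulary, which is why the vocabulary usually fits in memory while the postings do not.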
Postings Size: Zipf’s Law • George Kingsley Zipf (1902-1950) observed the following relation between a word's frequency f and its rank r: f = c / r (equivalently, f · r = c), where c is a constant • In other words: • A few elements occur very frequently • Many elements occur very infrequently • Zipfian distributions: • English words • Library book checkout patterns • Website popularity (almost anything on the Web)
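A small worked consequence of the rank-frequency relation (illustrative only):

```latex
f(r) = \frac{c}{r} \;\Rightarrow\; f(2) = \tfrac{1}{2}\,f(1), \quad f(10) = \tfrac{1}{10}\,f(1), \quad f(100) = \tfrac{1}{100}\,f(1).
```

For postings, this means a handful of very frequent terms have extremely long postings lists while most terms have very short ones, which matters later for compression and for load balancing across partitions.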
Word Frequency in English • [Figure: frequency of the 50 most common words in English (sample of 19 million words)]
Question to consider • How to create an inverted index using MapReduce?
MapReduce: Index Construction • Map over all documents – does what? • What is the input? Docid and content • What is the output? Emit postings: term as key, (docid, term frequency) as value • Emit other information as necessary (e.g., term positions) • Reduce – does what? • What is the input? Term as key, list of (docid, term frequency) pairs as value • What is the output? Trivial: combine the lists of postings • Might want to sort the postings (e.g., by docid or term frequency)
class Mapper
  procedure Map(docid n, doc d)
    H ← new AssociativeArray
    for all term t ∈ doc d do
      H{t} ← H{t} + 1
    for all term t ∈ H do
      Emit(term t, posting (n, H{t}))

class Reducer
  procedure Reduce(term t, postings [(n1, f1), (n2, f2), …])
    P ← new List
    for all posting (a, f) ∈ postings [(n1, f1), (n2, f2), …] do
      Append(P, (a, f))
    Sort(P)
    Emit(term t, postings P)

Figure 4.2: Pseudo-code of the baseline inverted indexing algorithm in MapReduce. Mappers emit postings keyed by terms, the execution framework groups postings by term, and the reducers write postings lists to disk.
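To make Figure 4.2 concrete, here is a minimal runnable Python sketch of the same baseline algorithm. It simulates the map, shuffle/group-by, and reduce phases with plain functions (no Hadoop calls); the function names and the simple driver are assumptions for illustration.

```python
from collections import defaultdict

def map_fn(docid, doc):
    """Mapper: tally term frequencies per document, emit (term, (docid, tf))."""
    h = defaultdict(int)
    for term in doc.lower().split():     # assumed: naive whitespace tokenizer
        h[term] += 1
    for term, tf in h.items():
        yield term, (docid, tf)

def reduce_fn(term, postings):
    """Reducer: collect and sort the postings list for one term."""
    return term, sorted(postings)        # sort by docid

def run_job(docs):
    """Simulate MapReduce execution: map, group by key ("shuffle and sort"), reduce."""
    groups = defaultdict(list)
    for docid, doc in docs.items():
        for term, posting in map_fn(docid, doc):
            groups[term].append(posting)
    return dict(reduce_fn(t, ps) for t, ps in groups.items())

# run_job({1: "one fish two fish", 2: "red fish blue fish"})["fish"]
# -> [(1, 2), (2, 2)]
```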
Creating an inverted index – Map • Input is docids and document content • Documents are processed in parallel by mappers • Each document is analyzed and broken down into its component terms • May have to strip off HTML tags and JavaScript code, and remove stop words • Iterate over all terms and keep track of counts • Emit (term, posting), where the posting is the docid and a payload
Creating an Inverted Index - Reduce • After the shuffle and sort, Reduce is easy • It just combines the individual postings by appending them to an initially empty list, then writes the list to disk • The postings list is usually compressed • How would this be done without MapReduce?
MapReduce • Map emits term postings • Reduce performs large distributed group by of postings by term
Retrieval • We have just shown how to create an index, but what about retrieving information for a query? • What is involved: • Identify the query terms • Look up the postings lists corresponding to the query terms • Traverse the postings lists to compute query-document scores • Return the top k results
Retrieval • Must have sub-second response • Optimized for low latency • Is this true of MapReduce? • Is this true of the underlying file system? • Looking up postings: postings are too large to fit in memory (random disk seeks) • Round-trip network exchanges: must find the location of the data block from the master, etc.
Solution? • Distributed Retrieval
Distributed Retrieval • Distribute retrieval across large number of machines • Break up index – how? • Document partitioning • Term partitioning • Need Query Broker and Partition Servers
Partitioning • Document partitioning: the collection is broken up into partitions and each partition is assigned to a server • Each server holds the complete index for a subset of the entire collection • Vertical partitioning • Term partitioning: each server is responsible for a subset of the terms for the entire collection • Horizontal partitioning
Document Partitioning • The query broker forwards the query to all partition servers • The broker merges the partial results from each server • The broker returns the final result to the user • Pros/Cons? • The query must be processed by every server • Each partition operates independently and traverses its postings in parallel, so query latencies are shorter • Assuming N query terms and K partitions, the number of disk accesses is O(K·N)
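As an illustrative sketch only (the broker and server interfaces here are assumptions, not an actual system described in the slides), a document-partitioned query broker fans the query out to every partition server and merges the partial top-k results by score:

```python
import heapq

def search_partition(partition_index, query_terms, k):
    """Assumed per-server scoring: score = number of query terms a document contains."""
    scores = {}
    for term in query_terms:
        for docid, tf in partition_index.get(term, []):
            scores[docid] = scores.get(docid, 0) + 1
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

def broker_search(partitions, query_terms, k=10):
    """Query broker: forward the query to all partitions, merge the partial results."""
    partial_results = []
    for partition_index in partitions:               # one call per partition server
        partial_results.extend(search_partition(partition_index, query_terms, k))
    return heapq.nlargest(k, partial_results, key=lambda kv: kv[1])
```

Because each document lives in exactly one partition, the merged list contains no duplicate docids; the broker simply keeps the k highest-scoring entries overall.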
Term Partitioning • Term (word) Partitioning • Suppose the query contains three terms, t1, t2, and t3 • Uses a pipelined strategy in which accumulators are routed between servers: • The broker forwards the query to the server that holds the postings for t1 • That server traverses its postings list and computes partial query-document scores • The partial scores are sent to the server holding t2, then to the server holding t3 • The final result is then passed back to the query broker • Pros/Cons? • Increases throughput – smaller total number of seeks per query • Better for memory-limited systems • Load balancing is tricky – may create hotspots on servers • Number of disk accesses is O(N): one postings lookup per query term, independent of the number of partitions
Which is best? • Which partitioning does Google adopt? • Document partitioning • Google keeps its indexes in memory (not commonly done) • Result quality degrades with machine failures: servers that are offline simply do not deliver their results, and the user does not know • Users are less discriminating about exactly which relevant documents appear in the results • Even with no failures, repeated queries may cycle through different partitions and return slightly different results
Other aspects of Partitioning • Documents can also be partitioned along a dimension such as quality: search the highest-quality partition first, and search lower-quality partitions only if needed • Another possible dimension is content: send queries only to the partitions likely to contain relevant documents, instead of to all partitions
Replication and Partitioning • Replication? • Within the same partition as well as across geographically distributed data centers • Query routing problems • Serve clients from the closest data center • Must also route queries to the appropriate locations • Within a single data center, the load must be balanced across replicas
Displaying to User • Postings only store docids • Retrieval produces a list of ranked docids • It is the responsibility of document servers (not partition servers) to generate meaningful output: the title, URL, and snippet for each result entry • Caching is useful if the index is not in memory; one can even cache the results of entire queries • Why cache results? Query frequencies are Zipfian!
Summary • Documents are gathered by web crawling • Inverted indexes are built from the documents • The indexes are used to respond to queries • More info about Google's approach (2009)
Index Compression • How are postings compressed and stored on disk?
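One standard family of techniques (named here as general background, not taken from the slides) stores each sorted postings list as docid gaps (d-gaps) and encodes the small gaps compactly, for example with variable-byte codes. A rough Python sketch, with function names assumed for illustration:

```python
def to_gaps(docids):
    """Convert a sorted docid list into d-gaps: [3, 5, 7] -> [3, 2, 2]."""
    return [docids[0]] + [b - a for a, b in zip(docids, docids[1:])]

def varbyte_encode(numbers):
    """Variable-byte encoding: 7 payload bits per byte, high bit set on the last byte."""
    out = bytearray()
    for n in numbers:
        parts = []
        while True:
            parts.insert(0, n % 128)   # prepend the low 7 bits
            if n < 128:
                break
            n //= 128
        parts[-1] += 128               # mark the final byte of this number
        out.extend(parts)
    return bytes(out)

# Small gaps compress well: varbyte_encode(to_gaps([3, 5, 7])) is only 3 bytes.
```

Because docids within a postings list are sorted, the gaps are small, and small integers take fewer bytes; frequent terms (with long postings lists and tiny gaps) compress best.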
Boolean Retrieval • Users express queries as a Boolean expression • AND, OR, NOT • Can be arbitrarily nested • Retrieval is based on the notion of sets • Any given query divides the collection into two sets: retrieved, not-retrieved • Pure Boolean systems do not define an ordering of the results
Posting list – document id and payload • For simple Boolean retrieval, only the document id is needed • The payload is not needed
Boolean Retrieval • To execute a Boolean query: • Build the query syntax tree • For each clause, look up the postings • Traverse the postings and apply the Boolean operators • Efficiency analysis • Postings traversal is linear (assuming sorted postings) • Start with the shortest postings list first • Example: (fox OR dog) AND quick • dog → 3, 5 • fox → 3, 5, 7 • fox OR dog = union → 3, 5, 7 • AND with quick (→ 1, 3 in the index above) = intersection → 3
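A minimal sketch (function names assumed; not from the slides) of the linear traversal of sorted postings that the efficiency analysis above relies on, applied to the example query:

```python
def postings_and(p1, p2):
    """Intersect two docid lists sorted in increasing order (linear merge)."""
    i, j, out = 0, 0, []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            out.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return out

def postings_or(p1, p2):
    """Union of two sorted docid lists."""
    return sorted(set(p1) | set(p2))

# (fox OR dog) AND quick, using the toy index above:
# postings_and(postings_or([3, 5, 7], [3, 5]), [1, 3])  ->  [3]
```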
Boolean Query - Extensions • Implementing proximity operators • Store word offset in postings • Handling term variations • Stem words: love, loving, loves … lov
Boolean Query - Strengths and Weaknesses • Strengths • Precise, if you know the right strategies • Precise, if you have an idea of what you’re looking for • Implementations are fast and efficient • Weaknesses • Users must learn Boolean logic • Boolean logic insufficient to capture the richness of language • No control over size of result set: either too many hits or none • When do you stop reading? All documents in the result set are considered “equally good” • What about partial matches? Documents that “don’t quite match” the query may be useful also
Query Execution • MapReduce is meant for large-data batch processing • It is not suitable for real-time operations requiring low latency • The solution: "the secret sauce" • Involves document partitioning • Lots of systems engineering: e.g., caching, load balancing, etc.
MapReduce: Query Execution • High-throughput batch query execution: • Instead of sequentially accumulating scores per query term: • Have mappers traverse postings in parallel, emitting partial score components • Reducers serve as the accumulators, summing the contributions for each query term • MapReduce: • Replaces random access with sequential reads • Amortizes the cost over lots of queries • Examines multiple postings in parallel
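A rough sketch (structure assumed, not an actual system from the slides) of batch query execution in the same simulated MapReduce style as the indexing example: mappers scan each postings list sequentially and emit partial scores for every (query, document) pair, and reducers sum the contributions.

```python
from collections import defaultdict

def query_map(term, postings, queries):
    """Mapper: for one term's postings list, emit partial scores for all
    batched queries that contain this term (score component = tf here)."""
    for qid, query_terms in queries.items():
        if term in query_terms:
            for docid, tf in postings:
                yield (qid, docid), tf

def query_reduce(key, partial_scores):
    """Reducer: accumulate partial scores for one (query, document) pair."""
    return key, sum(partial_scores)

def run_batch_queries(index, queries):
    groups = defaultdict(list)
    for term, postings in index.items():            # sequential scan of each postings list
        for key, score in query_map(term, postings, queries):
            groups[key].append(score)
    return dict(query_reduce(k, v) for k, v in groups.items())

# run_batch_queries({"fish": [(1, 2), (2, 2)], "red": [(2, 1)]},
#                   {"q1": {"red", "fish"}})
# -> {("q1", 1): 2, ("q1", 2): 3}
```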