BBM: Bayesian Browsing Model from Petabyte-scale Data
Chao Liu, MSR-Redmond
Fan Guo, Carnegie Mellon University
Christos Faloutsos, Carnegie Mellon University
Massive Log Streams
• Search log
  • 10+ terabytes each day (and keeps increasing!)
  • Involves billions of distinct (query, url) pairs
• Questions
  • Can we infer user-perceived relevance for each (query, url) pair?
  • How many passes over the data are needed? Is one enough?
  • Can the inference be parallelized?
• Our answer: Yes, Yes, and Yes!
BBM: Bayesian Browsing Model
[Model diagram: for a query with top results URL1–URL4, each position i carries a snippet relevance variable S_i, an examination variable E_i, and an observed click-through variable C_i.]
Dependencies in BBM
[Dependency diagram: each examination variable E_i depends on the position of the preceding click before i; each click C_i depends on the examination E_i and the relevance S_i at that position.]
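The diagram can be summarized by two conditional probabilities. A minimal sketch in standard click-model notation; the symbol β_{r,d} for the positional examination parameter is our labeling, and the original slides may use a different letter:

$$P(C_i = 1 \mid E_i, S_i) = E_i \cdot S_i, \qquad P(E_i = 1 \mid C_{1:i-1}) = \beta_{r_i,\, i - r_i},$$

where r_i is the position of the last click before position i (r_i = 0 if there is none), so d = i − r_i is the distance from the preceding click.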
Road Map
• Exact Model Inference
• Algorithms through an Example
• Experiments
• Conclusions
Notations
• For a given query
  • Top-M positions, usually M = 10
  • Positional relevance parameters β_{r,d}: M(M+1)/2 combinations of (r, d)
  • n search instances
  • N documents impressed in total: d_1, …, d_N
  • Document relevances R_1, …, R_N
Model Inference
• Ultimate goal: the joint posterior of all document relevances given all observed clicks, P(R_1, …, R_N | C^1, …, C^n)
• Observation: given the relevances, search instances are conditionally independent, so the likelihood factorizes across instances
P(C|S) by Chain Rule
• The likelihood of a search instance is decomposed by the chain rule over positions, conditioning each click on the clicks above it
• From S to R: each positional relevance variable S_i is identified with the relevance R_j of the document shown at position i, turning the likelihood into a function of the R_j's
Putting Things Together
• Posterior: combine the per-instance likelihoods with the prior, then re-organize the product by the R_j's
• Each R_j factor is characterized by two kinds of counts:
  • how many times d_j was not clicked when shown at position (r + d) with the preceding click at position r
  • how many times d_j was clicked
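Written out, the re-organized posterior takes the shape below; a reconstruction consistent with the counts just described, with β_{r,d} as labeled earlier, N_j the number of clicks on d_j, and N_{j,r,d} the number of times d_j was shown at position r + d with the preceding click at r but not clicked:

$$P(R_j \mid C^{1:n}) \;\propto\; R_j^{N_j} \prod_{r,d} \bigl(1 - \beta_{r,d}\, R_j\bigr)^{N_{j,r,d}}$$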
What the Posterior Tells Us
• Exact inference, with the joint posterior in closed form
• The joint posterior factorizes across documents, so the R_j's are mutually independent a posteriori
• At most M(M+1)/2 + 1 numbers fully characterize each posterior
• Count vector: the click count N_j together with the non-click counts N_{j,r,d}
Road Map
• Exact Model Inference
• Algorithms through an Example
• Experiments
• Conclusions
LearnBBM: One-Pass Counting
• A single scan over the search instances suffices: for each instance, increment N_j for every clicked document and N_{j,r,d} for every impressed-but-unclicked document; the posterior of each R_j is then read off from its count vector (see the sketch below)
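A minimal Python sketch of the one-pass counting, assuming each search instance is a (query, urls, clicks) triple with clicks given as a 0/1 list over the top-M positions; the function and variable names are ours, not from the paper:

from collections import defaultdict

def learn_bbm(instances, M=10):
    """One pass over search instances, accumulating per-(query, url) count vectors.

    Returns {(query, url): [n_clicks, {(r, d): n_nonclicks}]}, i.e. the at most
    M(M+1)/2 + 1 numbers that characterize each relevance posterior.
    """
    counts = defaultdict(lambda: [0, defaultdict(int)])
    for query, urls, clicks in instances:
        r = 0  # position of the preceding click (0 = no click yet)
        for i, (url, clicked) in enumerate(zip(urls, clicks), start=1):
            if clicked:
                counts[(query, url)][0] += 1              # N_j: document was clicked
                r = i                                     # update preceding-click position
            else:
                counts[(query, url)][1][(r, i - r)] += 1  # N_{j,r,d}: non-click at distance d
    return counts

For example, learn_bbm([("q", ["u1", "u2", "u3"], [0, 1, 0])]) records one click for ("q", "u2"), and non-clicks for ("q", "u1") at (r=0, d=1) and for ("q", "u3") at (r=2, d=1).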
An Example
[Figure: a worked example on one search instance, computing the count vector for R_4: a 1 goes into the N_{4,r,d} cell matching the observed preceding-click position and distance, and N_4 counts the clicks on the document.]
LearnBBM on MapReduce
• Map: for each impressed (query, url), emit ((q, u), idx), where idx identifies either a click or the (r, d) non-click cell
• Reduce: for each (q, u), aggregate the emitted indices into the count vector
A sketch of both phases follows.
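A minimal sketch of the two phases in plain Python, reusing the conventions of learn_bbm above; the idx encoding (0 for a click, a flattened (r, d) cell otherwise) is our assumption, not necessarily the paper's:

def bbm_map(instance, M=10):
    """Map phase: emit ((query, url), idx) for every impressed document."""
    query, urls, clicks = instance
    r = 0
    for i, (url, clicked) in enumerate(zip(urls, clicks), start=1):
        if clicked:
            yield (query, url), 0                        # idx 0 encodes a click
            r = i
        else:
            yield (query, url), 1 + r * M + (i - r - 1)  # flattened (r, d) cell

def bbm_reduce(key, idxs, M=10):
    """Reduce phase: aggregate emitted indices into a count vector."""
    vector = [0] * (1 + M * M)  # slot 0 for clicks, the rest for (r, d) cells
    for idx in idxs:
        vector[idx] += 1
    return key, vector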
Example on MapReduce
[Figure: three mappers emit (url, idx) pairs — (U1, 0), (U2, 4), (U3, 0); (U1, 1), (U3, 0), (U4, 7); (U1, 1), (U3, 0), (U4, 0) — and the reducer groups them by url into count vectors: (U1, 0, 1, 1), (U2, 4), (U3, 0, 0, 0), (U4, 0, 7).]
Road Map
• Exact Model Inference
• Algorithms through an Example
• Experiments
• Conclusions
Experiments
• Compared with the User Browsing Model (UBM) (Dupret and Piwowarski, SIGIR'08)
  • The same dependence structure
  • But point estimation of document relevance rather than Bayesian inference
  • Approximate inference through iterations
• Data
  • Collected from August and September 2008
  • Top-10 algorithmic results only
  • Split into training/test sets by time stamp for each query
  • 51 million search instances of 1.15 million distinct queries, 10X larger than the SIGIR'08 study
Overall Comparison on Log-Likelihood
• Experiments in 20 batches
• Metric: the log-likelihood (LL) improvement ratio of BBM over UBM on the test set (see below)
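The improvement-ratio formula itself is a reconstruction, assuming the definition common in click-model papers, where LL denotes the average test-set log-likelihood per search instance:

$$\text{LL improvement ratio} = \left(e^{\,\mathrm{LL}_{\mathrm{BBM}} - \mathrm{LL}_{\mathrm{UBM}}} - 1\right) \times 100\%$$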
Comparison w.r.t. Frequency
• Intuition
  • Clicks on infrequent queries are hard to predict
  • Clicks on frequent queries are easier
Model Comparison on Efficiency
[Figure: running-time comparison of BBM and UBM; BBM is 57 times faster.]
Petabyte-Scale Experiment
• Setup
  • 8 weeks of data, 8 jobs
  • Job k processes the first k weeks of data
• Experiment platform
  • SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets [Chaiken et al., VLDB'08]
Scalability of BBM
• Increasing computation load: more queries, more urls, more impressions
• Near-constant elapsed time: about 3 hours per job
• Scanned 265 terabytes of data
• Computed full posteriors for 1.15 billion (query, url) pairs
[Figures: elapsed time on SCOPE; computation load.]
Road Map
• Exact Model Inference
• Algorithms through an Example
• Experiments
• Conclusions
Conclusions
• Bayesian Browsing Model for search streams
  • Exact Bayesian inference: joint posterior in closed form
  • A single pass over the data suffices
  • Map-reducible for parallelism
  • Amenable to incremental updates
  • Perfect for mining click streams
• Models for other stream data: browsing, twittering, Web 2.0, etc.?