Time-dependent Similarity Measure of Queries Using Historical Click-through Data Qiankun Zhao*, Steven C. H. Hoi*, Tie-Yan Liu, et al. Presented by: Tie-Yan Liu * This work was done when Zhao and Hoi were interns at Microsoft Research Asia
Outline • Background • Observations and Motivation • Our approach • Empirical study • Future work
Background • A dilemma for Web search engines • Very short queries (~2.5 terms on average) • Inconsistent term usage • The Web is not well organized • Users express queries in their own vocabulary
Background (cont’d) • Solution: query expansion • Document term based expansion (KDD00, SIGIR05) • a query can be expanded with top keywords from the top-k relevant documents • Query term based expansion (WWW02, CIKM04) • a query can be expanded with similar queries (queries are similar if they lead to similar pages; pages are similar if they are visited by issuing similar queries) • Click-through data have been used for query expansion in much previous work.
Background (cont’d) • Click-through data • Log data about the interactions between users and Web search engines • Typical Click-through data representation
Observation 1 • Accuracy of query similarity • Incremented similarity: calculated from all the click-through data before a given time point • Time-dependent similarity: calculated only from the click-through data within that time interval (one month)
Observation 2 • Event-driven and dynamic character of query similarity • The keyword “firework” and related pages become more popular one week before the event and reach their peak on July 4th • “firework + market” and “firework + show” become popular and reach their peaks a few days before July 4th • “firework + injuries” and “firework + picture” show a slight delay in the number of times they are issued and visited
Motivations • Exploit the click-through data for semantic similarity of queries by incorporating temporal information • To combine explicit content similarity and implicit semantic similarity
Time-Dependent Concepts • Calendar schema and pattern • Example • Calendar schema <day, month, year> • Calendar pattern <15, *,*> • <15, 1, 2002> is contained in the pattern <15, *,*>
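Not from the paper itself — a minimal sketch of the containment check described above, where a calendar pattern is a tuple whose `*` entries match any value under the given schema:

```python
# Sketch: a calendar pattern over a schema such as <day, month, year>
# uses "*" as a wildcard. A concrete time point is contained in a
# pattern if every non-wildcard field matches exactly.
def contains(pattern, point):
    """Return True if the time point matches the calendar pattern."""
    return all(p == "*" or p == v for p, v in zip(pattern, point))

# Schema <day, month, year>: pattern <15, *, *> covers the 15th of any month.
print(contains((15, "*", "*"), (15, 1, 2002)))  # True
print(contains((15, "*", "*"), (16, 1, 2002)))  # False
```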
Time-Dependent Concepts • Click-Through Subgroup • Example • Based on the schema <day, week>, and the pattern <1,*>, <2,*>,…,<7,*>, we can partition the data into 7 groups, which correspond to Sun, Mon, Tue, …, Sat.
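The weekday partition in the example above can be sketched as follows. The records and URLs here are hypothetical, not from the paper's dataset:

```python
from collections import defaultdict
from datetime import date

# Hypothetical click-through records: (query, clicked_url, date).
records = [
    ("firework show", "http://example.com/a", date(2005, 7, 3)),   # a Sunday
    ("firework show", "http://example.com/b", date(2005, 7, 4)),   # a Monday
    ("map route",     "http://example.com/c", date(2005, 7, 10)),  # a Sunday
]

# Schema <day, week> with patterns <1,*>, ..., <7,*>: one subgroup
# per day of the week, numbered 1 = Sunday ... 7 = Saturday.
subgroups = defaultdict(list)
for query, url, d in records:
    day_of_week = d.isoweekday() % 7 + 1  # map ISO Mon=1..Sun=7 to Sun=1..Sat=7
    subgroups[day_of_week].append((query, url))
```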
Similarity Measure • For efficiency and simplicity, we measure query similarity in a given time slot based only on the click-through data in that slot • Vector representation of queries with respect to clicked documents • The weight wi is defined by Page Frequency (PF) and Inverted Query Frequency (IQF)
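A sketch of the PF·IQF weighting idea, by analogy with TF-IDF; the paper's exact formulas are not reproduced here, and the click counts are hypothetical:

```python
import math
from collections import Counter

# Hypothetical click counts within one time slot: clicks[query][page].
clicks = {
    "kids toy":  Counter({"p1": 4, "p2": 1}),
    "map route": Counter({"p2": 3, "p3": 2}),
}

def query_vector(query, clicks):
    """Sparse vector of a query over clicked pages, weighted by PF * IQF.

    PF (Page Frequency): fraction of the query's clicks landing on the page.
    IQF (Inverted Query Frequency): log of total queries over queries that
    clicked the page. This mirrors TF-IDF; the paper's exact definition
    may differ.
    """
    n_queries = len(clicks)
    total = sum(clicks[query].values())
    vec = {}
    for page, cnt in clicks[query].items():
        pf = cnt / total
        df = sum(1 for q in clicks if page in clicks[q])
        iqf = math.log(n_queries / df)
        vec[page] = pf * iqf
    return vec
```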
Similarity Measure • Query similarity measures • Cosine function • Marginalized kernel • By introducing query clusters, one can model the query similarity in a more semantic way.
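The cosine measure over such sparse query vectors can be sketched as (a minimal illustration, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts page -> weight)."""
    dot = sum(w * v.get(p, 0.0) for p, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Two queries sharing one clicked page out of two each: similarity ≈ 0.5.
print(cosine({"p1": 1.0, "p2": 1.0}, {"p2": 1.0, "p3": 1.0}))
```

The marginalized kernel variant would additionally sum over latent query clusters; that step is omitted here.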
Empirical Evaluation • Dataset • Click-through log of a commercial search engine: • June 16, 2005 to July 17, 2005 • Total size of 22GB • Only queries from US • Calendar schema and pattern • <hour, day, month>, <1, *, *>, <2, *, *>, … • Divide the data into 24 subgroups • Average subgroup size: 59,400,000 query-page pairs
Empirical Examples • Kids+toy, map+route Incremented daily similarity Time-dependent daily similarity
Empirical Examples • weather + forecast, fox + news Incremented daily similarity Time-dependent daily similarity
Quality Evaluation • Experimental Settings • Partition the 32-day dataset into two parts • First part for model construction • Second part for model evaluation • Accuracy is defined via the percentage difference between the actual similarity and the model-based prediction • 1000 representative query pairs, each with similarity larger than 0.3 over the entire dataset • Half of them are top queries of the month • Half are manually selected queries related to real-world events such as “hurricane”
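One plausible reading of the accuracy definition above, as a sketch (the slide does not fully specify the formula, so this is an assumption):

```python
def accuracy(actual, predicted):
    """Accuracy as 1 minus the relative difference between the actual
    similarity and the model-based prediction.

    Assumption: the slide's 'percentage of difference' is interpreted as
    |actual - predicted| / actual; the paper's exact definition may differ.
    """
    return 1.0 - abs(actual - predicted) / actual
```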
Experimental Results Here “distance” is the time difference between the first test data record and the last model-construction data record. For example, when the distance is 1 and the training data size is 10, we average all the accuracy values that use days i to 10+i as training and day 10+i+1 as testing.
Conclusion • Presented a preliminary study of the dynamic nature of query similarity using click-through data • Observed and verified with real data that query similarity is dynamic and event-driven • Proposed a time-dependent similarity model • For future work, we will investigate an adaptive way to determine the most suitable time granularity for a given pair of queries
Thanks! tyliu@microsoft.com http://research.microsoft.com/users/tyliu