
Click Evidence Signals and Tasks



  1. Click Evidence Signals and Tasks Vishwa Vinay Microsoft Research, Cambridge

  2. Introduction • Signals • Explicit vs Implicit • Evidence • Of what? • From where? • Used how? • Tasks • Ranking, Evaluation & many more things in search

  3. Clicks as Input • Task = Relevance Ranking • Feature in relevance ranking function • Signal • select URL, count(*) as DocFeature from Historical_Clicks group by URL • select Query, URL, count(*) as QueryDocFeature from Historical_Clicks group by Query, URL

  4. Clicks as Input • Feature in relevance ranking function • Static feature (popularity) • Dynamic feature (for this query-doc pair) • “Query Expansion using Associated Queries”, Billerbeck et al, CIKM 2003 • “Improving Web Search Ranking by Incorporating User Behaviour”, Agichtein et al, SIGIR 2006 • ‘Document Expansion’ • Signal bleeds to similar queries
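
  As a minimal sketch of the static and dynamic features above (a Python stand-in for the SQL on slide 3; the log rows and URLs are hypothetical): the static feature counts all clicks on a URL, the dynamic feature counts clicks for the specific query-URL pair.

    # Sketch: click-count features from a log of (query, url) click events.
    from collections import Counter

    historical_clicks = [
        ("search solutions 2010", "http://example.com/a"),
        ("search solutions 2010", "http://example.com/b"),
        ("search solutions", "http://example.com/a"),
    ]

    doc_feature = Counter(url for _, url in historical_clicks)   # group by URL
    query_doc_feature = Counter(historical_clicks)               # group by Query, URL

    def click_features(query, url):
        # (static popularity, dynamic query-doc count) for the ranker.
        return doc_feature[url], query_doc_feature[(query, url)]

    print(click_features("search solutions 2010", "http://example.com/a"))  # (2, 1)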

  5. Clicks as Output • Task = Relevance Ranking • Result Page = Ranked list of documents • Ranked list = Documents sorted based on Score • Score = Probability that this result will be clicked • Signal • Did my prediction agree with the user’s action? • “Web-Scale Bayesian Click-through rate Prediction for Sponsored Search Advertising in Microsoft’s Bing Search Engine”, Graepel et al, ICML 2010
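
  Graepel et al use Bayesian probit regression trained online; the sketch below is a simplified stand-in (plain logistic regression trained by SGD on toy impression data, all feature values hypothetical) showing the same idea: the score used for ranking is a learned estimate of the probability that the result will be clicked.

    # Sketch: logistic regression predicting P(click) from impression features.
    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Each impression: feature vector x (bias term first), label 1 = clicked.
    data = [([1.0, 0.2], 1), ([1.0, 0.9], 0), ([1.0, 0.1], 1), ([1.0, 0.8], 0)]

    w, lr = [0.0, 0.0], 0.1
    rng = random.Random(0)
    for _ in range(500):
        x, y = rng.choice(data)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        # Gradient step on log-loss: did the prediction agree with the user's action?
        w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]

    # Score = predicted click probability; results are sorted by this score.
    print(sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 0.15]))))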

  6. Clicks as Output • Calibration: Merging results from different sources (comparable scores) • “Adaptation of Offline Vertical Selection Predictions in the Presence of User Feedback”, Diaz et al, SIGIR 2009 • Onsite Adaptation of ranking function • “A Decision Theoretic Framework for Ranking using Implicit Feedback”, Zoeter et al, SIGIR 2008
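
  A minimal sketch of score calibration in the Platt-scaling style, not the adaptation method of Diaz et al: each source's raw scores are mapped through a fitted sigmoid onto a common click-probability scale so that results from different sources become comparable; the toy scores and click outcomes are assumptions.

    # Sketch: fit a sigmoid mapping raw scores to click probabilities per source.
    import math

    def fit_platt(scores, clicks, iters=500, lr=0.5):
        a, b = 1.0, 0.0
        for _ in range(iters):
            for s, y in zip(scores, clicks):
                p = 1.0 / (1.0 + math.exp(-(a * s + b)))
                a += lr * (y - p) * s    # gradient step on log-loss
                b += lr * (y - p)
        return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))

    # Toy data: source A's raw scores run 0-10, source B's run 0-1.
    cal_a = fit_platt([9.0, 2.0, 7.5, 1.0], [1, 0, 1, 0])
    cal_b = fit_platt([0.9, 0.2, 0.8, 0.1], [1, 0, 1, 0])
    # After calibration, both sources emit comparable probabilities.
    print(cal_a(8.0), cal_b(0.85))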

  7. Clicks for Training • Task = Learning a ranking function • Signal (example query = “Search Solutions 2010”) • Absolute: Relevant = {Doc1, Doc3}, NotRelevant = {Doc2} • Preferences: {Doc2 ≺ Doc1}, {Doc2 ≺ Doc3}

  8. Clicks for Training • Preferences from Query → {URL, Click} events • Rank bias & Lock-in • Randomisation & Exploration • “Accurately Interpreting Clickthrough Data as Implicit Feedback”, Joachims et al, SIGIR 2005 • Preference Observations into Relevance Labels • “Generating Labels from Clicks”, Agrawal et al, WSDM 2010
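
  A minimal sketch of Joachims-style “skip-above” preference extraction: a clicked document is taken as preferred over every unclicked document ranked above it. The document names follow the example on slide 7; note that this rule recovers only the preferences that the rank-biased click data actually supports.

    # Sketch: derive pairwise preferences from one result list and its clicks.
    def skip_above_preferences(ranked_docs, clicked):
        prefs = []
        for i, doc in enumerate(ranked_docs):
            if doc in clicked:
                # Clicked doc preferred over each skipped doc ranked above it.
                prefs.extend((doc, skipped)
                             for skipped in ranked_docs[:i]
                             if skipped not in clicked)
        return prefs

    # Query = "Search Solutions 2010": Doc1 and Doc3 clicked, Doc2 skipped.
    print(skip_above_preferences(["Doc1", "Doc2", "Doc3"], {"Doc1", "Doc3"}))
    # [('Doc3', 'Doc2')]  ->  Doc2 ≺ Doc3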

  9. Clicks for Evaluation • Task = Evaluating a ranking function • Signal • Engagement and Usage metrics (example query = “Search Solutions 2010”) • Controlled experiments for A/B Testing
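
  A minimal sketch of an A/B comparison on one engagement metric (click-through rate) using a two-proportion z-test; the bucket sizes and counts are illustrative assumptions, not from the talk.

    # Sketch: compare CTR between control (A) and treatment (B) buckets.
    import math

    def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
        p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
        p = (clicks_a + clicks_b) / (imps_a + imps_b)            # pooled rate
        se = math.sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
        return p_a, p_b, (p_a - p_b) / se    # |z| > ~1.96 => significant at 5%

    print(ctr_z_test(clicks_a=520, imps_a=10000, clicks_b=480, imps_b=10000))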

  10. Clicks for Evaluation • Disentangling relevance from other effects • “An experimental comparison of click position-bias models”, Craswell et al, WSDM 2008 • Label-free evaluation of retrieval systems (‘Interleaving’) • “How Does Clickthrough Data Reflect Retrieval Quality?”, Radlinski et al, CIKM 2008
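
  A minimal sketch of team-draft interleaving, one common scheme behind such label-free comparisons: the two rankings take turns contributing results, and each click is credited to the side that contributed the clicked document. The document ids are hypothetical.

    # Sketch: team-draft interleaving of two ranked lists A and B.
    import random

    def team_draft_interleave(a, b, seed=0):
        rng = random.Random(seed)
        merged, team = [], {}
        all_docs = set(a) | set(b)
        while len(merged) < len(all_docs):
            # Each round a coin flip decides which ranking picks first;
            # each side then contributes its best not-yet-placed document.
            order = ["A", "B"] if rng.random() < 0.5 else ["B", "A"]
            for side in order:
                src = a if side == "A" else b
                doc = next((d for d in src if d not in team), None)
                if doc is not None:
                    merged.append(doc)
                    team[doc] = side
        return merged, team

    merged, team = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d1"])
    print(merged)                       # interleaved list shown to the user
    print([team[d] for d in ["d2"]])    # clicks credited to the contributing side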

  11. Personalisation with Clicks • Task = Separate out individual preferences from aggregates • Signal: {User, Query, URL, Click} tuples (example query = “Search Solutions 2010”)

  12. Personalisation with Clicks • Click event as a rating • “Matchbox: Large Scale Bayesian Recommendations”, Stern et al, WWW 2009 • Sparsity • collapse using user groups (‘groupisation’): “Discovering and Using Groups to Improve Personalized Search”, Teevan et al, WSDM 2009 • collapse using doc structure
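
  A minimal sketch of “click event as a rating”: factorise the user × URL click matrix by SGD so that a user-URL score can be predicted even for unseen pairs. Matchbox (Stern et al) is a richer Bayesian model with feature vectors; this is a deliberately simplified stand-in on toy data.

    # Sketch: low-rank factorisation of a user x URL click matrix.
    import random

    clicks = {("alice", "url1"): 1, ("alice", "url2"): 0,
              ("bob", "url1"): 0, ("bob", "url2"): 1}

    k, lr, reg = 2, 0.05, 0.01
    rng = random.Random(0)
    U = {u: [rng.gauss(0, 0.1) for _ in range(k)] for u, _ in clicks}
    V = {d: [rng.gauss(0, 0.1) for _ in range(k)] for _, d in clicks}

    for _ in range(2000):
        (u, d), y = rng.choice(list(clicks.items()))
        ui, vd = U[u], V[d]
        err = y - sum(a * b for a, b in zip(ui, vd))
        # SGD step on squared error with L2 regularisation.
        U[u] = [a + lr * (err * b - reg * a) for a, b in zip(ui, vd)]
        V[d] = [b + lr * (err * a - reg * b) for a, b in zip(ui, vd)]

    # Personalised score: alice's predicted affinity for url1.
    print(sum(a * b for a, b in zip(U["alice"], V["url1"])))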

  13. Miscellaneous • Using co-clicking for query suggestions • “Random Walks on the Click Graph”, Craswell et al, SIGIR 2007 • User behaviour models for • Ranked lists: “Click chain model in Web Search”, Guo et al, WWW 2009 • Whole page: “Inferring Search Behaviors Using Partially Observable Markov Model”, Wang et al, WSDM 2010 • User activity away from the result page • “BrowseRank: Letting Web Users Vote for Page Importance”, Liu et al, SIGIR 2008
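
  A minimal sketch of one simple user-behaviour model, the cascade model (among the position-bias models compared by Craswell et al): the user scans top-down and clicks the first satisfying result, so a document counts as examined only if it appeared at or above the clicked position. The sessions below are toy data.

    # Sketch: cascade-model estimate of document attractiveness.
    from collections import defaultdict

    # Each session: ranked doc ids and the clicked rank (None = no click,
    # in which case the whole list is taken as examined).
    sessions = [(["d1", "d2", "d3"], 1), (["d2", "d1", "d3"], 0),
                (["d1", "d2", "d3"], 0), (["d3", "d2", "d1"], None)]

    examined, clicked = defaultdict(int), defaultdict(int)
    for docs, click_rank in sessions:
        last = len(docs) - 1 if click_rank is None else click_rank
        for rank, doc in enumerate(docs[: last + 1]):
            examined[doc] += 1
            if rank == click_rank:
                clicked[doc] += 1

    # Attractiveness = clicks / examinations, which factors out rank bias.
    for doc in sorted(examined):
        print(doc, clicked[doc] / examined[doc])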

  14. Additional Thoughts • Impressions & Examinations • Raw click counts versus normalised ratios (example query = “Search Solutions 2010”) • All clicks are not created equal • Skip, Click, LastClick, OnlyClick
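
  A minimal sketch contrasting raw click counts with a smoothed click ratio that weights the click types above unequally; the weights and pseudo-counts are illustrative assumptions, not values from the talk.

    # Sketch: smoothed, type-weighted click ratio for one query-URL pair.
    WEIGHTS = {"skip": 0.0, "click": 1.0, "last_click": 1.5, "only_click": 2.0}

    def ctr_feature(events, alpha=1.0, beta=10.0):
        # events: one click-type string per impression of this query-URL pair;
        # alpha/beta act as Beta-prior pseudo-counts.
        weighted_clicks = sum(WEIGHTS[e] for e in events)
        impressions = len(events)
        # Smoothing keeps rarely shown results from getting extreme ratios.
        return (weighted_clicks + alpha) / (impressions + alpha + beta)

    print(ctr_feature(["only_click"]))                    # few impressions
    print(ctr_feature(["click"] * 50 + ["skip"] * 50))    # many impressions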

  15. Clicks and Enterprise Search • Relying on the click signal • Machine learning and non-click features • Performance Out-Of-the-Box • Shipping a shrink-wrapped product • The self-aware adapting system • Good OOB • Gets better with use • Knows when things go wrong

  16. Thank you vvinay@microsoft.com
