Search Results Need to be Diverse Mark Sanderson University of Sheffield
Mark Sanderson University of Sheffield How to have fun while running an evaluation campaign
Aim • Tell you about our test collection work in Sheffield • How we’ve been having fun building test collections
Organising this is hard • TREC • Donna, Ellen • CLEF • Carol • NTCIR • Noriko • Make sure you enjoy it
ImageCLEF • Cross-language image retrieval • Running for 6 years • Photo • Medical • And other tasks • imageclef.org
How do we do it? • Organise and conduct research • ImageCLEFPhoto 2008 • Study diversity in search results • Diversity?
Operational search engine • Ambiguous queries • What is the correct interpretation? • We don't know • So serve as diverse a range of results as possible
Diversity is studied • Carbonell, J. and Goldstein, J. (1998) The use of MMR, diversity-based reranking for reordering documents and producing summaries. In ACM SIGIR, 335-336. • Zhai, C. (2002) Risk Minimization and Language Modeling in Text Retrieval, PhD thesis, Carnegie Mellon University. • Chen, H. and Karger, D. R. (2006) Less is more: probabilistic models for retrieving fewer relevant documents. In ACM SIGIR, 429-436.
Cluster hypothesis • “closely associated documents tend to be relevant to the same requests” • Van Rijsbergen (1979)
Most test collections • Focussed topics • Relevance judgments • Who says what is relevant? • (Almost always) one person • Consideration of interpretations • Little or none • Gap between test collections and operational search
Few test collections • Hersh, W. R. and Over, P. (1999) TREC-8 Interactive Track Report. TREC-8 • Over, P. (1997) TREC-5 Interactive Track Report. TREC-5, 29-56 • Clarke, C. L., Kolla, M., Cormack, G. V., Vechtomova, O., Ashkan, A., Büttcher, S., and MacKinnon, I. (2008) Novelty and diversity in information retrieval evaluation. In ACM SIGIR.
Study diversity • What sorts of diversity are there? • Ambiguous query words • How often is it a feature of search? • How often are queries ambiguous? • How can we add it to test collections?
Extent of diversity? • “Ambiguous queries: test collections need more sense”, SIGIR 2008 • How do you define ambiguity? • Wikipedia • WordNet
Wikipedia stats • enwiki-20071018-pages-articles.xml • (12.7 GB) • Disambiguation pages are easy to spot • "_(disambiguation)" in the title, e.g. Chicago • "{{disambig}}" template, e.g. George_bush
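A minimal sketch of that check, streaming the dump rather than loading it whole; the MediaWiki export namespace URI is my assumption about the 2007 dump format, not a detail from the talk:

```python
# Sketch only: count disambiguation pages in the dump named on the slide.
import xml.etree.ElementTree as ET

DUMP = "enwiki-20071018-pages-articles.xml"
NS = "{http://www.mediawiki.org/xml/export-0.3/}"  # assumed schema version

count = 0
for _, elem in ET.iterparse(DUMP):
    if elem.tag == NS + "page":
        title = elem.findtext(NS + "title") or ""
        text = elem.findtext(NS + "revision/" + NS + "text") or ""
        if title.endswith("(disambiguation)") or "{{disambig}}" in text:
            count += 1
        elem.clear()  # keep memory flat on a 12.7 GB file

print(count, "disambiguation pages")
```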
Conventional source • Downloaded WordNet v3.0 • 88K words
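For the WordNet side, a sense count is enough to flag a word as ambiguous. A small illustrative check, assuming NLTK with the WordNet corpus installed; this is not the tooling used in the paper:

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def is_ambiguous(word: str) -> bool:
    """Flag a word as ambiguous if WordNet lists two or more senses."""
    return len(wn.synsets(word)) > 1

for q in ["bank", "jaguar", "photosynthesis"]:
    print(q, len(wn.synsets(q)), "senses, ambiguous:", is_ambiguous(q))
```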
Conclusions • Ambiguity is a problem • Ambiguity is present in query logs • Not just Web search • Where ambiguity is present, IR systems need to produce diverse results
Test collections • Don’t test for diversity • Do search systems deal with it?
ImageCLEFPhoto • Build a test collection • Encourage the study of diversity • Study how others deal with diversity • Have some fun
Collection • IAPR TC-12 • 20,000 travel photographs • Text captions • 60 existing topics • Used in two previous studies • 39 used for diversity study
Diversity needs in topic • “Images of typical Australian animals”
Types of diversity • 22 geographical • “Churches in Brazil” • 17 other • “Australian animals”
Relevance judgments • Clustered existing qrels • Multiple assessors • Good level of agreement on clusters
Evaluation • Precision at 20 • P(20) • Fraction of relevant documents in the top 20 • Cluster recall at 20 • CR(20) • Fraction of relevant clusters represented in the top 20
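Both measures are easy to compute from a ranked list. A sketch with made-up document IDs and cluster labels, not ImageCLEFPhoto data:

```python
def p_at_20(ranking, relevant):
    """Fraction of the top 20 results that are relevant."""
    return sum(1 for d in ranking[:20] if d in relevant) / 20

def cr_at_20(ranking, relevant, cluster_of, n_clusters):
    """Fraction of the relevant clusters represented in the top 20."""
    seen = {cluster_of[d] for d in ranking[:20] if d in relevant}
    return len(seen) / n_clusters

ranking = ["d3", "d7", "d1", "d9"] + [f"x{i}" for i in range(16)]
relevant = {"d1", "d3", "d7", "d9"}
cluster_of = {"d1": "kangaroo", "d3": "koala", "d7": "koala", "d9": "emu"}
print(p_at_20(ranking, relevant))                  # 0.2
print(cr_at_20(ranking, relevant, cluster_of, 3))  # 1.0
```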
Track was popular • 24 groups • 200 runs in total
Compare with past years • Same 39 topics used in 2006, 2007 • But without clustering • Compare cluster recall on past runs • Matching runs on identical P(20) • Cluster recall increased • Substantially • Significantly
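One reading of that comparison, as a hypothetical sketch: pair each 2008 run with a past run that obtained the same P(20), then average the CR(20) differences. The (p20, cr20) tuples are placeholders, not real run scores:

```python
from statistics import mean

def matched_cr_gain(runs_2008, runs_past):
    """Mean CR(20) gain over run pairs matched on identical P(20)."""
    gains, used = [], set()
    for p20, cr_new in runs_2008:
        for i, (p20_old, cr_old) in enumerate(runs_past):
            if i not in used and p20 == p20_old:
                gains.append(cr_new - cr_old)
                used.add(i)
                break
    return mean(gains) if gains else 0.0

print(matched_cr_gain([(0.4, 0.7), (0.3, 0.5)],
                      [(0.4, 0.5), (0.3, 0.4)]))  # ~0.15
```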
Meta-analysis • This was fun • We experimented on participants' outputs • Not by design • A lucky accident
Not first to think of this • Buckley and Voorhees • SIGIR 2000, 2002 • Use submitted runs to generate new research
Conduct a user experiment • Do users prefer diversity? • Experiment • Build a system to do this • Show users • Your system • A baseline system • Measure user preferences
Why bother… • …when others have done the work for you • Pair up randomly sampled runs • High CR(20) • Low CR(20) • Show to users
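A sketch of that pairing step, assuming each submitted run carries a precomputed CR(20) score; the field name is hypothetical:

```python
import random

def sample_pairs(runs, n_pairs, seed=0):
    """Pair a high-CR(20) run with a low-CR(20) run, n_pairs times."""
    rng = random.Random(seed)
    ranked = sorted(runs, key=lambda r: r["cr20"])  # hypothetical field
    low, high = ranked[: len(ranked) // 2], ranked[len(ranked) // 2:]
    return [(rng.choice(high), rng.choice(low)) for _ in range(n_pairs)]
```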
Numbers • 25 topics • 31 users • 775 result pairs compared
User preferences • 54.6% preferred the more diversified results • 19.7% preferred the less diversified results • 17.4% judged both equal • 8.3% preferred neither
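The slide reports raw percentages; dropping ties, a sign test over the 775 pairs would look like this (my addition for illustration, not an analysis from the talk):

```python
from scipy.stats import binomtest

PAIRS = 775
more = round(0.546 * PAIRS)  # preferred the more diversified run
less = round(0.197 * PAIRS)  # preferred the less diversified run
result = binomtest(more, more + less, p=0.5)
print(more, "vs", less, "p =", result.pvalue)
```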
Conclusions • Diversity appears to be important • Systems don't do diversity by default • Users prefer diverse results • Test collections don't support diversity • But they can be adapted
and • Organising evaluation campaigns is rewarding • And can generate novel research