A PROMISE for Experimental Evaluation CLEF 2010, 20th Sept. 2010, Padova
Challenges for Experimental Evaluation • Heterogeneity and volume of the data • much is done to provide realistic document collections • Diversity of users and tasks • evaluation tasks/tracks are often too “monolithic” • Complexity of the systems • systems are usually dealt with as “black boxes”
Experimental Evaluation Needs • To increase the automation in the evaluation process • reduction of the effort necessary for carrying out evaluation • increase in the number of experiments conducted, in order to analyse evolving user habits and tasks in depth • To study systems component-by-component • better understanding of systems’ behaviour, also with respect to different tasks • To increase the usage of the produced experimental data • improving collaboration and user involvement to achieve unforeseen exploitation and enrichment of the experimental data
Evaluation: Labs and Metrics Maarten de Rijke UvA
Information access is changing • New breeds of users • Performing an increasingly broad range of tasks within varying domains • Acting within communities to find information for themselves and to share with others • Re-orientation of methodology and goals in the evaluation of information access systems
Mapping the evaluation landscape • Generating ground truth from log files • Generating ground truth from annotations • Alternative retrieval scenarios and metrics • Living labs • Evaluation in the wild • Ranking analysis
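The first item above, deriving ground truth from query/click logs, can be made concrete with a small sketch. The snippet below is purely illustrative and not a PROMISE deliverable: it assumes a toy log of (query, document, clicked) records, and the click-through-rate thresholds and the min_impressions cut-off are arbitrary assumptions.

```python
# Illustrative sketch: turn a click log into coarse relevance judgements.
# The record format and all thresholds are assumptions, not PROMISE code.
from collections import defaultdict

def ground_truth_from_log(log_records, min_impressions=10):
    """Aggregate clicks per (query, doc) pair and map click-through
    rate to a coarse relevance grade (0, 1, 2)."""
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for query, doc_id, clicked in log_records:
        impressions[(query, doc_id)] += 1
        clicks[(query, doc_id)] += int(clicked)

    qrels = {}
    for key, n in impressions.items():
        if n < min_impressions:          # too little evidence: skip the pair
            continue
        ctr = clicks[key] / n
        qrels[key] = 2 if ctr > 0.5 else 1 if ctr > 0.1 else 0
    return qrels

# Toy usage: 12 impressions, 6 clicks -> CTR 0.5 -> grade 1
log = [("clef 2010", "doc42", True), ("clef 2010", "doc42", False)] * 6
print(ground_truth_from_log(log))        # {('clef 2010', 'doc42'): 1}
```

A real setting would also have to correct for position bias, for example with a click model, rather than relying on raw click-through rate.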
Use Cases – a bridge to application Jussi Karlgren SICS
Two legs of evaluation • Benchmarking • Validation (well … at least two) Each with its own craft and practice. How can they communicate?
Use cases – a conduit • To communicate the starting points of evaluation practice, we suggest formulating use cases based on practice in the field. • Interviews, think tanks, hypothesis-driven as well as empirically driven. • Contact us! Suggest stakeholders!
IP Search Allan Hanbury IRF
IR Evaluation Campaigns today … are mostly based on the TREC organisation model, which is based on the Cranfield paradigm, which was developed for …
You can do a lot with index cards… The Mundaneum: begun in 1919 in Belgium, by April 1934 it held 15 646 346 index cards (cross-referenced)
Disadvantages of Evaluation Campaign Approach • Fixed timelines and cyclic nature of events • Evaluation at system-level only • Difficulty in comparing systems and elucidating reasons for their performance • Viewing the campaign as a competition • Are IR Systems getting better? It is not clear from results in published papers that IR systems have improved over the last decade [Fuhr, this morning; Armstrong et al., CIKM 2009]
Search for Innovation Patent Search is an interesting problem because: • Very high recall is required, but precision should not be sacrificed • Many types of search are done: from narrow to wide • Searches also cover non-patent literature • Classification is required • Multi-lingual • Non-text information is important • Different styles are used in different parts of patents
Visual Analytics Giuseppe Santucci Sapienza Università di Roma
Data! PROMISE has to manage and explore large and/or complex datasets • Topics • Experiment submissions • Creation of pools • Relevance assessment • Log files • Measures • Derived data • Statistics • … And PROMISE foresees the managed data growing by about one order of magnitude during the project
Challenges What are the challenges arising from the management of such datasets? • Not the storage (even if it requires an engineered database design) • Not the retrieval (if you just need to retrieve a measure) Challenges come from effectively using such an immense wealth of data (without being overloaded). It means: • understanding it • discovering patterns, insights, and trends • making decisions • sharing and reusing results
Rescuing information In different situations, people need to exploit hidden information resting in unexplored large datasets • decision-makers • analysts • engineers • emergency response teams • ... Several techniques exist for this purpose • Automatic analysis techniques (e.g., data mining, statistics) • Manual analysis techniques (e.g., information visualization) Large and complex datasets require a joint effort of the two:
A simple Visual Analytics example How to visually compare Jack London and Mark Twain books? VA steps • Split the books into several text blocks (e.g., pages, paragraphs) • Measure, for each text block, a relevant feature (e.g., average sentence length, word usage, etc.) • Associate the relevant feature with a visual attribute (e.g., color) • Visualize it (a sketch follows below)
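As an illustration only, here is a minimal Python sketch of these four steps, using average sentence length as the feature and a color strip as the visual attribute; the file names, block size, and color map are hypothetical choices, not material from the talk.

```python
# Illustrative sketch of the four VA steps above (not PROMISE code).
import re
import matplotlib.pyplot as plt

def block_features(text, block_size=2000):
    """Steps 1-2: split the text into fixed-size blocks and measure the
    average sentence length (in words) of each block."""
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    features = []
    for block in blocks:
        sentences = [s for s in re.split(r"[.!?]+", block) if s.strip()]
        words = block.split()
        features.append(len(words) / max(len(sentences), 1))
    return features

def plot_comparison(features_by_author):
    """Steps 3-4: associate the feature with a color and visualize each
    book as a strip of colored blocks."""
    fig, axes = plt.subplots(len(features_by_author), 1, squeeze=False)
    for ax, (author, feats) in zip(axes[:, 0], features_by_author.items()):
        ax.imshow([feats], aspect="auto", cmap="viridis")  # color = avg sentence length
        ax.set_yticks([])
        ax.set_title(author)
    plt.show()

# Hypothetical usage with plain-text files (e.g., from Project Gutenberg):
london = open("call_of_the_wild.txt").read()   # hypothetical path
twain = open("huckleberry_finn.txt").read()    # hypothetical path
plot_comparison({"Jack London": block_features(london),
                 "Mark Twain": block_features(twain)})
```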
Jack London vs Mark Twain [figure: color-coded text blocks comparing average sentence length (long vs short sentences) and hapax legomena, i.e. words appearing only once (many HL = rich vocabulary vs few HL)]
Visual Analytics@PROMISE! One of the innovative aspects of PROMISE, acknowledged by the European Commission, is the idea of providing Visual Analytics techniques for exploring the available datasets • Specific algorithms • Suitable visualizations • Sharing and collaboration mechanisms
Where next? • What can PROMISE deliver to future CLEF labs? • How will PROMISE contribute to the field as a whole, outside direct CLEF activities? • How can PROMISE provide experimental infrastructure for other projects?