Tutorials

A Formal Approach to Effectiveness Metrics for Information Access: Retrieval, Filtering, and Clustering (half day)

Enrique Amigó, Julio Gonzalo and Stefano Mizzaro

In this tutorial we will present, review, and compare the most popular evaluation metrics for some of the most salient information-related tasks, covering: (i) Information Retrieval, (ii) Clustering, and (iii) Filtering. The tutorial will place special emphasis on the specification of constraints for suitable metrics in each of the three tasks, and on the systematic comparison of metrics according to how well they satisfy such constraints. This comparison provides criteria to select the most appropriate metric, or set of metrics, for each specific information access task. The last part of the tutorial will investigate the challenge of combining and weighting metrics.
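
As a rough, illustrative sketch of this constraint-based view (not part of the tutorial materials), the Python snippet below takes a standard textbook metric, precision at k, and checks whether it satisfies an intuitive constraint: promoting a relevant document above an adjacent non-relevant one should never decrease the score. The metric, constraint, and data are placeholders chosen for illustration.

    def precision_at_k(ranking, relevant, k):
        """Fraction of the top-k documents that are relevant."""
        return sum(1 for doc in ranking[:k] if doc in relevant) / k

    def satisfies_swap_constraint(metric, ranking, relevant, k):
        """Check that swapping a relevant document above an adjacent
        non-relevant one never lowers the metric score."""
        base = metric(ranking, relevant, k)
        for i in range(len(ranking) - 1):
            hi, lo = ranking[i], ranking[i + 1]
            if hi not in relevant and lo in relevant:
                swapped = ranking[:i] + [lo, hi] + ranking[i + 2:]
                if metric(swapped, relevant, k) < base:
                    return False
        return True

    ranking = ["d3", "d1", "d7", "d2"]   # system output, best first
    relevant = {"d1", "d2"}              # gold-standard relevant documents
    print(precision_at_k(ranking, relevant, k=2))                             # 0.5
    print(satisfies_swap_constraint(precision_at_k, ranking, relevant, k=2))  # True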


Measuring Document Retrievability (half day)

Leif Azzopardi

Retrievability is an important and interesting indicator that can be used in a number of ways to analyse Information Retrieval systems and document collections. Rather than focusing solely on relevance, retrievability examines what is retrieved, how often it is retrieved, and whether a user is likely to retrieve it or not. This is important because a document needs to be retrieved before it can be judged for relevance. In this tutorial, we shall explain the concept of retrievability, introduce a number of retrievability measures, and show how retrievability can be estimated and used for analysis. Since retrieval precedes relevance, we shall also provide an overview of how retrievability relates to effectiveness, describing some of the insights that researchers have discovered thus far. We shall also show how retrievability relates to efficiency, and how the theory of retrievability can be used to improve both effectiveness and efficiency. Then we shall provide an overview of the different applications of retrievability, such as search engine bias and corpus profiling, before wrapping up with challenges and opportunities. The final session of the day will look at example problems and ways to analyse and apply retrievability to other problems and domains. This tutorial is ideal for: (i) researchers curious about retrievability and wanting to see how it can impact their research, (ii) researchers who would like to expand their set of analysis techniques, and/or (iii) researchers who would like to use retrievability to perform their own analysis.
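
As a hedged sketch of the kind of measure covered (following the cumulative retrievability score of Azzopardi and Vinay, with a placeholder query set, retrieval function, and rank cutoff), retrievability can be estimated by issuing a large query set against the system and counting, for each document, how often it is returned within a cutoff; the Gini coefficient over those scores is then a common summary of retrievability bias.

    from collections import defaultdict

    def retrievability_scores(queries, retrieve, cutoff=100):
        """Cumulative retrievability r(d): the number of queries for which
        document d appears within the top `cutoff` results.
        `retrieve(q)` is assumed to return a ranked list of document ids."""
        r = defaultdict(int)
        for q in queries:
            for rank, doc_id in enumerate(retrieve(q), start=1):
                if rank > cutoff:
                    break
                r[doc_id] += 1   # f(k_dq, c) = 1 if rank <= c, else 0
        return r

    def gini(values):
        """Gini coefficient over retrievability scores, often used to
        summarise how unequally retrievability is distributed."""
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        if n == 0 or total == 0:
            return 0.0
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n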


Visual Analytics for Information Retrieval Evaluation (VAIRË 2015) (full day)

Nicola Ferro and Giuseppe Santucci

Measurement is key to scientific progress. This is particularly true for research concerning complex systems, whether natural or human-built. Multilingual and multimedia information systems are increasingly complex: they need to satisfy diverse user needs and support challenging tasks. Their development calls for proper and new evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness.

The tutorial will introduce basic and intermediate concepts about laboratory-based evaluation of information retrieval systems, along with its pitfalls and shortcomings, and it will complement them with a recent and innovative angle to evaluation: the application of methodologies and tools from the visual analytics domain to better interact with, understand, and explore experimental results and IR system behaviour.


Join the Living Lab: Evaluating News Recommendations in Real-time (half day)

Frank Hopfgartner and Torben Brodt

Participants of this tutorial will learn how to take part in CLEF-NEWSREEL, a living lab for the evaluation of news recommender algorithms. Various research challenges can be addressed within CLEF-NEWSREEL, including the development and evaluation of collaborative filtering and content-based filtering strategies. By satisfying information needs through techniques such as preference elicitation, pattern recognition, and prediction, recommender systems connect the research areas of information retrieval and machine learning.
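
As a hedged illustration of one strategy a participant might try (this is not the NEWSREEL API, whose protocol is documented by the organisers), the sketch below builds a simple item-to-item co-occurrence recommender from click sessions: articles frequently read in the same session are recommended together.

    from collections import defaultdict
    from itertools import combinations

    def build_cooccurrence(sessions):
        """sessions: iterable of sets of article ids read in one session."""
        co = defaultdict(lambda: defaultdict(int))
        for session in sessions:
            for a, b in combinations(sorted(session), 2):
                co[a][b] += 1
                co[b][a] += 1
        return co

    def recommend(co, current_article, k=3):
        """Return the k articles most often co-read with the current one."""
        neighbours = co.get(current_article, {})
        ranked = sorted(neighbours.items(), key=lambda kv: kv[1], reverse=True)
        return [article for article, _ in ranked[:k]]

    sessions = [{"a1", "a2", "a3"}, {"a2", "a3"}, {"a1", "a3", "a4"}]
    print(recommend(build_cooccurrence(sessions), "a3"))  # e.g. ['a1', 'a2', 'a4']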


Statistical Power Analysis for Sample Size Estimation in Information Retrieval Experiments with Users (half day)

Diane Kelly

One critical decision researchers must make when designing laboratory experiments with users is how many participants to study. In interactive information retrieval (IR), the determination of sample size is often based on heuristics and limited by practical constraints such as time and finances. As a result, many studies are underpowered and it is common to see researchers make statements like "With more participants significance might have been detected," but what does this mean? What does it mean for a study to be underpowered? How does this affect what we are able to discover, how we interpret study results and how we make choices about what to study next? How does one determine an appropriate sample size? What does it even mean for a sample size to be appropriate? This tutorial addresses these questions by introducing participants to the use of statistical power analysis for sample size estimation in laboratory experiments with users. In discussing this topic, the issues of effect size, Type I and Type II errors, and experimental design, including choice of statistical procedures, will also be addressed.
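
As a minimal, hedged example of the computation the tutorial teaches (assuming the Python statsmodels library, a two-sample t-test design, and arbitrary but conventional values for effect size, alpha, and power):

    # Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
    # with a two-sided independent-samples t-test at alpha = 0.05 and 80% power.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                              power=0.8, alternative='two-sided')
    print(round(n_per_group))   # roughly 64 participants per group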