Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems

Sri Devi Ravana, Masumeh Sadat Taheri, Prabha Rajagopal

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)


Purpose – The purpose of this paper is to propose a method that yields more accurate results when comparing the performance of paired information retrieval (IR) systems, relative to the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries.

Design/methodology/approach – In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are used as the evaluation unit. These document scores are the defined document weights, which take the place of the systems' mean average precision (MAP) scores as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection.

Findings – The p-values generated by two significance tests, namely the Student's t-test and the Mann-Whitney test, show that when document-level scores are used as the evaluation unit, the difference between IR systems is more significant than when topic scores are used.

Originality/value – A suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. In addition to reusable test collections, however, accurate statistical testing is also a necessity for these evaluations. The findings of this study will help IR researchers evaluate their retrieval systems and algorithms more accurately.
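The contrast between the two evaluation units can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the score values, the per-topic document counts, and the use of an unweighted paired t statistic are all assumptions made for the example, and the paper's actual document weights and Mann-Whitney procedure are not reproduced here. The sketch only shows how switching from one topic-level score per topic to one score per retrieved document changes the number of observations (and hence the degrees of freedom) that feed the significance test.

```python
import math
import statistics

def paired_t_statistic(xs, ys):
    """Paired Student's t statistic: mean difference over its standard error."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of the differences
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical per-document scores for two systems over three topics
# (five retrieved documents per topic); values are illustrative only.
sys_a = [[0.9, 0.7, 0.6, 0.4, 0.3],
         [0.8, 0.6, 0.5, 0.5, 0.2],
         [0.7, 0.7, 0.4, 0.3, 0.3]]
sys_b = [[0.8, 0.6, 0.5, 0.4, 0.2],
         [0.7, 0.6, 0.4, 0.4, 0.2],
         [0.6, 0.5, 0.4, 0.3, 0.1]]

# Classic unit: one aggregated score per topic -> 3 paired observations.
topic_a = [statistics.fmean(t) for t in sys_a]
topic_b = [statistics.fmean(t) for t in sys_b]

# Document-level unit: every document score is an observation -> 15 pairs,
# giving the test far more degrees of freedom for the same set of topics.
doc_a = [s for topic in sys_a for s in topic]
doc_b = [s for topic in sys_b for s in topic]

print("topic-level t:", paired_t_statistic(topic_a, topic_b))
print("document-level t:", paired_t_statistic(doc_a, doc_b))
```

Converting either statistic to a p-value requires the t distribution with the corresponding degrees of freedom (2 versus 14 here), which is where the larger document-level sample can tighten the comparison.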

Original language: English
Pages (from-to): 408-421
Number of pages: 14
Journal: Aslib Journal of Information Management
Issue number: 4
Publication status: Published - 20 Jul 2015
Externally published: Yes


  • Document-based evaluation
  • Information retrieval
  • Information retrieval evaluation
  • Pairwise comparison
  • Significance test
