Prediction of the high-cost normalised discounted cumulative gain (nDCG) measure in information retrieval evaluation
Introduction. Information retrieval systems are vital to meeting the daily information needs of users. The effectiveness of these systems has often been evaluated using the test collections approach, despite its high evaluation costs. Recent methods reduce evaluation costs by predicting information retrieval performance measures at higher cut-off depths from other measures computed at lower cut-off depths. The purpose of this paper is to propose two methods that address the challenge of accurately predicting the normalised discounted cumulative gain (nDCG) measure.

Method. Data from selected test collections of the Text REtrieval Conference (TREC) was used. The proposed methods employ gradient boosting and linear regression models trained with topic scores of measures partitioned by TREC track.

Analysis. To evaluate the proposed methods, the coefficient of determination, Kendall's tau and Spearman correlations were used.

Results. The proposed methods provide better predictions of the nDCG measure at the higher cut-off depths while using other measures computed at the lower cut-off depths.

Conclusions. The proposed methods improve the predictions of the nDCG measure while reducing the evaluation costs.
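For reference, the measure being predicted is most commonly defined as follows; this is the standard formulation, and the paper itself may adopt a particular gain or discount variant:

\[
\mathrm{nDCG@}k \;=\; \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k},
\qquad
\mathrm{DCG@}k \;=\; \sum_{i=1}^{k} \frac{2^{rel_i}-1}{\log_2(i+1)},
\]

where rel_i is the graded relevance of the document at rank i and IDCG@k is the DCG@k of the ideal ordering of the judged documents. The cut-off depth k drives the assessment cost, since a deep cut-off requires relevance judgements far down every ranked list; this is why predicting nDCG at a high k from measures computed at a low k reduces evaluation cost.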
Main Authors: Muwanei, Sinyinda; Ravana, Sri Devi; Hoo, Wai Lam; Kunda, Douglas
Format: Article
Published: Univ Sheffield Dept Information Studies, 2022
Subjects: Library science. Information science
Online Access: http://eprints.um.edu.my/41961/
Institution: Universiti Malaya
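The abstract describes the prediction setup only at a high level. Below is a minimal, hypothetical sketch of that kind of experiment, not the authors' implementation: the input file, column names, feature set, cut-off depths and model settings are assumptions made purely for illustration.

```python
# Hypothetical sketch: predict nDCG at a deep cut-off from measures computed
# at a shallow cut-off, per TREC track. CSV layout, columns, depths and model
# settings are illustrative assumptions, not the paper's actual configuration.
import pandas as pd
from scipy.stats import kendalltau, spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Assumed layout: one row per (track, system, topic) with topic-level scores.
df = pd.read_csv("trec_topic_scores.csv")
features = ["ndcg_at_10", "p_at_10", "ap_at_10"]   # low-depth measures (features)
target = "ndcg_at_1000"                            # high-depth measure to predict

results = {}
for track, part in df.groupby("track"):            # partition topic scores by TREC track
    X_train, X_test, y_train, y_test = train_test_split(
        part[features], part[target], test_size=0.3, random_state=0)
    for name, model in {
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
        "linear_regression": LinearRegression(),
    }.items():
        pred = model.fit(X_train, y_train).predict(X_test)
        tau, _ = kendalltau(y_test, pred)
        rho, _ = spearmanr(y_test, pred)
        results[(track, name)] = {"r2": r2_score(y_test, pred),
                                  "kendall_tau": tau,
                                  "spearman": rho}

print(pd.DataFrame(results).T.round(3))
```

Topic-level scores are grouped by TREC track before fitting, mirroring the partitioning described in the abstract, and the three reported statistics (coefficient of determination, Kendall's tau and Spearman correlation) are computed on held-out topics.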
id: my.um.eprints.41961
record_format: eprints
datestamp: 2023-10-19T03:37:44Z
citation: Muwanei, Sinyinda; Ravana, Sri Devi; Hoo, Wai Lam; Kunda, Douglas (2022). Prediction of the high-cost normalised discounted cumulative gain (nDCG) measure in information retrieval evaluation. Information Research: An International Electronic Journal, 27(2). ISSN 1368-1613. DOI: https://doi.org/10.47989/irpaper928
institution: Universiti Malaya
building: UM Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Malaya
content_source: UM Research Repository
url_provider: http://eprints.um.edu.my/
topic: Library science. Information science
format: Article
author: Muwanei, Sinyinda; Ravana, Sri Devi; Hoo, Wai Lam; Kunda, Douglas
author_sort: Muwanei, Sinyinda
title: Prediction of the high-cost normalised discounted cumulative gain (nDCG) measure in information retrieval evaluation
publisher: Univ Sheffield Dept Information Studies
publishDate: 2022
url: http://eprints.um.edu.my/41961/
_version_: 1781704576574947328