DaisyRec 2.0: benchmarking recommendation for rigorous evaluation
Recently, one critical issue has loomed large in the field of recommender systems: there are no effective benchmarks for rigorous evaluation, which leads to unreproducible evaluation and unfair comparison. We therefore conduct studies from the perspectives of practical theory and experiments, aiming at benchmarking recommendation for rigorous evaluation. For the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review of 141 papers published at eight top-tier conferences from 2017 to 2020. We then classify them into model-independent and model-dependent hyper-factors, and accordingly define and discuss different modes of rigorous evaluation in depth. For the experimental study, we release the DaisyRec 2.0 library, which integrates these hyper-factors to perform rigorous evaluation, and use it to conduct a holistic empirical study unveiling the impact of each hyper-factor on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and reporting the performance of ten state-of-the-art methods across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays the foundation for further investigation.
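For context, the benchmarks described above report top-N ranking quality across six evaluation metrics. The sketch below shows one widely used such metric, NDCG@K under binary relevance. It is an illustrative sketch only: the function name and data layout are assumptions made for this record, not DaisyRec 2.0's actual API.

```python
import math

def ndcg_at_k(ranked_items, relevant_items, k):
    """Binary-relevance NDCG@K for one user.

    ranked_items: the model's top-N recommendation list (best first).
    relevant_items: the set of held-out ground-truth items for this user.
    """
    # DCG: each hit at 0-indexed position i contributes 1 / log2(i + 2).
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k])
              if item in relevant_items)
    # IDCG: the DCG of an ideal ranking with all relevant items on top.
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(len(relevant_items), k)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: two held-out positives, recommended at ranks 2 and 5.
print(ndcg_at_k(["a", "b", "c", "d", "e"], {"b", "e"}, k=5))  # ~0.62
```

Per-user scores like this are typically averaged over all test users; the paper's point is that seemingly minor choices around such a pipeline (data splitting, negative sampling, candidate sets) are exactly the hyper-factors that make results hard to reproduce.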
Main Authors: Sun, Zhu; Fang, Hui; Yang, Jie; Qu, Xinghua; Liu, Hongyang; Yu, Di; Ong, Yew-Soon; Zhang, Jie
Other Authors: School of Computer Science and Engineering; A*STAR Centre for Frontier AI Research
Format: Article
Language: English
Published: 2023
Subjects: Recommender Systems; Reproducible Evaluation; Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/172177
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-172177 (DSpace record, last updated 2023-11-28)
Type: Journal Article
Citation: Sun, Z., Fang, H., Yang, J., Qu, X., Liu, H., Yu, D., Ong, Y. & Zhang, J. (2022). DaisyRec 2.0: benchmarking recommendation for rigorous evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7), 8206-8226. https://dx.doi.org/10.1109/TPAMI.2022.3231891
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2022.3231891
PubMed ID: 37015510
Scopus ID: 2-s2.0-85146238817
Rights: © 2022 IEEE. All rights reserved.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 72192832; in part by the Natural Science Foundation of Shanghai under Grant 21ZR1421900; by the Delft Design@Scale AI Lab; in part by the A*STAR Centre for Frontier Artificial Intelligence Research; and in part by the Data Science and Artificial Intelligence Research Centre, School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore. The work of Jie Zhang was supported in part by MOE AcRF Tier 1 funding under Grant RG90/20. This work was also supported by the Shanghai Rising-Star Program under Grant 23QA1403100.
Building: NTU Library
Continent: Asia
Country: Singapore
Content Provider: NTU Library
Collection: DR-NTU