Automating dataset updates towards reliable and timely evaluation of Large Language Models

Large language models (LLMs) have achieved impressive performance across various natural language benchmarks, prompting a continual need to curate more difficult datasets for larger LLMs, which is costly and time-consuming. In this paper, we propose to automate dataset updating and provide a systematic analysis of its effectiveness with respect to benchmark leakage, difficulty control, and stability. Thus, once the current benchmark has been mastered or leaked, we can update it for timely and reliable evaluation. There are two updating strategies: 1) a mimicking strategy that generates similar samples based on the original data, preserving their stylistic and contextual essence, and 2) an extending strategy that further expands existing samples at varying cognitive levels by adapting Bloom's taxonomy of educational objectives. Extensive experiments on updated versions of MMLU and BIG-Bench demonstrate the stability of the proposed strategies and show that the mimicking strategy can effectively alleviate the overestimation caused by benchmark leakage. In cases where the efficient mimicking strategy fails, our extending strategy still shows promising results. Additionally, by controlling the difficulty, we can better discern the models' performance and enable fine-grained analysis, just as an exam that is neither too difficult nor too easy can fairly judge students' learning status. To the best of our knowledge, we are the first to automate updating benchmarks for reliable and timely evaluation.
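The abstract describes two LLM-driven updating strategies: a mimicking strategy that regenerates stylistically similar samples, and an extending strategy that rewrites samples at chosen cognitive levels of Bloom's taxonomy to control difficulty. Below is a minimal sketch of how such an updater could be wired up, assuming the OpenAI Python SDK; the function names, prompt wording, and model choice are illustrative assumptions rather than the authors' released implementation (see the paper via the DOI below).

```python
# Illustrative sketch only: prompts, model choice, and helper names are
# assumptions, not the authors' released code (paper: arXiv:2402.11894).
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Cognitive levels from Bloom's taxonomy, which the extending strategy
# adapts to expand samples at varying cognitive levels.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]


def _generate(prompt: str, model: str = "gpt-4o") -> str:
    """Send a single-prompt chat request and return the text reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def mimic_sample(original_item: str) -> str:
    """Mimicking strategy: generate a new sample that preserves the style,
    topic, and difficulty of the original benchmark item."""
    return _generate(
        "Write a new benchmark question in the same style, topic, and "
        "difficulty as the following, but with different content:\n\n"
        + original_item
    )


def extend_sample(original_item: str, level: str) -> str:
    """Extending strategy: rewrite the item to target a chosen cognitive
    level from Bloom's taxonomy, giving coarse control over difficulty."""
    if level not in BLOOM_LEVELS:
        raise ValueError(f"unknown Bloom level: {level}")
    return _generate(
        f"Rewrite the following benchmark question so that answering it "
        f"requires the '{level}' level of Bloom's taxonomy, keeping the "
        f"subject matter unchanged:\n\n{original_item}"
    )
```

For instance, calling the hypothetical `extend_sample(question, "analyze")` would request a variant targeting a higher cognitive level, which is one way the difficulty control described above could be exercised.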

Bibliographic Details
Main Authors: YING, Jiahao; CAO, Yixin; BAI, Yushi; SUN, Qianru; WANG, Bo; TANG, Wei; DING, Zhaojun; YANG, Yizhe; HUANG, Xuanjing; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Large language models
LLM
Dataset update
Benchmark update
Automation
Artificial Intelligence and Robotics
Online Access:https://ink.library.smu.edu.sg/sis_research/9439
DOI: 10.48550/arXiv.2402.11894
Date Available: 2024-12-10
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-10439