Reinforcement tuning for detecting stances and debunking rumors jointly with large language models

Learning multi-task models for jointly detecting stance and verifying rumors poses challenges due to the need for training data of stance at post level and rumor veracity at claim level, which are difficult to obtain. To address this issue, we leverage large language models (LLMs) as the foundation annotators for the joint stance detection (SD) and rumor verification (RV) tasks, dubbed as JSDRV. We introduce a novel reinforcement tuning framework to enhance the joint predictive capabilities of LLM-based SD and RV components. Specifically, we devise a policy for selecting LLM-annotated data at the two levels, employing a hybrid reward mechanism to choose high-quality labels for effective LLM fine-tuning on both tasks. Results demonstrate that JSDRV improves the capabilities of LLMs in the joint tasks, not only outperforming state-of-the-art methods but also generalizing to non-LLMs accommodated as task models.
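
The abstract describes a selection policy with a hybrid reward over LLM-annotated stance (post-level) and veracity (claim-level) labels. The Python sketch below is only a rough illustration of that general idea, not the authors' JSDRV implementation: the names Annotated, hybrid_reward, and select_for_finetuning, the 0.5/0.5 reward mix, and the 0.7 threshold are all assumed for illustration. In the paper's framework, the selected annotations would feed fine-tuning of the SD and RV task models.

from dataclasses import dataclass

@dataclass
class Annotated:
    text: str          # a post (stance level) or a claim (veracity level)
    llm_label: str     # label proposed by the LLM annotator
    confidence: float  # annotator confidence in [0, 1]

def hybrid_reward(item: Annotated, task_model_agrees: bool) -> float:
    # Toy hybrid reward: mix annotator confidence with agreement between
    # the LLM label and the current task model's prediction.
    agreement = 1.0 if task_model_agrees else 0.0
    return 0.5 * item.confidence + 0.5 * agreement

def select_for_finetuning(pool, task_model_predict, threshold=0.7):
    # Selection policy: keep only annotations whose reward clears the threshold;
    # the kept items would then be used for fine-tuning on both tasks.
    selected = []
    for item in pool:
        agrees = task_model_predict(item.text) == item.llm_label
        if hybrid_reward(item, agrees) >= threshold:
            selected.append(item)
    return selected

if __name__ == "__main__":
    pool = [
        Annotated("Post: 'This photo is edited.'", "deny", 0.9),
        Annotated("Claim: 'A shark swam on the highway.'", "false", 0.6),
        Annotated("Post: 'Saw it myself, it's true!'", "support", 0.4),
    ]
    # Stand-in task model for the demo: predicts "deny" for posts, "false" for claims.
    predict = lambda text: "deny" if text.startswith("Post") else "false"
    kept = select_for_finetuning(pool, predict)
    print([k.text for k in kept])  # the two high-reward annotations survive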

Bibliographic Details
Main Authors: YANG, Ruichao, GAO, Wei, MA, Jing, LIN, Hongzhan, WANG, Bo
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Databases and Information Systems; Programming Languages and Compilers
Online Access:https://ink.library.smu.edu.sg/sis_research/9866
https://ink.library.smu.edu.sg/context/sis_research/article/10866/viewcontent/2024.findings_acl.796.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-10866
Record last updated: 2025-01-02
Date published: 2024-08-01
File format: application/pdf
DOI: 10.18653/v1/2024.findings-acl.796
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems (InK@SMU, SMU Libraries)