Towards LLM-based fact verification on news claims with a hierarchical step-by-step prompting method


Bibliographic Details
Main Authors: ZHANG, Xuan, GAO, Wei
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8453
https://ink.library.smu.edu.sg/context/sis_research/article/9456/viewcontent/Towards_LLM_based_Fact_Verification_on_News_Claims_with_a_Hierarchical_Step_by_Step_Prompting_Method.pdf
Institution: Singapore Management University
Description
Summary: While large pre-trained language models (LLMs) have shown impressive capabilities in various NLP tasks, they remain underexplored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that with only 4-shot demonstration examples, the performance of several prompting methods can be comparable with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method, which directs LLMs to separate a claim into several subclaims and then verify each of them progressively via multiple question-answering steps. Experimental results on two public misinformation datasets show that HiSS prompting outperforms the state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.
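The hierarchical step-by-step idea described above can be sketched in code. The following is a minimal illustration, not the paper's actual prompts or pipeline: the `ask` callable stands in for an LLM API call, the prompt wording is invented for illustration, and the all-subclaims-must-hold aggregation rule is an assumption.

```python
# Hedged sketch of a HiSS-style verification loop: decompose a claim into
# subclaims, probe each subclaim with a question-answering step, then
# aggregate per-subclaim verdicts into a final label. `ask` is any
# prompt -> completion function (e.g. a wrapper around an LLM API).

def hiss_verify(claim, ask, max_subclaims=4):
    # Step 1 (decomposition): ask the model to split the claim into
    # independently checkable subclaims, one per line.
    raw = ask(f"Split this claim into checkable subclaims, one per line: {claim}")
    subclaims = [s.strip() for s in raw.split("\n") if s.strip()][:max_subclaims]

    verdicts = []
    for sub in subclaims:
        # Step 2 (question-answering): raise a clarifying question about
        # the subclaim, then answer it (the paper's method may also
        # consult external evidence here; this sketch does not).
        question = ask(f"What question would help verify this subclaim? {sub}")
        answer = ask(f"Answer briefly: {question}")

        # Step 3 (judgment): decide the subclaim given the Q/A exchange.
        verdict = ask(
            f"Given Q: {question} and A: {answer}, "
            f"is the subclaim '{sub}' true or false?"
        )
        verdicts.append("true" in verdict.lower())

    # Aggregation rule (assumption): label the claim true only if every
    # subclaim is supported; otherwise false.
    return "true" if verdicts and all(verdicts) else "false"
```

In practice, each `ask` call would carry the few-shot demonstration examples mentioned in the abstract, so that the model follows the decomposition and verification format shown in the demonstrations.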