JustiLM: Few-shot justification generation for explainable fact-checking of real-world claims
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9441 https://ink.library.smu.edu.sg/context/sis_research/article/10441/viewcontent/2024.tacl_1.19_pvoa_cc_by.pdf
Institution: Singapore Management University
Summary: In fact-checking, a justification is an explanation that supports the verdict assigned to a claim. However, the task of justification generation has previously been oversimplified as summarization of the fact-check article authored by professional fact-checkers. In this work, we propose a realistic approach that generates justifications from retrieved evidence. We present ExClaim, a new benchmark dataset for Explainable Claim verification, and introduce JustiLM, a novel few-shot retrieval-augmented language model that learns justification generation by leveraging fact-check articles as an auxiliary resource during training. Our results show that, in the few-shot setting, JustiLM outperforms in-context learning (ICL)-enabled LMs, including Flan-T5 and Llama2, as well as the retrieval-augmented model Atlas. JustiLM also shows promising performance compared to GPT-4. Extending JustiLM to joint verdict prediction and justification generation improves verdict prediction by a large margin.