Towards explainable and semantically coherent claim extraction for an automated fact-checker
Misinformation and fake news spread rapidly through online social media platforms. Although automatic fact-checkers and LLMs like ChatGPT have become popular and appear to be a promising solution for detecting fake news, these models still have some limitations regarding their reliance on pre-existing...
Saved in:
Main Author: Yoswara, Jocelyn Valencia
Other Authors: Erry Gunawan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/176481
Institution: Nanyang Technological University
Similar Items
- Reinforcement retrieval leveraging fine-grained feedback for fact checking news claims with Black-Box LLM
  by: ZHANG, Xuan, et al.
  Published: (2023)
- PAT 3: An extensible architecture for building multi-domain model checkers
  by: Liu, Y., et al.
  Published: (2013)
- Finding an optimal solution for the game of checkers
  by: Delena, Raymund E., et al.
  Published: (1996)
- Can consumers' scepticism be mitigated by claim objectivity and claim extremity?
  by: Soo, J.T.
  Published: (2013)
- SYNTACS: A spelling and grammar checker geared towards computer science
  by: Ang, Judy, et al.
  Published: (1993)