Believing the bot: examining what makes us trust large language models (LLMs) for political information
Affective polarisation, a measure of hostility towards members of opposing political parties, has been widening divisions among Americans. Our research investigates the potential of Large Language Models (LLMs), with their unique ability to tailor responses to users' prompts in natural langua...
Main Authors: Deng, Nicholas Yi Dar; Ong, Faith Jia Xuan; Lau, Dora Zi Cheng
Other Authors: Saifuddin Ahmed
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/174384
Institution: Nanyang Technological University
Similar Items
- Intelligent recruitment system using deep learning with ChatGPT
  by: Lim, Timothy Zhong Zheng
  Published: (2024)
- Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning
  by: Yeow Huat Jonathan Sim
  Published: (2023)
- Demystifying faulty code: Step-by-step reasoning for explainable fault localization
  by: Widyasari, Ratnadira, et al.
  Published: (2024)
- ChatGPT and its robustness, fairness, trustworthiness and impact
  by: Muhammad Akmal Bin Rahmat
  Published: (2024)
- A cross-era discourse on ChatGPT's influence in higher education through the lens of John Dewey and Benjamin Bloom
  by: Mandai, Koki, et al.
  Published: (2024)