Believing the bot: examining what makes us trust large language models (LLMs) for political information

Affective polarisation, a measure of hostility towards members of opposing political parties, has been widening divisions among Americans. Our research investigates the potential of Large Language Models (LLMs), with their unique ability to tailor responses to users' prompts in natural language, to foster consensus between Republicans and Democrats. Despite their growing usage, academic research on user engagement with LLMs for political purposes remains scarce. Employing an online survey experiment, we exposed participants to stimuli explaining opposing political views and how the chatbot generated its responses. Our study measured participants' trust in the chatbot and their levels of affective polarisation. The results suggest that explanations increased trust among weak Democrats but decreased it among weak Republicans and strong Democrats, while transparency diminished trust only among strong Republicans. Notably, perceived bias in ChatGPT mediated the relationship between partisanship strength and trust for both parties, and the relationship between partisanship strength and affective polarisation for Republicans. Additionally, the strength of issue involvement significantly moderated the bias-trust relationship. These findings indicate that LLMs are most effective when they address issues of strong personal relevance and when the chatbot's political neutrality is emphasised to users.

Bibliographic Details
Main Authors: Deng, Nicholas Yi Dar; Ong, Faith Jia Xuan; Lau, Dora Zi Cheng
Other Authors: Saifuddin Ahmed
Format: Final Year Project (FYP)
Degree: Bachelor's degree
School: Wee Kim Wee School of Communication and Information
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Arts and Humanities
Transparency
Trust
LLM
ChatGPT
Polarisation
AI
Republican
Democrat
Justification
Politics
Online Access:https://hdl.handle.net/10356/174384
Citation: Deng, N. Y. D., Ong, F. J. X. & Lau, D. Z. C. (2024). Believing the bot: examining what makes us trust large language models (LLMs) for political information. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174384
Institution: Nanyang Technological University