Believing the bot: examining what makes us trust large language models (LLMs) for political information
Main Authors:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174384
Institution: Nanyang Technological University
Summary: Affective polarisation, a measure of hostility towards members of opposing political parties, has been widening divisions among Americans. Our research investigates the potential of Large Language Models (LLMs), with their unique ability to tailor responses to users' prompts in natural language, to foster consensus between Republicans and Democrats. Despite their growing usage, academic research on user engagement with LLMs for political purposes remains scarce. Employing an online survey experiment, we exposed participants to stimuli explaining opposing political views and how the chatbot generated its responses. Our study measured participants' trust in the chatbot and their levels of affective polarisation. The results suggest that explanations increased trust among weak Democrats but decreased it among weak Republicans and strong Democrats. Transparency diminished trust only among strong Republicans. Notably, perceived bias in ChatGPT mediated the relationship between partisanship strength and trust for both parties, and between partisanship strength and affective polarisation for Republicans. Additionally, the strength of issue involvement was a significant moderator of the bias-trust relationship. These findings indicate that LLMs are most effective when they address issues of strong personal relevance and when the chatbot's political neutrality is emphasised to users.