Bias problems in large language models and how to mitigate them
Pretrained Language Models (PLMs) like ChatGPT have become integral to various industries, revolutionising applications from customer service to software development. However, these PLMs are often trained on vast, unmoderated datasets, which may contain social biases that can be propagated in the m...
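To make the abstract's claim concrete, the sketch below shows one common way such social bias is surfaced in practice: comparing the probabilities a masked language model assigns to gendered pronouns in occupational templates. This is an illustration only, not code from the project; the model choice (bert-base-uncased) and the templates are assumptions.

```python
# A minimal sketch of one common bias probe for masked language models:
# compare the probability the model assigns to gendered pronouns in
# occupational templates. The model and templates below are illustrative
# assumptions, not taken from the project.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]

for template in templates:
    # Restrict scoring to the two pronouns so the scores are directly comparable.
    results = fill_mask(template, targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    print(f"{template}  he={scores['he']:.4f}  she={scores['she']:.4f}")
```

A strong skew between the two scores that flips direction between "doctor" and "nurse" is the kind of propagated social bias the abstract describes; mitigation approaches then typically filter or rebalance the training data, or fine-tune the model against such probes.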
Main Author: Ong, Adrian Zhi Ying
Other Authors: Luu Anh Tuan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/181163
Institution: Nanyang Technological University
Similar Items
- She elicits requirements and he tests: Software engineering gender bias in large language models
  by: TREUDE, Christoph, et al.
  Published: (2023)
- Enhancing online safety: leveraging large language models for community moderation in Singlish dialect
  by: Goh, Zheng Ying
  Published: (2024)
- QuantfolioX: portfolio management application using large language model technology
  by: Teo, Charlotte Xuan Qin
  Published: (2024)
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)
- Punctuation restoration for speech transcripts using large language models
  by: Liu, Changsong
  Published: (2024)