Enhancing contextual understanding in NLP: adapting state-of-the-art models for improved sentiment analysis of informal language

In the ever-changing landscape of digital communication, social media has given rise to a vast corpus of user-generated content. This content is uniquely characterised by its informal language, including slang, emojis, and ephemeral expressions. Traditional Natural Language Processing (NLP) models often fall short in effectively analysing sentiment in this domain. This study shows that advanced transformer models, notably GPT-3.5 Turbo, RoBERTa, and XLM-R, when fine-tuned on relevant datasets, have the potential to surpass traditional models in sentiment classification tasks. This paper adapts and evaluates these state-of-the-art models and aims to demonstrate, through a comparative analysis, that large language models which leverage sophisticated attention mechanisms and undergo extensive pre-training exhibit a remarkable ability to navigate the nuanced, context-rich landscape of social media language, leading to significant improvements in sentiment analysis tasks. The implications of these findings may extend beyond technical advancements, as they underscore a critical shift in the NLP field towards adopting models that are inherently more adept at processing the complexity and dynamism of digital communication.
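The abstract describes fine-tuning RoBERTa and XLM-R on informal-language corpora for sentiment classification. As a rough illustration of what such a fine-tuning pipeline might look like (a minimal sketch, not the project's actual code: the roberta-base checkpoint, the tweet_eval dataset, and all hyperparameters below are assumptions), one could use the Hugging Face transformers and datasets libraries:

```python
# Illustrative sketch only (not the project's code): fine-tuning a RoBERTa
# checkpoint for sentiment classification of informal social-media text.
# The checkpoint, the tweet_eval dataset, and the hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "roberta-base"   # swap for "xlm-roberta-base" to try XLM-R
NUM_LABELS = 3                # negative / neutral / positive

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# tweet_eval's "sentiment" subset is a common benchmark of informal tweets;
# the project's own datasets may differ.
dataset = load_dataset("tweet_eval", "sentiment")

def tokenize(batch):
    # Pad/truncate short, slang-heavy posts to a fixed length.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="roberta-sentiment",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)

trainer.train()
# Reports eval loss; pass a compute_metrics function to obtain accuracy/F1.
print(trainer.evaluate(tokenized["test"]))
```

Substituting the XLM-R checkpoint in MODEL_NAME would give the multilingual counterpart; the GPT-3.5 Turbo comparison mentioned in the abstract would instead go through the OpenAI fine-tuning API rather than this local training loop.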

Bibliographic Details
Main Author: Sneha Ravisankar
Other Authors: Vidya Sudarshan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Deep learning; Natural language processing
Online Access:https://hdl.handle.net/10356/175379
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-175379
School: School of Computer Science and Engineering
Contact: vidya.sudarshan@ntu.edu.sg
Degree: Bachelor's degree
Date available: 2024-04-23
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore
File format: application/pdf
Citation: Sneha Ravisankar (2024). Enhancing contextual understanding in NLP: adapting state-of-the-art models for improved sentiment analysis of informal language. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175379