Enhancing contextual understanding in NLP: adapting state-of-the-art models for improved sentiment analysis of informal language
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175379
Institution: Nanyang Technological University
Summary: In the ever-changing landscape of digital communication, social media has given rise to a vast corpus of user-generated content, uniquely characterised by its informal language, including slang, emojis, and ephemeral expressions. Traditional Natural Language Processing (NLP) models often fall short when analysing sentiment in this domain. This study shows that advanced transformer models, notably GPT-3.5 Turbo, RoBERTa, and XLM-R, have the potential to surpass traditional models in sentiment classification when fine-tuned on relevant datasets.
This paper adapts and evaluates these state-of-the-art models, aiming to demonstrate through a comparative analysis that large language models, which leverage sophisticated attention mechanisms and undergo extensive pre-training, exhibit a remarkable ability to navigate the nuanced, context-rich landscape of social media language, leading to significant improvements in sentiment analysis tasks.
The implications of these findings may extend beyond technical advancement: they underscore a critical shift in the NLP field towards models that are inherently better suited to processing the complexity and dynamism of digital communication.
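The attention mechanism mentioned in the summary can be illustrated with a short, dependency-free sketch of scaled dot-product attention, the core operation inside transformer models such as RoBERTa and XLM-R. The toy vectors below are illustrative assumptions, not code or data from the study:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    outputs = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors
        outputs.append([sum(w * v[i] for w, v in zip(weights, V))
                        for i in range(len(V[0]))])
    return outputs

# Toy example: one query attending over two key/value pairs
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
out = attention(Q, K, V)
```

Because the query aligns with the first key, the attention weights favour the first value vector, so the output is pulled towards it; this context-dependent weighting is what lets such models resolve the nuances of informal language.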