Knowledge-based BERT word embedding fine-tuning for emotion recognition
Emotion recognition has received considerable attention in recent years with the growing popularity of social media. It is noted, however, that state-of-the-art language models such as Bidirectional Encoder Representations from Transformers (BERT) may not produce the best performance in emotion recogni...
Main Authors: | Zhu, Zixiao; Mao, Kezhi |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Article |
Language: | English |
Published: | 2023 |
Subjects: | Engineering::Computer science and engineering; Emotion Recognition; BERT |
Online Access: | https://hdl.handle.net/10356/171308 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-171308 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-171308 2023-10-20T05:30:26Z
Knowledge-based BERT word embedding fine-tuning for emotion recognition
Zhu, Zixiao; Mao, Kezhi
School of Electrical and Electronic Engineering; Interdisciplinary Graduate School (IGS); Institute of Catastrophe Risk Management
Engineering::Computer science and engineering; Emotion Recognition; BERT
Emotion recognition has received considerable attention in recent years with the growing popularity of social media. It is noted, however, that state-of-the-art language models such as Bidirectional Encoder Representations from Transformers (BERT) may not produce the best performance in emotion recognition. We found that the main cause of the problem is that the embeddings of emotional words from the pre-trained BERT model may not exhibit high between-class difference and within-class similarity. While fine-tuning the BERT model is common practice when it is applied to specific tasks, this may not be practical in emotion recognition because most datasets are small and many texts are short and noisy, containing little useful contextual information. In this paper, we propose to use the knowledge of an emotion vocabulary to fine-tune the embeddings of emotional words. As a separate module independent of the embedding learning model, the fine-tuning model aims to produce emotional word embeddings with improved within-class similarity and between-class difference. By combining the emotionally discriminative fine-tuned embeddings with the contextual information-rich embeddings from the pre-trained BERT model, the emotional features underlying the texts can be captured more effectively in the subsequent feature learning module, which in turn leads to improved emotion recognition performance. The knowledge-based word embedding fine-tuning model is tested on five emotion recognition datasets, and the results and analysis demonstrate the effectiveness of the proposed method.
National Research Foundation (NRF). This research is supported by the National Research Foundation Singapore (NRF) under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
2023-10-20T05:30:26Z 2023-10-20T05:30:26Z 2023 Journal Article
Zhu, Z. & Mao, K. (2023). Knowledge-based BERT word embedding fine-tuning for emotion recognition. Neurocomputing, 552, 126488. https://dx.doi.org/10.1016/j.neucom.2023.126488
ISSN: 0925-2312. Handle: https://hdl.handle.net/10356/171308. DOI: 10.1016/j.neucom.2023.126488. Scopus: 2-s2.0-85165006237. Volume 552, Article 126488. Language: en. Journal: Neurocomputing.
© 2023 Elsevier B.V. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Computer science and engineering; Emotion Recognition; BERT |
description |
Emotion recognition has received considerable attention in recent years with the growing popularity of social media. It is noted, however, that state-of-the-art language models such as Bidirectional Encoder Representations from Transformers (BERT) may not produce the best performance in emotion recognition. We found that the main cause of the problem is that the embeddings of emotional words from the pre-trained BERT model may not exhibit high between-class difference and within-class similarity. While fine-tuning the BERT model is common practice when it is applied to specific tasks, this may not be practical in emotion recognition because most datasets are small and many texts are short and noisy, containing little useful contextual information. In this paper, we propose to use the knowledge of an emotion vocabulary to fine-tune the embeddings of emotional words. As a separate module independent of the embedding learning model, the fine-tuning model aims to produce emotional word embeddings with improved within-class similarity and between-class difference. By combining the emotionally discriminative fine-tuned embeddings with the contextual information-rich embeddings from the pre-trained BERT model, the emotional features underlying the texts can be captured more effectively in the subsequent feature learning module, which in turn leads to improved emotion recognition performance. The knowledge-based word embedding fine-tuning model is tested on five emotion recognition datasets, and the results and analysis demonstrate the effectiveness of the proposed method. |
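The record describes the method only at a high level and includes no code. The sketch below is an unofficial, minimal reading of that description: a small lexicon-driven module refines the embeddings of emotion words so that same-emotion words become more similar and different-emotion words less similar, and the refined vectors are then concatenated with contextual BERT embeddings before a downstream feature-learning module. The toy lexicon, the similarity-based loss, the first-subtoken lookup, and the concatenation scheme are all simplifying assumptions, not the authors' exact model.

```python
# Illustrative sketch only (hypothetical names, not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

# Toy emotion lexicon standing in for a real emotion vocabulary with words
# grouped by emotion class. Purely illustrative.
EMOTION_LEXICON = {
    "joyful": 0, "delighted": 0, "cheerful": 0,    # joy
    "furious": 1, "irritated": 1, "enraged": 1,    # anger
    "terrified": 2, "anxious": 2, "panicked": 2,   # fear
}

class EmotionEmbeddingFinetuner(nn.Module):
    """Separate module that refines emotion-word embeddings so that words of the
    same emotion class become more similar and different classes less similar."""
    def __init__(self, init_embeddings: torch.Tensor):
        super().__init__()
        # Start from the pre-trained BERT input embeddings of the lexicon words.
        self.embeddings = nn.Parameter(init_embeddings.clone())

    def loss(self, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.embeddings, dim=-1)
        sim = z @ z.t()                                  # pairwise cosine similarity
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        diag = torch.eye(len(labels), dtype=torch.bool)
        within = sim[same & ~diag].mean()                # maximise within-class similarity
        between = sim[~same].mean()                      # minimise between-class similarity
        return between - within

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

words = list(EMOTION_LEXICON)
labels = torch.tensor([EMOTION_LEXICON[w] for w in words])
# Simplification: represent each lexicon word by its first sub-token embedding.
ids = tokenizer(words, add_special_tokens=False, padding=True,
                return_tensors="pt")["input_ids"][:, 0]
init = bert.get_input_embeddings()(ids).detach()

finetuner = EmotionEmbeddingFinetuner(init)
optim = torch.optim.Adam(finetuner.parameters(), lr=1e-2)
for _ in range(100):            # fine-tune independently of any downstream classifier
    optim.zero_grad()
    finetuner.loss(labels).backward()
    optim.step()

# Downstream use: concatenate the refined emotion-word vector (zeros for
# non-lexicon tokens) with the contextual BERT embedding of each token; the
# result would feed the subsequent feature-learning module / classifier.
sentence = "She was absolutely delighted with the result."
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    contextual = bert(**enc).last_hidden_state[0]        # (seq_len, 768)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
combined = []
for i, tok in enumerate(tokens):
    if tok in EMOTION_LEXICON:
        emo = finetuner.embeddings[words.index(tok)].detach()
    else:
        emo = torch.zeros(contextual.size(-1))
    combined.append(torch.cat([contextual[i], emo]))
features = torch.stack(combined)                         # (seq_len, 1536)
print(features.shape)
```

In the abstract's framing, the fine-tuning happens outside the embedding learning model, so the pre-trained BERT weights stay frozen here; only the small lexicon-word embedding table is updated, which is why the approach can work even when the target emotion dataset is small. |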
author2 |
School of Electrical and Electronic Engineering |
format |
Article |
author |
Zhu, Zixiao; Mao, Kezhi |
author_sort |
Zhu, Zixiao |
title |
Knowledge-based BERT word embedding fine-tuning for emotion recognition |
publishDate |
2023 |
url |
https://hdl.handle.net/10356/171308 |