Advanced multimodal emotion recognition for Javanese language using deep learning
Main Authors:
Format: Article
Language: English
Published: IAES, 2024
Subjects:
Online Access: http://irep.iium.edu.my/114892/7/114892_%20Advanced%20multimodal%20emotion.pdf
http://irep.iium.edu.my/114892/8/114892_%20Advanced%20multimodal%20emotion_Scopus.pdf
http://irep.iium.edu.my/114892/
https://section.iaesonline.com/index.php/IJEEI/article/view/5662
http://dx.doi.org/10.52549/ijeei.v12i3.5662
Institution: Universiti Islam Antarabangsa Malaysia
Summary: This research develops a robust emotion recognition system for the Javanese language using multimodal audio and video datasets, addressing the limited advancements in emotion recognition specific to this language. Three models were explored to enhance emotional feature extraction: the Spectrogram-Image Model (Model 1), which converts audio inputs into spectrogram images and integrates them with facial images for emotion labeling; the Convolutional-MFCC Model (Model 2), which leverages convolutional techniques for image processing and Mel-frequency cepstral coefficients for audio; and the Multimodal Feature-Extraction Model (Model 3), which processes video and audio features independently before integrating them for emotion recognition. Comparative analysis shows that the Multimodal Feature-Extraction Model achieves the highest accuracy of 93%, surpassing the Convolutional-MFCC Model at 85% and the Spectrogram-Image Model at 71%. These findings demonstrate that effective multimodal integration, particularly through separate feature extraction per modality, significantly enhances emotion recognition accuracy. This research supports improved communication systems and offers deeper insights into Javanese emotional expressions, with potential applications in human-computer interaction, healthcare, and cultural studies. Additionally, it contributes to the advancement of sophisticated emotion recognition technologies.
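The summary describes the three pipelines only at a high level. As a rough illustration of the best-performing design, the late-fusion Multimodal Feature-Extraction Model (Model 3), the following is a minimal PyTorch sketch in which audio and video features are extracted by separate branches and concatenated before classification. All layer sizes, input shapes (40 MFCC coefficients over 200 frames, 64x64 face crops), and the six-class emotion output are assumptions made for illustration; the record above does not specify the authors' exact architecture.

```python
# Hypothetical late-fusion sketch of Model 3 (separate audio/video feature
# extraction, then integration). Shapes, layer widths, and the number of
# emotion classes are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class AudioBranch(nn.Module):
    """Extracts a feature vector from MFCC-style audio input (assumed n_mfcc x frames)."""
    def __init__(self, n_mfcc=40, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # pool over the time axis
        )
        self.fc = nn.Linear(64, hidden)

    def forward(self, x):                          # x: (batch, n_mfcc, frames)
        return self.fc(self.conv(x).squeeze(-1))

class VideoBranch(nn.Module):
    """Extracts a feature vector from a facial image (assumed 3 x 64 x 64 crop)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global pooling over the face crop
        )
        self.fc = nn.Linear(64, hidden)

    def forward(self, x):                          # x: (batch, 3, 64, 64)
        return self.fc(self.conv(x).flatten(1))

class LateFusionEmotionNet(nn.Module):
    """Concatenates the independently extracted audio and video features, then classifies."""
    def __init__(self, n_emotions=6, hidden=128):
        super().__init__()
        self.audio = AudioBranch(hidden=hidden)
        self.video = VideoBranch(hidden=hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, n_emotions),
        )

    def forward(self, audio, video):
        fused = torch.cat([self.audio(audio), self.video(video)], dim=1)
        return self.head(fused)                    # emotion logits

# Smoke test with random tensors standing in for one batch of speech/face data.
model = LateFusionEmotionNet()
logits = model(torch.randn(4, 40, 200), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 6])
```

The point of the sketch is the design choice the summary highlights: each modality keeps its own feature extractor, and integration happens only on the resulting feature vectors, which is what distinguishes Model 3 from the spectrogram-image and convolutional-MFCC pipelines.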