Advanced multimodal emotion recognition for Javanese language using deep learning

This research develops a robust emotion recognition system for the Javanese language using multimodal audio and video datasets, addressing the limited advancements in emotion recognition specific to this language. Three models were explored to enhance emotional feature extraction: the Spectrogram-Image Model (Model 1), which converts audio inputs into spectrogram images and integrates them with facial images for emotion labeling; the Convolutional-MFCC Model (Model 2), which leverages convolutional techniques for image processing and Mel-frequency cepstral coefficients for audio; and the Multimodal Feature-Extraction Model (Model 3), which independently processes video and audio features before integrating them for emotion recognition. Comparative analysis shows that the Multimodal Feature-Extraction Model achieves the highest accuracy of 93%, surpassing the Convolutional-MFCC Model at 85% and the Spectrogram-Image Model at 71%. These findings demonstrate that effective multimodal integration, particularly through separate feature extraction, significantly enhances emotion recognition accuracy. This research improves communication systems and offers deeper insights into Javanese emotional expressions, with potential applications in human-computer interaction, healthcare, and cultural studies. Additionally, it contributes to the advancement of sophisticated emotion recognition technologies.
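
The article itself is not reproduced here and includes no code, so the two sketches below are illustrative only. First, a minimal audio front-end of the kind Models 1 and 2 describe: a log-mel spectrogram that can be treated as an image, and an MFCC vector. The sample rate, n_mels, n_mfcc, and the file path are assumptions, not values taken from the paper.

```python
import numpy as np
import librosa

def audio_features(wav_path, sr=16000):
    # Load and resample the clip (wav_path is a hypothetical example file).
    y, sr = librosa.load(wav_path, sr=sr)
    # Model 1-style input: a log-mel spectrogram that can be saved as an image.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Model 2-style input: MFCCs averaged over time into a fixed-length vector.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    return log_mel, mfcc.mean(axis=1)
```

Second, a minimal late-fusion network of the kind Model 3 describes, where video and audio features pass through separate branches and are concatenated before classification. All class and parameter names, layer sizes, feature dimensions, and the six-class emotion label set are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionEmotionNet(nn.Module):
    """Illustrative late-fusion sketch: separate video/audio branches, then concatenation."""

    def __init__(self, video_dim=512, audio_dim=40, num_emotions=6):
        super().__init__()
        # Video branch: maps pre-extracted facial-image features to an embedding.
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 128), nn.ReLU(), nn.Dropout(0.3))
        # Audio branch: maps MFCC (or other acoustic) features to an embedding.
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU(), nn.Dropout(0.3))
        # Fusion head: classifies the concatenated embeddings into emotion classes.
        self.classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, num_emotions))

    def forward(self, video_feats, audio_feats):
        v = self.video_branch(video_feats)
        a = self.audio_branch(audio_feats)
        fused = torch.cat([v, a], dim=-1)  # late fusion by concatenation
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real features.
model = LateFusionEmotionNet()
logits = model(torch.randn(8, 512), torch.randn(8, 40))
print(logits.shape)  # torch.Size([8, 6])
```

Concatenating embeddings produced by separate branches is one common way to realize "separate feature extraction before integration"; other fusion schemes (for example, averaging per-modality logits) are equally possible.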


Bibliographic Details
Main Authors: Arifin, Fatchul; Nasuha, Aris; Priambodo, Ardy Seto; Winursito, Anggun; Gunawan, Teddy Surya
Format: Article
Language: English
Published: IAES 2024
Subjects: TK7885 Computer engineering
Online Access: http://irep.iium.edu.my/114892/7/114892_%20Advanced%20multimodal%20emotion.pdf
http://irep.iium.edu.my/114892/8/114892_%20Advanced%20multimodal%20emotion_Scopus.pdf
http://irep.iium.edu.my/114892/
https://section.iaesonline.com/index.php/IJEEI/article/view/5662
http://dx.doi.org/10.52549/ijeei.v12i3.5662
Institution: Universiti Islam Antarabangsa Malaysia
id my.iium.irep.114892
record_format dspace
spelling my.iium.irep.114892, last updated 2024-10-08T01:05:45Z, http://irep.iium.edu.my/114892/
Citation: Arifin, Fatchul and Nasuha, Aris and Priambodo, Ardy Seto and Winursito, Anggun and Gunawan, Teddy Surya (2024) Advanced multimodal emotion recognition for Javanese language using deep learning. Indonesian Journal of Electrical Engineering and Informatics (IJEEI), 12 (3). pp. 503-515. ISSN 2089-3272. Peer-reviewed article (application/pdf, English), published by IAES, September 2024. Subject: TK7885 Computer engineering. DOI: http://dx.doi.org/10.52549/ijeei.v12i3.5662
institution Universiti Islam Antarabangsa Malaysia
building IIUM Library
collection Institutional Repository
continent Asia
country Malaysia
content_provider International Islamic University Malaysia
content_source IIUM Repository (IREP)
url_provider http://irep.iium.edu.my/
language English
topic TK7885 Computer engineering
publisher IAES
publishDate 2024
url http://irep.iium.edu.my/114892/7/114892_%20Advanced%20multimodal%20emotion.pdf
http://irep.iium.edu.my/114892/8/114892_%20Advanced%20multimodal%20emotion_Scopus.pdf
http://irep.iium.edu.my/114892/
https://section.iaesonline.com/index.php/IJEEI/article/view/5662
http://dx.doi.org/10.52549/ijeei.v12i3.5662
_version_ 1814042717964992512