Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing human interaction with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information.
Main Authors: Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun
Other Authors: School of Materials Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering; Convolutional neural network; Facial recognition
Online Access: https://hdl.handle.net/10356/174702
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-174702
record_format: dspace
spelling: sg-ntu-dr.10356-174702 2024-04-12T15:47:58Z
Title: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
Authors: Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun
Affiliation: School of Materials Science and Engineering
Subjects: Engineering; Convolutional neural network; Facial recognition
Abstract: Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing human interaction with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, and features a first-of-its-kind bidirectional triboelectric strain and vibration sensor that allows verbal and non-verbal expression data to be sensed and combined. It is fully integrated with a data-processing circuit for wireless data transfer, enabling real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the wearer is masked, and a digital concierge application is demonstrated in a VR environment.
Version: Published version
Funding: This work was supported by National Research Foundation of Korea (NRF) grants funded by the Korean government (NRF-2020R1A2C2102842, NRF-2021R1A4A3033149, NRF-RS-2023-00302525), by the Fundamental Research Program of the Korea Institute of Materials Science (PNK7630), and by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean government (MOTIE) (P0023703, HRD Program for Industrial Innovation).
Dates: accessioned 2024-04-08T02:42:22Z; available 2024-04-08T02:42:22Z; issued 2024
Type: Journal Article
Citation: Lee, J. P., Jang, H., Jang, Y., Song, H., Lee, S., Lee, P. S. & Kim, J. (2024). Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface. Nature Communications, 15(1), 530. https://dx.doi.org/10.1038/s41467-023-44673-2
ISSN: 2041-1723
Handle: https://hdl.handle.net/10356/174702
DOI: 10.1038/s41467-023-44673-2
PMID: 38225246
Scopus: 2-s2.0-85182473640
Issue: 1; Volume: 15; Article number: 530
Language: en
Journal: Nature Communications
Rights: © The Author(s) 2024. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering; Convolutional neural network; Facial recognition
spellingShingle: Engineering; Convolutional neural network; Facial recognition; Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun; Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
description: Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing human interaction with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, and features a first-of-its-kind bidirectional triboelectric strain and vibration sensor that allows verbal and non-verbal expression data to be sensed and combined. It is fully integrated with a data-processing circuit for wireless data transfer, enabling real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the wearer is masked, and a digital concierge application is demonstrated in a VR environment.
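The description above amounts to a multi-modal classification pipeline: two triboelectric sensing channels (facial strain for non-verbal expression, voice-induced vibration for verbal expression) are streamed wirelessly and classified by a machine-learning model (the record's subject tags name a convolutional neural network). As a rough, hedged sketch only, and not the authors' code, a late-fusion 1D-CNN over the two channels might look like the following; the window length, layer widths, and seven emotion classes are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of late-fusion multi-modal emotion classification.
# NOT the PSiFI authors' implementation; window size (256 samples),
# layer widths, and n_classes=7 are placeholder assumptions.
import torch
import torch.nn as nn

class MultiModalEmotionNet(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        # One small 1D-CNN branch per modality: strain (non-verbal)
        # and vibration (verbal) signals from the triboelectric sensor.
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (batch, 32)
            )
        self.strain_branch = branch()
        self.vibration_branch = branch()
        # Late fusion: concatenate per-modality features, then classify.
        self.head = nn.Linear(64, n_classes)

    def forward(self, strain: torch.Tensor, vibration: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.strain_branch(strain), self.vibration_branch(vibration)], dim=1
        )
        return self.head(fused)

# Example: a batch of 8 sensor windows, 256 samples each, per modality.
model = MultiModalEmotionNet()
logits = model(torch.randn(8, 1, 256), torch.randn(8, 1, 256))
print(logits.shape)  # torch.Size([8, 7])
```

Keeping a separate convolutional branch per modality lets each channel learn its own features before fusion at the classification head; this mirrors the paper's stated idea of combining verbal and non-verbal data, not a specific architecture reported in it.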
author2: School of Materials Science and Engineering
author_facet: School of Materials Science and Engineering; Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun
format: Article
author: Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun
author_sort: Lee, Jin Pyo
title: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
title_short: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
title_full: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
title_fullStr: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
title_full_unstemmed: Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
title_sort: encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface
publishDate: 2024
url: https://hdl.handle.net/10356/174702
_version_: 1806059836812558336