Facial emotion recognition with noisy multi-task annotations
Human emotions can be inferred from facial expressions. However, the annotations of facial expressions are often highly noisy in common emotion coding models, including both categorical and dimensional ones. To reduce the human labelling effort for multi-task labels, we introduce a new problem of facial emotion recognition with noisy multi-task annotations. For this new problem, we suggest a formulation from a joint distribution matching view, which aims to learn more reliable correlations between raw facial images and multi-task labels, thereby reducing the influence of noisy labels. In our formulation, we exploit a new method that enables emotion prediction and joint distribution learning in a unified adversarial learning game. Extensive experiments study realistic setups of the suggested new problem and demonstrate the clear superiority of the proposed method over state-of-the-art competing methods on both the synthetically noisy-labelled CIFAR-10 dataset and the practically noisy multi-task labelled RAF and AffectNet datasets.
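The record contains no code, so as a rough illustration of the adversarial joint-distribution matching idea described in the abstract, the following PyTorch sketch pairs a multi-task emotion predictor (categorical class plus valence/arousal) with a discriminator that tries to tell annotated (noisy) image-label pairs from predicted ones. All names (Predictor, Discriminator, adversarial_step), the backbone, the input size, and the loss choices are hypothetical assumptions for illustration only, not the authors' implementation.

import torch
import torch.nn as nn

class Predictor(nn.Module):
    # Maps a face image to a categorical emotion distribution and a 2-D valence/arousal estimate.
    def __init__(self, num_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, num_classes)   # categorical task
        self.va_head = nn.Linear(256, 2)              # dimensional task (valence, arousal)

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h).softmax(dim=-1), torch.tanh(self.va_head(h))

class Discriminator(nn.Module):
    # Scores (image, categorical label, dimensional label) triples: annotated pairs vs. predicted pairs.
    def __init__(self, num_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * 64 * 64 + num_classes + 2, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x, y_cls, y_va):
        return self.net(torch.cat([x.flatten(1), y_cls, y_va], dim=1))

def adversarial_step(pred, disc, opt_p, opt_d, x, y_cls_noisy, y_va_noisy):
    # One round of the two-player game: the discriminator separates annotated (noisy) pairs from
    # predicted pairs, while the predictor tries to make its pairs indistinguishable, pulling the
    # predictions towards the joint distribution of images and multi-task labels.
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():                        # freeze predictor for the discriminator update
        p_cls, p_va = pred(x)
    d_real = disc(x, y_cls_noisy, y_va_noisy)    # annotated pairs -> target 1
    d_fake = disc(x, p_cls, p_va)                # predicted pairs -> target 0
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    p_cls, p_va = pred(x)                        # predictor update: fool the discriminator
    d_fake = disc(x, p_cls, p_va)
    loss_p = bce(d_fake, torch.ones_like(d_fake))
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
    return loss_d.item(), loss_p.item()

For example, with x of shape (batch, 3, 64, 64), y_cls_noisy as one-hot vectors of shape (batch, 7), and y_va_noisy of shape (batch, 2), the step would be driven by two torch.optim.Adam optimizers, one over the Predictor parameters and one over the Discriminator parameters.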
Main Authors: ZHANG, S.; HUANG, Zhiwu; PAUDEL, D.P.; VAN, Gool L.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/6394
https://ink.library.smu.edu.sg/context/sis_research/article/7397/viewcontent/Facial_Emotion_Recognition_with_Noisy_Multi_task_Annotations.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-7397
record_format: dspace
spelling: sg-smu-ink.sis_research-7397 2021-11-23T02:32:20Z
Facial emotion recognition with noisy multi-task annotations
ZHANG, S.; HUANG, Zhiwu; PAUDEL, D.P.; VAN, Gool L.
Human emotions can be inferred from facial expressions. However, the annotations of facial expressions are often highly noisy in common emotion coding models, including both categorical and dimensional ones. To reduce the human labelling effort for multi-task labels, we introduce a new problem of facial emotion recognition with noisy multi-task annotations. For this new problem, we suggest a formulation from a joint distribution matching view, which aims to learn more reliable correlations between raw facial images and multi-task labels, thereby reducing the influence of noisy labels. In our formulation, we exploit a new method that enables emotion prediction and joint distribution learning in a unified adversarial learning game. Extensive experiments study realistic setups of the suggested new problem and demonstrate the clear superiority of the proposed method over state-of-the-art competing methods on both the synthetically noisy-labelled CIFAR-10 dataset and the practically noisy multi-task labelled RAF and AffectNet datasets.
2021-01-01T08:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/6394
info:doi/10.1109/WACV48630.2021.00007
https://ink.library.smu.edu.sg/context/sis_research/article/7397/viewcontent/Facial_Emotion_Recognition_with_Noisy_Multi_task_Annotations.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Databases and Information Systems; Graphics and Human Computer Interfaces
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Databases and Information Systems; Graphics and Human Computer Interfaces
spellingShingle: Databases and Information Systems; Graphics and Human Computer Interfaces; ZHANG, S.; HUANG, Zhiwu; PAUDEL, D.P.; VAN, Gool L.; Facial emotion recognition with noisy multi-task annotations
description: Human emotions can be inferred from facial expressions. However, the annotations of facial expressions are often highly noisy in common emotion coding models, including both categorical and dimensional ones. To reduce the human labelling effort for multi-task labels, we introduce a new problem of facial emotion recognition with noisy multi-task annotations. For this new problem, we suggest a formulation from a joint distribution matching view, which aims to learn more reliable correlations between raw facial images and multi-task labels, thereby reducing the influence of noisy labels. In our formulation, we exploit a new method that enables emotion prediction and joint distribution learning in a unified adversarial learning game. Extensive experiments study realistic setups of the suggested new problem and demonstrate the clear superiority of the proposed method over state-of-the-art competing methods on both the synthetically noisy-labelled CIFAR-10 dataset and the practically noisy multi-task labelled RAF and AffectNet datasets.
format: text
author: ZHANG, S.; HUANG, Zhiwu; PAUDEL, D.P.; VAN, Gool L.
author_facet: ZHANG, S.; HUANG, Zhiwu; PAUDEL, D.P.; VAN, Gool L.
author_sort: ZHANG, S.
title: Facial emotion recognition with noisy multi-task annotations
title_short: Facial emotion recognition with noisy multi-task annotations
title_full: Facial emotion recognition with noisy multi-task annotations
title_fullStr: Facial emotion recognition with noisy multi-task annotations
title_full_unstemmed: Facial emotion recognition with noisy multi-task annotations
title_sort: facial emotion recognition with noisy multi-task annotations
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2021
url: https://ink.library.smu.edu.sg/sis_research/6394
https://ink.library.smu.edu.sg/context/sis_research/article/7397/viewcontent/Facial_Emotion_Recognition_with_Noisy_Multi_task_Annotations.pdf
_version_: 1770575952176218112