Unconstrained facial action unit detection via latent feature domain

Bibliographic Details
Main Authors: Shao, Zhiwen, Cai, Jianfei, Cham, Tat-Jen, Lu, Xuequan, Ma, Lizhuang
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Unconstrained Facial AU Detection; Domain Adaptation
Online Access:https://hdl.handle.net/10356/172649
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-172649
record_format dspace
spelling sg-ntu-dr.10356-172649 2024-05-14T07:26:29Z
title Unconstrained facial action unit detection via latent feature domain
author Shao, Zhiwen; Cai, Jianfei; Cham, Tat-Jen; Lu, Xuequan; Ma, Lizhuang
author2 School of Computer Science and Engineering
topic Engineering::Computer science and engineering; Unconstrained Facial AU Detection; Domain Adaptation
funding This work was supported in part by the National Key R&D Program of China under Grant 2019YFC1521104, in part by the National Natural Science Foundation of China under Grant 61972157, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20201346, in part by the Six Talent Peaks Project in Jiangsu Province under Grant 2015-DZXX-010, in part by the Zhejiang Lab under Grant 2020NB0AB01, in part by the Data Science & Artificial Intelligence Research Centre@NTU (DSAIR), in part by the Monash FIT Start-up Grant, and in part by the Fundamental Research Funds for the Central Universities under Grant 2021QN1072.
date_accessioned 2023-12-19T02:07:00Z
date_available 2023-12-19T02:07:00Z
date_issued 2021
type Journal Article
citation Shao, Z., Cai, J., Cham, T., Lu, X. & Ma, L. (2021). Unconstrained facial action unit detection via latent feature domain. IEEE Transactions on Affective Computing, 13(2), 1111-1126. https://dx.doi.org/10.1109/TAFFC.2021.3091331
issn 1949-3045
uri https://hdl.handle.net/10356/172649
doi 10.1109/TAFFC.2021.3091331
scopus 2-s2.0-85112400477
issue 2
volume 13
pages 1111-1126
language en
journal IEEE Transactions on Affective Computing
rights © 2021 IEEE. All rights reserved.
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Unconstrained Facial AU Detection
Domain Adaptation
spellingShingle Engineering::Computer science and engineering
Unconstrained Facial AU Detection
Domain Adaptation
Shao, Zhiwen
Cai, Jianfei
Cham, Tat-Jen
Lu, Xuequan
Ma, Lizhuang
Unconstrained facial action unit detection via latent feature domain
description Facial action unit (AU) detection in the wild is a challenging problem, due to the unconstrained variability in facial appearances and the lack of accurate annotations. Most existing methods depend on either impractical labor-intensive labeling or inaccurate pseudo labels. In this paper, we propose an end-to-end unconstrained facial AU detection framework based on domain adaptation, which transfers accurate AU labels from a constrained source domain to an unconstrained target domain by exploiting labels of AU-related facial landmarks. Specifically, we map a labeled source image and an unlabeled target image into a latent feature domain by combining the source landmark-related feature with the target landmark-free feature. Since this combination carries the source AU-related information and the target AU-free information, the latent feature domain with the transferred source label can be learned by maximizing target-domain AU detection performance. Moreover, we introduce a novel landmark adversarial loss that disentangles the landmark-free feature from the landmark-related feature by treating the adversarial learning as a multi-player minimax game. Our framework can also be naturally extended for use with target-domain pseudo AU labels. Extensive experiments show that our method soundly outperforms the lower bounds and upper bounds of the basic model, as well as state-of-the-art approaches, on challenging in-the-wild benchmarks. The code is available at https://github.com/ZhiwenShao/ADLD.
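
As a reading aid only, below is a minimal conceptual sketch, in PyTorch, of the cross-domain feature combination that the description above outlines (source landmark-related feature plus target landmark-free feature, supervised by the transferred source AU labels). It is not the authors' released ADLD implementation (see the GitHub link above); all module names, the encoder layout, the simple additive combination rule, and the omission of the landmark adversarial loss are simplifying assumptions made for illustration.

import torch
import torch.nn as nn

class FeatureDisentangler(nn.Module):
    """Splits an image encoding into a landmark-related part and a
    landmark-free part, so the two can be recombined across domains."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 2 * feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Two heads: one keeps landmark-related (AU-related) information,
        # the other keeps the remaining landmark-free information.
        self.landmark_head = nn.Conv2d(2 * feat_ch, feat_ch, 1)
        self.landmark_free_head = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.landmark_head(h), self.landmark_free_head(h)

class AUClassifier(nn.Module):
    """Predicts multi-label AU activations from a combined latent feature."""
    def __init__(self, feat_ch=64, num_aus=12):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, num_aus)
        )

    def forward(self, feat):
        return self.head(feat)  # logits; apply sigmoid for probabilities

# One illustrative training step with a labeled source image and an
# unlabeled target image (random tensors stand in for real data).
disentangler, au_classifier = FeatureDisentangler(), AUClassifier()
bce = nn.BCEWithLogitsLoss()
src_img, tgt_img = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
src_au_labels = torch.randint(0, 2, (1, 12)).float()

src_lm_feat, _ = disentangler(src_img)       # source landmark-related feature
_, tgt_lm_free_feat = disentangler(tgt_img)  # target landmark-free feature

# Latent-domain feature: source AU-related info + target AU-free info,
# supervised with the transferred source AU labels.
latent_feat = src_lm_feat + tgt_lm_free_feat  # combination rule is an assumption
loss = bce(au_classifier(latent_feat), src_au_labels)
loss.backward()
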
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Shao, Zhiwen
Cai, Jianfei
Cham, Tat-Jen
Lu, Xuequan
Ma, Lizhuang
format Article
author Shao, Zhiwen
Cai, Jianfei
Cham, Tat-Jen
Lu, Xuequan
Ma, Lizhuang
author_sort Shao, Zhiwen
title Unconstrained facial action unit detection via latent feature domain
title_short Unconstrained facial action unit detection via latent feature domain
title_full Unconstrained facial action unit detection via latent feature domain
title_fullStr Unconstrained facial action unit detection via latent feature domain
title_full_unstemmed Unconstrained facial action unit detection via latent feature domain
title_sort unconstrained facial action unit detection via latent feature domain
publishDate 2023
url https://hdl.handle.net/10356/172649
_version_ 1814047013500616704