Facial expression retargeting from human to avatar made easy

Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. However, these approaches require a tedious 3D modeling process, and the performance relies on the modelers' experience. In this article, we propose a brand-new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation. We first build low-dimensional latent spaces for the human and avatar facial expressions with a variational autoencoder. Then we construct correspondences between the two latent spaces guided by geometric and perceptual constraints. Specifically, we design geometric correspondences to reflect geometric matching and utilize a triplet data structure to express users' perceptual preference of avatar expressions. A user-friendly method is proposed to automatically generate triplets, allowing users to easily and efficiently annotate the correspondences. Using both geometric and perceptual correspondences, we train a network for expression domain translation from human to avatar. Extensive experimental results and user studies demonstrate that even nonprofessional users can apply our method to generate high-quality facial expression retargeting results with less time and effort.
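The method this record describes embeds expressions into low-dimensional latent spaces and aligns them using, among other things, a triplet data structure that encodes users' perceptual preferences. The following is a minimal NumPy sketch of such a triplet constraint: the translated human latent code (anchor) should lie closer to the user-preferred avatar expression (positive) than to a rejected one (negative). The latent dimensionality, the linear translation map, and all variable names here are illustrative assumptions, not the authors' implementation, which trains neural networks for both the embedding and the translation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # latent dimensionality (assumed for illustration)

# Toy linear "domain translation" map from the human latent space to the
# avatar latent space (the paper trains a network instead).
W = rng.normal(size=(D, D)) * 0.1

def translate(z_h, W):
    """Map a human expression latent code into the avatar latent space."""
    return z_h @ W

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull the anchor toward the preferred avatar
    expression and push it away from the rejected one."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

z_h = rng.normal(size=D)            # human expression latent code
z_pos = translate(z_h, W) + 0.01    # avatar code the user prefers
z_neg = rng.normal(size=D)          # avatar code the user rejected

loss = triplet_loss(translate(z_h, W), z_pos, z_neg)
```

Minimizing this loss over many annotated triplets is one standard way to fold perceptual preferences into a learned mapping; the paper combines it with geometric correspondences between the two latent spaces.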


Saved in:
Bibliographic Details
Main Authors: Zhang, Juyong, Chen, Keyu, Zheng, Jianmin
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Facial Expression Retargeting; Variational Autoencoder
Online Access:https://hdl.handle.net/10356/162762
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-162762 (DR-NTU collection, NTU Library; record format: DSpace)

Citation: Zhang, J., Chen, K. & Zheng, J. (2020). Facial expression retargeting from human to avatar made easy. IEEE Transactions on Visualization and Computer Graphics, 28(2), 1274-1287. https://dx.doi.org/10.1109/TVCG.2020.3013876

ISSN: 1077-2626
DOI: 10.1109/TVCG.2020.3013876
PMID: 32746288
Handle: https://hdl.handle.net/10356/162762

Description: Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. However, these approaches require a tedious 3D modeling process, and the performance relies on the modelers' experience. In this article, we propose a brand-new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation. We first build low-dimensional latent spaces for the human and avatar facial expressions with a variational autoencoder. Then we construct correspondences between the two latent spaces guided by geometric and perceptual constraints. Specifically, we design geometric correspondences to reflect geometric matching and utilize a triplet data structure to express users' perceptual preference of avatar expressions. A user-friendly method is proposed to automatically generate triplets, allowing users to easily and efficiently annotate the correspondences. Using both geometric and perceptual correspondences, we train a network for expression domain translation from human to avatar. Extensive experimental results and user studies demonstrate that even nonprofessional users can apply our method to generate high-quality facial expression retargeting results with less time and effort.

Funding: This research was supported in part by the National Natural Science Foundation of China (No. 61672481), the Youth Innovation Promotion Association CAS (No. 2018495), Zhejiang Lab (No. 2019NB0AB03), the NTU Data Science and Artificial Intelligence Research Center (DSAIR) (No. 04INS000518C130), and the Ministry of Education, Singapore, under its MoE Tier-2 Grant (MoE 2017-T2-1-076).

© 2020 IEEE. All rights reserved.