Cascade EF-GAN : progressive facial expression editing with local focuses

Recent advances in Generative Adversarial Nets (GANs) have shown remarkable improvements for facial expression editing. However, current methods are still prone to generate artifacts and blurs around expression-intensive regions, and often introduce undesired overlapping artifacts while handling large-gap expression transformations such as transformation from furious to laughing. To address these limitations, we propose Cascade Expression Focal GAN (Cascade EF-GAN), a novel network that performs progressive facial expression editing with local expression focuses. The introduction of the local focus enables the Cascade EF-GAN to better preserve identity-related features and details around eyes, noses and mouths, which further helps reduce artifacts and blurs within the generated facial images. In addition, an innovative cascade transformation strategy is designed by dividing a large facial expression transformation into multiple small ones in cascade, which helps suppress overlapping artifacts and produce more realistic editing while dealing with large-gap expression transformations. Extensive experiments over two publicly available facial expression datasets show that our proposed Cascade EF-GAN achieves superior performance for facial expression editing.
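The cascade transformation strategy summarized in the abstract — splitting one large expression transformation into several small ones applied in sequence — can be sketched as follows. This is a minimal illustration only: it assumes expressions are encoded as label vectors and that intermediate targets come from linear interpolation; the function name and interpolation scheme are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def cascade_targets(src_label, tgt_label, n_stages=3):
    # Linearly interpolate between the source and target expression
    # label vectors, yielding n_stages intermediate targets so that
    # one large transformation becomes several small steps.
    src = np.asarray(src_label, dtype=float)
    tgt = np.asarray(tgt_label, dtype=float)
    return [src + (tgt - src) * (i / n_stages) for i in range(1, n_stages + 1)]

# Example: a hypothetical "furious" label edited toward "laughing" in 3 steps,
# each step a smaller transformation for the generator to handle.
furious, laughing = [1.0, 0.0], [0.0, 1.0]
steps = cascade_targets(furious, laughing, n_stages=3)
```

Each intermediate target would then be fed to one stage of the cascaded generator, so no single stage has to bridge the full expression gap.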

Saved in:
Bibliographic Details
Main Authors: Wu, Rongliang, Zhang, Gongjie, Lu, Shijian, Chen, Tao
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects:
Online Access:https://hdl.handle.net/10356/146680
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-146680
record_format: dspace
Conference: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Research Centre: Data Science and Artificial Intelligence Research Centre
Subjects: Engineering; Computer Vision; Generative Adversarial Nets (GANs)
Version: Accepted version
Funding: This work is supported by Data Science & Artificial Intelligence Research Centre, NTU Singapore.
Record dates: 2021-03-04T08:40:00Z (created); 2020 (issued)
Citation: Wu, R., Zhang, G., Lu, S., & Chen, T. (2020). Cascade EF-GAN : progressive facial expression editing with local focuses. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1, 5020-5029. doi:10.1109/CVPR42600.2020.00507
ISBN: 978-1-7281-7168-5
ISSN: 2575-7075
Handle: https://hdl.handle.net/10356/146680
DOI: 10.1109/CVPR42600.2020.00507
Scopus: 2-s2.0-85093095701
Identifier: #001531-00001
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR42600.2020.00507
Format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering; Computer Vision; Generative Adversarial Nets (GANs)