LEED : label-free expression editing via disentanglement

Recent studies on facial expression editing have made very promising progress. However, existing methods require large amounts of expression labels, which are often expensive and time-consuming to collect. This paper presents a label-free expression editing via disentanglement (LEED) framework that can edit the expression of both frontal and profile facial images without requiring any expression label. The idea is to disentangle the identity and expression of a facial image in the expression manifold, where the neutral face captures the identity attribute and the displacement between the neutral image and the expressive image captures the expression attribute. Two novel losses are designed for optimal expression disentanglement and consistent synthesis: a mutual expression information loss that extracts pure expression-related features, and a siamese loss that enhances the expression similarity between the synthesized image and the reference image. Extensive experiments over two public facial expression datasets show that LEED achieves superior facial expression editing both qualitatively and quantitatively.
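The disentanglement idea described in the abstract — the neutral face carries identity, while the displacement between the expressive and neutral embeddings carries expression — can be sketched numerically. This is a minimal illustration only: the toy embeddings and the L2 form of the siamese loss below are hypothetical stand-ins, not the authors' actual networks or loss definitions.

```python
import numpy as np

def expression_code(embed_expressive, embed_neutral):
    # Expression attribute = displacement between the expressive and
    # neutral embeddings on the expression manifold (per the abstract).
    return embed_expressive - embed_neutral

def siamese_loss(expr_synth, expr_reference):
    # Encourage the synthesized image's expression features to match
    # the reference image's (hypothetical L2 formulation).
    return float(np.mean((expr_synth - expr_reference) ** 2))

# Toy embeddings standing in for a face encoder's output.
neutral = np.zeros(4)
smile = np.array([0.9, 0.1, 0.0, 0.2])
expr = expression_code(smile, neutral)
print(siamese_loss(expr, smile))  # 0.0 when the expression codes coincide
```

With a zero neutral embedding the expression code equals the expressive embedding itself, so the loss vanishes; in general the loss penalizes any mismatch between the synthesized and reference expression features.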

Bibliographic Details
Main Authors: Wu, Rongliang, Lu, Shijian
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language:English
Published: 2021
Subjects: Engineering; Computer Vision; Image Synthesis
Online Access:https://hdl.handle.net/10356/146194
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-146194
Document type: Conference Paper
Conference: 2020 European Conference on Computer Vision (ECCV)
Research centre: Data Science and Artificial Intelligence Research Centre
Subjects: Engineering; Computer Vision; Image Synthesis
Version: Accepted version
Funding: This work is supported by the Data Science & Artificial Intelligence Research Centre, NTU Singapore.
Citation: Wu, R., & Lu, S. (2020). LEED : label-free expression editing via disentanglement. Proceedings of the European Conference on Computer Vision, 12357 LNCS, 781-798.
DOI: 10.1007/978-3-030-58610-2_46
ISBN: 9783030586096
Scopus ID: 2-s2.0-85093112998
Handle: https://hdl.handle.net/10356/146194
File format: application/pdf
Rights: © 2020 Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of a conference paper published in European Conference on Computer Vision (ECCV). The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-58610-2_46
Library: NTU Library
Country: Singapore
Collection: DR-NTU