RIGID: Recurrent GAN inversion and editing of real face videos

GAN inversion is indispensable for applying the powerful editability of GANs to real images. However, existing methods invert video frames individually, often leading to undesired inconsistencies over time. In this paper, we propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID), to explicitly and simultaneously enforce temporally coherent GAN inversion and facial editing of real videos. Our approach models the temporal relations between current and previous frames from three aspects. First, to enable faithful real-video reconstruction, we maximize inversion fidelity and consistency by learning a temporally compensated latent code. Second, we observe that incoherent noise lies in the high-frequency domain and can be disentangled from the latent space. Third, to remove inconsistencies after attribute manipulation, we propose an in-between frame composition constraint such that an arbitrary frame must be a direct composite of its neighboring frames. Our unified framework learns the inherent coherence between input frames in an end-to-end manner; it is therefore agnostic to any specific attribute and can be applied to arbitrary editing of the same video without re-training. Extensive experiments demonstrate that RIGID outperforms state-of-the-art methods qualitatively and quantitatively in both inversion and editing tasks. The deliverables can be found at https://cnnlstm.github.io/RIGID.
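To make two of the abstract's mechanisms concrete, the sketch below illustrates one plausible form of the temporally compensated latent code and of the in-between frame composition constraint. It is a minimal illustration under stated assumptions: the module and function names, the GRU-based recurrence, the additive latent correction, the convex blend of neighboring frames, and the L1 distance are all assumptions of this sketch, not details confirmed by this record or the paper.

```python
# Hypothetical sketch of two ideas named in the abstract; nothing here
# reproduces the authors' architecture or losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalLatentCompensator(nn.Module):
    """Assumed form of a 'temporally compensated latent code': a GRU cell
    carries state across frames and predicts a per-frame correction to the
    per-frame inversion latent w_t."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.rnn = nn.GRUCell(latent_dim, latent_dim)
        self.to_delta = nn.Linear(latent_dim, latent_dim)

    def forward(self, w_t: torch.Tensor, hidden: torch.Tensor):
        hidden = self.rnn(w_t, hidden)               # fuse current latent with history
        return w_t + self.to_delta(hidden), hidden   # compensated latent, new state

def composition_loss(e_prev, e_cur, e_next, alpha: float = 0.5):
    """Assumed in-between frame composition constraint: the edited frame at
    time t should be a direct composite of its neighbors at t-1 and t+1."""
    composite = alpha * e_prev + (1.0 - alpha) * e_next
    return F.l1_loss(e_cur, composite)

# Usage: roll the compensator over per-frame latents of shape (T, B, 512).
comp = TemporalLatentCompensator()
latents = torch.randn(8, 2, 512)
h = torch.zeros(2, 512)
for w_t in latents:
    w_hat, h = comp(w_t, h)
```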


Bibliographic Details
Main Authors: XU, Yangyang, HE, Shengfeng, WONG, Kwan-Yee K., LUO, Ping
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Computer Sciences; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/8534
https://ink.library.smu.edu.sg/context/sis_research/article/9537/viewcontent/Xu_RIGID_Recurrent_GAN_Inversion_and_Editing_of_Real_Face_Videos_ICCV_2023_paper__1_.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9537
record_format dspace
spelling sg-smu-ink.sis_research-9537 2024-04-15T07:59:44Z RIGID: Recurrent GAN inversion and editing of real face videos XU, Yangyang HE, Shengfeng WONG, Kwan-Yee K. LUO, Ping GAN inversion is indispensable for applying the powerful editability of GANs to real images. However, existing methods invert video frames individually, often leading to undesired inconsistencies over time. In this paper, we propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID), to explicitly and simultaneously enforce temporally coherent GAN inversion and facial editing of real videos. Our approach models the temporal relations between current and previous frames from three aspects. First, to enable faithful real-video reconstruction, we maximize inversion fidelity and consistency by learning a temporally compensated latent code. Second, we observe that incoherent noise lies in the high-frequency domain and can be disentangled from the latent space. Third, to remove inconsistencies after attribute manipulation, we propose an in-between frame composition constraint such that an arbitrary frame must be a direct composite of its neighboring frames. Our unified framework learns the inherent coherence between input frames in an end-to-end manner; it is therefore agnostic to any specific attribute and can be applied to arbitrary editing of the same video without re-training. Extensive experiments demonstrate that RIGID outperforms state-of-the-art methods qualitatively and quantitatively in both inversion and editing tasks. The deliverables can be found at https://cnnlstm.github.io/RIGID. 2023-10-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8534 info:doi/10.1109/ICCV51070.2023.01259 https://ink.library.smu.edu.sg/context/sis_research/article/9537/viewcontent/Xu_RIGID_Recurrent_GAN_Inversion_and_Editing_of_Real_Face_Videos_ICCV_2023_paper__1_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Computer Sciences Graphics and Human Computer Interfaces
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Computer Sciences
Graphics and Human Computer Interfaces
spellingShingle Computer Sciences
Graphics and Human Computer Interfaces
XU, Yangyang
HE, Shengfeng
WONG, Kwan-Yee K.
LUO, Ping
RIGID: Recurrent GAN inversion and editing of real face videos
description GAN inversion is indispensable for applying the powerful editability of GANs to real images. However, existing methods invert video frames individually, often leading to undesired inconsistencies over time. In this paper, we propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID), to explicitly and simultaneously enforce temporally coherent GAN inversion and facial editing of real videos. Our approach models the temporal relations between current and previous frames from three aspects. First, to enable faithful real-video reconstruction, we maximize inversion fidelity and consistency by learning a temporally compensated latent code. Second, we observe that incoherent noise lies in the high-frequency domain and can be disentangled from the latent space. Third, to remove inconsistencies after attribute manipulation, we propose an in-between frame composition constraint such that an arbitrary frame must be a direct composite of its neighboring frames. Our unified framework learns the inherent coherence between input frames in an end-to-end manner; it is therefore agnostic to any specific attribute and can be applied to arbitrary editing of the same video without re-training. Extensive experiments demonstrate that RIGID outperforms state-of-the-art methods qualitatively and quantitatively in both inversion and editing tasks. The deliverables can be found at https://cnnlstm.github.io/RIGID.
format text
author XU, Yangyang
HE, Shengfeng
WONG, Kwan-Yee K.
LUO, Ping
author_facet XU, Yangyang
HE, Shengfeng
WONG, Kwan-Yee K.
LUO, Ping
author_sort XU, Yangyang
title RIGID: Recurrent GAN inversion and editing of real face videos
title_short RIGID: Recurrent GAN inversion and editing of real face videos
title_full RIGID: Recurrent GAN inversion and editing of real face videos
title_fullStr RIGID: Recurrent GAN inversion and editing of real face videos
title_full_unstemmed RIGID: Recurrent GAN inversion and editing of real face videos
title_sort rigid: recurrent gan inversion and editing of real face videos
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8534
https://ink.library.smu.edu.sg/context/sis_research/article/9537/viewcontent/Xu_RIGID_Recurrent_GAN_Inversion_and_Editing_of_Real_Face_Videos_ICCV_2023_paper__1_.pdf
_version_ 1814047468790218752