Self-supervised matting-specific portrait enhancement and generation

We resolve the ill-posed alpha matting problem from a completely different perspective. Given an input portrait image, instead of estimating the corresponding alpha matte, we focus on the other end: we subtly enhance the input so that its alpha matte can be easily estimated by any existing matting model. This is accomplished by exploring the latent space of GAN models. Interpretable directions can be found in the latent space, and they correspond to semantic image transformations. We further explore this property for alpha matting. In particular, we invert an input portrait into the latent code of StyleGAN and discover whether an enhanced version exists in the latent space that is more compatible with a reference matting model. We optimize multi-scale latent vectors under four tailored losses, ensuring matting specificity and only subtle modifications to the portrait. We demonstrate that the proposed method can refine real portrait images for arbitrary matting models, boosting the performance of automatic alpha matting by a large margin. In addition, we leverage the generative property of StyleGAN and propose to generate enhanced portrait data that can be treated as pseudo ground truth. This addresses the expense of alpha matte annotation, further improving the matting performance of existing models.
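The core idea described in the abstract — hold a reference matting model fixed and instead optimize the input's latent code under a task loss plus a fidelity term — can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: the generator `G` and matting model `M` are hypothetical linear stand-ins, and the specific loss weights are assumptions.

```python
import numpy as np

# Toy sketch of matting-specific latent optimization (illustrative only):
# a frozen "generator" maps a latent code to an image, a frozen "matting
# model" scores that image, and we run gradient descent on the latent code
# itself, regularized to stay close to the inverted code of the original
# portrait (so the enhancement remains subtle).

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))   # stand-in generator: latent (4,) -> image (8,)
M = rng.normal(size=(1, 8))   # stand-in matting model: image -> matte score
target = np.array([1.0])      # score the reference matting model should reach

w0 = rng.normal(size=4)       # inverted latent code of the input portrait
w = w0.copy()                 # latent code to be optimized
lr, lam = 0.005, 0.1          # step size and fidelity weight (assumed values)

for _ in range(4000):
    err = M @ (G @ w) - target                # matting-specific residual
    # gradient of 0.5*||err||^2 + 0.5*lam*||w - w0||^2 with respect to w
    grad = G.T @ (M.T @ err) + lam * (w - w0)
    w -= lr * grad

final_err = float(np.abs(M @ (G @ w) - target)[0])
```

The design point mirrors the abstract: the matting model is never updated; only the latent code moves, and the `lam` term keeps the enhanced portrait close to the original. The paper's actual method optimizes multi-scale StyleGAN latents under four tailored losses rather than this single quadratic objective.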

Bibliographic Details
Main Authors: XU, Yangyang, ZHOU, Zeyang, HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects: Alpha matting; latent space; generative model; Generative adversarial networks; Space exploration; Codes; Data models; Semantics; Entropy; Predictive models; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/7880
Institution: Singapore Management University
DOI: 10.1109/TIP.2022.3194711