Background preservation for text-guided image editing
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/166140
Institution: Nanyang Technological University
Summary: The text-guided image editing task aims to manipulate a given image according to a text description while preserving the color, texture, and structure of the text-irrelevant parts of the image. With the development of deep learning and Generative Adversarial Networks (GANs), many GAN-based methods have been proposed that produce fine-grained, high-quality manipulated images from a text prompt.

However, some state-of-the-art GAN-based methods, such as ManiGAN, do not preserve the text-irrelevant background well. The objective of this project is therefore to apply the background loss proposed in this report to improve the background-preserving ability of ManiGAN, which serves as the baseline of this project.
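The report's exact loss formulation is not reproduced in this record. As a rough illustration only, a background-preservation term is typically a reconstruction penalty restricted to the text-irrelevant pixels: edits inside the text-relevant region are left free, while any change outside it is penalized. The L1 penalty, the mask convention, and the function below are assumptions for this sketch, not ManiGAN's actual implementation.

```python
import numpy as np

def background_loss(generated, original, relevance_mask):
    """L1 reconstruction penalty on text-irrelevant pixels.

    generated, original: (H, W, C) float arrays in [0, 1].
    relevance_mask: (H, W) array, 1 where the text edit applies,
                    0 in the background to be preserved.
    Hypothetical sketch of a background-preservation term,
    not ManiGAN's exact loss.
    """
    # Invert the mask so that background pixels weigh 1,
    # and broadcast over the channel axis.
    background = 1.0 - relevance_mask[..., None]
    diff = np.abs(generated - original) * background
    # Normalize by the background area so the loss scale
    # does not depend on the mask size.
    denom = background.sum() + 1e-8
    return diff.sum() / denom

# Toy example: an edit confined to the masked region
# leaves the background loss at zero.
orig = np.random.rand(8, 8, 3)
gen = orig.copy()
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0          # text-relevant (edited) region
gen[2:6, 2:6] += 0.1          # edit only inside the mask
print(background_loss(gen, orig, mask))  # → 0.0
```

In practice a term like this would be weighted and added to the generator's adversarial and text-matching losses, so the generator is rewarded for leaving the background untouched while still applying the requested edit.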