Unsupervised cartoon face generation via styleGAN2 network
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/168182
Institution: Nanyang Technological University
Language: English
Summary: Image-to-image translation has attracted the attention of many researchers and has various applications, such as image editing and image synthesis. Recent research on image-to-image translation has achieved impressive results. However, existing work still suffers from problems such as data imbalance, changes to the structure of images, and resource limitations. To address these problems, we propose an unsupervised image-to-image translation method to generate cartoon face images. The main idea of our method is to fine-tune a pre-trained StyleGAN2. During this process, we freeze the style vectors and some layers of the generator to preserve the structure of images, and apply an interpolation method to control the appearance of the generated cartoon face images. In addition, we enable people to edit the generated cartoon face images in a text-driven way, meaning that only a line of instruction text is needed to manipulate the input images. Both qualitative and quantitative evaluations were conducted to demonstrate the performance of our framework.
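The summary mentions two mechanisms: freezing the style-related parts and some generator layers while fine-tuning a pre-trained StyleGAN2 on cartoon faces, and interpolating to control how strongly the output is cartoonified. The sketch below is a rough illustration of those ideas, not the thesis implementation; the parameter-name prefixes, the `alpha` blending factor, and the generator object are assumptions and would depend on the particular StyleGAN2 port used.

```python
# Minimal sketch, assuming a PyTorch StyleGAN2 generator module whose
# parameter names begin with prefixes such as "mapping" (style vectors)
# and "synthesis.b4"/"synthesis.b8" (coarse synthesis blocks). These names
# are placeholders and vary between StyleGAN2 implementations.
import copy
import torch


def freeze_for_finetuning(generator, frozen_prefixes=("mapping", "synthesis.b4", "synthesis.b8")):
    """Disable gradients for the style/mapping parameters and the coarse
    layers so the overall face structure is preserved during fine-tuning."""
    for name, param in generator.named_parameters():
        param.requires_grad = not name.startswith(frozen_prefixes)
    return generator


@torch.no_grad()
def blend_generators(source_gen, finetuned_gen, alpha=0.5):
    """Linearly interpolate between the original (photo) generator weights
    and the fine-tuned (cartoon) generator weights.

    alpha = 0 keeps the source generator; alpha = 1 gives the fully
    fine-tuned cartoon generator; values in between control the degree
    of cartoonification.
    """
    blended = copy.deepcopy(finetuned_gen)
    source_params = dict(source_gen.named_parameters())
    for name, param in blended.named_parameters():
        param.copy_((1.0 - alpha) * source_params[name] + alpha * param)
    return blended
```

A typical use under these assumptions would be to call `freeze_for_finetuning` before the fine-tuning loop and then `blend_generators` afterwards, sampling images from the blended generator at several `alpha` values to pick the desired cartoon strength.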