Unsupervised cartoon face generation via StyleGAN2 network
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Online access: https://hdl.handle.net/10356/168182
Institution: Nanyang Technological University
Summary: Image-to-image translation has attracted the attention of many researchers and has various applications, such as image editing and image synthesis. Recent research on image-to-image translation has achieved impressive results. However, existing work still faces several problems, such as data imbalance, changes to the structure of images, and resource limitations. To address these problems, we propose an unsupervised image-to-image translation method to generate cartoon face images. The main idea of our method is to fine-tune a pre-trained StyleGAN2. During this process, we freeze the style vectors and some layers of the generator to preserve the structure of the images, and apply an interpolation method to control the appearance of the generated cartoon face images. In addition, we enable users to edit the generated cartoon face images in a text-driven way: only a line of instruction text is needed to manipulate the input images. Both qualitative and quantitative evaluations were conducted to demonstrate the performance of our framework.
---|
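The interpolation idea mentioned in the summary, blending the weights of the original face generator with those of the cartoon-fine-tuned one to control how strongly cartoonish the output looks, can be sketched as follows. This is a minimal illustration, not code from the thesis: the dict-of-floats "state dicts" and the helper name `interpolate_weights` are assumptions standing in for real StyleGAN2 parameter tensors.

```python
# Minimal sketch of layer-wise weight interpolation between two generators.
# Scalar "weights" stand in for whole StyleGAN2 parameter tensors; the layer
# names below are illustrative only.

def interpolate_weights(photo_weights, cartoon_weights, alpha):
    """Blend two generators layer by layer.

    alpha = 0.0 keeps the original photo-face generator,
    alpha = 1.0 uses the cartoon-fine-tuned generator,
    and intermediate values control the cartoon strength.
    """
    return {
        name: (1.0 - alpha) * w + alpha * cartoon_weights[name]
        for name, w in photo_weights.items()
    }

# Toy example: blend the two generators halfway.
photo = {"conv1": 0.0, "conv2": 2.0}
cartoon = {"conv1": 1.0, "conv2": 4.0}
blended = interpolate_weights(photo, cartoon, 0.5)
print(blended)  # {'conv1': 0.5, 'conv2': 3.0}
```

In practice the same per-parameter linear blend would be applied to each tensor in the two generators' state dicts, giving a continuous knob between photo-realistic and cartoon output.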