Multi-domain anime image generation and editing

Bibliographic Details
Main Author: Aravind S/O Sivakumaran
Other Authors: Lu Shijian
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/162940
Physical Description
Summary: Generative models such as text-to-image and image-to-image models have seen great success to date; notable examples include OpenAI's DALL-E 2 and Google's Imagen and Parti. However, these state-of-the-art (SOTA) diffusion models are hard to train, and fine-tuning them requires resources that many practitioners do not have, unlike GANs. GANs also offer faster inference than diffusion models and can integrate more easily into production workflows with tight deadlines. We therefore propose training GAN models with our end-to-end framework while extending existing GAN networks to multiple domains, enabling integration into existing training workflows. We aim to introduce multimodal text-to-image generation for existing StyleGAN2 networks that can be used for editing, while allowing extension to different style domains using StyleGAN-NADA. Additionally, as part of our editing workflow, StyleGAN2 outputs can be passed to a diffusion model such as Stable Diffusion for image-to-image translation for editing purposes. Finally, we explore slimming down the StyleGAN2 network for faster inference on edge devices, as the full model is computationally intensive for such hardware.
Keywords: Anime, StyleGAN, Generative Adversarial Networks, Image-to-Image Translation, Text-to-Image Translation, Image Editing, Model Compression, Multi-Domain, Diffusion Models, CLIP
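The summary describes a hand-off in which StyleGAN2 outputs are edited by passing them to Stable Diffusion for image-to-image translation. The sketch below is a minimal, illustrative version of that step only, not the thesis's actual pipeline: it assumes the NVlabs stylegan2-ada-pytorch codebase (for legacy.load_network_pkl and the G.mapping/G.synthesis API) and the Hugging Face diffusers library, and the checkpoint path, prompt, and strength value are placeholders.

```python
# Illustrative sketch: generate an image with a pretrained StyleGAN2 generator,
# then edit it with Stable Diffusion image-to-image (via Hugging Face diffusers).
# Assumes the stylegan2-ada-pytorch repo is on PYTHONPATH; paths and the prompt
# are placeholders, not settings from the thesis.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline
import legacy  # from stylegan2-ada-pytorch

device = "cuda"

# 1. Load a pretrained StyleGAN2 generator (placeholder checkpoint path).
with open("stylegan2-anime.pkl", "rb") as f:
    G = legacy.load_network_pkl(f)["G_ema"].to(device)

# 2. Sample a latent code and synthesise an image in [-1, 1].
z = torch.randn([1, G.z_dim], device=device)
w = G.mapping(z, None, truncation_psi=0.7)
img = G.synthesis(w, noise_mode="const")

# 3. Convert the output tensor to a PIL image for the diffusion pipeline.
img = (img[0].clamp(-1, 1) + 1) * 127.5
init_image = Image.fromarray(img.permute(1, 2, 0).to(torch.uint8).cpu().numpy())

# 4. Run image-to-image editing; `strength` controls how far the edit may
#    drift from the original StyleGAN2 output.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
edited = pipe(
    prompt="anime portrait, watercolor style",  # placeholder editing prompt
    image=init_image,
    strength=0.5,
    guidance_scale=7.5,
).images[0]
edited.save("edited.png")
```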