Generating human faces by generative adversarial networks
Main Author: | |
---|---|
Other Authors: | |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/163273 |
Institution: | Nanyang Technological University |
Summary: | Video style transfer is the process of merging the content of one video with the style of another to create a stylized video. In this report, I first study several style transfer techniques, namely Adaptive Instance Normalisation (AdaIN), AnimeGAN, and GAN N' Roses. I then study the First Order Motion Model and how it extracts motion sequences from a driving video. Finally, I examine the state-of-the-art StyleGAN and the Toonification algorithm in detail. This report also reimplements state-of-the-art methodologies, investigates the impact of relevant hyperparameters, and analyses their effects. I extend existing StyleGAN-based image Toonification models to video Toonification, and collect datasets in five styles for the style-transfer process. I conclude by discussing potential directions for further development. |
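Of the techniques named in the abstract, AdaIN is the simplest to illustrate: it aligns the per-channel mean and standard deviation of content features to those of style features. The sketch below is a minimal NumPy illustration of that statistic-matching idea, not code from the project; the feature maps are random toy data.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalisation: normalise the content
    features per channel, then rescale and shift them with the
    style features' per-channel statistics.
    content, style: arrays of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# Toy "feature maps": 3 channels over a 4x4 spatial grid.
rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, size=(3, 4, 4))
s = rng.normal(5.0, 2.0, size=(3, 4, 4))
out = adain(c, s)
# Each output channel now carries the style map's mean and std.
```

In a full style-transfer network this operation is applied to encoder feature maps rather than raw pixels, and a decoder maps the re-statisticised features back to an image.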