Real-time arbitrary style transfer via deep learning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/147930
Institution: Nanyang Technological University
Summary: Neural style transfer merges the content of one image with the style of another to create a new image. Many applications have recently exploited style transfer to create highly popular content on social media. Existing methods typically face limitations such as a small number of transferable styles and slow image generation. In this work, we discuss two approaches, AdaIN and MUNIT, for achieving real-time arbitrary style transfer, and we apply them to video. The AdaIN method can produce aesthetically pleasing stylized images, with the content-style weight ratio controlling the strength of stylization. We find that the AdaIN method can be sped up by removing convolutional layers from the decoder: the refined decoder achieves a large speed boost without compromising the quality of the stylized images. The MUNIT method has advantages when training on a small dataset in which the style and content samples come from two specific domains. We analyze both methods and discuss possible theoretical reasons for these results. Since the refined AdaIN model only needs to be trained once and produces stylized images at real-time speed, it can be extended to real-time arbitrary video style transfer. Finally, we conclude with a discussion of several directions for future improvement.
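As background to the content-style weight ratio mentioned in the summary: AdaIN (adaptive instance normalization) re-normalizes the content image's encoder features to match the channel-wise mean and standard deviation of the style features, and the weight ratio interpolates between the original content features and the AdaIN output. Below is a minimal sketch of that operation, assuming a PyTorch implementation with 4-D feature tensors; the framework, function names, and default alpha are illustrative assumptions, not details taken from the project.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Align the channel-wise mean/std of content features to the style features."""
    # Per-channel statistics over the spatial dimensions; shapes are (N, C, H, W).
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def stylize_features(content_feat: torch.Tensor, style_feat: torch.Tensor,
                     alpha: float = 0.7) -> torch.Tensor:
    """Interpolate by the content-style weight: alpha=0 keeps the content
    features unchanged, alpha=1 applies the full style statistics."""
    t = adain(content_feat, style_feat)
    return alpha * t + (1.0 - alpha) * content_feat

# Example with random stand-in features shaped like VGG relu4_1 activations.
content = torch.randn(1, 512, 32, 32)
style = torch.randn(1, 512, 32, 32)
out = stylize_features(content, style, alpha=0.7)
```

In the full AdaIN pipeline, both feature maps would come from a fixed VGG encoder, and the interpolated features are then passed through the trained decoder (the component the project prunes for speed) to produce the stylized image.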