Image-to-image translation based on generative models



Bibliographic Details
Main Author: Tang, Mengxiao
Other Authors: Ponnuthurai Nagaratnam Suganthan
Format: Thesis-Master by Coursework
Language:English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/154672
Institution: Nanyang Technological University
Description
Summary: Image-to-image translation has become a widely studied topic in computer vision. It aims to find a model that takes an input image and generates the corresponding desired output image. Previous studies based on deep neural networks were mostly built on an encoder-decoder architecture, where a direct mapping from input to target output is learned without modeling the distribution of images. In this thesis, generative models are used to capture the image distribution, and their potential for image-to-image translation tasks is explored. Specifically, an improved CycleGAN is proposed for the style-transfer task, and a DDPM-based conditional generative model is used for image colorization. Empirical results show that the generative models achieve competitive results on image-to-image translation tasks.
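For context, the cycle-consistency objective at the heart of CycleGAN can be sketched as below. This is a minimal NumPy illustration, not the thesis's implementation: the generators `G` and `F` are placeholder identity functions, and the weight `lam` follows the common default of 10 from the original CycleGAN paper.

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error between two image batches."""
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """CycleGAN-style cycle-consistency term:
    x -> G(x) -> F(G(x)) should reconstruct x, and
    y -> F(y) -> G(F(y)) should reconstruct y."""
    forward = l1_loss(F(G(real_x)), real_x)
    backward = l1_loss(G(F(real_y)), real_y)
    return lam * (forward + backward)

# Toy stand-ins for the two generators (identity maps here),
# just to exercise the loss computation.
G = lambda x: x
F = lambda y: y

x = np.random.rand(2, 3, 8, 8)  # batch of "domain X" images
y = np.random.rand(2, 3, 8, 8)  # batch of "domain Y" images
print(cycle_consistency_loss(G, F, x, y))  # identity generators reconstruct perfectly, so the loss is 0.0
```

In the full model this term is added to the adversarial losses of both generators; it is what lets the style transfer be trained from unpaired images, since it penalizes translations that cannot be mapped back to the original.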