Single-shot image generation and stylisation via cross-domain correspondence


Bibliographic Details
Main Author: Kalyan, Harikishan
Other Authors: Lin Guosheng
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/163314
Institution: Nanyang Technological University
Description
Summary: The advent of generative adversarial networks has led to many state-of-the-art methodologies in the field of image generation and stylisation. Among the most popular methods for generating new and diverse images is cross-domain correspondence, where the generated output is a mix of the stylistic elements and attributes of a source dataset and a target dataset. This method, however, can be resource-intensive due to the need for massive datasets. Existing methodologies, such as that of Ojha et al. and Mind the Gap, have attempted to address this issue by requiring only a few images for domain adaptation, but they remain prone to overfitting because of the limited dataset. To counter these problems, a CLIP-guided domain adaptation approach is proposed in which only a single image is needed for the model to generate diverse images of various styles.
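CLIP-guided domain adaptation methods such as Mind the Gap commonly optimise a directional CLIP loss: the edit direction in image-embedding space (generated minus source) is aligned with the domain direction in text-embedding space (target prompt minus source prompt). The sketch below illustrates that loss with NumPy; the function name and the random stand-in vectors are illustrative only — a real setup would use CLIP's image and text encoders, and the abstract does not specify the project's exact objective.

```python
import numpy as np

def directional_clip_loss(src_img_emb, gen_img_emb, src_txt_emb, tgt_txt_emb):
    """Directional loss used in CLIP-guided domain adaptation:
    align the image-space edit direction (generated - source) with the
    text-space domain direction (target prompt - source prompt)."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-8)  # normalise, avoid divide-by-zero
    img_dir = unit(gen_img_emb - src_img_emb)
    txt_dir = unit(tgt_txt_emb - src_txt_emb)
    # 1 - cosine similarity: 0 when the directions agree, 2 when opposed.
    return 1.0 - float(np.dot(img_dir, txt_dir))

# Stand-in 512-dim embeddings (CLIP ViT-B/32's embedding size).
rng = np.random.default_rng(0)
src_img = rng.normal(size=512)
src_txt = rng.normal(size=512)
domain_dir = rng.normal(size=512)
tgt_txt = src_txt + domain_dir
gen_img = src_img + domain_dir  # generator output moved along the same direction

loss = directional_clip_loss(src_img, gen_img, src_txt, tgt_txt)
print(round(loss, 6))  # ≈ 0.0 — image edit matches the text direction
```

In training, this scalar would be minimised with respect to the generator's weights, pushing single-image outputs toward the target style without needing a large target dataset.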