Automated image generation
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/171948
Institution: Nanyang Technological University
Summary: Image translation techniques have gained significant attention in recent years, particularly CycleGAN. Traditionally, building image-to-image translation models requires collecting extensive datasets of paired examples, which can be complicated and costly. CycleGAN's unpaired training approach eliminates the need for such paired samples, simplifying the training process while broadening the potential of image translation and allowing imaginative yet lifelike transformations. For instance, CycleGAN can translate between domains such as cats and dogs, with applications extending to practical fields like art, fashion, and medical imaging.

Nevertheless, CycleGAN's applicability in real-world scenarios is limited by the relatively small set of styles it currently supports, which motivates the search for more practical alternatives. This study introduces new styles into the framework, assesses their practical effectiveness, and addresses concerns about potential loss in image quality. Results show the promising potential of these improved CycleGAN variants across a range of domains and applications.

Keywords: CycleGAN, Style Transfer, Image Translation, Diverse Aesthetics, Creative Applications, Content Preservation
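The unpaired training the summary describes rests on CycleGAN's cycle-consistency objective: a generator G maps domain X to Y, a second generator F maps Y back to X, and the loss penalizes any image that fails to return to itself after a round trip. The following is a minimal illustrative sketch of that objective only, not the project's actual code; the toy generators and the weight `lam` are placeholder assumptions.

```python
# Sketch of CycleGAN's cycle-consistency loss (illustrative, not the thesis code).
# G: X -> Y and F: Y -> X are generators; the loss penalizes F(G(x)) != x and
# G(F(y)) != y, which is what lets CycleGAN train without paired examples.

def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """L_cyc = lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 )."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy placeholder "generators" standing in for neural networks: because they
# are exact inverses of each other, the cycle loss is zero.
G = lambda v: [2 * t for t in v]   # hypothetical mapping X -> Y
F = lambda v: [t / 2 for t in v]   # hypothetical mapping Y -> X

x = [1.0, 2.0, 3.0]
y = [4.0, 6.0, 8.0]
print(cycle_consistency_loss(G, F, x, y))  # 0.0 for exact inverses
```

In a full CycleGAN this term is added to the adversarial losses of both generators; the reconstruction penalty is what preserves image content while the style changes.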