Cycle-consistent inverse GAN for text-to-image synthesis
This paper investigates an open research task of text-to-image synthesis for automatically generating or manipulating images from text descriptions. Prevailing methods mainly take the textual descriptions as the conditional input for the GAN generation, and need to train different models for the tex...
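For readers unfamiliar with the setup the abstract refers to, below is a minimal illustrative sketch of a GAN generator that takes a text embedding as its conditional input, the prevailing approach the abstract describes. This is not the authors' implementation; the module name, dimensions, and architecture are assumptions chosen only for illustration.

```python
# Illustrative sketch only: a generator conditioned on a text embedding.
# Names and sizes (TextConditionedGenerator, noise_dim, text_dim) are assumed.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=256, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project the concatenated [noise; text embedding] vector to a 4x4 feature map.
            nn.ConvTranspose2d(noise_dim + text_dim, 256, 4, 1, 0),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),          # 8x8
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),           # 16x16
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1),  # 32x32 RGB image
            nn.Tanh(),
        )

    def forward(self, noise, text_embedding):
        # Condition the generation by concatenating the text embedding with the noise vector.
        z = torch.cat([noise, text_embedding], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

# Usage: one 32x32 image from random noise and a dummy text embedding.
gen = TextConditionedGenerator()
img = gen(torch.randn(1, 100), torch.randn(1, 256))  # shape (1, 3, 32, 32)
```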
Main Authors: | Wang, Hao; Lin, Guosheng; Hoi, Steven C. H.; Miao, Chunyan |
Other Authors: | School of Computer Science and Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2022 |
Online Access: | https://hdl.handle.net/10356/156034 |
Institution: | Nanyang Technological University |
Similar Items
- Paired cross-modal data augmentation for fine-grained image-to-text retrieval
  by: Wang, Hao, et al.
  Published: (2023)
- DEEP LEARNING APPROACHES FOR ATTRIBUTE MANIPULATION AND TEXT-TO-IMAGE SYNTHESIS
  by: KENAN EMIR AK
  Published: (2020)
- Structure-aware generation network for recipe generation from images
  by: Wang, Hao, et al.
  Published: (2021)
- Feature-aware conditional GAN for category text generation
  by: Li, Xinze, et al.
  Published: (2023)
- Decomposing generation networks with structure prediction for recipe generation
  by: Wang, Hao, et al.
  Published: (2022)