Text2Human: text-driven controllable human image generation
Generating high-quality and diverse human images is an important yet challenging task in vision and graphics. However, existing generative models often fall short under the high diversity of clothing shapes and textures. Moreover, it is desirable for the generation process to be intuitively controlla...
Saved in:
Main Authors: Jiang, Yuming; Yang, Shuai; Qiu, Haonan; Wu, Wayne; Loy, Chen Change; Liu, Ziwei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/163319
Institution: Nanyang Technological University
Similar Items
- Text-driven video prediction
  by: SONG, Xue, et al.
  Published: (2024)
- Feature-aware conditional GAN for category text generation
  by: Li, Xinze, et al.
  Published: (2023)
- Cocktail: mixing multi-modality controls for text-conditional image generation
  by: Hu, Minghui, et al.
  Published: (2023)
- DEEP LEARNING APPROACHES FOR ATTRIBUTE MANIPULATION AND TEXT-TO-IMAGE SYNTHESIS
  by: KENAN EMIR AK
  Published: (2020)
- A Generative Model for category text generation
  by: Li, Yang, et al.
  Published: (2020)