Text2Human: text-driven controllable human image generation
Generating high-quality and diverse human images is an important yet challenging task in vision and graphics. However, existing generative models often fall short under the high diversity of clothing shapes and textures. Furthermore, the generation process is even desired to be intuitively controlla...
Saved in:
Main Authors: Jiang, Yuming; Yang, Shuai; Qiu, Haonan; Wu, Wayne; Loy, Chen Change; Liu, Ziwei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects:
Online Access: https://hdl.handle.net/10356/163319
Similar Items
- Text-driven video prediction
  by: SONG, Xue, et al.
  Published: (2024)
- Feature-aware conditional GAN for category text generation
  by: Li, Xinze, et al.
  Published: (2023)
- Cocktail: mixing multi-modality controls for text-conditional image generation
  by: Hu, Minghui, et al.
  Published: (2023)
- DEEP LEARNING APPROACHES FOR ATTRIBUTE MANIPULATION AND TEXT-TO-IMAGE SYNTHESIS
  by: KENAN EMIR AK
  Published: (2020)
- A Generative Model for category text generation
  by: Li, Yang, et al.
  Published: (2020)