Customized image synthesis using diffusion models
Recently, diffusion models have become a powerful mainstream method for image generation. Text-to-image diffusion models, in particular, have been widely used to convert a natural language description (e.g., ‘an orange cat’) to photorealistic images (e.g., a photo of an orange cat). These pre-tra...
Main Author: Fu, Guanqiao
Other Authors: Liu, Ziwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175199
Institution: Nanyang Technological University
Similar Items
- Coherent visual story generation using diffusion models, by: Jiang, Jiaxi (2024)
- Evolving storytelling: benchmarks and methods for new character customization with diffusion models, by: Wang, Xiyu, et al. (2024)
- Exploiting diffusion prior for real-world image super-resolution, by: Wang, Jianyi, et al. (2024)
- LaVie: high-quality video generation with cascaded latent diffusion models, by: Wang, Yaohui, et al. (2025)
- In-the-wild image quality assessment with diffusion priors, by: Fu, Honghao (2024)