Coherent visual story generation using diffusion models
In recent years, the advent of diffusion models has unlocked new possibilities in generative tasks, particularly in text-to-image generation. State-of-the-art models can create exquisite images that both satisfy users' requirements and are rich in detail. In the last few years, some works...
Main Author: Jiang, Jiaxi
Other Authors: Liu, Ziwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175145
Institution: Nanyang Technological University
Similar Items
- Customized image synthesis using diffusion models
  by: Fu, Guanqiao
  Published: (2024)
- MACE: mass concept erasure in diffusion models
  by: Lu, Shilin, et al.
  Published: (2024)
- Exemplar based image colourization using diffusion models
  by: Rahul, George
  Published: (2024)
- UniD3: unified discrete diffusion for simultaneous vision-language generation
  by: Hu, Minghui, et al.
  Published: (2023)
- From noise to information: discriminative tasks based on randomized neural networks and generative tasks based on diffusion models
  by: Hu, Minghui
  Published: (2024)