Temporal consistent video editing using diffusion models
In recent years, the field of generative AI has seen unprecedented interest worldwide. Beginning with text-to-text generation, the field has garnered much media attention. While text-to-image generation, in the form of DALL-E and Stable Diffusion amongst many others, has achieved remarkable results,...
Main Author: Bai, Shun Yao
Other Authors: Lin Guosheng
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175740
Institution: Nanyang Technological University
Similar Items
- Analogies based video editing
  by: Yan, W.-Q., et al.
  Published: (2013)
- An efficient real-time concurrency control protocol for guaranteeing temporal consistency
  by: Xiao, Y.Y., et al.
  Published: (2013)
- Non linear video editing as a career in Metro Manila
  by: Ozaeta, Marilou
  Published: (2006)
- Exploiting self-adaptive posture-based focus estimation for lecture video editing
  by: WANG, Feng, et al.
  Published: (2005)
- Lecture video enhancement and editing by integrating posture, gesture, and text
  by: WANG, Feng, et al.
  Published: (2007)