Temporal consistent video editing using diffusion models

Bibliographic Details
Main Author: Bai, Shun Yao
Other Authors: Lin Guosheng
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175740
Institution: Nanyang Technological University
Description
Summary: In recent years, the field of generative AI has seen unprecedented interest worldwide. Beginning with text-to-text generation, the field has garnered considerable media attention. While text-to-image generation, in the form of DALL-E and Stable Diffusion among many others, has achieved remarkable results, video generation remains a challenge. Key challenges in this domain include the need for high-quality training data, the sheer number of frames that must be generated for a video of meaningful length, and the need to maintain temporal consistency across frames. This project explores approaches to replicating the success of image generation models in the video domain, in particular the problem of achieving temporal consistency. It extends Rerender-A-Video by Yang et al. to allow flexibility in the frames sampled during the generation phase. Beyond extending the codebase to accept a custom selection of frames, the project offers two ways of automating frame selection: first, selecting the frame with the most common keypoints within each bin of frames, and second, a dynamic programming approach. While the binning method did not surpass the original constant-interval selection, dynamic programming achieved limited success depending on the properties of the input video. Proposed extensions for future work therefore include alternative formulations of the dynamic programming problem, which should be straightforward to integrate given the work done in this project to adapt the underlying Rerender steps.
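The binning-based keyframe selection mentioned in the abstract can be illustrated with a short sketch. This is a hypothetical, minimal illustration only: the record does not give the project's actual implementation, keypoint representation, or scoring, so the function name select_keyframes_by_binning, the use of per-frame sets of keypoint identifiers, and the overlap-based score are all assumptions made for the example.

```python
import numpy as np

def select_keyframes_by_binning(keypoint_sets, num_keyframes):
    """Split frame indices into equal-sized bins and, within each bin,
    pick the frame whose keypoints overlap most with the other frames
    in that bin (a rough proxy for "most common keypoints").

    keypoint_sets: list of sets, one set of keypoint identifiers per frame.
    num_keyframes: number of keyframes (bins) to select.
    """
    n = len(keypoint_sets)
    bins = np.array_split(np.arange(n), num_keyframes)
    selected = []
    for bin_indices in bins:
        best_idx, best_score = int(bin_indices[0]), -1
        for i in bin_indices:
            # Score a frame by how many of its keypoints also appear
            # in the other frames of the same bin.
            score = sum(len(keypoint_sets[i] & keypoint_sets[j])
                        for j in bin_indices if j != i)
            if score > best_score:
                best_idx, best_score = int(i), score
        selected.append(best_idx)
    return selected

if __name__ == "__main__":
    # Toy example: 10 frames with made-up keypoint IDs.
    rng = np.random.default_rng(0)
    frames = [set(rng.choice(100, size=20, replace=False).tolist())
              for _ in range(10)]
    print(select_keyframes_by_binning(frames, num_keyframes=3))
```

A dynamic programming variant would instead score candidate keyframe sequences jointly rather than choosing independently within each bin; since the record does not describe that cost function, it is not sketched here.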