Text-driven video prediction

Full description:
Current video generation models usually convert signals indicating appearance and motion, received from inputs (e.g., image and text) or latent spaces (e.g., noise vectors), into consecutive frames, yielding a stochastic generation process due to the uncertainty introduced by latent code sampling. However, this generation pattern lacks deterministic constraints on both appearance and motion, leading to uncontrollable and undesirable outcomes. To this end, we propose a new task called Text-driven Video Prediction (TVP). Taking the first frame and a text caption as inputs, the task aims to synthesize the following frames; the appearance and motion components are provided by the image and the caption, respectively. The key to addressing the TVP task lies in fully exploiting the underlying motion information in text descriptions, thus facilitating plausible video generation. In fact, the task is intrinsically a cause-and-effect problem, as the text content directly influences the motion changes across frames. To investigate the capability of text for causal inference of progressive motion information, our TVP framework contains a Text Inference Module (TIM) that produces stepwise embeddings to regulate motion inference for subsequent frames. In particular, a refinement mechanism incorporating global motion semantics guarantees coherent generation. Extensive experiments are conducted on the Something-Something V2 and Single Moving MNIST datasets. Experimental results demonstrate that our model achieves better results than other baselines, verifying the effectiveness of the proposed framework.
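
As a rough illustration of the idea in the description above, the following Python/PyTorch sketch pairs a first frame (appearance) with a caption embedding (motion) and lets a Text Inference Module emit one stepwise embedding per future frame, with a residual correction standing in for the global-motion refinement. The module names, layer choices (GRU cell, small convolutions), and sizes are illustrative assumptions, not the authors' actual architecture.

# Hypothetical sketch of the Text-driven Video Prediction (TVP) setup: the first
# frame supplies appearance, the caption supplies motion, and a Text Inference
# Module (TIM) emits stepwise text embeddings that guide each predicted frame.
# All module names and layer choices below are illustrative assumptions.
import torch
import torch.nn as nn


class TextInferenceModule(nn.Module):
    """Produce one guidance embedding per future frame from a caption embedding."""

    def __init__(self, text_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRUCell(text_dim, hidden_dim)
        self.init_state = nn.Linear(text_dim, hidden_dim)

    def forward(self, caption_emb: torch.Tensor, num_steps: int) -> torch.Tensor:
        # caption_emb: (B, text_dim) -> stepwise embeddings: (B, T, hidden_dim)
        h = torch.tanh(self.init_state(caption_emb))
        steps = []
        for _ in range(num_steps):
            h = self.rnn(caption_emb, h)   # refine the motion state at each step
            steps.append(h)
        return torch.stack(steps, dim=1)


class TVPSketch(nn.Module):
    """Predict future frames from (first frame, caption embedding)."""

    def __init__(self, text_dim: int = 256, hidden_dim: int = 128, channels: int = 3):
        super().__init__()
        self.tim = TextInferenceModule(text_dim, hidden_dim)
        self.frame_enc = nn.Sequential(
            nn.Conv2d(channels, hidden_dim, 3, padding=1), nn.ReLU(),
        )
        # Decoder conditions the encoded frame on the stepwise text embedding.
        self.decoder = nn.Sequential(
            nn.Conv2d(hidden_dim * 2, hidden_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden_dim, channels, 3, padding=1),
        )
        # Stand-in for the "global motion semantics" refinement: a residual
        # correction computed from the caption and applied to every frame.
        self.global_refine = nn.Linear(text_dim, channels)

    def forward(self, first_frame: torch.Tensor, caption_emb: torch.Tensor,
                num_future: int) -> torch.Tensor:
        b, c, h, w = first_frame.shape
        step_embs = self.tim(caption_emb, num_future)            # (B, T, hidden)
        global_bias = self.global_refine(caption_emb)            # (B, C)
        frames, prev = [], first_frame
        for t in range(num_future):
            feat = self.frame_enc(prev)                          # (B, hidden, H, W)
            guide = step_embs[:, t].unsqueeze(-1).unsqueeze(-1).expand(-1, -1, h, w)
            nxt = self.decoder(torch.cat([feat, guide], dim=1))  # (B, C, H, W)
            nxt = nxt + global_bias.view(b, c, 1, 1)             # coherence refinement
            frames.append(nxt)
            prev = nxt                                           # autoregressive rollout
        return torch.stack(frames, dim=1)                        # (B, T, C, H, W)


if __name__ == "__main__":
    model = TVPSketch()
    frame0 = torch.randn(2, 3, 64, 64)        # first frame (appearance)
    caption = torch.randn(2, 256)             # pre-encoded caption (motion)
    video = model(frame0, caption, num_future=4)
    print(video.shape)                        # torch.Size([2, 4, 3, 64, 64])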

Bibliographic Details
Main Authors: SONG, Xue; CHEN, Jingjing; ZHU, Bin; JIANG, Yu-gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1145/3675171
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Text-driven Video Prediction; controllable video generation; motion inference; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/9356
https://ink.library.smu.edu.sg/context/sis_research/article/10356/viewcontent/Text_drivenVideoPrediction_sv__2_.pdf
Institution: Singapore Management University