Text-driven video prediction
Current video generation models typically convert signals indicating appearance and motion, received from inputs (e.g., an image and text) or latent spaces (e.g., noise vectors), into consecutive frames, realizing a stochastic generation process in which uncertainty is introduced by latent-code sampling. Howe...
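Since the record's abstract is cut off, the sketch below only illustrates the general pipeline that abstract describes, not the paper's actual model: appearance and motion signals (a conditioning image and a text embedding) are fused with a sampled latent noise vector and decoded into consecutive frames, so repeated sampling yields different videos. The class name, dimensions, and toy encoders are all assumptions for illustration.

```python
# Minimal sketch (NOT the paper's method) of a stochastic,
# text-conditioned video predictor as described in the abstract.
import torch
import torch.nn as nn


class ToyTextDrivenVideoPredictor(nn.Module):
    # Hypothetical module; every dimension here is an assumption.
    def __init__(self, text_dim=64, img_channels=3, latent_dim=32,
                 hidden_dim=128, num_frames=8, frame_size=16):
        super().__init__()
        self.num_frames = num_frames
        self.frame_size = frame_size
        self.img_channels = img_channels
        # Appearance signal: encode the conditioning image.
        self.image_enc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_channels * frame_size * frame_size, hidden_dim),
            nn.ReLU(),
        )
        # Motion signal: project a precomputed text embedding.
        self.text_enc = nn.Linear(text_dim, hidden_dim)
        # Stochasticity: sampled latent noise makes outputs non-deterministic.
        self.latent_enc = nn.Linear(latent_dim, hidden_dim)
        # Decoder maps the fused condition to T consecutive frames.
        self.decoder = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim,
                      num_frames * img_channels * frame_size * frame_size),
            nn.Tanh(),
        )

    def forward(self, image, text_emb, noise):
        cond = torch.cat(
            [self.image_enc(image), self.text_enc(text_emb),
             self.latent_enc(noise)], dim=-1)
        frames = self.decoder(cond)
        # (batch, T, C, H, W): the predicted consecutive frames.
        return frames.view(-1, self.num_frames, self.img_channels,
                           self.frame_size, self.frame_size)


if __name__ == "__main__":
    model = ToyTextDrivenVideoPredictor()
    image = torch.rand(2, 3, 16, 16)   # first-frame appearance
    text_emb = torch.randn(2, 64)      # text describing the motion
    noise = torch.randn(2, 32)         # sampled latent code
    video = model(image, text_emb, noise)
    print(video.shape)                 # torch.Size([2, 8, 3, 16, 16])
```

Resampling `noise` with the same image and text produces a different plausible video, which is the stochastic behavior the abstract attributes to latent-code sampling.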
Main Authors: SONG, Xue; CHEN, Jingjing; ZHU, Bin; JIANG, Yu-gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9356
https://ink.library.smu.edu.sg/context/sis_research/article/10356/viewcontent/Text_drivenVideoPrediction_sv__2_.pdf
Institution: Singapore Management University
Similar Items
- Synchronization of lecture videos and electronic slides by video text analysis
  by: WANG, Feng, et al.
  Published: (2003)
- Video event detection using motion relativity and visual relatedness
  by: WANG, Feng, et al.
  Published: (2008)
- Serendipity-driven celebrity video hyperlinking
  by: YANG, Shujun, et al.
  Published: (2016)
- Video text detection and segmentation for optical character recognition
  by: NGO, Chong-wah, et al.
  Published: (2005)
- Lecture video enhancement and editing by integrating posture, gesture, and text
  by: WANG, Feng, et al.
  Published: (2007)