Interactive video search with multi-modal LLM video captioning
Cross-modal representation learning is essential for interactive text-to-video search. However, such learning is limited by the size and quality of available video-caption pairs. To improve search accuracy, we propose enlarging the pool of available video-caption pairs by leveraging m...
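The abstract sketches a data-augmentation idea: generate additional captions for videos with a multi-modal LLM and add them to the training pool used for cross-modal representation learning. A minimal, hypothetical Python sketch of that pipeline is below; the `mllm_caption` stub stands in for a real multi-modal LLM captioner, and all names are illustrative rather than taken from the paper:

```python
# Hypothetical sketch: enlarge a video-caption training set with
# machine-generated captions, tagging each pair's provenance so that
# noisier LLM-generated pairs can be down-weighted during training.

def mllm_caption(video_id: str) -> str:
    """Stub standing in for a multi-modal LLM captioner (illustrative only)."""
    return f"auto-generated caption for {video_id}"

def enlarge_pairs(human_pairs, unlabeled_videos):
    """Combine human-annotated pairs with LLM-captioned videos.

    Returns a list of (video_id, caption, source) triples, where
    source is "human" or "llm".
    """
    pairs = [(v, c, "human") for v, c in human_pairs]
    pairs += [(v, mllm_caption(v), "llm") for v in unlabeled_videos]
    return pairs

# Toy usage: one annotated pair plus two unlabeled videos.
pairs = enlarge_pairs(
    [("vid001", "a dog catches a frisbee")],
    ["vid002", "vid003"],
)
for video, caption, source in pairs:
    print(video, caption, source, sep=" | ")
```

The provenance tag is a common design choice in pseudo-labeling setups: it lets a downstream retrieval model weight or filter generated pairs separately from human annotations.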
| Main Authors: | CHENG, Yu-Tong; WU, Jiaxin; MA, Zhixin; HE, Jiangshan; WEI, Xiao-Yong; NGO, Chong-wah |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2025 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/10105 ; https://ink.library.smu.edu.sg/context/sis_research/article/11105/viewcontent/InteractiveVideo_LLM_av.pdf |
| Institution: | Singapore Management University |
Similar Items
- Cross-modal graph with meta concepts for video captioning
  by: Wang, Hao, et al.
  Published: (2022)
- PERSONALIZED VISUAL INFORMATION CAPTIONING
  by: WU SHUANG
  Published: (2023)
- A Fine-Grained Spatial-Temporal Attention Model for Video Captioning
  by: Liu, A.-A., et al.
  Published: (2021)
- Towards semantic, debiased and moment video retrieval
  by: Satar, Burak
  Published: (2025)
- Semantic-filtered Soft-Split-Aware video captioning with audio-augmented feature
  by: Xu, Yuecong, et al.
  Published: (2021)