Temporal sentence grounding in videos: a survey and future directions
Temporal sentence grounding in videos (TSGV), a.k.a. natural language video localization (NLVL) or video moment retrieval (VMR), aims to retrieve a temporal moment that semantically corresponds to a language query from an untrimmed video. Connecting computer vision and natural language, TSGV has drawn significant attention from researchers in both communities. This survey attempts to provide a summary of fundamental concepts in TSGV and current research status, as well as future research directions. As the background, we present a common structure of functional components in TSGV, in a tutorial style: from feature extraction from raw video and language query, to answer prediction of the target moment. Then we review the techniques for multimodal understanding and interaction, which is the key focus of TSGV for effective alignment between the two modalities. We construct a taxonomy of TSGV techniques and elaborate the methods in different categories with their strengths and weaknesses. Lastly, we discuss issues with the current TSGV research and share our insights about promising research directions.

Main Authors: Zhang, Hao; Sun, Aixin; Jing, Wei; Zhou, Joey Tianyi
Other Authors: School of Computer Science and Engineering; S-Lab
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Cross-Modal Video Retrieval; Multimodal Learning
Online Access: https://hdl.handle.net/10356/172187
Institution: Nanyang Technological University
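The abstract outlines a common pipeline shared by TSGV methods: extract features from the raw video and the language query, model the cross-modal interaction between them, and predict the target moment. The PyTorch sketch below illustrates that generic structure only; the `TSGVSketch` module, its layer choices, and the feature dimensions (e.g., clip-level video features and word embeddings) are illustrative assumptions, not the design of any specific method covered by the survey.

```python
# A minimal, hypothetical sketch of the generic TSGV pipeline described in
# the abstract: encode video and query features, fuse the two modalities,
# and predict start/end boundaries of the target moment. All module names
# and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class TSGVSketch(nn.Module):
    def __init__(self, video_dim=1024, query_dim=300, hidden_dim=256):
        super().__init__()
        # Project pre-extracted features (e.g., clip-level video features
        # and word embeddings) into a shared hidden space.
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.query_proj = nn.Linear(query_dim, hidden_dim)
        # Cross-modal interaction: each video clip attends to query words.
        self.cross_attn = nn.MultiheadAttention(
            hidden_dim, num_heads=4, batch_first=True
        )
        # Span predictor: per-clip scores for being the moment's start/end.
        self.boundary_head = nn.Linear(hidden_dim, 2)

    def forward(self, video_feats, query_feats):
        # video_feats: (B, T, video_dim); query_feats: (B, L, query_dim)
        v = self.video_proj(video_feats)
        q = self.query_proj(query_feats)
        fused, _ = self.cross_attn(query=v, key=q, value=q)
        logits = self.boundary_head(fused)  # (B, T, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        return start_logits, end_logits


model = TSGVSketch()
video = torch.randn(1, 64, 1024)  # 64 clip-level video features
query = torch.randn(1, 12, 300)   # 12 word embeddings
start, end = model(video, query)
# Predicted moment = clip indices with the highest start/end scores.
print(start.argmax(dim=1).item(), end.argmax(dim=1).item())
```

In span-prediction methods of this general shape, the final moment is the (start, end) clip pair with the highest joint boundary scores, typically subject to the constraint that end >= start; proposal-based methods covered by the survey instead score candidate segments directly.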
Citation: Zhang, H., Sun, A., Jing, W. & Zhou, J. T. (2023). Temporal sentence grounding in videos: a survey and future directions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8), 10443-10465. https://dx.doi.org/10.1109/TPAMI.2023.3258628
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2023.3258628
Funding: This work was supported in part by the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, together with cash and in-kind contributions from the industry partner(s); in part by the SERC (Science and Engineering Research Council) Central Research Fund (Use-Inspired Basic Research); and in part by the Singapore Government's Research, Innovation and Enterprise 2020 Plan, Advanced Manufacturing and Engineering domain, under Grant A18A1b0045.
© 2023 IEEE. All rights reserved.