Cross-modal graph with meta concepts for video captioning

Video captioning aims to interpret complex visual content as text descriptions, which requires the model to fully understand video scenes, including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to generate object proposals and use the attention mechanism to model the relations between objects. These methods often miss semantic concepts that are undefined in the pretrained detection model and fail to identify the exact predicate relationships between objects. In this paper, we investigate an open research task of generating text descriptions for given videos, and propose Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta concept graphs dynamically with the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.


Bibliographic Details
Main Authors: Wang, Hao, Lin, Guosheng, Hoi, Steven C. H., Miao, Chunyan
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Video Captioning; Vision-and-Language
Online Access:https://hdl.handle.net/10356/162546
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-162546
Journal: IEEE Transactions on Image Processing, vol. 31, pp. 5150-5162 (2022)
ISSN: 1057-7149
DOI: 10.1109/TIP.2022.3192709
Citation: Wang, H., Lin, G., Hoi, S. C. H. & Miao, C. (2022). Cross-modal graph with meta concepts for video captioning. IEEE Transactions on Image Processing, 31, 5150-5162. https://dx.doi.org/10.1109/TIP.2022.3192709
Version: Submitted/Accepted version
Funding: This work was supported in part by the National Research Foundation (NRF), Singapore, through the AI Singapore Program (AISG) under Award AISG-GC-2019-003 and Award AISG-RP-2018-003, and through the NRF Investigatorship Program (NRFI) under Award NRF-NRFI05-2019-0002; in part by the Singapore Ministry of Health under its National Innovation Challenge on Active and Confident Ageing (NIC) under Project MOH/NIC/HAIG03/2017; and in part by the Ministry of Education (MOE), Singapore, Academic Research Fund (AcRF) Tier 1 under Grant RG95/20.
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TIP.2022.3192709.