Cross-modal graph with meta concepts for video captioning

Video captioning aims to interpret complex visual content as text descriptions, which requires the model to fully understand video scenes, including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to generate object proposals and use attention mechanisms to model the relations between objects. However, they often miss semantic concepts that are not covered by the pretrained detector and fail to identify the exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos and propose the Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the visual regions corresponding to text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We then build meta concept graphs dynamically from the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of the proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.

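The abstract describes pairing weakly localized visual regions with caption words into "cross-modal meta concepts" and reasoning over them with dynamically built graphs. The short PyTorch sketch below only illustrates that general idea and is not the authors' implementation: the fusion by addition, the cosine-similarity adjacency, and all module names and dimensions are assumptions made for this example.

# Minimal, illustrative sketch (NOT the paper's released code): region features
# and aligned word features are fused into meta-concept nodes, a dense graph is
# built from pairwise similarity, and one message-passing step refines the nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaConceptGraphLayer(nn.Module):
    """One round of similarity-weighted message passing over meta-concept nodes."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, dim) fused region+word features
        # Dense adjacency from cosine similarity (an assumption; the paper
        # constructs its graphs dynamically from the learned meta concepts).
        normed = F.normalize(nodes, dim=-1)
        adj = torch.softmax(normed @ normed.t(), dim=-1)   # (N, N) edge weights
        messages = adj @ self.proj(nodes)                   # aggregate neighbours
        return F.relu(nodes + messages)                     # residual update


if __name__ == "__main__":
    num_regions, feat_dim = 8, 256
    region_feats = torch.randn(num_regions, feat_dim)       # visual region features
    word_feats = torch.randn(num_regions, feat_dim)         # aligned word embeddings
    meta_concepts = region_feats + word_feats               # naive fusion (assumption)

    layer = MetaConceptGraphLayer(feat_dim)
    refined = layer(meta_concepts)                          # graph-refined node features
    print(refined.shape)                                    # torch.Size([8, 256])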

Bibliographic Details
Main Authors: WANG, Hao, LIN, Guosheng, HOI, Steven C. H., MIAO, Chunyan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Semantics; Visualization; Feature extraction; Predictive models; Task analysis; Computational modeling; Location awareness; Video captioning; vision-and-language; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/7245
DOI: 10.1109/TIP.2022.3192709
Collection: Research Collection School Of Computing and Information Systems
Institution: Singapore Management University