Learning to collocate Visual-Linguistic Neural Modules for image captioning

Humans tend to decompose a sentence into different parts like "sth do sth at someplace" and then fill each part with certain content. Inspired by this, we follow the principle of modular design to propose a novel image captioner: learning to Collocate Visual-Linguistic Neural Modules (CVLNM). Unlike the widely used neural module networks in VQA, where the language (i.e., the question) is fully observable, the task of collocating visual-linguistic modules is more challenging: in captioning, the language is only partially observable, so the modules must be collocated dynamically during the captioning process. In summary, we make the following technical contributions to design and train our CVLNM: (1) a distinguishable module design, with four modules in the encoder (one linguistic module for function words and three visual modules for the content words, i.e., nouns, adjectives, and verbs) and another linguistic module in the decoder for commonsense reasoning; (2) a self-attention based module controller for robustifying the visual reasoning; (3) a part-of-speech based syntax loss imposed on the module controller for further regularizing the training of our CVLNM. Extensive experiments on the MS-COCO dataset show that our CVLNM is more effective, e.g., achieving a new state-of-the-art 129.5 CIDEr-D, and more robust, e.g., being less likely to overfit to dataset bias and suffering less when fewer training samples are available. Code is available at https://github.com/GCYZSL/CVLMN.
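
To make the second and third contributions concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: a self-attention based controller that softly collocates the outputs of the four modules at each decoding step, plus a part-of-speech based syntax loss on the controller's weights. This is an illustrative sketch, not the authors' implementation; all class names, tensor shapes, and the assumption of one pooled feature per module are assumptions made for the example (see the repository linked above for the actual code).

```python
# Hypothetical sketch of a self-attention module controller and a POS-based
# syntax loss, in the spirit of the CVLNM abstract. Names and shapes are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModuleController(nn.Module):
    """Softly collocates K module outputs at each decoding step."""

    def __init__(self, d_model: int, num_modules: int = 4, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)  # one scalar score per module
        self.num_modules = num_modules

    def forward(self, module_feats: torch.Tensor, dec_state: torch.Tensor):
        # module_feats: (B, K, d) -- one pooled feature per module
        # dec_state:    (B, d)    -- partially observed language (decoder state)
        query = dec_state.unsqueeze(1).expand(-1, self.num_modules, -1)
        ctx, _ = self.self_attn(query + module_feats, module_feats, module_feats)
        weights = F.softmax(self.score(ctx).squeeze(-1), dim=-1)   # (B, K)
        fused = (weights.unsqueeze(-1) * module_feats).sum(dim=1)  # (B, d)
        return fused, weights


def pos_syntax_loss(weights: torch.Tensor, pos_targets: torch.Tensor) -> torch.Tensor:
    """Encourage the controller to pick the module matching the ground-truth POS.

    weights:     (B, K) soft collocation weights from the controller
    pos_targets: (B,)   index of the expected module at this step (e.g.
                        0=function word, 1=noun, 2=adjective, 3=verb),
                        derived from a POS tagger on the reference caption
    """
    return F.nll_loss(torch.log(weights + 1e-8), pos_targets)


if __name__ == "__main__":
    B, K, d = 2, 4, 512
    controller = ModuleController(d_model=d, num_modules=K)
    module_feats = torch.randn(B, K, d)   # outputs of the four encoder modules
    dec_state = torch.randn(B, d)         # current decoder hidden state
    fused, w = controller(module_feats, dec_state)
    loss = pos_syntax_loss(w, torch.tensor([1, 3]))  # expect noun, then verb
    print(fused.shape, w.shape, loss.item())
```

In the paper's formulation, such a syntax loss would be added to the usual captioning loss during training, which is what regularizes how the modules are collocated; the snippet above only illustrates the shape of that interaction.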

Bibliographic Details
Main Authors: Yang, Xu, Zhang, Hanwang, Gao, Chongyang, Cai, Jianfei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Image Captioning; Distinguishable Neural Modules
Online Access:https://hdl.handle.net/10356/170425
Institution: Nanyang Technological University
Published in: International Journal of Computer Vision, 131(1), 82-100
Citation: Yang, X., Zhang, H., Gao, C. & Cai, J. (2023). Learning to collocate Visual-Linguistic Neural Modules for image captioning. International Journal of Computer Vision, 131(1), 82-100. https://dx.doi.org/10.1007/s11263-022-01692-8
ISSN: 0920-5691
DOI: 10.1007/s11263-022-01692-8
Scopus ID: 2-s2.0-85139549679
Rights: © 2022 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.