Learning to collocate Visual-Linguistic Neural Modules for image captioning
Humans tend to decompose a sentence into different parts, such as *something does something at someplace*, and then fill each part with specific content. Inspired by this, we follow the principle of modular design to propose a novel image captioner: learning to Collocate Visual-Linguistic Neural Modules (CVLNM). Unlike t...
Main Authors: Yang, Xu; Zhang, Hanwang; Gao, Chongyang; Cai, Jianfei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170425
Institution: Nanyang Technological University
Similar Items
- Deconfounded image captioning: a causal retrospect
  by: Yang, Xu, et al.
  Published: (2022)
- PERSONALIZED VISUAL INFORMATION CAPTIONING
  by: WU SHUANG
  Published: (2023)
- Stack-VS: stacked visual-semantic attention for image caption generation
  by: Cheng, Ling, et al.
  Published: (2021)
- Context-aware visual policy network for fine-grained image captioning
  by: Zha, Zheng-Jun, et al.
  Published: (2022)
- Learning transferable perturbations for image captioning
  by: WU, Hanjie, et al.
  Published: (2022)