Image captioning via semantic element embedding

Bibliographic Details
Main Authors: ZHANG, Xiaodan; HE, Shengfeng; SONG, Xinhang; LAU, Rynson W.H.; JIAO, Jianbin; YE, Qixiang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: CNN
Online Access: https://ink.library.smu.edu.sg/sis_research/7863
機構: Singapore Management University
Physical Description
Summary: Image captioning approaches that use global Convolutional Neural Network (CNN) features cannot represent and describe all the important elements in complex scenes. In this paper, we propose semantic element embedding to enrich the semantic representation of images and to update the language model. For semantic element discovery, an object detection module predicts regions of the image, and a captioning model, a Long Short-Term Memory (LSTM) network, generates local descriptions for these regions. The predicted descriptions and categories are used to generate the semantic feature, which not only contains detailed information but also shares a word space with the descriptions, thus bridging the modality gap between visual images and semantic captions. We further integrate the CNN feature with the semantic feature in the proposed Element Embedding LSTM (EE-LSTM) model to predict a language description. Experiments on the MS COCO dataset demonstrate that the proposed approach outperforms conventional captioning methods and can be flexibly combined with baseline models to achieve superior performance. (C) 2019 Published by Elsevier B.V.
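
The summary describes a two-stream design: a global CNN feature plus a word-space semantic feature built from the categories and local descriptions predicted for detected regions, fused inside an LSTM decoder. Below is a minimal PyTorch sketch of that fusion. The class name EELSTMSketch, the layer sizes, and the mean-pooling of element-word embeddings are assumptions made for illustration, not the authors' released implementation.

# Minimal sketch of the element-embedding idea, assuming PyTorch.
# Class name, layer sizes, and mean-pooling of element words are
# hypothetical choices, not the paper's actual architecture.
import torch
import torch.nn as nn

class EELSTMSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, cnn_dim=2048, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Fuse the global CNN feature with the word-space semantic feature.
        self.fuse = nn.Linear(cnn_dim + embed_dim, embed_dim)
        self.lstm = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, cnn_feat, element_word_ids, caption_ids):
        # Semantic feature: embed the words predicted for detected regions
        # (categories and local-description words) and average them, so the
        # feature shares a word space with the caption vocabulary.
        sem_feat = self.word_embed(element_word_ids).mean(dim=1)    # (B, E)
        fused = torch.tanh(self.fuse(torch.cat([cnn_feat, sem_feat], dim=-1)))

        tokens = self.word_embed(caption_ids)                       # (B, T, E)
        ctx = fused.unsqueeze(1).expand(-1, tokens.size(1), -1)     # (B, T, E)
        hidden, _ = self.lstm(torch.cat([tokens, ctx], dim=-1))     # (B, T, H)
        return self.out(hidden)  # per-step vocabulary logits

# Example: batch of 4 images, 12 element words each, 16 caption tokens.
model = EELSTMSketch(vocab_size=10000)
logits = model(torch.randn(4, 2048),
               torch.randint(0, 10000, (4, 12)),
               torch.randint(0, 10000, (4, 16)))

Conditioning every decoding step on the fused vector is only one simple way to inject both feature streams; the paper may instead feed them at the initial step only or through a gating or attention mechanism.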