Visual relationship detection with contextual information


Full description

Bibliographic Details
Main Authors: Li, Yugang, Wang, Yongbin, Chen, Zhe, Zhu, Yuting
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access: https://hdl.handle.net/10356/146883
Institution: Nanyang Technological University
Physical Description
Summary: Understanding an image goes beyond recognizing and locating the objects in it; the relationships between objects are also very important in image understanding. Most previous methods have focused on recognizing local predictions of the relationships, but real-world image relationships are often determined by the surrounding objects and other contextual information. In this work, we employ this insight to propose a novel framework for visual relationship detection. The core of the framework is a relationship inference network, a recurrent structure designed to combine the global contextual information of the objects to infer the relationships in the image. Experimental results on Stanford VRD and Visual Genome demonstrate that the proposed method achieves good performance in both efficiency and accuracy. Finally, we demonstrate the value of visual relationship detection on two computer vision tasks: image retrieval and scene graph generation.
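The summary describes a recurrent unit that folds the features of surrounding objects into the inference for a subject-object pair. This is not the authors' code; it is a minimal NumPy sketch of that general idea, assuming a GRU-style cell, fixed-size object feature vectors, and a softmax over a hypothetical set of predicate classes (all names and shapes are illustrative).

```python
import numpy as np

def gru_step(h, x, W, U, b):
    """One GRU cell step: fold one context-object feature x into the
    running state h. W, U are (3d, d) input/recurrent weights; b is (3d,)."""
    d = h.size
    z = 1.0 / (1.0 + np.exp(-(W[:d] @ x + U[:d] @ h + b[:d])))              # update gate
    r = 1.0 / (1.0 + np.exp(-(W[d:2*d] @ x + U[d:2*d] @ h + b[d:2*d])))     # reset gate
    n = np.tanh(W[2*d:] @ x + U[2*d:] @ (r * h) + b[2*d:])                  # candidate state
    return (1.0 - z) * h + z * n

def infer_predicate_scores(subj, obj, context_feats, W, U, b, W_out):
    """Score relationship predicates for a (subject, object) pair by running
    the recurrent unit over the features of all surrounding objects, so the
    prediction depends on global context rather than the pair alone."""
    h = np.tanh(subj + obj)            # initialize the state from the pair itself
    for c in context_feats:            # fold in surrounding objects one at a time
        h = gru_step(h, c, W, U, b)
    logits = W_out @ h                 # project to predicate logits
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

# Illustrative usage with random weights: 8-dim features, 4 context
# objects, 5 hypothetical predicate classes.
rng = np.random.default_rng(0)
d, n_ctx, n_pred = 8, 4, 5
scores = infer_predicate_scores(
    rng.standard_normal(d), rng.standard_normal(d),
    rng.standard_normal((n_ctx, d)),
    rng.standard_normal((3 * d, d)), rng.standard_normal((3 * d, d)),
    rng.standard_normal(3 * d), rng.standard_normal((n_pred, d)),
)
```

The point of the recurrence is that the final state `h`, and hence the predicate distribution, changes as context objects are added, which is how surrounding objects influence the predicted relationship.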