Graph edit distance reward : learning to edit scene graph

The scene graph, as a vital tool for bridging the gap between the language domain and the image domain, has been widely adopted in cross-modality tasks such as VQA. In this paper, we propose a new method for editing a scene graph according to user instructions, a task that has not been explored before. Specifically, in order to learn to edit scene graphs according to the semantics given by text, we propose a Graph Edit Distance Reward, which is based on the Policy Gradient and a graph matching algorithm, to optimize a neural symbolic model. In the context of text-editing image retrieval, we validate the effectiveness of our method on the CSS and CRIR datasets. In addition, CRIR is a new synthetic dataset generated by us, which we will publish soon for future use.
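
The abstract describes the core idea: an edited scene graph is scored by its (negative) graph edit distance to a target graph, and that score is used as a reward in a policy-gradient update. The sketch below is not the authors' code; it is a minimal illustration of that reward signal under assumed simplifications (a toy edit vocabulary, a single-parameter stand-in policy, and networkx's graph_edit_distance as the graph matching step).

```python
# Minimal sketch (not the authors' implementation): negative graph edit
# distance as a REINFORCE-style reward for choosing symbolic edit operations.
# The toy graphs, the edit vocabulary and the stand-in "policy" are
# illustrative assumptions only.
import networkx as nx
import torch
from torch.distributions import Categorical

# Toy scene graphs: one object node whose label encodes its attributes.
current = nx.Graph()
current.add_node("obj1", label="red cube")
target = nx.Graph()
target.add_node("obj1", label="blue cube")

# A stand-in policy: learnable scores over a tiny vocabulary of edits.
EDIT_OPS = ["change_color_blue", "change_color_green", "add_sphere"]
logits = torch.nn.Parameter(torch.zeros(len(EDIT_OPS)))
optimizer = torch.optim.Adam([logits], lr=0.1)

def apply_edit(graph, op):
    """Apply one symbolic edit operation to a copy of the scene graph."""
    g = graph.copy()
    if op == "change_color_blue":
        g.nodes["obj1"]["label"] = "blue cube"
    elif op == "change_color_green":
        g.nodes["obj1"]["label"] = "green cube"
    elif op == "add_sphere":
        g.add_node("obj2", label="gray sphere")
    return g

def ged_reward(pred, tgt):
    """Reward is the negative graph edit distance (0 when the graphs match)."""
    node_match = lambda a, b: a.get("label") == b.get("label")
    return -nx.graph_edit_distance(pred, tgt, node_match=node_match)

for step in range(50):
    dist = Categorical(logits=logits)
    action = dist.sample()                        # sample an edit operation
    edited = apply_edit(current, EDIT_OPS[action.item()])
    reward = ged_reward(edited, target)
    loss = -reward * dist.log_prob(action)        # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("Preferred edit:", EDIT_OPS[int(torch.argmax(logits))])
```

In the paper's setting the edit operations would be produced by a neural symbolic model conditioned on the text instruction, and the reward would compare the edited graph with the scene graph of the target image; the snippet only shows how a graph-edit-distance reward plugs into a policy-gradient update.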


Bibliographic Details
Main Authors: Chen, Lichang; Lin, Guosheng; Wang, Shijie; Wu, Qingyao
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Scene Graph Editing; Policy Gradient
Online Access: https://hdl.handle.net/10356/144419
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-144419
Collection: DR-NTU (NTU Library)
Conference: European Conference on Computer Vision (ECCV) 2020
Type: Conference Paper (Accepted version)
Citation: Chen, L., Lin, G., Wang, S., & Wu, Q. (2020). Graph edit distance reward : learning to edit scene graph. Proceedings of the European Conference on Computer Vision (ECCV) 2020. https://hdl.handle.net/10356/144419
Funding Agencies: AI Singapore; Ministry of Education (MOE); National Research Foundation (NRF)
Grants: AISG-RP-2018-003; RG28/18 (S); RG22/19 (S)
Acknowledgement: This research was supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-003) and the MOE Tier-1 research grants RG28/18 (S) and RG22/19 (S). Q. Wu's participation was supported by NSFC 61876208 and the Key-Area Research and Development Program of Guangdong 2018B010108002. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
Rights: © 2020 Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of an article published in European Conference on Computer Vision (ECCV) 2020.
File Format: application/pdf
Date Deposited: 2020-11-04