Learning long-term structural dependencies for video salient object detection

Bibliographic Details
Main Authors: WANG, Bo, LIU, Wenxi, HAN, Guoqiang, HE, Shengfeng
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2020
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7871
Institution: Singapore Management University
Description
Summary: Existing video salient object detection (VSOD) methods focus on exploring either short-term or long-term temporal information. However, temporal information is exploited at a global frame level or in a regular grid structure, neglecting inter-frame structural dependencies. In this article, we propose to learn long-term structural dependencies with a structure-evolving graph convolutional network (GCN). In particular, we construct a graph for the entire video using a fast supervoxel segmentation method, in which nodes are connected according to spatio-temporal structural similarity. We infer the inter-frame structural dependencies of the salient object using convolutional operations on the graph. To prune redundant connections in the graph and better adapt to the moving salient object, we present an adaptive graph pooling that evolves the structure of the graph by dynamically merging similar nodes, learning better hierarchical representations of the graph. Experiments on six public datasets show that our method outperforms all other state-of-the-art methods. Furthermore, we demonstrate that our proposed adaptive graph pooling can effectively improve the supervoxel algorithm in terms of segmentation accuracy.
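The core pooling idea described in the summary (dynamically merging similar nodes to coarsen the graph) can be illustrated with a simplified sketch. Note this is a hypothetical, non-learned stand-in: the paper's adaptive graph pooling learns its merge criterion within the GCN, whereas the sketch below greedily merges adjacent nodes whose feature cosine similarity exceeds a fixed threshold, averaging their features; the function name `pool_graph` and the threshold value are illustrative assumptions, not part of the original method.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors (0.0 if either is zero).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def pool_graph(features, edges, threshold=0.95):
    """Merge similar adjacent nodes; return pooled features and edges.

    features: dict node -> feature vector (list of floats)
    edges: iterable of (u, v) adjacency pairs
    """
    # Union-find over nodes, with path halving in find().
    parent = {n: n for n in features}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    # Greedily union endpoints of edges whose features are highly similar.
    for u, v in edges:
        if cosine(features[u], features[v]) >= threshold:
            parent[find(u)] = find(v)
    # Average features within each merged cluster (the pooled node feature).
    clusters = {}
    for n, f in features.items():
        clusters.setdefault(find(n), []).append(f)
    pooled = {r: [sum(col) / len(fs) for col in zip(*fs)]
              for r, fs in clusters.items()}
    # Re-wire edges between clusters, dropping self-loops and duplicates.
    pooled_edges = {(min(find(u), find(v)), max(find(u), find(v)))
                    for u, v in edges if find(u) != find(v)}
    return pooled, pooled_edges

# Toy usage: nodes 0 and 1 have near-identical features and get merged;
# node 2 is dissimilar and survives as its own pooled node.
features = {0: [1.0, 0.0], 1: [0.99, 0.01], 2: [0.0, 1.0]}
edges = [(0, 1), (1, 2)]
pooled, pooled_edges = pool_graph(features, edges)
# len(pooled) == 2, len(pooled_edges) == 1
```

Merging is transitive through the union-find, so chains of mutually similar supervoxels collapse into one pooled node, which mirrors how repeated pooling yields the coarser hierarchical graph representations the summary refers to.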