Spatiotemporal saliency detection via sparse representation

Bibliographic Details
Main Authors: Ren, Zhixiang, Gao, Shenghua, Rajan, Deepu, Chia, Clement Liang-Tien, Huang, Yun
Other Authors: School of Computer Engineering
Format: Conference or Workshop Item
Language:English
Published: 2013
Online Access:https://hdl.handle.net/10356/99619
http://hdl.handle.net/10220/13023
Institution: Nanyang Technological University
Description
Summary: Multimedia applications such as retrieval and copy detection can benefit from saliency detection, a method for identifying regions in images and videos that attract the attention of the human visual system. In this paper, we propose a new spatiotemporal saliency framework for videos based on sparse representation. For temporal saliency, we model the movement of a target patch as a reconstruction process in which overlapping patches from neighboring frames are used to reconstruct the target patch. The learned coefficients encode the positions of the matched patches and thus represent the motion trajectory of the target patch. We also introduce a smoothing term into the sparse coding framework to learn coherent motion trajectories. Motivated by the psychological finding that an abrupt stimulus causes a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, a sparsity regularizer, and local trajectory contrast to measure motion saliency. For spatial saliency, a similar sparse reconstruction process is adopted to capture regions with high center-surround contrast. Finally, the temporal and spatial saliency maps are combined by agreement, favoring salient regions detected with high confidence. Experimental results on a human fixation video dataset show that our method outperforms five state-of-the-art approaches.
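
The summary describes the temporal model only in outline. The sketch below is a rough, illustrative take on the core mechanism: reconstructing one target patch from overlapping patches drawn from neighboring frames with an off-the-shelf LASSO solver, then scoring saliency from the reconstruction error and the sparsity of the code. The function name patch_saliency, the alpha weight, and the omission of the paper's trajectory-smoothing term and local trajectory contrast are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: sparse-reconstruction saliency for a single patch.
# Assumes a plain LASSO objective; the paper's full model also includes
# a smoothing term for coherent trajectories and local trajectory contrast.
import numpy as np
from sklearn.linear_model import Lasso


def patch_saliency(target, neighbor_patches, alpha=0.05):
    """Score the motion saliency of one patch.

    target: (d,) flattened patch from the current frame.
    neighbor_patches: (d, n) matrix whose columns are flattened
        overlapping patches from neighboring frames (the dictionary).
    Returns (saliency, coefficients).
    """
    # Solve min ||target - D @ w||^2 + alpha * ||w||_1 (up to sklearn's scaling).
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(neighbor_patches, target)
    coef = model.coef_  # nonzero entries indicate which patches matched

    residual = target - neighbor_patches @ coef
    recon_error = np.sum(residual ** 2)   # high when motion is hard to explain
    sparsity = np.sum(np.abs(coef))       # L1 complexity of the code
    return recon_error + alpha * sparsity, coef


# Toy usage: an 8x8 patch (d = 64) against 50 candidate neighbor patches.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 50))
y = D[:, 3] + 0.01 * rng.standard_normal(64)  # target resembles atom 3
s, w = patch_saliency(y, D)
print(f"saliency={s:.4f}, active atoms={np.flatnonzero(np.abs(w) > 1e-6)}")
```

A well-matched patch yields a sparse code and a small residual, hence low saliency; patches whose motion the neighbors cannot explain score high, which is the intuition the summary attributes to the temporal model.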