Gaze prediction based on long short-term memory convolution with associated features of video frames

Bibliographic Details
Main Authors: Xiao, Limei, Zhu, Zizhong, Liu, Hao, Li, Ce, Fu, Wenhao
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access:https://hdl.handle.net/10356/172061
Institution: Nanyang Technological University
Description
Summary: Gaze prediction is a key issue in visual perception research. It can be used to infer important regions in videos, reducing the amount of computation during learning and inference in various analysis tasks. Vanilla methods for dynamic video are unable to extract valid features, and the motion information among dynamic video frames is ignored, which leads to poor prediction results. We propose a gaze prediction method based on LSTM convolution with associated features of video frames (LSTM-CVFAF). First, by adding learnable central prior knowledge, the proposed method effectively and accurately extracts the spatial information of each frame. Second, an LSTM is deployed to obtain temporal motion gaze features. Finally, the spatial and temporal motion information is fused to generate the gaze prediction maps for the dynamic video. Compared with state-of-the-art models on the DHF1K dataset, the CC, AUC-J, sAUC, and NSS scores increase by 5.1%, 0.6%, 38.2%, and 0.5%, respectively.
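The summary describes a three-stage pipeline: per-frame spatial encoding with a learnable central prior, a convolutional LSTM that captures temporal motion, and a fusion step that emits per-frame gaze maps. The sketch below is a minimal PyTorch illustration of that general structure, not the authors' implementation; the backbone, the ConvLSTMCell, and names such as GazePredictor and center_prior are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (after Shi et al., 2015)."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # One convolution computes all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update cell memory
        h = o * torch.tanh(c)           # new hidden state
        return h, c

class GazePredictor(nn.Module):
    """Hypothetical stand-in for the LSTM-CVFAF pipeline, not the paper's code."""
    def __init__(self, feat_ch=32, hid_ch=32, map_size=(64, 64)):
        super().__init__()
        # Per-frame spatial encoder (stand-in for the paper's CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Learnable central prior: a trainable bias map added to the features,
        # mimicking the "learnable central prior knowledge" in the abstract.
        self.center_prior = nn.Parameter(torch.zeros(1, feat_ch, *map_size))
        self.lstm = ConvLSTMCell(feat_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 1)  # fuse into a single gaze map per frame
        self.hid_ch = hid_ch

    def forward(self, frames):
        # frames: (batch, time, 3, H, W), with H/W matching map_size here.
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        maps = []
        for k in range(t):
            feat = self.encoder(frames[:, k]) + self.center_prior  # spatial + prior
            h, c = self.lstm(feat, (h, c))                          # temporal motion
            maps.append(torch.sigmoid(self.head(h)))                # per-frame gaze map
        return torch.stack(maps, dim=1)  # (batch, time, 1, H, W)

video = torch.randn(2, 8, 3, 64, 64)   # toy input: 2 clips of 8 frames
pred = GazePredictor()(video)
print(pred.shape)                       # torch.Size([2, 8, 1, 64, 64])
```

In this sketch the central prior is simply an additive trainable map, which is one plausible reading of "learnable central prior knowledge"; the paper may realize it differently (e.g., as learned Gaussian maps).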