Efficient video object co-localization with co-saliency activated tracklets

Bibliographic Details
Main Authors: Jerripothula, Koteswar Rao; Cai, Jianfei; Yuan, Junsong
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/142175
Institution: Nanyang Technological University
Description
Summary: Video object co-localization is the task of jointly localizing common visual objects across videos. Due to the large variations both across the videos and within each video, it is quite challenging to identify and track the common objects jointly. Unlike previous joint frameworks that use a large number of bounding box proposals to tackle the problem, we propose to leverage co-saliency activated tracklets to address it efficiently. To highlight the common object regions, we first explore inter-video commonness, intra-video commonness, and motion saliency to generate co-saliency maps for a small number of key frames selected at regular intervals. Object proposals with high objectness and co-saliency scores in those frames are tracked across each interval to build tracklets. Finally, the best tube for a video is obtained by selecting the optimal tracklet from each interval under confidence and smoothness constraints. Experimental results on the benchmark YouTube-Objects dataset show that the proposed method outperforms state-of-the-art methods in terms of accuracy and speed under both weakly supervised and unsupervised settings. Moreover, noticing that the existing benchmark dataset lacks sufficient annotations for object localization (only one annotated frame per video), we further annotate more than 15k frames of the YouTube videos and develop a new benchmark dataset for video co-localization.
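The final tube-construction step described in the summary amounts to choosing one tracklet per interval so that the chain balances per-tracklet confidence against smoothness between consecutive tracklets. The sketch below is not the authors' implementation; the tracklet fields (a confidence score and first/last bounding boxes) and the IoU-based smoothness term are assumptions made purely for illustration of this kind of Viterbi-style selection.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def select_tube(intervals, smooth_weight=1.0):
    """Pick one tracklet per interval by dynamic programming.

    `intervals` is a list of intervals; each interval is a list of candidate
    tracklets of the form {"confidence": float, "first_box": box, "last_box": box}.
    Returns the index of the chosen tracklet for every interval.
    """
    n = len(intervals)
    # best[i][k]: best accumulated score ending with tracklet k in interval i
    best = [[t["confidence"] for t in intervals[0]]]
    back = [[-1] * len(intervals[0])]
    for i in range(1, n):
        scores, pointers = [], []
        for cur in intervals[i]:
            # Score of linking each previous tracklet to the current one:
            # accumulated score plus an overlap-based smoothness reward.
            cand = [best[i - 1][j]
                    + smooth_weight * iou(prev["last_box"], cur["first_box"])
                    for j, prev in enumerate(intervals[i - 1])]
            j_best = max(range(len(cand)), key=cand.__getitem__)
            scores.append(cur["confidence"] + cand[j_best])
            pointers.append(j_best)
        best.append(scores)
        back.append(pointers)
    # Trace back the optimal chain of tracklets.
    k = max(range(len(best[-1])), key=best[-1].__getitem__)
    chosen = [k]
    for i in range(n - 1, 0, -1):
        k = back[i][k]
        chosen.append(k)
    return chosen[::-1]
```

In this toy formulation the confidence term stands in for the objectness and co-saliency evidence attached to each tracklet, while the IoU between the last box of one tracklet and the first box of the next plays the role of the smoothness constraint; any other pairwise consistency measure could be substituted without changing the selection procedure.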