Model uncertainty guides visual object tracking
Model object trackers largely rely on the online learning of a discriminative classifier from potentially diverse sample frames. However, noisy or insufficient amounts of samples can deteriorate the classifiers' performance and cause tracking drift. Furthermore, alterations such as occlusion and blurring can cause the target to be lost. In this paper, we make several improvements aimed at tackling uncertainty and improving robustness in object tracking. Our first and most important contribution is to propose a sampling method for the online learning of object trackers based on uncertainty adjustment: our method effectively selects representative sample frames to feed the discriminative branch of the tracker, while filtering out noise samples. Furthermore, to improve the robustness of the tracker to various challenging scenarios, we propose a novel data augmentation procedure, together with a specific improved backbone architecture. All our improvements fit together in one model, which we refer to as the Uncertainty Adjusted Tracker (UATracker), and can be trained in a joint and end-to-end fashion. Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that our UATracker outperforms state-of-the-art real-time trackers by significant margins.
Main Authors: ZHOU, Lijun; LEDENT, Antoine; HU, Qintao; LIU, Ting; ZHANG, Jianlin; KLOFT, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Object Tracking; Computer Vision; Machine Learning; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/7204
https://ink.library.smu.edu.sg/context/sis_research/article/8207/viewcontent/16473_Article_Text_19967_1_2_20210518.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-8207
record_format: dspace
spelling: sg-smu-ink.sis_research-8207 2022-08-04T08:47:40Z Model uncertainty guides visual object tracking ZHOU, Lijun; LEDENT, Antoine; HU, Qintao; LIU, Ting; ZHANG, Jianlin; KLOFT, Marius. Model object trackers largely rely on the online learning of a discriminative classifier from potentially diverse sample frames. However, noisy or insufficient amounts of samples can deteriorate the classifiers' performance and cause tracking drift. Furthermore, alterations such as occlusion and blurring can cause the target to be lost. In this paper, we make several improvements aimed at tackling uncertainty and improving robustness in object tracking. Our first and most important contribution is to propose a sampling method for the online learning of object trackers based on uncertainty adjustment: our method effectively selects representative sample frames to feed the discriminative branch of the tracker, while filtering out noise samples. Furthermore, to improve the robustness of the tracker to various challenging scenarios, we propose a novel data augmentation procedure, together with a specific improved backbone architecture. All our improvements fit together in one model, which we refer to as the Uncertainty Adjusted Tracker (UATracker), and can be trained in a joint and end-to-end fashion. Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that our UATracker outperforms state-of-the-art real-time trackers by significant margins. 2021-02-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7204 https://ink.library.smu.edu.sg/context/sis_research/article/8207/viewcontent/16473_Article_Text_19967_1_2_20210518.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Object Tracking; Computer Vision; Machine Learning; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Object Tracking; Computer Vision; Machine Learning; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing
description: Model object trackers largely rely on the online learning of a discriminative classifier from potentially diverse sample frames. However, noisy or insufficient amounts of samples can deteriorate the classifiers' performance and cause tracking drift. Furthermore, alterations such as occlusion and blurring can cause the target to be lost. In this paper, we make several improvements aimed at tackling uncertainty and improving robustness in object tracking. Our first and most important contribution is to propose a sampling method for the online learning of object trackers based on uncertainty adjustment: our method effectively selects representative sample frames to feed the discriminative branch of the tracker, while filtering out noise samples. Furthermore, to improve the robustness of the tracker to various challenging scenarios, we propose a novel data augmentation procedure, together with a specific improved backbone architecture. All our improvements fit together in one model, which we refer to as the Uncertainty Adjusted Tracker (UATracker), and can be trained in a joint and end-to-end fashion. Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that our UATracker outperforms state-of-the-art real-time trackers by significant margins.
format: text
author: ZHOU, Lijun; LEDENT, Antoine; HU, Qintao; LIU, Ting; ZHANG, Jianlin; KLOFT, Marius
author_sort: ZHOU, Lijun
title: Model uncertainty guides visual object tracking
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2021
url: https://ink.library.smu.edu.sg/sis_research/7204
https://ink.library.smu.edu.sg/context/sis_research/article/8207/viewcontent/16473_Article_Text_19967_1_2_20210518.pdf
_version_: 1770576269455392768
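The central idea described in the abstract, gating the tracker's online classifier updates by how uncertain the model is about each candidate frame, can be illustrated with a short sketch. The code below is not the UATracker algorithm from the paper; it is a minimal, hypothetical example of uncertainty-gated sample selection, using a peak-to-sidelobe-style statistic on the classifier's response map as the uncertainty proxy. All function names and the gating threshold are invented for illustration.

```python
import numpy as np


def response_uncertainty(score_map: np.ndarray) -> float:
    """Map a classifier response map to a proxy uncertainty in (0, 1].
    A sharp, isolated peak (confident localisation) gives a value near 0;
    a flat or noisy map (ambiguous target, e.g. under occlusion or blur)
    gives a value closer to 1."""
    peak = float(score_map.max())
    mean = float(score_map.mean())
    spread = float(score_map.std()) + 1e-8
    return 1.0 / (1.0 + (peak - mean) / spread)


def select_training_samples(frames, score_maps, max_uncertainty=0.15):
    """Keep only frames whose uncertainty is below the (hypothetical)
    threshold, and weight the survivors so that more confident frames
    contribute more to the online classifier update."""
    kept, weights = [], []
    for frame, score_map in zip(frames, score_maps):
        u = response_uncertainty(score_map)
        if u <= max_uncertainty:
            kept.append(frame)
            weights.append(1.0 - u)
    if not weights:
        return kept, []
    total = sum(weights)
    return kept, [w / total for w in weights]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A sharply peaked response map (confident frame) versus a nearly flat,
    # noise-only map (ambiguous frame).
    sharp = rng.normal(0.0, 0.05, (19, 19))
    sharp[9, 9] = 1.0
    flat = rng.normal(0.0, 0.05, (19, 19))
    kept, weights = select_training_samples(["frame_a", "frame_b"], [sharp, flat])
    print(kept, weights)  # typically only "frame_a" passes the uncertainty gate
```

In this sketch, a sharply peaked response map yields low uncertainty and a large update weight, while a flat map, as often produced by occlusion or blur, is filtered out before it can corrupt the online classifier; the paper's actual uncertainty-adjustment mechanism is more involved than this proxy.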