Multi-modality fusion in multiple object tracking
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173003
Institution: Nanyang Technological University
Summary: Tracking multiple individuals in unconstrained videos is difficult, especially in crowded scenes or when people look visually similar. Existing tracking methods rely heavily on sophisticated detectors and task-specific data association techniques, but they largely neglect multi-modality information in human tracking, even though integrating such information holds substantial potential for improving tracking performance. To address this gap, we leverage multi-modality information from both a single sensor and multiple sensors for object tracking. For single-sensor modality fusion, we introduce a shared network combined with a cascaded data association method designed for multi-object tracking. For multi-sensor modality fusion, we identify that fusion should be grounded in extracting congruent semantic information from the different modalities, an insight that leads to our proposed RFID-assisted multiple object tracking method. Our experimental results confirm that our modality fusion tracking approach surpasses baseline methods in tracking performance, particularly in challenging scenarios, where it remains consistently robust and accurate.
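
The record itself does not reproduce any implementation details, so the following is only a minimal illustrative sketch of what a cascaded data association step can look like, in the spirit of confidence-tiered cascades such as ByteTrack, and not the thesis's actual method. Detections are split into high- and low-confidence tiers, tracks are matched to the high-confidence tier first via Hungarian assignment on an IoU cost, and tracks still unmatched get a second pass against the low-confidence tier. All function names, thresholds, and the IoU-only cost are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(tracks, dets, min_iou):
    """One Hungarian pass on an IoU cost matrix.
    Returns matched (track, det) index pairs plus leftover indices."""
    if not tracks or not dets:
        return [], list(range(len(tracks))), list(range(len(dets)))
    cost = np.array([[1.0 - iou(t, d) for d in dets] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - min_iou]
    matched_t = {r for r, _ in pairs}
    matched_d = {c for _, c in pairs}
    return (pairs,
            [i for i in range(len(tracks)) if i not in matched_t],
            [j for j in range(len(dets)) if j not in matched_d])

def cascaded_associate(track_boxes, det_boxes, det_scores, hi=0.6, min_iou=0.3):
    """Stage 1: match tracks to high-confidence detections.
    Stage 2: give still-unmatched tracks a second pass against
    the low-confidence detections."""
    hi_idx = [i for i, s in enumerate(det_scores) if s >= hi]
    lo_idx = [i for i, s in enumerate(det_scores) if s < hi]
    pairs1, left_t, _ = match(track_boxes, [det_boxes[i] for i in hi_idx], min_iou)
    matches = [(t, hi_idx[d]) for t, d in pairs1]
    pairs2, _, _ = match([track_boxes[t] for t in left_t],
                         [det_boxes[i] for i in lo_idx], min_iou)
    matches += [(left_t[t], lo_idx[d]) for t, d in pairs2]
    return matches
```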
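Similarly, the abstract only states that multi-sensor fusion should align congruent semantic information across modalities. One common concrete instance, offered here purely as an assumed illustration rather than the proposed RFID-assisted method, is to project visual tracks and coarse RFID tag position estimates into a shared world frame and bind tag identities to tracks by proximity-gated assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_rfid_with_tracks(track_centroids, rfid_positions, rfid_ids, max_dist=1.5):
    """Assign RFID tag identities to visual tracks by spatial proximity.

    track_centroids : (N, 2) array of track positions in world coordinates (metres)
    rfid_positions  : (M, 2) array of coarse tag position estimates
    rfid_ids        : list of M tag identifiers
    Returns {track_index: tag_id} for pairs closer than max_dist.
    """
    if len(track_centroids) == 0 or len(rfid_positions) == 0:
        return {}
    # Pairwise Euclidean distances between every track and every tag estimate.
    diff = track_centroids[:, None, :] - rfid_positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    rows, cols = linear_sum_assignment(dist)
    return {int(r): rfid_ids[c] for r, c in zip(rows, cols) if dist[r, c] <= max_dist}

# Example: one tag reading near the first of two tracks.
tracks = np.array([[1.0, 2.0], [4.0, 0.5]])
tags = np.array([[1.2, 2.1]])
print(fuse_rfid_with_tracks(tracks, tags, ["tag-07"]))  # -> {0: 'tag-07'}
```

The distance gate keeps a tag identity from being forced onto a far-away track when the tag's owner is not currently visible; the actual thesis method presumably uses a richer notion of semantic congruence than raw spatial distance.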