Training-free attentive-patch selection for visual place recognition
| Field | Value |
| --- | --- |
| Main Authors | |
| Other Authors | |
| Format | Conference or Workshop Item |
| Language | English |
| Published | 2024 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/178533 |
| Institution | Nanyang Technological University |
Summary: Visual Place Recognition (VPR) utilizing patch descriptors from Convolutional Neural Networks (CNNs) has shown impressive performance in recent years. Existing works either perform exhaustive matching of all patch descriptors or employ complex networks to select good candidate patches for further geometric verification. In this work, we develop a novel two-step training-free patch selection method that is fast while remaining robust to large occlusions and extreme viewpoint variations. In the first step, a self-attention mechanism is used to select sparse, evenly distributed, discriminative patches in the query image. Next, a novel spatial-matching method is used to rapidly select corresponding patches with highly similar appearance between the query and each reference image. The proposed method is inspired by how humans perform place recognition: first identifying prominent regions in the query image, then relying on back-and-forth visual inspection of the query and reference images to attentively identify similar regions while ignoring dissimilar ones. Extensive experimental results show that our proposed method outperforms state-of-the-art (SOTA) methods in both place recognition precision and runtime under various challenging conditions.
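To make the abstract's two-step pipeline concrete, here is a minimal sketch in Python/NumPy. It assumes L2-normalised CNN patch descriptors laid out on an H×W feature grid. The self-attention saliency score, the grid-based spreading heuristic, the mutual-nearest-neighbour rule standing in for the paper's spatial-matching step, and the parameters `top_k`, `grid`, and `sim_thresh` are all illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: the saliency score, spreading heuristic, and
# matching rule below are assumptions, not the paper's actual algorithm.
import numpy as np

def select_attentive_patches(features, top_k=32, grid=4):
    """Step 1 (sketch): self-attention saliency + grid-based spatial spread.

    features: (H, W, D) array of L2-normalised CNN patch descriptors.
    Returns flat indices of the selected patches.
    """
    H, W, D = features.shape
    flat = features.reshape(-1, D)
    # Self-attention saliency: total similarity of each patch to all others.
    scores = (flat @ flat.T).sum(axis=1)
    # Even spatial distribution (assumed heuristic): best patch per grid cell.
    keep = []
    for gy in range(grid):
        for gx in range(grid):
            mask = np.zeros((H, W), dtype=bool)
            mask[gy * H // grid:(gy + 1) * H // grid,
                 gx * W // grid:(gx + 1) * W // grid] = True
            idx = np.flatnonzero(mask.ravel())
            keep.append(idx[np.argmax(scores[idx])])
    # Fill any remaining budget with globally top-scoring patches.
    chosen = set(keep)
    rest = [i for i in np.argsort(-scores) if i not in chosen]
    return np.asarray((keep + rest)[:top_k])

def spatial_match(query_desc, ref_desc, sim_thresh=0.5):
    """Step 2 (sketch): mutual nearest-neighbour matching stands in for the
    paper's spatial-matching method. Returns (query, reference) index pairs."""
    sim = query_desc @ ref_desc.T            # cosine similarity (unit norm)
    q2r = sim.argmax(axis=1)                 # best reference for each query
    r2q = sim.argmax(axis=0)                 # best query for each reference
    return [(q, int(q2r[q])) for q in range(sim.shape[0])
            if r2q[q2r[q]] == q and sim[q, q2r[q]] >= sim_thresh]

# Toy usage with random unit-norm "descriptors" on a 14x14 feature grid.
rng = np.random.default_rng(0)
def toy_feats():
    f = rng.standard_normal((14, 14, 128))
    return f / np.linalg.norm(f, axis=-1, keepdims=True)

q, r = toy_feats(), toy_feats()
q_idx = select_attentive_patches(q)
pairs = spatial_match(q.reshape(-1, 128)[q_idx], r.reshape(-1, 128),
                      sim_thresh=0.0)
print(f"selected {len(q_idx)} patches, {len(pairs)} mutual matches")
```

A full system along these lines would then score each reference image by its surviving patch pairs, optionally followed by geometric verification as the abstract mentions for prior work.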