Accelerating learned descriptor generation for visual localization

Visual SLAM systems use traditional feature extractors to retrieve features, each a pair consisting of a keypoint and a descriptor, from images. These features can then be matched across frames to estimate the camera pose. However, traditional feature extractors are surpassed by newer deep learning-based feature extractors in the presence of imaging noise, illumination changes, or viewpoint changes. Such AI models may nevertheless suffer performance issues when deployed to embedded devices, which prioritise low power consumption. This report investigates the potential of deep learning accelerator libraries to accelerate feature extractor models for use in visual SLAM systems, particularly on embedded devices. TensorRT is one such library that can help achieve a significant speedup over traditional feature extraction methods.
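The extract-match-estimate pipeline the abstract describes can be sketched with OpenCV. The following is an illustrative sketch only, not code from the project: ORB stands in for the feature extractor, and the camera intrinsics K, image filenames, and parameter values are placeholders.

# Illustrative sketch of the feature-matching pipeline described above,
# using OpenCV's ORB as a stand-in traditional extractor. The intrinsic
# matrix K and the image paths are placeholders, not values from the report.
import cv2
import numpy as np

K = np.array([[718.0,   0.0, 607.0],
              [  0.0, 718.0, 185.0],
              [  0.0,   0.0,   1.0]])  # hypothetical camera intrinsics

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Each feature is a (keypoint, descriptor) pair.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match binary descriptors with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Recover the relative camera pose from the matched keypoints.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation:\n", R, "\nTranslation (up to scale):\n", t)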

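The TensorRT acceleration the abstract mentions typically proceeds by exporting the learned extractor to ONNX and compiling it into a serialized engine. Below is a minimal sketch using the TensorRT Python API; the filename extractor.onnx and the FP16 setting are assumptions for illustration, not details taken from the report.

# Minimal sketch of building a TensorRT engine from an ONNX export of a
# learned feature extractor. "extractor.onnx" is a hypothetical filename;
# FP16 mode is an assumed optimization, not one confirmed by the report.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("extractor.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # half precision suits embedded GPUs

# Serialize the optimized engine so it can be loaded at runtime.
engine_bytes = builder.build_serialized_network(network, config)
with open("extractor.engine", "wb") as f:
    f.write(engine_bytes)

The same build can also be done from the command line with the bundled trtexec tool, e.g. trtexec --onnx=extractor.onnx --saveEngine=extractor.engine --fp16.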

Bibliographic Details
Main Author: Liu, Woon Kit
Other Authors: Lam Siew Kei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science
Online Access:https://hdl.handle.net/10356/175279
Institution: Nanyang Technological University
Department: School of Computer Science and Engineering
Degree: Bachelor's degree
Project code: SCSE23-0143
Citation: Liu, W. K. (2024). Accelerating learned descriptor generation for visual localization. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175279