Visual place recognition for autonomous robots using deep learning

Visual place recognition has become a challenging and attractive field in computer vision and robotics, because it must recognize the same natural scene even when its appearance varies. In autonomous vehicles and unmanned aerial vehicles, visual place recognition helps to detect actual destinations and locations. To match features of the same scene captured from different viewpoints, we propose a global feature matching method, distinct from the currently popular local feature matching methods. We build attention maps along two dimensions, channel and spatial, to refine the computation of the residuals in VLAD, and obtain better results than the original NetVLAD model. In addition, self-driving vehicles need models that are accurate yet compact and lightweight in order to run on board. We therefore propose a triplet distillation method that uses the weakly supervised triplet ranking loss as the criterion for global feature matching. The student network learns from the teacher network in three ways so as to compress the large teacher model: we combine two loss functions, the first of which uses the teacher model's outputs to pull the query towards positive samples and push it away from negative samples, while the second fits the student's descriptor vectors directly to the teacher's. Combining these two losses makes our method outperform the original lightweight model. This distillation scheme for global feature matching reduces the gap between the student model's results and the teacher model's, and can also be used to enhance generalization ability.
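The two ideas in the abstract can be pictured with short sketches. Below is a minimal PyTorch sketch of an attention-refined VLAD aggregation layer of the kind described: channel and spatial attention maps re-weight the convolutional features before the soft-assigned residuals to the cluster centroids are accumulated. The module layout, the SE-style channel gate, the 1x1-convolution spatial gate, and all names and hyper-parameters are illustrative assumptions, not the thesis implementation.

    # Sketch only: attention-refined VLAD aggregation (assumed design, not the thesis code)
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveVLAD(nn.Module):
        def __init__(self, dim=512, num_clusters=64):
            super().__init__()
            self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)   # soft assignment, as in NetVLAD
            self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
            # Channel attention: squeeze-and-excitation style gate (assumption)
            self.channel_gate = nn.Sequential(
                nn.Linear(dim, dim // 16), nn.ReLU(inplace=True),
                nn.Linear(dim // 16, dim), nn.Sigmoid())
            # Spatial attention: single 1x1 convolution followed by a sigmoid (assumption)
            self.spatial_gate = nn.Sequential(nn.Conv2d(dim, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, x):                                   # x: (N, D, H, W) conv features
            n, d, h, w = x.shape
            ch = self.channel_gate(x.mean(dim=(2, 3)))          # (N, D) channel weights
            sp = self.spatial_gate(x)                           # (N, 1, H, W) spatial weights
            x_att = x * ch.view(n, d, 1, 1) * sp                # attention-refined features
            a = F.softmax(self.assign(x), dim=1)                # (N, K, H, W) soft assignment
            x_flat = x_att.view(n, d, -1)                       # (N, D, HW)
            a_flat = a.view(n, -1, h * w)                       # (N, K, HW)
            # Assignment-weighted residuals to each centroid: (N, K, D)
            vlad = torch.einsum('nkp,ndp->nkd', a_flat, x_flat) \
                   - a_flat.sum(dim=2, keepdim=True) * self.centroids.unsqueeze(0)
            vlad = F.normalize(vlad, dim=2)                     # intra-normalization
            return F.normalize(vlad.flatten(1), dim=1)          # (N, K*D) global descriptor

The distillation objective can likewise be sketched as a weighted sum of a weakly supervised triplet ranking term on the student's descriptors and an imitation term that regresses the student's query descriptor onto the teacher's. In this sketch the positive and negative descriptors are assumed to have been selected with the help of the teacher's outputs, and the margin and the weights alpha and beta are placeholder values.

    # Sketch only: combined triplet-ranking + imitation distillation loss (assumed form)
    def distillation_loss(student_q, student_pos, student_neg,
                          teacher_q, margin=0.1, alpha=1.0, beta=1.0):
        # (a) ranking term: pull the query towards the positive, push it from the negative
        d_pos = F.pairwise_distance(student_q, student_pos)
        d_neg = F.pairwise_distance(student_q, student_neg)
        ranking = F.relu(d_pos - d_neg + margin).mean()
        # (b) imitation term: fit the student's descriptor to the teacher's
        # (assumes student and teacher descriptors have the same dimension;
        #  otherwise a projection layer would be needed)
        imitation = F.mse_loss(student_q, teacher_q)
        return alpha * ranking + beta * imitation

In use, the descriptors would come from a small student model such as the AttentiveVLAD sketch above and from a larger, frozen teacher model, with both sets of descriptors L2-normalized before the loss is computed.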

Bibliographic Details
Main Author: Huang, Yifeng
Other Authors: Wang Dan Wei
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2021
Subjects: Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Online Access: https://hdl.handle.net/10356/153856
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-153856
record_format dspace
last_modified 2023-07-04T16:43:06Z
school School of Electrical and Electronic Engineering
supervisor_contact EDWWANG@ntu.edu.sg
degree Master of Science (Computer Control and Automation)
date_accessioned 2021-12-13T03:39:47Z
date_issued 2021
citation Huang, Y. (2021). Visual place recognition for autonomous robots using deep learning. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153856
file_format application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering::Computer hardware, software and systems
author2 Wang Dan Wei
format Thesis-Master by Coursework
author Huang, Yifeng
title Visual place recognition for autonomous robots using deep learning
publisher Nanyang Technological University
publishDate 2021
url https://hdl.handle.net/10356/153856
_version_ 1772829041386061824