LiDAR relocalization on edge devices
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/148615
Institution: Nanyang Technological University
Summary: Simultaneous localization and mapping (SLAM), with the aid of cheap cameras, has many applications today. In niche situations where cameras are inappropriate, such as for privacy or security reasons, LiDARs have stepped in to fill the gap. However, much of the literature on LiDAR SLAM has focused on autonomous vehicles with powerful spindle-type LiDAR sensors. There is potential for LiDAR SLAM to be incorporated into many other areas, such as home electronics, mobile devices, and even wearable technology. This paper explores the possibility of extending LiDAR relocalization methods to edge devices, on the assumption that edge devices have neither powerful LiDAR sensors nor substantial processing capabilities.
This is accomplished through two novel techniques. First, point clouds captured by a Velodyne HDL-64E are down-sampled using a roughness score instead of the industry practice of random down-sampling. This allows up to 90% of points to be removed while retaining the most salient points, saving both downstream computation and storage costs. Second, ground-truth overlap percentages are calculated with the polygon approximation method introduced in this paper, which approximates the area of overlap between two circular sectors at any angle. This allows ground-truth overlaps to be computed for LiDAR scans with a field of view (FOV) of less than 360 degrees.
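The roughness-based down-sampling step can be sketched as follows. The paper's exact roughness score is not given in this summary, so a simple stand-in is assumed: each point's distance from the centroid of its k nearest neighbours, with the roughest fraction of points kept.

```python
import numpy as np

def roughness_downsample(points, keep_ratio=0.1, k=10):
    """Keep the keep_ratio fraction of points with the highest roughness.

    Roughness here is a stand-in score -- the distance from each point to
    the centroid of its k nearest neighbours; the paper's actual score
    may differ.
    """
    # pairwise distances (fine for demo-sized clouds; use a KD-tree at scale)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]  # k nearest neighbours, excluding self
    centroids = points[nn].mean(axis=1)
    roughness = np.linalg.norm(points - centroids, axis=1)
    n_keep = max(1, int(len(points) * keep_ratio))
    return points[np.argsort(roughness)[-n_keep:]]

rng = np.random.default_rng(0)
cloud = rng.standard_normal((300, 3))
sparse = roughness_downsample(cloud)  # keeps the 30 roughest points
```

With `keep_ratio=0.1` this matches the 90% removal rate quoted above: flat, redundant regions are discarded first while geometrically distinctive points survive.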
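The polygon approximation itself is not detailed in this summary, but the quantity it computes, the overlap area of two circular sectors, can be sanity-checked with a brute-force Monte Carlo estimate (a hypothetical stand-in, not the paper's method):

```python
import numpy as np

def sector_overlap_mc(c1, a1, c2, a2, fov, radius, n=200_000, seed=0):
    """Monte Carlo estimate of the overlap area of two circular sectors.

    c1/c2: sector apexes (x, y); a1/a2: heading angles (rad);
    fov: field of view (rad); radius: sensor range.
    """
    rng = np.random.default_rng(seed)
    # sample uniformly over a bounding box containing both sectors
    lo = np.minimum(c1, c2) - radius
    hi = np.maximum(c1, c2) + radius
    pts = rng.uniform(lo, hi, size=(n, 2))

    def in_sector(p, c, heading):
        v = p - np.asarray(c)
        r = np.linalg.norm(v, axis=1)
        ang = np.arctan2(v[:, 1], v[:, 0])
        diff = np.angle(np.exp(1j * (ang - heading)))  # wrap to [-pi, pi]
        return (r <= radius) & (np.abs(diff) <= fov / 2)

    inside = in_sector(pts, c1, a1) & in_sector(pts, c2, a2)
    return inside.mean() * np.prod(hi - lo)

# two identical sectors overlap completely: exact area = (fov / 2) * r^2
area = sector_overlap_mc((0, 0), 0.0, (0, 0), 0.0, fov=np.pi / 2, radius=10)
```

A polygon approximation replaces this sampling with an exact area computation over a sector discretized into a polygon, which is far cheaper at the scale of a full dataset.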
The performance of OverlapNet on the KITTI odometry dataset using the full original data is compared against data truncated through down-sampling and FOV restriction. The results show that even with an information reduction rate of approximately 96.67%, the model still performs well, with a slight increase in accuracy of 3.28% and a slight drop in F1 score of 1.8%. This demonstrates that a model trained for LiDAR relocalization on a car can be adapted to devices with limited hardware.
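The ~96.67% figure is consistent with the two truncations compounding multiplicatively. The FOV used is not stated in this summary; assuming 90% down-sampling and a restriction from 360 to a hypothetical 120 degrees:

```python
# Hypothetical decomposition of the ~96.67% information reduction:
# 90% of points removed by roughness down-sampling, and the FOV
# restricted from 360 deg to an assumed 120 deg.
downsample_keep = 1.0 - 0.90   # fraction of points kept after down-sampling
fov_keep = 120 / 360           # fraction of the scan kept after FOV restriction
reduction = 1.0 - downsample_keep * fov_keep
print(f"{reduction:.2%}")
```

Because the factors multiply, the retained fraction is only about 3.33% of the original scan.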