3D spatial perception for underwater robots using point cloud data from orthogonal multibeam sonars fusion
Format: Thesis - Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/172750
Institution: Nanyang Technological University
Summary: Enhancing 3D spatial perception for underwater robots is crucial to improving their capability to carry out complex operations. However, this is a challenging problem due to severely limited visibility for optical sensors and sparse spatial data from acoustic imaging sensors underwater. To address these problems, Orthogonal Multibeam Sonar Fusion (OMSF) was previously developed in the literature, producing 3D point cloud data (PCD) by fusing measurements from a pair of multibeam forward-looking sonars (MFLS) oriented orthogonally to each other. Orthogonal orientation refers to a configuration where the image planes of a 'horizontal' and a 'vertical' sensor are perpendicular (90 degrees) to each other. A forward-oriented MFLS creates 2D images of the environment based on the intensities of multiple beamformed acoustic signals reflected from objects. While research on OMSF for environment mapping exists, no prior controlled testing to determine OMSF accuracy and sensitivity to operational and environmental factors has been published. The use of this 3D data for perception tasks beyond mapping and object scanning has also not been explored. This thesis presents several works to address these research gaps.

First, controlled simulation and pool tests are performed to determine OMSF sensitivity and accuracy with respect to sensor frequency, object scale, object shape, and relative sensor rotation. Test results show the method has on average 38.09% higher accuracy using high-frequency sensors on larger-scale objects, and is up to 43% more accurate for objects with solid surfaces than for hollow frames. This work also shows that, with proper compensation, OMSF accuracy is robust against relative sensor rotations as long as the line of sight to the target is maintained. To the best of our knowledge, these results present the first controlled documentation of OMSF sensitivity, and confirm that the method carries over and preserves known properties of individual MFLS sensors. This work will be useful for applications with known target objects, such as automated garage docking, allowing assessment of method suitability and optimization of results.
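As a rough illustration of the orthogonal-fusion idea described above, the sketch below pairs detections from the two sonars by range agreement and converts each matched (range, azimuth, elevation) triple into a 3D point. The pairing rule, function names, and coordinate convention are assumptions made for illustration, not the thesis's published method.

```python
import numpy as np

def fuse_orthogonal_detections(horizontal_hits, vertical_hits, range_tol=0.05):
    """Illustrative OMSF-style fusion (hypothetical pairing rule).

    horizontal_hits: iterable of (range_m, azimuth_rad) from the 'horizontal' MFLS
    vertical_hits:   iterable of (range_m, elevation_rad) from the 'vertical' MFLS
    Returns an (N, 3) array of 3D points for detections whose ranges agree
    within range_tol metres.
    """
    points = []
    for r_h, azimuth in horizontal_hits:
        for r_v, elevation in vertical_hits:
            if abs(r_h - r_v) <= range_tol:
                r = 0.5 * (r_h + r_v)  # reconcile the two range estimates
                # Spherical-to-Cartesian: x forward, y starboard, z up (assumed frame)
                x = r * np.cos(elevation) * np.cos(azimuth)
                y = r * np.cos(elevation) * np.sin(azimuth)
                z = r * np.sin(elevation)
                points.append((x, y, z))
    return np.asarray(points, dtype=float).reshape(-1, 3)
```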
Next, an application scenario of automated garage docking for an autonomous underwater vehicle (AUV) was devised to assess the feasibility of OMSF for classification and pose estimation, using simulation and pool tests. First, a PCD-based classification technique was developed using OMSF PCDs as input to a binary PointNet++-based classifier trained on low-cost offline object PCD scans. To address data sparsity, an OMSF-based volumetric filtering method was developed to re-include dense points from raw sonar features into the classifier input. Pool test results show higher efficiency for low-granularity sampling, and the volumetric filtering enables classification to achieve a 25% higher success rate and 37% higher confidence compared to using inputs from raw 3D projection of sonar features.
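The volumetric filtering step can be pictured as cropping the dense raw-projection points to the volume occupied by the sparse OMSF cloud. The thesis does not spell out the exact rule here, so the axis-aligned bounding-box crop below is only an assumed stand-in:

```python
import numpy as np

def volumetric_filter(omsf_pcd, raw_projected_points, margin=0.1):
    """Re-include dense raw-projection points near the sparse OMSF cloud.

    omsf_pcd, raw_projected_points: (N, 3) arrays of 3D points.
    An expanded axis-aligned bounding box of the OMSF cloud serves as the
    admission volume (illustrative choice); raw points inside it are appended
    to the sparse cloud to form the classifier input.
    """
    lo = omsf_pcd.min(axis=0) - margin
    hi = omsf_pcd.max(axis=0) + margin
    inside = np.all((raw_projected_points >= lo) & (raw_projected_points <= hi), axis=1)
    return np.vstack([omsf_pcd, raw_projected_points[inside]])
```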
Furthermore, an MFLS-based pose estimation technique was proposed, consisting of object width normalization to address limited training data for pose regression, deterministic bounding box regression using Orthogonal Feature Matching (OFM), and image-based relative pose regression. Pool test results show that OFM bounding box regression produces a 4.28% higher mean Intersection over Union (IoU) and a 10% increase on the (>25%) IoU metric compared to methods based on standard MFLS filtering, and that pose regression using a convolutional neural network (CNN) achieves the highest success rate among the tested methods. Results also show the proposed end-to-end technique achieving <10° error, comparable to existing optical-based methods.
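The bounding-box comparison above uses the standard Intersection over Union (IoU) metric. For reference, a minimal 2D implementation over axis-aligned boxes in (x_min, y_min, x_max, y_max) form (the box format is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned 2D boxes (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```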
In conclusion, these works present novel use cases of OMSF for perception-based applications: PCD-based classification with low training cost that performs better than naive 3D projection of sonar features, and MFLS-based pose estimation that has performance comparable to existing optical-based methods while being inherently more robust in turbid waters.