Object detection for mobile robots in adverse conditions
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/177372
Institution: Nanyang Technological University
Summary: In recent years, mobile robots such as autonomous vehicles have advanced rapidly, driven by the need to reduce fatalities from severe accidents. Object detection algorithms, a crucial component of autonomous driving perception systems, are therefore receiving increasing attention. However, adverse conditions such as rainy nights can significantly impair purely vision-based object detection, increasing missed and erroneous detections, especially for distant objects. Lidar sensors, by contrast, are more resistant to rain than camera sensors. This study therefore focuses on object detection under adverse conditions, covering both vision-based 2D object detection and multi-modal 3D object detection.
The study has two main parts. First, a dataset for detection in adverse conditions was built from monitoring images captured at NTU. YOLOv7, a popular vision-based 2D object detection algorithm, was used to verify the challenge posed by weather effects, and the performance improvement after retraining on this dataset confirms its value: the retrained model performs better in adverse conditions. During this work, difficulties in detecting distant objects were also observed. Second, TransFusion, a state-of-the-art multi-modal 3D object detection network, was studied on the nuScenes dataset and enhanced with a distance-weighted loss function and temporal training strategies. In experiments, the enhanced network showed better detection performance for objects at the target distances.
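The distance-weighted loss mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the weighting form, the `alpha` and `ref_dist` parameters, and the planar-distance convention are all assumptions chosen for illustration. The idea is simply to scale each object's loss by a factor that grows with its distance from the ego sensor, so distant objects contribute more to training:

```python
import numpy as np

def distance_weighted_loss(per_object_loss, centers, alpha=0.5, ref_dist=30.0):
    """Weight each object's detection loss by its distance from the ego sensor.

    per_object_loss: (N,) array of unweighted per-object losses.
    centers: (N, 3) array of object centers in the ego frame (x, y, z).
    alpha: strength of the distance emphasis (hypothetical parameter).
    ref_dist: distance in metres at which the weight reaches 1 + alpha.
    """
    dist = np.linalg.norm(centers[:, :2], axis=1)  # planar distance to ego
    weights = 1.0 + alpha * (dist / ref_dist)      # farther objects weigh more
    return float(np.mean(weights * per_object_loss))

# An object 30 m away gets weight 1.5; one at the ego position gets weight 1.0.
losses = np.array([1.0, 1.0])
centers = np.array([[30.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(distance_weighted_loss(losses, centers))  # 1.25
```

In practice such a weight would multiply the per-box regression (and possibly classification) terms inside the detector's training loop rather than a precomputed loss vector, but the weighting principle is the same.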
Overall, the results show that retraining for adverse scenarios improves the robustness of object detection methods, and that the weighted loss function and temporal training strategies enhance detection of distant objects. The final section also suggests further improvements, such as building a larger-scale dataset of adverse scenarios, trying other training parameters, and enhancing images before input.