Multimodal data fusion for object detection under rainy conditions

In recent years, autonomous driving technology has developed rapidly, driven by the demand to reduce the number of deaths caused by serious accidents. As an important part of the autonomous driving perception pipeline, object detection algorithms have received increasing attention and impressive progress has been made. However, under rainy conditions, purely vision-based object detection methods can be severely affected, resulting in a large number of missed and wrong detections. At the same time, radar sensors are more robust to rain than camera sensors. This project therefore aims to implement multimodal object detection that combines radar and camera data under rainy weather conditions. Firstly, because public multimodal rainy datasets are scarce, we generated our own rainy dataset from the nuScenes dataset using CycleGAN together with rainy images we recorded ourselves. This rainy dataset was then used to train a Rainy Image Classifier that scores how rainy each data frame is. The data stream was then weighted according to these scores so that the data generator focuses more on the radar data for rainy frames; this generator is the Adaptive Data Generator proposed in this project. Building on the Adaptive Data Generator, we proposed the Multimodal CRF-Net and compared it with a purely vision-based approach and with CRF-Net on the rainy and non-rainy datasets. Finally, we presented and discussed the experimental results. Overall, the results show that the proposed Multimodal CRF-Net performs better than the purely vision-based method and CRF-Net on our generated rainy dataset. However, the results also have limitations and shortcomings: the overall mAP is not high, the 10 training epochs used throughout may not be enough, and the dataset may not be large enough. In the future work section we recommend several improvements, such as building a real multimodal rainy dataset, trying other training parameters, enhancing image visibility in rain, and fusing data from more sensors.
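
The core idea in the abstract, weighting the camera and radar streams by a per-frame rain score, can be illustrated with a minimal sketch. This is not the project's code: the function names (adaptive_weights, fuse_frame), the linear score-to-weight mapping, and the radar_floor default are assumptions made for illustration only, and NumPy arrays stand in for the real camera and projected radar feature channels that the project's CRF-Net-based pipeline would use.

import numpy as np

def adaptive_weights(rain_score: float, radar_floor: float = 0.2) -> tuple[float, float]:
    """Map a rain score in [0, 1] to (camera_weight, radar_weight).

    radar_floor keeps some radar contribution even in clear weather;
    both the mapping and the floor value are illustrative assumptions.
    """
    rain_score = float(np.clip(rain_score, 0.0, 1.0))
    radar_w = radar_floor + (1.0 - radar_floor) * rain_score
    camera_w = 1.0 - radar_w
    return camera_w, radar_w

def fuse_frame(camera_feat: np.ndarray, radar_feat: np.ndarray, rain_score: float) -> np.ndarray:
    """Weighted concatenation of camera and radar feature channels for one frame."""
    cam_w, rad_w = adaptive_weights(rain_score)
    return np.concatenate([cam_w * camera_feat, rad_w * radar_feat], axis=-1)

if __name__ == "__main__":
    cam = np.random.rand(8, 8, 3)   # stand-in for camera feature channels
    rad = np.random.rand(8, 8, 2)   # stand-in for projected radar channels
    for score in (0.05, 0.9):       # a clear frame vs. a heavy-rain frame
        fused = fuse_frame(cam, rad, score)
        print(f"rain score {score:.2f} -> weights {adaptive_weights(score)}, fused shape {fused.shape}")

Running the sketch shows the radar weight growing with the rain score while the fused tensor shape stays fixed, which is the behaviour the Adaptive Data Generator description implies.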

Bibliographic Details
Main Author: Liu, Ting Tao
Other Authors: Soong Boon Hee
School: School of Electrical and Electronic Engineering
Format: Final Year Project (FYP)
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Citation: Liu, T. T. (2022). Multimodal data fusion for object detection under rainy conditions. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157948
Online Access: https://hdl.handle.net/10356/157948
Institution: Nanyang Technological University