CNN based enhanced perception for mobile robots in rainy environments

Bibliographic Details
Main Author: Lan, Xi
Other Authors: Wang Dan Wei
Format: Thesis-Master by Coursework
Language:English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/149627
Institution: Nanyang Technological University
Description
Summary: Image enhancement and robot perception have been active research areas in recent years. With state-of-the-art algorithms and technologies, unmanned ground vehicles (UGVs) can cope with everyday tasks in normal environments. For example, many recent cars carry semi-autonomous driving features implemented with LiDAR or other distance-sensing devices. However, most perception tasks may fail in challenging conditions such as rainy or foggy weather. Therefore, focusing on binocular images captured under heavy rain and fog, an end-to-end disparity estimation network is proposed in this work. Although CNN-based rain removal (derain) methods are constantly emerging, most of them are trained on synthetic rain images, photographed in a variety of scenes and overlaid with idealized raindrops. In addition, the depth information collected by LiDAR can be degraded by raindrops. Hence, we composed a more authentic rainy driving binocular dataset for training. To obtain better results, training is carried out in two stages: in the first stage, the derain sub-network is trained to obtain a pretrained model; in the second stage, the entire network is trained to refine the derained images and produce the disparity map.
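
As a rough illustration of the two-stage schedule described in the abstract, the following is a minimal PyTorch-style sketch. The DerainNet and DisparityNet modules, the loss weights, and the dummy data loaders are hypothetical placeholders; the thesis's actual architecture, datasets, and losses are defined in the full text.

    import torch
    import torch.nn as nn

    # Hypothetical placeholder modules; the thesis's real derain and
    # disparity architectures are not specified in this record.
    class DerainNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 3, 3, padding=1))
        def forward(self, x):
            return x - self.body(x)      # predict a rain residual and subtract it

    class DisparityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 1, 3, padding=1))
        def forward(self, left, right):
            return self.body(torch.cat([left, right], dim=1))

    derain, disp = DerainNet(), DisparityNet()

    # Dummy batches stand in for the rainy/clean binocular training pairs.
    stage1_loader = [(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
                     for _ in range(4)]
    stage2_loader = [tuple(torch.rand(2, 3, 64, 64) for _ in range(4))
                     + (torch.rand(2, 1, 64, 64),) for _ in range(4)]

    # Stage 1: pretrain the derain sub-network alone on (rainy, clean) pairs.
    opt1 = torch.optim.Adam(derain.parameters(), lr=1e-4)
    for rainy, clean in stage1_loader:
        loss = nn.functional.l1_loss(derain(rainy), clean)
        opt1.zero_grad(); loss.backward(); opt1.step()

    # Stage 2: train the whole pipeline end to end, refining the derained
    # images while also supervising the disparity output.
    opt2 = torch.optim.Adam(list(derain.parameters()) + list(disp.parameters()), lr=1e-4)
    for rainy_l, rainy_r, clean_l, clean_r, gt_disp in stage2_loader:
        clean_hat_l, clean_hat_r = derain(rainy_l), derain(rainy_r)
        pred_disp = disp(clean_hat_l, clean_hat_r)
        loss = (nn.functional.l1_loss(clean_hat_l, clean_l)
                + nn.functional.l1_loss(clean_hat_r, clean_r)
                + nn.functional.smooth_l1_loss(pred_disp, gt_disp))
        opt2.zero_grad(); loss.backward(); opt2.step()

The key design choice mirrored here is that the derain weights obtained in stage 1 serve as initialization for stage 2, where they continue to be updated jointly with the disparity branch rather than being frozen.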