CNN based enhanced perception for mobile robots in rainy environments



Bibliographic Details
Main Author: Lan, Xi
Other Authors: Wang Dan Wei
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access: https://hdl.handle.net/10356/149627
Description
Abstract: Image enhancement and robot perception have been active research areas in recent years. With state-of-the-art algorithms and technologies, unmanned ground vehicles (UGVs) can cope with daily tasks in normal environments; for example, many recent cars carry semi-autonomous driving features implemented with LiDAR or other distance-sensing devices. However, most perception tasks may fail in challenging conditions such as rainy or foggy weather. Therefore, focusing on binocular images captured under heavy rain and fog, an end-to-end disparity estimation network is proposed in this work. Although CNN-based rain removal (derain) methods are constantly emerging, most of them are trained on synthetic rain images photographed in many different scenarios and overlaid with idealized raindrops. Moreover, the depth information collected by LiDAR can be degraded by raindrops. Hence, we compose a more authentic binocular rainy-driving dataset for training. To obtain better results, training is carried out in two stages: in the first stage, the derain part is trained to obtain a pretrained model; in the second stage, the entire network is trained to refine the derained images and produce the disparity map.
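
The abstract outlines a two-stage training scheme: pretrain the derain subnetwork, then train the whole derain-plus-disparity pipeline end to end. Below is a minimal PyTorch sketch of that scheme; the module names (DerainNet, DisparityNet), losses, data shapes, and hyperparameters are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class DerainNet(nn.Module):
    """Toy stand-in for the rain-removal subnetwork (rainy image -> derained image)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class DisparityNet(nn.Module):
    """Toy stand-in for the disparity subnetwork (left + right views -> disparity map)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, left, right):
        return self.body(torch.cat([left, right], dim=1))

derain, disparity = DerainNet(), DisparityNet()
l1 = nn.L1Loss()

def fake_batch(b=2, h=64, w=64):
    # Placeholder for the binocular rainy-driving dataset:
    # rainy stereo pair, clean stereo pair, ground-truth disparity.
    return (torch.rand(b, 3, h, w), torch.rand(b, 3, h, w),
            torch.rand(b, 3, h, w), torch.rand(b, 3, h, w),
            torch.rand(b, 1, h, w))

# Stage 1: pretrain the derain subnetwork on its own.
opt1 = torch.optim.Adam(derain.parameters(), lr=1e-4)
for step in range(10):
    rainy_l, rainy_r, clean_l, clean_r, _ = fake_batch()
    loss = l1(derain(rainy_l), clean_l) + l1(derain(rainy_r), clean_r)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: train the whole pipeline end to end, refining the derained
# images while supervising the predicted disparity map.
opt2 = torch.optim.Adam(list(derain.parameters()) + list(disparity.parameters()), lr=1e-4)
for step in range(10):
    rainy_l, rainy_r, clean_l, clean_r, gt_disp = fake_batch()
    dl, dr = derain(rainy_l), derain(rainy_r)
    pred = disparity(dl, dr)
    loss = l1(dl, clean_l) + l1(dr, clean_r) + l1(pred, gt_disp)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

The key design point mirrored here is that the stage-1 weights give the disparity branch rain-free inputs from the start of stage 2, while the joint loss still lets the derain weights adapt to whatever artifacts most hurt disparity estimation.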