AUTONOMOUS VEHICLE PERCEPTION SYSTEM FOR MIXED TRAFFIC ENVIRONMENTS IN ADVERSE WEATHER AND LOW LIGHTING CONDITIONS


Bibliographic Details
Main Author: Wibowo, Ari
Format: Dissertations
Language: Indonesia
Online Access:https://digilib.itb.ac.id/gdl/view/86650
Institution: Institut Teknologi Bandung
Description
Summary: Autonomous vehicles have great potential to improve transportation safety and efficiency. However, current autonomous vehicle perception systems remain limited in extreme weather conditions such as heavy rain, thick fog, or snowstorms. Major sensors such as cameras, lidar, and radar suffer significant performance degradation in adverse weather, which reduces visibility, produces inaccurate object detection and improper environmental classification, and endangers the safety of drivers and passengers. Deep learning object detectors in autonomous vehicles perform quite well, but still face challenges when operating in extreme weather conditions.

To address this issue, a new object detection framework called MIRSA+YOLOv7-MOD+M3CBAM is proposed, designed for traffic environments in fairly extreme weather conditions (rain, fog, night+rain). The novelty of the research lies in a framework that combines a denoising module with a detection module; a new architecture for the MIRSA denoising model, a modification of MIRNet-v2 with the addition of a self-attention (SA) layer; and a new YOLOv7-MOD architecture with the addition of a deformable convolution (DC) layer and a convolutional block attention module (CBAM). The research also produces a collection of traffic image datasets (LLD) captured during rain and low-light conditions and used for training. Data collection and annotation follow the principles applied to the KITTI dataset, making it a reference for further development.

Comparative experiments against recent methods confirm, both visually and quantitatively, the effectiveness of the proposed model, demonstrating its ability to restore images for clearer recognition. The method achieves the highest scores across all fog concentration categories, with top mAP values of 75.24%, 83.91%, and 90.74% for the heavy, medium, and light fog categories, respectively. In tests under rainy conditions and night+rain conditions with low lighting, it improves on previous methods by 3.88%, 3.74%, and 2.70%, respectively. The difference in detection accuracy between the MIRSA and MIRNet-v2 models is around 2-3% for each condition. These results reinforce that the developed model can be relied upon for autonomous vehicle perception systems in fairly extreme weather conditions and low-lighting environments.
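The abstract mentions adding a convolutional block attention module (CBAM) to YOLOv7-MOD. The dissertation's actual implementation is not reproduced here, but the general CBAM idea — sequential channel attention (pooled descriptors through a shared MLP) followed by spatial attention (channel-wise average/max maps) — can be sketched in NumPy. This is a simplified illustration only: the weight shapes, the reduction ratio, and especially the 1x1 two-tap weighting standing in for CBAM's usual 7x7 convolution are assumptions for brevity, not the author's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). Average- and max-pooled channel descriptors pass
    # through a shared two-layer MLP (w1: reduce, w2: expand), then sigmoid.
    avg = x.mean(axis=(1, 2))                              # (C,)
    mx = x.max(axis=(1, 2))                                # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))         # (C,)
    return x * att[:, None, None]

def spatial_attention(x, kernel):
    # Channel-wise average and max maps, fused here by a simple 2-tap
    # weighting (a stand-in for CBAM's 7x7 convolution), then sigmoid.
    avg = x.mean(axis=0)                                   # (H, W)
    mx = x.max(axis=0)                                     # (H, W)
    att = sigmoid(kernel[0] * avg + kernel[1] * mx)        # (H, W)
    return x * att[None, :, :]

def cbam(x, w1, w2, kernel):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2), kernel)
```

Both attention maps lie in (0, 1), so for a non-negative feature map the module only rescales activations, leaving the tensor shape unchanged — which is why it can be dropped into an existing backbone such as YOLOv7 without altering downstream layers.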