AUTONOMOUS VEHICLE PERCEPTION SYSTEM FOR MIXED TRAFFIC ENVIRONMENTS IN ADVERSE WEATHER AND LOW LIGHTING CONDITIONS
Main Author: | Wibowo, Ari |
---|---|
Format: | Dissertations |
Language: | Indonesian |
Keywords: | YOLO, self-attention, deformable convolution, CBAM, object detection, denoising, autonomous vehicles |
Institution: | Institut Teknologi Bandung |
Online Access: | https://digilib.itb.ac.id/gdl/view/86650 |
Description:
Autonomous vehicles have great potential to improve transportation safety and efficiency. However, the perception systems of current autonomous vehicles remain limited in extreme weather such as heavy rain, thick fog, or snowstorms. The main sensors, namely cameras, lidar, and radar, suffer significant performance degradation in adverse weather, which reduces visibility, makes object detection inaccurate, and leads to incorrect classification of the environment, endangering the safety of drivers and passengers. Deep learning object detection models for autonomous vehicles perform reasonably well, but still face challenges when operating in extreme weather. To address this issue, a new object detection framework called MIRSA+YOLOV7MOD+M3CBAM is proposed, designed for traffic environments in extreme weather conditions (rain, fog, night+rain).

The novelty of this research lies in three contributions: a framework that combines a denoising module with a detection module; MIRSA, a new denoising architecture that modifies MIRNet-v2 by adding a self-attention (SA) layer; and YOLOv7-MOD, a new detection architecture that extends YOLOv7 with a deformable convolution (DC) layer and a convolutional block attention module (CBAM). The research also produces a traffic image dataset (LLD) collected in rain and low-light conditions and used for training. Data collection and annotation follow the principles applied to the KITTI dataset, making the LLD dataset a reference for further development.

Comparative experiments against recent methods, both visual and quantitative, confirm the effectiveness of the proposed model, demonstrating its ability to restore degraded images and produce clearer recognition. The method achieves the highest scores across all fog concentration categories, with mAP values of 75.24%, 83.91%, and 90.74% for heavy, medium, and light fog, respectively. In tests under rain and under night+rain conditions with low lighting, it improves on previous methods by 3.88%, 3.74%, and 2.70%, respectively. The difference in detection accuracy between the MIRSA and MIRNet-v2 denoising models is around 2-3% for each condition. These results reinforce that the developed model can be relied upon for autonomous vehicle perception systems in extreme weather and low-lighting environments.
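The record does not include implementation code, so the following is only a minimal PyTorch sketch of the two-stage idea the abstract describes: a denoising (restoration) module whose output feeds an object detector. `TinyDenoiser` and `TinyDetector` are hypothetical placeholders standing in for MIRSA and YOLOv7-MOD, whose actual architectures are not given here.

```python
# Illustrative sketch only: a denoise-then-detect pipeline in PyTorch.
# TinyDenoiser and TinyDetector are hypothetical stand-ins, NOT the
# dissertation's MIRSA or YOLOv7-MOD implementations.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Placeholder restoration module (role played by MIRSA in the framework)."""

    def __init__(self, channels: int = 3, width: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual restoration: predict a correction and add it to the input.
        return x + self.body(x)


class TinyDetector(nn.Module):
    """Placeholder detector (role played by YOLOv7-MOD in the framework)."""

    def __init__(self, channels: int = 3, num_outputs: int = 85):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, num_outputs, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


class DenoiseThenDetect(nn.Module):
    """Chains a restoration module and a detector, as the framework describes."""

    def __init__(self, denoiser: nn.Module, detector: nn.Module):
        super().__init__()
        self.denoiser = denoiser
        self.detector = detector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        restored = self.denoiser(x)     # clean up rain / fog / low-light degradation
        return self.detector(restored)  # detect objects on the restored image


if __name__ == "__main__":
    model = DenoiseThenDetect(TinyDenoiser(), TinyDetector())
    dummy = torch.rand(1, 3, 256, 256)  # a single RGB frame
    print(model(dummy).shape)           # torch.Size([1, 85, 64, 64])
```

The design choice illustrated here is simply that the detector never sees the raw degraded frame; it operates on the restored image, which is why the abstract reports detection gains when the denoising stage is improved.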
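CBAM and deformable convolution are existing, published building blocks (CBAM from Woo et al., 2018; a deformable convolution layer is available, for example, as `torchvision.ops.DeformConv2d`). Below is a generic reference sketch of CBAM assuming the standard channel-then-spatial attention design; the dissertation's M3CBAM variant and its exact placement inside YOLOv7-MOD are not specified in this record.

```python
# Generic CBAM (Convolutional Block Attention Module) sketch in PyTorch,
# following the standard design (Woo et al., 2018). Not the dissertation's
# specific M3CBAM configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Applies channel attention, then spatial attention, to a feature map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feat = torch.rand(1, 64, 32, 32)
    print(CBAM(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```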