Haze removal from an image or a video via generative adversarial networks

Bibliographic Details
Main Author: Chen, Zhong Jiang
Other Authors: Loke Yuan Ren
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181155
Institution: Nanyang Technological University
Description
Abstract: Low visibility caused by haze and fog is a major cause of traffic and aviation accidents. This paper introduces a more accessible solution for removing haze from a single image, a video, or a live stream. My approach uses a modified conditional Generative Adversarial Network (cGAN) with a DenseNet-121 architecture to dehaze visual inputs efficiently. Unlike models that use Tiramisu [5] or depend on two-step pipelines, the modified model preserves structural accuracy and visual clarity while removing haze by optimizing the generator-discriminator interaction within the GAN framework. The effectiveness of the modified model is demonstrated through comprehensive experiments on synthetic and real-world data, obtaining competitive results in PSNR, SSIM, and subjective quality measures. This system aims to improve visibility in live-streaming scenarios, such as for vehicles and aircraft, potentially reducing the probability of accidents under low-visibility conditions.
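
The record does not include the project's code, but the pipeline the abstract describes (a conditional GAN whose generator is built on DenseNet-121, trained against a discriminator on paired hazy/clear images and evaluated with PSNR and SSIM) can be sketched roughly as follows in PyTorch. The decoder design, discriminator layout, loss weighting, and the psnr helper are illustrative assumptions, not the author's implementation.

# Hypothetical sketch of a cGAN dehazing setup with a DenseNet-121 encoder.
# Layer shapes, loss weights, and the decoder are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121


class Generator(nn.Module):
    """Encoder-decoder generator: DenseNet-121 features followed by a simple decoder."""
    def __init__(self):
        super().__init__()
        # DenseNet-121 feature extractor: 1024 channels at 1/32 of the input resolution.
        self.encoder = densenet121(weights=None).features
        self.decoder = nn.Sequential(
            nn.Conv2d(1024, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 3, 3, padding=1), nn.Sigmoid(),  # dehazed RGB in [0, 1]
        )

    def forward(self, hazy):
        return self.decoder(self.encoder(hazy))


class Discriminator(nn.Module):
    """PatchGAN-style discriminator conditioned on the hazy input (the cGAN condition)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, hazy, clear):
        # Concatenate condition (hazy) and candidate (clear or generated) images.
        return self.net(torch.cat([hazy, clear], dim=1))


def psnr(pred, target):
    """Peak signal-to-noise ratio for images scaled to [0, 1]."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(1.0 / mse)


def train_step(gen, disc, opt_g, opt_d, hazy, clear, l1_weight=100.0):
    """One generator-discriminator update on a batch of (hazy, clear) image pairs."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: real pairs labelled 1, generated pairs labelled 0.
    fake = gen(hazy).detach()
    d_real, d_fake = disc(hazy, clear), disc(hazy, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator while staying close to the clear image.
    fake = gen(hazy)
    d_fake = disc(hazy, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * F.l1_loss(fake, clear)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()

In practice a dehazing generator would likely use learned upsampling with skip connections rather than a single bilinear upsample, and SSIM (for example via scikit-image's structural_similarity) would complement the PSNR measure used above.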