Haze removal from an image or a video via generative adversarial networks

Bibliographic Details
Main Author: Chen, Zhong Jiang
Other Authors: Loke Yuan Ren
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181155
Institution: Nanyang Technological University
Description
Summary: Low visibility caused by haze and fog is one of the major causes of traffic and aviation accidents. This paper introduces a more accessible solution for removing haze from a single image, a video, or a live stream. My approach uses a modified conditional Generative Adversarial Network (cGAN) with a DenseNet-121 architecture to dehaze visual inputs efficiently. Unlike models that use Tiramisu [5] or depend on two-step pipelines, the modified model preserves structural accuracy and visual clarity while removing haze by optimizing the generator-discriminator interaction within the GAN framework. Its effectiveness is demonstrated through comprehensive experiments on synthetic and real-world data, achieving competitive results in PSNR, SSIM, and subjective quality measures. The system aims to improve visibility in live-streaming scenarios, such as for vehicles and aircraft, potentially reducing the probability of accidents under low-visibility conditions.
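The summary reports results in PSNR (and SSIM). As a point of reference, PSNR between a haze-free ground-truth image and a dehazed output is a standard, simple computation; the sketch below is illustrative and not taken from the project (the `psnr` function name and the flat-list image representation are assumptions for brevity):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in decibels between a haze-free
    reference image and a dehazed output, both given as flat
    sequences of pixel intensities in [0, max_val]."""
    # Mean squared error over all pixels.
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR indicates the dehazed output is closer to the ground truth; values in the high 20s to 30s dB are typical of competitive single-image dehazing results on synthetic benchmarks.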