Haze removal from an image or a video via generative adversarial networks
Main Author: 
Other Authors: 
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects: 
Online Access: https://hdl.handle.net/10356/181155
Institution: Nanyang Technological University
Summary: Low visibility caused by haze and fog is one of the major causes of traffic and aviation accidents. This paper introduces a more accessible solution for removing haze from a single image, a video, or a live stream. My approach uses a modified conditional Generative Adversarial Network (cGAN) with a DenseNet-121 architecture to efficiently dehaze visual inputs. Unlike models that use Tiramisu [5] or depend on two-step pipelines, the modified model preserves structural accuracy and visual clarity while removing haze by optimizing the generator-discriminator interaction within the GAN framework. Its effectiveness is demonstrated through comprehensive experiments on synthetic and real-world data, achieving competitive results in PSNR, SSIM, and subjective quality measures. This system aims to improve visibility in live-streaming scenarios, such as for vehicles and aircraft, potentially reducing the probability of accidents under low-visibility conditions.
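The summary reports results in PSNR and SSIM. As a minimal illustration of the first metric (a generic sketch for 8-bit images, not the project's own evaluation code), PSNR is the log-scaled ratio of the maximum pixel value squared to the mean squared error between a reference image and a restored one:

```python
import numpy as np

def psnr(reference, restored, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: a 4x4 gray image with one corrupted pixel.
clean = np.full((4, 4), 128, dtype=np.uint8)
noisy = clean.copy()
noisy[0, 0] = 138  # single-pixel error of 10 levels
print(round(psnr(clean, noisy), 2))  # → 40.17
```

Higher PSNR means the dehazed output is closer to the haze-free ground truth; SSIM complements it by comparing local structure rather than per-pixel error.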