Haze removal from an image via generative adversarial networks

Bibliographic Details
Main Author: Cheng, Mun Chew
Other Authors: Loke Yuan Ren
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174985
Institution: Nanyang Technological University
Description
Summary: The performance of computer vision applications such as autonomous vehicles and satellite imaging can be degraded by real-world conditions such as haze, smoke and rain particles. Recent works focus on deep-learning GAN-based and Transformer-based models for image dehazing. However, current methods still struggle to generate realistic and structurally accurate dehazed images. Hence, this study proposes incorporating structural similarity (SSIM) loss and GAN adversarial loss into the training process to further improve the realism and structural accuracy of the dehazed images. An improved version of the DehazeFormer [14] is introduced in this paper by integrating SSIM loss and adversarial loss into the training feedback. Experiments were conducted on the RESIDE benchmark dataset [17] and the NTIRE challenge datasets [18-20]. The experiments showed a 0.62% improvement in PSNR on indoor images, as well as 0.1% improvements in both the BRISQUE and NIQE scores. Object detection experiments also showed the proposed model outperforming the original DehazeFormer by an average of 0.07%.
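The summary describes combining an SSIM structural term with a GAN adversarial term in the generator's training objective. The sketch below illustrates one plausible form of such a combined loss; the specific weights (`w_l1`, `w_ssim`, `w_adv`), the simplified global SSIM (no sliding window), and the non-saturating adversarial term are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM over images scaled to [0, 1]:
    # compares means, variances and covariance of the two images.
    # (The standard SSIM uses a sliding Gaussian window instead.)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, d_fake_score,
                  w_l1=1.0, w_ssim=0.5, w_adv=0.01):
    # Hypothetical generator objective:
    #   L1 reconstruction  +  (1 - SSIM) structural term
    #   +  non-saturating adversarial term, where d_fake_score is the
    #   discriminator's probability that the dehazed image is real.
    l1 = np.abs(pred - target).mean()
    l_ssim = 1.0 - ssim(pred, target)
    l_adv = -np.log(d_fake_score + 1e-8)
    return w_l1 * l1 + w_ssim * l_ssim + w_adv * l_adv
```

For a perfect reconstruction that fully fools the discriminator (`d_fake_score` near 1), every term vanishes, so the loss is near zero; degrading either structural fidelity or discriminator score increases it, which is the intended training pressure.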