Haze removal from an image via generative adversarial networks


Full description

Bibliographic Details
Main Author: Cheng, Mun Chew
Other Authors: Loke Yuan Ren
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174985
Institution: Nanyang Technological University
Description
Summary: The performance of computer vision applications such as autonomous vehicles and satellite imaging can be degraded by real-world conditions such as haze, smoke and rain particles. Recent works focus on deep-learning GAN-based and Transformer-based models for image dehazing. However, current methods still struggle to generate realistic and structurally accurate dehazed images. Hence, this study proposes to incorporate structural similarity (SSIM) loss and GAN adversarial loss into the training process to further improve the realism and structural accuracy of the dehazed image. Accordingly, an improved version of the Dehazeformer [14] is introduced in this paper by integrating SSIM loss and adversarial loss into the training feedback. Experiments were conducted on the RESIDE Benchmark Dataset [17] and the NTIRE challenge datasets [18-20]. The experiments showed a 0.62% improvement in PSNR on indoor images, as well as 0.1%/0.1% improvements in BRISQUE and NIQE scores. Object detection experiments also showed the proposed model outperformed the original Dehazeformer by an average of 0.07%.
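The combined objective described in the summary (reconstruction plus SSIM plus adversarial terms) can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the global single-window SSIM, the L1 reconstruction term, the non-saturating adversarial term, and the weights `w_ssim` and `w_adv` are all assumptions for demonstration; practical SSIM losses are computed over local Gaussian windows, and the discriminator score `d_score` here stands in for a trained GAN discriminator's output.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    # Global (single-window) SSIM -- a simplified stand-in for the
    # windowed SSIM usually used in a structural-similarity loss.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def dehazing_loss(pred, target, d_score, w_ssim=0.5, w_adv=0.01):
    # Combined training objective (illustrative weights):
    #   L1 reconstruction + (1 - SSIM) structural term
    #   + non-saturating adversarial term -log D(G(x)).
    l_rec = np.abs(pred - target).mean()
    l_ssim = 1.0 - ssim_global(pred, target)
    l_adv = -np.log(d_score + 1e-8)
    return l_rec + w_ssim * l_ssim + w_adv * l_adv
```

With a perfect reconstruction (`pred == target`) and a fully fooled discriminator (`d_score = 1.0`), all three terms vanish; degrading either the structural match or the discriminator score raises the loss, which is the feedback signal the summary refers to.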