A deep learning method for fog removal in image
Main Author: | |
---|---|
Other Authors: | |
Format: | Theses and Dissertations |
Language: | English |
Published: | 2019 |
Subjects: | |
Online Access: | http://hdl.handle.net/10356/78419 |
Institution: | Nanyang Technological University |
Summary: | Fog removal has always been a vital issue in image and video processing. With the development of various vision-based applications (e.g. photography, video surveillance and autonomous driving), images and videos have become essential sources of scene information. However, real-world scenes are sometimes obscured by fog, suffering from degraded visibility, color distortion and low contrast. Fog removal is therefore an important pre-processing step for many real-world vision tasks.
Inspired by the considerable improvement that deep learning has brought to vision tasks, we propose a deep learning method for fog removal in images. The method is a novel end-to-end model built from convolutional neural networks. We use a densely connected network with pyramid pooling and a U-net to predict the transmission map and the atmospheric light respectively, and recover the fog-free image via the atmospheric scattering model (a sketch of this recovery step is given after the summary). Moreover, we design a jointly-refining module based on a generative adversarial network to further strengthen the mutual structural correlation between the fog-removed images and their corresponding predicted transmission maps.
Both quantitative evaluation on a synthetic dataset and qualitative evaluation on synthetic and real-life datasets are conducted to assess the results. To better demonstrate the effectiveness of our method, results of several previous methods and ours are displayed together for comparison. The evaluations show our advancement in performance, with better visibility, less distortion and more true-to-nature restoration results. |
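For reference, the recovery step mentioned in the summary relies on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed foggy image, J the clear scene, t the transmission map and A the atmospheric light. The following is a minimal NumPy sketch of how a fog-free image could be recovered once t and A have been predicted; the function name, array shapes and the t_min floor are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def recover_scene(hazy, transmission, atmos_light, t_min=0.1):
    """Invert the atmospheric scattering model to recover a fog-free image.

    I(x) = J(x) * t(x) + A * (1 - t(x))  =>  J(x) = (I(x) - A) / max(t(x), t_min) + A

    hazy         : HxWx3 float array in [0, 1], the observed foggy image I
    transmission : HxW float array in (0, 1], the predicted transmission map t
    atmos_light  : length-3 float array, the predicted atmospheric light A
    t_min        : lower bound on t to avoid amplifying noise in dense-fog regions
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]   # HxWx1 for broadcasting
    scene = (hazy - atmos_light) / t + atmos_light     # invert the scattering model
    return np.clip(scene, 0.0, 1.0)
```

As a hypothetical usage example, with `hazy` of shape (480, 640, 3), a predicted `transmission` of shape (480, 640) and `atmos_light = np.array([0.9, 0.92, 0.95])`, `recover_scene(hazy, transmission, atmos_light)` returns the dehazed image in the same [0, 1] range; the t_min floor is a common safeguard, not a value specified by the thesis.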