RainGAN: unsupervised raindrop removal via decomposition and composition


Bibliographic Details
Main Author: Xu, Yan
Other Authors: Loke Yuan Ren
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/160029
Institution: Nanyang Technological University
Description
Summary: Adherent raindrops on a windshield or camera lens may distort and occlude the view, causing problems for downstream machine-vision perception. Most existing raindrop removal methods learn the mapping from a raindrop image to its clean content by training on paired raindrop-clean images. However, such real-world paired images are difficult to collect in practice. This thesis presents a novel framework for raindrop removal that eliminates the need for paired training samples. Based on the assumption that a raindrop image is the composition of a clean image and a raindrop style, the proposed framework decomposes a raindrop image into a clean content image and a raindrop-style latent code, and composes a clean content image with a raindrop-style code into a raindrop image for data augmentation. The framework also introduces a domain-invariant residual block to facilitate identity mapping for the clean portion of the raindrop image. Extensive experiments on real-world raindrop datasets show that the proposed network outperforms other unpaired image-to-image translation methods at raindrop removal, and performs comparably to state-of-the-art methods that require paired images.
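The core assumption above — that a raindrop image is the composition of a clean content image and a raindrop style, and that composition followed by decomposition should recover the clean content — can be illustrated with a toy sketch. This is a minimal numerical illustration, not the thesis's actual architecture: in RainGAN both `compose` and `decompose` are learned networks trained with adversarial and cycle-consistency objectives on unpaired data, and the function names and additive blending here are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose(content, style):
    # Toy composition: blend a style-driven raindrop pattern onto the
    # clean content image. (In the thesis, a learned generator maps a
    # clean image plus a raindrop-style latent code to a raindrop image.)
    pattern = np.outer(style, style)
    return content + 0.5 * pattern

def decompose(raindrop_img, style):
    # Toy exact inverse of `compose`. (In the thesis, a learned network
    # recovers the clean content and the raindrop-style latent code.)
    pattern = np.outer(style, style)
    return raindrop_img - 0.5 * pattern

content = rng.random((8, 8))   # stands in for the clean content image
style = rng.random(8)          # stands in for the raindrop-style latent code

# Composing a raindrop image and then decomposing it should recover the
# clean content -- the cycle consistency that the unpaired training
# objective enforces in place of paired supervision.
recovered = decompose(compose(content, style), style)
cycle_error = np.abs(recovered - content).max()
```

Because the toy `decompose` inverts `compose` exactly, `cycle_error` is numerically zero here; in the learned setting this recovery is only approximate and is encouraged by the cycle-consistency loss rather than guaranteed.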