Learning to see in the dark
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/148091
Institution: Nanyang Technological University
Summary: Low-light image enhancement aims to improve the visibility of images taken in low-light or nighttime conditions. Currently, most deep models are trained on synthetic low-light datasets or small, manually collected datasets, which limits their generalization to low-light images captured in the wild. In this study, a domain adaptation framework is proposed to translate between synthetic low-light images and real low-light images. Within this framework, we embed a method that generates low-light images at different brightness levels, which augments the data available for training low-light enhancement networks. Finally, an attention-guided U-Net is trained on the augmented dataset. Qualitative and quantitative evaluations show that our method is comparable to other state-of-the-art methods.
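The abstract mentions an attention-guided U-Net trained on the augmented low-light dataset. The record does not include implementation details, so the following is only a minimal PyTorch-style sketch of what an attention-gated U-Net block could look like; all module names, channel widths, and layer choices are assumptions, not the thesis's actual network.

```python
# Illustrative sketch only: a one-level attention-gated U-Net in PyTorch.
# Everything here (names, channel counts, activations) is assumed.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Weights encoder skip features using a gating signal from the decoder."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, gate, skip):
        attn = self.psi(torch.relu(self.w_g(gate) + self.w_x(skip)))
        return skip * attn  # attention-weighted skip connection


class TinyAttentionUNet(nn.Module):
    """Single encoder/decoder level with one attention-gated skip connection."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = conv_block(3, ch)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(ch, ch * 2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.attn = AttentionGate(gate_ch=ch, skip_ch=ch, inter_ch=ch // 2)
        self.dec = conv_block(ch * 2, ch)
        self.out = nn.Conv2d(ch, 3, 1)

    def forward(self, x):
        e = self.enc(x)                    # encoder features
        b = self.bottleneck(self.down(e))  # bottleneck features
        g = self.up(b)                     # decoder gating signal
        skip = self.attn(g, e)             # attention-weighted skip
        d = self.dec(torch.cat([g, skip], dim=1))
        return torch.sigmoid(self.out(d))  # enhanced image in [0, 1]


if __name__ == "__main__":
    net = TinyAttentionUNet()
    low_light = torch.rand(1, 3, 64, 64)   # dummy low-light input
    print(net(low_light).shape)            # torch.Size([1, 3, 64, 64])
```

In this kind of design, the attention gate lets decoder features suppress uninformative regions of the encoder skip connection before concatenation, which is one common reading of "attention-guided" in enhancement U-Nets; the actual mechanism used in the thesis may differ.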