Lightweight deep learning for image inpainting
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/174874 |
| Institution: | Nanyang Technological University |
Summary: This project aims to investigate how the performance of a lightweight image inpainting model can be improved while dropping the discriminator and adversarial loss that are common in most inpainting models. We used a generator model adapted from a GAN that has already been shown to accomplish image inpainting tasks successfully. We also implemented loss functions from different research works and introduced our own, which we call convoluted losses. Experiments were carried out to determine how these loss functions interact with one another, in the hope of improving inpainting performance. Finally, we investigated whether a single model trained on a dataset containing multiple categories of images (faces and landscapes) can perform as well as models trained on only one category. Our research ultimately shows why GANs remain the preferred method for image inpainting, but our loss functions and hybrid datasets show some promise in driving new ways of approaching the inpainting task.
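
The summary describes training a GAN-derived generator without a discriminator, steering it instead with a weighted combination of loss functions. As an illustration only, the sketch below shows what such a generator-only inpainting training step could look like in PyTorch; the generator architecture, the loss weights, and the report's own "convoluted losses" are not specified in the abstract, so every name and value here is hypothetical.

```python
# Illustrative sketch only: a generator-only inpainting update that replaces
# the adversarial loss with a weighted sum of reconstruction-style losses.
# All names, weights, and loss choices are assumptions, not the report's.
import torch
import torch.nn.functional as F

def train_step(generator, optimizer, image, mask, w_pixel=1.0, w_edge=0.1):
    """One generator update without a discriminator.

    image: ground-truth batch, shape (N, 3, H, W)
    mask:  binary hole mask, shape (N, 1, H, W), 1 marks missing pixels
    """
    corrupted = image * (1.0 - mask)               # zero out the hole region
    output = generator(torch.cat([corrupted, mask], dim=1))
    completed = corrupted + output * mask          # keep known pixels as-is

    # Pixel-wise L1 reconstruction loss over the whole image.
    pixel_loss = F.l1_loss(completed, image)

    # A simple image-gradient (edge) loss as a stand-in for an extra term.
    def gradients(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx, dy

    dx_o, dy_o = gradients(completed)
    dx_t, dy_t = gradients(image)
    edge_loss = F.l1_loss(dx_o, dx_t) + F.l1_loss(dy_o, dy_t)

    # The interaction between loss terms is controlled only by scalar weights.
    loss = w_pixel * pixel_loss + w_edge * edge_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point mirrored from the abstract is that no discriminator or adversarial term appears anywhere in the update; how the individual loss terms are weighted and combined is exactly the kind of interaction the project's experiments set out to study.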