Fusion of visible and infrared images

Bibliographic Details
Main Author: Wong, Kelvin Wai Leong
Other Authors: Deepu Rajan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Online Access:https://hdl.handle.net/10356/162845
Description
Summary: In the field of computer vision, convolutional neural networks (CNNs) have shown great success owing to their ability to extract deep features, which is useful for image fusion. Many deep learning fusion methods have been proposed recently; however, most of them require training a model, which demands a large amount of data and makes them impractical for real-time use. Furthermore, the fused image often suffers from poor contrast and loss of fine detail. To address these problems, I propose a new fusion method that combines a pretrained VGG-19 with a visual saliency weight map (VSWM) and fast guided filtering (FGF), aiming to preserve more detail and improve the contrast of the fused image. To evaluate the proposed approach, it is compared against three other existing fusion methods using image quality metrics. Finally, I discuss future work on the proposed method.
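
The summary describes the pipeline only at a high level. The following Python sketch illustrates one plausible reading of it: per-pixel activity from a pretrained VGG-19, a simple saliency-based weight map, and a plain guided filter standing in for fast guided filtering. It assumes PyTorch, torchvision, NumPy and OpenCV; all function names, layer choices and parameters are illustrative assumptions, not the author's actual implementation.

import numpy as np
import cv2
import torch
import torchvision.models as models

def vgg_feature_weight(img_gray, layer_idx=21):
    # L1 norm of VGG-19 features (e.g. a relu4_x layer) as a per-pixel activity map (assumed choice).
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    x = torch.from_numpy(img_gray).float()[None, None] / 255.0
    x = x.repeat(1, 3, 1, 1)  # replicate grayscale to 3 channels for VGG input
    with torch.no_grad():
        feat = x
        for i, layer in enumerate(vgg):
            feat = layer(feat)
            if i == layer_idx:
                break
    act = feat.abs().sum(dim=1, keepdim=True)  # channel-wise L1 activity
    act = torch.nn.functional.interpolate(act, size=img_gray.shape, mode="bilinear", align_corners=False)
    return act[0, 0].numpy()

def saliency_weight(img_gray):
    # Simple visual-saliency proxy: distance of each pixel from the mean intensity.
    return np.abs(img_gray.astype(np.float32) - img_gray.mean())

def guided_filter(guide, src, r=8, eps=1e-3):
    # Plain guided filter built from box filters; a stand-in for fast guided filtering.
    mean = lambda a: cv2.boxFilter(a, -1, (r, r))
    I, p = guide.astype(np.float32), src.astype(np.float32)
    mI, mp = mean(I), mean(p)
    cov, var = mean(I * p) - mI * mp, mean(I * I) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def fuse(vis_gray, ir_gray):
    # Weighted-average fusion: VGG activity times saliency, refined by guided filtering.
    w_vis = vgg_feature_weight(vis_gray) * saliency_weight(vis_gray)
    w_ir = vgg_feature_weight(ir_gray) * saliency_weight(ir_gray)
    w_vis = guided_filter(vis_gray / 255.0, w_vis)
    w_ir = guided_filter(ir_gray / 255.0, w_ir)
    w_sum = w_vis + w_ir + 1e-8
    return (w_vis * vis_gray + w_ir * ir_gray) / w_sum

The fused result can then be scored against other methods with standard image quality metrics (for example entropy or structural similarity), as the summary indicates.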