Vision transformer as image fusion model

Bibliographic Details
Main Author: Zhao, Fengye
Other Authors: Zinovi Rabinovich
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access: https://hdl.handle.net/10356/166048
Institution: Nanyang Technological University
Description
Summary: Vision transformers show state-of-the-art performance in vision tasks; the self-attention block is not limited to NLP but also performs well in image processing. In this report, I investigated whether this performance can be extended to more detailed image tasks by combining a ViT with a VAE decoder. I observe that the output of the ViT encoder can be reconstructed by the VAE decoder, and that by controlling the variability of the input patches, the model can perform image fusion tasks. It also has the potential to solve other high-complexity image processing tasks.
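The abstract describes controlling which patches enter the ViT encoder so that the VAE decoder reconstructs a fused image. The thesis itself is not reproduced in this record, so the following is only a minimal NumPy sketch of the patch-mixing step it implies; the patch size, boolean mask, and all function names here are my own illustrative assumptions, not the author's code:

```python
import numpy as np

def to_patches(img, p):
    """Split an (H, W) image into non-overlapping p x p patches,
    returned as an (N, p*p) sequence, ViT-style."""
    h, w = img.shape
    grid = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return grid.reshape(-1, p * p)

def mix_patches(a, b, keep_from_a):
    """Build a fused patch sequence: take patch i from image A
    where keep_from_a[i] is True, otherwise from image B."""
    mask = np.asarray(keep_from_a)[:, None]
    return np.where(mask, a, b)

def from_patches(patches, h, w, p):
    """Inverse of to_patches: reassemble the (H, W) image."""
    grid = patches.reshape(h // p, w // p, p, p).swapaxes(1, 2)
    return grid.reshape(h, w)

# Two toy 4x4 "images" split into 2x2 patches.
p = 2
img_a = np.zeros((4, 4))
img_b = np.ones((4, 4))
pa, pb = to_patches(img_a, p), to_patches(img_b, p)
# Keep the top-left and bottom-right patches from A, the rest from B.
fused_seq = mix_patches(pa, pb, [True, False, False, True])
fused = from_patches(fused_seq, 4, 4, p)
```

In the reported pipeline, a sequence like `fused_seq` would be the input whose variability is controlled before encoding; here the patches are simply reassembled to show the fusion pattern directly.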