Deep generative model for remote sensing

Synthetic Aperture Radar (SAR) sensors are frequently used for earth monitoring in remote sensing. Because SAR sensors provide robust imagery for earth observation, researchers frequently apply conventional computer vision techniques to these images to improve monitoring and eliminate the need for manual processing. However, the lack of sufficient SAR images makes it difficult to train deep learning models, which limits progress on traditional computer vision tasks in remote sensing. To address this issue, there has been growing interest in using Generative Adversarial Networks (GANs) to generate artificial data for data augmentation. In this project, we aim to use GANs to synthesise SAR images from available optical images. As real-world datasets commonly contain paired but not properly aligned images, we propose integrating registration networks into GANs to address this misalignment. Our results show that the registration network effectively models the noise distribution in the dataset and improves performance compared to models without registration networks. Additionally, to enhance the fidelity of the generated images, we propose fusing boundary and depth information from LiDAR images into the optical images before performing image translation. Our study demonstrates that this approach produces more accurate and realistic SAR images than translation without LiDAR images. The method has the potential to benefit future research in multi-modal image-to-image translation and remote sensing tasks, and is also more cost-effective than traditional data acquisition methods. Finally, we provide an analysis of the proposed method, including its mechanism, advantages, and limitations, to guide future research in this area.
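To illustrate the registration idea described in the abstract, the sketch below shows one plausible way to train a translation GAN on misaligned optical/SAR pairs: a registration network predicts a dense displacement field that warps the generated SAR image toward the (slightly misaligned) real SAR image before the reconstruction loss is computed. This is a minimal PyTorch sketch under stated assumptions; the module architectures, names, and sizes are illustrative, not the thesis implementation.

```python
# Hedged sketch: pix2pix-style translation with a registration network that
# absorbs spatial misalignment between paired optical and SAR patches.
# All architectures here are toy stand-ins, not the thesis's actual networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy encoder-decoder mapping an optical patch (3-ch) to SAR (1-ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class RegistrationNet(nn.Module):
    """Predicts a per-pixel 2-D displacement aligning fake SAR to real SAR."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, fake, real):
        flow = self.net(torch.cat([fake, real], dim=1))
        # Identity sampling grid in [-1, 1] plus the predicted displacement.
        n, _, h, w = fake.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        grid = grid + flow.permute(0, 2, 3, 1)
        # Warp the generated image so the loss tolerates misalignment.
        return F.grid_sample(fake, grid, align_corners=True)

# The L1 loss is taken on the *warped* fake image, so small shifts in the
# paired data are explained by the registration network rather than blurring
# the generator's output.
gen, reg = Generator(), RegistrationNet()
optical = torch.randn(2, 3, 32, 32)   # paired optical patch (random stand-in)
real_sar = torch.randn(2, 1, 32, 32)  # slightly misaligned SAR patch
fake_sar = gen(optical)
warped = reg(fake_sar, real_sar)
loss = F.l1_loss(warped, real_sar)
loss.backward()
```

In a full pipeline this reconstruction term would be combined with an adversarial loss on `fake_sar`, and the LiDAR variant would concatenate boundary/depth channels with the optical input before the generator.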

Saved in:
Bibliographic Details
Main Author: Huang, Shiqi
Other Authors: Wen Bihan
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access:https://hdl.handle.net/10356/167133
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-167133
School: School of Electrical and Electronic Engineering
Supervisor: Wen Bihan (bihan.wen@ntu.edu.sg)
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Date Deposited: 2023-05-23
Citation: Huang, S. (2023). Deep generative model for remote sensing. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167133
Collection: DR-NTU, NTU Library, Singapore