Optical-to-SAR image translation in remote sensing via generative adversarial network

Bibliographic Details
Main Author: Li, Jiahua
Other Authors: Wen Bihan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/158390
Institution: Nanyang Technological University
Description
Summary: As remote sensing technology makes great progress, more and more remote sensing applications are developed to satisfy rising needs. Satellite images have been widely applied in various fields, such as urban planning, geological exploration, and military object detection. In remote sensing, synthetic aperture radar (SAR) is one of the most widely used imaging devices. Compared with optical imaging, it is harder to acquire large numbers of SAR images because of the high cost of spaceborne SAR platforms. Consequently, the annotations of EO-SAR datasets may be only partially available, and the lack of paired data severely limits the development of AI in remote sensing. In this project, Artificial Intelligence (AI) technology was used for image generation. The Generative Adversarial Network (GAN), one of the most widely used networks in AI, was explored for optical-to-SAR image translation. This research attempted to use two GAN networks, CycleGAN and Pix2Pix, to realize the image generation. Finally, the feasibility and performance of the two networks were evaluated on a dataset containing optical images and their corresponding SAR images. By translating optical images into SAR images, this project aimed to address the shortage of paired multi-modal datasets in the remote sensing field. Building on this research, more efficient data augmentation methods can be applied to many large-scale AI applications in remote sensing. In future research, multi-modality fusion translation could be explored using LiDAR, optical, and SAR images; with more features from different modalities, image translation can become more accurate.
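
The summary describes training Pix2Pix (and CycleGAN) to map optical patches to SAR patches. The project's actual code is not part of this record, so the following is only a minimal illustrative sketch of a Pix2Pix-style training step on paired optical-SAR data; the toy network sizes, the 100x L1 weight, and the dummy tensors are assumptions for illustration, not the thesis implementation.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Toy encoder-decoder mapping a 3-channel optical patch to a 1-channel SAR patch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # PatchGAN-style discriminator on concatenated (optical, SAR) pairs.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, optical, sar):
        return self.net(torch.cat([optical, sar], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a dummy paired batch; a real run would load aligned optical/SAR tiles.
optical = torch.randn(4, 3, 64, 64)
sar = torch.randn(4, 1, 64, 64)
fake_sar = G(optical)

# Discriminator step: real pairs labelled 1, generated pairs labelled 0.
d_real = D(optical, sar)
d_fake = D(optical, fake_sar.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator and stay close to the paired SAR target (L1 term).
d_fake = D(optical, fake_sar)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_sar, sar)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The L1 term is what exploits the paired EO-SAR data in the Pix2Pix setting; CycleGAN drops it and instead trains two generators with a cycle-consistency loss, which is why it can also be applied when optical and SAR images are unpaired.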