Lessons from applying SRGAN on Sentinel-2 images for LULC classification

Satellite images are commonly used to monitor land use land cover (LULC) changes. Unfortunately, publicly available images often lack the resolution required for detailed urban studies. In this study, we enhanced the resolution of Sentinel-2 (S2) satellite images from 10 meters to 2.5 meters using two super-resolution models: Real-SR and Real-ESRGAN. We tested the suitability of the enhanced images for LULC classification of an urban city, Singapore. In our results, colours have mostly been preserved and man-made objects have become sharper. However, the enhanced images also exhibit colour shifts, darkening, and salt-and-pepper effects. At this stage, there is no conclusive evidence that enhanced images can improve LULC classification; in fact, they worsened classification accuracy by 17-30% and the Kappa coefficient by 0.2-0.4. Although our application of super-resolution to LULC classification was not successful, it is a first attempt and could be further improved.
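
As context for the method summarised above, the sketch below shows one way a 4x enhancement and the quoted evaluation metrics might be wired together. It is an illustrative assumption, not the authors' released code: it assumes the open-source basicsr/realesrgan and scikit-learn packages, a locally downloaded pre-trained "RealESRGAN_x4plus.pth" checkpoint, and uses small placeholder arrays in place of real Sentinel-2 tiles and LULC label maps.

```python
# Illustrative sketch only (not the authors' pipeline): 4x super-resolution of a
# Sentinel-2 RGB tile with Real-ESRGAN, then overall accuracy and Cohen's kappa.
# Assumes: pip install basicsr realesrgan scikit-learn numpy, plus a local copy
# of the pre-trained "RealESRGAN_x4plus.pth" checkpoint.
import numpy as np
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Stand-in for a 10 m/pixel Sentinel-2 RGB composite (H x W x 3, uint8).
tile_10m = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)

# Real-ESRGAN x4 generator: 10 m/pixel -> ~2.5 m/pixel.
net = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
              num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4,
                         model_path="RealESRGAN_x4plus.pth",  # assumed checkpoint path
                         model=net, tile=256, half=False)
tile_2p5m, _ = upsampler.enhance(tile_10m, outscale=4)
print("enhanced tile shape:", tile_2p5m.shape)  # expect roughly 1024 x 1024 x 3

# Metrics quoted in the abstract, here on tiny placeholder label vectors
# rather than real LULC reference and prediction maps.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]   # reference LULC classes (placeholder)
y_pred = [0, 1, 1, 1, 2, 0, 3, 2]   # classifier output on enhanced image (placeholder)
print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
```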

Bibliographic Details
Main Authors: Goh, Yun Si, Chua, Wen Qing, Yean, Seanglidet, Lee, Bu-Sung
Other Authors: College of Computing and Data Science
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Computer and Information Science; Classification; Deep learning
Online Access:https://hdl.handle.net/10356/177692
Institution: Nanyang Technological University
Conference: 2023 17th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)
Departments: College of Computing and Data Science; School of Computer Science and Engineering
Funder: National Research Foundation (NRF)
Funding: This research/project is supported by the Catalyst: Strategic Fund from Government Funding, administered by the Ministry of Business Innovation & Employment, New Zealand under contract C09X1923, as well as the National Research Foundation, Singapore under its Industry Alignment Fund – Pre-positioning (IAF-PP) Funding Initiative.
Version: Submitted/Accepted version
Citation: Goh, Y. S., Chua, W. Q., Yean, S. & Lee, B. (2024). Lessons from applying SRGAN on Sentinel-2 images for LULC classification. 2023 17th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 107-114. https://dx.doi.org/10.1109/SITIS61268.2023.00025
ISBN: 9798350370911
DOI: 10.1109/SITIS61268.2023.00025
Scopus ID: 2-s2.0-85190153402
Pages: 107-114
Handle: https://hdl.handle.net/10356/177692
Rights: © 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/SITIS61268.2023.00025.
Record ID: sg-ntu-dr.10356-177692
Date deposited: 2024-05-29
Collection: DR-NTU, NTU Library