Edge detection guide network for semantic segmentation of remote-sensing images

The acquisition of high-resolution satellite and airborne remote-sensing images has been greatly simplified by the rapid development of sensor technology, and many practical applications of high-resolution remote-sensing images (HRRSIs) rely on semantic segmentation. However, single-modal HRRSIs are difficult to classify accurately when scene objects are complex, so semantic segmentation based on multi-source information fusion is gaining popularity. The performance of existing multimodal fusion methods is typically limited by the inherent difference between multimodal features and the semantic gap between multi-level features. To address these issues, we propose a multimodal fusion network guided by edge detection, which supports multimodal information fusion with the spatial information contained in object boundaries. An edge detection guide module in the feature-extraction stage recovers boundary information by fusing the details of low-level features with the semantics of high-level features. This boundary information is then fed into a multimodal adaptive fusion block (MAFB) to obtain the fused multimodal features. In addition, a residual adaptive fusion block (RAFB) and a spatial position module (SPM) in the feature-decoding stage fuse multi-level features from the standpoints of local and global dependence. We compared our method with several state-of-the-art (SOTA) methods on the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam datasets; the results demonstrate that our method achieves excellent performance.
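The abstract describes the architecture only at a high level. As a rough illustration of the kind of edge-guided multimodal fusion it refers to, the following is a minimal PyTorch sketch of what a multimodal adaptive fusion block (MAFB) could look like. The layer sizes, the channel-wise gating, the assumption of an optical branch plus a DSM branch, and the way the boundary map re-weights the fused features are all illustrative choices, not the authors' published implementation.

```python
# Hypothetical sketch of an edge-guided multimodal fusion block.
# Not the authors' code: all layer choices and the gating scheme are assumptions.
import torch
import torch.nn as nn


class MultimodalAdaptiveFusionBlock(nn.Module):
    """Fuses optical and DSM feature maps under the guidance of a boundary map.

    The boundary map (one channel, values in [0, 1]) re-weights the fused
    multimodal features so that responses near object edges are emphasised.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Channel-wise gate: predicts one mixing weight per modality.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2, kernel_size=1),
            nn.Softmax(dim=1),
        )
        # Projects the edge-weighted fusion back to the working width.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb: torch.Tensor, dsm: torch.Tensor,
                edge: torch.Tensor) -> torch.Tensor:
        # edge: (B, 1, H, W) boundary probability map from the guide module.
        stacked = torch.cat([rgb, dsm], dim=1)
        weights = self.gate(stacked)               # (B, 2, 1, 1)
        w_rgb, w_dsm = weights[:, :1], weights[:, 1:]
        mixed = torch.cat([w_rgb * rgb, w_dsm * dsm], dim=1)
        # Emphasise responses near boundaries while keeping the original signal.
        mixed = mixed * (1.0 + edge)
        return self.fuse(mixed)


if __name__ == "__main__":
    block = MultimodalAdaptiveFusionBlock(channels=64)
    rgb = torch.randn(2, 64, 32, 32)
    dsm = torch.randn(2, 64, 32, 32)
    edge = torch.sigmoid(torch.randn(2, 1, 32, 32))
    print(block(rgb, dsm, edge).shape)  # torch.Size([2, 64, 32, 32])
```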

Bibliographic Details
Main Authors: Jin, Jianhui; Zhou, Wujie; Yang, Rongwang; Ye, Lv; Yu, Lu
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Feature Extraction; Semantics
Online Access: https://hdl.handle.net/10356/170688
Institution: Nanyang Technological University
Published in: IEEE Geoscience and Remote Sensing Letters, vol. 20, article no. 5000505 (2023)
ISSN: 1545-598X
DOI: 10.1109/LGRS.2023.3234257
Scopus ID: 2-s2.0-85146862629
Citation: Jin, J., Zhou, W., Yang, R., Ye, L. & Yu, L. (2023). Edge detection guide network for semantic segmentation of remote-sensing images. IEEE Geoscience and Remote Sensing Letters, 20, 5000505. https://dx.doi.org/10.1109/LGRS.2023.3234257
Funding: This work was supported by the National Natural Science Foundation of China under Grant 61502429.
Rights: © 2023 IEEE. All rights reserved.
Record ID: sg-ntu-dr.10356-170688
Content Provider: NTU Library, Singapore
Collection: DR-NTU
Date Added: 2023-09-26