FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
Scene understanding based on LiDAR point clouds is an essential task for autonomous cars to drive safely; it often employs spherical projection to map 3D point clouds into multi-channel 2D images for semantic segmentation. Most existing methods simply stack different point attributes/modalities...
Main Authors: Xiao, Aoran; Yang, Xiaofei; Lu, Shijian; Guan, Dayan; Huang, Jiaxing
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Point Cloud; Semantic Segmentation
Online Access: https://hdl.handle.net/10356/162039
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-162039
record_format: dspace
spelling: sg-ntu-dr.10356-162039 2022-09-30T08:14:23Z
  Title: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
  Authors: Xiao, Aoran; Yang, Xiaofei; Lu, Shijian; Guan, Dayan; Huang, Jiaxing
  Affiliations: School of Computer Science and Engineering; Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU)
  Subjects: Engineering::Computer science and engineering; Point Cloud; Semantic Segmentation
  Abstract: Scene understanding based on LiDAR point clouds is an essential task for autonomous cars to drive safely; it often employs spherical projection to map 3D point clouds into multi-channel 2D images for semantic segmentation. Most existing methods simply stack different point attributes/modalities (e.g. coordinates, intensity, depth) as image channels to increase information capacity, but ignore the distinct characteristics of point attributes in different image channels. We design FPS-Net, a convolutional fusion network that exploits the uniqueness of and discrepancy among the projected image channels for optimal point cloud segmentation. FPS-Net adopts an encoder-decoder structure. Instead of simply stacking multiple channel images as a single input, we group them into different modalities, first learning modality-specific features separately and then mapping the learned features into a common high-dimensional feature space for pixel-level fusion and learning. Specifically, we design a residual dense block with multiple receptive fields as a building block in the encoder, which preserves detailed information in each modality and effectively learns hierarchical modality-specific and fused features. In the FPS-Net decoder, we likewise use a recurrent convolution block to hierarchically decode fused features into the output space for pixel-level classification. Extensive experiments on two widely adopted point cloud datasets show that FPS-Net achieves superior semantic segmentation compared with state-of-the-art projection-based methods. In addition, the proposed modality fusion idea is compatible with typical projection-based methods and can be incorporated into them with consistent performance improvements.
  Funding: This research was conducted at Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU), funded by the Singapore Government through the Industry Alignment Fund - Industry Collaboration Projects Grant.
  Dates: accessioned 2022-09-30T08:14:23Z; available 2022-09-30T08:14:23Z; issued 2021
  Type: Journal Article
  Citation: Xiao, A., Yang, X., Lu, S., Guan, D. & Huang, J. (2021). FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation. ISPRS Journal of Photogrammetry and Remote Sensing, 176, 237-249. https://dx.doi.org/10.1016/j.isprsjprs.2021.04.011
  ISSN: 0924-2716
  Handle: https://hdl.handle.net/10356/162039
  DOI: 10.1016/j.isprsjprs.2021.04.011
  Scopus EID: 2-s2.0-85105590610
  Volume: 176; Pages: 237-249
  Language: en
  Journal: ISPRS Journal of Photogrammetry and Remote Sensing
  Rights: © 2021 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
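The modality-grouping idea described in the abstract above can be illustrated with a minimal, hypothetical sketch: the channel names, the per-modality `encode` step, and the `fuse` step below are simple stand-ins (per-pixel sums), not the paper's learned residual dense blocks or its actual channel layout.

```python
# Hypothetical sketch of modality grouping + pixel-level fusion.
# Instead of stacking all projected channels into one input, channels are
# grouped into modalities, encoded separately, then fused per pixel in a
# common feature space. Modality names are illustrative assumptions.
MODALITIES = {
    "coordinates": ["x", "y", "z"],
    "intensity": ["remission"],
    "depth": ["range"],
}

def encode(channels):
    # stand-in for a modality-specific encoder: per-pixel sum over channels
    return [sum(px) for px in zip(*channels)]

def fuse(features):
    # stand-in for pixel-level fusion of modality features
    return [sum(px) for px in zip(*features)]

def forward(image):
    # image: dict mapping channel name -> flat list of per-pixel values
    per_modality = [encode([image[c] for c in chans])
                    for chans in MODALITIES.values()]
    return fuse(per_modality)
```

The point of the sketch is the data flow, not the arithmetic: each modality gets its own branch before any cross-modality mixing happens.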
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Point Cloud; Semantic Segmentation
spellingShingle: Engineering::Computer science and engineering; Point Cloud; Semantic Segmentation; Xiao, Aoran; Yang, Xiaofei; Lu, Shijian; Guan, Dayan; Huang, Jiaxing; FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
description: Scene understanding based on LiDAR point clouds is an essential task for autonomous cars to drive safely; it often employs spherical projection to map 3D point clouds into multi-channel 2D images for semantic segmentation. Most existing methods simply stack different point attributes/modalities (e.g. coordinates, intensity, depth) as image channels to increase information capacity, but ignore the distinct characteristics of point attributes in different image channels. We design FPS-Net, a convolutional fusion network that exploits the uniqueness of and discrepancy among the projected image channels for optimal point cloud segmentation. FPS-Net adopts an encoder-decoder structure. Instead of simply stacking multiple channel images as a single input, we group them into different modalities, first learning modality-specific features separately and then mapping the learned features into a common high-dimensional feature space for pixel-level fusion and learning. Specifically, we design a residual dense block with multiple receptive fields as a building block in the encoder, which preserves detailed information in each modality and effectively learns hierarchical modality-specific and fused features. In the FPS-Net decoder, we likewise use a recurrent convolution block to hierarchically decode fused features into the output space for pixel-level classification. Extensive experiments on two widely adopted point cloud datasets show that FPS-Net achieves superior semantic segmentation compared with state-of-the-art projection-based methods. In addition, the proposed modality fusion idea is compatible with typical projection-based methods and can be incorporated into them with consistent performance improvements.
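The spherical projection mentioned in the description can be sketched roughly as follows. The field-of-view bounds and image size are illustrative values typical of 64-beam LiDAR range images (RangeNet-style setups), assumed here for the example, not necessarily the paper's settings.

```python
import math

# Illustrative sensor/image parameters (assumptions, not the paper's values).
FOV_UP = 3.0      # upper vertical field-of-view bound, degrees
FOV_DOWN = -25.0  # lower vertical field-of-view bound, degrees
W, H = 2048, 64   # width and height of the projected 2D image

def project_point(x, y, z):
    """Map one 3D LiDAR point to (u, v) pixel coordinates plus its range."""
    r = math.sqrt(x * x + y * y + z * z)   # range (depth channel value)
    yaw = math.atan2(y, x)                 # horizontal angle
    pitch = math.asin(z / r)               # vertical angle
    fov = math.radians(FOV_UP - FOV_DOWN)  # total vertical FOV in radians
    u = 0.5 * (1.0 - yaw / math.pi) * W    # yaw -> column index
    v = (1.0 - (pitch - math.radians(FOV_DOWN)) / fov) * H  # pitch -> row
    u = min(W - 1, max(0, int(u)))         # clamp to image bounds
    v = min(H - 1, max(0, int(v)))
    return u, v, r
```

For example, a point ten metres straight ahead at sensor height, `project_point(10.0, 0.0, 0.0)`, lands in the centre column (u = 1024) near the top rows of the image, with range 10.0. Each projected pixel can then carry several channels (x, y, z, intensity, range), which is the multi-channel 2D image the segmentation network consumes.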
author2: School of Computer Science and Engineering
author_facet: School of Computer Science and Engineering; Xiao, Aoran; Yang, Xiaofei; Lu, Shijian; Guan, Dayan; Huang, Jiaxing
format: Article
author: Xiao, Aoran; Yang, Xiaofei; Lu, Shijian; Guan, Dayan; Huang, Jiaxing
author_sort: Xiao, Aoran
title: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
title_short: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
title_full: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
title_fullStr: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
title_full_unstemmed: FPS-Net: a convolutional fusion network for large-scale LiDAR point cloud segmentation
title_sort: fps-net: a convolutional fusion network for large-scale lidar point cloud segmentation
publishDate: 2022
url: https://hdl.handle.net/10356/162039
_version_: 1746219674573471744