SpSequenceNet: Semantic Segmentation Network on 4D Point Clouds
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/143545
Institution: Nanyang Technological University
Summary: Point clouds are useful in many applications such as autonomous driving and robotics, as they provide natural 3D information of the surrounding environments. While there is extensive research on 3D point clouds, scene understanding on 4D point clouds, i.e., a series of consecutive 3D point cloud frames, is an emerging and still under-investigated topic. With 4D point clouds (3D point cloud videos), robotic systems could enhance their robustness by leveraging the temporal information from previous frames. However, existing semantic segmentation methods on 4D point clouds suffer from low precision due to the spatial and temporal information loss in their network structures. In this paper, we propose SpSequenceNet to address this problem. The network is designed based on 3D sparse convolution and includes two novel modules, a cross-frame global attention module and a cross-frame local interpolation module, to capture spatial and temporal information in 4D point clouds. We conduct extensive experiments on SemanticKITTI and achieve a state-of-the-art result of 43.1% mIoU, which is 1.5% higher than the previous best approach.
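To illustrate the general idea of conditioning the current frame on the previous one, below is a minimal PyTorch sketch of a cross-frame global attention step, assuming it acts as a channel-wise gate computed from globally pooled previous-frame features. The class name, layer sizes, and mean-pooling choice are assumptions made for illustration; the module described in the paper may be structured differently.

```python
import torch
import torch.nn as nn


class CrossFrameGlobalAttention(nn.Module):
    """Hypothetical sketch: gate current-frame features with a channel-wise
    attention vector derived from the previous frame. Illustration only,
    not the authors' implementation."""

    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),  # attention weights in (0, 1)
        )

    def forward(self, prev_feats: torch.Tensor, curr_feats: torch.Tensor) -> torch.Tensor:
        # prev_feats, curr_feats: (N_points, C) per-point/voxel features.
        # Summarize the previous frame globally (mean over points).
        global_prev = prev_feats.mean(dim=0, keepdim=True)  # (1, C)
        attn = self.mlp(global_prev)                         # (1, C)
        # Broadcast the channel-wise gate over the current frame's features.
        return curr_feats * attn


# Toy usage with random features for two consecutive frames.
if __name__ == "__main__":
    module = CrossFrameGlobalAttention(channels=64)
    prev = torch.randn(1000, 64)   # previous-frame features
    curr = torch.randn(1200, 64)   # current-frame features
    out = module(prev, curr)
    print(out.shape)               # torch.Size([1200, 64])
```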