Self-parameterization based multi-resolution mesh convolution networks
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/169922
Institution: Nanyang Technological University
Summary: This paper addresses the challenges of designing mesh convolutional neural networks for 3D mesh dense prediction. While deep learning has achieved remarkable success in image dense prediction tasks, directly applying or extending these methods to irregular graph data, such as 3D surface meshes, is nontrivial: the non-uniform element distribution and irregular connectivity of surface meshes make it difficult to adapt downsampling, upsampling, and convolution operations. In addition, commonly used multi-resolution networks require repeated high-to-low and then low-to-high processing to recover rich, high-resolution representations. To address these challenges, this paper proposes a self-parameterization-based multi-resolution convolution network that extends existing image dense prediction architectures to 3D meshes. The novelty of our approach lies in two key aspects. First, we construct a multi-resolution mesh pyramid directly from the high-resolution input data and propose area-aware mesh downsampling/upsampling operations that use sequential bijective inter-surface mappings between different mesh resolutions. The inter-surface mapping redefines the mesh rather than reshaping it, and thus avoids introducing unnecessary errors. Second, we maintain the high-resolution representation throughout the multi-resolution convolution network, enabling multi-scale fusions that exchange information across parallel multi-resolution subnetworks, rather than through serial connections of high-to-low resolution subnetworks. These features differentiate our approach from most existing mesh convolution networks and enable more accurate mesh dense predictions, as confirmed by experiments.
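To make the area-aware downsampling/upsampling and multi-scale fusion ideas concrete, the following is a minimal sketch, assuming a precomputed fine-to-coarse vertex correspondence and per-vertex area weights. The function names and the use of PyTorch are illustrative assumptions, not the authors' implementation; the paper's bijective inter-surface mapping is richer than the plain vertex assignment used here.

```python
# Hypothetical sketch (not the authors' code): area-weighted pooling/unpooling
# between two mesh resolutions, given a precomputed fine-to-coarse vertex map.
import torch

def area_weighted_pool(feat_fine, fine_to_coarse, vertex_area, num_coarse):
    """Downsample per-vertex features by area-weighted averaging.

    feat_fine:      (V_fine, C) features on the fine mesh
    fine_to_coarse: (V_fine,)   coarse-vertex index each fine vertex maps to
    vertex_area:    (V_fine,)   area weight associated with each fine vertex
    num_coarse:     number of vertices on the coarse mesh
    """
    C = feat_fine.shape[1]
    weighted = feat_fine * vertex_area.unsqueeze(1)                   # (V_fine, C)
    pooled = torch.zeros(num_coarse, C,
                         dtype=feat_fine.dtype, device=feat_fine.device)
    pooled.index_add_(0, fine_to_coarse, weighted)                    # sum weighted features
    area_sum = torch.zeros(num_coarse,
                           dtype=feat_fine.dtype, device=feat_fine.device)
    area_sum.index_add_(0, fine_to_coarse, vertex_area)               # sum area weights
    return pooled / area_sum.clamp(min=1e-8).unsqueeze(1)             # area-weighted mean

def unpool(feat_coarse, fine_to_coarse):
    """Upsample by copying each coarse-vertex feature back to its fine vertices."""
    return feat_coarse[fine_to_coarse]

def fuse(feat_fine, feat_coarse, fine_to_coarse, vertex_area):
    """Exchange information between two parallel resolutions (multi-scale fusion)."""
    fused_fine = feat_fine + unpool(feat_coarse, fine_to_coarse)
    fused_coarse = feat_coarse + area_weighted_pool(
        feat_fine, fine_to_coarse, vertex_area, feat_coarse.shape[0])
    return fused_fine, fused_coarse
```

In a full network of this kind, such pooling/unpooling would be applied at every fusion stage so that the high-resolution branch is preserved throughout, in the spirit of the parallel multi-resolution subnetworks described in the summary.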