Real-time volume rendering with octree-based implicit surface representation

Bibliographic Details
Main Authors: Li, Jiaze, Zhang, Luo, Hu, Jiangbei, Zhang, Zhebin, Sun, Hongyu, Song, Gaochao, He, Ying
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/179280
Institution: Nanyang Technological University
Description
Summary: Recent breakthroughs in neural radiance fields have significantly advanced the field of novel view synthesis and 3D reconstruction from multi-view images. However, the prevalent neural volume rendering techniques often suffer from long rendering time and require extensive network training. To address these limitations, recent initiatives have explored explicit voxel representations of scenes to expedite training. Yet, they often fall short in delivering accurate geometric reconstructions due to a lack of effective 3D representation. In this paper, we propose an octree-based approach for the reconstruction of implicit surfaces from multi-view images. Leveraging an explicit, network-free data structure, our method substantially increases rendering speed, achieving real-time performance. Moreover, our reconstruction technique yields surfaces with quality comparable to state-of-the-art network-based learning methods. The source code and data can be downloaded from https://github.com/LaoChui999/Octree-VolSDF.
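
To give a concrete sense of the core idea in the abstract, the following is a minimal sketch (not the authors' implementation; see the linked repository for that) of an explicit, network-free octree that stores signed-distance samples at leaf corners and answers SDF queries by trilinear interpolation. All names (`OctreeSDF`, `sphere_sdf`, `max_depth`) and the sphere SDF used in place of values optimized from multi-view images are illustrative assumptions.

```python
# Hypothetical sketch: adaptive octree over [-1, 1]^3 storing SDF samples at
# leaf corners, queried without any neural network. The analytic sphere SDF
# stands in for distance values that would be fitted from multi-view images.
import numpy as np

def sphere_sdf(p, r=0.5):
    """Signed distance to a sphere of radius r centered at the origin."""
    return float(np.linalg.norm(p)) - r

class Node:
    def __init__(self, origin, size, depth):
        self.origin = np.asarray(origin, float)  # min corner of the cell
        self.size = size                         # edge length of the cell
        self.depth = depth
        self.children = None                     # 8 children, or None for a leaf
        self.corner_vals = None                  # 8 SDF samples, leaves only

class OctreeSDF:
    def __init__(self, sdf, max_depth=5):
        self.root = Node([-1.0, -1.0, -1.0], 2.0, 0)
        self._build(self.root, sdf, max_depth)

    def _corners(self, node):
        # Corner order matches index 4*i + 2*j + k along x, y, z.
        return [node.origin + node.size * np.array([i, j, k])
                for i in (0, 1) for j in (0, 1) for k in (0, 1)]

    def _build(self, node, sdf, max_depth):
        vals = [sdf(c) for c in self._corners(node)]
        # Refine only cells the surface may cross (|sdf| small relative to
        # the cell diagonal); cells far from the surface stay coarse, which
        # is what keeps the explicit structure sparse.
        diag = node.size * np.sqrt(3)
        if node.depth >= max_depth or min(abs(v) for v in vals) > diag:
            node.corner_vals = vals
            return
        half = node.size / 2
        node.children = []
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    child = Node(node.origin + half * np.array([i, j, k]),
                                 half, node.depth + 1)
                    self._build(child, sdf, max_depth)
                    node.children.append(child)

    def query(self, p):
        """Descend to the leaf containing p, then trilinearly interpolate."""
        p = np.asarray(p, float)
        node = self.root
        while node.children is not None:
            t = (p - node.origin) / node.size   # local coords in [0, 1]^3
            idx = (int(t[0] >= 0.5) * 4 + int(t[1] >= 0.5) * 2
                   + int(t[2] >= 0.5))
            node = node.children[idx]
        t = (p - node.origin) / node.size
        v = node.corner_vals
        result = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    w = ((t[0] if i else 1 - t[0]) *
                         (t[1] if j else 1 - t[1]) *
                         (t[2] if k else 1 - t[2]))
                    result += w * v[4 * i + 2 * j + k]
        return result
```

Because every query is a short tree descent plus one trilinear interpolation, evaluating the surface is cheap compared with a deep-network forward pass, which is the intuition behind the real-time rendering claim in the abstract.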