Indoor scene generation method using radiance fields and super-resolution

Bibliographic Details
Main Author: Yang, Yida
Other Authors: Liu Ziwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175309
Institution: Nanyang Technological University
Description
Summary: Indoor scene generation in the digital realm has garnered significant attention within the computer vision domain, offering applications ranging from architectural visualization to virtual reality experiences and gaming environments. Traditional methods that rely on manual 3D modeling are time-consuming and lack scalability. Recent advances, particularly the introduction of Neural Radiance Fields (NeRF), have shown promise in representing indoor scenes comprehensively and synthesizing novel views. This Final Year Project (FYP) proposes a method that combines NeRF-based scene generation with single-image super-resolution. By leveraging a generative adversarial network (GAN) built on a radiance field representation and employing convolutional neural networks (CNNs) for super-resolution, the method aims to enhance the realism and resolution of generated indoor scenes. Experimental results demonstrate improvements over baseline models, although remaining issues with consistency are noted and discussed.
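
To make the two-stage pipeline described in the abstract concrete, the following is a minimal, illustrative sketch, not the project's actual code: a GAN-style radiance-field generator renders a low-resolution indoor view from a latent code and camera pose, and a CNN super-resolution module then upsamples it. The class names (ToyRadianceFieldGenerator, SRCNNUpsampler), layer choices, and resolutions are assumptions made for illustration only.

```python
# Illustrative sketch of the "radiance-field generator + SR CNN" idea.
# All module names and architectures here are hypothetical placeholders,
# not the FYP's implementation.
import torch
import torch.nn as nn

class ToyRadianceFieldGenerator(nn.Module):
    """Stand-in for a GAN-trained radiance-field generator: maps a latent
    code and a camera pose to a low-resolution RGB rendering (here a simple
    deconvolution stack instead of actual volume rendering)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 12, 256 * 4 * 4),  # 12 = flattened 3x4 camera pose
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 4x4 -> 64x64 RGB
        )

    def forward(self, z, pose):
        return self.net(torch.cat([z, pose.flatten(1)], dim=1))

class SRCNNUpsampler(nn.Module):
    """SRCNN-style super-resolution CNN: bicubic upsampling followed by a
    small convolutional refinement network."""
    def __init__(self, scale=4):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, x):
        return self.refine(self.up(x))

if __name__ == "__main__":
    gen, sr = ToyRadianceFieldGenerator(), SRCNNUpsampler(scale=4)
    z = torch.randn(1, 128)              # latent code for the generator
    pose = torch.eye(3, 4).unsqueeze(0)  # dummy 3x4 camera-to-world pose
    low_res = gen(z, pose)               # (1, 3, 64, 64) rendered view
    high_res = sr(low_res)               # (1, 3, 256, 256) super-resolved view
    print(low_res.shape, high_res.shape)
```

In this sketch the super-resolution stage operates per rendered view, which mirrors the consistency concern raised in the abstract: upsampling each view independently does not by itself guarantee agreement across views of the same scene.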