3D scene graph generation from synthesized 3D indoor scenes

Bibliographic Details
Main Author: Qin, Huaiyuan
Other Authors: Jiang, Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access:https://hdl.handle.net/10356/167002
Institution: Nanyang Technological University
Description
Summary: Scene understanding in 3D vision has extended beyond object instance information to include high-level scene information, such as relationships between object instances. Scene graphs are a common representation of object relationships, but the long-tailed distribution of relationship types presents a challenge for accurate scene graph generation. Existing 3D indoor datasets focus mainly on object instance class and segmentation labels, making them difficult to use for scene-graph-related tasks. In this FYP, we propose a synthesized 3D indoor dataset, collecting data from virtual environments with both instance-level and predicate-level annotations. We also introduce a post-processing calibration method to mitigate the bias caused by the long-tailed distribution in 3D scene graphs. Our experimental results show that the proposed method significantly improves the performance of the baseline model without changing its weights. We evaluate the proposed dataset and benchmark it on two 3D scene graph generation tasks, SGCls and PredCls. This project contributes to research in 3D vision and can benefit the fields of AR/VR and robotics.
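
The record does not specify the calibration method. One common form of post-processing calibration for long-tailed class distributions is logit adjustment, which rescales a trained model's output scores by the training-set class priors without touching the model weights, matching the abstract's claim that the baseline is improved "without changing its weights". The sketch below is a minimal illustration under that assumption; the function name calibrate_predicate_logits, the tau parameter, and the toy frequencies are hypothetical and not taken from the project.

import numpy as np

def calibrate_predicate_logits(logits, class_counts, tau=1.0):
    # Post-hoc logit adjustment: subtract tau * log(prior) so that
    # frequent ("head") predicates lose the advantage their training
    # frequency gave them. The baseline model itself is untouched.
    #   logits:       (N, C) raw predicate scores from the baseline model
    #   class_counts: (C,)   predicate frequencies in the training set
    #   tau:          strength of the correction (hypothetical knob)
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior + 1e-12)

# Toy example: the head class (count 9000) wins on raw logits,
# but calibration lets the rare third class win instead.
logits = np.array([[2.0, 1.8, 1.6]])
counts = np.array([9000.0, 800.0, 200.0])
print(logits.argmax(axis=1))                                    # [0]
print(calibrate_predicate_logits(logits, counts).argmax(axis=1))  # [2]

Whether the FYP uses this exact adjustment or a different post-processing calibration is not stated in the record; the sketch only illustrates the general debiasing idea.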