3D scene graph generation from synthetic 3D indoor scenes
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167002
Institution: Nanyang Technological University
Abstract: Scene understanding in 3D vision has extended beyond object instance information to include high-level scene information, such as relationships between object instances. Scene graphs are a common representation of object relationships, but the long-tailed distribution of relationship types presents a challenge for accurate scene graph generation. Existing 3D indoor datasets focus mainly on object instance class and segmentation labels, making them difficult to use for scene-graph-related tasks. In this FYP, we propose a synthetic 3D indoor dataset that collects data from virtual environments with both instance-level and predicate-level annotations. We also introduce a post-processing calibration method to handle the bias from the long-tailed distribution in 3D scene graphs. Our experimental results show that the proposed method significantly improves the performance of the baseline model without changing its weights. We evaluate the proposed dataset and benchmark it on two 3D scene graph generation tasks, SGCls and PredCls. This project contributes to research in 3D vision and can benefit the fields of AR/VR and robotics.
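The record does not describe the calibration method itself, only that it post-processes a frozen baseline model to counter long-tail bias. As an illustration of that general idea, below is a minimal sketch assuming a simple logit-adjustment scheme, where predictions are rescored with training-set class priors at inference time; the names `calibrate_logits`, `predicate_counts`, and `tau` are hypothetical and not taken from the project.

```python
# Minimal sketch of post-hoc logit adjustment for long-tailed predicate
# classification. This is NOT the project's documented method; it only
# illustrates debiasing predictions without changing model weights.

import numpy as np

def calibrate_logits(logits: np.ndarray,
                     predicate_counts: np.ndarray,
                     tau: float = 1.0) -> np.ndarray:
    """Subtract scaled log class priors from raw logits.

    logits:            (N, C) raw scores from a frozen scene graph model.
    predicate_counts:  (C,) training-set frequency of each predicate class.
    tau:               calibration strength; tau=0 leaves logits unchanged.
    """
    priors = predicate_counts / predicate_counts.sum()
    # Frequent (head) predicates receive a larger penalty, so rare (tail)
    # predicates are no longer drowned out at argmax time.
    return logits - tau * np.log(priors + 1e-12)

# Hypothetical usage: rescore predicate predictions for one scene.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 8))  # 5 object pairs, 8 predicate classes
counts = np.array([5000, 3000, 1000, 400, 200, 80, 30, 10], dtype=float)
adjusted = calibrate_logits(logits, counts, tau=1.0)
predicted = adjusted.argmax(axis=1)  # calibrated predicate per object pair
```

Because the adjustment is applied purely at inference, it is consistent with the abstract's claim of improving the baseline "without changing its weights".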