Semantic mapping for articulated objects

Semantic mapping has advanced greatly since its inception as a research field and can now identify poses and segment objects. As an extension of the semantic segmentation problem, identifying the joints of an articulated object within an image remains largely unexplored. In this project, our main contribution is to retrain a semantic segmentation network on a smaller subset of items that can be considered prismatic or revolute. Using a DeepLabv3-Inception network with a ResNet101 backbone, we report a best pixelwise accuracy of 0.931 and an mIOU of 0.606 while training on two object classes from the ADE20K dataset. This preliminary result shows the viability of such an approach; future work might entail exploring different loss functions, different neural network architectures, and expanding the definition to encompass more items from the ADE20K dataset.
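The record itself contains no code. As an illustration only, the sketch below shows one common way such an experiment could be set up in PyTorch: torchvision's deeplabv3_resnet101 model is adapted to a two-class head and evaluated with pixelwise accuracy and mIOU computed from a confusion matrix. Every name, class choice, and hyperparameter here is an assumption made for illustration and is not taken from the thesis.

# Hypothetical sketch: fine-tune torchvision's DeepLabv3-ResNet101 on a
# two-class subset and report pixelwise accuracy / mIOU.
# Not the thesis's actual code or configuration.
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 2  # assumed two-class subset; the record does not name the classes

# Load a pretrained DeepLabv3 with a ResNet101 backbone and replace its
# classifier head so it predicts NUM_CLASSES labels per pixel.
model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True)
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
if getattr(model, "aux_classifier", None) is not None:
    model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 marks unlabeled pixels
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, targets):
    """One optimisation step on a batch of images and per-pixel labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]        # (N, NUM_CLASSES, H, W)
    loss = criterion(logits, targets)    # targets: (N, H, W) int64 class ids
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def evaluate(loader):
    """Accumulate a confusion matrix and compute pixel accuracy and mIOU."""
    model.eval()
    conf = torch.zeros(NUM_CLASSES, NUM_CLASSES, dtype=torch.long)
    for images, targets in loader:
        preds = model(images)["out"].argmax(dim=1)
        mask = targets != 255
        idx = targets[mask] * NUM_CLASSES + preds[mask]
        conf += torch.bincount(idx, minlength=NUM_CLASSES ** 2).reshape(
            NUM_CLASSES, NUM_CLASSES)
    pixel_acc = conf.diag().sum().item() / conf.sum().item()
    iou = conf.diag().float() / (conf.sum(0) + conf.sum(1) - conf.diag()).float()
    return pixel_acc, iou.mean().item()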

Bibliographic Details
Main Author: Luar, Shui Song
Other Authors: Justin Dauwels (School of Electrical and Electronic Engineering)
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2019
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Computer graphics
Online Access: https://hdl.handle.net/10356/136536
Institution: Nanyang Technological University