3D human reconstruction
The human brain's ability to transform a two-dimensional image into a vivid three-dimensional representation of a person highlights its remarkable capabilities. Nonetheless, translating this extraordinary human capacity into machine learning models, specifically deep...
Main Author: Gucon, Nailah Ginylle Pabilonia
Other Authors: Lin Weisi
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/171980
Institution: Nanyang Technological University
Language: English
id: sg-ntu-dr.10356-171980
record_format: dspace
spelling: sg-ntu-dr.10356-171980 (updated 2023-11-24T15:36:56Z). 3D human reconstruction. Gucon, Nailah Ginylle Pabilonia; Lin Weisi, School of Computer Science and Engineering, WSLin@ntu.edu.sg. Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence.
Degree: Bachelor of Engineering (Computer Science)
Deposited: 2023-11-20T02:58:07Z
Date issued: 2023
Type: Final Year Project (FYP)
Citation: Gucon, N. G. P. (2023). 3D human reconstruction. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171980
Language: en
Project code: SCSE22-0803
File format: application/pdf
Publisher: Nanyang Technological University
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
spellingShingle: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Gucon, Nailah Ginylle Pabilonia; 3D human reconstruction
description:
The human brain's ability to transform a two-dimensional image into a vivid three-dimensional representation of a person highlights its remarkable capabilities. Nonetheless, translating this extraordinary human capacity into machine learning models, specifically deep neural networks, for the purpose of "3D Human Reconstruction from a Single Image" poses a substantial and intricate challenge.
While recent developments in 3D human reconstruction models have made notable strides towards producing detailed full-body representations from single images, a substantial gap remains in the accuracy of the hand representations in their outputs. This research endeavours to bridge this gap by introducing a novel 3D hand reconstruction workflow (3DHRW) to a pioneering 3D human reconstruction model, the Pixel-aligned Implicit Function (PIFu) model [1]. The two are integrated through an application designed to harness their capabilities and facilitate alignment of the hand meshes with the body mesh. Additionally, this study explores the development of automatic hand alignment techniques, offering a foundation for future experimentation. The evaluation results demonstrate the effectiveness of the PIFu and 3DHRW integration, both quantitatively and qualitatively.
Moreover, the versatility of 3D human reconstruction models spans various domains, including virtual reality, robot navigation, and game production. In this project, a possible real-life utilisation of PIFu is explored through the development of a novel automated character rigging workflow, with the aim of making game development more accessible to a wider audience, regardless of prior experience.
author2: Lin Weisi
author_facet: Lin Weisi; Gucon, Nailah Ginylle Pabilonia
format: Final Year Project
author: Gucon, Nailah Ginylle Pabilonia
author_sort: Gucon, Nailah Ginylle Pabilonia
title: 3D human reconstruction
title_short: 3D human reconstruction
title_full: 3D human reconstruction
title_fullStr: 3D human reconstruction
title_full_unstemmed: 3D human reconstruction
title_sort: 3d human reconstruction
publisher: Nanyang Technological University
publishDate: 2023
url: https://hdl.handle.net/10356/171980
_version_: 1783955523751641088