3D human reconstruction


Saved in:
Bibliographic Details
Main Author: Gucon, Nailah Ginylle Pabilonia
Other Authors: Lin Weisi
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access:https://hdl.handle.net/10356/171980
Institution: Nanyang Technological University
Description
Summary: The human brain's ability to transform a two-dimensional image into a vivid three-dimensional representation of a person is remarkable. Nonetheless, translating this capacity into machine learning models, specifically deep neural networks, for the purpose of “3D Human Reconstruction from a Single Image”, poses a substantial and intricate challenge. While recent 3D human reconstruction models have made notable strides towards producing detailed full-body representations from single images, their outputs still lack accurate hand representations. This research bridges that gap by introducing a novel 3D hand reconstruction workflow (3DHRW) alongside a pioneering 3D human reconstruction model, the Pixel-aligned Implicit Function (PIFu) model [1]. The two elements are integrated through an application designed to harness their capabilities and facilitate alignment of the hand mesh with the body mesh. Additionally, this study explores automatic hand alignment techniques, offering a foundation for future experimentation. The evaluation results demonstrate the effectiveness of the PIFu and 3DHRW integration, both quantitatively and qualitatively. Moreover, the versatility of 3D human reconstruction models spans various domains, including virtual reality, robot navigation, and game production. In this project, a possible real-life utilisation of PIFu is explored through the development of a novel automated character rigging workflow, with the aim of making game development accessible to a wider audience, regardless of prior experience.
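For readers unfamiliar with the PIFu model referenced above, the core idea of a pixel-aligned implicit function can be sketched in a few lines: a 3D query point is projected into the image, the image feature at that pixel is sampled, and a small network maps the feature plus the point's depth to an occupancy value. The sketch below is illustrative only and is not the thesis's implementation; the orthographic projection, the nearest-neighbour sampling (PIFu itself uses bilinear sampling), and the random MLP weights are all simplifying assumptions.

```python
import numpy as np

def project_orthographic(points):
    """Orthographically project 3D points (N, 3) in normalized [-1, 1]
    camera space: the xy coordinates become pixel coordinates, z is depth."""
    return points[:, :2], points[:, 2]

def sample_features(feature_map, uv):
    """Nearest-neighbour sampling of a (C, H, W) feature map at normalized
    coordinates uv in [-1, 1]. PIFu uses bilinear interpolation instead."""
    C, H, W = feature_map.shape
    cols = np.clip(((uv[:, 0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    rows = np.clip(((uv[:, 1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    return feature_map[:, rows, cols].T  # (N, C)

def implicit_function(features, depth, w1, w2):
    """Toy MLP f(F(x), z) -> occupancy in (0, 1); real PIFu trains a deeper
    network, whereas the weights here are random placeholders."""
    x = np.concatenate([features, depth[:, None]], axis=1)
    h = np.maximum(x @ w1, 0.0)                # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))     # sigmoid occupancy

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))        # stand-in image feature map
pts = rng.uniform(-1, 1, size=(5, 3))          # 3D query points
uv, z = project_orthographic(pts)
occ = implicit_function(sample_features(feat, uv), z,
                        rng.standard_normal((9, 4)), rng.standard_normal(4))
print(occ.shape)  # one occupancy value per query point: (5,)
```

In a full reconstruction pipeline, this query would be evaluated on a dense 3D grid and the surface extracted at the 0.5 occupancy level (e.g. via marching cubes).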