3D face mapping via Kinect

Bibliographic Details
Main Author: Ng, Shangru.
Other Authors: Seah Hock Soon
Format: Final Year Project
Language: English
Published: 2013
Online Access:http://hdl.handle.net/10356/51962
Institution: Nanyang Technological University
Description
Summary: When modeling 3D faces from 2D images, many different views of the same stationary face are required, and 2D images are more susceptible to interference from the environment. The Kinect sensor has been shown to provide reliable 3D information that can aid 3D face modeling. This project aims to demonstrate how Kinect data can be used in conjunction with a generic face model to achieve an automated 3D face modeling process from 3D data. The project explored several methods of obtaining data from the Kinect sensor, as well as of modifying the data of a 3D model through scripting. A framework was developed in which the user can generate a directly mapped and transformed 3D plane model from feature point data obtained from the sensor.
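
To illustrate the kind of feature-point mapping the summary describes, the sketch below estimates a similarity transform (scale, rotation, translation) that aligns landmark points on a generic model to corresponding feature points captured from a depth sensor, then applies that transform to the model's vertices. This is a minimal illustration under assumptions: the Umeyama-style alignment, the NumPy implementation, the function names, and the simulated landmark data are all stand-ins, not the method actually scripted in the thesis.

```python
import numpy as np


def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with dst ~ s*R*src + t
    (Umeyama method), from corresponding 3D feature points of shape (N, 3)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    cov = dst_c.T @ src_c / len(src)          # cross-covariance of centred sets
    U, D, Vt = np.linalg.svd(cov)

    S = np.eye(3)                             # reflection guard: keep det(R) = +1
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)   # variance of the source points
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t


def apply_transform(points, s, R, t):
    """Apply the estimated similarity transform to every vertex of a model."""
    return s * points @ R.T + t


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in landmark positions on a generic face model (e.g. eye corners,
    # nose tip, mouth corners); a real pipeline would read these from the mesh.
    model_landmarks = rng.normal(size=(6, 3))

    # Simulated sensor feature points: the same landmarks under an unknown
    # scale/rotation/translation, plus a little measurement noise.
    angle = np.deg2rad(20.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    sensor_landmarks = 1.7 * model_landmarks @ R_true.T + np.array([0.1, -0.3, 2.0])
    sensor_landmarks += rng.normal(scale=1e-3, size=sensor_landmarks.shape)

    s, R, t = similarity_transform(model_landmarks, sensor_landmarks)
    fitted = apply_transform(model_landmarks, s, R, t)
    print("estimated scale:", round(float(s), 3))
    print("mean landmark error:", np.linalg.norm(fitted - sensor_landmarks, axis=1).mean())
```

In the project itself, the transform would presumably be applied through the scripting interface of the 3D modeling tool rather than raw NumPy, but the alignment of model landmarks to sensor feature points would follow the same pattern.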