3D face mapping via Kinect

Full Description

Bibliographic Details
Main Author: Ng, Shangru.
Other Authors: Seah Hock Soon
Format: Final Year Project
Language: English
Published: 2013
Subjects:
Online Access: http://hdl.handle.net/10356/51962
Institution: Nanyang Technological University
Description
Summary: When modeling 3D faces from 2D images, many different views of the same stationary face are required, and 2D images are more susceptible to interference from the environment. The Kinect sensor has been shown to provide reliable 3D information that can aid 3D face modeling. This project aims to demonstrate how Kinect data can be used in conjunction with a generic face model to achieve an automated 3D face modeling process from 3D data. The project explored several methods of obtaining data from the Kinect sensor, as well as modifying the data of a 3D model through scripting. A framework was developed in which the user can generate a directly mapped and transformed 3D plane model from feature-point data obtained from the sensor.
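
The record does not say how the mapping itself was implemented. Purely as an illustration of the kind of step the summary describes, the Python sketch below fits a generic model to Kinect feature points by estimating a similarity transform (scale, rotation, translation) from landmark correspondences and applying it to all of the model's vertices. The function names, the landmark correspondence, and the assumption that the sensor's feature points are already available as NumPy arrays are hypothetical, not taken from the project.

import numpy as np

def similarity_transform(src, dst):
    # Estimate scale s, rotation R, translation t so that s * R @ src_i + t
    # approximates dst_i (standard Umeyama/Procrustes alignment).
    # src, dst: (N, 3) arrays of corresponding 3D landmarks, e.g. the generic
    # model's landmark vertices and the Kinect feature points.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - src_c, dst - dst_c
    U, S, Vt = np.linalg.svd(dst0.T @ src0)        # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * np.array([1.0, 1.0, d])).sum() / (src0 ** 2).sum()
    t = dst_c - s * R @ src_c
    return s, R, t

def map_model_to_points(model_vertices, model_landmarks, feature_points):
    # Apply the landmark-derived transform to every vertex of the generic
    # model, bringing the whole model into the sensor's coordinate frame.
    s, R, t = similarity_transform(model_landmarks, feature_points)
    return (s * (R @ model_vertices.T)).T + t

# Hypothetical usage: the .npy files are assumed exports for illustration,
# not files produced by the project.
# verts = map_model_to_points(np.load("model.npy"),
#                             np.load("landmarks.npy"),
#                             np.load("kinect_points.npy"))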