3D face mapping via Kinect

Bibliographic Details
Main Author: Ng, Shangru.
Other Authors: Seah Hock Soon
Format: Final Year Project
Language: English
Published: 2013
Subjects:
Online Access: http://hdl.handle.net/10356/51962
Institution: Nanyang Technological University
Description
Abstract: When modeling 3D faces from 2D images, many different views of the same stationary face are required, and 2D images are more susceptible to interference from the environment. The Kinect sensor has been shown to provide reliable 3D information that can aid 3D face modeling. This project aims to demonstrate how Kinect data can be used in conjunction with a generic face model to achieve an automated 3D face modeling process from 3D data. The project explored several methods of obtaining data from the Kinect sensor, as well as modifying the data of a 3D model by scripting. A framework was achieved whereby the user could generate a directly mapped and transformed 3D plane model using feature point data obtained from the sensor.
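
The record does not include the project's code. As an illustration only, the following minimal sketch shows one way sensor feature points could drive a generic face model: estimating a similarity transform (Procrustes/Umeyama alignment) from a few landmark correspondences and applying it to every vertex of the generic mesh. All function names, landmark coordinates, and the choice of alignment method are assumptions for illustration, not the project's actual framework; real landmark data would come from the Kinect SDK's face-tracking output.

# Minimal sketch (illustrative, not the project's code): align a generic face
# model to 3D feature points such as those a Kinect sensor might report.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst landmarks (both N x 3), Umeyama method."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)              # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                                # rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean           # translation
    return scale, R, t

# Hypothetical landmark correspondences: vertices of the generic face model
# and matching 3D feature points from the sensor (placeholder values).
model_landmarks = np.array([[ 0.0,  0.0,  0.0],   # nose tip
                            [-3.0,  2.0, -1.0],   # left eye corner
                            [ 3.0,  2.0, -1.0],   # right eye corner
                            [ 0.0, -3.0, -0.5]])  # chin
sensor_landmarks = np.array([[0.10, 0.05, 0.80],
                             [0.07, 0.07, 0.81],
                             [0.13, 0.07, 0.81],
                             [0.10, 0.02, 0.80]])

scale, R, t = similarity_transform(model_landmarks, sensor_landmarks)

# Apply the same transform to every vertex of the generic mesh (placeholder mesh).
generic_mesh_vertices = np.random.rand(468, 3)
mapped_vertices = scale * generic_mesh_vertices @ R.T + t
print("Aligned mesh bounds:", mapped_vertices.min(0), mapped_vertices.max(0))

A rigid/similarity alignment like this only positions and scales the generic model; per-feature deformation of the mesh (as suggested by the abstract's "directly mapped and transformed" plane model) would additionally displace individual vertices toward their corresponding sensor points.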