Facial Expression Tracking and Mimicking System (FacET)
Computer vision has gained momentum in recent years because of its invaluable applications in the fields of robotics, animation, and human-computer interaction (HCI). The ability to efficiently track the facial expressions of a person sitting behind the camera has become a s...
Saved in:
Main Authors: | Cua, Derrick T., Lim, Julson R., Robin, Von F., Wakatabe, Sherry Rose S. |
---|---|
Format: | text |
Language: | English |
Published: | Animo Repository 2008 |
Online Access: | https://animorepository.dlsu.edu.ph/etd_bachelors/11906 |
Institution: | De La Salle University |
Language: | English |
id |
oai:animorepository.dlsu.edu.ph:etd_bachelors-12551 |
---|---|
record_format |
eprints |
spelling |
oai:animorepository.dlsu.edu.ph:etd_bachelors-125512021-09-11T03:50:42Z Facial Expression Tracking and Mimicking System (FacET) Cua, Derrick T. Lim, Julson R. Robin, Von F. Wakatabe, Sherry Rose S. Computer vision has gained momentum in recent years because of its invaluable applications in the fields of robotics, animation, and human-computer interaction (HCI). The ability to efficiently track the facial expressions of a person sitting behind the camera has become a significant research goal in making animation look more realistic, giving a robotic head human-like expressiveness, and studying facial muscle movement. Another potential use of this technology is in low-bandwidth videoconferencing and chat applications. The Facial Expression Tracking and Mimicking System (FacET) was developed to further research in this area. The FacET system is designed to track a person's facial features, such as the eyes, eyebrows, and mouth; recognize the facial expressions conveyed by the user; and animate these expressions using a 3D avatar. The system uses a combination of optical flow and intensity information to track key points on the face. Principal Component Analysis (PCA) is then used to classify the tracked points into six basic facial expressions. The system accurately classifies five of the six expressions, namely happy, sad, fear, anger, and surprise, while disgust obtained poor recognition results because the other expression classes tend to overlap with it. 2008-01-01T08:00:00Z text https://animorepository.dlsu.edu.ph/etd_bachelors/11906 Bachelor's Theses English Animo Repository |
institution |
De La Salle University |
building |
De La Salle University Library |
continent |
Asia |
country |
Philippines |
content_provider |
De La Salle University Library |
collection |
DLSU Institutional Repository |
language |
English |
description |
Computer vision has gained momentum in recent years because of its invaluable applications in the fields of robotics, animation, and human-computer interaction (HCI). The ability to efficiently track the facial expressions of a person sitting behind the camera has become a significant research goal in making animation look more realistic, giving a robotic head human-like expressiveness, and studying facial muscle movement. Another potential use of this technology is in low-bandwidth videoconferencing and chat applications.
The Facial Expression Tracking and Mimicking System (FacET) was developed to further research in this area. The FacET system is designed to track a person's facial features, such as the eyes, eyebrows, and mouth; recognize the facial expressions conveyed by the user; and animate these expressions using a 3D avatar. The system uses a combination of optical flow and intensity information to track key points on the face. Principal Component Analysis (PCA) is then used to classify the tracked points into six basic facial expressions. The system accurately classifies five of the six expressions, namely happy, sad, fear, anger, and surprise, while disgust obtained poor recognition results because the other expression classes tend to overlap with it. |
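The PCA classification stage described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis code: the facial-point displacement vectors are synthetic stand-ins for the optical-flow tracking output, PCA is reduced to a single principal component found by power iteration, and nearest-centroid matching in PCA space is an assumed classifier (the abstract states only that PCA is used to classify the points). Only two of the six expressions are shown to keep the sketch short.

```python
import math
import random

random.seed(0)

def mean_vec(rows):
    """Per-dimension mean of a list of equal-length vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def center(rows, mu):
    """Subtract the mean vector from every row."""
    return [[x - m for x, m in zip(r, mu)] for r in rows]

def top_component(rows, iters=200):
    """First principal component of centered rows via power iteration
    (w <- X^T X v, normalized), avoiding any external linear-algebra library."""
    d = len(rows[0])
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        w = [0.0] * d
        for r in rows:
            dot = sum(r[i] * v[i] for i in range(d))
            for i in range(d):
                w[i] += dot * r[i]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def project(r, mu, v):
    """Scalar coordinate of one vector along the principal component."""
    return sum((x - m) * c for x, m, c in zip(r, mu, v))

def make_samples(base, n=20, noise=0.1):
    """Synthetic displacement vectors clustered around a prototype pose."""
    return [[b + random.uniform(-noise, noise) for b in base] for _ in range(n)]

# Hypothetical training data: 4-D point-displacement vectors per expression.
train = {
    "happy":    make_samples([1.0, 0.0, 1.0, 0.0]),
    "surprise": make_samples([-1.0, 0.0, -1.0, 0.0]),
}

all_rows = [r for rows in train.values() for r in rows]
mu = mean_vec(all_rows)
pc = top_component(center(all_rows, mu))

# One centroid per expression in the 1-D PCA space.
centroids = {
    label: sum(project(r, mu, pc) for r in rows) / len(rows)
    for label, rows in train.items()
}

def classify(vec):
    """Label an unseen displacement vector by its nearest centroid in PCA space."""
    p = project(vec, mu, pc)
    return min(centroids, key=lambda lbl: abs(p - centroids[lbl]))

print(classify([0.9, 0.1, 1.1, 0.0]))  # falls in the "happy" cluster
```

Nearest-centroid matching is sign-invariant here: power iteration may converge to either direction of the principal component, but a sign flip moves all centroids and projections together, so the nearest label is unchanged.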
format |
text |
author |
Cua, Derrick T. Lim, Julson R. Robin, Von F. Wakatabe, Sherry Rose S. |
spellingShingle |
Cua, Derrick T. Lim, Julson R. Robin, Von F. Wakatabe, Sherry Rose S. Facial Expression Tracking and mimicking system (FacET) |
author_facet |
Cua, Derrick T. Lim, Julson R. Robin, Von F. Wakatabe, Sherry Rose S. |
author_sort |
Cua, Derrick T. |
title |
Facial Expression Tracking and mimicking system (FacET) |
title_short |
Facial Expression Tracking and mimicking system (FacET) |
title_full |
Facial Expression Tracking and mimicking system (FacET) |
title_fullStr |
Facial Expression Tracking and mimicking system (FacET) |
title_full_unstemmed |
Facial Expression Tracking and mimicking system (FacET) |
title_sort |
facial expression tracking and mimicking system (facet) |
publisher |
Animo Repository |
publishDate |
2008 |
url |
https://animorepository.dlsu.edu.ph/etd_bachelors/11906 |
_version_ |
1712577563534032896 |