Creating realistic laughter through facial expressions for affective embodied conversational agents
Main Author:
Format: text
Language: English
Published: Animo Repository, 2013
Online Access: https://animorepository.dlsu.edu.ph/etd_masteral/4363
Institution: De La Salle University
Summary: Most present-day Affective Embodied Conversational Agents (ECAs) employ several modalities of interaction. They can use verbal modalities such as text and voice to communicate with the user, and non-verbal modalities such as facial expressions and gestures to express themselves. However, these systems lack an important element of communication: laughter. Laughter is an important communicative signal because it serves as feedback in social discourse and as a means of expressing emotion. Research shows that the non-verbal aspect of laughter consists mainly of facial expressions, yet laughter facial expressions in ECAs still lack important details such as eye movement, cheek movement, and wrinkles. Furthermore, laughter is linked to several emotional states such as joy, amusement, and relief, but no prior research has attempted to model synthetic emotional laughter. This research presents an affective laughter facial expression synthesis system for ECAs. The purpose of this system is to allow an ECA to perform realistic and emotionally appropriate laughter during a conversation.
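As a rough illustration of what an emotion-to-facial-expression mapping for laughter might look like (this is a hypothetical sketch, not the method described in the thesis), the snippet below pairs the emotional states named in the abstract (joy, amusement, relief) with FACS-style action units covering cheek raising, lip movement, and eye narrowing, and ramps their intensities over the course of a laugh. All profile values, function names, and the simple triangular envelope are assumptions made for illustration only.

```python
from dataclasses import dataclass

# Hypothetical mapping from laughter-related emotions to FACS action-unit
# intensities in [0.0, 1.0]. AU6 = cheek raiser, AU12 = lip corner puller,
# AU25 = lips part, AU26 = jaw drop, AU43 = eyes closed. The specific
# values are illustrative assumptions, not parameters from the thesis.
EMOTION_AU_PROFILES = {
    "joy":       {"AU6": 0.9, "AU12": 1.0, "AU25": 0.8, "AU26": 0.6, "AU43": 0.3},
    "amusement": {"AU6": 0.7, "AU12": 0.8, "AU25": 0.6, "AU26": 0.4, "AU43": 0.2},
    "relief":    {"AU6": 0.4, "AU12": 0.5, "AU25": 0.3, "AU26": 0.2, "AU43": 0.1},
}


@dataclass
class LaughterFrame:
    """One keyframe of a synthesized laughter facial expression."""
    time_s: float
    action_units: dict  # AU code -> normalized intensity


def synthesize_laughter(emotion: str, duration_s: float = 2.0,
                        fps: int = 25) -> list[LaughterFrame]:
    """Generate keyframes whose AU intensities rise and fall over the laugh.

    A simple attack/decay envelope stands in for the temporal dynamics of a
    laugh; a real system would drive this from audio or an emotion model.
    """
    profile = EMOTION_AU_PROFILES[emotion]
    frames = []
    n_frames = int(duration_s * fps)
    for i in range(n_frames):
        t = i / fps
        # Triangular envelope: intensity peaks at the midpoint of the laugh.
        envelope = 1.0 - abs(2.0 * t / duration_s - 1.0)
        frames.append(LaughterFrame(
            time_s=t,
            action_units={au: round(v * envelope, 3) for au, v in profile.items()},
        ))
    return frames


if __name__ == "__main__":
    # Print every 12th keyframe of an "amusement" laugh to show the envelope.
    for frame in synthesize_laughter("amusement")[::12]:
        print(f"{frame.time_s:.2f}s {frame.action_units}")
```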