Modeling continuous visual speech using boosted viseme models


Bibliographic Details
Main Authors: Dong, Liang, Foo, Say Wei, Yong, Lian
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2009
Subjects:
Online Access:https://hdl.handle.net/10356/91029
http://hdl.handle.net/10220/6002
Institution: Nanyang Technological University
Description
Summary:In this paper, a novel connected-viseme approach for modeling continuous visual speech is presented. The approach adopts AdaBoost-HMMs as the viseme models, and continuous visual speech is modeled by connecting the viseme models using the level building algorithm. The approach is applied to identifying words and phrases in visual speech. The recognition results indicate that the proposed method outperforms the conventional approach.
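The abstract mentions connecting viseme models with the level building algorithm. As an illustration only (not the authors' implementation), the following is a minimal sketch of generic level building: a dynamic program that segments T frames into a best-scoring sequence of units, where a hypothetical `seg_score(v, s, t)` stands in for the log-likelihood that a viseme model `v` (here it would be an AdaBoost-HMM) produced frames `s..t-1`.

```python
def level_building(seg_score, T, visemes, max_levels):
    """Segment T frames into up to max_levels viseme units.

    seg_score(v, s, t): assumed scoring callback returning the
    log-likelihood that viseme model v generated frames [s, t).
    Returns (best_log_score, [(viseme, start, end), ...]).
    """
    NEG = float("-inf")
    # best[l][t] = best score covering frames [0, t) with exactly l units
    best = [[NEG] * (T + 1) for _ in range(max_levels + 1)]
    back = [[None] * (T + 1) for _ in range(max_levels + 1)]
    best[0][0] = 0.0
    for l in range(1, max_levels + 1):          # level = number of units used
        for t in range(1, T + 1):               # end frame of the l-th unit
            for s in range(l - 1, t):           # start frame of the l-th unit
                if best[l - 1][s] == NEG:
                    continue
                for v in visemes:
                    sc = best[l - 1][s] + seg_score(v, s, t)
                    if sc > best[l][t]:
                        best[l][t] = sc
                        back[l][t] = (s, v)
    # choose the level count whose full-coverage score is highest
    l_star = max(range(1, max_levels + 1), key=lambda l: best[l][T])
    seq, l, t = [], l_star, T
    while l > 0:                                # trace the segmentation back
        s, v = back[l][t]
        seq.append((v, s, t))
        l, t = l - 1, s
    return best[l_star][T], list(reversed(seq))


# Toy usage: 6 frames, the first 3 matching viseme 'a', the last 3 'b'.
labels = ["a", "a", "a", "b", "b", "b"]
score_fn = lambda v, s, t: sum(0.0 if labels[i] == v else -1.0
                               for i in range(s, t))
score, seq = level_building(score_fn, 6, ["a", "b"], 3)
# recovers the two-unit segmentation ('a', 0, 3), ('b', 3, 6)
```

In the paper's setting each unit score would come from evaluating a boosted HMM on the frame segment, and the word/phrase vocabulary would constrain which viseme sequences are admissible; the sketch omits both for brevity.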