Cross-speaker viseme mapping using hidden Markov models
Format: Conference or Workshop Item
Language: English
Published: 2009
Online Access: https://hdl.handle.net/10356/91219 ; http://hdl.handle.net/10220/5953
Institution: Nanyang Technological University
Summary: In this paper, a method of mapping visual speech between different speakers is proposed. The approach uses hidden Markov models (HMMs) to model the basic element of visual speech, the viseme. Mapping terms are applied to associate the state chains decoded for the visemes produced by different speakers. The HMMs configured in this way are trained with Baum-Welch estimation and are then used to generate new visemes. Experiments are conducted in which the visemes produced by several speakers are mapped to a destination speaker. The results show that the proposed approach provides good accuracy and continuity in the mapped visemes.
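The core idea in the summary can be sketched in code: decode the hidden-state chain of a source speaker's viseme HMM (here with a plain Viterbi decoder) and translate each state through an association table for the destination speaker. This is a minimal illustration, not the paper's method: all HMM parameters and the `state_map` table below are hypothetical toy values, and the paper estimates its mapping terms jointly during Baum-Welch training, which this sketch omits.

```python
import math

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely hidden-state chain for a discrete observation sequence."""
    n_states = len(start_p)
    # Log-probability of the best path ending in each state at the first frame.
    best = [math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in range(n_states)]
    back = []
    for o in obs[1:]:
        prev, best, ptr = best, [], []
        for s in range(n_states):
            # Best predecessor state for reaching state s at this frame.
            score, r = max((prev[r] + math.log(trans_p[r][s]), r) for r in range(n_states))
            best.append(score + math.log(emit_p[s][o]))
            ptr.append(r)
        back.append(ptr)
    # Backtrack the winning state chain.
    state = max(range(n_states), key=lambda s: best[s])
    chain = [state]
    for ptr in reversed(back):
        state = ptr[state]
        chain.append(state)
    chain.reverse()
    return chain

# Toy 2-state viseme HMM with 2 discrete observation symbols (hypothetical numbers).
start = [0.9, 0.1]
trans = [[0.7, 0.3], [0.1, 0.9]]
emit = [[0.8, 0.2], [0.2, 0.8]]

# Decode the source speaker's state chain for one observed viseme.
source_chain = viterbi([0, 0, 1, 1, 1], start, trans, emit)

# Hypothetical mapping term: associate each source state with a destination state.
state_map = {0: 1, 1: 0}
dest_chain = [state_map[s] for s in source_chain]
print(source_chain, dest_chain)  # → [0, 0, 1, 1, 1] [1, 1, 0, 0, 0]
```

The mapped chain can then drive the destination speaker's HMM to synthesize a new viseme; in the paper this association is learned rather than hand-set.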