Speech fusion to face : bridging the gap between human's vocal characteristics and facial imaging

While deep learning technologies are now capable of generating realistic images that can fool human observers, research efforts are turning to the synthesis of images for more concrete, application-specific purposes. Facial image generation based on vocal characteristics extracted from speech is one such important yet challenging task. It is a key enabler for influential use cases of image generation, particularly in public security and entertainment. Existing solutions to the speech2face problem render limited image quality and fail to preserve facial similarity, owing to the lack of high-quality training datasets and of an appropriate integration of vocal features. In this paper, we investigate these key technical challenges and propose Speech Fusion to Face, or SF2F for short, to address the issues of facial image quality and the weak connection between the vocal feature domain and modern image generation models. With the proposed strategies, we demonstrate a dramatic performance boost over the state-of-the-art solution, doubling the recall of individual identity and lifting the quality score from 15 to 19, as measured by the mutual information score with a VGGFace classifier.
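The quality score cited above is described as a mutual information score computed with a VGGFace classifier. A common way to realise such a metric is an Inception-Score-style formulation, exp(E_x[KL(p(y|x) || p(y))]), applied to the softmax outputs of a face-identity classifier over the generated images. The sketch below illustrates that computation only; the specific classifier, preprocessing, and exact formulation used in the thesis are assumptions, and the function name is hypothetical.

    # Minimal sketch of a mutual-information-based quality score over generated faces,
    # assuming "probs" holds softmax outputs from a face classifier such as VGGFace.
    import numpy as np

    def face_mutual_information_score(probs: np.ndarray, eps: float = 1e-12) -> float:
        """probs: (N, C) class probabilities for N generated faces over C identities.

        Returns exp(mean_x KL(p(y|x) || p(y))), the exponentiated mutual information
        between generated images and the identities the classifier predicts for them.
        """
        probs = np.clip(probs, eps, 1.0)
        marginal = probs.mean(axis=0, keepdims=True)                      # p(y)
        kl = np.sum(probs * (np.log(probs) - np.log(marginal)), axis=1)   # KL(p(y|x) || p(y))
        return float(np.exp(kl.mean()))

    if __name__ == "__main__":
        # Placeholder predictions: 1000 generated faces scored against 500 identities.
        logits = np.random.randn(1000, 500)
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        print(face_mutual_information_score(probs))

A higher value indicates that the classifier assigns confident, diverse identity predictions to the generated faces, which is the sense in which the abstract reports an improvement from 15 to 19.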

Bibliographic Details
Main Author: Bai, Yeqi
Other Authors: Wang Lipo; Zhang Zhenjie
Organisations: School of Electrical and Electronic Engineering; Yitu Technology
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2020
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Subjects: Engineering::Electrical and electronic engineering
Online Access: https://hdl.handle.net/10356/139255
Institution: Nanyang Technological University