Generative models for speech emotion synthesis


Bibliographic Details
Main Author: Raj, Nathanael S.
Other Authors: Jagath C. Rajapakse
Format: Final Year Project
Language:English
Published: 2019
Subjects:
Online Access:http://hdl.handle.net/10356/76865
Institution: Nanyang Technological University
Description
Summary: Several attempts have been made to synthesize speech from text. However, existing methods tend to generate speech that sounds artificial and lacks emotional content. In this project, we investigate using Generative Adversarial Networks (GANs) to generate emotional speech. WaveGAN (2019) was a first attempt at generating speech directly from raw audio waveforms. It produced natural-sounding audio, including speech, bird chirps and drums. In this project, we applied WaveGAN to emotional speech data from The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), using all 8 categories of emotion. We modified WaveGAN with advanced conditioning strategies, namely Sparse Vector Conditioning and the introduction of Auxiliary Classifiers. In experiments conducted with human listeners, we found that these methods greatly aided subjects in correctly identifying the generated emotions, and improved the intelligibility and quality of the generated samples.
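
To illustrate the sparse vector conditioning the abstract mentions, here is a minimal sketch (not the project's actual code): the emotion category is encoded as a sparse one-hot vector and concatenated with the generator's latent noise vector, so the generator receives the emotion label alongside the noise. The eight category names follow the RAVDESS labeling; the 100-dimensional latent size matches the WaveGAN default, and all function names here are illustrative assumptions.

```python
import numpy as np

# The 8 RAVDESS emotion categories
EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def one_hot(label: str, labels=EMOTIONS) -> np.ndarray:
    """Sparse (one-hot) condition vector for an emotion category."""
    v = np.zeros(len(labels), dtype=np.float32)
    v[labels.index(label)] = 1.0
    return v

def conditioned_latent(z: np.ndarray, label: str) -> np.ndarray:
    """Concatenate the latent noise vector with the sparse condition
    vector before feeding it to the generator (illustrative helper)."""
    return np.concatenate([z, one_hot(label)])

rng = np.random.default_rng(0)
z = rng.standard_normal(100).astype(np.float32)  # 100-dim latent, as in WaveGAN
zc = conditioned_latent(z, "happy")
# zc has 100 latent dimensions followed by 8 emotion dimensions
```

An auxiliary classifier (the second strategy named above) would additionally require the discriminator to predict the emotion label of each sample, adding a classification loss that pushes the generator toward clearly separable emotional renditions.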