Speech emotion recognition using WaveNet
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156592
Institution: Nanyang Technological University
Summary: Speech emotion recognition is known to be a challenging and complex task for machine learning models. Two challenges faced in speech emotion recognition are that 1) human emotions are hard to distinguish and 2) emotion can only be detected at specific moments in an utterance. This paper therefore proposes a Speech Emotion Recognition (SER) architecture inspired by the WaveNet architecture. The architecture relies neither on tedious pre-processing nor on recurrent layers. The novelty of our approach lies in the use of both speech waveforms and audio features as inputs, the use of causal dilated convolutions for capturing temporal dependencies, and the use of a self-attention mechanism. Self-attention permits inputs to interact with one another, paying close attention to the valuable parts of the input in order to learn the connections between them. We demonstrate improved SER performance with our model on the EMO-DB dataset over existing baseline models.
Index Terms: speech emotion recognition, self-attention, deep learning, computational paralinguistics
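The summary describes a WaveNet-style stack of causal dilated convolutions followed by a self-attention mechanism over the time steps. Below is a minimal sketch of that idea, assuming PyTorch; the layer sizes, module names, and the waveform-only input path are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumed PyTorch; sizes and names are illustrative).
import torch
import torch.nn as nn

class CausalDilatedConv1d(nn.Module):
    """1-D convolution padded on the left only, so outputs never see future frames."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))   # pad the past side only
        return self.conv(x)

class SERSketch(nn.Module):
    """WaveNet-style causal dilated convolutions followed by self-attention."""
    def __init__(self, in_ch=1, hidden=64, n_layers=6, n_heads=4, n_emotions=7):
        super().__init__()
        self.inp = nn.Conv1d(in_ch, hidden, 1)
        # Exponentially growing dilations (1, 2, 4, ...) enlarge the receptive field.
        self.convs = nn.ModuleList(
            CausalDilatedConv1d(hidden, hidden, kernel_size=2, dilation=2 ** i)
            for i in range(n_layers)
        )
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.out = nn.Linear(hidden, n_emotions)

    def forward(self, wave):                      # wave: (batch, in_ch, time)
        h = self.inp(wave)
        for conv in self.convs:
            h = torch.relu(conv(h)) + h           # residual connection per block
        h = h.transpose(1, 2)                     # (batch, time, hidden) for attention
        h, _ = self.attn(h, h, h)                 # self-attention across time steps
        return self.out(h.mean(dim=1))            # pool over time, emit emotion logits

logits = SERSketch()(torch.randn(2, 1, 16000))    # two 1-second waveforms at 16 kHz
```

The summary states that both speech waveforms and audio features serve as inputs; only the waveform path is sketched here for brevity, and a second feature branch would be fused before classification in the full architecture.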