Sound-event classification for robot hearing
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157932
Institution: Nanyang Technological University
Summary: Over the years, several methods have been developed for sound-event classification. A well-known example is the spectrogram, a time-frequency spectral analysis that displays the magnitude spectrum of a signal on a 2D time-frequency plane. Despite intensive research, there is still room for improvement: a research gap remains in enhancing the accuracy and reliability of sound-based recognition. A spectrogram lets audio signals be visualised and evaluated as a magnitude spectrum on the 2D time-frequency plane, but the magnitude spectrum alone is not sufficient to classify the audio sources. To address this, a method first proposed by Jiang Xudong and Ren Jianfeng, the "Regularised 2D complex-log-Fourier transform", is introduced. This method additionally provides a phase spectrum, which is also used for sound-event classification. Principal Component Analysis (PCA) is then applied to extract the significant information and discard redundant data in the audio samples. Finally, the Mahalanobis distance computed for each class is used to decide which class a sound event belongs to.
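The following is a minimal Python sketch of the pipeline summarised above, offered only as an illustration. The exact regularised 2D complex-log-Fourier transform of Jiang and Ren is not reproduced here; the log-magnitude and phase of an ordinary STFT stand in for the complex-log spectrum. The function names, sample rate, frame length, PCA dimensionality, and the assumption of fixed-length, labelled training clips are all choices made for this sketch, not details taken from the project.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA


def complex_log_features(audio, fs=16000, nperseg=512):
    """Flattened [log-magnitude, phase] features for one fixed-length clip."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)   # complex 2D time-frequency spectrum
    log_mag = np.log(np.abs(Z) + 1e-8)              # magnitude spectrum on a log scale
    phase = np.angle(Z)                             # phase spectrum, kept as an extra feature
    return np.concatenate([log_mag.ravel(), phase.ravel()])


def fit_classifier(train_clips, train_labels, n_components=40):
    """Fit PCA and per-class mean / inverse covariance for Mahalanobis classification."""
    labels = np.asarray(train_labels)
    X = np.stack([complex_log_features(clip) for clip in train_clips])
    pca = PCA(n_components=n_components).fit(X)     # keep only the significant components
    X_low = pca.transform(X)
    class_stats = {}
    for label in np.unique(labels):
        Xc = X_low[labels == label]
        mean = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(n_components)  # regularised covariance
        class_stats[label] = (mean, np.linalg.inv(cov))
    return pca, class_stats


def classify(clip, pca, class_stats):
    """Assign the class whose mean is nearest in Mahalanobis distance."""
    x = pca.transform(complex_log_features(clip)[None, :])[0]

    def mahalanobis(mean, inv_cov):
        d = x - mean
        return float(d @ inv_cov @ d)

    return min(class_stats, key=lambda label: mahalanobis(*class_stats[label]))
```

Under these assumptions, `fit_classifier` builds the PCA projection and per-class statistics from labelled training clips, and `classify` returns the label of the class with the smallest Mahalanobis distance for a new clip.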