Using AI for music source separation
This report summarizes the research, methodologies, and experimental implementation of Music Source Separation (MSS): the task of isolating the individual instrument signals of interest from a music piece. In recent years, supervised deep learning methods have become the state of the art in source separation and can be categorised as spectrogram-based and waveform-based methods. These models typically demand large computational resources, training across multiple GPUs for many hours. Despite the success of integrating machine learning into the separation process, many systems lack a visual representation that helps users appreciate the connection between the input, model, and output.

In this project, we focus on separating four sources from an input song mixture: bass, drums, vocals, and other accompaniment. The objective is to analyse the impact of the different components present in both spectrogram- and waveform-based systems through fine-tuning, data handling, and ablation testing. This allows us to understand each component's contribution to the overall system and to make informed choices that maximise model performance given the limitations of a single GPU and the dataset. The experimental results demonstrate three key points. First, recurrent architectures such as BiLSTM and BiGRU are important to the separation of music. Second, the quality of the dataset and the type of data augmentation have a larger impact on model performance than the quantity of data. Third, the computational efficiency of the model improves when an uncompressed dataset and a BiGRU are used.

In addition to the experimental results, a graphical interface is introduced so that end-users gain a clear picture of the relationship between the input, the model, and the output.
Main Author: | Lee, Jasline Jie Yu |
---|---|
Other Authors: | Ponnuthurai Nagaratnam Suganthan |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/149109 |
Institution: | Nanyang Technological University |
Degree: | Bachelor of Engineering (Information Engineering and Media) |
Citation: | Lee, J. J. Y. (2021). Using AI for music source separation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149109 |
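The abstract distinguishes spectrogram-based from waveform-based separators. As a purely illustrative sketch of the spectrogram-based idea (assuming PyTorch; the mask here is a placeholder for what a trained network would predict, and all names are hypothetical, not from the report):

```python
# Minimal sketch of spectrogram-domain masking (illustrative only, not the
# report's model). A trained network would normally predict one mask per source.
import torch

N_FFT, HOP = 2048, 512

def separate_with_mask(mixture: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Apply a magnitude mask to the mixture spectrogram and resynthesise audio.

    mixture: mono waveform, shape (num_samples,)
    mask:    real-valued mask in [0, 1], shape (N_FFT // 2 + 1, num_frames)
    """
    window = torch.hann_window(N_FFT)
    spec = torch.stft(mixture, N_FFT, hop_length=HOP, window=window,
                      return_complex=True)   # complex mixture spectrogram
    masked = mask * spec                     # scale magnitude, keep mixture phase
    return torch.istft(masked, N_FFT, hop_length=HOP, window=window,
                       length=mixture.shape[-1])

# Toy usage: a random "mixture" and an all-pass mask reproduce the input.
mix = torch.randn(44100)                     # 1 s of audio at 44.1 kHz
mask = torch.ones(N_FFT // 2 + 1, mix.numel() // HOP + 1)
estimate = separate_with_mask(mix, mask)
```

With the mixture's phase reused and only the magnitude scaled, a soft mask in [0, 1] carves one source's energy out of the mixture's time-frequency representation.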
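The report's first finding stresses recurrent architectures such as BiLSTM and BiGRU. A minimal sketch of how a BiGRU could predict the per-source mask above, with assumed hyperparameters (PyTorch; not the report's actual architecture):

```python
# Minimal sketch of a BiGRU mask estimator for one source (illustrative only;
# hyperparameters and layer layout are assumptions, not the report's design).
import torch
import torch.nn as nn

class BiGRUMaskNet(nn.Module):
    def __init__(self, n_bins: int = 1025, hidden: int = 256):
        super().__init__()
        # A bidirectional GRU reads the spectrogram as a sequence of frames.
        self.rnn = nn.GRU(n_bins, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        # Project back to frequency bins; sigmoid keeps the mask in [0, 1].
        self.proj = nn.Linear(2 * hidden, n_bins)

    def forward(self, mag: torch.Tensor) -> torch.Tensor:
        # mag: (batch, frames, n_bins) magnitude spectrogram of the mixture
        out, _ = self.rnn(mag)
        return torch.sigmoid(self.proj(out))  # per-bin soft mask

# Toy usage: predict a mask for a batch of 2 spectrograms of 87 frames each.
net = BiGRUMaskNet()
mask = net(torch.randn(2, 87, 1025))
print(mask.shape)  # torch.Size([2, 87, 1025])
```

A GRU has fewer gates, and therefore fewer parameters, than an LSTM of the same width, which is consistent with the abstract's observation that using a BiGRU improves computational efficiency.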
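The second finding, that dataset quality and augmentation type outweigh dataset quantity, matches common MSS training practice. A sketch of two generic stem-level augmentations, random gain and channel swap (assumed examples; the report does not name its augmentations here):

```python
# Minimal sketch of common MSS-style data augmentation (generic, assumed
# examples; not taken from the report).
import torch

def augment_sources(sources: torch.Tensor) -> torch.Tensor:
    """Randomly perturb per-source stems before summing them into a mixture.

    sources: (num_sources, channels, num_samples), e.g. bass/drums/vocals/other
    """
    num_sources = sources.shape[0]
    # Random gain per source, here drawn from [0.7, 1.3].
    gains = 0.7 + 0.6 * torch.rand(num_sources, 1, 1)
    out = sources * gains
    # Randomly swap left/right channels per source.
    swap = torch.rand(num_sources) < 0.5
    out[swap] = out[swap].flip(dims=(1,))
    return out

stems = torch.randn(4, 2, 44100)         # 4 stems, stereo, 1 s at 44.1 kHz
mixture = augment_sources(stems).sum(0)  # remix augmented stems into a mixture
```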