Generating music with sentiments
Main Author:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects:
Online Access: https://ink.library.smu.edu.sg/etd_coll/374 https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=1372&context=etd_coll
Institution: Singapore Management University
Summary: In this thesis, I focus on generating music conditioned on human sentiments such as positive and negative. Because there are no existing large-scale music datasets annotated with sentiment labels, generating high-quality music conditioned on sentiment is difficult. I therefore build a new dataset of lyric-melody-sentiment triplets without requiring any manual annotation. I use an automated sentiment recognition model (a BERT model trained on the Edmonds Dance dataset) to "label" each piece of music according to the sentiment recognized from its lyrics. I then train a model to generate sentimental music, a method I call the Sentimental Lyric and Melody Generator (SLMG). Specifically, SLMG consists of three modules: 1) an encoder-decoder model trained end-to-end to generate lyrics and melody; 2) a music sentiment classifier trained on the labelled data; and 3) a modified beam search algorithm that guides the generation process by incorporating the music sentiment classifier. I conduct subjective and objective evaluations of the generated music, and the results show that SLMG is capable of generating tuneful lyrics and melodies with specific sentiments.
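The third module, a beam search modified to incorporate the sentiment classifier, can be pictured roughly as in the sketch below. This is a minimal illustration under assumed interfaces, not the thesis's actual implementation: `decoder.step`, `classifier.score`, and the weight `alpha` are hypothetical placeholders. The idea shown is that each beam's score adds a sentiment bonus to the decoder's log-probability, steering generation toward the target sentiment.

```python
# Minimal sketch of sentiment-guided beam search.
# Assumed (hypothetical) interfaces:
#   decoder.step(prefix) -> {token_id: log_prob} for the next token
#   classifier.score(prefix, sentiment) -> score of how strongly the partial
#       sequence expresses the target sentiment
import heapq


def sentiment_guided_beam_search(decoder, classifier, target_sentiment,
                                 bos_id=1, eos_id=2, beam_width=4,
                                 max_len=64, alpha=0.5):
    """Beam search whose beam score mixes decoder likelihood with a sentiment bonus."""
    beams = [([bos_id], 0.0)]  # (token prefix, accumulated score)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == eos_id:  # finished hypothesis: carry it over unchanged
                candidates.append((prefix, score))
                continue
            for tok, log_p in decoder.step(prefix).items():
                new_prefix = prefix + [tok]
                # Bonus term nudges the search toward the requested sentiment.
                bonus = alpha * classifier.score(new_prefix, target_sentiment)
                candidates.append((new_prefix, score + log_p + bonus))
        # Keep only the top-scoring beam_width hypotheses.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
        if all(p[-1] == eos_id for p, _ in beams):
            break
    return max(beams, key=lambda c: c[1])[0]
```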