Music generation with deep learning techniques

Bibliographic Details
Main Author: Low, Paul Solomon Si En
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/175113
Institution: Nanyang Technological University
Description
Summary: This research paper studies the development and performance of a Text-to-Music Transformer model. The main objective is to investigate the generative potential of multimodal transformation, where textual input is converted into musical scores in MIDI format. A comprehensive literature review of existing music-synthesis methods forms the basis of the study. The textual dataset is created in a novel way by using CLaMP to select the top 30 textual descriptors of the music. A pre-trained RoBERTa model and Octuple tokenizers process the text and the musical scores, respectively. The music transformer then uses a neural-network architecture with a Fast Transformer base to infuse textual information into the generated sequences. Embeddings, linear layers, and cross-entropy loss calculations are used for all six musical attributes, with hyperparameter tuning to promote coherent and varied musical outputs. The generated music was evaluated through musical analysis and a user study. The results show that the transformer model can generate music that is either melodious or expressive of the textual prompt.
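
To illustrate the per-attribute objective the abstract describes, the sketch below pairs each of the six musical attributes with its own embedding and linear output layer and sums one cross-entropy loss per attribute. This is a minimal PyTorch sketch, not the thesis code: the attribute names, vocabulary sizes, and model width are illustrative assumptions, and the Fast Transformer backbone is omitted.

import torch
import torch.nn as nn

# Hypothetical attribute set and vocabulary sizes; the real values depend
# on the Octuple tokenizer configuration used in the thesis.
ATTRIBUTE_VOCABS = {
    "bar": 256, "position": 128, "pitch": 256,
    "duration": 128, "velocity": 64, "tempo": 64,
}

class MultiAttributeHead(nn.Module):
    """One embedding and one linear output head per musical attribute."""
    def __init__(self, d_model=512):
        super().__init__()
        self.embeds = nn.ModuleDict(
            {k: nn.Embedding(v, d_model) for k, v in ATTRIBUTE_VOCABS.items()})
        self.heads = nn.ModuleDict(
            {k: nn.Linear(d_model, v) for k, v in ATTRIBUTE_VOCABS.items()})
        self.loss_fn = nn.CrossEntropyLoss()

    def embed(self, tokens):
        # tokens: dict of (batch, seq) index tensors, one per attribute;
        # the attribute embeddings are summed into a single input vector.
        return sum(self.embeds[k](tokens[k]) for k in ATTRIBUTE_VOCABS)

    def loss(self, hidden, targets):
        # hidden: (batch, seq, d_model) output of the transformer backbone.
        # The training loss is the sum of the per-attribute cross-entropies.
        total = 0.0
        for k in ATTRIBUTE_VOCABS:
            logits = self.heads[k](hidden)  # (batch, seq, vocab_k)
            total = total + self.loss_fn(
                logits.reshape(-1, logits.size(-1)), targets[k].reshape(-1))
        return total

head = MultiAttributeHead()
tokens = {k: torch.randint(0, v, (2, 16)) for k, v in ATTRIBUTE_VOCABS.items()}
x = head.embed(tokens)        # would normally pass through the Fast Transformer
loss = head.loss(x, tokens)   # stand-in: embeddings used directly as "hidden"

In the actual model, the summed attribute embeddings, fused with the RoBERTa-encoded text by the Fast Transformer base, would produce the hidden states fed to the six output heads.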