Music generation with deep learning techniques

Bibliographic Details
Main Author: Toh, Raymond Kwan How
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/148097
Institution: Nanyang Technological University
Description
Abstract: This report demonstrated the use of a deep convolutional generative adversarial network (DCGAN) to generate expressive music with dynamics. Existing deep learning models for music generation were reviewed; however, most prior research focused on musical composition and removed expressive attributes during data preprocessing, resulting in mechanical-sounding generated music. To address this issue, musical elements such as pitch, time, and velocity were extracted from MIDI files and encoded with a piano roll data representation. With this representation, the DCGAN learned the data distribution of the given dataset and generated new data drawn from the same distribution. The generated music was evaluated on its incorporation of musical dynamics and through a user study. The evaluation results verified that the DCGAN was capable of generating expressive music comprising musical dynamics and syncopated rhythm.
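To illustrate the encoding step the abstract describes, the sketch below shows one way to turn a MIDI file into a velocity-aware piano roll. It is a minimal example, not the project's actual pipeline: it assumes the third-party pretty_midi and numpy libraries, and the sampling rate fs, the helper name, and the file path are placeholders chosen for illustration.

```python
import numpy as np
import pretty_midi


def midi_to_piano_roll(path: str, fs: int = 16) -> np.ndarray:
    """Encode a MIDI file as a piano roll that preserves velocity (dynamics).

    Returns an array of shape (128, T): one row per MIDI pitch and one
    column per time step of length 1/fs seconds, with note velocities
    as cell values rather than a binary on/off mask.
    """
    midi = pretty_midi.PrettyMIDI(path)
    # get_piano_roll samples notes at `fs` frames per second and fills
    # cells with velocity, so expressive dynamics survive the encoding.
    roll = midi.get_piano_roll(fs=fs)
    # Overlapping notes can sum above 127; clip, then scale to [0, 1]
    # so the matrix can be fed to a GAN directly.
    return np.clip(roll, 0, 127) / 127.0


# Hypothetical usage: encode one training file.
if __name__ == "__main__":
    roll = midi_to_piano_roll("example.mid")
    print(roll.shape)  # (128, number_of_time_steps)
```

Keeping velocity in the encoding, rather than a binary note-on mask, is what lets a generative model reproduce dynamics instead of mechanical-sounding output.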