Music generation with deep learning techniques
With the advancement of artificial intelligence techniques in recent years, the task of music generation has gained much attention. Music is sequential data with distinctive structures that comes in many different forms, making it an interesting problem that can be tackled with many approaches. Emotion is inseparable from music: the art form naturally evokes feelings in composers and listeners alike. Generating emotive music has been explored by researchers interested in producing human-like sounds that can influence how people feel, but little research has addressed letting users control the music that is generated automatically. Among the various ways users can provide input, textual data is one way to guide the direction in which the music should be generated. In this work, we propose a method that uses the sentiment of user-provided text to generate emotionally suitable music. A user study evaluating the generated music shows that it effectively conveys the emotions present in the textual input.
Main Author: | Tan, Wen Xiu |
---|---|
Other Authors: | Alexei Sourin |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/168300 |
Degree: | Bachelor of Science in Data Science and Artificial Intelligence |
School: | School of Computer Science and Engineering |
Supervisor: | Alexei Sourin (assourin@ntu.edu.sg) |
Project Code: | SCSE22-0120 |
Citation: | Tan, W. X. (2023). Music generation with deep learning techniques. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/168300 |
Institution: | Nanyang Technological University |
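The record does not describe the implementation, so the following is only an illustrative sketch of the idea stated in the abstract: classify the sentiment of user text, map it to coarse musical parameters, and pass those to a generator. Every name here (`classify_sentiment`, `sentiment_to_params`, `generate_music_stub`, the word lists, and the parameter values) is a hypothetical placeholder, not the pipeline used in the thesis.

```python
# Hypothetical sketch only: map the sentiment of user text to coarse emotion
# parameters and feed them to a (stubbed) music generator. This is NOT the
# method from the thesis, whose models are not described in this record.

from dataclasses import dataclass


@dataclass
class MusicParams:
    tempo_bpm: int   # faster tempi for high-arousal emotions
    mode: str        # "major" for positive valence, "minor" for negative
    dynamics: str    # rough loudness marking


# Tiny lexicon-based sentiment scorer (placeholder for a trained model).
POSITIVE = {"happy", "love", "great", "joy", "calm", "excited"}
NEGATIVE = {"sad", "angry", "hate", "fear", "lonely", "tired"}


def classify_sentiment(text: str) -> float:
    """Return a valence score in [-1, 1] from a naive word count."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


def sentiment_to_params(valence: float) -> MusicParams:
    """Map a valence score to coarse musical conditioning parameters."""
    if valence > 0.3:
        return MusicParams(tempo_bpm=120, mode="major", dynamics="mf")
    if valence < -0.3:
        return MusicParams(tempo_bpm=70, mode="minor", dynamics="p")
    return MusicParams(tempo_bpm=95, mode="major", dynamics="mp")


def generate_music_stub(params: MusicParams) -> str:
    """Stand-in for a conditional music generation model (e.g. a sequence
    model over MIDI events conditioned on emotion tokens)."""
    return f"<music: {params.mode}, {params.tempo_bpm} bpm, {params.dynamics}>"


if __name__ == "__main__":
    user_text = "I feel calm and happy walking home tonight"
    params = sentiment_to_params(classify_sentiment(user_text))
    print(generate_music_stub(params))
```

An actual system would replace the word-count scorer with a learned sentiment model and the stub with a trained generative model over symbolic or audio music.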