Music generation with deep learning techniques

Bibliographic Details
Main Author: Zhou, Yuxuan
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/175144
Institution: Nanyang Technological University
Description
Summary: This project sets out to explore diverse methodologies for image-to-music generation, presenting two distinct approaches: one centered on emotion and the other utilizing text as an intermediary conduit between images and music. The primary aim is to develop and refine an image-to-music generation model grounded in the alignment of valence-arousal scores. However, despite concerted efforts, the model's efficacy is hindered by a dearth of data and computational constraints, resulting in unsatisfactory outcomes. In response to these challenges, an alternative path is pursued, integrating pretrained vision-language models and text-to-music generation frameworks for music synthesis. The model generates 15-second music clips with a sampling rate of 36 kHz. Employing prompt engineering techniques bolsters coherence within the generated musical compositions. Subsequently, a user study is conducted to evaluate the musical output, revealing a commendable level of coherence and musicality achieved by the model.
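
The abstract describes a two-stage pipeline: a pretrained vision-language model turns the image into text, a prompt is engineered from that text, and a text-to-music model synthesizes the clip. As a rough illustration only, the following Python sketch wires such a pipeline together; the specific models (BLIP for captioning, MusicGen for text-to-music), file names, and token budget are assumptions for illustration, not the project's actual implementation.

# Hypothetical sketch of the caption-then-compose pipeline the abstract describes.
# Model choices (BLIP, MusicGen) are illustrative assumptions, not the project's own.
from PIL import Image
from transformers import pipeline
import scipy.io.wavfile

# Stage 1: a pretrained vision-language model describes the input image in text.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner(Image.open("scene.jpg"))[0]["generated_text"]

# Stage 2: prompt engineering - wrap the caption in a music-oriented prompt
# to keep the generated clip stylistically coherent.
prompt = f"A coherent instrumental piece that matches this scene: {caption}"

# Stage 3: a pretrained text-to-music model synthesizes a short clip.
# ~750 tokens at MusicGen's 50 Hz token rate is roughly 15 seconds of audio.
synthesizer = pipeline("text-to-audio", model="facebook/musicgen-small")
clip = synthesizer(prompt, forward_params={"max_new_tokens": 750})

scipy.io.wavfile.write("generated.wav",
                       rate=clip["sampling_rate"],
                       data=clip["audio"].squeeze())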