Static visualisations of music mood using deep learning


Bibliographic Details
Main Author: Ang, Justin Teng Hng
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access:https://hdl.handle.net/10356/175148
Institution: Nanyang Technological University
Description
Summary: Of the many aspects of music, such as pitch, volume, tempo, and modality, mood is one of the least visualised, because it is harder to quantify and largely subjective. Moreover, much of today's work on music visualisation focuses on animated representations intended to be viewed while listening along. There is therefore a gap for static visualisations of music mood, which can give viewers a quick overview of the overall ambience of a piece. This project proposes a model that combines the MuLan audio-embedding model with Stable Diffusion XL Turbo image generation to produce images from audio files, with the aim of visualising the mood of the music. The model is trained on a dataset of classical music pieces paired with corresponding images generated using DALL-E. The generated images are analysed, and the model undergoes user testing to evaluate its effectiveness.
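
As a rough illustration of the pipeline the summary describes, the sketch below pairs a stand-in MuLan audio encoder with SDXL Turbo via the diffusers library. The MuLan encoder and the embedding-to-prompt adapter are hypothetical placeholders (MuLan's weights are not publicly released, and the report's exact conditioning mechanism is not given here); only the SDXL Turbo call uses a real API.

# Hypothetical sketch of the described audio-to-image pipeline; not the
# project's actual code. Only the SDXL Turbo usage reflects the real
# diffusers API; the MuLan encoder and adapter are illustrative stand-ins.
import torch
from diffusers import AutoPipelineForText2Image

EMBED_DIM = 128  # MuLan's joint audio/text embedding is 128-dimensional


def mulan_audio_embedding(waveform: torch.Tensor) -> torch.Tensor:
    """Stand-in for the MuLan audio tower: waveform -> mood embedding."""
    return torch.randn(1, EMBED_DIM)  # placeholder output


class PromptAdapter(torch.nn.Module):
    """Hypothetical learned projection from MuLan space into the SDXL
    conditioning space (77 tokens x 2048 dims, plus a 1280-d pooled head)."""

    def __init__(self, seq_len: int = 77, dim: int = 2048):
        super().__init__()
        self.seq_len, self.dim = seq_len, dim
        self.proj = torch.nn.Linear(EMBED_DIM, seq_len * dim)
        self.pooled = torch.nn.Linear(EMBED_DIM, 1280)

    def forward(self, z: torch.Tensor):
        return self.proj(z).view(-1, self.seq_len, self.dim), self.pooled(z)


pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
adapter = PromptAdapter().half().to("cuda")

# Ten seconds of silence at 16 kHz as dummy input for the stand-in encoder.
z = mulan_audio_embedding(torch.zeros(1, 16000 * 10)).half().to("cuda")
prompt_embeds, pooled = adapter(z)
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled,
    num_inference_steps=1,  # SDXL Turbo is a one-step distilled model
    guidance_scale=0.0,     # Turbo is trained without classifier-free guidance
).images[0]
image.save("mood.png")

Conditioning through prompt_embeds rather than a text prompt is one plausible way to let a learned adapter map the compact MuLan mood embedding into the diffusion model's conditioning space; the report may use a different mechanism.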