Music visualisation with deep learning

Bibliographic Details
Main Author: Chong, Kyrin Sethel
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/168427
Institution: Nanyang Technological University
Description
Summary: Music visualisation has become an integral part of music performance, appreciation and study. Even before computers, people tried to visualise different aspects of music, from Kandinsky's abstract paintings to Oskar Fischinger's animated films. Music-visual association is an innate sensory response for a small percentage of the population, known as "synaesthetes". Even individuals without synaesthesia associate music with colours consistently enough to reach broad agreement. Music visualisation can draw on a wide variety of musical characteristics, of which timbre is among the least visualised. Moreover, timbre is difficult to quantify and categorise, as it is commonly labelled with semantic descriptors that vary from person to person. As such, this project explores an algorithm for a standard timbre-to-colour conversion that is both widely acceptable to the general public and invertible, so that a given colour allows identification of the timbre from which it was generated.
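
To make the idea of a timbre-to-colour conversion concrete, the sketch below maps one common timbral descriptor, the spectral centroid (a proxy for perceived "brightness"), onto a hue. This is only an illustration of the general technique, not the project's actual algorithm: the frequency bounds, the hue convention, and the use of the centroid alone are all assumptions made here for demonstration.

```python
import colorsys
import numpy as np

def spectral_centroid(signal, sr):
    # Magnitude-weighted mean frequency of the spectrum,
    # a standard proxy for the brightness of a timbre.
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

def centroid_to_rgb(centroid_hz, lo=200.0, hi=4000.0):
    # Map the centroid (on a log-frequency scale) to a hue.
    # lo/hi and the violet-to-red direction are illustrative choices.
    t = np.clip(np.log(centroid_hz / lo) / np.log(hi / lo), 0.0, 1.0)
    hue = 0.75 * (1.0 - t)  # dull timbres -> violet, bright timbres -> red
    return tuple(round(c, 3) for c in colorsys.hsv_to_rgb(hue, 1.0, 1.0))

sr = 22050
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)  # pure sine: all energy at 220 Hz
bright = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 12))  # rich harmonics

print(centroid_to_rgb(spectral_centroid(dull, sr)))
print(centroid_to_rgb(spectral_centroid(bright, sr)))
```

Because the mapping is monotonic in the centroid, it is invertible in the sense the abstract describes: a colour on this hue scale points back to a unique centroid value, though a real system would need more than one descriptor to pin down a full timbre.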