Static visualisations of music mood using deep learning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175148
Institution: Nanyang Technological University
Summary: Of the many aspects of music that can be visualised, such as pitch, volume, tempo, and modality, mood is among the least commonly addressed, because it is harder to quantify and inherently subjective. Moreover, most existing work on music visualisation produces animated representations meant to be viewed while listening along. There is thus a gap for static visualisations of music mood, which can give viewers a quick overview of the overall ambience of a piece. This project proposes a model that combines MuLan for audio embedding with Stable Diffusion-XL Turbo for image generation, producing images from audio files with the aim of visualising the mood of music. The model is trained on a dataset of classical music pieces paired with corresponding images generated using DALL-E. The generated images are analysed, and the model undergoes user testing to evaluate its effectiveness.
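The record itself contains no implementation detail, but the architecture named in the summary (MuLan audio embeddings conditioning Stable Diffusion-XL Turbo) could be wired together roughly as follows. This is a minimal sketch, not the project's actual method: `embed_audio`, the 128-dimensional embedding size, and the two projection layers are assumptions for illustration; only the `stabilityai/sdxl-turbo` checkpoint and the `diffusers` calls are real.

```python
# Minimal sketch of an audio-to-image mood pipeline, assuming a MuLan-style
# audio encoder whose embedding is projected into SDXL's conditioning space.
import torch
from diffusers import AutoPipelineForText2Image

DEVICE = "cuda"
EMB_DIM = 128  # assumed MuLan embedding size; the project's value may differ

# Real checkpoint: SDXL Turbo, distilled for one-step sampling.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to(DEVICE)

def embed_audio(wav_path: str) -> torch.Tensor:
    """Hypothetical MuLan wrapper returning a (1, EMB_DIM) audio embedding."""
    raise NotImplementedError("plug in a MuLan implementation here")

# Assumed learned projections from the audio embedding into SDXL's two
# conditioning inputs: per-token prompt embeddings and a pooled vector.
to_tokens = torch.nn.Linear(EMB_DIM, 77 * 2048, dtype=torch.float16).to(DEVICE)
to_pooled = torch.nn.Linear(EMB_DIM, 1280, dtype=torch.float16).to(DEVICE)

audio = embed_audio("some_piece.wav").to(DEVICE, torch.float16)
prompt_embeds = to_tokens(audio).view(1, 77, 2048)
pooled_embeds = to_pooled(audio)

# One denoising step and no classifier-free guidance, per SDXL Turbo usage.
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_embeds,
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("mood.png")
```

In this reading, the projections would be the trainable bridge between the frozen audio encoder and the frozen diffusion model, which is one plausible way to realise the combination the summary describes.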