Music visualization with deep learning

Bibliographic Details
Main Author: Kumar, Neel
Other Authors: Alexei Sourin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/176030
Institution: Nanyang Technological University
Description
Summary: Music visualization offers a unique way to experience music beyond just listening. While dynamic visualizations are the status quo, our research has found that static visualizations can convey complex musical concepts. Moreover, with advances in artificial intelligence and deep learning making it easier than ever to generate visualizations through technologies such as DALL·E and Stable Diffusion, this study investigates their potential for generating static abstract visualizations of music, aiming to represent higher-level features such as mode, timbre, and symbolism. By leveraging technical advancements, particularly in transformer-based neural networks, this study explores a novel approach that combines music and natural language processing to create visual signatures reflecting the essence and emotional content of musical compositions. The findings demonstrate the model's capability to produce visually compelling and aesthetically pleasing representations of music, highlighting the underutilized potential of static visualizations in capturing complex musical attributes, as well as identifying scope for future improvement. Finally, the effectiveness of this approach was evaluated to test the hypothesis and assess the usefulness of the results. Several practical applications for such visualizations are identified, including enhancements to live and recorded performances, educational tools, therapeutic aids, and artistic entertainment, among others. While the results show promise, they underscore the need for refinement and further exploration to fully unlock the potential of this technology. Ultimately, the ability of this technology to create cross-modal understanding, capturing both general patterns and nuanced details, will determine its effectiveness in reshaping the intersection of audio and visual experiences.
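
The project files themselves are available only through the handle above. Purely as a hedged illustration of the kind of pipeline the summary describes (audio features translated into natural language, then rendered by an off-the-shelf image generator), the Python sketch below extracts a few coarse features with librosa, maps them to a text prompt, and renders a static image with a Stable Diffusion pipeline from the diffusers library. The feature-to-word mappings, the helper name describe_audio, the file names, and the model checkpoint are illustrative assumptions, not details taken from the project.

# Illustrative sketch only: map low-level audio features to a text prompt,
# then render a static "visual signature" with an off-the-shelf diffusion model.
import librosa
import numpy as np
import torch
from diffusers import StableDiffusionPipeline


def describe_audio(path: str) -> str:
    """Hypothetical helper: derive a coarse text description from an audio file."""
    y, sr = librosa.load(path, mono=True)

    # Tempo as a rough proxy for energy.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])
    energy = "energetic" if tempo > 120 else "calm"

    # Spectral centroid as a rough proxy for timbral brightness.
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    brightness = "bright" if centroid > 2000 else "dark"

    # Crude major/minor guess from mean chroma: compare energy at the
    # major vs. minor third above the strongest pitch class.
    chroma = np.mean(librosa.feature.chroma_cqt(y=y, sr=sr), axis=1)
    tonic = int(np.argmax(chroma))
    mode = "major" if chroma[(tonic + 4) % 12] >= chroma[(tonic + 3) % 12] else "minor"

    return (f"abstract painting, {energy} mood, {brightness} colour palette, "
            f"{mode} key feeling, flowing shapes, no text")


if __name__ == "__main__":
    prompt = describe_audio("song.wav")  # placeholder path

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save("visual_signature.png")

On a machine without a CUDA-capable GPU, the .to("cuda") call and the float16 dtype can simply be dropped; generation will just be slower.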