Transformers as feature extractors in emotion-based music visualization
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175170
Institution: Nanyang Technological University
Summary: Cross-modal similarity learning revolves around the feature embeddings of the target modalities. With advances in deep neural networks, feature extraction has grown increasingly sophisticated. Convolutional Neural Networks (CNNs) and Residual Networks (ResNets) have proven to be strong feature extractors in both computer vision and music analysis, two fields that are central to music visualization. However, the emergence of transformers raises the question of whether such networks remain the best choice for these tasks.
This project first surveys existing work on music visualization, then studies the use of emotion dimensions such as valence and arousal to quantify emotion. It also explores how audio signals and spectrograms can be used to analyse the emotions a piece of music evokes. Ultimately, the project proposes using transformers as feature extractors, leading to better music visualizations through cross-modal similarity learning. The experiments conducted show that transformers outperform state-of-the-art approaches.
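To make the proposed pipeline concrete, the following is a minimal sketch of transformer-based audio feature extraction with a valence/arousal head and a cross-modal contrastive loss. It assumes PyTorch and torchaudio; the layer sizes, the `AudioTransformerEncoder`, `log_mel`, and `contrastive_loss` names, and the InfoNCE-style pairing are illustrative assumptions, not the project's actual architecture.

```python
# Minimal sketch of transformer-based audio feature extraction for
# cross-modal similarity learning. Assumes PyTorch and torchaudio;
# all layer sizes, the valence/arousal head, and the InfoNCE-style
# loss below are illustrative stand-ins, not the project's actual setup.
import torch
import torch.nn as nn
import torchaudio


class AudioTransformerEncoder(nn.Module):
    """Encodes a log-mel spectrogram into a fixed-size embedding."""

    def __init__(self, n_mels=128, d_model=256, n_heads=4, n_layers=4, emb_dim=128):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)  # per-frame input projection
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, emb_dim)
        # Hypothetical regression head mapping the embedding to
        # (valence, arousal), each squashed into [-1, 1].
        self.va_head = nn.Sequential(nn.Linear(emb_dim, 2), nn.Tanh())

    def forward(self, mel):  # mel: (batch, n_mels, time)
        x = self.proj(mel.transpose(1, 2))  # (batch, time, d_model)
        x = self.encoder(x)                 # self-attention over time frames
        emb = self.head(x.mean(dim=1))      # mean-pool over time
        return emb, self.va_head(emb)


def log_mel(wave, sample_rate=22050, n_mels=128):
    """Waveform (batch, samples) -> log-compressed mel spectrogram."""
    spec = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_mels=n_mels)(wave)
    return torch.log1p(spec)


def contrastive_loss(audio_emb, visual_emb, temperature=0.07):
    """InfoNCE-style loss: matched audio/visual pairs in a batch are
    pulled together, mismatched pairs pushed apart."""
    a = nn.functional.normalize(audio_emb, dim=-1)
    v = nn.functional.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=logits.device)
    return nn.functional.cross_entropy(logits, targets)


# Usage: extract audio embeddings and train them against embeddings from
# a matching visual encoder (not shown), one matched pair per batch row.
waves = torch.randn(8, 22050 * 5)    # 8 five-second clips of dummy audio
emb, va = AudioTransformerEncoder()(log_mel(waves))
visual_emb = torch.randn(8, 128)     # stand-in visual embeddings
loss = contrastive_loss(emb, visual_emb)
```

Mean-pooling over time frames is one simple way to collapse the transformer's per-frame outputs into a clip-level embedding; attention pooling or a class token would be equally valid choices, and the project's actual pooling and pairing strategy may differ.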