Music recommender system based on emotions from facial expression
Music classification algorithms have become an important component of music systems. Although current research has had some success in using audio features to classify music, other crucial musical components, such as a song’s lyrics, have received little analysis. Song lyrics can reveal the artist’s intention, which may not be fully conveyed through audio features alone. Hence, this paper explores the extent to which song lyrics can further improve the accuracy of emotion-based music classification.
Saved in:
Main Author: | Quah, Joey |
---|---|
Other Authors: | Owen Noel Newton Fernando |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Computer and Information Science; Music classification; Music recommender system; Deep learning |
Online Access: | https://hdl.handle.net/10356/175211 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-175211 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-175211 2024-04-26T15:41:39Z Music recommender system based on emotions from facial expression Quah, Joey Owen Noel Newton Fernando School of Computer Science and Engineering OFernando@ntu.edu.sg Computer and Information Science Music classification Music recommender system Deep learning Music classification algorithms have become an important component of music systems. Although current research has had some success in using audio features to classify music, other crucial musical components, such as a song’s lyrics, have received little analysis. Song lyrics can reveal the artist’s intention, which may not be fully conveyed through audio features alone. Hence, this paper explores the extent to which song lyrics can further improve the accuracy of emotion-based music classification. The dataset was created by scraping song lyrics from Genius and extracting audio features using the Spotify API. The songs were split into four basic emotion categories: angry, calm, happy and sad. Both deep learning and transfer learning approaches were employed to build models that predict a song’s emotion from its lyrics and audio features. Results showed an improvement in accuracy when the two models’ predictions were combined. Furthermore, given the deterioration in mental health worldwide, music recommender systems can benefit from an enhanced classification model to recommend music that can improve people’s mood. As such, a simple desktop application was also developed to recommend music to users based on their facial emotions, detected in real time. The application integrated the combined model predictions for music recommendation and used the Spotify API to generate playlists. Bachelor's degree 2024-04-21T10:35:29Z 2024-04-21T10:35:29Z 2024 Final Year Project (FYP) Quah, J. (2024). Music recommender system based on emotions from facial expression. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175211 https://hdl.handle.net/10356/175211 en application/pdf Nanyang Technological University |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Computer and Information Science; Music classification; Music recommender system; Deep learning |
description | Music classification algorithms have become an important component of music systems. Although current research has had some success in using audio features to classify music, other crucial musical components, such as a song’s lyrics, have received little analysis. Song lyrics can reveal the artist’s intention, which may not be fully conveyed through audio features alone. Hence, this paper explores the extent to which song lyrics can further improve the accuracy of emotion-based music classification. The dataset was created by scraping song lyrics from Genius and extracting audio features using the Spotify API. The songs were split into four basic emotion categories: angry, calm, happy and sad. Both deep learning and transfer learning approaches were employed to build models that predict a song’s emotion from its lyrics and audio features. Results showed an improvement in accuracy when the two models’ predictions were combined. Furthermore, given the deterioration in mental health worldwide, music recommender systems can benefit from an enhanced classification model to recommend music that can improve people’s mood. As such, a simple desktop application was also developed to recommend music to users based on their facial emotions, detected in real time. The application integrated the combined model predictions for music recommendation and used the Spotify API to generate playlists. |
author2 | Owen Noel Newton Fernando |
format | Final Year Project |
author | Quah, Joey |
title | Music recommender system based on emotions from facial expression |
publisher | Nanyang Technological University |
publishDate | 2024 |
url | https://hdl.handle.net/10356/175211 |
_version_ | 1800916097553137664 |
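
The description above states that classification accuracy improved when the lyric model’s and audio model’s predictions were combined, but the record does not say how the combination was performed. The following is a minimal late-fusion sketch in Python, assuming each model outputs a probability distribution over the four emotion classes named in the abstract; the function name `fuse_predictions`, the weighted-average scheme, and the default weight of 0.5 are illustrative assumptions, not details taken from the project.

```python
import numpy as np

# The four emotion classes named in the record's abstract.
EMOTIONS = ["angry", "calm", "happy", "sad"]

def fuse_predictions(lyric_probs, audio_probs, lyric_weight=0.5):
    """Late fusion by weighted averaging of two class-probability vectors.

    lyric_probs, audio_probs: length-4 arrays that each sum to 1.
    lyric_weight: hypothetical mixing weight; the project's actual
    combination scheme is not documented in this record.
    """
    lyric_probs = np.asarray(lyric_probs, dtype=float)
    audio_probs = np.asarray(audio_probs, dtype=float)
    fused = lyric_weight * lyric_probs + (1.0 - lyric_weight) * audio_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Toy example: the lyric model leans "sad", the audio model leans "calm";
# the fused distribution still favours "sad".
label, fused = fuse_predictions([0.10, 0.25, 0.05, 0.60],
                                [0.05, 0.55, 0.10, 0.30])
print(label, fused)  # sad [0.075 0.4 0.075 0.45]
```

Weighted averaging is only one common way to combine classifier outputs; the project could equally have used majority voting or a stacked meta-classifier, which this record does not specify.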