Automatic music mood classification
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2010
Subjects:
Online Access: http://hdl.handle.net/10356/40345
Institution: Nanyang Technological University
Summary: A new technology for classifying music resources is needed to search for and manage music pieces more effectively, as traditional music resource management requires a great deal of manual work and is too time-consuming. Recently, many researchers have studied automatic classification based on music mood; such work is significant because musical content is a good representation of human emotion. The emotion plane proposed by Thayer, which defines emotion classes dimensionally in terms of arousal and valence, is commonly adopted to avoid ambiguity in describing musical emotion. Another investigation of music mood perception shows that humans perceive musical emotions in the way that most closely matches Hevner's categorization of music mood.
Empirical studies also show that the main features influencing human perception of musical emotion are tempo, pitch and articulation. These three audio features are therefore extracted in order to classify a piece of music into the correct emotional-expression category. The proposed mood-classification approach is implemented with MIRtoolbox, integrated within the MATLAB software and dedicated to extracting musical features such as tonality and rhythm from audio files.
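For illustration, a minimal MATLAB sketch of this extraction step is given below, assuming MIRtoolbox is installed and on the path; miraudio, mirtempo, mirpitch, mirattacktime and mirgetdata are standard MIRtoolbox calls, but treating attack time as the articulation measure is an assumption here, not the project's documented choice.

```matlab
% Hedged sketch: extract the three features named in the abstract with
% MIRtoolbox (assumed to be on the MATLAB path). Using attack time as an
% articulation proxy is an assumption, not the project's documented choice.
a     = miraudio('song.wav');           % load one music piece
tempo = mirgetdata(mirtempo(a));        % estimated tempo in BPM
pitch = mirgetdata(mirpitch(a));        % frame-wise pitch estimates in Hz
artic = mirgetdata(mirattacktime(a));   % per-note attack times (articulation proxy)

% Summarize each feature into one number per piece
featureVector = [mean(tempo(:), 'omitnan'), ...
                 mean(pitch(:), 'omitnan'), ...
                 mean(artic(:), 'omitnan')];
```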
A database containing music pieces from all 8 clusters of emotional expression was constructed according to Hevner's categorization; within it, the training set was used for music feature extraction, while the testing set was used for classification and accuracy testing.
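The abstract does not name the classifier used, so the following is only a hedged sketch, assuming a simple nearest-centroid rule over the 8 Hevner clusters; trainFeatures, trainLabels and testFeature are hypothetical variable names introduced for illustration.

```matlab
% Hedged sketch, assuming a nearest-centroid rule over Hevner's 8 clusters.
% trainFeatures (N x 3), trainLabels (N x 1, values 1..8) and testFeature
% (1 x 3) are hypothetical names; the abstract does not specify the classifier.
centroids = zeros(8, 3);
for c = 1:8
    centroids(c, :) = mean(trainFeatures(trainLabels == c, :), 1);
end
dists = sum((centroids - testFeature).^2, 2);   % squared distance to each centroid
[~, predictedCluster] = min(dists);             % predicted cluster 1..8 for the test piece
```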
The results of this automatic music mood classification project are quite satisfactory, as the categorization outcomes are reasonably accurate. However, owing to limited time and the author's knowledge, some other emotion-related music features, such as tonality, were not extracted, which may reduce the classification accuracy. More effort is therefore needed to develop a better categorization approach in the future.