Document details

Music Emotion Classification: Dataset Acquisition and Comparative Analysis

Author(s): Panda, Renato ; Paiva, Rui Pedro

Date: 2012

Persistent ID: https://hdl.handle.net/10316/95169

Origin: Estudo Geral - Universidade de Coimbra

Project/scholarship: info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT


Description

In this paper, we present an approach to emotion classification in audio music. The process is conducted with a dataset of 903 clips and mood labels, collected from the AllMusic database and organized into five clusters similar to those used in the MIREX Mood Classification Task. Three different audio frameworks, Marsyas, MIR Toolbox and PsySound, were used to extract several features. These audio features and annotations are used with supervised learning techniques to train and test various classifiers based on support vector machines. To assess the importance of each feature, several combinations of features, obtained with feature selection algorithms or selected manually, were tested. The performance of the solution was measured with 20 repetitions of 10-fold cross-validation, achieving an F-measure of 47.2%, with a precision of 46.8% and a recall of 47.6%.
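To make the evaluation protocol concrete, the sketch below shows 20 repetitions of 10-fold cross-validation of an SVM classifier, scored with precision, recall and F-measure as described above. It assumes scikit-learn and uses random placeholder data in place of the extracted audio features and the five AllMusic mood clusters; it is an illustrative sketch, not the authors' implementation or feature set.

    # Sketch of the abstract's evaluation protocol: SVM classifier,
    # 20 x 10-fold cross-validation, macro precision/recall/F-measure.
    # X and y below are random placeholders, not the real dataset.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

    rng = np.random.default_rng(0)
    X = rng.normal(size=(903, 50))      # placeholder for extracted audio features
    y = rng.integers(0, 5, size=903)    # placeholder for the five mood clusters

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=20, random_state=0)
    scores = cross_validate(
        clf, X, y, cv=cv,
        scoring=("precision_macro", "recall_macro", "f1_macro"),
    )

    print("precision: %.3f" % scores["test_precision_macro"].mean())
    print("recall:    %.3f" % scores["test_recall_macro"].mean())
    print("F-measure: %.3f" % scores["test_f1_macro"].mean())

With real features from Marsyas, MIR Toolbox or PsySound in place of X, the same loop yields the averaged precision, recall and F-measure reported in the abstract.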

This work was supported by the MOODetector project (PTDC/EIA-EIA/102185/2008), financed by the Fundação para a Ciência e a Tecnologia, Portugal.

Document Type: Other
Language: English
