Author(s): Panda, Renato ; Paiva, Rui Pedro
Date: 2012
Persistent Identifier: https://hdl.handle.net/10316/95169
Source: Estudo Geral - Universidade de Coimbra
Project/grant: info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT
In this paper we present an approach to emotion classification in audio music. The process is conducted on a dataset of 903 clips with mood labels collected from the AllMusic database, organized into five clusters similar to those of the dataset used in the MIREX Mood Classification Task. Three different audio frameworks, Marsyas, MIR Toolbox and PsySound, were used to extract several features. These audio features and annotations were used with supervised learning techniques to train and test various classifiers based on support vector machines. To assess the importance of each feature, several combinations of features, obtained with feature selection algorithms or selected manually, were tested. The performance of the solution was measured with 20 repetitions of 10-fold cross-validation, achieving an F-measure of 47.2%, with a precision of 46.8% and a recall of 47.6%.
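As a minimal sketch of the evaluation protocol described in the abstract (not the authors' actual code), the following Python example trains an SVM with a simple feature selection step and scores it with 20 repetitions of stratified 10-fold cross-validation. The feature matrix and mood labels here are random placeholders standing in for the Marsyas/MIR Toolbox/PsySound features and the five-cluster annotations; the kernel, C, and k=40 values are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# Placeholder data: 903 clips x 60 audio features, 5 mood clusters.
# In the paper these would come from Marsyas, MIR Toolbox and PsySound.
rng = np.random.default_rng(0)
X = rng.normal(size=(903, 60))
y = rng.integers(0, 5, size=903)

model = Pipeline([
    ("scale", StandardScaler()),               # SVMs are sensitive to feature scale
    ("select", SelectKBest(f_classif, k=40)),  # illustrative feature selection step
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

# 20 repetitions of stratified 10-fold cross-validation, as in the paper.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=20, random_state=42)
scores = cross_validate(
    model, X, y, cv=cv,
    scoring=("precision_macro", "recall_macro", "f1_macro"),
)

print(f"precision: {scores['test_precision_macro'].mean():.3f}")
print(f"recall:    {scores['test_recall_macro'].mean():.3f}")
print(f"F-measure: {scores['test_f1_macro'].mean():.3f}")
```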
This work was supported by the MOODetector project (PTDC/EIA-EIA/102185/2008), financed by the Fundação para a Ciência e a Tecnologia (FCT), Portugal.