Acoustic feature selection for automatic emotion recognition from speech

Jia Rong, Gang Li, Yi Ping Phoebe Chen

Research output: Contribution to journal › Article › peer-review

105 Citations (Scopus)


Emotional expression and understanding are natural instincts of human beings, but automatic emotion recognition from speech, without reference to any language or linguistic information, remains an open problem. The limited size of existing emotional speech data sets, combined with their relatively high dimensionality, has defeated many dimensionality reduction and feature selection algorithms. This paper focuses on data preprocessing techniques that aim to extract the most effective acoustic features and thereby improve emotion recognition performance. A novel algorithm is presented that can be applied to a small data set with a high number of features; it integrates the advantages of a decision tree method and the random forest ensemble. Experimental results on a series of Chinese emotional speech data sets indicate that the presented algorithm achieves improved emotion recognition results and outperforms the commonly used Principal Component Analysis (PCA) and Multi-Dimensional Scaling (MDS) methods, as well as the more recently developed ISOMap dimensionality reduction method.
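The abstract describes selecting acoustic features via a decision-tree/random-forest combination rather than projection methods such as PCA. The paper's exact algorithm is not given here, so the following is only a minimal sketch of the general idea it builds on: ranking features by random-forest importance on a small, high-dimensional data set (the data, feature count, and `top_k` cutoff are illustrative assumptions, not values from the paper).

```python
# Hedged sketch: random-forest feature-importance ranking, a stand-in for the
# paper's decision-tree/random-forest feature selection (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 100, 40          # small sample size, many features
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 carry signal

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]  # most important first
top_k = ranking[:5]                      # keep the 5 highest-ranked features
print("selected features:", sorted(top_k.tolist()))
```

Unlike PCA or MDS, this keeps a subset of the original acoustic features rather than projecting them onto new axes, which preserves their interpretability.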

Original language: English
Pages (from-to): 315-328
Number of pages: 14
Journal: Information Processing and Management
Issue number: 3
Publication status: Published - 1 May 2009


  • Emotion recognition
  • Feature selection
  • Machine learning
