Emotional Classification of Acoustic Information With Optimal Feature Subset Selection Methods

  • Authors

    • Dr Swarna Kuchibhotla
    • Mr Niranjan M.S.R
    2018-05-31
    https://doi.org/10.14419/ijet.v7i2.32.13521
  • MFCC, SFS, SFFS, KNN
  • This paper focuses on the classification of various acoustic emotional corpora with frequency-domain features using feature subset selection methods. The emotional speech samples are classified into neutral, happy, fear, anger, disgust and sad states using statistical properties of spectral features estimated from Berlin and Spanish emotional utterances. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) feature subset selection algorithms are used to extract the most informative features. In both the Berlin and Spanish corpora, the number of emotional speech samples available for training is smaller than the number of features extracted from each sample, a situation known as the curse of dimensionality. Because of this high-dimensional feature vector, the efficiency of the classifier decreases and the computation time increases. To further improve the efficiency of the classifier, an optimal subset of features is obtained using feature subset selection methods; this enhances the performance of the system, giving higher efficiency and lower computation time. The classifier used in this work is the standard K-Nearest Neighbour (KNN) classifier. Experimental evaluation showed that the performance of the classifier is enhanced with SFFS because it eliminates the nesting effect from which SFS suffers. The results also showed that an optimal feature subset is a better choice for classification than the full feature set.
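
    As a concrete illustration of the spectral front end, the sketch below computes per-utterance statistics over MFCC trajectories. It is a minimal sketch, assuming librosa for MFCC extraction and mean/standard deviation/min/max as the statistics; the paper describes statistical properties of spectral features without naming a toolchain or fixing the exact statistic list, so both choices here are illustrative.

      import numpy as np
      import librosa  # assumed toolchain; the paper does not name one

      def mfcc_statistics(wav_path, n_mfcc=13):
          # Load the utterance at its native sampling rate.
          signal, sr = librosa.load(wav_path, sr=None)
          # MFCC trajectories: shape (n_mfcc, number_of_frames).
          mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
          # Collapse the time axis into fixed-length statistics: one value
          # per coefficient and statistic, i.e. 4 * n_mfcc features.
          return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                                 mfcc.min(axis=1), mfcc.max(axis=1)])

    Stacking these vectors over a corpus gives a feature matrix whose column count easily exceeds the number of labelled utterances, which is the small-sample situation the abstract describes.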

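    The wrapper search itself can be sketched as a generic SFS/SFFS loop around a KNN criterion. The version below is an illustration of the textbook procedure, not the authors' exact implementation; it assumes scikit-learn, a feature matrix X (rows are utterances, columns are spectral statistics such as those above) and labels y.

      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      def sffs(X, y, k_target, floating=True, cv=5):
          # Wrapper criterion: mean cross-validated accuracy of a KNN
          # classifier restricted to the candidate feature columns.
          def score(feats):
              knn = KNeighborsClassifier(n_neighbors=5)
              return cross_val_score(knn, X[:, feats], y, cv=cv).mean()

          selected, best_at_size = [], {}  # best score seen per subset size
          while len(selected) < k_target:
              # Inclusion (plain SFS step): add the single feature whose
              # addition maximises the criterion.
              remaining = [f for f in range(X.shape[1]) if f not in selected]
              best_f = max(remaining, key=lambda f: score(selected + [f]))
              selected.append(best_f)
              best_at_size[len(selected)] = max(
                  best_at_size.get(len(selected), -1.0), score(selected))

              # Conditional exclusion (SFFS only): drop an earlier feature
              # whenever the reduced subset beats the best subset of that
              # size found so far. This is what undoes "nested" choices.
              while floating and len(selected) > 2:
                  cand = [(score([f for f in selected if f != g]), g)
                          for g in selected[:-1]]
                  s, g = max(cand)
                  if s > best_at_size.get(len(selected) - 1, -1.0):
                      selected.remove(g)
                      best_at_size[len(selected)] = s
                  else:
                      break
          return selected

    With floating=False the exclusion loop is skipped and the procedure reduces to plain SFS [28]; the conditional exclusion is the floating step of Pudil et al. [17], and discarding once-useful features that later become redundant is exactly how SFFS avoids the nesting effect mentioned in the abstract.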

  • References

      [1] Felix Burkhardt, Astrid Paeschke, Miriam Rolfes, Walter F Sendlmeier, and Benjamin Weiss. A database of German emotional speech. In Interspeech, pages 1517–1520, 2005.

      [2] Roddy Cowie, Ellen Douglas-Cowie, Nicolas Tsapatsoulis, George Votsis, Stefanos Kollias, Winfried Fellenz, and John G Taylor. Emotion recognition in human-computer interaction. Signal Processing Magazine, IEEE, 18(1):32–80, 2001.

      [3] Bradley Efron and Robert J Tibshirani. An Introduction to the Bootstrap, volume 57. CRC Press, 1994.

      [4] Vladimir Hozjan, Zdravko Kacic, Asunción Moreno, Antonio Bonafonte, and Albino Nogueiras. Interface databases: Design and collection of a multilingual emotional speech database. In LREC, 2002.

      [5] Alejandro Jaimes and Nicu Sebe. Multimodal human–computer interaction: A survey. Computer Vision and Image Understanding, 108(1):116–134, 2007.

      [6] Anil Jain and Douglas Zongker. Feature selection: Evaluation, application, and small sample performance. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 19(2):153–158, 1997.

      [7] George H John, Ron Kohavi, Karl Pfleger, et al. Irrelevant features and the subset selection problem. In ICML, volume 94, pages 121–129, 1994.

      [8] Ron Kohavi and George H John. Wrappers for feature subset selection. Artificial Intelligence, 97(1):273–324, 1997.

      [9] Swarna Kuchibhotla, HD Vankayalapati, RS Vaddi, and KR Anne. A comparative analysis of classifiers in emotion recognition through acoustic features. International Journal of Speech Technology, pages 1–8, 2014.

      [10] Swarna Kuchibhotla, BS Yalamanchili, HD Vankayalapati, and KR Anne. Speech emotion recognition using regularized discriminant analysis. In Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2013, pages 363–369. Springer, 2014.

      [11] Swarna Kuchibhotla, Hima Deepthi Vankayalapati, BhanuSree Yalamanchili, and Koteswara Rao Anne. Analysis and evaluation of discriminant analysis techniques for multiclass classification of human vocal emotions. In Advances in Intelligent Informatics, pages 325–333. Springer, 2015.

      [12] Huan Liu and Lei Yu. Toward integrating feature selection algorithms for classification and clustering. Knowledge and Data Engineering, IEEE Transactions on, 17(4):491–502, 2005.

      [13] Iker Luengo, Eva Navas, and Inmaculada Hernáez. Feature analysis and evaluation for automatic emotion identification in speech. Multimedia, IEEE Transactions on, 12(6):490–501, 2010.

      [14] Iain R Murray and John L Arnott. Toward the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion. The Journal of the Acoustical Society of America, 93:1097, 1993.

      [15] Maja Pantic and Leon JM Rothkrantz. Toward an affect-sensitive multimodal human-computer interaction. Proceedings of the IEEE, 91(9):1370–1390, 2003.

      [16] Jouni Pohjalainen, Okko Räsänen, and Serdar Kadioglu. Feature selection methods and their combinations in high-dimensional classification of speaker likability, intelligibility and personality traits. Computer Speech & Language, 2013.

      [17] Pavel Pudil, Jana Novovičová, and Josef Kittler. Floating search methods in feature selection. Pattern Recognition Letters, 15(11):1119–1125, 1994.

      [18] Nobuo Sato and Yasunari Obuchi. Emotion recognition using mel-frequency cepstral coefficients. Information and Media Technologies, 2(3):835–848, 2007.

      [19] Mohammad Hossein Sedaaghi, Dimitrios Ververidis, and Constantine Kotropoulos. Improving speech emotion recognition using adaptive genetic algorithms. In Proc. European Signal Processing Conference (EUSIPCO), Poland, 2007.

      [20] Petr Somol, Pavel Pudil, Jana Novovičová, and Pavel Paclík. Adaptive floating search methods in feature selection. Pattern Recognition Letters, 20(11):1157–1163, 1999.

      [21] Jianhua Tao and Tieniu Tan. Affective computing: A review. In Affective Computing and Intelligent Interaction, pages 981–995. Springer, 2005.

      [22] Sergios Theodoridis and Konstantinos Koutroumbas. Pattern recognition. IEEE Transactions on Neural Networks, 19(2):376, 2008.

      [23] HD Vankayalapati, KR Anne, and K Kyamakya. Extraction of visual and acoustic features of the driver for monitoring driver ergonomics applied to extended driver assistance systems. In Data and Mobility, pages 83–94. Springer, 2010.

      [24] HD Vankayalapati, KR Anne, VR Siddha, and K Kyamakya. Driver emotion detection from the acoustic features of the driver for real-time assessment of driving ergonomics process. International Society for Advanced Science and Technology (ISAST) Transactions on Computers and Intelligent Systems, 3(1):65–73, 2011.

      [25] Dimitrios Ververidis and Constantine Kotropoulos. Fast sequential floating forward selection applied to emotional speech features estimated on DES and SUSAS data collections. In Proc. XIV European Signal Processing Conference, 2006.

      [26] Dimitrios Ververidis and Constantine Kotropoulos. Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition. Signal Processing, 88(12):2956–2970, 2008.

      [27] Thurid Vogt, Elisabeth André, and Johannes Wagner. Automatic recognition of emotions from speech: a review of the literature and recommendations for practical realisation. In Affect and Emotion in Human-Computer Interaction, pages 75–91. Springer, 2008.

      [28] A Wayne Whitney. A direct method of nonparametric measurement selection. Computers, IEEE Transactions on, 100(9):1100–1103, 1971.

  • How to Cite

    Kuchibhotla, S., & Niranjan, M. S. R. (2018). Emotional Classification of Acoustic Information With Optimal Feature Subset Selection Methods. International Journal of Engineering & Technology, 7(2.32), 39–43. https://doi.org/10.14419/ijet.v7i2.32.13521