Simultaneous evolutionary neural network-based automated video-based facial expression analysis

  • Abstract


    In real-life scenarios, facial expressions and emotions are responses to the external and internal events a person experiences. In Human-Computer Interaction (HCI), recognising the end user's expressions and emotions from streaming video plays a very important role. Such systems must track dynamic changes in facial movement quickly in order to deliver the required response. Facial Expression Recognition (FER) is very helpful in real-time applications such as physical fatigue detection based on face detection and expression analysis, for example driver fatigue detection to prevent road accidents. Expression-based physical fatigue analysis is outside the scope of this work; instead, a Simultaneous Evolutionary Neural Network (SENN) classification scheme is proposed for recognising human emotions and expressions. In this work, facial landmarks are first detected and tracked automatically in videos, with the face detected using an enhanced AdaBoost algorithm with Haar features. Then, to describe facial expression changes, geometric features are extracted, and the Local Binary Pattern (LBP) is computed to improve detection accuracy at a much lower feature dimensionality. To examine temporal facial expression changes, we apply SENN probabilistic classifiers, which analyse the facial expression in individual frames and then propagate the likelihoods over the course of the video to capture the temporal dynamics of expressions such as gladness, sadness, anger, and fear. The experimental results show that the proposed SENN scheme attains better results than existing recognition schemes such as the Time-Delay Neural Network with Support Vector Regression (TDNN-SVR) and SVR alone.
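
    The detection stage described above is, at its core, a Haar-feature cascade in the Viola-Jones/AdaBoost family [20]. The paper's "enhanced AdaBoost" variant is not spelled out here, so the following Python sketch uses OpenCV's stock pretrained cascade as a stand-in for that step; the video path and detector parameters are illustrative assumptions, not values from the paper.

        # Per-frame face detection with a pretrained Haar cascade (a
        # Viola-Jones-style AdaBoost detector; stands in for the paper's
        # enhanced variant).
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_faces(frame):
            """Return (x, y, w, h) face boxes for one BGR video frame."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # scaleFactor/minNeighbors are common defaults, not the paper's values.
            return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        cap = cv2.VideoCapture("input_video.mp4")  # hypothetical input clip
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for (x, y, w, h) in detect_faces(frame):
                face = frame[y:y + h, x:x + w]     # crop fed to feature extraction
        cap.release()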
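    For the appearance descriptor, a uniform LBP histogram over the detected face crop yields the kind of low-dimensional texture feature the abstract refers to. A minimal sketch, assuming scikit-image and common radius/neighbour settings rather than the paper's exact configuration:

        # Uniform-LBP histogram of a grayscale face crop; with P sampling
        # points the "uniform" encoding yields P + 2 pattern bins.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray_face, points=8, radius=1):
            codes = local_binary_pattern(gray_face, points, radius, method="uniform")
            hist, _ = np.histogram(codes, bins=points + 2,
                                   range=(0, points + 2), density=True)
            return hist  # compact feature vector, here of length 10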
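    The temporal step, propagating per-frame likelihoods through the video, can be pictured as a recursive Bayes filter over the four expression classes. This is a hedged illustration of the idea, not the SENN update itself: the transition matrix and the per-frame likelihood vectors are assumed inputs.

        # Fuse per-frame expression likelihoods into smoothed posteriors.
        import numpy as np

        CLASSES = ["glad", "sad", "anger", "fear"]           # classes from the abstract
        TRANSITION = np.full((4, 4), 0.1) + 0.6 * np.eye(4)  # assumed sticky dynamics

        def propagate(frame_likelihoods):
            belief = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform prior
            posteriors = []
            for lik in frame_likelihoods:
                belief = (TRANSITION.T @ belief) * lik  # predict, then weight by evidence
                belief /= belief.sum()                  # renormalise
                posteriors.append(belief.copy())
            return posteriors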


  • Keywords


    Human-computer interaction (HCI), facial expressions and emotions, simultaneous evolutionary neural network (SENN), AdaBoost, geometric features, local binary pattern (LBP), classification.

  • References


      [1] Samad R & Sawada H, “Edge-based Facial Feature Extraction Using Gabor Wavelet and Convolution Filters”, MVA, pp.430-433, (2011).

      [2] Thai LH, Nguyen NDT & Hai TS, “A facial expression classification system integrating canny, principal component analysis and artificial neural network”, International Journal of Machine Learning and Computing, Vol.1, No.4, pp.388-393, (2011).

      [3] Sisodia P, Akhilesh V & Sachin K, “Human Facial Expression Recognition using Gabor Filter Bank with Minimum Number of Feature Vectors”, International Journal of Applied Information Systems, Vol.5, No. 9, pp.9-13, (2013).

      [4] Jun Y & Zengfu W, “A Video-Based Facial Motion Tracking and Expression Recognition System”, Springer Science+Business Media New York, (2016).

      [5] Dey A, “Contour based Procedure for Face Detection and Tracking from Video”, 3rd Int'l Conf. on Recent Advances in Information Technology (RAIT), (2016).

      [6] Le DN, Van Chung L & Nguyen GN, “Performance evaluation of video-based face recognition approaches for online video contextual advertisement user-oriented system”, Information Systems Design and Intelligent Applications, pp.287-295, (2016).

      [7] Haque MA, Irani R, Nasrollahi K & Moeslund TB, “Facial video-based detection of physical fatigue for maximal muscle activity”, IET Computer Vision, (2016).

      [8] Hajati F, Tavakolian M, Gheisari S, Gao Y & Mian AS, “Dynamic Texture Comparison Using Derivative Sparse Representation: Application to Video-Based Face Recognition”, IEEE Transactions on Human-Machine Systems, (2017).

      [9] Chen J, Chen Z, Chi Z & Fu H, “Facial expression recognition in video with multiple feature fusion”, IEEE Transactions on Affective Computing, (2016).

      [10] Meng H, Bianchi-Berthouze N, Deng Y, Cheng J & Cosmas JP, “Time-delay neural network for continuous emotional dimension prediction from facial expression sequences”, IEEE transactions on cybernetics, Vol.46, No.4, pp.916-929, (2016).

      [11] Chiranjeevi P, Gopalakrishnan V & Moogi P, “Neutral face classification using personalized appearance models for fast and robust emotion detection”, IEEE Transactions on Image Processing, Vol.24, No.9, pp.2701-2711, (2015).

      [12] Hayat M & Bennamoun M, “An automatic framework for textured 3D video-based facial expression recognition”, IEEE Transactions on Affective Computing, Vol.5, No.3, pp.301-313, (2014).

      [13] El Meguid MKA & Levine MD, “Fully Automated Recognition of Spontaneous Facial Expressions in Videos Using Random Forest Classifiers”, Affective Computing, Vol.5, pp.418-431, (2014).

      [14] Nicolaou MA, Gunes H & Pantic M, “Output-associative RVM regression for dimensional and continuous emotion prediction”, Image and Vision Computing (Best of Automatic Face and Gesture Recognition), Vol.30, No.3, pp.186–196, (2012).

      [15] Eyben F, Petridis S, Schuller B, Tzimiropoulos G, Zafeiriou S & Pantic M, “Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks”, IEEE International Conference on Acoustics, Speech and Signal Processing, pp.5844-5847, (2011).

      [16] Nicolaou MA, Gunes H & Pantic M, “Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space”, IEEE Trans. Affect. Comput., Vol.2, No.2, pp.92–105, (2011).

      [17] Christopher ACK, Barrett F, Gur RE, Gur RC & Verma R, “Computerized Measurement of Facial Expression of Emotions in Schizophrenia”, Journal of Neuroscience Methods, (2007).

      [18] Cootes TF, Edwards GJ & Taylor CJ, “Active Appearance Models”, IEEE Trans on PAMI, Vol.23, No.6, pp.681–685, (2001).

      [19] Yang MH, Kriegman DJ & Ahuja N, “Detecting Faces in Images: A Survey”, IEEE Trans on PAMI, Vol.24, No.1, pp.34–58, (2002).

      [20] Viola P & Jones M, “Robust Real-time Object Detection”, International Journal of Computer Vision, Vol.57, No.2, pp.137–154, (2004).

      [21] Li SZ & Zhang Z, “FloatBoost Learning and Statistical Face Detection”, IEEE Trans on PAMI, Vol.26, No.9, pp.1112–1123, (2004).

      [22] Wang P & Ji Q, “Learning Discriminant Features for Multi-View Face and Eye Detection”, Computer Vision and Image Understanding, Vol.105, No.2, pp.99–111, (2007).

      [23] Blei DM, Ng AY & Jordan MI, “Latent dirichlet allocation”, J. Mach. Learn. Res., Vol.3, pp.993–1022, (2003).

      [24] Rocha M, Cortez P & Neves J, “Simultaneous evolution of neural network topologies and weights for classification and regression”, International Work-Conference on Artificial Neural Networks, pp. 59-66, (2005).

      [25] Riedmiller M, “Advanced supervised learning in multi-layer perceptrons-from backpropagation to adaptive learning algorithms”, Computer Standards & Interfaces, Vol.16, No.3, pp. 265-278, (1994).

      [26] Andreasen NC, Scale for the Assessment of Negative Symptoms (SANS), Iowa City: University of Iowa, (1984).

      [27] Yeasin M, Bullot B & Sharma R, “From facial expression to level of interest: a spatio-temporal approach”, IEEE Conference on Computer Vision and Pattern Recognition, (2004).

      [28] Wolf L, Hassner T & Maoz I, “Face recognition in unconstrained videos with matched background similarity”, Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp.529–534, (2011).


Article ID: 9211
 
DOI: 10.14419/ijet.v7i1.1.9211
