Emotion based mental retardation recognition framework (EMRRF) using HPSO-ANN technique

  • Authors

    • A. Vijaya Kumar, Vinayaka Missions Research Foundation
    • Dr. R. Ponnusamy, CVR College of Engineering, Hyderabad, Telangana, India
    2018-12-17
    https://doi.org/10.14419/ijet.v7i4.20358
  • Keywords: Background Subtraction, Emotion Recognition, Feature Extraction, Mental Retardation, Video Processing.
  • Abstract: Linking human emotion with the reduced brain skill of a particular person has become a socially important concept. Various research studies have been conducted to predict mental retardation, and they conclude that human emotions can be used to predict it successfully. This is because mentally retarded people have less controllable emotional states than people of normal mental age. Mentally retarded people cannot control their facial emotions, which makes those emotions more difficult to decode. Finding the stable emotions of such people can be used to predict their requirements. No research work has been available that accurately predicts the emotional behaviour of these people. The main goal of this research work is to introduce a system that accurately predicts the varying emotional states of people. This is attained by introducing a new framework, namely the Emotion based Mental Retardation Recognition Framework (EMRRF), which can recognize different kinds of emotions. In this work, input videos are first preprocessed to separate the required object from noisy pixels and background regions. After preprocessing, feature extraction is performed to predict emotions, where the extracted features are color, texture, and shape features. The extracted features are learned by applying the Hybridized Particle Swarm Optimization and Artificial Neural Network (HPSO-ANN) technique to ensure accurate prediction of the emotional state of the required object present in the video. The overall experimentation is carried out in the MATLAB simulation environment, from which it is shown that the proposed method leads to better results than the existing research works.
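The abstract outlines a preprocessing stage (separating the object from background and noise) followed by extraction of color, texture, and shape features. The paper's exact algorithms are not given in this abstract, so the sketch below is a minimal illustration only: it assumes simple frame differencing as the background-subtraction step and a per-channel color histogram over foreground pixels as one representative feature. The function names, threshold, and bin count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Mark pixels whose deviation from the background model exceeds a threshold.
    (Frame differencing is one simple form of background subtraction.)"""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold  # boolean foreground mask

def color_histogram(frame, mask, bins=8):
    """Per-channel color histogram computed over foreground pixels only."""
    fg = frame[mask]  # (N, 3) array of foreground pixels
    hists = [np.histogram(fg[:, c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-9)  # normalized feature vector

# Toy example: a uniform gray background with a bright square "object".
background = np.full((32, 32, 3), 100, dtype=np.uint8)
frame = background.copy()
frame[8:24, 8:24] = [200, 50, 50]

mask = background_subtract(frame, background)      # isolates the square
features = color_histogram(frame, mask)            # 3 channels x 8 bins = 24 values
```

Texture and shape descriptors (e.g. local binary patterns or moment-based shape features, both surveyed in the references) would be concatenated with such a color vector before classification.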


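The abstract names the HPSO-ANN classifier but does not specify how the hybridization is carried out. A common formulation of PSO-ANN hybrids, and presumably the core idea here, is to let Particle Swarm Optimization search the weight space of a small feed-forward neural network instead of using gradient descent. The sketch below illustrates that idea on a toy binary task; the network size, swarm hyperparameters, and mean-squared-error fitness are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_forward(weights, X, n_in, n_hid):
    """Single-hidden-layer ANN; `weights` is a flat parameter vector."""
    W1 = weights[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = weights[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = weights[n_in * n_hid + n_hid:-1].reshape(n_hid, 1)
    b2 = weights[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def pso_train(X, y, n_in, n_hid, n_particles=30, iters=200):
    """PSO over ANN weights: each particle is a candidate weight vector,
    fitness is mean squared error on the training data."""
    dim = n_in * n_hid + n_hid + n_hid + 1
    pos = rng.normal(0.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_err = pos[0].copy(), np.inf
    for _ in range(iters):
        for i in range(n_particles):
            err = np.mean((ann_forward(pos[i], X, n_in, n_hid).ravel() - y) ** 2)
            if err < pbest_err[i]:
                pbest_err[i], pbest[i] = err, pos[i].copy()
            if err < gbest_err:
                gbest_err, gbest = err, pos[i].copy()
        # standard PSO velocity update: inertia + cognitive + social terms
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
    return gbest, gbest_err

# Toy stand-in for emotion features: an XOR-like binary classification.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
w, err = pso_train(X, y, n_in=2, n_hid=4)
pred = (ann_forward(w, X, 2, 4).ravel() > 0.5).astype(float)
```

In a full pipeline, `X` would hold the extracted color/texture/shape feature vectors and `y` the labeled emotional states; hybrid variants typically refine the PSO result with a few gradient steps, but that detail is not specified in the abstract.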
  • References

      [1] Russel JA, "Core affect and psychological construction of emotion", Psychol Rev. 2003; 110:145–172. https://doi.org/10.1037/0033-295X.110.1.145.

      [2] Shackman JE, Pollak SD, "Experiential influences on multimodal perception of emotions", Child Dev. 2005; 76:pp.1116–1126. https://doi.org/10.1111/j.1467-8624.2005.00901.x.

      [3] Lini Joseph, "Recognition and Understanding of Emotions in Persons with Mild to Moderate Mental Retardation", 2015; Springer India Pvt. Ltd., Vol.2, pp.59-66.

      [4] Ramchand Hablani, Narendra Chaudhari & Sanjay Tanwani, "Recognition of Facial Expressions using Local Binary Patterns of Important Facial Parts", International Journal of Image Processing (IJIP), Volume (7): Issue (2): 2013; pp.163-167.

      [5] Caifeng Shan, Shaogang Gong, Peter W. McOwan, "Facial expression recognition based on Local Binary Patterns: A comprehensive study", Image and Vision Computing 27, 2009, pp.803–816. https://doi.org/10.1016/j.imavis.2008.08.005.

      [6] Talele Kiran, Tuckley Kushal, "Facial Expression Classification using Support Vector Machine Based on Bidirectional Local Binary Pattern Histogram Feature Descriptor", IEEE SNPD 2016, May 30-June 1, 2016, Shanghai, China, pp.1-4.

      [7] X. Feng, M. Pietikainen and A. Hadid, "Facial Expression Recognition with Local Binary Patterns and Linear Programming", Pattern Recognition and Image Analysis, Vol. 15, No. 2, 2005, pp.546-548.

      [8] De Vries BBA, White SM, Knight SJL, Regan R, Homphray T, Young JD, Super M, McKeown C, Splitt M, Quarrell OWJ, Trainer AH, Niermeijer MF, Malcolm S, Flint J, Hurst JA, Winter RM, "Clinical studies on submicroscopic subtelomeric rearrangements: a checklist", J Med Genet, 2001, 38:pp.145–150. https://doi.org/10.1136/jmg.38.3.145.

      [9] Hippolyte, L., Barisnikov, K., Van der Linden, M., & Detraux, J. J., "Facial emotional recognition abilities to emotional attribution: A study in Down syndrome", Research in Developmental Disabilities, 2009, 30, pp.1007-1022. https://doi.org/10.1016/j.ridd.2009.02.004.

      [10] Carla C. V. P. de Santana, Wania C. de Souza, and M. Angela G. Feitosa, "Recognition of facial emotional expressions and its correlation with cognitive abilities in children with Down syndrome", Psychology & Neuroscience, 2014, Vol.7, No.2, pp.73-81. https://doi.org/10.3922/j.psns.2014.017.

      [11] Deshmukh and Fadewar, "Analysis of Mental Illness Using Facial Expressions", International Journal of Innovative Research in Computer and Communication Engineering, Vol. 5, Issue 1, January 2017.

      [12] Guadalupe Elizabeth Morales, Ernesto Octavio Lopez, Claudia Castro-Campos, David Jose Charles and Yanko Norberto Mezquita-Hoyos, "Contributions to the Cognitive Study of Facial Recognition on Down Syndrome: A New Approximation to Exploring Facial Emotion Processing Style", Journal of Intellectual Disability, 2014, Vol.2, pp.124-132.

      [13] Kim, J. and André, E., "Emotion recognition based on physiological changes in music listening", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(12), pp.2067-2083. https://doi.org/10.1109/TPAMI.2008.26.

      [14] Harrison, A., Sullivan, S., Tchanturia, K. and Treasure, J., "Emotional functioning in eating disorders: attentional bias, emotion recognition and emotion regulation", Psychological Medicine, 2010, 40(11), pp.1887-1897. https://doi.org/10.1017/S0033291710000036.

      [15] Edwards, J., Jackson, H.J. and Pattison, P.E., "Emotion recognition via facial expression and affective prosody in schizophrenia: a methodological review", Clinical Psychology Review, 22(6), 2002, pp.789-832. https://doi.org/10.1016/S0272-7358(02)00130-7.

      [16] Revathi, R. and Hemalatha, M., "An Emerging Trend of Feature Extraction Method in Video Processing", Academy & Industry Research Collaboration Center (AIRCC), CSCP, 2012, pp.69–80.

      [17] Kim, H., Sakamoto, R., Kitahara, I., Toriyama, T. and Kogure, K., "Robust foreground segmentation from color video sequences using background subtraction with multiple thresholds", Proc. KJPR, 2006, pp.188-193.

      [18] Yang, M., Kpalma, K. and Ronsin, J., "A survey of shape feature extraction techniques", thesis, 2008, pp.1-38.

      [19] Maire, M.R., "Contour detection and image segmentation", University of California, Berkeley, 2009, pp.1-88.

      [20] Gupta, N. and Athavale, V.A., "Comparative Study of Different Low Level Feature Extraction Techniques for Content based Image Retrieval", International Journal of Computer Technology and Electronics Engineering (IJCTEE), 2011, Volume 1, Issue 1, pp.39-42.

      [21] Gomashe, A.S. and Keole, R., "A Novel Approach of Color Histogram Based Image Search/Retrieval", International Journal of Computer Science and Mobile Computing, 2015, pp.57-65.

      [22] Arif, T., Shaaban, Z., Krekor, L. and Baba, S., "Object Classification via Geometrical, Zernike and Legendre Moments", Journal of Theoretical & Applied Information Technology, 6(3), 2009, pp.031–037.

      [23] Choras, R.S., "Image feature extraction techniques and their applications for CBIR and biometrics systems", International Journal of Biology and Biomedical Engineering, 2007, 1(1), pp.6-16.

      [24] Zand, M., Doraisamy, S., Halin, A.A. and Mustaffa, M.R., "Texture classification and discrimination for region-based image retrieval", Journal of Visual Communication and Image Representation, 2015, 26, pp.305-316. https://doi.org/10.1016/j.jvcir.2014.10.005.

      [25] Patel, M.N. and Tandel, P., "A Survey on Feature Extraction Techniques for Shape based Object Recognition", International Journal of Computer Applications, 2016, 137(6), pp.16-20.

      [26] Singh, P., Gupta, V.K. and Hrisheekesha, P.N., "A review on shape based descriptors for image retrieval", International Journal of Computer Applications, 2015, 125(10), pp.27-32.

      [27] Huynh-Thu, Q. and Ghanbari, M., "Scope of validity of PSNR in image/video quality assessment", Electronics Letters, 2008, 44(13), pp.800-801. https://doi.org/10.1049/el:20080522.

      [28] Poobathy, D. and Chezian, R.M., "Edge detection operators: Peak signal to noise ratio based comparison", International Journal of Image, Graphics and Signal Processing, 2014, 10, pp.55-61. https://doi.org/10.5815/ijigsp.2014.10.07.

      [29] Köksoy, O., "Multi-response robust design: Mean square error (MSE) criterion", Applied Mathematics and Computation, 2006, 175(2), pp.1716-1729. https://doi.org/10.1016/j.amc.2005.09.016.

      [30] Sarbishei, O. and Radecka, K., "Analysis of mean-square-error (MSE) for fixed-point FFT units", IEEE International Symposium on Circuits and Systems (ISCAS), 2011, pp.1732-1735. https://doi.org/10.1109/ISCAS.2011.5937917.

  • How to Cite

    Vijaya Kumar, A., & Ponnusamy, R. (2018). Emotion based mental retardation recognition framework (EMRRF) using HPSO-ANN technique. International Journal of Engineering & Technology, 7(4), 4151-4156. https://doi.org/10.14419/ijet.v7i4.20358