Local Temporal Block Difference Pattern for Action Recognition in Surveillance Videos using Tree Based Classifiers

  • Authors

    • Poonkodi M, SRM Institute of Science and Technology
    • Vadivu G, SRM Institute of Science and Technology
    Published: 2018-07-10
    DOI: https://doi.org/10.14419/ijet.v7i3.11645
  • Keywords: Action Recognition, Decision Tree, Random Forest, REPTree, Temporal Difference.
  • Intelligent video classification and prediction is a fundamental step towards an effective retrieval system. A huge volume of video is available today, and managing such video and predicting an activity before its completion is important in video surveillance, human-computer interaction, gesture recognition, and related applications. A Local Temporal Block Difference Pattern (LTBDP) is introduced which enables efficient feature extraction; the extracted features are given to tree-based classifiers such as Random Forest and REPTree for prediction. The proposed pattern has been evaluated on the UT-Interaction dataset, enabling ongoing human actions to be predicted efficiently. Experimental results using LTBDP with the Random Forest and REPTree classifiers give 85.6% and 66.45% accuracy, respectively.
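
    The abstract outlines a pipeline in which block-wise temporal difference features are extracted from video frames and passed to tree classifiers. The exact LTBDP formulation is not reproduced on this page, so the Python sketch below is an illustration only: it uses mean absolute frame differences per spatial block as a stand-in feature and trains scikit-learn's RandomForestClassifier on the resulting vectors. The helper block_difference_features, the 4x4 grid, and the toy data are hypothetical.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def block_difference_features(frames, grid=(4, 4)):
            """Mean absolute temporal difference per spatial block of a clip.

            frames: array of shape (T, H, W) holding grayscale frames.
            Returns a feature vector of length grid[0] * grid[1].
            (Stand-in for LTBDP; the actual descriptor is defined in the paper.)
            """
            frames = np.asarray(frames, dtype=np.float32)
            diffs = np.abs(np.diff(frames, axis=0))      # temporal differences, shape (T-1, H, W)
            _, h, w = diffs.shape
            gh, gw = grid
            feats = []
            for i in range(gh):
                for j in range(gw):
                    block = diffs[:, i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
                    feats.append(block.mean())           # one value per spatial block
            return np.array(feats)

        # Toy usage: random clips stand in for real surveillance video.
        rng = np.random.default_rng(0)
        X = np.stack([block_difference_features(rng.random((20, 64, 64))) for _ in range(40)])
        y = rng.integers(0, 2, size=40)                  # dummy action labels

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y)
        print(clf.predict(X[:5]))

    A REPTree-style learner (available in Weka) could be swapped in for the Random Forest at the classification step without changing the feature extraction.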



  • How to Cite

    M, P., & G, V. (2018). Local Temporal Block Difference Pattern for Action Recognition in Surveillance Videos using Tree Based Classifiers. International Journal of Engineering & Technology, 7(3), 1405-1409. https://doi.org/10.14419/ijet.v7i3.11645