Key-frame extraction for summarization of surveillance footage by analysis of colour histograms

 
 
 
  • Abstract


    Every day, surveillance cameras around the world generate a vast amount of video content. This footage poses two major problems: it consumes a great deal of storage, even for unimportant segments (empty rooms, night-time recording, etc.), and it takes a long time to review. Our objective in this paper is to reduce the spatial and temporal redundancy of such video through a process known as video summarization. We propose a summarization algorithm for surveillance footage based on key-frame extraction, which compares consecutive frames of the video over certain frame descriptors. The algorithm avoids exhaustive comparison by applying K-means to the colour bins of each temporal shot, extracting the dominant colour bins and thereby the relevant sections of the footage. Experiments are performed on the i-LIDS dataset from AVSS (Advanced Video and Signal based Surveillance) 2007 and on the EC-funded CAVIAR project's city-surveillance dataset. Ground truth was used as the metric to judge the validity of the proposed algorithm, and its output is evaluated for precision, recall, and F-measure. We found that our algorithm satisfies the ground truth of all the video datasets and is fast enough for practical use. We conclude by presenting the results and comparisons, highlighting the precision and accuracy of our algorithm.
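    The pipeline sketched in the abstract, colour histograms as frame descriptors with consecutive-frame comparison to pick key frames, can be illustrated as follows. This is only a minimal sketch, not the authors' implementation: the bin count, the L1 distance, and the threshold are assumptions, and the paper's K-means step over the dominant colour bins of each shot is omitted.

```python
import numpy as np

def colour_histogram(frame, bins=8):
    """Quantise an RGB frame (H x W x 3, uint8) into a normalised
    colour histogram with bins**3 colour bins."""
    q = (frame // (256 // bins)).astype(np.int64)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def key_frames(frames, threshold=0.3):
    """Keep a frame as a key frame when its histogram differs from the
    last kept frame by more than `threshold` (L1 distance)."""
    keys = [0]
    prev = colour_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        hist = colour_histogram(frame)
        if np.abs(hist - prev).sum() > threshold:
            keys.append(i)
            prev = hist
    return keys
```

    On real footage the threshold would be tuned per dataset, and the frames would come from a video decoder such as OpenCV's VideoCapture.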

     

     


     

  • Keywords


    Frame descriptors; Key-frame extraction; Surveillance; Video summarization; Visual summary evaluation.
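    The visual-summary evaluation named above reduces, for key-frame sets, to standard precision, recall, and F-measure against ground-truth key-frame indices. A generic sketch (the frame indices in the example are hypothetical):

```python
def summary_scores(predicted, ground_truth):
    """Precision, recall and F-measure of a predicted key-frame set
    against ground-truth key-frame indices."""
    pred, gt = set(predicted), set(ground_truth)
    tp = len(pred & gt)  # correctly extracted key frames
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f_measure

# Hypothetical example: 3 of 4 predicted frames match 5 ground-truth frames.
p, r, f = summary_scores([10, 40, 90, 130], [10, 40, 90, 120, 150])
# p = 0.75, r = 0.6
```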

  • References


      [1] Zeinab Zeinalpour, Behrouz Minaei Bidgoli, Mahmud Fathi, “Video Summarization Using Genetic Algorithm and Information Theory”, 14th International CSI Computer Conference, 2009: 158-163.

      [2] Clive Norris. A review of the increased use of CCTV and video-surveillance for crime prevention purposes in Europe. Briefing Paper for Civil Liberties, Justice and Home Affairs.

      [3] Vikas Choudhary and Anil Kumar Tiwari. Surveillance video synopsis. In ICVGIP, pages 207–212. IEEE, 2008. https://doi.org/10.1109/ICVGIP.2008.84.

      [4] Zhonglan Wu and Pin Xu, “Research on the Technology of Video Key-Frame Extraction Based on Clustering”, IEEE Fourth International Conference on Multimedia Information Networking and Security, 2012, pp. 290-293.

      [5] Naveed Ejaz et al., “Adaptive key frame extraction for video summarization using an aggregation mechanism”, Journal of Visual Communication and Image Representation 23 (2012), pp. 1031-1040. https://doi.org/10.1016/j.jvcir.2012.06.013.

      [6] Michael J. Swain and Dana H. Ballard. Colour indexing. International Journal of Computer Vision, 7(1):11–32, November 1991. https://doi.org/10.1007/BF00130487.

      [7] Linda G. Shapiro and George C. Stockman. Computer Vision. Prentice Hall, 2001.

      [8] H.D. Cheng, X.H. Jiang, Y. Sun, and Jingli Wang. Colour image segmentation: advances and prospects. Pattern Recognition, 34(12):2259 – 2281, 2001. https://doi.org/10.1016/S0031-3203(00)00149-7.

      [9] Ofir Pele and Michael Werman. The quadratic-chi histogram distance family. In Kostas Daniilidis, Petros Maragos, and Nikos Paragios, editors, Computer Vision - ECCV 2010, volume 6312 of Lecture Notes in Computer Science, pages 749–762. Springer Berlin Heidelberg, 2010. https://doi.org/10.1007/978-3-642-15552-9_54.

      [10] Huayong Liu, Wenting Meng, Zhi Liu, “Key Frame Extraction of Online Video Based on Optimized Frame Difference”. 9th International Conference on Fuzzy Systems and Knowledge Discovery, 2012: 1238-1242. https://doi.org/10.1109/ICCT.2012.6511333.

      [11] Aju Sony, Kavya Ajith, Keerthi Thomas, Tijo Thomas, Deepa P. L., “Video Summarization by Clustering Using Euclidean Distance”. Proc. International Conference on Signal Processing, Communication, Computing and Networking Technologies, 2011: 642-646. https://doi.org/10.1109/ICSCCN.2011.6024630.

      [12] Anastasios D. Doulamis, Nikolaos D. Doulamis and Stefanos D. Kollias, “Efficient Video Summarization Based on a Fuzzy Video Content Representation”. IEEE International Symposium on Circuits and Systems, 2000: 301-304.

      [13] Suresh C. Raikwar, Charul Bhatnagar and Anand Singh Jalal, “A framework for key frame extraction from surveillance video”, 5th International Conference on Computer and Communication Technology, IEEE, 2014, pp. 297-300. https://doi.org/10.1109/ICCCT.2014.7001508.

      [14] Zhao et al., “Key-frame extraction and shot retrieval using nearest feature line”, Proceedings of the ACM Workshop on Multimedia, 2000, pp. 217-220. https://doi.org/10.1145/357744.357942.

      [15] Mukherjee et al., “Key frame estimation in video using randomness measure of feature point pattern”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 5, May 2007, pp. 612-620. https://doi.org/10.1109/TCSVT.2007.895353.

      [16] Zhuang Y., Rui Y., Huang T. S. and Mehrotra S., “Adaptive key frame extraction using unsupervised clustering”, Proceedings of the International Conference on Image Processing, 1998, pp. 866-870.

      [17] Y. Gong and X. Liu, “Video summarization using singular value decomposition”, Proceedings of Computer Vision and Pattern Recognition, 2000, pp. 347-358.

      [18] J. Peng and Q. Xiao-Lin, Keyframe-Based Video Summary using Visual Attention Clues, IEEE MultiMedia, (2), pp. 64–73, (2009). https://doi.org/10.1109/MMUL.2009.65.

      [19] H. Liu and T. Li, Key Frame Extraction based on Improved Frame Blocks Features and Second Extraction, In IEEE 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), pp. 1950–1955, (2015).

      [20] Yogamangalam, R., & Karthikeyan, B. (2013). Segmentation techniques comparison in image processing. International Journal of Engineering and Technology (IJET), 5(1), 307-313.

      [21] A A, M., & G.R Sathiaseelan, J. (2018). Contrast Enhancement of Grayscale and Color images using Adaptive Techniques. International Journal of Engineering & Technology, 7(2.22), 1-4. https://doi.org/10.14419/ijet.v7i2.22.11798.


 


Article ID: 11865
 
DOI: 10.14419/ijet.v7i4.11865




Copyright © 2012-2015 Science Publishing Corporation Inc. All rights reserved.