An advanced approach for distortionless seam carving in video analysis


  • S. Immanuel Alex Pandian





Keywords: Seam Carving, Spatial Coherence, Temporal Coherence, Video Analysis.


Video synopsis is a technique that creates a condensed summary of a video, or an abstraction built from selected frames. This approach allows organizations to review long videos in minutes. A convenient way to generate a synopsis is to superimpose moving objects onto the static background and display events in parallel. In this paper, we propose a distortionless video synopsis technique that uses an object-extraction method to identify the important objects. A spatial and temporal coherence cost is then applied to preserve the time and position of these objects. The proposed method generates video spots and uses seam carving to condense the input (original) video. Experimental results show that the proposed method achieves a large reduction ratio while preserving all the important objects of interest, so this distortionless approach can help users review surveillance video with greater accuracy.
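The seam carving step referenced above removes low-energy paths of pixels to shrink a frame without distorting salient content. The paper does not give its implementation details, but the classic dynamic-programming formulation of a minimum-energy vertical seam (per Avidan and Shamir) can be sketched as follows; the function names and the use of a plain gradient-style energy map here are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def find_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam.

    energy: 2-D array of per-pixel energy (e.g. gradient magnitude).
    Returns a list of column indices, one per row (top to bottom)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    # Accumulate the cheapest path cost from the top row downward;
    # each pixel may connect to its three upper neighbours.
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 1, w - 1)
            cost[i, j] += cost[i - 1, lo:hi + 1].min()
    # Backtrack from the cheapest bottom-row entry.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 1, w - 1)
        seam.append(lo + int(np.argmin(cost[i, lo:hi + 1])))
    return seam[::-1]

def remove_seam(frame, seam):
    """Delete one pixel per row, shrinking the frame width by 1."""
    h, w = frame.shape[:2]
    mask = np.ones((h, w), dtype=bool)
    for i, j in enumerate(seam):
        mask[i, j] = False
    return frame[mask].reshape(h, w - 1, *frame.shape[2:])
```

In a video setting, the spatial and temporal coherence costs described in the abstract would be folded into the energy map, so that seams avoid important objects and stay consistent across consecutive frames.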




