Object Detection in The Image Using the Method of Selecting Significant Structures

  • Authors

    • Vladimir Mokshin
    • Ildar Sayfudinov
    • Svetlana Yudina
    • Leonid Sharnin
  • Published: 2018-12-03
  • DOI: https://doi.org/10.14419/ijet.v7i4.38.27759
  • Keywords: pattern recognition, structural significance, image map, organization of perception, visual attention, segmentation
  • Abstract: This article reviews an approach to image segmentation based on highlighting significant contours. Some structures in an image attract attention more than others because of distinctive properties: for example, they may be smoother, longer, or closed. Such structures are called significant. The article presents an approach that highlights significant structures as candidate regions for identifying objects in a video frame on mobile platforms. Restricting contour-oriented computer vision methods to these significant structures increases recognition speed, because computing resources are allocated only to significant structures and the total computation time is reduced. Since an image consists of many pixels and the links between them, called edges, the significance of structures can be measured. The article presents a measure of significance that largely agrees with human perception: some image structures attract our attention without a systematic scan of the entire image, and in most cases significance is a property of the structure as a whole, so parts of the structure cannot be considered in isolation. The proposed measure is based on length and curvature; it highlights structures characteristic of human perception, which often correspond to objects of interest in the image. A method is presented for computing significance with an iterative scheme organized as a single local network of processing elements, in which an optimization approach produces a processed image with significant locations highlighted.
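    The length-and-curvature measure described in the abstract can be sketched as follows. This is an illustrative approximation only, not the authors' exact formulation: the function name, the closed-contour bonus, and the scoring formula (length divided by one plus the mean squared turning angle) are assumptions made for the example.

    ```python
    import math

    def contour_significance(points, closed=False):
        """Illustrative significance score: longer, smoother contours score higher.

        points: list of (x, y) vertices of a polyline contour.
        Returns length / (1 + mean squared turning angle), so long, low-curvature
        contours receive larger scores; closed contours get a small bonus.
        """
        if len(points) < 2:
            return 0.0
        pts = list(points) + ([points[0]] if closed else [])
        # Total arc length of the polyline.
        length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
        # Turning angles at interior vertices approximate curvature.
        angles = []
        for i in range(1, len(pts) - 1):
            ax, ay = pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1]
            bx, by = pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1]
            angles.append(abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by)))
        curvature = sum(a * a for a in angles) / len(angles) if angles else 0.0
        bonus = 1.2 if closed else 1.0
        return bonus * length / (1.0 + curvature)
    ```

    A straight segment scores higher than a zigzag of equal endpoint span, and closing a contour raises its score, which matches the perceptual criteria (smoother, longer, closed) that the abstract names; the paper's iterative local network would then refine such per-structure scores across the image.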



  • How to Cite

    Mokshin, V., Sayfudinov, I., Yudina, S., & Sharnin, L. (2018). Object Detection in The Image Using the Method of Selecting Significant Structures. International Journal of Engineering & Technology, 7(4.38), 1187-1192. https://doi.org/10.14419/ijet.v7i4.38.27759