Recognition of Food with Monotonous Appearance using Speeded-Up Robust Feature (SURF)

  • Abstract

    Food has become one of the most photographed objects since the advent of smartphones and social media services. Recently, the analysis of food images using object recognition techniques has been investigated as a means of recognizing food categories; it forms part of a framework for estimating food nutrition and calories for healthcare purposes. The initial stage of a food recognition pipeline is feature extraction, which captures the characteristics of the food. The SURF local feature is among the most efficient image detectors and descriptors: it uses the Fast Hessian detector to locate interest points and Haar wavelet responses for description. Despite the fast computation of SURF extraction, the detector is ineffective on food objects with a monotonous appearance, where it detects only a small number of interest points. This occurs because 1) the food has a texture-less surface, 2) the image has small pixel dimensions, and 3) the image has low contrast and brightness. As a result, the extracted features convey little discriminative information, leading to low classification performance; the problem manifests as a low yield of interest points. In this paper, we propose a technique to detect denser interest points on monotonous food by increasing the density of blob detection in SURF's Fast Hessian detector. We measured the effect of this technique by comparing SURF interest point detection at different blob-detection densities. The SURF features are encoded using a Bag of Features (BoF) model, and a Support Vector Machine (SVM) with a linear kernel is adopted for classification. The findings show that the density of blob detection has a prominent effect on interest point detection and classification performance across the respective food categories, achieving 86% classification accuracy on the UEC100-Food dataset.
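    The core idea above can be illustrated with a minimal sketch. The following is not the authors' implementation: it approximates SURF's box-filter Hessian with plain finite differences on a synthetic low-contrast, texture-less patch (all thresholds and image parameters are hypothetical), and shows that relaxing the determinant-of-Hessian threshold makes blob detection denser, yielding more interest points on exactly the kind of monotonous surface the paper targets.

    ```python
    import numpy as np

    def det_of_hessian(img):
        """Determinant-of-Hessian blob response via finite differences
        (a simple stand-in for SURF's box-filter approximation)."""
        gy, gx = np.gradient(img.astype(float))
        gyy, _ = np.gradient(gy)
        gxy, gxx = np.gradient(gx)
        return gxx * gyy - gxy * gxy

    def count_interest_points(img, threshold):
        """Count pixels whose blob response exceeds the detector threshold."""
        return int(np.sum(det_of_hessian(img) > threshold))

    # Synthetic low-contrast, texture-less "food" patch: one faint blob.
    yy, xx = np.mgrid[0:64, 0:64]
    img = 0.05 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)

    strict = count_interest_points(img, threshold=2e-6)   # conservative threshold
    relaxed = count_interest_points(img, threshold=1e-8)  # denser blob detection
    print(strict, relaxed)  # the relaxed threshold fires on many more pixels
    ```

    Because the weak blob produces only tiny Hessian responses, the conservative threshold keeps few detections while the relaxed one covers far more of the object, mirroring the paper's observation that monotonous food surfaces need denser blob detection before BoF encoding.
    
    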



  • Keywords

    bag of features; food recognition; local features; object recognition; SURF





Article ID: 23368
DOI: 10.14419/ijet.v7i4.31.23368

Copyright © 2012-2015 Science Publishing Corporation Inc. All rights reserved.