Comparing Convolution Neural Network Models for Leaf Recognition

  • Authors

    • Nurbaity Sabri
    • Zalilah Abdul Aziz
    • Zaidah Ibrahim
    • Muhammad Akmal Rasydan Bin Mohd Rosni
    • Abdul Hafiz bin Abd Ghapul
    Published: 2018-08-13
    DOI: https://doi.org/10.14419/ijet.v7i3.15.17518
  • Keywords: AlexNet, CNN, GoogLeNet, Leaf recognition
  • Abstract: This research compares the recognition performance of the pre-trained models GoogLeNet and AlexNet with that of a basic Convolutional Neural Network (CNN) for leaf recognition. CNNs have recently attracted considerable interest in image processing applications, and numerous pre-trained models have been introduced, the most popular being GoogLeNet and AlexNet; each model has its own convolutional layers and computational complexity. These classification models have achieved great success in computer vision, and this research investigates their performance for leaf recognition on MalayaKew (MK), an open-access leaf dataset. GoogLeNet achieves a perfect 100% accuracy, outperforming both AlexNet and the basic CNN. On the other hand, GoogLeNet's processing time is longer than that of the other models due to the high number of layers in its architecture.
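    The paper's implementation and framework are not given on this page; the sketch below shows, in Python/PyTorch (an assumption, not the authors' code), how such a comparison could be set up. The "MK/train" and "MK/test" class-folder paths are hypothetical placeholders for the MalayaKew split, and a single training epoch stands in for the full fine-tuning schedule; each model reports test accuracy and wall-clock time, the two measures discussed in the abstract.

    # Minimal sketch (PyTorch assumed): fine-tune pretrained GoogLeNet and AlexNet
    # plus a small baseline CNN on a leaf dataset laid out as ImageFolder
    # directories, then compare test accuracy and training time.
    import time
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Resize to 224x224 and apply ImageNet normalization, as expected by both
    # pretrained networks.
    tf = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # "MK/train" and "MK/test" are placeholder paths for the MalayaKew split.
    train_set = datasets.ImageFolder("MK/train", tf)
    test_set = datasets.ImageFolder("MK/test", tf)
    num_classes = len(train_set.classes)

    def small_cnn():
        # Basic CNN baseline: two conv blocks followed by a linear classifier.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 56 * 56, num_classes),
        )

    def pretrained(name):
        # Swap the final classification layer to match the number of leaf classes.
        if name == "googlenet":
            m = models.googlenet(weights="DEFAULT")
            m.fc = nn.Linear(m.fc.in_features, num_classes)
        else:
            m = models.alexnet(weights="DEFAULT")
            m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
        return m

    def accuracy(model, loader):
        # Fraction of correctly classified test images.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        return correct / total

    train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=32)

    for name, model in [("googlenet", pretrained("googlenet")),
                        ("alexnet", pretrained("alexnet")),
                        ("basic_cnn", small_cnn())]:
        model = model.to(device)
        opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        start = time.time()
        model.train()
        for x, y in train_loader:  # one epoch shown for brevity
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
        print(f"{name}: accuracy={accuracy(model, test_loader):.3f}, "
              f"time={time.time() - start:.1f}s")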



  • How to Cite

    Sabri, N., Abdul Aziz, Z., Ibrahim, Z., Akmal Rasydan Bin Mohd Rosni, M., & Hafiz bin Abd Ghapul, A. (2018). Comparing Convolution Neural Network Models for Leaf Recognition. International Journal of Engineering & Technology, 7(3.15), 141-144. https://doi.org/10.14419/ijet.v7i3.15.17518