FPGA Realization of Deep Neural Network for Hardware Trojan Detection

  • Abstract

    With the increase in outsourced design and fabrication, malicious third-party vendors often insert hardware Trojans (HTs) into integrated circuits (ICs). These Trojans are difficult to identify because the nature and characteristics of each Trojan differ significantly, and any HT detection method is limited by its capacity to deal with varied Trojan types. The main purpose of this study is to show that deep learning (DL) can address this problem to some extent, and to examine the effect of realizing a deep neural network (DNN) on a field-programmable gate array (FPGA). In this paper, we compare the accuracy of a random forest classifier and a DNN in detecting Trojan-infected nets on the ISCAS'85 benchmark circuits. Further, for faster processing and lower power consumption, the network is implemented on an FPGA. The results show that the DNN's performance improves when a large number of nets is used and that the algorithm executes faster. Moreover, the neuron achieves a 100x speedup when implemented on the FPGA, with 15.32% resource utilization and lower power consumption than a GPU.
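    FPGA realizations of DNN neurons typically replace the exact sigmoid with a hardware-friendly approximation (see [11]). As a minimal sketch, the widely used PLAN piecewise-linear sigmoid approximation is shown below; the paper's exact approximation scheme and fixed-point format are not stated in the abstract, so this is an illustrative assumption, not the authors' implementation:

    ```python
    def sigmoid_pwl(x: float) -> float:
        """PLAN piecewise-linear approximation of the sigmoid.

        Each segment uses only shifts and adds in hardware
        (slopes 0.25, 0.125, 0.03125 are powers of two),
        which is why it suits FPGA neuron datapaths.
        """
        ax = abs(x)
        if ax >= 5.0:
            y = 1.0
        elif ax >= 2.375:
            y = 0.03125 * ax + 0.84375
        elif ax >= 1.0:
            y = 0.125 * ax + 0.625
        else:
            y = 0.25 * ax + 0.5
        # Exploit sigmoid symmetry: sigma(-x) = 1 - sigma(x).
        return y if x >= 0 else 1.0 - y
    ```

    For example, `sigmoid_pwl(1.0)` returns 0.75, within about 0.02 of the exact value `sigma(1) ≈ 0.731`; the maximum error of the PLAN scheme is below 0.02 over the whole range.
    
    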

  • Keywords

    Deep neural network; Deep learning; FPGA; Hardware Trojan; Random forest classifier

  • References

      [1]. C. Wang, L. Gong, Q. Yu, X. Li, Y. Xie, and X. Zhou, “DLAU: A scalable deep learning accelerator unit on FPGA,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 36, no. 3, pp. 513–517, 2016.

      [2]. S. Bhunia, M. S. Hsiao, M. Banga, and S. Narasimhan, “Hardware Trojan attacks: Threat analysis and countermeasures,” Proceedings of the IEEE, vol. 102, no. 8, pp. 1229–1247, 2014.

      [3]. B. Cakır and S. Malik, “Hardware Trojan detection for gate-level ICs using signal correlation based clustering,” in 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2015, pp. 471–476.

      [4]. R. Vinayakumar, K. Soman, P. Poornachandran, and S. Akarsh, “Application of deep learning architectures for cyber security,” in Cybersecurity and Secure Information Systems. Springer, 2019, pp. 125–160.

      [5]. M. Nirmaladevi and S. Arumugam, “VLSI implementation of artificial neural networks—a survey,” International Journal of Modelling and Simulation, vol. 30, no. 2, pp. 148–154, 2010.

      [6]. T. V. Huynh, “Deep neural network accelerator based on FPGA,” in 2017 4th NAFOSTED Conference on Information and Computer Science. IEEE, 2017, pp. 254–257.

      [7]. J. Maria, J. Amaro, G. Falcao, and L. A. Alexandre, “Stacked autoencoders using low-power accelerated architectures for object recognition in autonomous systems,” Neural Processing Letters, vol. 43, no. 2, pp. 445–458, 2016.

      [8]. Y. Jin and D. Kim, “Unsupervised feature learning by pre-route simulation of auto-encoder behavior model,” International Journal of Computer and Information Engineering, vol. 8, no. 5, pp. 706–710, 2014.

      [9]. M. G. Coutinho, M. F. Torquato, and M. A. Fernandes, “Deep neural network hardware implementation based on stacked sparse autoencoder,” IEEE Access, vol. 7, pp. 40674–40694, 2019.

      [10]. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.

      [11]. M. Panicker and C. Babu, “Efficient FPGA implementation of sigmoid and bipolar sigmoid activation functions for multilayer perceptrons,” IOSR Journal of Engineering, vol. 2, no. 6, pp. 1352–1356, 2012.

Article ID: 30946
DOI: 10.14419/ijet.v9i3.30946

Copyright © 2012-2015 Science Publishing Corporation Inc. All rights reserved.