Using K-Fold Cross Validation Proposed Models for Spikeprop Learning Enhancements

  • Abstract


    Spiking Neural Networks (SNNs) use individual spikes in the time domain both to compute and to communicate, in much the same way that biological neurons do. SNNs were little studied in the past because they were considered too complicated and too difficult to analyse. Many limitations concerning the characteristics of SNNs that had not been researched earlier have been resolved since the introduction of SpikeProp in 2000 by Sander Bohte as a supervised SNN learning model. This paper describes research developments in enhancing SpikeProp learning using K-fold cross validation for dataset classification. It introduces two acceleration factors for SpikeProp as proposed methods: Radius Initial Weight and Differential Evolution (DE) weight initialization. In addition, the training and testing behaviour of the proposed method under K-fold cross validation was investigated, as an improvement of Bohte's algorithm, using datasets obtained from the Machine Learning Benchmark Repository. The performance of the proposed method was compared with Backpropagation (BP) and with standard SpikeProp. The findings reveal that the proposed method outperforms both standard SpikeProp and BP on all datasets evaluated with K-fold cross validation.
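The K-fold evaluation protocol mentioned above can be sketched as follows. This is a minimal illustration of the standard procedure, not the paper's implementation; the fold count of 10 and the 150-sample dataset size are illustrative assumptions.

```python
import random

def k_fold_splits(n_samples, k=10, seed=42):
    """Partition sample indices into k disjoint folds.

    Each fold serves once as the test set while the remaining
    k-1 folds form the training set, so every sample is used
    for testing exactly once across the k rounds.
    """
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]  # k near-equal folds
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i
                 for idx in fold]
        splits.append((train, test))
    return splits

# Example: 10-fold split of a hypothetical 150-sample dataset
splits = k_fold_splits(150, k=10)
```

Each classifier under comparison (here, standard SpikeProp, the enhanced SpikeProp, or BP) would be trained on `train` and scored on `test` for every split, and the k scores averaged.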
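The abstract names Differential Evolution as one proposed weight-initialization method. A generic DE/rand/1/bin loop is sketched below; the paper does not give its DE parameters, so the population size, `F`, `CR`, generation count, bounds, and the placeholder sum-of-squares fitness are all assumptions. In the paper's setting the fitness would instead be the SpikeProp network error produced by a candidate weight vector.

```python
import random

def de_initialize(dim, pop_size=20, bounds=(-1.0, 1.0),
                  F=0.5, CR=0.9, generations=50, seed=0):
    """Differential Evolution search for an initial weight vector."""
    rng = random.Random(seed)
    lo, hi = bounds

    def fitness(w):  # placeholder objective; lower is better
        return sum(x * x for x in w)

    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        for i, target in enumerate(pop):
            # mutation: pick three distinct vectors other than the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated gene
            # binomial crossover between target and mutant a + F*(b - c)
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == j_rand) else target[k]
                     for k in range(dim)]
            trial = [min(max(x, lo), hi) for x in trial]  # clip to bounds
            if fitness(trial) <= fitness(target):  # greedy selection
                pop[i] = trial
    return min(pop, key=fitness)  # best weight vector found

best = de_initialize(dim=8)
```

The returned vector would then seed the SpikeProp weights instead of a purely random draw, which is the sense in which DE acts as an initialization (and acceleration) step rather than as the learning rule itself.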


  • Keywords


    SpikeProp, K-fold cross validation, reduced time error measurement, Spiking Neural Network, Backpropagation (BP).

  • References


      [1] Duda, R. O., Hart, P. E., & Stork, D. G. (1973). Pattern classification and scene analysis. John Wiley and Sons.

      [2] Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models: An introduction. Cambridge University Press.

      [3] Kasabov, N. (2012). Evolving, probabilistic spiking neural networks and neurogenetic systems for spatio- and spectro-temporal data modelling and pattern recognition. Natural Intelligence: The INNS Magazine, 1(2), 23-37.

      [4] Eberhart, R., & Kennedy, J. (1995). A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43.

      [5] Xu, J., Lam, A. Y., & Li, V. O. (2011). Chemical reaction optimization for task scheduling in grid computing. IEEE Transactions on Parallel and Distributed Systems, 22(10), 1624-1631.

      [6] Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarm algorithm. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, pp. 4104-4108.

      [7] Khanesar, M. A., Teshnehlab, M., & Shoorehdeli, M. A. (2007). A novel binary particle swarm optimization. Proceedings of the IEEE Mediterranean Conference on Control and Automation, pp. 1-6.

      [8] Srinivasan, T. R., & Shanmugalakshmi, R. (2012). Neural approach for resource selection with PSO for grid scheduling. International Journal of Computer Applications, 53(11):37-41.

      [9] Yuan, X., Nie, H., Su, A., Wang, L., & Yuan, Y. (2009). An improved binary particle swarm optimization for unit commitment problem. Expert Systems with Applications, 36(4), 8049-8055.

      [10] Grüning, A., & Sporea, I. (2011). Supervised learning of logical operations in layered spiking neural networks with spike train encoding. Neural Processing Letters, 36(2), 117-134.

      [11] Yu, J., Xi, L., & Wang, S. (2007). An improved particle swarm optimization for evolving feedforward artificial neural networks. Neural Processing Letters, 26(3), 217-231.

      [12] Gerstner, W., Kempter, R., van Hemmen, J. L., & Wagner, H. (1999). Hebbian learning and spiking neurons. Physical Review E, 59(4), 4498-4514.

      [13] Belatreche, A., & Paul, R. (2012). Dynamic cluster formation using populations of spiking neurons. Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1-6.

      [14] Ferster, D., & Spruston, N. (1995). Cracking the neuronal code. Science, 270(5237), 756-757.

      [15] Gerstner, W., Kempter, R., van Hemmen, J. L., & Wagner, H. (1999). Hebbian learning of pulse timing in the barn owl auditory system. In W. Maass & C. M. Bishop (Eds.), Pulsed Neural Networks. MIT Press.

      [16] Thorpe, S., Delorme, A., & Van Rullen, R. (2001). Spike-based strategies for rapid processing. Neural Networks, 14(6–7), 715-725.

      [17] Bohte, S. M., Kok, J. N., & La Poutré, H. (2000). SpikeProp: Error-backpropagation in multi-layer networks of spiking neurons. Proceedings of the 8th European Symposium on Artificial Neural Networks (ESANN), pp. 1-6.

      [18] Xin, J., & Embrechts, M. J. (2001). Supervised learning with spiking neural networks. Proceedings of International Joint Conference on Neural Networks, pp. 1772-1777.

      [19] Moore, S. C. (2002). Back-propagation in spiking neural networks. Master thesis, University of Bath.

      [20] Tiňo, P., & Mills, A. J. (2005). Learning beyond finite memory in recurrent networks of spiking neurons. Neural Computation, 18(3), 591-613.

      [21] Bohte, S. M., Kok, J. N., & La Poutré, H. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4), 17-37.

      [22] Baldi, P., & Heiligenberg, W. (1988). How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biological Cybernetics, 59(4-5), 313–318.

      [23] Eurich, C. W., & Wilke, S. D. (2000). Multi-dimensional encoding strategy of spiking neurons. Neural Computation. 12(17), 1519–1529.

      [24] Pouget, A., Deneve, S., Ducom, J. C., & Latham, P. E. (1999). Narrow vs. wide tuning curves: What’s best for a population code? Neural Computation, 11(1), 85–90.

      [25] Snippe, H. P., & Koenderink, J. J. (1992). Discrimination thresholds for channel-coded systems. Biological Cybernetics, 66(6), 543-551.

      [26] Zhang, G., Hu, M. Y., Patuwo, B. E., & Indro, D. C. (1999). Artificial neural networks in bankruptcy prediction: General framework and cross-validation analysis. European Journal of Operational Research, 116(1), 16–32.

      [27] Wysoski, S. G., Benuskova, L., & Kasabov, N. (2006). On-line learning with structural adaptation in a network of spiking neurons for visual pattern recognition. Proceedings of the International Conference on Artificial Neural Networks, pp. 61-70.

      [28] Wysoski, S. G., Benuskova, L., & Kasabov, N. (2007). Text-independent speaker authentication with spiking neural networks. Proceedings of the 17th International Conference on Artificial Neural Networks, pp. 758-767.

      [29] Loiselle, S., Rouat, J., Pressnitzer, D., & Thorpe, S. (2005). Exploration of rank order coding with spiking neural networks for speech recognition. Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2076-2080.

      [30] McLachlan, G., Do, K. A., & Ambroise, C. (2005). Analyzing microarray gene expression data. John Wiley and Sons.

      [31] Qasem, S. N., & Shamsuddin, S. M. (2012). Radial basis function network based on time variant multi-objective particle swarm optimization for medical diseases diagnosis. Applied Soft Computing, 11(11), 1427–1438.

      [32] Yu, J., Xi, L., & Wang, S. (2007). An improved particle swarm optimization for evolving feedforward artificial neural networks. Neural Processing Letters, 26(3), 217-231.

      [33] Kasabov, N. (2009). Integrative connectionist learning systems inspired by nature: Current models, future trends and challenges. Natural Computing, 8(2), 199-218.

      [34] Schliebs, S., Defoin-Platel, M., Worner, S., & Kasabov, N. (2009). Integrated feature and parameter optimization for an evolving spiking neural network: Exploring heterogeneous probabilistic models. Neural Networks, 22(5-6), 623-632.

      [35] Schraudolph, N. N., & Graepel, T. (2002). Towards stochastic conjugate gradient methods. Proceedings of the IEEE 9th International Conference on Neural Information Processing, pp. 853-856.

      [36] Fukuoka, T., Tokunaga, A., Kondo, E., Miki, K., Tachibana, T., & Noguchi, K. (1998). Change in mRNAs for neuropeptides and the GABAA receptor in dorsal root ganglion neurons in a rat experimental neuropathic pain model. Pain, 78(1), 13-26.

      [37] Panda, P., & Roy, K. (2016). Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1–8.

      [38] Kheradpisheh, S. R., Ganjtabesh, M., & Masquelier, T. (2016). Bio-inspired unsupervised learning of visual features leads to robust invariant object recognition. Neurocomputing, 205, 382-392.

      [39] Tavanaei, A., & Maida, A. (2017, November). Bio-inspired multi-layer spiking neural network extracts discriminative features from speech signals. Proceedings of the International Conference on Neural Information Processing, pp. 899-908.

      [40] Liu, T., Liu, Z., Lin, F., Jin, Y., Quan, G., & Wen, W. (2017, November). Mt-spike: A multilayer time-based spiking neuromorphic architecture with temporal error backpropagation. Proceedings of the 36th International Conference on Computer-Aided Design, pp. 450-457.

      [41] Tavanaei, A., & Maida, A. S. (2017). Bp-stdp: Approximating backpropagation using spike timing dependent plasticity, https://arxiv.org/pdf/1711.04214.pdf.

      [42] Saleh, A. Y., Hamed, H. N. B. A., Shamsuddin, S. M., & Ibrahim, A. O. (2017). A new hybrid k-means evolving spiking neural network model based on differential evolution. Proceedings of the International Conference of Reliable Information and Communication Technology, pp. 571-583.


 

Article ID: 20790
 
DOI: 10.14419/ijet.v7i4.11.20790




Copyright © 2012-2015 Science Publishing Corporation Inc. All rights reserved.