Using K-Fold Cross Validation Proposed Models for Spikeprop Learning Enhancements
https://doi.org/10.14419/ijet.v7i4.11.20790
Received date: October 2, 2018
Accepted date: October 2, 2018
Published date: October 2, 2018
Keywords: SpikeProp, K-fold cross validation, reduced error measurement time, Spiking Neural Network, Backpropagation (BP).
Abstract
A Spiking Neural Network (SNN) uses individual spikes in the time domain to perform and communicate computation, much as biological neurons do. SNNs were long neglected because they were considered too complicated and too hard to analyse. Several limitations concerning the characteristics of SNNs that had not been researched earlier have been addressed since the introduction of SpikeProp, a supervised SNN learning model, by Sander Bohte in 2000. This paper describes research developments in enhancing SpikeProp learning using K-fold cross validation for dataset classification. It introduces acceleration factors for SpikeProp, using Radius Initial Weight and Differential Evolution (DE) weight initialization as the proposed methods. Training and testing with K-fold cross validation were then investigated on datasets obtained from the Machine Learning Benchmark Repository, as an improvement on Bohte's algorithm. The performance of the proposed method was compared against Backpropagation (BP) and the standard SpikeProp. The findings reveal that the proposed method outperforms both standard SpikeProp and BP on all datasets evaluated with K-fold cross validation.
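The K-fold evaluation protocol described above can be sketched as follows. This is an illustrative outline only, not code from the paper; the function names (`k_fold_indices`, `k_fold_splits`) are hypothetical, and it shows only the generic fold-splitting step, not the proposed SpikeProp weight-initialization methods.

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices 0..n_samples-1 into k roughly equal folds."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs, one per fold.

    Each fold serves as the test set exactly once; the remaining
    k-1 folds form the training set for that round.
    """
    folds = k_fold_indices(n_samples, k)
    for i, test_idx in enumerate(folds):
        train_idx = [j for fi, fold in enumerate(folds) if fi != i for j in fold]
        yield train_idx, test_idx

# Example: 10 samples, 3 folds -> fold sizes 4, 3, 3; every sample
# appears in exactly one test fold across the three rounds.
for train_idx, test_idx in k_fold_splits(10, 3):
    print(len(train_idx), len(test_idx))
```

In a full experiment, each round would train the model (here, SpikeProp with the proposed initialization) on `train_idx` and measure classification accuracy on `test_idx`, averaging over the k rounds. In practice a library routine such as scikit-learn's `KFold` is typically used instead of hand-rolled splits.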
How to Cite
Y.H. Ahmed, F., Hassan Ali, Y., & Mariyam Shamsuddin, S. (2018). Using K-Fold Cross Validation Proposed Models for Spikeprop Learning Enhancements. International Journal of Engineering and Technology, 7(4.11), 145-151. https://doi.org/10.14419/ijet.v7i4.11.20790
