A Survey on Prevention of Overfitting in Convolution Neural Networks Using Machine Learning Techniques

  • Authors

    • Dr M.R.Narasinga Rao
    • V Venkatesh Prasad
    • P Sai Teja
    • Md Zindavali
    • O Phanindra Reddy
  • 2018-05-31
  • https://doi.org/10.14419/ijet.v7i2.32.15399
  • Machine Learning, Convolution Neural Network, Overfitting, Dropout
  • Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
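
    As a minimal sketch of the train/test asymmetry the abstract describes (assuming the classic dropout formulation; the function names, array shapes and p_keep value are illustrative, not taken from the paper):

      import numpy as np

      rng = np.random.default_rng(0)

      def dropout_train(activations, p_keep=0.5):
          # Training: drop each unit independently with probability 1 - p_keep.
          # Every call samples a fresh Bernoulli mask, so every forward pass
          # runs a different "thinned" sub-network.
          mask = rng.random(activations.shape) < p_keep
          return activations * mask

      def dropout_test(weights, p_keep=0.5):
          # Test time: keep every unit, but scale the outgoing weights by
          # p_keep so the expected pre-activation matches the average over
          # the exponentially many thinned networks seen during training.
          return weights * p_keep

      # Illustrative usage (shapes are arbitrary):
      h = np.ones((4, 8))          # hidden activations for a batch of 4
      w = rng.normal(size=(8, 3))  # outgoing weight matrix
      h_thinned = dropout_train(h) # training: about half the units zeroed
      w_scaled = dropout_test(w)   # test: one unthinned net, smaller weights

    Modern frameworks usually implement the equivalent "inverted" variant, which instead scales the kept activations by 1/p_keep during training and leaves the test-time network unchanged; both yield the same expected activations.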




  • How to Cite

    M.R.Narasinga Rao, D., Venkatesh Prasad, V., Sai Teja, P., Zindavali, M., & Phanindra Reddy, O. (2018). A Survey on Prevention of Overfitting in Convolution Neural Networks Using Machine Learning Techniques. International Journal of Engineering & Technology, 7(2.32), 177-180. https://doi.org/10.14419/ijet.v7i2.32.15399