http://dx.doi.org/10.5762/KAIS.2018.19.4.161

A Study on Random Selection of Pooling Operations for Regularization and Reduction of Cross Validation  

Ryu, Seo-Hyeon (Defense Agency for Technology and Quality)
Publication Information
Journal of the Korea Academia-Industrial cooperation Society / v.19, no.4, 2018, pp. 161-166
Abstract
In this paper, we propose a method that randomly selects pooling operations in convolutional neural networks, both to regularize the network and to reduce the amount of cross validation. The pooling operation in convolutional neural networks reduces the size of the feature map and provides shift invariance. In the existing approach, a single pooling operation is applied in each pooling layer. Because this fixes the network's structure, the network is prone to overfitting, i.e., fitting the model too closely to the training samples. In addition, finding the combination of pooling operations that maximizes performance requires cross validation. To solve these problems, we introduce probability into the pooling layers. Rather than selecting one pooling operation per pooling layer, the proposed method randomly selects one of multiple pooling operations in each pooling region during training; at test time, it uses probabilistic weighting to produce the expected output. The proposed method can be viewed as approximately averaging many networks, each using a different pooling operation in each pooling region. It therefore avoids overfitting and reduces the amount of cross validation required. The experimental results show that the proposed method achieves better generalization performance while reducing the need for cross validation.
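The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single pooling layer with two candidate operations (max and average), and the selection probability `p_max` is an assumed hyperparameter, not a value taken from the paper. During training, each pooling region independently draws one operation at random; at test time, the expected output `p_max * max + (1 - p_max) * avg` is used instead.

```python
import random

def _regions(x, size):
    """Yield (out_i, out_j, values) for each size x size pooling region
    of the 2-D input x (a list of equal-length rows)."""
    for i in range(0, len(x), size):
        for j in range(0, len(x[0]), size):
            yield i // size, j // size, [x[a][b]
                                         for a in range(i, i + size)
                                         for b in range(j, j + size)]

def random_pool(x, size=2, p_max=0.5, train=True, rng=random):
    """Pooling with a randomly selected operation per region.

    train=True : each region independently uses max pooling with
                 probability p_max, otherwise average pooling
                 (a different random choice per region, per call).
    train=False: each region outputs the probability-weighted
                 expectation p_max * max + (1 - p_max) * avg.

    p_max is an assumed selection probability for this sketch.
    """
    h, w = len(x) // size, len(x[0]) // size
    out = [[0.0] * w for _ in range(h)]
    for i, j, vals in _regions(x, size):
        mx, avg = max(vals), sum(vals) / len(vals)
        if train:
            out[i][j] = mx if rng.random() < p_max else avg
        else:
            out[i][j] = p_max * mx + (1 - p_max) * avg
    return out
```

With `p_max = 1.0` the layer degenerates to ordinary max pooling and with `p_max = 0.0` to average pooling; intermediate values give the stochastic mixture that produces the implicit model-averaging effect described above.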
Keywords
Convolutional neural networks; Cross validation; Deep learning; Overfitting; Pooling;