http://dx.doi.org/10.9717/kmms.2018.21.12.1387

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning  

Park, Sung-Wook (Dept. of Computer Eng., Sunchon National University)
Kim, Do-Yeon (Dept. of Computer Eng., Sunchon National University)
Abstract
The Convolutional Neural Network (CNN), a core algorithm of deep learning, shows better performance than other machine learning algorithms. However, without sufficient data, a CNN cannot achieve satisfactory performance even if the classifier itself is excellent. In this situation, the use of transfer learning has been shown to be highly effective. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, and DenseNet-121) and compare and analyze how the classification performance of each CNN changes according to the method. In statistical significance tests using various evaluation indicators, the performance of the two methods differed by factors of 1.18, 1.09, and 1.17 for ResNet-50, Inception-V3, and DenseNet-121, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method when transfer learning is applied to image classification problems.
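
To illustrate the two transfer learning methods compared in the abstract, the sketch below shows how a "freezing" setup and a "retraining" setup are typically built on a pretrained ResNet-50 backbone in a TensorFlow/Keras workflow. The input size, classifier head, optimizer, and hyperparameters here are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch (assumed Keras-style setup): "freezing" vs. "retraining"
# transfer learning with a pretrained ResNet-50 backbone.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(num_classes, retrain_backbone=False):
    # Load ResNet-50 pretrained on ImageNet, without its original classifier head.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))

    # Freezing: keep the pretrained convolutional weights fixed and train only
    # the new classifier head. Retraining: update the backbone weights as well.
    backbone.trainable = retrain_backbone

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Optimizer and learning rate are illustrative, not the paper's settings.
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Freezing method: only the newly added classifier is trainable.
frozen_model = build_transfer_model(num_classes=10, retrain_backbone=False)
# Retraining method: the pretrained layers are fine-tuned on the target data.
retrained_model = build_transfer_model(num_classes=10, retrain_backbone=True)

In the freezing setup only the new dense classifier is updated during training, whereas in the retraining setup the pretrained convolutional weights are also fine-tuned on the target dataset; the same pattern applies to Inception-V3 and DenseNet-121 backbones.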
Keywords
Deep Learning; Computer Vision; Convolutional Neural Network; Transfer Learning;