Accuracy Analysis and Comparison in Limited CNN using RGB-csb

RGB-csb를 활용한 제한된 CNN에서의 정확도 분석 및 비교

  • Junbae Kong (Kunsan National University);
  • Minseok Jang (School of Computer Information and Communication Engineering, Kunsan National University);
  • Kwangwoo Nam (School of Computer Information and Communication Engineering, Kunsan National University);
  • Yeonsik Lee (School of Computer Information and Communication Engineering, Kunsan National University)
  • Received : 2019.10.29
  • Accepted : 2020.02.15
  • Published : 2020.02.29

Abstract

This paper introduces a method for improving accuracy by making use of the first convolution layer, which most modified CNNs (Convolution Neural Networks) leave unchanged. In CNNs such as GoogLeNet and DenseNet, the first convolution layer uses only the conventional operations (3×3 convolution, batch normalization, and an activation function); we replace this part with RGB-csb. Building on prior results showing that accuracy can be improved by applying RGB values to feature maps, we compare accuracy against existing CNNs using a limited number of images. The proposed method shows that with fewer images, training accuracy varies more and is less stable, but is higher on average than that of the existing CNNs. As the number of images increases, the accuracy gap between the existing CNNs and the proposed method narrows, and the proposed method appears to have little additional effect.

(Korean abstract) This paper introduces a method that aims to improve accuracy by using the first convolution layer, which most modified CNNs (Convolution Neural Networks) leave unchanged. In CNNs such as GoogLeNet and DenseNet, the first convolution layer uses only the conventional operations (3×3 convolution, batch normalization, and an activation function); we replace this part with RGB-csb (RGB channel separation block). Building on prior results showing that accuracy can be improved by applying RGB values to feature maps, we compare accuracy against existing CNNs using a limited number of images. The proposed method is less stable with fewer images, showing a larger deviation in training accuracy, but achieves higher accuracy on average than the existing CNNs. With fewer images it showed on average about 2.3% higher accuracy, but the accuracy deviation was as large as about 5%. Conversely, as the number of images increases, the average accuracy gap with the existing CNNs shrinks to about 1%, and the accuracy deviation of each training run also decreases.
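The replacement described in the abstract can be sketched in code. This is a minimal illustrative interpretation, not the authors' implementation: it assumes RGB-csb convolves each of the R, G, and B channels separately with its own 3×3 kernel and stacks the resulting per-channel feature maps (the function names `conv2d_single` and `rgb_csb` are hypothetical; the actual block in the paper may differ, e.g. in normalization and activation).

```python
import numpy as np

def conv2d_single(channel, kernel):
    """Valid (no-padding) 2D convolution of one channel with one kernel."""
    h, w = channel.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(channel[i:i + kh, j:j + kw] * kernel)
    return out

def rgb_csb(image, kernels):
    """Hypothetical RGB channel separation block: convolve the R, G, and B
    channels independently, then stack the per-channel feature maps."""
    maps = [conv2d_single(image[..., c], kernels[c]) for c in range(3)]
    return np.stack(maps, axis=-1)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))   # toy 8x8 RGB image
ks = rng.random((3, 3, 3))    # one 3x3 kernel per color channel
out = rgb_csb(img, ks)
print(out.shape)              # (6, 6, 3): valid 3x3 convolution per channel
```

In a standard first convolution layer the three input channels are mixed by every kernel; the separation sketched here keeps each color channel's information in its own feature map before the rest of the network combines them.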


References

  1. J. Park, "Comparison Speed of Pedestrian Detection with Parallel Processing Graphic Processor and General Purpose Processor," J. of the Korea Institute of Electronic Communication Sciences, vol. 10, no. 2, June 2015, pp. 239-245. https://doi.org/10.13067/JKIECS.2015.10.2.239
  2. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, June 2014, pp. 1929-1958.
  3. S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," International Conference on Machine Learning, Lille, France, 2015.
  4. V. Nair and G. Hinton, "Rectified linear units improve restricted Boltzmann machines," International Conference on Machine Learning, Haifa, Israel, 2010.
  5. S. Chung and Y. Chung, "Comparison of Audio Event Detection Performance using DNN," J. of the Korea Institute of Electronic Communication Sciences, vol. 13, no. 3, June 2018, pp. 571-578. https://doi.org/10.13067/JKIECS.2018.13.3.571
  6. Y. Lee and P. Moon, "A Comparison and Analysis of Deep Learning Framework," J. of the Korea Institute of Electronic Communication Sciences, vol. 12, no. 1, 2017, pp. 115-122. https://doi.org/10.13067/JKIECS.2017.12.1.115
  7. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," Computer Vision and Pattern Recognition, Boston, U.S.A., 2015.
  8. K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Computer Vision and Pattern Recognition, Las Vegas, U.S.A., 2016.
  9. G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten, "Densely connected convolutional networks," Computer Vision and Pattern Recognition, Hawaii, U.S.A., 2017.
  10. J. Kong, Y. Lee, and M. Jang, "Convolution Neural Network with RGB channel separation block," Proc. of MITA 2019, Ho Chi Minh City, Vietnam, 2019.