Development of Convolutional Neural Network Basic Practice Cases

  • Hur, Kyeong (Department of Computer Education, Gyeong-In National University of Education)
  • Received : 2022.07.30
  • Accepted : 2022.08.18
  • Published : 2022.08.31

Abstract

In this paper, we developed a basic practice case for convolutional neural networks for a liberal arts course aimed at non-majors; such a case is an essential building block when designing an introductory convolutional neural network curriculum. The practice case focuses on understanding the working principle of a convolutional neural network and uses a spreadsheet so that the entire process can be inspected in visualized form. It consists of generating image training data for supervised learning, implementing the input layer, convolutional layer, pooling layer, and output layer in sequence, and then testing the performance of the resulting network on new data. The practice case can be extended to recognize a larger number of images, or adapted into a basic exercise for building a convolutional neural network that achieves a higher compression rate on high-resolution images. For these reasons, this basic convolutional neural network practice case has high practical utility.
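The layer sequence described above (training image → input layer → convolution → pooling → output layer) can be sketched as a minimal forward pass in Python with NumPy. The image pattern, filter values, and output-layer weights below are illustrative assumptions for a toy example, not the actual spreadsheet values used in the paper.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(feature, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = feature.shape
    out = np.zeros((h // size, w // size))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = feature[r * size:(r + 1) * size,
                                c * size:(c + 1) * size].max()
    return out

# Input layer: a 6x6 binary training image (hypothetical vertical-bar pattern)
image = np.zeros((6, 6))
image[:, 2] = 1.0

# Convolutional layer: a 2x2 edge-detecting filter (assumed values)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])
feature = convolve2d(image, kernel)      # 5x5 feature map
relu = np.maximum(feature, 0.0)          # ReLU activation

# Pooling layer: 2x2 max pooling on the top-left 4x4 region
pooled = max_pool(relu[:4, :4])          # 2x2 pooled map
flat = pooled.flatten()                  # flattened input to the output layer

# Output layer: one sigmoid neuron with assumed weights
w = np.full_like(flat, 0.5)
score = 1.0 / (1.0 + np.exp(-(flat @ w)))
print(pooled.shape, round(float(score), 3))  # → (2, 2) 0.881
```

Testing on "new data" then amounts to running the same forward pass on an unseen image and comparing the output score against a threshold, which is exactly the visual check a spreadsheet version makes step by step.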
