Digital signal change through artificial intelligence machine learning method comparison and learning

  • Yi, Dokkyun (Seongsan Liberal Arts College, Daegu University)
  • Park, Jieun (Seongsan Liberal Arts College, Daegu University)
  • Received : 2019.07.31
  • Accepted : 2019.10.20
  • Published : 2019.10.28

Abstract

In the coming era, artificial intelligence will be used to create a wide variety of products across many fields. It is therefore very important to understand the operating principles of artificial intelligence learning methods and to apply them correctly. This paper introduces the artificial intelligence learning methods known to date. The learning of artificial intelligence is based on the fixed point iteration method of mathematics. Building on this foundation are the GD (Gradient Descent) method, which adjusts the convergence speed; the Momentum method, which accumulates past gradients; and the Adam (Adaptive Moment Estimation) method, which combines these approaches. This paper describes the advantages and disadvantages of each method. In particular, the Adam method includes an adaptivity term that controls the learning strength of machine learning. We also analyze how these methods affect digital signals. Understanding how digital signals change during the learning process provides a basis for accurate application and sound judgment in future work and research using artificial intelligence.
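To make the three update rules named in the abstract concrete, the short Python sketch below applies gradient descent, momentum, and Adam to a one-dimensional toy objective. The objective function, learning rates, and other hyperparameters are illustrative assumptions chosen for this sketch, not values taken from the paper.

```python
import math

# Toy objective f(x) = (x - 3)^2 with minimizer x = 3; its gradient is
# 2 * (x - 3). The function and all hyperparameters are illustrative
# assumptions, not values from the paper.
def grad(x):
    return 2.0 * (x - 3.0)

def gradient_descent(x, lr=0.1, steps=100):
    # Fixed point iteration x <- x - lr * grad(x); lr sets the
    # convergence speed.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x, lr=0.1, beta=0.9, steps=100):
    # Accumulate past gradients in a velocity term v, then step along v.
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

def adam(x, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    # Adam mixes both ideas: a momentum-like first moment m and a
    # second moment v that adaptively rescales the step size.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g
        m_hat = m / (1.0 - beta1 ** t)  # bias-corrected first moment
        v_hat = v / (1.0 - beta2 ** t)  # bias-corrected second moment
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# All three iterations approach the minimizer x = 3 starting from x = 0.
print(gradient_descent(0.0), momentum(0.0), adam(0.0))
```

The per-step rescaling by the second moment estimate v_hat in the Adam update is the adaptivity the abstract refers to: it shrinks the step where recent gradients have been large and enlarges it where they have been small.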
