http://dx.doi.org/10.3745/KTCCS.2021.10.3.71

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction  

Abbas, Qalab E. (Department of IT Convergence Engineering, Kumoh National Institute of Technology)
Jang, Sung-Bong (Industry-Academic Cooperation Foundation, Kumoh National Institute of Technology)
Publication Information
KIPS Transactions on Computer and Communication Systems, Vol.10, No.3, 2021, pp.71-80
Abstract
The selection of an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data. These networks include deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulty in choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for data prediction to compare accuracies across the algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R² scores of 0.78 and 0.75, respectively, and that the basic DNN model achieves the shortest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with the DNN, GRU, and LSTM models) in terms of error and the worst in terms of processing time. The findings of this study are expected to be useful for scientists and developers.
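The comparison described in the abstract can be reproduced in outline with TensorFlow/Keras. The following is a minimal sketch, not the authors' code: the window length, layer widths, epoch count, and the synthetic series standing in for the paper's tax dataset are all illustrative assumptions. It builds one small model per architecture, trains each with the Adam optimizer, and reports the RMSE, R² score, and wall-clock training time, mirroring the metrics compared in the paper.

# Sketch of the four-way architecture comparison (DNN, RNN, LSTM, GRU).
# Hyperparameters and data are illustrative assumptions, not the paper's setup.
import time
import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_squared_error, r2_score

WINDOW = 12  # assumed look-back length per training sample

def make_windows(series, window=WINDOW):
    # Slice a 1-D series into (samples, window) inputs and next-step targets.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y  # trailing feature axis for the recurrent layers

def build(kind, optimizer="adam"):
    # Return a small model of the requested kind, compiled with the given optimizer.
    core = {
        "DNN": tf.keras.layers.Dense(32, activation="relu"),
        "RNN": tf.keras.layers.SimpleRNN(32),
        "LSTM": tf.keras.layers.LSTM(32),
        "GRU": tf.keras.layers.GRU(32),
    }[kind]
    model = tf.keras.Sequential([tf.keras.Input(shape=(WINDOW, 1))])
    if kind == "DNN":
        model.add(tf.keras.layers.Flatten())  # the DNN sees the window as a flat vector
    model.add(core)
    model.add(tf.keras.layers.Dense(1))  # single-value prediction
    model.compile(optimizer=optimizer, loss="mse")
    return model

# Synthetic stand-in for the tax time series used in the paper.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
X, y = make_windows(series)
split = int(0.8 * len(X))

for kind in ("DNN", "RNN", "LSTM", "GRU"):
    model = build(kind)
    t0 = time.perf_counter()
    model.fit(X[:split], y[:split], epochs=20, verbose=0)
    elapsed = time.perf_counter() - t0
    pred = model.predict(X[split:], verbose=0).ravel()
    rmse = np.sqrt(mean_squared_error(y[split:], pred))
    print(f"{kind}: RMSE={rmse:.3f}  R2={r2_score(y[split:], pred):.3f}  "
          f"train time={elapsed:.1f}s")

Passing "rmsprop" or "adagrad" as the optimizer argument, or changing the layers' activation functions, runs the optimizer and activation-function comparisons that the paper also reports, within the same loop.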
Keywords
Artificial Neural Network Algorithm; Machine Learning; Data Prediction; Learning Performance Comparison