The roles of differencing and dimension reduction in machine learning forecasting of employment level using the FRED big data

  • Choi, Ji-Eun (Department of Statistics, Ewha Womans University)
  • Shin, Dong Wan (Department of Statistics, Ewha Womans University)
  • Received: 2019.06.27
  • Accepted: 2019.08.26
  • Published: 2019.09.30

Abstract

We forecast the U.S. employment level using machine learning methods based on artificial neural networks: the deep neural network, the long short-term memory (LSTM), and the gated recurrent unit (GRU). We consider the big data of the Federal Reserve Economic Data (FRED), from which the 105 important macroeconomic variables chosen by McCracken and Ng (Journal of Business and Economic Statistics, 34, 574-589, 2016) serve as predictors. We investigate how two statistical issues, dimension reduction and time series differencing, influence the machine learning forecasts. An out-of-sample forecast comparison shows that the LSTM and GRU with differencing perform better than the autoregressive model, and that dimension reduction improves long-term forecasts and some short-term forecasts.
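To make the pipeline concrete, the sketch below illustrates the three steps the abstract describes: first-differencing the series, reducing the 105-variable panel to a few principal components, and feeding rolling windows of the components to an LSTM that predicts the next change in the target. It is a minimal illustration, not the authors' implementation: synthetic random-walk data stand in for the FRED-MD panel, and the window length, number of components, network size, and training settings are assumptions chosen for readability.

```python
# Minimal sketch of the differencing + dimension-reduction + LSTM pipeline.
# Synthetic data stand in for the FRED-MD panel; all hyperparameters here
# (window, n_components, layer sizes, epochs) are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras

rng = np.random.default_rng(0)
T, n_vars = 360, 105                                   # 30 years of monthly data, 105 predictors
X = rng.standard_normal((T, n_vars)).cumsum(axis=0)    # synthetic I(1) predictors
y = X[:, 0] + rng.standard_normal(T)                   # synthetic employment level

# Step 1: first-difference to remove stochastic trends (unit roots).
dX, dy = np.diff(X, axis=0), np.diff(y)

# Step 2: dimension reduction -- a few principal components of the
# differenced predictors replace the full 105-variable panel.
n_components = 8
factors = PCA(n_components=n_components).fit_transform(dX)

# Step 3: rolling windows of the factors predict the next change in y.
window = 12
Z = np.stack([factors[t - window:t] for t in range(window, len(dy))])
target = dy[window:]

# Step 4: a small LSTM; swapping keras.layers.GRU for LSTM gives the GRU variant.
model = keras.Sequential([
    keras.Input(shape=(window, n_components)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Hold out the last 24 months for an out-of-sample comparison.
split = len(Z) - 24
model.fit(Z[:split], target[:split], epochs=20, batch_size=32, verbose=0)
pred_diff = model.predict(Z[split:], verbose=0).ravel()

# Undo the differencing: level forecast = last observed level + predicted change.
levels = y[window + 1:]                                # levels aligned with `target`
pred_level = levels[split - 1:-1] + pred_diff
rmse = np.sqrt(np.mean((pred_level - levels[split:]) ** 2))
print(f"out-of-sample RMSE (levels): {rmse:.3f}")
```

Computing the same hold-out RMSE for an autoregressive benchmark fitted to the differenced target would mirror the out-of-sample comparison reported in the abstract.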

References

  1. Arevalo A, Nino J, Hernandez G, and Sandoval J (2016). High-frequency trading strategy based on deep neural networks. In Intelligent Computing Methodologies (ICIC 2016), 9773, Springer, Cham.
  2. Cepni O and Swanson NR (2019). Nowcasting and forecasting GDP in emerging markets using global financial and macroeconomic diffusion indexes, International Journal of Forecasting, 35, 555-572. https://doi.org/10.1016/j.ijforecast.2018.10.008
  3. Chiang WC, Enke D, Wu T, and Wang R (2016). An adaptive stock index trading decision support system, Expert Systems with Applications, 59, 195-207. https://doi.org/10.1016/j.eswa.2016.04.025
  4. Cho K, van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, and Bengio Y (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724-1734.
  5. Chong E, Han C, and Park FC (2017). Deep learning networks for stock market analysis and prediction: methodology, data representations, and case studies, Expert Systems With Applications, 83, 187-205. https://doi.org/10.1016/j.eswa.2017.04.030
  6. Cooijmans T, Ballas N, Laurent C, Gulcehre C, and Courville A (2017). Recurrent batch normalization, arXiv preprint, arXiv:1603.09025.
  7. Diebold FX and Mariano RS (1995). Comparing predictive accuracy, Journal of Business and Economic Statistics, 13, 253-263. https://doi.org/10.2307/1392185
  8. Hansen PR, Lunde A, and Nason JM (2011). The model confidence set, Econometrica, 79, 453-497. https://doi.org/10.3982/ECTA5771
  9. Hochreiter S and Schmidhuber J (1997). Long short-term memory, Neural Computation, 9, 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
  10. Ioffe S and Szegedy C (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift, arXiv preprint, arXiv:1502.03167.
  11. Kim J and Baek C (2019). Bivariate long range dependent time series forecasting using deep learning, The Korean Journal of Applied Statistics, 32, 69-81. https://doi.org/10.5351/KJAS.2019.32.1.069
  12. Kingma DP and Ba J (2014). Adam: a method for stochastic optimization, arXiv preprint, arXiv:1412.6980.
  13. Laurent C, Pereyra G, Brakel P, Zhang Y, and Bengio Y (2016). Batch normalized recurrent neural networks. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, 2657-2661.
  14. Lehmann R and Weyh A (2016). Forecasting employment in Europe: are survey results helpful?, Journal of Business Cycle Research, 12, 81-117. https://doi.org/10.1007/s41549-016-0002-5
  15. Li S, Li W, Cook C, Zhu C, and Gao Y (2018). Independently recurrent neural network (IndRNN): building a longer and deeper RNN, arXiv preprint, arXiv:1803.04831.
  16. McCracken MW and Ng S (2016). FRED-MD: a monthly database for macroeconomic research, Journal of Business and Economic Statistics, 34, 574-589. https://doi.org/10.1080/07350015.2015.1086655
  17. Qiu M, Song Y, and Akagi F (2016). Application of artificial neural network for the prediction of stock market returns: the case of the Japanese stock market, Chaos, Solitons and Fractals, 85, 1-7. https://doi.org/10.1016/j.chaos.2016.01.004
  18. Rapach DE and Strauss JK (2010). Bagging or combining (or both)? an analysis based on forecasting U.S. employment growth, Econometric Reviews, 29, 511-533. https://doi.org/10.1080/07474938.2010.481550
  19. Rapach DE and Strauss JK (2012). Forecasting US state-level employment growth: an amalgamation approach, International Journal of Forecasting, 28, 315-327. https://doi.org/10.1016/j.ijforecast.2011.08.004
  20. Ruder S (2016). An overview of gradient descent optimization algorithms, arXiv preprint, arXiv:1609.04747.
  21. Santurkar S, Tsipras D, Ilyas A, and Madry A (2018). How does batch normalization help optimization? (No, it is not about internal covariate shift), arXiv preprint, arXiv:1805.11604.
  22. Siliverstovs B (2013). Do business tendency surveys help in forecasting employment? A real-time evidence for Switzerland, OECD Journal: Journal of Business Cycle Measurement and Analysis, 2013/2.
  23. Sola J and Sevilla J (1997). Importance of input data normalization for the application of neural networks to complex industrial problems, IEEE Transactions on Nuclear Science, 44, 1464-1468. https://doi.org/10.1109/23.589532
  24. Stock JH and Watson MW (1996). Evidence on structural instability in macroeconomic time series relations, Journal of Business and Economic Statistics, 14, 11-30. https://doi.org/10.2307/1392096
  25. Tarassow A (2019). Forecasting U.S. money growth using economic uncertainty measures and regularisation techniques, International Journal of Forecasting, 35, 443-457. https://doi.org/10.1016/j.ijforecast.2018.09.012
  26. Uniejewski B, Marcjasz G, and Weron R (2019). Understanding intraday electricity markets: variable selection and very short-term price forecasting using LASSO, International Journal of Forecasting, published online. https://doi.org/10.1016/j.ijforecast.2019.02.001