http://dx.doi.org/10.5351/KJAS.2019.32.1.069

Bivariate long range dependent time series forecasting using deep learning  

Kim, Jiyoung (Department of Statistics, Sungkyunkwan University)
Baek, Changryong (Department of Statistics, Sungkyunkwan University)
Publication Information
The Korean Journal of Applied Statistics, v.32, no.1, 2019, pp. 69-81
Abstract
We consider forecasting of bivariate long range dependent (LRD) time series using a deep learning method. A long short-term memory (LSTM) network, which is well suited to sequential data, is applied to forecast the bivariate series, and its forecasting performance is compared with that of bivariate fractional autoregressive integrated moving average (FARIMA) models. Out-of-sample forecasting errors are compared under several performance measures for functional MRI (fMRI) data and daily realized volatility data. The results show only a subtle difference between the predicted values of the FIVARMA and VARFIMA models. The LSTM network is computationally demanding because of hyperparameter selection, but it is more stable, and its forecasting performance is competitive with that of the parametric long range dependent time series models.
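To clarify the FIVARMA/VARFIMA distinction referred to above: the two parametric benchmarks differ only in the order in which the fractional differencing operator and the VARMA operators are applied. The schematic below follows the commonly used parameterization with the fractional differencing operator D(B) = diag{(1-B)^{d_1}, (1-B)^{d_2}} and VARMA operators Phi(B), Theta(B); it is an illustration of that convention, not necessarily the paper's exact notation.

    \text{FIVARMA:}\quad \Phi(B)\, D(B)\, X_t = \Theta(B)\, \varepsilon_t,
    \qquad
    \text{VARFIMA:}\quad D(B)\, \Phi(B)\, X_t = \Theta(B)\, \varepsilon_t.

Because the matrix operators Phi(B) and D(B) do not commute in general, the two specifications imply slightly different dynamics, which is consistent with the subtle differences in predicted values noted in the abstract.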
Keywords
deep learning; long range dependent; LSTM; FARIMA; FIVARMA; VARFIMA
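As an illustration of the deep-learning side of the comparison, the sketch below fits an LSTM for one-step-ahead forecasting of a bivariate series in PyTorch. This is a minimal sketch, not the authors' code: the simulated input series, window length, hidden size, number of epochs, and learning rate are illustrative assumptions rather than the settings used in the paper.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simulated bivariate series used only as a stand-in for the fMRI or
# realized-volatility data (it is not an LRD simulation).
T = 500
series = np.cumsum(np.random.randn(T, 2) * 0.1, axis=0).astype(np.float32)

def make_windows(x, lag=20):
    """Build (input window, next value) pairs for one-step-ahead training."""
    X, y = [], []
    for t in range(len(x) - lag):
        X.append(x[t:t + lag])
        y.append(x[t + lag])
    return torch.tensor(np.array(X)), torch.tensor(np.array(y))

X, y = make_windows(series)            # X: (samples, lag, 2), y: (samples, 2)
n_train = int(0.8 * len(X))            # simple out-of-sample split
X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

class BivariateLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)    # predict both components jointly

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1])          # last hidden state -> next bivariate value

model = BivariateLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)     # Adam optimizer
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X_tr), y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(X_te)
    rmse = torch.sqrt(loss_fn(pred, y_te)).item()        # one of several possible error measures
print(f"out-of-sample RMSE: {rmse:.4f}")

In practice the hidden size, window length, dropout, and learning rate would be chosen by validation, which is the hyperparameter selection step the abstract describes as computationally demanding.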