Bivariate long range dependent time series forecasting using deep learning

  • 김지영 (Department of Statistics, Sungkyunkwan University)
  • 백창룡 (Department of Statistics, Sungkyunkwan University)
  • Received : 2018.11.15
  • Accepted : 2018.12.12
  • Published : 2019.02.28

Abstract

We consider bivariate long range dependent (LRD) time series forecasting using a deep learning method. A long short-term memory (LSTM) network, well suited to time series data, is applied to forecast bivariate time series, and its forecasting performance is compared with that of two bivariate fractional autoregressive integrated moving average (FARIMA) models, the FIVARMA and VARFIMA models. Out-of-sample forecasting errors are compared under various performance measures for functional MRI (fMRI) data and daily realized volatility data. The results show a subtle difference between the predicted values of the FIVARMA and VARFIMA models. The LSTM network is computationally demanding due to hyper-parameter selection, but it is more stable and its forecasting performance is competitive with that of the parametric long range dependent time series models.
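The evaluation scheme described in the abstract, rolling one-step-ahead out-of-sample forecasting scored by the mean absolute scaled error (MASE) reported in Tables 4.2 and 4.4, can be sketched in NumPy. This is a minimal sketch: the naive last-value predictor stands in for the paper's LSTM and FARIMA models, and the function names and toy series are illustrative, not the authors' code.

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    """Mean absolute scaled error: out-of-sample MAE scaled by the
    in-sample MAE of the naive one-step (last-value) forecast."""
    mae = np.mean(np.abs(y_true - y_pred))
    scale = np.mean(np.abs(np.diff(y_train)))
    return mae / scale

def rolling_one_step_forecast(y, n_test, predict_one_step):
    """One-step-ahead out-of-sample forecasting: at each test time t
    the model sees all observations before t and predicts y[t]."""
    preds = []
    for t in range(len(y) - n_test, len(y)):
        preds.append(predict_one_step(y[:t]))
    return np.array(preds)

# Stand-in predictor: repeat the last observed value.
naive = lambda history: history[-1]

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))   # toy persistent series
n_test = 20
preds = rolling_one_step_forecast(y, n_test, naive)
score = mase(y[-n_test:], preds, y[:-n_test])
```

Replacing `naive` with a fitted model's one-step predictor reproduces the comparison design: each candidate model is scored on the same held-out points, and ratios of MASE values indicate relative performance.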

Keywords

Figure 3.1. Structure of recurrent neural network.
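Figure 3.1 shows the recurrent network underlying the LSTM. The gate equations of a single LSTM cell step can be sketched in plain NumPy; the weight shapes, dimensions, and initialization below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One forward step of an LSTM cell.
    W: (4d, m) input weights, U: (4d, d) recurrent weights, b: (4d,) biases,
    stacked as [input gate, forget gate, cell candidate, output gate]."""
    d = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:d])          # input gate
    f = sigmoid(z[d:2*d])       # forget gate
    g = np.tanh(z[2*d:3*d])     # candidate cell state
    o = sigmoid(z[3*d:])        # output gate
    c_new = f * c + i * g       # cell state: gated memory update
    h_new = o * np.tanh(c_new)  # hidden state emitted at this step
    return h_new, c_new

rng = np.random.default_rng(1)
m, d = 2, 4                     # bivariate input, 4 hidden units (illustrative)
W = rng.normal(scale=0.1, size=(4*d, m))
U = rng.normal(scale=0.1, size=(4*d, d))
b = np.zeros(4*d)
h, c = np.zeros(d), np.zeros(d)
x = rng.normal(size=m)
h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate `f` is what lets the cell retain information over long stretches of the input, which is why the LSTM is a natural candidate for series with long range dependence.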

Figure 3.2. Example of underfitting, overfitting, and proper fitting.

Figure 4.1. Example of one step ahead out-of-sample forecasting.

Figure 4.2. SACF and CCF plot of fMRI data.

Figure 4.3. Forecasting plot of brain channels 1 and 12.

Figure 4.4. SACF and CCF plot of daily realized volatility data.

Figure 4.5. Forecasting plot of Nikkei 225 and KOSPI.

Table 4.1. Result of fMRI data

Table 4.2. MASE ratio with FIVAR and LSTM, VARFI and LSTM: fMRI data

Table 4.3. Result of daily volatility data

Table 4.4. MASE ratio with FIVAR and LSTM, VARFI and LSTM: Daily volatility data

Table 4.5. Notation of daily volatility data
