• Title/Abstract/Keyword: Time series prediction


Comparison of Stock Price Prediction Using Time Series and Non-Time Series Data

  • Min-Seob Song;Junghye Min
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 28, No. 8
    • /
    • pp.67-75
    • /
    • 2023
  • Stock price prediction is an important topic in financial markets, but it is considered a difficult one because of the many factors that can influence prices. In this paper, time series prediction models (LSTM, GRU) and non-time-series prediction models that do not take the temporal dependence of the data into account (RF, SVR, KNN, LGBM) are applied to stock price prediction, and their performance is compared and analyzed. Various data, including stock prices, technical analysis indicators, financial statement indicators, buy/sell indicators, short selling, and foreign investor indicators, are combined and used to identify the optimal prediction features, and the main factors affecting stock price prediction are analyzed by industry sector. Hyperparameter optimization is also carried out to improve the prediction performance of each algorithm and to analyze the factors that influence performance. After feature selection and hyperparameter optimization, the time series algorithms GRU and LSTM+GRU showed the highest prediction accuracy.
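
As a rough illustration of the comparison described above, the sketch below feeds the same fixed-length price windows to one representative non-time-series model (a random forest over flattened windows) and to a small GRU. The synthetic random-walk prices, the 20-step lookback, the layer sizes, and the choice of random forest as the baseline are illustrative assumptions, not the paper's data or settings.

```python
# Minimal sketch: time-series (GRU) vs. non-time-series (random forest) prediction
# on windowed data. All data and hyperparameters here are placeholder assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

def make_windows(series, lookback=20):
    """Turn a 1-D series into (samples, lookback) windows and next-step targets."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X.astype(np.float32), series[lookback:].astype(np.float32)

prices = np.cumsum(np.random.randn(1000)).astype(np.float32)  # stand-in for closing prices
X, y = make_windows(prices)
split = int(0.8 * len(X))

# Non-time-series baseline: each window is treated as a flat feature vector.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
rf_pred = rf.predict(X[split:])

# Time-series model: a small GRU that reads the window step by step.
class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, lookback, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1]).squeeze(-1)

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X[:split]).unsqueeze(-1)
yb = torch.from_numpy(y[:split])
for _ in range(200):                            # short full-batch training loop
    opt.zero_grad()
    nn.functional.mse_loss(model(xb), yb).backward()
    opt.step()

gru_pred = model(torch.from_numpy(X[split:]).unsqueeze(-1)).detach().numpy()
print("RF  test MSE:", np.mean((rf_pred - y[split:]) ** 2))
print("GRU test MSE:", np.mean((gru_pred - y[split:]) ** 2))
```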

보조벡터 머신을 이용한 시계열 예측에 관한 연구 (A study on the Time Series Prediction Using the Support Vector Machine)

  • 강환일;정요원;송영기
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.315-315
    • /
    • 2000
  • In this paper, we perform time series prediction using the SVM (Support Vector Machine). We make use of two different loss functions and two different kernel functions: i) the quadratic and $\varepsilon$-insensitive loss functions, and ii) the GRBF (Gaussian Radial Basis Function) and ERBF (Exponential Radial Basis Function) kernels. The Mackey-Glass time series is used for prediction. In both cases, we compare the results obtained by the SVM to those obtained by an ANN (Artificial Neural Network) and show that the SVM performs better than the ANN.
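
A minimal sketch in the same spirit is shown below: scikit-learn's SVR (ε-insensitive loss with a Gaussian RBF kernel) is trained on a delay embedding of an Euler-discretized Mackey-Glass series. The embedding length, prediction horizon, and SVR hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: SVR with a Gaussian RBF kernel predicting the Mackey-Glass series.
import numpy as np
from sklearn.svm import SVR

def mackey_glass(n=1500, tau=17, beta=0.2, gamma=0.1, power=10, dt=1.0, x0=1.2):
    """Euler-discretized Mackey-Glass delay differential equation."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** power) - gamma * x[t])
    return x[tau:]

series = mackey_glass()
lookback, horizon = 6, 6                         # embed 6 past samples, predict 6 steps ahead
n_samples = len(series) - lookback - horizon
X = np.array([series[i:i + lookback] for i in range(n_samples)])
y = np.array([series[i + lookback + horizon - 1] for i in range(n_samples)])
split = int(0.7 * len(X))

model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=1.0)  # eps-insensitive loss, RBF kernel
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```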

Two-dimensional attention-based multi-input LSTM for time series prediction

  • Kim, Eun Been;Park, Jung Hoon;Lee, Yung-Seop;Lim, Changwon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 28, No. 1
    • /
    • pp.39-57
    • /
    • 2021
  • Time series prediction is an area of great interest to many people. Algorithms for time series prediction are widely used in many fields such as stock prices, temperature, energy and weather forecasting; classical models as well as recurrent neural networks (RNNs) have been actively developed. After the attention mechanism was introduced to neural network models, many new models with improved performance were developed, and models using attention twice have also recently been proposed, resulting in further performance improvements. In this paper, we consider time series prediction by introducing attention twice into an RNN model. The proposed model introduces H-attention and T-attention over the output values and the time-step information to select useful information. We conduct experiments on stock price, temperature and energy data and confirm that the proposed model outperforms existing models.
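
The sketch below shows only a single, generic temporal attention over LSTM hidden states, to illustrate how learned attention weights select useful time steps; it is not the paper's two-dimensional H-attention/T-attention formulation, and all sizes are placeholder assumptions.

```python
# Minimal sketch: an LSTM whose hidden states are pooled by learned temporal attention.
import torch
import torch.nn as nn

class TemporalAttentionLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # one attention score per time step
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_features)
        h, _ = self.lstm(x)                      # h: (batch, time, hidden)
        alpha = torch.softmax(self.score(h), dim=1)   # attention weights over time steps
        context = (alpha * h).sum(dim=1)         # attention-weighted sum of hidden states
        return self.head(context).squeeze(-1)

x = torch.randn(8, 30, 5)                        # 8 series, 30 time steps, 5 driving inputs
print(TemporalAttentionLSTM(n_features=5)(x).shape)   # -> torch.Size([8])
```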

Bayesian Neural Network with Recurrent Architecture for Time Series Prediction

  • Hong, Chan-Young;Park, Jung-Hun;Yoon, Tae-Sung;Park, Jin-Bae
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2004년도 ICCAS
    • /
    • pp.631-634
    • /
    • 2004
  • In this paper, the Bayesian recurrent neural network (BRNN) is proposed to predict time series data. Among the various traditional prediction methodologies, neural network methods are considered more effective for non-linear and non-stationary time series data. A neural network predictor requires a proper learning strategy to adjust the network weights, and one needs to account for the non-linear and non-stationary evolution of those weights. The Bayesian neural network in this paper estimates not a single set of weights but the probability distributions of the weights. In other words, we set the weight vector as the state vector of a state-space model and estimate its probability distribution by Bayesian inference. This approach makes it possible to obtain a more exact estimate of the weights. Moreover, with respect to network architecture, the recurrent feedback structure is known to be superior to the feedforward structure for time series prediction. Therefore, the recurrent network with Bayesian inference, which we call the BRNN, is expected to outperform a normal neural network. To verify the performance of the proposed method, time series data are numerically generated and the neural network predictors are applied to them. As a result, the BRNN is shown to give better prediction results than a common feedforward Bayesian neural network.
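
To keep the weights-as-state-vector idea concrete and short, the sketch below filters the weights of a simple linear AR(3) predictor with an exact Kalman recursion, so a mean and covariance (a probability distribution) is maintained over the weights at every step. The paper applies this state-space view to a recurrent network; the linear model, noise levels, and data here are assumptions for illustration only.

```python
# Minimal sketch: recursive Bayesian (Kalman) estimation of the weights of a linear
# AR(3) one-step-ahead predictor. The weight vector is the state of a state-space model.
import numpy as np

lookback = 3
series = np.sin(0.1 * np.arange(500)) + 0.05 * np.random.randn(500)

w_mean = np.zeros(lookback)                  # posterior mean of the weight vector
w_cov = np.eye(lookback)                     # posterior covariance of the weights
q, r = 1e-5, 0.05 ** 2                       # process noise (weight drift) and observation noise

preds = []
for t in range(lookback, len(series)):
    x, y = series[t - lookback:t], series[t]
    w_cov += q * np.eye(lookback)            # predict step: weights may drift over time
    preds.append(x @ w_mean)                 # one-step-ahead prediction before seeing y
    s = x @ w_cov @ x + r                    # innovation variance
    k = w_cov @ x / s                        # Kalman gain
    w_mean = w_mean + k * (y - x @ w_mean)   # update step: posterior mean of the weights
    w_cov = w_cov - np.outer(k, x @ w_cov)   # posterior covariance of the weights

print("one-step RMSE:", np.sqrt(np.mean((np.array(preds) - series[lookback:]) ** 2)))
```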


어닐링에 의한 Hierarchical Mixtures of Experts를 이용한 시계열 예측 (Prediction of Time Series Using Hierarchical Mixtures of Experts Through an Annealing)

  • 유정수;이원돈
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 1998년도 가을 학술발표논문집 Vol.25 No.2 (2)
    • /
    • pp.360-362
    • /
    • 1998
  • In the original mixtures-of-experts framework, the parameters of the network are determined by gradient descent, which is naturally slow. In [2], the Expectation-Maximization (EM) algorithm is used instead to obtain the network parameters, resulting in substantially reduced training times. This paper presents a new EM algorithm for prediction. We show that an efficient training algorithm may be derived for the HME network. To verify the utility of the algorithm, we look at specific examples in time series prediction. The application of the new EM algorithm to time series prediction has been quite successful.
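
As an assumed, stripped-down illustration of fitting experts with EM, the sketch below trains a plain mixture of linear experts with input-independent mixing weights on a lag-1 embedding of a regime-switching signal. The full HME adds a hierarchy of gating networks on top of this, and the paper's annealing schedule is not reproduced here.

```python
# Minimal sketch: EM for a mixture of linear experts (simplified stand-in for HME).
import numpy as np

def em_mixture_of_experts(X, y, n_experts=2, n_iter=50):
    n, d = X.shape
    W = np.random.randn(n_experts, d) * 0.1     # expert regression weights
    pi = np.full(n_experts, 1.0 / n_experts)    # mixing proportions
    var = np.ones(n_experts)                    # expert noise variances
    for _ in range(n_iter):
        # E-step: responsibility of each expert for each sample
        resid = y[None, :] - W @ X.T
        lik = np.exp(-0.5 * resid ** 2 / var[:, None]) / np.sqrt(2 * np.pi * var[:, None])
        resp = pi[:, None] * lik + 1e-12        # small constant guards against underflow
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: weighted least squares per expert, then update variances and mixing weights
        for k in range(n_experts):
            r = resp[k]
            A = (X * r[:, None]).T @ X + 1e-6 * np.eye(d)
            W[k] = np.linalg.solve(A, (X * r[:, None]).T @ y)
            var[k] = (r * (y - X @ W[k]) ** 2).sum() / r.sum()
        pi = resp.sum(axis=1) / n
    return W, pi, var

# Toy regime-switching series, embedded with one lag plus a bias term.
t = np.arange(400)
series = np.where((t // 50) % 2 == 0, 0.9, -0.9) * np.sin(0.2 * t) + 0.05 * np.random.randn(400)
X = np.stack([series[:-1], np.ones(len(series) - 1)], axis=1)
W, pi, var = em_mixture_of_experts(X, series[1:])
print("mixing weights:", pi)
```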


러프 집합 기반 적응 모델 선택을 갖는 다중 모델 퍼지 예측 시스템 구현과 시계열 예측 응용 (Multiple Model Fuzzy Prediction Systems with Adaptive Model Selection Based on Rough Sets and its Application to Time Series Forecasting)

  • 방영근;이철희
    • 한국지능시스템학회논문지
    • /
    • Vol. 19, No. 1
    • /
    • pp.25-33
    • /
    • 2009
  • Recently, TS fuzzy models with linear consequent parts have been widely used for time series prediction, and their prediction performance is closely related to data characteristics such as stationarity. This paper therefore proposes a new prediction technique that is particularly effective for non-stationary time series. A data preprocessing step is introduced to extract the patterns and regularities of the time series, a multiple-model TS fuzzy predictor is constructed, and an adaptive model selection mechanism based on rough sets then selects the prediction model best suited to the characteristics of each input, so that prediction adapts to the input data. Finally, an error compensation mechanism is added to reduce the prediction error and further improve performance. The performance of the proposed technique is verified through simulation. Because the proposed technique reflects the characteristics of the time series data both when building the prediction models and when performing prediction, it can be used very effectively for predicting time series with uncertainty and non-stationarity.

다중 유사 시계열 모델링 방법을 통한 예측정확도 개선에 관한 연구 (A Study on Improving Prediction Accuracy by Modeling Multiple Similar Time Series)

  • 조영희;이계성
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 10, No. 6
    • /
    • pp.137-143
    • /
    • 2010
  • This study investigates how to improve prediction accuracy through time series data processing. To overcome the weaknesses of a single prediction model, similar time series are selected and a model is derived from them. Effective rules are generated from this model to predict future changes in the data. Experiments confirmed a significant improvement in prediction accuracy. For model construction, both fixed and variable intervals were used, and prediction accuracy was measured for the fixed-interval, sliding-window, and cumulative-interval schemes; among these, the cumulative-interval scheme gave the highest accuracy.
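
The sketch below illustrates the three training-interval schemes compared above (fixed interval, sliding window, and cumulative interval), using a least-squares AR(5) model as an assumed stand-in for the paper's rule-based models; the series, window length, and model order are placeholders.

```python
# Minimal sketch: fixed-interval vs. sliding-window vs. cumulative-interval training,
# with a least-squares AR(5) model standing in for the paper's rule-based models.
import numpy as np

def fit_ar(train, lookback=5):
    """Fit a least-squares AR(lookback) model (with bias) on `train`."""
    X = np.array([train[i:i + lookback] for i in range(len(train) - lookback)])
    y = train[lookback:]
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict_next(w, history, lookback=5):
    return np.r_[history[-lookback:], 1.0] @ w

series = np.sin(0.05 * np.arange(600)) + 0.1 * np.random.randn(600)
start, window = 300, 100
w_fixed = fit_ar(series[start - window:start])       # fixed interval: fitted once, never updated
errors = {"fixed": [], "sliding": [], "cumulative": []}

for t in range(start, len(series) - 1):
    history, truth = series[:t + 1], series[t + 1]
    errors["fixed"].append(predict_next(w_fixed, history) - truth)
    errors["sliding"].append(predict_next(fit_ar(history[-window:]), history) - truth)   # last 100 points
    errors["cumulative"].append(predict_next(fit_ar(history), history) - truth)          # all points so far

for name, e in errors.items():
    print(f"{name:10s} RMSE: {np.sqrt(np.mean(np.square(e))):.4f}")
```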

엘만 순환 신경망을 사용한 전력 에너지 시계열의 예측 및 분석 (The Prediction and Analysis of the Power Energy Time Series by Using the Elman Recurrent Neural Network)

  • 이창용;김진호
    • 산업경영시스템학회지
    • /
    • Vol. 41, No. 1
    • /
    • pp.84-93
    • /
    • 2018
  • In this paper, we propose an Elman recurrent neural network to predict and analyze a time series of power energy consumption. To this end, we consider the volatility of the time series and apply sample variance and detrended fluctuation analysis to the volatilities. We demonstrate that there exists a correlation in the time series of the volatilities, which suggests that the power consumption time series contains a non-negligible amount of non-linear correlation. Based on this finding, we adopt the Elman recurrent neural network as the model for predicting power consumption. As the simplest form of recurrent network, the Elman network is designed to learn sequential or time-varying patterns and can predict a learned series of values. The Elman network has a layer of "context units" in addition to a standard feedforward network. By adjusting two parameters of the model and performing cross validation, we demonstrate that the proposed model predicts power consumption with relative errors in the range of 2%~5% and average errors in the range of 3 kWh~8 kWh. To further confirm the experimental results, we performed two types of cross validation designed for time series data. We also support the validity of the model by analyzing multi-step forecasting, where we found that the prediction errors tend to saturate even though they increase with the prediction time step. The results of this study can be applied to energy management systems for the effective control of the combined use of electric and gas energy.
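
A minimal sketch of an Elman-style predictor for a load series is given below, using torch.nn.RNN (whose hidden state plays the role of the context units) on a synthetic daily-seasonal consumption signal. The data, lookback, hidden size, and error measures are assumptions for illustration, not the paper's experimental setup.

```python
# Minimal sketch: an Elman-style recurrent network predicting the next hour of a load series.
import numpy as np
import torch
import torch.nn as nn

hours = np.arange(24 * 120)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) + 2 * np.random.randn(len(hours))  # kWh-like signal

lookback = 24
X = np.array([load[i:i + lookback] for i in range(len(load) - lookback)], dtype=np.float32)
y = load[lookback:].astype(np.float32)
split = int(0.8 * len(X))

class ElmanForecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)   # tanh Elman-style recurrence
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                # x: (batch, lookback, 1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1]).squeeze(-1)

model = ElmanForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb, yb = torch.from_numpy(X[:split]).unsqueeze(-1), torch.from_numpy(y[:split])
for _ in range(300):                                     # short full-batch training loop
    opt.zero_grad()
    nn.functional.mse_loss(model(xb), yb).backward()
    opt.step()

pred = model(torch.from_numpy(X[split:]).unsqueeze(-1)).detach().numpy()
abs_err = np.abs(pred - y[split:])
print(f"average error: {abs_err.mean():.2f} kWh, relative error: {100 * (abs_err / y[split:]).mean():.1f}%")
```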

Financial Application of Time Series Prediction based on Genetic Programming

  • Yoshihara, Ikuo;Aoyama, Tomoo;Yasunaga, Moritoshi
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.524-524
    • /
    • 2000
  • We have been developing a method for building one-step-ahead prediction models for time series using genetic programming (GP). Our model building method consists of two stages. In the first stage, the functional forms of the models are inherited from their parent models through the crossover operation of GP. In the second stage, the parameters of the newborn model are optimized by an iterative method similar to back propagation. The proposed method has been applied to various kinds of time series problems. An application to seismic ground motion was presented at KACC'99, and since then the method has been improved in many aspects, for example, the addition of new node functions, improvements to the node functions, and the exploitation of many kinds of mutation operators. These new ideas and trials enhance the ability to generate effective and complicated models and reduce CPU time. Here, we present a couple of financial applications, especially focusing on gold price prediction in the Tokyo market.


비선형, 비정상 시계열 예측을 위한 RBF(Radial Basis Function) 신경회로망 구조 (RBF Neural Network Structure for Prediction of Non-linear, Non-stationary Time Series)

  • 김상환;이종호
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1998년도 하계학술대회 논문집 G
    • /
    • pp.2299-2301
    • /
    • 1998
  • In this paper, a modified RBF (Radial Basis Function) neural network structure is suggested for the prediction of time series with non-linear, non-stationary characteristics. A conventional RBF neural network predicting a time series from its past outputs senses the trajectory of the time series and reacts when there is a strong relation between the input and a hidden neuron's RBF center. However, this response is highly sensitive to the level and trend of the time series. In order to overcome such dependencies, the hidden neurons are modified to react to the increments of the input variables, and their responses are multiplied by the increments (or decrements) of the outputs for prediction. When the suggested structure is applied to the prediction of the Lorenz and Rossler equations, improved performance is obtained.
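
The sketch below illustrates the increment idea on an assumed setup: a Gaussian RBF expansion with fixed centers and a least-squares output layer is fitted to the first differences of the Lorenz x-component, and the predicted increment is added back to the last observed level. This is only a loose analogue, not the paper's exact modified RBF structure.

```python
# Minimal sketch: RBF regression on increments (first differences) of the Lorenz x-component.
import numpy as np

def lorenz_x(n=3000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """x-component of the Lorenz system, integrated with a simple Euler scheme."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

series = lorenz_x()
inc = np.diff(series)                                    # work on increments, not raw levels
lookback = 5
X = np.array([inc[i:i + lookback] for i in range(len(inc) - lookback)])
y = inc[lookback:]
split = int(0.8 * len(X))

centers = X[:split][::split // 50][:50]                  # 50 centers sampled from training windows
width = np.mean(np.linalg.norm(X[:split][:, None] - centers[None], axis=2))

def rbf_features(Z):
    d = np.linalg.norm(Z[:, None] - centers[None], axis=2)
    return np.exp(-(d / width) ** 2)

w, *_ = np.linalg.lstsq(rbf_features(X[:split]), y[:split], rcond=None)
pred_inc = rbf_features(X[split:]) @ w
pred_level = series[lookback:-1][split:] + pred_inc      # add predicted increment to the last level
true_level = series[lookback + 1:][split:]
print("level RMSE:", np.sqrt(np.mean((pred_level - true_level) ** 2)))
```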
