• Title/Summary/Keyword: Time series prediction


Comparison of Stock Price Prediction Using Time Series and Non-Time Series Data

  • Min-Seob Song;Junghye Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.67-75
    • /
    • 2023
  • Stock price prediction is an important topic extensively discussed in the financial market, but it is considered a challenging subject due to the numerous factors that can influence it. In this research, performance was compared and analyzed by applying time series prediction models (LSTM, GRU) and non-time series prediction models (RF, SVR, KNN, LGBM), which do not take the temporal dependence of the data into account, to stock price prediction. In addition, various data such as stock price data, technical indicators, financial statement indicators, buy/sell indicators, short selling, and foreign indicators were combined to find optimal predictors and to analyze the major factors affecting stock price prediction by industry. Hyperparameter optimization was also performed for each algorithm to analyze the factors affecting prediction performance. As a result of feature selection and hyperparameter optimization, the forecast accuracy of the time series prediction algorithms GRU and LSTM+GRU was found to be the highest.
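
As an illustration of the non-time-series side of this comparison, here is a minimal sketch (not the authors' code; all names and data are illustrative) that trains RF, SVR, and KNN regressors on lagged windows of a synthetic price series. The LSTM/GRU models, the real stock and indicator data, and the feature-selection and hyperparameter-optimization steps are omitted.

```python
# Minimal sketch: non-time-series regressors on lagged windows of a synthetic price path.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))       # synthetic price path

def make_windows(series, lag=20):
    """Turn a 1-D series into (lagged-window, next-value) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

X, y = make_windows(prices)
split = int(0.8 * len(X))                               # chronological split, no shuffling
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "test MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```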

A Study on the Time Series Prediction Using the Support Vector Machine (보조벡터 머신을 이용한 시계열 예측에 관한 연구)

  • 강환일;정요원;송영기
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.315-315
    • /
    • 2000
  • In this paper, we perform time series prediction using the SVM (Support Vector Machine). We make use of two different loss functions and two different kernel functions: i) quadratic and $\varepsilon$-insensitive loss functions; ii) GRBF (Gaussian Radial Basis Function) and ERBF (Exponential Radial Basis Function) kernels. The Mackey-Glass time series is used for prediction. In both cases, we compare the results obtained by the SVM with those obtained by an ANN (Artificial Neural Network) and show that the SVM performs better than the ANN.
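
A minimal sketch of this setup, assuming scikit-learn's SVR (which uses the ε-insensitive loss) with a Gaussian RBF kernel and an Euler-discretized Mackey-Glass series; the ERBF kernel, the quadratic loss variant, and the ANN baseline from the paper are not reproduced.

```python
# Minimal sketch: one-step prediction of a Mackey-Glass series with an epsilon-insensitive SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def mackey_glass(n=1500, tau=17, a=0.2, b=0.1):
    """Euler-discretized Mackey-Glass chaotic series."""
    x = np.zeros(n + tau)
    x[:tau] = 1.2
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + a * x[t - tau] / (1 + x[t - tau] ** 10) - b * x[t]
    return x[tau:]

series = mackey_glass()
lag = 6
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]
split = int(0.8 * len(X))

# Gaussian RBF kernel; scikit-learn's SVR uses the epsilon-insensitive loss by default.
svr = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=0.01)
svr.fit(X[:split], y[:split])
print("test MSE:", mean_squared_error(y[split:], svr.predict(X[split:])))
```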

Two-dimensional attention-based multi-input LSTM for time series prediction

  • Kim, Eun Been;Park, Jung Hoon;Lee, Yung-Seop;Lim, Changwon
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.1
    • /
    • pp.39-57
    • /
    • 2021
  • Time series prediction is an area of great interest. Algorithms for time series prediction are widely used in many fields such as stock price, temperature, energy, and weather forecasting; classical models as well as recurrent neural networks (RNNs) have been actively developed. After the attention mechanism was introduced to neural network models, many new models with improved performance were developed; in addition, models using attention twice have recently been proposed, resulting in further performance improvements. In this paper, we consider time series prediction by introducing attention twice to an RNN model. The proposed model introduces H-attention and T-attention over the output values and the time step information to select useful information. We conduct experiments on stock price, temperature, and energy data and confirm that the proposed model outperforms existing models.
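
The following PyTorch sketch is my own simplification of the "attention twice" idea: one attention over the input variables and one over the time steps of the LSTM outputs. The class and parameter names are hypothetical, and the paper's exact H-attention/T-attention formulation is not reproduced.

```python
# Minimal sketch: an LSTM with attention applied twice (over inputs, then over time steps).
import torch
import torch.nn as nn

class DualAttentionLSTM(nn.Module):
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.input_attn = nn.Linear(n_inputs, n_inputs)  # scores per input series
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.time_attn = nn.Linear(hidden, 1)            # scores per time step
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                                # x: (batch, time, n_inputs)
        a = torch.softmax(self.input_attn(x), dim=-1)    # weight the input variables
        h, _ = self.lstm(a * x)                          # (batch, time, hidden)
        b = torch.softmax(self.time_attn(h), dim=1)      # weight the time steps
        context = (b * h).sum(dim=1)                     # attention-pooled summary
        return self.out(context).squeeze(-1)

# toy usage: 8 driving series, 30 past steps, predict one target value per sample
model = DualAttentionLSTM(n_inputs=8)
x = torch.randn(16, 30, 8)
print(model(x).shape)                                    # torch.Size([16])
```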

Bayesian Neural Network with Recurrent Architecture for Time Series Prediction

  • Hong, Chan-Young;Park, Jung-Hun;Yoon, Tae-Sung;Park, Jin-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.631-634
    • /
    • 2004
  • In this paper, a Bayesian recurrent neural network (BRNN) is proposed to predict time series data. Among the various traditional prediction methodologies, neural network methods are considered more effective for non-linear and non-stationary time series data. A neural network predictor requires a proper learning strategy to adjust the network weights, and one needs to account for the non-linear and non-stationary evolution of those weights. The Bayesian neural network in this paper estimates not a single set of weights but the probability distributions of the weights. In other words, we set the weight vector as the state vector of a state space model and estimate its probability distribution by Bayesian inference. This approach makes it possible to obtain a more exact estimate of the weights. Moreover, in terms of network architecture, the recurrent feedback structure is known to be superior to the feedforward structure for time series prediction. Therefore, the recurrent network with Bayesian inference, which we call the BRNN, is expected to show higher performance than a normal neural network. To verify the performance of the proposed method, time series data are numerically generated and the neural network predictor is applied to them. As a result, the BRNN is shown to give better prediction results than a common feedforward Bayesian neural network.
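
As one concrete way to treat network weights as a state vector and estimate their distribution sequentially, the sketch below uses a bootstrap particle filter over the weights of a tiny recurrent predictor on a toy series. This is an illustration of the state-space idea only; it is not the paper's BRNN, whose network and inference scheme are more elaborate, and every name and constant here is illustrative.

```python
# Minimal sketch: sequential Bayesian estimation of recurrent-predictor weights via a particle filter.
import numpy as np

rng = np.random.default_rng(1)
T, n_particles, n_hidden = 300, 500, 4
series = np.sin(np.linspace(0, 20, T)) + 0.05 * rng.normal(size=T)   # toy data

dim = 3 * n_hidden + 1                 # input->hidden, diagonal hidden->hidden, hidden->out, bias
particles = rng.normal(0, 0.5, (n_particles, dim))
weights = np.full(n_particles, 1.0 / n_particles)

def predict(theta, x_prev, h_prev):
    """One step of a tiny recurrent predictor with diagonal recurrence."""
    w_in, w_rec = theta[:n_hidden], theta[n_hidden:2 * n_hidden]
    w_out, b = theta[2 * n_hidden:3 * n_hidden], theta[-1]
    h = np.tanh(w_in * x_prev + w_rec * h_prev)
    return w_out @ h + b, h

h = np.zeros((n_particles, n_hidden))
preds = np.zeros(T)
for t in range(1, T):
    particles += rng.normal(0, 0.02, particles.shape)          # random-walk weight evolution
    y_hat, h_new = np.empty(n_particles), np.empty_like(h)
    for i in range(n_particles):
        y_hat[i], h_new[i] = predict(particles[i], series[t - 1], h[i])
    h = h_new
    preds[t] = np.sum(weights * y_hat)                          # posterior-mean prediction
    like = np.exp(-0.5 * ((series[t] - y_hat) / 0.1) ** 2)      # Gaussian observation likelihood
    weights = weights * like + 1e-300
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)       # resample
    particles, h = particles[idx], h[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print("last-100-step MSE:", np.mean((preds[-100:] - series[-100:]) ** 2))
```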


Prediction of Time Series Using Hierarchical Mixtures of Experts Through an Annealing (어닐링에 의한 Hierarchical Mixtures of Experts를 이용한 시계열 예측)

  • 유정수;이원돈
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10c
    • /
    • pp.360-362
    • /
    • 1998
  • In the original mixtures of experts framework, the parameters of the network are determined by gradient descent, which is naturally slow. In [2], the Expectation-Maximization (EM) algorithm is used instead to obtain the network parameters, resulting in substantially reduced training times. This paper presents a new EM algorithm for prediction. We show that an efficient training algorithm can be derived for the HME network. To verify the utility of the algorithm, we look at specific examples in time series prediction. The application of the new EM algorithm to time series prediction has been quite successful.
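
A stripped-down sketch of EM training for a one-level mixture of two linear experts with a logistic gate, applied to one-step prediction on lagged values; the hierarchical structure and the annealing schedule of the paper are omitted, and the data, names, and parameter choices are illustrative.

```python
# Minimal sketch: EM for a one-level mixture of two linear experts with a logistic gate.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 600)
series = np.where(np.sin(0.3 * t) > 0, np.sin(2 * t), 0.5 * t % 1.0) + 0.02 * rng.normal(size=600)

lag = 4
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

resp = rng.uniform(size=(len(X), 2)); resp /= resp.sum(1, keepdims=True)   # random responsibilities
experts = [LinearRegression(), LinearRegression()]
gate = LogisticRegression()
sigma = 0.1                                                                 # assumed expert noise scale

for _ in range(30):
    # M-step: responsibility-weighted least squares per expert; gate fit on soft assignments.
    for k in (0, 1):
        experts[k].fit(X, y, sample_weight=resp[:, k] + 1e-6)
    Xg = np.vstack([X, X])
    zg = np.r_[np.zeros(len(X)), np.ones(len(X))]
    gate.fit(Xg, zg, sample_weight=np.r_[resp[:, 0], resp[:, 1]] + 1e-6)
    # E-step: responsibilities from gate priors times Gaussian expert likelihoods.
    prior = gate.predict_proba(X)
    lik = np.column_stack([np.exp(-0.5 * ((y - experts[k].predict(X)) / sigma) ** 2) for k in (0, 1)])
    resp = prior * lik
    resp /= resp.sum(1, keepdims=True) + 1e-12

mix = gate.predict_proba(X)
y_hat = sum(mix[:, k] * experts[k].predict(X) for k in (0, 1))
print("in-sample MSE:", np.mean((y - y_hat) ** 2))
```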


Multiple Model Fuzzy Prediction Systems with Adaptive Model Selection Based on Rough Sets and its Application to Time Series Forecasting (러프 집합 기반 적응 모델 선택을 갖는 다중 모델 퍼지 예측 시스템 구현과 시계열 예측 응용)

  • Bang, Young-Keun;Lee, Chul-Heui
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.25-33
    • /
    • 2009
  • Recently, TS fuzzy models, which include linear equations in the consequent part, have been widely used for time series forecasting, and their prediction performance is somewhat dependent on characteristics of the time series such as stationarity. Thus, a new prediction method is suggested in this paper that is especially effective for nonstationary time series. First, data preprocessing is introduced to extract the patterns and regularities of the time series well, and then multiple TS fuzzy predictors are constructed. Next, an appropriate model is chosen for each input datum by an adaptive model selection mechanism based on rough sets, and the prediction is performed. Finally, an error compensation procedure is added to improve performance by decreasing the prediction error. Computer simulations are performed on typical cases to verify the effectiveness of the proposed method. It may be very useful for predicting time series with uncertainty and/or nonstationarity because it handles and reflects the characteristics of the data better.
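
A heavily simplified sketch of the multiple-model idea, assuming differencing as the preprocessing step, k-means clustering to form local (TS-style) linear models, and a nearest-centroid rule standing in for the paper's rough-set-based model selection; the error compensation step is omitted and the data are synthetic.

```python
# Minimal sketch: multiple local linear models with per-input adaptive model selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
trend = np.linspace(0, 10, 800)
series = trend + np.sin(np.linspace(0, 40, 800)) + 0.05 * rng.normal(size=800)

diff = np.diff(series)                         # preprocessing for nonstationary data
lag = 5
X = np.array([diff[i:i + lag] for i in range(len(diff) - lag)])
y = diff[lag:]
split = int(0.8 * len(X))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[:split])
models = {}
for c in range(3):
    mask = km.labels_ == c
    models[c] = LinearRegression().fit(X[:split][mask], y[:split][mask])

# adaptive selection: each test input is routed to its cluster's local model
labels_te = km.predict(X[split:])
pred_diff = np.array([models[c].predict(x[None])[0] for c, x in zip(labels_te, X[split:])])
pred_level = series[lag + split:-1] + pred_diff         # undo the differencing
true_level = series[lag + split + 1:]
print("test MSE:", np.mean((pred_level - true_level) ** 2))
```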

A Study on Improving Prediction Accuracy by Modeling Multiple Similar Time Series (다중 유사 시계열 모델링 방법을 통한 예측정확도 개선에 관한 연구)

  • Cho, Young-Hee;Lee, Gye-Sung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.6
    • /
    • pp.137-143
    • /
    • 2010
  • A method for improving prediction accuracy by processing time series data is studied in this research. We design techniques to model multiple similar time series and avoid the shortcomings of a single prediction model, and we predict future changes using effective rules derived from these models. The methods for testing prediction accuracy consist of three types: the fixed-interval, sliding, and cumulative methods. Among the three, the cumulative method produced the highest accuracy.
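
A minimal sketch of how the three evaluation schemes can be realized as train/test index generators; this is my reading of "fixed interval, sliding, and cumulative", not the authors' code, and the window lengths are arbitrary.

```python
# Minimal sketch: fixed-interval, sliding, and cumulative (expanding) evaluation windows.
def fixed_interval_splits(n, train=100, test=20):
    """Disjoint, non-overlapping train/test blocks marching through the series."""
    step = train + test
    for start in range(0, n - step + 1, step):
        yield range(start, start + train), range(start + train, start + step)

def sliding_splits(n, train=100, test=20, step=20):
    """A training window of constant length slides forward by `step`."""
    for start in range(0, n - train - test + 1, step):
        yield range(start, start + train), range(start + train, start + train + test)

def cumulative_splits(n, initial=100, test=20):
    """The training window always starts at 0 and grows as new data accumulate."""
    end = initial
    while end + test <= n:
        yield range(0, end), range(end, end + test)
        end += test

for name, gen in [("fixed", fixed_interval_splits(300)),
                  ("sliding", sliding_splits(300)),
                  ("cumulative", cumulative_splits(300))]:
    tr, te = next(iter(gen))
    print(name, "first split:", (tr.start, tr.stop), "->", (te.start, te.stop))
```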

The Prediction and Analysis of the Power Energy Time Series by Using the Elman Recurrent Neural Network (엘만 순환 신경망을 사용한 전력 에너지 시계열의 예측 및 분석)

  • Lee, Chang-Yong;Kim, Jinho
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.41 no.1
    • /
    • pp.84-93
    • /
    • 2018
  • In this paper, we propose an Elman recurrent neural network to predict and analyze a time series of power energy consumption. To this end, we consider the volatility of the time series and apply sample variance and detrended fluctuation analyses to the volatilities. We demonstrate that there is correlation in the time series of the volatilities, which suggests that the power consumption time series contains a non-negligible amount of non-linear correlation. Based on this finding, we adopt the Elman recurrent neural network as the model for predicting power consumption. As the simplest form of recurrent network, the Elman network is designed to learn sequential or time-varying patterns and can predict a learned series of values. The Elman network adds a layer of "context units" to a standard feedforward network. By adjusting two parameters of the model and performing cross validation, we demonstrate that the proposed model predicts the power consumption with relative errors of 2%~5% and average errors of 3 kWh~8 kWh. To further confirm the experimental results, we performed two types of cross validation designed for time series data. We also support the validity of the model by analyzing multi-step forecasting and find that the prediction errors tend to saturate even though they increase with the prediction time step. The results of this study can be applied to energy management systems for the effective control of the combined use of electric and gas energy.
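
A minimal PyTorch sketch of an Elman-style predictor, where the recurrent hidden state of a single tanh RNN layer plays the role of the "context units"; the power consumption data, the volatility/DFA analysis, and the two time-series cross-validation schemes are not reproduced, and the synthetic series below is only a stand-in.

```python
# Minimal sketch: an Elman-style recurrent network for one-step prediction.
import torch
import torch.nn as nn

class ElmanPredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden,
                          nonlinearity="tanh", batch_first=True)   # Elman recurrence
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, 1)
        h, _ = self.rnn(x)
        return self.out(h[:, -1]).squeeze(-1)      # predict the next value

# toy usage on a synthetic "consumption" series
torch.manual_seed(0)
series = torch.sin(torch.linspace(0, 60, 1200)) + 0.1 * torch.randn(1200)
lag = 24
X = torch.stack([series[i:i + lag] for i in range(len(series) - lag)]).unsqueeze(-1)
y = series[lag:]

model = ElmanPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```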

Financial Application of Time Series Prediction based on Genetic Programming

  • Yoshihara, Ikuo;Aoyama, Tomoo;Yasunaga, Moritoshi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.524-524
    • /
    • 2000
  • We have been developing a method to build one-step-ahead prediction models for time series using genetic programming (GP). Our model building method consists of two stages. In the first stage, the functional forms of the models are inherited from their parent models through the crossover operation of GP. In the second stage, the parameters of the newborn model are optimized by an iterative method similar to back propagation. The proposed method has been applied to various kinds of time series problems. An application to seismic ground motion was presented at KACC'99, and since then the method has been improved in many aspects, for example, the addition of new node functions, improvements to the node functions, and the exploitation of many kinds of mutation operators. These new ideas and trials enhance the ability to generate effective and complicated models and reduce CPU time. Here, we present a couple of financial applications, especially focusing on gold price prediction in the Tokyo market.
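
A partial sketch of the second stage only: tuning the parameters of a hypothetical inherited functional form by iterative gradient descent on one-step-ahead squared error. The GP crossover stage, node functions, and mutation operators are omitted, and the "gold price" series here is synthetic.

```python
# Minimal sketch: gradient-based parameter tuning of a fixed, inherited functional form.
import numpy as np

rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.normal(0, 0.5, 500))        # synthetic "gold price"

# hypothetical inherited form: y_hat[t] = a*y[t-1] + b*(y[t-1] - y[t-2]) + c
def model(params, y1, y2):
    a, b, c = params
    return a * y1 + b * (y1 - y2) + c

y1, y2, target = price[1:-1], price[:-2], price[2:]
params = np.zeros(3)
lr = 1e-5
for _ in range(2000):                                   # gradient descent on squared error
    err = model(params, y1, y2) - target
    grad = np.array([np.mean(err * y1), np.mean(err * (y1 - y2)), np.mean(err)])
    params -= lr * grad
print("fitted (a, b, c):", params)
```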


RBF Neural Network Structure for Prediction of Non-linear, Non-stationary Time Series (비선형, 비정상 시계열 예측을 위한 RBF(Radial Basis Function) 신경회로망 구조)

  • Kim, Sang-Hwan;Lee, Chong-Ho
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2299-2301
    • /
    • 1998
  • In this paper, a modified RBF (Radial Basis Function) neural network structure is suggested for the prediction of time series with non-linear, non-stationary characteristics. A conventional RBF neural network that predicts a time series from past outputs senses the trajectory of the time series and reacts when there is a strong relation between the input and a hidden neuron's RBF center, but this response is highly sensitive to the level and trend of the time series. In order to overcome such dependencies, the hidden neurons are modified to react to the increments of the input variable and are multiplied by the increments (or decrements) of the outputs for prediction. When the suggested structure is applied to the prediction of the Lorenz and Rossler equations, improved performance is obtained.
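
A minimal sketch of the increment idea using a plain RBF network: the network sees differences of past values of the Lorenz x-component and predicts the next difference, which is then added back to the last observed level. The paper's specific structural modification of the hidden neurons is not reproduced, and the centers, widths, and lags below are arbitrary choices.

```python
# Minimal sketch: a Gaussian RBF network trained on increments of the Lorenz x-component.
import numpy as np

def lorenz_x(n=4000, dt=0.01):
    """x-component of the Lorenz system, Euler-integrated."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = 10 * (y - x), x * (28 - z) - y, x * y - 8 / 3 * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

series = lorenz_x()
inc = np.diff(series)                                   # work on increments, not levels
lag = 8
X = np.array([inc[i:i + lag] for i in range(len(inc) - lag)])
y = inc[lag:]
split = int(0.8 * len(X))

rng = np.random.default_rng(0)
centers = X[:split][rng.choice(split, 40, replace=False)]   # RBF centers from training data
width = np.median(np.linalg.norm(X[:split, None] - centers[None], axis=-1))

def phi(A):
    """Gaussian RBF activations of each row of A with respect to the centers."""
    d = np.linalg.norm(A[:, None] - centers[None], axis=-1)
    return np.exp(-(d / width) ** 2)

W, *_ = np.linalg.lstsq(phi(X[:split]), y[:split], rcond=None)   # linear output layer
pred_inc = phi(X[split:]) @ W
pred_level = series[lag + split:-1] + pred_inc          # add the increment back to the last level
true_level = series[lag + split + 1:]
print("test MSE on levels:", np.mean((pred_level - true_level) ** 2))
```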
