• Title/Abstract/Keyword: TimeSeries Data

Analysis of Intrinsic Patterns of Time Series Based on Chaos Theory: Focusing on Roulette and KOSPI200 Index Future (카오스 이론 기반 시계열의 내재적 패턴분석: 룰렛과 KOSPI200 지수선물 데이터 대상)

  • Lee, HeeChul;Kim, HongGon;Kim, Hee-Woong
    • Knowledge Management Research
    • /
    • v.22 no.4
    • /
    • pp.119-133
    • /
    • 2021
  • As large amounts of data are produced in every industry, many time series pattern prediction studies are being conducted to support quick business decisions. However, the uncertainty inherent in nonlinear time series data limits the prediction of specific patterns and makes strategic decision-making in corporate management difficult. In recent decades, numerous studies have attempted to predict time series that follow irregular random-walk models, such as demand/supply and financial market data, but it remains difficult to identify specific rules and thereby achieve sustainable corporate objectives. In this study, prediction results for roulette data and financial market data were compared and analyzed using chaos analysis, and meaningful results were derived. The study confirms that chaos analysis offers a useful new approach to analyzing time series data. By comparing the characteristics of roulette games with the time series of Korean stock index futures, it was found that predictive power can be improved once the trend is confirmed, which is meaningful in determining whether nonlinear time series data with high uncertainty contain a specific pattern.
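Chaos analysis of the kind described above typically checks whether the largest Lyapunov exponent is positive. As a minimal, hedged illustration (not the authors' procedure, and using the logistic map in place of roulette or KOSPI200 data), the exponent can be estimated by averaging the log of the local stretching rate along an orbit:

```python
import math

def lyapunov_logistic(r, x0=0.4, n=100_000, discard=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{t+1} = r*x_t*(1 - x_t) by averaging ln|f'(x_t)| = ln|r*(1 - 2*x_t)|
    along the orbit. A positive exponent indicates chaos."""
    x = x0
    for _ in range(discard):  # let the transient die out
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard log(0)
        x = r * x * (1 - x)
    return total / n

# r = 4.0 is fully chaotic; the exact exponent is ln 2 ≈ 0.693.
print(lyapunov_logistic(4.0))
# r = 3.2 settles into a stable 2-cycle; the exponent is negative.
print(lyapunov_logistic(3.2))
```

A truly random series (such as fair roulette outcomes) has no deterministic stretching to measure, which is the kind of contrast such a chaos test exploits.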

Multiple Model Prediction System Based on Optimal TS Fuzzy Model and Its Applications to Time Series Forecasting (최적 TS 퍼지 모델 기반 다중 모델 예측 시스템의 구현과 시계열 예측 응용)

  • Bang, Young-Keun;Lee, Chul-Heui
    • Journal of Industrial Technology
    • /
    • v.28 no.B
    • /
    • pp.101-109
    • /
    • 2008
  • In general, forecasting non-stationary or chaotic time series is very difficult because they contain drift and/or nonlinearities. To overcome this, we suggest a new prediction method based on multiple TS fuzzy predictors combined with preprocessing of the time series, where the differences of the data, rather than the raw series, are applied to the predictors as input. In the preprocessing procedure, candidates for the optimal difference interval are determined by correlation analysis, and the corresponding difference data are generated. Then, for each candidate, a TS fuzzy predictor is constructed using the k-means clustering algorithm and the least squares method. Finally, the predictor that minimizes the performance index is selected and used for subsequent prediction. Computer simulation demonstrates the effectiveness and usefulness of our method.
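The difference-then-select idea in this abstract can be sketched as follows; the function names and the trivial AR(1) predictor standing in for the TS fuzzy predictor are my own simplifications, not the authors' implementation:

```python
def difference(series, d):
    """Difference the series at interval d: y[t] = x[t] - x[t-d]."""
    return [series[t] - series[t - d] for t in range(d, len(series))]

def fit_ar1(y):
    """Least-squares AR(1) fit y[t] ~ a*y[t-1]; returns (a, mse)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(v * v for v in y[:-1]) or 1e-12
    a = num / den
    mse = sum((y[t] - a * y[t - 1]) ** 2 for t in range(1, len(y))) / (len(y) - 1)
    return a, mse

def best_interval(series, candidates=(1, 2, 3, 4)):
    """Pick the difference interval whose predictor has the lowest MSE,
    mirroring 'select the predictor minimizing the performance index'."""
    scores = {d: fit_ar1(difference(series, d))[1] for d in candidates}
    return min(scores, key=scores.get)

# A series with linear drift and period-3 seasonality: differencing at
# d = 3 removes both, so that interval's predictor wins.
x = [0.5 * t + [0.0, 0.3, -0.2][t % 3] for t in range(60)]
print(best_interval(x))  # 3
```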


Uncertain Rule-based Fuzzy Technique: Nonsingleton Fuzzy Logic System for Corrupted Time Series Analysis

  • Kim, Dongwon;Park, Gwi-Tae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.3
    • /
    • pp.361-365
    • /
    • 2004
  • In this paper, we present the modeling of time series data corrupted by noise via a nonsingleton fuzzy logic system (NFLS). An NFLS is useful when the available data are corrupted by noise; it is a fuzzy system whose inputs are modeled as fuzzy numbers. The abilities of the NFLS to approximate arbitrary functions and to deal effectively with noise and uncertainty are used to analyze corrupted time series data. In the simulation results, we compare the NFLS approach with a traditional singleton fuzzy logic system.

Reverse Engineering of a Gene Regulatory Network from Time-Series Data Using Mutual Information

  • Barman, Shohag;Kwon, Yung-Keun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.11a
    • /
    • pp.849-852
    • /
    • 2014
  • Reverse engineering of gene regulatory networks is a challenging task in computational biology. Detecting regulatory relationships among genes from time series data is called reverse engineering; it helps to discover the architecture of the underlying gene regulatory network and gives insight into disease processes, biological processes, and drug discovery. Many statistical approaches are available for reverse engineering of gene regulatory networks. In this paper, we propose pairwise mutual information for reverse engineering a gene regulatory network from time series data. First, we create random Boolean networks using the well-known Erdős-Rényi model. Second, we generate artificial time series data from that network. Then we calculate pairwise mutual information to predict the network. We implemented our system on the Java platform and used Cytoscape 2.8.0 plugins to visualize the random Boolean networks graphically.
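Pairwise mutual information over discretized expression profiles, the core quantity above, can be sketched as follows (binarizing at the series mean is an assumption of this sketch, not necessarily the authors' discretization):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Pairwise mutual information (in bits) between two time series,
    binarized at each series' mean. High MI suggests a candidate
    regulatory relationship between the two genes."""
    def discretize(v):
        m = sum(v) / len(v)
        return [1 if x > m else 0 for x in v]
    a, b = discretize(xs), discretize(ys)
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = 0.0
    for (i, j), c in pab.items():
        pij = c / n
        mi += pij * log2(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi

# A gene compared with its exact copy shares maximal information
# (1 bit for a balanced binary profile); a flat profile shares none.
g1 = [0.1, 0.9, 0.8, 0.2, 0.9, 0.1, 0.7, 0.3]
print(mutual_information(g1, g1))
print(mutual_information(g1, [0.1] * 8))
```

In practice one computes this for every gene pair and keeps the highest-scoring pairs as predicted edges.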

Neural Network-based Time Series Modeling of Optical Emission Spectroscopy Data for Fault Prediction in Reactive Ion Etching

  • Sang Jeen Hong
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.4
    • /
    • pp.131-135
    • /
    • 2023
  • Neural network-based time series models called time series neural networks (TSNNs) are trained by the error backpropagation algorithm and used to predict process shifts of parameters such as gas flow, RF power, and chamber pressure in reactive ion etching (RIE). The training data consists of process conditions, as well as principal components (PCs) of optical emission spectroscopy (OES) data collected in-situ. Data are generated during the etching of benzocyclobutene (BCB) in a SF6/O2 plasma. Combinations of baseline and faulty responses for each process parameter are simulated, and a moving average of TSNN predictions successfully identifies process shifts in the recipe parameters for various degrees of faults.
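The fault-detection step described above — smoothing prediction residuals with a moving average and flagging sustained shifts — can be sketched generically (the window size and threshold here are illustrative, not values from the paper):

```python
def moving_average(values, window):
    """Simple trailing moving average; returns one value per full window."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def detect_shift(residuals, window=5, threshold=0.5):
    """Flag a fault when the smoothed residual magnitude exceeds the
    threshold. Returns the index (in the smoothed series) of the first
    alarm, or None if the process stays on baseline."""
    for i, m in enumerate(moving_average(residuals, window)):
        if abs(m) > threshold:
            return i
    return None

# Residuals near zero during normal operation, then a sustained +1.0 shift;
# the alarm fires once the window is dominated by shifted samples.
residuals = [0.05, -0.1, 0.0, 0.1, -0.05, 0.02] + [1.0] * 6
print(detect_shift(residuals))
```

Smoothing before thresholding trades a few samples of detection delay for robustness against one-off noisy predictions.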


Evolutionary Computation-based Hybrid Clustering Technique for Manufacturing Time Series Data (제조 시계열 데이터를 위한 진화 연산 기반의 하이브리드 클러스터링 기법)

  • Oh, Sanghoun;Ahn, Chang Wook
    • Smart Media Journal
    • /
    • v.10 no.3
    • /
    • pp.23-30
    • /
    • 2021
  • Although manufacturing time series clustering is an important grouping solution for detecting and improving equipment and process defects from large manufacturing data, applying clustering techniques designed for static data to time series yields low accuracy. In this paper, an evolutionary computation-based time series cluster analysis approach is presented to improve the coherence of existing clustering techniques. To this end, the image resulting from the manufacturing process is first converted into one-dimensional time series data using linear scanning, and optimal sub-clusters are derived from the transformed data by hierarchical and split cluster analysis based on the Pearson distance metric. Finally, using a genetic algorithm, an optimal cluster combination with minimal inter-cluster similarity is derived from the two cluster analysis results. The performance superiority of the proposed clustering is verified by comparison with existing clustering techniques on actual manufacturing process images.
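The Pearson distance metric used in the sub-clustering step can be sketched directly; it compares the shape of two series rather than their absolute levels:

```python
from math import sqrt

def pearson_distance(a, b):
    """Pearson correlation distance between two equal-length series:
    d = 1 - r, so identical shapes (up to scale and offset) give d ≈ 0
    and perfectly anti-correlated shapes give d ≈ 2."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

rising = [1, 2, 3, 4, 5]
scaled = [10, 20, 30, 40, 50]   # same shape, different scale
falling = [5, 4, 3, 2, 1]
print(round(pearson_distance(rising, scaled), 6))   # ≈ 0: shape-identical
print(round(pearson_distance(rising, falling), 6))  # ≈ 2: opposite trend
```

This scale-invariance is why correlation distances suit defect patterns whose shape, not amplitude, is the signature.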

Development of a Period Analysis Algorithm for Detecting Variable Stars in Time-Series Observational Data

  • Kim, Dong-Heun;Kim, Yonggi;Yoon, Joh-Na;Im, Hong-Seo
    • Journal of Astronomy and Space Sciences
    • /
    • v.36 no.4
    • /
    • pp.283-292
    • /
    • 2019
  • The purpose of this study was to develop a period analysis algorithm for detecting new variable stars in time-series data observed by charge-coupled device (CCD). We used data from a variable star monitoring program of the CBNUO. The R-filter data of some magnetic cataclysmic variables, observed for more than 20 days, were chosen to achieve good statistics. World Coordinate System (WCS) Tools was used to correct the rotation of the observed images and to assign the same IDs to the stars in the analyzed areas. The developed algorithm was applied to data for DO Dra, TT Ari, RXSJ1803, and MU Cam. In these fields we found 13 variable stars, five of which were new variable stars not previously reported. Our period analysis algorithm was also tested on observation data with mixed fields of view, because observations at the CBNUO were carried out with a 2K CCD as well as a 4K CCD. Our results show that variable stars can be detected with our algorithm even when the field of view has changed. The algorithm is useful for detecting new variable stars and analyzing them from existing time-series data, and can play an important role as a technique for recycling archived data.
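One common way to implement such a period search is phase dispersion minimization — used here only as a plausible stand-in, since the abstract does not name the authors' exact algorithm. The light curve is folded at trial periods, and the period that minimizes the scatter within phase bins wins:

```python
import math

def phase_dispersion(times, mags, period, nbins=10):
    """Sum of within-bin squared deviations of magnitudes after folding
    at a trial period. The true period groups similar magnitudes into
    the same phase bins, minimizing this dispersion."""
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    total = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            total += sum((m - mu) ** 2 for m in b)
    return total

def best_period(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: phase_dispersion(times, mags, p))

# Synthetic variable star: sinusoidal light curve with a 2.5-day period,
# sampled every 0.13 days over ~39 days.
times = [0.13 * k for k in range(300)]
mags = [12.0 + 0.4 * math.sin(2 * math.pi * t / 2.5) for t in times]
trials = [1.0 + 0.05 * i for i in range(60)]  # trial periods 1.00 .. 3.95 days
print(best_period(times, mags, trials))
```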

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterwards, analysis of past corporate defaults remained focused on specific variables, and when the government restructured immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, where total collapse occurs in a single moment. The key variables used in corporate default prediction vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models and mostly do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively.
To construct a consistent bankruptcy model across this period of change, we first train a time series deep learning model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we obtain a model that shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation, and finally evaluated and compared on the test data (2009). This demonstrates the usefulness of a corporate default prediction model based on a deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that a deep learning time series model based on these three bundles of variables supports robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the difficulties of nonlinear variables, multi-collinearity, and lack of data.
The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and more effective in predictive power. Governments at home and abroad are working hard to integrate such systems into the everyday life of their nations, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
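The chronological 7/2/1-year split described above generalizes to a small utility; this is a sketch of the splitting scheme only, not the authors' modeling code:

```python
def chronological_split(yearly_data, train_years=7, valid_years=2):
    """Split year-keyed records into chronologically ordered train,
    validation, and test sets. Time series data must never be shuffled,
    or future information leaks into training."""
    years = sorted(yearly_data)
    train = {y: yearly_data[y] for y in years[:train_years]}
    valid = {y: yearly_data[y] for y in years[train_years:train_years + valid_years]}
    test = {y: yearly_data[y] for y in years[train_years + valid_years:]}
    return train, valid, test

# Ten annual cross-sections, as in the 2000-2009 study design.
data = {year: f"firm-year records {year}" for year in range(2000, 2010)}
train, valid, test = chronological_split(data)
print(sorted(train), sorted(valid), sorted(test))
# train: 2000-2006, validation: 2007-2008 (crisis years), test: 2009
```

Placing the crisis years in the validation fold, as the study does, forces parameter tuning to confront the regime change rather than hiding it in training.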

QP-DTW: Upgrading Dynamic Time Warping to Handle Quasi Periodic Time Series Alignment

  • Boulnemour, Imen;Boucheham, Bachir
    • Journal of Information Processing Systems
    • /
    • v.14 no.4
    • /
    • pp.851-876
    • /
    • 2018
  • Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) and its derivatives, no other technique can handle the alignment of this very complex type of time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution is the elevation of DTW's alignment power from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series, each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method (quasi-periodic dynamic time warping [QP-DTW]) was compared to both the SEA and DTW methods on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy on both the qualitative and quantitative levels. QP-DTW would therefore potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, motif discovery, classification, etc.).
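For reference, the classic DTW recurrence that QP-DTW builds on can be sketched in a few lines:

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two sequences,
    using absolute difference as the local cost. cost[i][j] holds the
    cheapest alignment of a[:i] with b[:j]."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same shape stretched in time aligns at zero cost ...
print(dtw([1, 2, 3], [1, 1, 2, 2, 3, 3]))  # 0.0
# ... while a reversed shape cannot be warped into agreement.
print(dtw([1, 2, 3], [3, 2, 1]))
```

This elasticity in time is exactly what breaks down on quasi-periodic signals with differing period counts, which is the gap QP-DTW targets.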

Wind Data Simulation Using Digital Generation of Non-Gaussian Turbulence Multiple Time Series with Specified Sample Cross Correlations (임의의 표본상호상관함수와 비정규확률분포를 갖는 다중 난류시계열의 디지털 합성방법을 이용한 풍속데이터 시뮬레이션)

  • Seong, Seung-Hak;Kim, Wook;Kim, Kyung-Chun;Boo, Jung-Sook
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.19 no.5
    • /
    • pp.569-581
    • /
    • 2003
  • A method of synthetic time series generation was developed and applied to the simulation of homogeneous turbulence in a periodic 3-D box and to hourly wind data simulation. The method can reproduce almost exactly the sample auto- and cross-correlations of multiple time series while controlling for a non-Gaussian distribution. Using the turbulence simulation, the influence of correlations, non-Gaussian distribution, and one-direction anisotropy on homogeneous structure was studied by investigating the spatial distribution of turbulence kinetic energy and enstrophy. Hourly wind data from Typhoon Robin were used to illustrate the method's capability to simulate the sample cross-correlations of multiple time series. The simulated typhoon data show a similar shape of fluctuations and almost exactly the same sample auto- and cross-correlations as the observed Robin data.
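The idea of generating series with specified sample correlations can be illustrated, in a much-simplified form, by an AR(1) recursion whose coefficient sets the lag-1 autocorrelation (the authors' method matches full auto- and cross-correlation structures across multiple series; this sketch handles a single series and a single lag):

```python
import random

def ar1_series(n, rho, seed=0):
    """Generate a series whose lag-1 autocorrelation is approximately rho,
    via x[t] = rho*x[t-1] + sqrt(1 - rho^2)*noise (unit-variance output)."""
    rng = random.Random(seed)
    scale = (1 - rho ** 2) ** 0.5
    x = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + scale * rng.gauss(0, 1))
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n)) / var

series = ar1_series(20_000, rho=0.7)
print(round(lag1_autocorr(series), 2))  # close to the specified 0.7
```

Matching an arbitrary target correlation function (and a non-Gaussian marginal, as in the paper) requires more machinery, but the principle is the same: choose generator coefficients so the sample statistics hit their prescribed values.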