• Title/Summary/Keyword: Time series decomposition

Empirical Mode Decomposition (EMD) and Nonstationary Oscillation Resampling (NSOR): I. their background and model description

  • Lee, Tae-Sam;Ouarda, Taha B.M.J.;Kim, Byung-Soo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.90-90
    • /
    • 2011
  • Long-term nonstationary oscillations (NSOs) are commonly observed in hydrological and climatological data series such as low-frequency climate oscillation indices and precipitation datasets. In this work, we present a stochastic model that captures NSOs within a given variable. The model employs a data-adaptive decomposition method named empirical mode decomposition (EMD). Irregular oscillatory processes in a given variable can be extracted into a finite number of intrinsic mode functions with the EMD approach. A unique data-adaptive algorithm is proposed in the present paper in order to study the future evolution of the NSO components extracted by EMD.

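The abstract above relies on EMD extracting intrinsic mode functions (IMFs) from a nonstationary series. As a rough illustration of the sifting idea only, and not of the authors' NSO resampling model, the Python sketch below peels off IMFs by repeatedly subtracting the mean of cubic-spline envelopes; the `sift_once` and `emd` helpers, the extrema thresholds, and the fixed iteration limits are illustrative assumptions.

```python
# Minimal sketch of the EMD sifting idea (illustrative only; the paper's NSO
# resampling model is not reproduced). Works on a 1-D numpy array.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:       # too few extrema for spline envelopes
        return None
    upper = CubicSpline(maxima, x[maxima])(t)    # upper envelope through local maxima
    lower = CubicSpline(minima, x[minima])(t)    # lower envelope through local minima
    return x - (upper + lower) / 2.0             # candidate intrinsic mode function

def emd(x, max_imfs=5, n_sifts=10):
    """Very simplified EMD: sift repeatedly, peel off IMFs, return IMFs + residual."""
    imfs = []
    residual = np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(n_sifts):
            candidate = sift_once(h)
            if candidate is None:                # not enough extrema left: keep the trend
                return imfs, residual
            h = candidate
        imfs.append(h)
        residual = residual - h
    return imfs, residual

# Example: a slow oscillation plus a fast one plus noise
signal = np.sin(0.02 * np.arange(1000)) + 0.5 * np.sin(0.3 * np.arange(1000))
imfs, trend = emd(signal + 0.1 * np.random.randn(1000))
```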

Hierarchical Smoothing Technique by Empirical Mode Decomposition (경험적 모드분해법에 기초한 계층적 평활방법)

  • Kim Dong-Hoh;Oh Hee-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.19 no.2
    • /
    • pp.319-330
    • /
    • 2006
  • A signal in the real world is usually composed of multiple signals with different frequency scales. For example, sunspot data fluctuate over 11-year and 85-year cycles, and economic data are assumed to be a compound of a seasonal component, a cyclic component and a long-term trend. Decomposition of a signal is one of the main topics in time series analysis. However, when the signal is subject to nonstationarity, traditional time series analysis such as spectral analysis is not suitable. Huang et al. (1998) proposed a data-adaptive method called empirical mode decomposition (EMD). Due to its robustness to nonstationarity, EMD has been applied to various fields. Huang et al., however, did not consider denoising when the data are contaminated by error. In this paper we propose an efficient denoising method utilizing cross-validation.
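
The key addition claimed in the abstract above is choosing the amount of smoothing by cross-validation. The sketch below shows the generic even/odd split idea on a plain moving-average smoother rather than the paper's EMD-based hierarchical smoother; the `moving_average` and `cv_score` helpers and the candidate window sizes are assumptions for illustration only.

```python
# Illustrative only: simple even/odd cross-validation for picking a smoothing
# level, in the spirit of CV-based denoising; the paper's EMD-based
# hierarchical smoother is not reproduced here.
import numpy as np

def moving_average(x, window):
    """Centered moving average with a flat kernel."""
    return np.convolve(x, np.ones(window) / window, mode="same")

def cv_score(x, window):
    """Smooth the even-indexed points, score squared error against odd-indexed points."""
    even, odd = x[0::2], x[1::2]
    smooth_even = moving_average(even, window)
    n = min(len(smooth_even), len(odd))
    return np.mean((smooth_even[:n] - odd[:n]) ** 2)

noisy = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.3 * np.random.randn(400)
windows = [3, 5, 9, 15, 25]
best_window = min(windows, key=lambda w: cv_score(noisy, w))   # CV-selected smoothing level
denoised = moving_average(noisy, best_window)
```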

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns from their own historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically provides better local information over different time intervals of the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms of the signal. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved when applying such filtering is the correct choice of the filter type and the filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. It is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques have been introduced for this problem. Our study first analyzes thresholding or filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Second, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.

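The abstract above turns on how the shrinkage threshold is chosen. A minimal sketch of one standard choice, soft thresholding with the universal (VisuShrink) threshold $\sigma\sqrt{2\ln n}$, is given below using the PyWavelets package; the wavelet (`db4`), the decomposition level and the MAD-based noise estimate are assumptions, not the criterion selected in the paper.

```python
# Sketch of soft thresholding with the universal (VisuShrink) threshold using
# PyWavelets; the wavelet, level and noise estimate are illustrative choices,
# not the paper's selected criterion.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)        # multi-scale decomposition
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest details (MAD)
    thresh = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[:len(x)]         # reconstruct, trim padding

rate = np.cumsum(np.random.randn(1024)) + 1200.0          # synthetic exchange-rate-like series
smooth_rate = wavelet_denoise(rate)
```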

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taek-Soo;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 1999.03a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting the features of significant patterns from their own historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods (or multi-scale decompositions) such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis theoretically provides better local information over different time intervals of the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms of the signal. To date, studies related to wavelet analysis have increasingly been applied in many different fields. In this study, we focus on several wavelet thresholding criteria and techniques that support multi-signal decomposition methods for financial time series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved when applying such filtering is the correct choice of the filter type and the filter parameters. If the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. It is often selected arbitrarily or by adopting a certain theoretical or statistical criterion. Recently, new and versatile techniques have been introduced for this problem. Our study first analyzes thresholding or filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Second, by comparing the results of different filtering techniques, we present the different filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis, namely finding the optimal filter parameters to extract significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.

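To complement the thresholding sketch after the previous record, the snippet below illustrates the neural-network side of this abstract: forecasting a (presumably already denoised) series one step ahead from lagged inputs with a small multilayer perceptron in scikit-learn. The synthetic random-walk series, the lag count and the network size are assumptions; the KRW/USD data and the paper's architecture are not reproduced.

```python
# Complementary sketch: one-step-ahead forecasting of a (presumably denoised)
# series from lagged inputs with a small neural network; the series, lag
# count and network size are assumptions, not the paper's KRW/USD setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(x, n_lags=5):
    """Rows are [x[t-n_lags], ..., x[t-1]]; targets are x[t]."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

series = np.cumsum(np.random.randn(600))        # stand-in for a denoised rate series
X, y = lagged_matrix(series, n_lags=5)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                     # train on all but the last 50 points
preds = model.predict(X[-50:])                  # one-step-ahead forecasts on the holdout
```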

Control Limits of Time Series Data using Hilbert-Huang Transform : Dealing with Nested Periods (힐버트-황 변환을 이용한 시계열 데이터 관리한계 : 중첩주기의 사례)

  • Suh, Jung-Yul;Lee, Sae Jae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.37 no.4
    • /
    • pp.35-41
    • /
    • 2014
  • Real-life time series characteristic data contain a significant amount of non-stationary components, especially periodic components. Extracting such components has required many ad hoc techniques with external parameters set by users in a case-by-case manner. In this study, we used the Empirical Mode Decomposition method from the Hilbert-Huang Transform to extract them in a systematic manner with the fewest possible ad hoc parameters set by users. After the periodic components are removed, the remaining time-series data can be analyzed with traditional methods such as the ARIMA model. We then suggest a different way of setting control chart limits for characteristic data with periodic components in addition to ARIMA components.
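
After EMD strips the nested periodic components, the abstract above fits the remainder with ARIMA and builds control limits. The sketch below shows only that second half of the workflow with statsmodels, on a synthetic stand-in series; the ARIMA order and the 3-sigma limits on the residuals are assumptions about the chart design.

```python
# Sketch of the second half of the workflow only: ARIMA on the series after
# the periodic components have (supposedly) been removed, then 3-sigma
# control limits on the residuals. The ARIMA order and chart design are
# assumptions; the EMD step is not shown.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

deperiodized = 0.1 * np.cumsum(np.random.randn(300)) + np.random.randn(300)  # stand-in data
fit = ARIMA(deperiodized, order=(1, 0, 1)).fit()
resid = fit.resid
center = resid.mean()
ucl = center + 3 * resid.std()                                # upper control limit
lcl = center - 3 * resid.std()                                # lower control limit
out_of_control = np.where((resid > ucl) | (resid < lcl))[0]   # flagged observations
```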

Measuring the Degree of Integration into the Global Production Network by the Decomposition of Gross Output and Imports: Korea 1970-2018

  • KIM, DONGSEOK
    • KDI Journal of Economic Policy
    • /
    • v.43 no.3
    • /
    • pp.33-53
    • /
    • 2021
  • The import content of exports (ICE) is defined as the amount of foreign input embodied in one unit of export, and it has been used as a measure of the degree of integration into the global production network. In this paper, we suggest an alternative measure based on the decomposition of gross output and imports into the contributions of final demand terms. This measure considers the manner in which a country manages its domestic production base (gross output) and utilizes the foreign sector (imports) simultaneously and can thus be regarded as a more comprehensive measure than ICE. Korea's input-output tables in 1970-2018 are used in this paper. These tables were rearranged according to the same 26-industry classification so that these measures can be computed with time-series continuity and so that the results can be interpreted clearly. The results obtained in this paper are based on extended time-series data and are expected to be reliable and robust. The suggested indicators were applied to these tables, and, based on the results, we conclude that the overall importance of the global economy in Korea's economic strategy has risen and that the degree of Korea's integration into the global production network increased over the entire period. This paper also shows that ICE incorrectly measures the movement of the degree of integration into the global production network in some periods.
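
For readers unfamiliar with the import content of exports (ICE) that the paper argues against, the toy example below computes the standard ICE from a made-up 3-industry input-output table via the domestic Leontief inverse; the paper's alternative final-demand decomposition measure is not reproduced, and every coefficient is invented for illustration.

```python
# Toy computation of the standard import content of exports (ICE) from a
# made-up 3-industry table; the paper's alternative decomposition-based
# measure is not reproduced and all numbers here are invented.
import numpy as np

A_d = np.array([[0.10, 0.05, 0.02],     # domestic input coefficients
                [0.08, 0.12, 0.04],
                [0.03, 0.06, 0.15]])
A_m = np.array([[0.04, 0.02, 0.01],     # imported input coefficients
                [0.05, 0.07, 0.03],
                [0.02, 0.03, 0.06]])
exports = np.array([100.0, 80.0, 50.0])

leontief = np.linalg.inv(np.eye(3) - A_d)          # domestic Leontief inverse (I - A_d)^(-1)
imports_embodied = A_m @ leontief @ exports        # imports needed to produce the exports
ice = imports_embodied.sum() / exports.sum()       # foreign input per unit of export
```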

Quantification of Cerebral Blood Flow Measurements by Magnetic Resonance Imaging Bolus Tracking

  • Park Byung-Rae
    • Biomedical Science Letters
    • /
    • v.11 no.2
    • /
    • pp.129-134
    • /
    • 2005
  • Three different deconvolution techniques for quantifying cerebral blood flow (CBF) from whole-brain $T_2^{\ast}$-weighted bolus tracking images were implemented: parametric Fourier transform (P-FT), parametric singular value decomposition (P-SVD) and nonparametric singular value decomposition (NP-SVD). The techniques were tested on 206 regions from 38 hyperacute stroke patients. In the P-FT and P-SVD techniques, the tissue and arterial concentration time curves were fit to a gamma variate function, and the resulting CBF values correlated very well ($\mathrm{CBF}_{P\text{-}FT} = 1.02 \cdot \mathrm{CBF}_{P\text{-}SVD}$, $r^2 = 0.96$). The NP-SVD CBF values correlated well with the P-FT CBF values only when a sufficient number of time series volumes were acquired to minimize tracer time curve truncation ($\mathrm{CBF}_{P\text{-}FT} = 0.92 \cdot \mathrm{CBF}_{NP\text{-}SVD}$, $r^2 = 0.88$). The correlation between the fitted and unfitted CBV values was also maximized in regions with minimal tracer time curve truncation ($\mathrm{CBV}_{fit} = 1.00 \cdot \mathrm{CBV}_{unfit}$, $r^2 = 0.89$). When a sufficient number of time series volumes could not be acquired (due to scanner limitations) to avoid tracer time curve truncation, the P-FT and P-SVD techniques gave more reliable estimates of CBF than the NP-SVD technique.

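The NP-SVD technique in the abstract above deconvolves the tissue concentration curve by the arterial input function. The sketch below shows a truncated-SVD deconvolution on synthetic curves; the gamma-variate fitting of the parametric variants is omitted, and the time grid, curve shapes and truncation fraction are all assumptions.

```python
# Truncated-SVD deconvolution on synthetic curves: recover the flow-scaled
# residue function F*R(t) from a tissue curve and an arterial input function
# (AIF). Time grid, curve shapes and truncation fraction are assumptions;
# the parametric gamma-variate variants are not shown.
import numpy as np

dt = 1.5
t = np.arange(0, 60, dt)                                  # time points (s)
aif = (t / 6.0) ** 3 * np.exp(-t / 1.5)                   # synthetic arterial input function
residue = 0.01 * np.exp(-t / 4.0)                         # true F*R(t), flow = 0.01
tissue = np.convolve(aif, residue)[:len(t)] * dt          # simulated tissue curve

A = dt * np.array([[aif[i - j] if i >= j else 0.0         # lower-triangular convolution matrix
                    for j in range(len(t))]
                   for i in range(len(t))])
U, s, Vt = np.linalg.svd(A)
s_inv = np.zeros_like(s)
keep = s > 0.2 * s.max()                                  # drop small singular values
s_inv[keep] = 1.0 / s[keep]
flow_residue = Vt.T @ (s_inv * (U.T @ tissue))            # estimated F*R(t)
cbf_estimate = flow_residue.max()                         # flow estimate (arbitrary units)
```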

A Study of Short Term Forecasting of Daily Water Demand Using SSA (SSA를 이용한 일 단위 물수요량 단기 예측에 관한 연구)

  • Kwon, Hyun-Han;Moon, Young-Il
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.18 no.6
    • /
    • pp.758-769
    • /
    • 2004
  • The trends and seasonalities of most time series have large variability. The result of Singular Spectrum Analysis (SSA) processing is a decomposition of the time series into several components, which can often be identified as trends, seasonalities and other oscillatory series, or noise components. Generally, forecasting by the SSA method should be applied to time series governed (perhaps approximately) by linear recurrent formulae (LRF). This study examined the forecasting ability of the SSA-LRF model. The methods are applied to daily water demand data. The models show good forecasting ability in most cases under both statistical and visual assessment; in particular, forecasting validity is good over a 15-day horizon.
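
A minimal sketch of the SSA decomposition step named in the abstract (embedding into a trajectory matrix, SVD, grouping, diagonal averaging) follows; the window length, the grouped components and the synthetic series are assumptions, and the LRF-based forecasting step is not reproduced.

```python
# Minimal SSA sketch: embed, SVD, group leading components, diagonal-average
# back to a series. Window length, grouping and the synthetic series are
# assumptions; the paper's LRF forecasting step is not reproduced.
import numpy as np

def ssa_reconstruct(x, window, components):
    """Reconstruct the part of x spanned by the chosen elementary components."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])       # trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Y = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)   # grouped rank-1 terms
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):                                           # diagonal averaging
        recon[col:col + window] += Y[:, col]
        counts[col:col + window] += 1
    return recon / counts

demand = (np.sin(2 * np.pi * np.arange(400) / 7)        # weekly-like cycle
          + 0.01 * np.arange(400)                       # slow trend
          + 0.3 * np.random.randn(400))                 # noise
trend_and_cycle = ssa_reconstruct(demand, window=56, components=[0, 1, 2])
```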

A Study on a Demand Forecasting Model of Cadastral Surveying Operations by Analyzing Its Primary Factors (지적측량업무 영향요인 분석을 통한 수요예측모형 연구)

  • Song, Myeong-Suk
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2007.11a
    • /
    • pp.477-481
    • /
    • 2007
  • The purpose of this study is to provide an appropriate forecasting model of the cadastral survey workload. Through econometric time-series analysis, Granger causality and VAR model analysis, it suggests forecasting reference materials for the total general cadastral survey workload. The main results are the derivation of the environmental variables that affect the general cadastral survey workload and the VAR (vector autoregression) analysis outputs (impulse response functions and forecast error variance decompositions), which explain how the general workload changes as the environmental variables change. To confirm the stationarity of the time-series data, we conducted the ADF (Augmented Dickey-Fuller) unit root test, and the time-series model analysis derives the best forecasting model for the general cadastral survey workload. The study also presents various standards that apply the statistical methods of econometric analysis, thereby enhancing the existing aggregate system for forecasting the cadastral survey workload.

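The abstract above names a standard econometric toolkit: an ADF unit-root test, Granger causality, and a VAR with impulse responses and forecast error variance decomposition. The sketch below runs that toolkit with statsmodels on synthetic data; the variable names (`workload`, `driver`), the lag orders and the data-generating process are assumptions, since the cadastral series are not available.

```python
# The toolkit named in the abstract, run on synthetic data with statsmodels:
# ADF unit-root test, Granger causality, VAR, impulse responses and forecast
# error variance decomposition. Variable names and lag orders are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
driver = rng.standard_normal(201).cumsum()                        # environment variable (I(1))
workload = 0.5 * np.concatenate(([0.0], driver[:-1])) \
           + rng.standard_normal(201)                             # workload follows lagged driver
data = pd.DataFrame({"workload": np.diff(workload),               # difference to stationarity
                     "driver": np.diff(driver)})

adf_stat, adf_pvalue = adfuller(data["workload"])[:2]             # unit root (stationarity) check
granger = grangercausalitytests(data[["workload", "driver"]], maxlag=2)  # driver -> workload?

var_result = VAR(data).fit(maxlags=2, ic="aic")
irf = var_result.irf(10)                                          # impulse response functions
fevd = var_result.fevd(10)                                        # forecast error variance decomp.
```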

River Stage Forecasting Model Combining Wavelet Packet Transform and Artificial Neural Network (웨이블릿 패킷변환과 신경망을 결합한 하천수위 예측모델)

  • Seo, Youngmin
    • Journal of Environmental Science International
    • /
    • v.24 no.8
    • /
    • pp.1023-1036
    • /
    • 2015
  • A reliable streamflow forecast is essential for flood disaster prevention, reservoir operation, water supply and water resources management. This study proposes a hybrid model for river stage forecasting and investigates its accuracy. The proposed model is the wavelet packet-based artificial neural network (WPANN). The wavelet packet transform (WPT) module in the WPANN model is employed to decompose an input time series into approximation and detail components. The decomposed time series are then used as inputs of the artificial neural network (ANN) module in the WPANN model. Based on model performance indexes, WPANN models are found to be more efficient than the ANN model. The WPANN-sym10 model yields the best performance among all models. It is found that WPT improves the accuracy of the ANN model. The results obtained from this study indicate that the conjunction of WPT and ANN can improve the efficiency of the ANN model and can be a potential tool for forecasting river stage more accurately.
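
As a rough illustration of the WPANN idea in the abstract above, the sketch below decomposes a synthetic stage series with a wavelet packet transform (PyWavelets) and feeds the level-2 subband coefficients to a small neural network (scikit-learn); the `db4` wavelet, the depth, the subband-to-feature mapping and the network size are assumptions and differ from the paper's sym10 configuration.

```python
# Rough WPANN illustration: wavelet packet decomposition of a synthetic stage
# series (PyWavelets), then a small neural network (scikit-learn) on the
# level-2 subband coefficients. Wavelet, depth, feature layout and network
# size are assumptions and differ from the paper's sym10 configuration.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

stage = np.sin(0.05 * np.arange(512)) + 0.3 * np.random.randn(512)   # stand-in river stage

wp = pywt.WaveletPacket(data=stage, wavelet="db4", maxlevel=2)
nodes = wp.get_level(2, order="natural")                    # 4 subbands at level 2
subbands = np.column_stack([node.data for node in nodes])   # rows: downsampled time index

features = subbands[:-1, :]          # current subband values as ANN inputs
target = subbands[1:, 0]             # next approximation coefficient as the target

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(features, target)
next_step = model.predict(features[-1:])                    # one-step-ahead prediction
```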