• Title/Summary/Keyword: exponentially weighted moving average


Volatility Analysis for Multivariate Time Series via Dimension Reduction (차원축소를 통한 다변량 시계열의 변동성 분석 및 응용)

  • Song, Eu-Gine;Choi, Moon-Sun;Hwang, S.Y.
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.6
    • /
    • pp.825-835
    • /
    • 2008
  • Multivariate GARCH (MGARCH) has been useful in financial studies and econometrics for modeling volatilities and correlations between components of multivariate time series. An obvious drawback is that the number of parameters increases rapidly with the number of variables involved. This thesis tries to resolve the problem by using dimension reduction techniques. We briefly review both factor models for dimension reduction and the MGARCH models, including EWMA (exponentially weighted moving average), DVEC (diagonal VEC), BEKK and CCC (constant conditional correlation) models. We create meaningful portfolios obtained after reducing dimension through statistical factor models and fundamental factor models, and in turn these portfolios are applied to MGARCH. In addition, we compare portfolios by assessing MSE, MAD (mean absolute deviation) and VaR (Value at Risk). Various financial time series are analyzed for illustration.
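The EWMA volatility model reviewed in this abstract can be illustrated with a minimal univariate sketch (RiskMetrics-style recursion; the function name, the initialization with the first squared return, and the default λ = 0.94 are illustrative assumptions, not the paper's code):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA variance estimate: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2.
    Returns the one-step-ahead variance series."""
    returns = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns[0] ** 2          # initialize with the first squared return
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2
```

In the multivariate case the same recursion is applied to the outer product of the return vector, which is what makes EWMA the most parsimonious of the MGARCH variants listed above.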

A Study for Improvement Performance on Using Exponentially Weighted Moving Average at IPv6 networks (IPv6 네트웍 환경에서 지수가중적 이동평균 기법을 이용한 성능향상에 관한 연구)

  • Oh, Ji-Hyun;Jeong, Choong-Kyo
    • Proceedings of the Conference of the Korea Institute of Information and Communication Facilities Engineering
    • /
    • 2007.08a
    • /
    • pp.323-326
    • /
    • 2007
  • Mobility Anchor Points (MAPs) are used for mobility management in HMIPv6 networks. Currently a mobile node selects the MAP farthest away from itself as a new MAP among available candidates when it undertakes a macro handoff. With this technique, however, the traffic tends to be concentrated at the MAP with the largest domain size, and the communication cost increases due to the distance between the mobile node and the MAP. In this work, we propose a cost-effective MAP selection scheme. When leaving the current MAP domain, the mobile node calculates the optimal MAP domain size that minimizes the local mobility cost at the new MAP domain, considering the mobile node's velocity and packet transmission rate. The mobile node then selects the MAP domain whose size is closest to the calculated optimum among the candidate MAP domains. In this way, the mobile node can select an optimal MAP adaptively, taking the network and node states into account, thus reducing the communication cost.
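The final selection step described in the abstract amounts to a nearest-size match among candidates. A minimal sketch (the dict layout with `id` and `size` keys is an illustrative assumption):

```python
def select_map(candidates, optimal_size):
    """Pick the candidate MAP whose domain size is closest to the optimal
    domain size computed from the node's velocity and packet rate."""
    return min(candidates, key=lambda c: abs(c["size"] - optimal_size))
```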


An Adaptive Power Saving Mechanism in IEEE 802.11 Wireless IP Networks

  • Pack Sangheon;Choi Yanghee
    • Journal of Communications and Networks
    • /
    • v.7 no.2
    • /
    • pp.126-134
    • /
    • 2005
  • Reducing energy consumption in mobile hosts (MHs) is one of the most critical issues in wireless/mobile networks. IP paging protocol at the network layer and power saving mechanism (PSM) at the link layer are two core technologies for reducing the energy consumption of MHs. First, we investigate the energy efficiency of the current IEEE 802.11 power saving mechanism (PSM) when IP paging protocol is deployed over IEEE 802.11 networks. The results reveal that the current IEEE 802.11 PSM with a fixed wake-up interval (i.e., the static PSM) exhibits degraded performance when it is integrated with IP paging protocol. Therefore, we propose an adaptive power saving mechanism for IEEE 802.11-based wireless IP networks. Unlike the static PSM, the adaptive PSM adjusts the wake-up interval adaptively depending on the session activity at the IP layer. Specifically, the MH estimates the idle periods for incoming sessions based on the exponentially weighted moving average (EWMA) scheme and sets its wake-up interval dynamically by considering the estimated idle period and the paging delay bound. For performance evaluation, we have conducted comprehensive simulations and compared the total cost and energy consumption incurred by IP paging protocol in conjunction with various power saving mechanisms: the static PSM, the adaptive PSM, and the optimum PSM. Simulation results show that the adaptive PSM performs closer to the optimum PSM than the static PSM does.
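The core of the adaptive PSM described above is a two-step rule: EWMA-update the idle-period estimate, then cap the wake-up interval by the paging delay bound. A minimal sketch (function names and the α = 0.125 default are illustrative assumptions):

```python
def update_idle_estimate(estimate, observed_idle, alpha=0.125):
    # EWMA update of the idle-period estimate for incoming sessions
    return (1 - alpha) * estimate + alpha * observed_idle

def wakeup_interval(idle_estimate, paging_delay_bound):
    # the wake-up interval tracks the idle estimate but never exceeds
    # the paging delay bound
    return min(idle_estimate, paging_delay_bound)
```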

A Clustering-Based Fault Detection Method for Steam Boiler Tube in Thermal Power Plant

  • Yu, Jungwon;Jang, Jaeyel;Yoo, Jaeyeong;Park, June Ho;Kim, Sungshin
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.4
    • /
    • pp.848-859
    • /
    • 2016
  • System failures in thermal power plants (TPPs) can lead to serious losses because the equipment is operated under very high pressure and temperature. Therefore, it is indispensable for alarm systems to inform field workers in advance of any abnormal operating conditions in the equipment. In this paper, we propose a clustering-based fault detection method for steam boiler tubes in TPPs. For data clustering, the k-means algorithm is employed and the number of clusters is systematically determined by the slope statistic. In the clustering-based method, it is assumed that normal data samples are close to the centers of clusters and abnormal samples are far from the centers. After partitioning training samples collected from normal target systems, fault scores (FSs) are assigned to unseen samples according to the distances between the samples and their closest cluster centroids. Alarm signals are generated if the FSs exceed predefined threshold values. The validity of the exponentially weighted moving average for reducing false alarms is also investigated. To verify the performance, the proposed method is applied to failure cases due to boiler tube leakage. The experimental results show that the proposed method can detect the abnormal conditions of the target system successfully.
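The fault-scoring and EWMA-smoothing steps described above can be sketched as follows (a minimal illustration, assuming centroids already fitted by k-means; function names and λ = 0.2 are assumptions, not the paper's code):

```python
import numpy as np

def fault_scores(samples, centroids):
    # FS = distance from each sample to its nearest cluster centroid
    samples = np.atleast_2d(samples)
    centroids = np.atleast_2d(centroids)
    dists = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return dists.min(axis=1)

def ewma_smooth(scores, lam=0.2):
    # EWMA smoothing of the fault-score series to suppress isolated spikes
    # that would otherwise trigger false alarms
    out = np.empty(len(scores))
    out[0] = scores[0]
    for t in range(1, len(scores)):
        out[t] = (1 - lam) * out[t - 1] + lam * scores[t]
    return out
```

An alarm is then raised whenever the smoothed score exceeds a predefined threshold.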

Average run length calculation of the EWMA control chart using the first passage time of the Markov process (Markov 과정의 최초통과시간을 이용한 지수가중 이동평균 관리도의 평균런길이의 계산)

  • Park, Changsoon
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.1
    • /
    • pp.1-12
    • /
    • 2017
  • Many stochastic processes satisfy the Markov property exactly, or at least approximately. A property of interest in the Markov process is the first passage time. Since Wald's sequential analysis, approximation of the first passage time has been studied extensively. Statistical computing techniques, enabled by the development of high-speed computers, have made it possible to calculate values of these properties close to the true ones. This article introduces the exponentially weighted moving average (EWMA) control chart as an example of a Markov process, and studies how to calculate the average run length, noting issues that require caution for correct calculation. The results derived here for approximating the first passage time can be applied to any Markov process. In particular, the approximation of the continuous-time Markov process by a discrete-time Markov chain is useful for studying the properties of the stochastic process and makes computational approaches easy.

An Economic Design of the Integrated Process Control Procedure with Repeated Adjustments and EWMA Monitoring

  • Park Changsoon;Jeong Yoonjoon
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2004.11a
    • /
    • pp.179-184
    • /
    • 2004
  • Statistical process control (SPC) and engineering process control (EPC) are based on different strategies for process quality improvement. SPC reduces process variability by detecting and eliminating special causes of process variation, while EPC reduces process variability by adjusting compensatory variables to keep the quality variable close to target. Recently there has been a need for an integrated process control (IPC) procedure that combines the two strategies. This article considers a scheme that simultaneously applies SPC and EPC techniques to reduce the variation of a process. The process disturbance model under consideration is an IMA(1,1) model with a location shift. The EPC part of the scheme adjusts the process, while the SPC part detects the occurrence of a special cause. For adjusting the process, repeated adjustment is applied by compensating the predicted deviation from target. For detecting special causes, two kinds of exponentially weighted moving average (EWMA) control charts are applied to the observed deviations: one for detecting a location shift and the other for detecting an increase in variability. It is assumed that adjusting the process in the presence of a special cause may change any of the process parameters as well as the system gain. The effectiveness of the IPC scheme is evaluated in terms of the average cost per unit time (ACU) during operation of the scheme. One major objective of this article is to investigate the effects of the process parameters on the ACU. Another is to give a practical guide for efficient selection of the parameters of the two EWMA control charts.
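The two-chart SPC side of the IPC scheme can be sketched as a pair of EWMA recursions on the observed deviations, one on the raw deviation (location) and one on the squared deviation (variability). This is a minimal illustration; the smoothing weight and control limits below are assumed values, not the paper's economic design:

```python
def ipc_signal(deviations, lam=0.2, h_loc=0.7, h_var=2.5):
    """Run two EWMA charts on the observed deviations from target.
    Returns the index of the first out-of-control signal, or None."""
    z_loc, z_var = 0.0, 1.0   # spread chart starts at in-control variance 1
    for t, e in enumerate(deviations):
        z_loc = (1 - lam) * z_loc + lam * e       # location-shift chart
        z_var = (1 - lam) * z_var + lam * e * e   # variability chart
        if abs(z_loc) > h_loc or z_var > h_var:
            return t
    return None
```

In the full scheme, each observation would also feed the EPC repeated-adjustment rule before the deviation is charted.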


A Selectively Cumulative Sum (S-CUSUM) Control Chart with Variable Sampling Intervals (VSI) (가변 샘플링 간격(VSI)을 갖는 선택적 누적합 (S-CUSUM) 관리도)

  • Im, Tae-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2006.11a
    • /
    • pp.560-570
    • /
    • 2006
  • This paper proposes a selectively cumulative sum (S-CUSUM) control chart with variable sampling intervals (VSI) for detecting shifts in the process mean. The basic idea of the VSI S-CUSUM chart is to adjust sampling intervals and to accumulate previous samples selectively in order to increase sensitivity. The VSI S-CUSUM chart employs a threshold limit to determine whether to increase the sampling rate and whether to accumulate previous samples. If a standardized control statistic falls outside the threshold limit, the next sample is taken at the higher sampling rate and is accumulated to calculate the next control statistic. If the control statistic falls within the threshold limit, the next sample is taken at the lower sampling rate and only that sample is used to compute the control statistic. The VSI S-CUSUM chart produces an 'out-of-control' signal either when any control statistic falls outside the control limit or when L consecutive control statistics fall outside the threshold limit. The number L is a decision variable called the 'control length'. A Markov chain model is employed to describe the VSI S-CUSUM sampling process. Some useful formulae related to the steady-state average time-to-signal (ATS) for the in-control and out-of-control states are derived in closed form. A statistical design procedure for the VSI S-CUSUM chart is proposed. Comparative studies show that the proposed VSI S-CUSUM chart is uniformly superior to the VSI CUSUM chart and to the exponentially weighted moving average (EWMA) chart with respect to ATS performance.
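The selective-accumulation and run-rule logic described above (ignoring the variable-interval timing) can be sketched as follows; the threshold, control limit, and L values below are illustrative assumptions, not the paper's design:

```python
def s_cusum_signal(z, threshold=1.0, limit=3.0, L=3):
    """Selective CUSUM sketch on standardized statistics: the previous
    statistic is carried forward only if it fell outside the threshold
    limit; signal when a statistic exceeds the control limit or when L
    consecutive statistics fall outside the threshold limit."""
    stat, run = 0.0, 0
    for t, x in enumerate(z):
        stat = stat + x if abs(stat) > threshold else x
        if abs(stat) > limit:
            return t                       # control-limit signal
        run = run + 1 if abs(stat) > threshold else 0
        if run >= L:
            return t                       # control-length signal
    return None
```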


Echo Canceller with Improved Performance in Noisy Environments (잡음에 강인한 반향 제거기 연구)

  • Lee, Se-Won;Park, Ho-Jong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.4
    • /
    • pp.261-268
    • /
    • 2003
  • Conventional acoustic echo cancellers using the ES algorithm have a simple structure and fast convergence speed compared with those using the NLMS algorithm, but they are highly vulnerable to external noise because the ES algorithm updates the adaptive filter taps based on the average energy reduction rate of the room impulse response under specific acoustical conditions. To solve this problem, this paper proposes a new update algorithm for an acoustic echo canceller with a stepsize matrix generator. A set of stepsizes is determined based on the residual error energy, which is estimated by two moving-average operators, and applied to the echo canceller in matrix form, resulting in improved convergence speed. Simulations under various noise conditions show that the proposed algorithm improves the robustness of the acoustic echo canceller to external noise.
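For context, the NLMS baseline named in the abstract is a single normalized gradient update per sample. A minimal sketch (the scalar step size here stands in for the paper's stepsize matrix, which is an extension of this rule):

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One normalized-LMS adaptive-filter update:
    e = d - w.x ;  w <- w + mu * e * x / (||x||^2 + eps)."""
    e = d - np.dot(w, x)
    w = w + mu * e * x / (np.dot(x, x) + eps)
    return w, e
```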

Selection of the economically optimal parameters in the EWMA control chart (지수가중이동평균관리도의 경제적 최적모수의 선정)

  • Park, Changsoon;Won, Tae-Yeon
    • The Korean Journal of Applied Statistics
    • /
    • v.9 no.1
    • /
    • pp.91-109
    • /
    • 1996
  • The exponentially weighted moving average (EWMA) control chart has recently been used widely for process monitoring and process adjustment, but there have not been many studies on the selection of its parameters. Design of the control chart can be classified into statistical design and economic design. The purpose of the economic design is to minimize a cost function comprising all the possible costs occurring during the process, given the Type I error probability. In this paper the optimal parameters of the EWMA chart are selected for the economic design as well as for the statistical design. The optimal parameters for the economic design differ significantly from those of the statistical design; in particular, the weight is always larger than that used in the statistical design. In the economic design, we divide the model into a single assignable cause model and a multiple assignable causes model according to the number of assignable causes. In the context of multiple assignable causes, the results show that parameter selection may be misleading when multiple assignable causes exist in practice.


Deep Learning Based Group Synchronization for Networked Immersive Interactions (네트워크 환경에서의 몰입형 상호작용을 위한 딥러닝 기반 그룹 동기화 기법)

  • Lee, Joong-Jae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.373-380
    • /
    • 2022
  • This paper presents a deep learning based group synchronization that supports networked immersive interactions between remote users. The goal of group synchronization is to enable all participants to synchronously interact with others for increasing user presence Most previous methods focus on NTP-based clock synchronization to enhance time accuracy. Moving average filters are used to control media playout time on the synchronization server. As an example, the exponentially weighted moving average(EWMA) would be able to track and estimate accurate playout time if the changes in input data are not significant. However it needs more time to be stable for any given change over time due to codec and system loads or fluctuations in network status. To tackle this problem, this work proposes the Deep Group Synchronization(DeepGroupSync), a group synchronization based on deep learning that models important features from the data. This model consists of two Gated Recurrent Unit(GRU) layers and one fully-connected layer, which predicts an optimal playout time by utilizing the sequential playout delays. The experiments are conducted with an existing method that uses the EWMA and the proposed method that uses the DeepGroupSync. The results show that the proposed method are more robust against unpredictable or rapid network condition changes than the existing method.