• Title/Summary/Keyword: Variance based method

Effect Analysis of Sample Size and Sampling Periods on Accuracy of Reliability Estimation Methods for One-shot Systems using Multiple Comparisons (다중비교를 이용한 샘플수와 샘플링 시점수의 원샷 시스템 신뢰도 추정방법 정확성에 대한 영향 분석)

  • Son, Young-Kap
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.4 / pp.435-441 / 2012
  • This paper provides simulation-based results of an effect analysis of sample size and sampling periods on the accuracy of reliability estimation methods, using multiple comparisons with analysis of variance. Sums of squared errors in the estimated reliability measures were evaluated by applying seven estimation methods for one-shot systems to simulated quantal-response data. Analysis of variance was used to investigate how these errors change with sample size and sampling periods for each estimation method, and the effect on estimation accuracy was then analyzed through multiple comparisons across sample sizes and sampling periods. From these results, the paper proposes an efficient way to allocate both sample size and sampling periods in reliability estimation tests of one-shot systems.
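
As a rough illustration of this workflow, the sketch below runs a two-way ANOVA and Tukey multiple comparisons on synthetic sums of squared errors; the factor levels, error model, and libraries are assumptions, not the paper's setup.

```python
# Hedged sketch: two-way ANOVA plus Tukey's HSD over simulated SSE values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
rows = []
for n in (20, 50, 100):              # hypothetical sample-size levels
    for k in (5, 10, 20):            # hypothetical numbers of sampling periods
        for _ in range(30):          # simulation replications
            sse = (1.0 / (n * k)) * rng.lognormal(0.0, 0.2)  # placeholder error model
            rows.append({"n": n, "k": k, "sse": sse})
df = pd.DataFrame(rows)

# Two-way ANOVA: do the SSEs change with sample size and sampling periods?
model = ols("sse ~ C(n) + C(k) + C(n):C(k)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Multiple comparisons across sample-size levels
print(pairwise_tukeyhsd(df["sse"], df["n"].astype(str)))
```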

Course Variance Clustering for Traffic Route Waypoint Extraction

  • Onyango Shem Otoi
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2022.06a / pp.277-279 / 2022
  • The rapid development and adoption of AIS as a surveillance tool has resulted in the widespread application of data-analysis technology, including AIS ship-trajectory clustering. AIS data-based clustering has become an increasingly popular method for marine traffic pattern recognition, ship route prediction, and anomaly detection in recent years. In this paper we propose route waypoint extraction by clustering ships' CoG-variance trajectories using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm in both port approach channels and coastal waters. The algorithm discovers route waypoints effectively. The results of the study could be used in traffic route extraction and, further, to develop a maritime anomaly detection tool.
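
A minimal sketch of this kind of pipeline is shown below, assuming a hypothetical AIS table with mmsi/lat/lon/cog columns; the window size, variance threshold, and DBSCAN parameters are illustrative guesses, not the authors' settings.

```python
# Hedged sketch: DBSCAN over high CoG-variance AIS points as waypoint candidates.
import pandas as pd
from sklearn.cluster import DBSCAN

ais = pd.read_csv("ais_tracks.csv")  # hypothetical file: mmsi, lat, lon, cog

# Rolling CoG variance per ship as a turning-behaviour signal
# (note: ignores the 0/360 degree wrap-around for brevity)
ais["cog_var"] = (
    ais.groupby("mmsi")["cog"]
       .transform(lambda s: s.rolling(10, min_periods=5).var())
)
turning = ais[ais["cog_var"] > ais["cog_var"].quantile(0.90)].dropna()

# Cluster positions of high-variance points (eps in degrees; tune per area)
labels = DBSCAN(eps=0.01, min_samples=20).fit_predict(turning[["lat", "lon"]])
turning = turning.assign(cluster=labels)

# Centroid of each dense turning region = candidate route waypoint
waypoints = (
    turning[turning["cluster"] >= 0]
    .groupby("cluster")[["lat", "lon"]]
    .mean()
)
print(waypoints)
```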

Subsurface Characterization using the Simultaneous Search based Pilot Point Method (SSBM) in Various Data Applications (지하수 흐름특성 분석을 위한 동시 검색기반 파일럿 포인트 방법 적용 - 다양한 데이터 활용 기반)

  • Jung, Yong
    • KSCE Journal of Civil and Environmental Engineering Research / v.39 no.5 / pp.579-586 / 2019
  • The Pilot Point Method (PPM) is one of the popular methods for estimating hydraulic conductivities in the inverse modeling of groundwater flow equations. In this study, the Simultaneous Search based Pilot Point Method (SSBM) was applied with diverse information (e.g., hydraulic heads and/or tracer concentrations), extending the previously developed sensitivity-based Pilot Point Method (e.g., the D-optimality based Pilot Point Method, DBM). In the case of DBM, because the variance is minimized, tracer concentration can serve as a tool to control the search space of hydraulic conductivities. SSBM shortened the hydraulic conductivity search procedure, though it produced more variance while exploring hydraulic conductivities. In addition, the hydraulic conductivities SSBM finally arrived at depended on the initial hydraulic conductivity values. When tracer concentrations were applied, the search for hydraulic conductivities performed better than when hydraulic heads alone were applied. Using various types of data in the search for hydraulic conductivities is therefore recommended as a more efficient approach.
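
The sketch below calibrates pilot-point log-conductivities simultaneously on a deliberately tiny 1-D steady-flow problem; the grid, solver, observation wells, and the omission of tracer data are all simplifications relative to the SSBM described here.

```python
# Hedged sketch: simultaneous least-squares search over all pilot-point values.
import numpy as np
from scipy.optimize import least_squares

n = 50                                    # grid cells
xp = np.array([10, 22, 34, 46])           # pilot-point cell indices (assumed)
x = np.arange(n)

def k_field(log_k_pilots):
    # interpolate log-K from the pilot points across the whole grid
    return np.exp(np.interp(x, xp, log_k_pilots))

def heads(k):
    # steady 1-D flow, fixed heads h[0]=1 and h[-1]=0, harmonic-mean conductances
    kc = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = kc[i - 1], -(kc[i - 1] + kc[i]), kc[i]
    return np.linalg.solve(A, b)

# synthetic "truth" and head observations at a few wells
true_logk = np.array([0.0, 1.0, -0.5, 0.5])
obs_idx = np.array([4, 12, 20, 28, 36, 44])
h_obs = heads(k_field(true_logk))[obs_idx]

# all pilot-point values are adjusted in a single least-squares run
def residuals(log_k_pilots):
    return heads(k_field(log_k_pilots))[obs_idx] - h_obs

fit = least_squares(residuals, x0=np.zeros(len(xp)))
print("recovered log-K at pilot points:", np.round(fit.x, 2))
```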

Development of a method of the data generation with maintaining quantile of the sample data

  • Joohyung Lee;Young-Oh Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.244-244 / 2023
  • Both the frequency and the magnitude of hydrometeorological extreme events such as severe floods and droughts are increasing. To prevent damage from climatic disasters, hydrological models are often run under various meteorological conditions. In these simulations, synthetic data generated by time series models that maintain the key statistical characteristics of the sample data are widely applied. However, while synthetic data can easily maintain both the average and the variance of the sample data, the quantiles are not maintained well. In this study, we propose a data generation method that maintains the quantiles of the sample data well. The equations of the maintenance of variance extension (MOVE) are expanded to maintain quantiles rather than the average or the variance of the sample data. The equations are derived, and the coefficients are determined from the characteristics of the sample data to be preserved. A Monte Carlo simulation is used to assess the performance of the proposed method: a time series of length 500 is taken as the sample data, a data set of length 30 is selected from it at random, and the selected data set is expanded from 30 back to 500 using the proposed method. The average, variance, and quantile differences between the sample data and the expanded data are then evaluated with the relative root mean square error for each simulation. The simulations show that each equation designed to maintain a given characteristic of the data performs well, and that the expanded data preserve the quantiles of the sample data more precisely than data expanded through a conventional time series model.
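
The paper's expanded MOVE equations are not reproduced here, but the sketch below contrasts a classic MOVE.1 transfer (which preserves mean and variance) with a simple quantile-mapping expansion in the same spirit; the gamma sample and record lengths mirror the Monte Carlo setup only loosely.

```python
# Hedged sketch: extending a 30-point record to 500 while tracking sample quantiles.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=1.5, size=500)   # stand-in "sample data"
short = rng.choice(sample, size=30, replace=False)   # short selected record

# MOVE.1-style transfer preserves the sample mean and variance
move1 = sample.mean() + (short - short.mean()) * (sample.std(ddof=1) / short.std(ddof=1))
print("MOVE.1 mean/std:", move1.mean().round(2), move1.std(ddof=1).round(2))

# quantile-mapping variant: push bootstrap draws of the short record through
# the sample's empirical quantile function
def quantile_extend(short, sample, size):
    boot = rng.choice(short, size=size, replace=True)            # 30 -> 500 points
    u = np.searchsorted(np.sort(short), boot, side="right") / (len(short) + 1.0)
    return np.quantile(sample, u)

extended = quantile_extend(short, sample, size=500)
for q in (0.1, 0.5, 0.9):
    print(q, np.quantile(sample, q).round(2), np.quantile(extended, q).round(2))
```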

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1633-1649 / 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately take into account the texture or spectrum information, spatial resolution, and statistical information of the images, which leads to spectral distortion. Although traditional solutions combine manifold methods, the fusion procedure is rather complicated and not suitable for practical operation. In this paper, an improved IHS transformation fusion algorithm based on a local variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT (from the French "Système Probatoire d'Observation de la Terre", an Earth observation system) image is calculated using sliding windows of different sizes. The optimal window size is then selected, and the images are normalized with the optimal-window local variance. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. The I' component is then obtained from the weight, the I component, and the matched SPOT images. Finally, the fused image is obtained by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other image fusion methods well known in the literature. Simulation results indicate that it obtains a superior fused image according to quantitative fusion evaluation indices.
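
A compressed sketch of the weighted intensity substitution is given below; the additive (linear) IHS variant, the fixed window size, and the min-max weight normalization are assumptions standing in for the paper's optimal-window selection and power-exponent mapping.

```python
# Hedged sketch: IHS-style fusion with a local-variance-weighted intensity.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, win=7):
    mean = uniform_filter(img, win)
    return uniform_filter(img * img, win) - mean * mean

def fuse_ihs_weighted(rgb, pan, win=7):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                       # intensity (additive IHS variant)
    # match the pan band to the intensity component's mean and std
    pan_m = (pan - pan.mean()) * (i.std() / pan.std()) + i.mean()
    v = local_variance(pan, win)                # local-variance weight in [0, 1]
    w = (v - v.min()) / (v.max() - v.min() + 1e-12)
    i_new = w * pan_m + (1.0 - w) * i           # weighted intensity substitution
    d = i_new - i                               # inverse of the additive model:
    fused = np.stack([r + d, g + d, b + d], -1) # shift each band by (I' - I)
    return np.clip(fused, 0.0, 1.0)

rgb = np.random.rand(64, 64, 3)                 # stand-ins for MS and SPOT pan bands
pan = np.random.rand(64, 64)
print(fuse_ihs_weighted(rgb, pan).shape)        # (64, 64, 3)
```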

A Study on a Measure for Non-Normal Process Capability (비정규 공정능력 측도에 관한 연구)

  • 김홍준;김진수;조남호
    • Proceedings of the Korean Reliability Society Conference / 2001.06a / pp.311-319 / 2001
  • All indices now in use assume normally distributed data, and any use of the indices on non-normal data results in inaccurate capability measurements. Therefore, $C_{psk}$ is proposed, which extends the most useful index to date, the Pearn-Kotz-Johnson $C_{pmk}$, by not only taking into account that the process mean may not lie midway between the specification limits and incorporating a penalty when the mean deviates from its target, but also incorporating a penalty for skewness. We thus propose a new process capability index $C_{psk}$(WV) applying the weighted variance control charting method for non-normally distributed processes. The main idea of the weighted variance method (WVM) is to divide a skewed or asymmetric distribution at its mean into two new distributions which have the same mean but different standard deviations. We present an example using a distribution generated from the Johnson family of distributions to demonstrate how the weighted variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and Wright methods. The example shows that the weighted variance-based indices are more consistent than the other two methods in terms of sensitivity to departure of the process mean/median from the target value for non-normal processes.
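
The sketch below computes weighted-variance standard deviations and a Cpk-style index from them, using the common convention that splits the distribution at its mean with weight P(X <= mean); the paper's $C_{psk}$(WV) adds target-deviation and skewness penalties that are omitted here.

```python
# Hedged sketch: a weighted-variance (WV) capability index for skewed data.
import numpy as np

def wv_capability(x, lsl, usl):
    mu, sigma = x.mean(), x.std(ddof=1)
    p = np.mean(x <= mu)                      # weight: share of data below the mean
    sigma_u = sigma * np.sqrt(2.0 * p)        # upper-side std dev (assumed WV convention)
    sigma_l = sigma * np.sqrt(2.0 * (1 - p))  # lower-side std dev
    cpu = (usl - mu) / (3.0 * sigma_u)
    cpl = (mu - lsl) / (3.0 * sigma_l)
    return min(cpu, cpl)

rng = np.random.default_rng(2)
skewed = rng.lognormal(mean=0.0, sigma=0.4, size=2000)  # right-skewed process
print(round(wv_capability(skewed, lsl=0.3, usl=3.0), 3))
```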

Prediction of Conditional Variance under GARCH Model Based on Bootstrap Methods (붓스트랩 방법을 이용한 일반화 자기회귀 조건부 이분산모형에서의 조건부 분산 예측)

  • Kim, Hee-Young;Park, Man-Sik
    • Communications for Statistical Applications and Methods / v.16 no.2 / pp.287-297 / 2009
  • In the generalized autoregressive conditional heteroscedastic (GARCH) model, likelihood-based estimation of prediction intervals is quite sensitive to the distribution of the errors. Moreover, it is not easy to construct a prediction interval for the conditional variance. Recent studies show that the bootstrap method can be an alternative for solving these problems. In this paper, we introduce the bootstrap approach proposed by Pascual et al. (2006) and apply it to a Korean stock price data set.
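
A shortened sketch of this bootstrap follows, using the `arch` package on a simulated series; note that the full Pascual et al. (2006) procedure also re-estimates the GARCH parameters on every bootstrap replicate, which this version skips.

```python
# Hedged sketch: bootstrap prediction intervals for GARCH(1,1) conditional variance.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)

# simulate a GARCH(1,1) series as a stand-in for the stock-return data
T, omega0, alpha0, beta0 = 1000, 0.05, 0.10, 0.85
y = np.empty(T); s2 = omega0 / (1 - alpha0 - beta0)
for t in range(T):
    y[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega0 + alpha0 * y[t] ** 2 + beta0 * s2

res = arch_model(y, vol="GARCH", p=1, q=1).fit(disp="off")
omega, alpha, beta = res.params[["omega", "alpha[1]", "beta[1]"]]
z = res.std_resid                             # standardized residuals to resample

H, B = 10, 2000                               # forecast horizon, bootstrap reps
paths = np.empty((B, H))
for b in range(B):
    s2, eps = res.conditional_volatility[-1] ** 2, res.resid[-1]
    for h in range(H):
        s2 = omega + alpha * eps ** 2 + beta * s2   # variance recursion
        eps = np.sqrt(s2) * rng.choice(z)           # resampled innovation
        paths[b, h] = s2
lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
print("bootstrap 95% intervals for conditional variance, h = 1..10")
print(np.round(lo, 3)); print(np.round(hi, 3))
```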

Filtering of Filter-Bank Energies for Robust Speech Recognition

  • Jung, Ho-Young
    • ETRI Journal / v.26 no.3 / pp.273-276 / 2004
  • We propose a novel feature processing technique which can provide a cepstral liftering effect in the log-spectral domain. Cepstral liftering aims at equalizing the variance of cepstral coefficients for distance-based speech recognizers and, as a result, provides robustness to additive noise and speaker variability. However, in the popular hidden Markov model based framework, cepstral liftering has no effect on recognition performance. We derive a filtering method in the log-spectral domain corresponding to cepstral liftering. The proposed method performs high-pass filtering based on the decorrelation of filter-bank energies. We show that in noisy speech recognition, the proposed method reduces the error rate by 52.7% relative to conventional features.
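
The core correspondence is easy to demonstrate: multiplying the cepstrum by a lifter is the same as multiplying the log filter-bank energies by the fixed matrix C^T L C. The sketch below verifies this with an orthonormal DCT and a standard sinusoidal lifter (the lifter choice is an assumption, not ETRI's filter).

```python
# Hedged sketch: cepstral liftering expressed as a log-spectral filtering matrix.
import numpy as np
from scipy.fft import dct, idct

M = 23                                        # number of filter-bank channels
rng = np.random.default_rng(4)
logE = rng.random(M)                          # stand-in log filter-bank energies

# sinusoidal lifter commonly used with MFCCs (assumed choice)
lift = 1.0 + 11.0 * np.sin(np.pi * np.arange(M) / 22.0)

# route 1: DCT -> lifter -> inverse DCT
c = dct(logE, type=2, norm="ortho")
logE_lift = idct(lift * c, type=2, norm="ortho")

# route 2: one equivalent matrix applied directly to the filter-bank energies
C = dct(np.eye(M), type=2, norm="ortho", axis=0)   # orthonormal DCT-II matrix
F = C.T @ np.diag(lift) @ C                        # the log-spectral filter
print(np.allclose(logE_lift, F @ logE))            # True
```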

Pattern Selection Using the Bias and Variance of Ensemble (앙상블의 편기와 분산을 이용한 패턴 선택)

  • Shin, Hyunjung;Cho, Sungzoon
    • Journal of Korean Institute of Industrial Engineers / v.28 no.1 / pp.112-127 / 2002
  • A useful pattern is one that contributes much to learning. For a classification problem, patterns near the class boundary surfaces carry more information to the classifier; for a regression problem, those near the estimated surface carry more information. In both cases, usefulness is defined only for patterns either without error or with negligible error. Using only the useful patterns brings several benefits. First, the computational complexity of learning, in both memory and time, is decreased. Second, overfitting is avoided even when the learner is over-sized. Third, learning results in more stable learners. In this paper, we propose a pattern 'utility index' that measures the utility of an individual pattern. The utility index is based on the bias and variance of a pattern trained by a network ensemble. In classification, a pattern with low bias and high variance gets a high score; in regression, on the other hand, one with low bias and low variance gets a high score. Based on the distribution of the utility index, the original training set is divided into a high-score group and a low-score group, and only the high-score group is then used for training. The proposed method is tested on synthetic and real-world benchmark datasets and gives better or at least similar performance.
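
A rough sketch of the selection rule follows, using a bagged MLP ensemble on a synthetic two-class task; the exact utility formula here is an assumption, not the paper's definition.

```python
# Hedged sketch: per-pattern bias/variance from an ensemble, then selection.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
rng = np.random.default_rng(0)

B = 15
probs = np.zeros((B, len(X)))
for b in range(B):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=b)
    net.fit(X[idx], y[idx])
    probs[b] = net.predict_proba(X)[:, 1]

bias = np.abs(probs.mean(axis=0) - y)              # ensemble bias per pattern
var = probs.var(axis=0)                            # ensemble variance per pattern

# classification rule from the abstract: low bias AND high variance score high
utility = (1.0 - bias) * var
keep = utility >= np.quantile(utility, 0.5)        # keep the high-score half
print(f"selected {keep.sum()} of {len(X)} patterns near the class boundary")
```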

Residual-based Robust CUSUM Control Charts for Autocorrelated Processes (자기상관 공정 적용을 위한 잔차 기반 강건 누적합 관리도)

  • Lee, Hyun-Cheol
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.3 / pp.52-61 / 2012
  • Design methods for cumulative sum (CUSUM) control charts that are robust to autoregressive moving average (ARMA) modeling errors have rarely been proposed. This is because the CUSUM statistic involves a maximum function, which is intractable in mathematical derivations, so the statistic itself cannot easily be modified. We propose residual-based robust CUSUM control charts for monitoring autocorrelated processes. To incorporate the effects of ARMA modeling errors into the design, we modify the parameters (reference value and decision interval) of the CUSUM chart using the approximate expected variance of the residuals generated under model uncertainty, rather than modifying the form of the CUSUM statistic directly. The expected variance of the residuals is derived using a second-order Taylor approximation, and its general form is expressed in terms of the ARMA model order and the sample size used for ARMA modeling. Monte Carlo simulation demonstrates that the proposed method yields statistical process control (SPC) charts that are robust to ARMA modeling errors.
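
A toy version of such a chart is sketched below: an AR(1) fit, residual monitoring with a one-sided tabular CUSUM, and a flat inflation factor standing in for the paper's Taylor-approximated expected residual variance (which depends on the ARMA order and the modeling sample size).

```python
# Hedged sketch: residual-based CUSUM with an inflated residual std deviation.
import numpy as np

rng = np.random.default_rng(5)
phi = 0.6
x = np.zeros(500)
for t in range(1, 500):                       # AR(1) process with a late mean shift
    x[t] = phi * x[t - 1] + rng.standard_normal() + (0.8 if t >= 400 else 0.0)

# least-squares AR(1) fit on the first 300 in-control points
phi_hat = np.polyfit(x[:299], x[1:300], 1)[0]
resid = x[1:] - phi_hat * x[:-1]
sigma = resid[:299].std(ddof=1)
sigma_adj = 1.05 * sigma                      # assumed inflation for modeling error

k, h = 0.5 * sigma_adj, 5.0 * sigma_adj       # reference value, decision interval
cplus = 0.0
for t, e in enumerate(resid[299:], start=300):
    cplus = max(0.0, cplus + e - k)           # one-sided upper CUSUM on residuals
    if cplus > h:
        print(f"out-of-control signal at t = {t}")
        break
```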