• Title/Abstract/Keywords: mean and variance


로버스트 변수모형의 비선형 목표계획법 접근방법 (Nonlinear Goal Programming Approach for Robust Parameter Experiments)

  • 이상헌
    • 한국국방경영분석학회지
    • /
    • Vol. 28, No. 1
    • /
    • pp.47-66
    • /
    • 2002
  • Instead of using the signal-to-noise ratio, we attempt to optimize both the mean and variance responses using a dual response optimization technique. The alternative experimental strategy analyzes a robust parameter design problem to obtain the best settings that keep the mean at its target condition while minimizing its variance. The mean and variance are treated as the two responses of interest to be optimized. Unlike the crossed array and combined array approaches, our experimental setup requires replicated runs for each control-factor treatment under noise sampling. When the postulated response models are true, they enable the coefficients to be estimated and the desired performance measure to be analyzed more efficiently. The procedure and an illustrative example are given for dual response optimization via nonlinear goal programming.
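
As a rough illustration of the dual response idea above (not the paper's nonlinear goal programming code), the sketch below keeps a fitted mean response on target while minimizing a fitted variance response; the quadratic coefficients, factor bounds, and target value are hypothetical.

```python
# Illustrative dual response optimization: hold the fitted mean response on a
# target while minimizing the fitted variance response.  All coefficients and
# the target are made-up stand-ins for fitted response-surface models.
import numpy as np
from scipy.optimize import minimize

def mean_surface(x):                      # hypothetical fitted mean model
    return 50.0 + 2.0 * x[0] - 1.5 * x[1] + 0.8 * x[0] * x[1]

def var_surface(x):                       # hypothetical fitted variance model
    return 4.0 + 1.2 * x[0] ** 2 + 0.9 * x[1] ** 2 - 0.5 * x[0] * x[1]

target = 52.0                             # desired target for the mean

res = minimize(
    var_surface,                          # primary goal: minimize variance
    x0=np.zeros(2),
    constraints=[{"type": "eq",           # secondary goal: mean on target
                  "fun": lambda x: mean_surface(x) - target}],
    bounds=[(-2.0, 2.0), (-2.0, 2.0)],    # coded factor region
)
print(res.x, mean_surface(res.x), var_surface(res.x))
```

A goal programming formulation would instead minimize weighted deviations from the two goals, but the trade-off it captures is the same.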

Efficient Use of Auxiliary Variables in Estimating Finite Population Variance in Two-Phase Sampling

  • Singh, Housila P.;Singh, Sarjinder;Kim, Jong-Min
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 17, No. 2
    • /
    • pp.165-181
    • /
    • 2010
  • This paper presents some chain ratio-type estimators for estimating the finite population variance using two auxiliary variables in a two-phase sampling setup. Expressions for the biases and mean squared errors of the suggested classes of estimators are given. Asymptotically optimum estimators (AOEs) in each class are identified together with their approximate mean squared error formulae. The theoretical and empirical properties of the suggested classes of estimators are investigated. In the simulation study, we use a real dataset on pulmonary disease available on the CD accompanying the book by Rosner (2005).
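
For orientation only (the chain-type classes proposed above are more general and use two auxiliary variables), the basic ratio-type estimator of the finite population variance in two-phase sampling that such classes extend is $\hat{S}^2_{yR} = s^2_y \, s'^2_x / s^2_x$, where $s'^2_x$ is the first-phase sample variance of the auxiliary variable and $s^2_y$, $s^2_x$ are the second-phase sample variances of the study and auxiliary variables.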

이중추출에서 모평균 추정 (Mean Estimation in Two-phase Sampling)

  • 김규성;김진석;이선순
    • 응용통계연구
    • /
    • Vol. 14, No. 1
    • /
    • pp.13-24
    • /
    • 2001
  • We review methods for estimating the population mean in two-phase (double) sampling. For the widely used ratio estimator, the regression estimator, and the stratified mean under proportional and Rao allocation, we examine the optimal sample sizes, the minimum variance, and variance estimators for a given expected cost. We also propose a combined ratio estimator that incorporates both the ratio-estimation and stratification effects, derive its optimal sample sizes and minimum variance for a given expected cost, and obtain its variance estimator. A limited simulation study compares the efficiency of the ratio estimator, the stratified mean, and the combined ratio estimator. The simulation shows that the relative efficiency of the ratio estimator and the stratified mean varies from case to case, whereas the combined ratio estimator is generally more efficient than either, indicating that it can be a useful estimator in two-phase sampling.
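
A minimal numerical sketch of the classical two-phase ratio estimator of the mean that this entry builds on (the combined ratio estimator proposed in the paper additionally folds in stratification); the population and sample sizes below are invented for illustration.

```python
# Two-phase (double) sampling ratio estimator of the population mean:
# phase 1 observes only the auxiliary variable x, phase 2 subsamples
# both x and y.  This is the textbook estimator; the paper's combined
# ratio estimator additionally incorporates stratification.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000                                   # population size (illustrative)
x = rng.gamma(shape=4.0, scale=5.0, size=N)  # auxiliary variable
y = 2.0 * x + rng.normal(0.0, 5.0, size=N)   # study variable, correlated with x

n1, n2 = 1_000, 200                          # phase-1 and phase-2 sample sizes
phase1 = rng.choice(N, size=n1, replace=False)
phase2 = rng.choice(phase1, size=n2, replace=False)

xbar1 = x[phase1].mean()                     # cheap auxiliary mean from phase 1
xbar2, ybar2 = x[phase2].mean(), y[phase2].mean()

ratio_estimate = ybar2 * (xbar1 / xbar2)     # two-phase ratio estimator of E[y]
print(ratio_estimate, y.mean())              # compare with the true mean
```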


Optimal Portfolio Models for an Inefficient Market

  • GINTING, Josep;GINTING, Neshia Wilhelmina;PUTRI, Leonita;NIDAR, Sulaeman Rahman
    • The Journal of Asian Finance, Economics and Business
    • /
    • Vol. 8, No. 2
    • /
    • pp.57-64
    • /
    • 2021
  • This research formulates a new mean-risk model intended to replace the Markowitz mean-variance model, using an ARCH variance instead of the ordinary sample variance as the risk measure. The portfolio is built from closing prices of the Indonesia Composite Stock Index and the Indonesia Composite Bond Index from 2013 to 2018, using secondary data from the Indonesia Stock Exchange and the Indonesia Bond Pricing Agency. The study finds that Markowitz's model remains superior for daily data, whereas the mean-ARCH model is more appropriate for data with wider gaps between observations, such as monthly data. The portfolio combinations produced are inefficient because of market inefficiency, indicated by meager stock returns that nevertheless carry notable standard deviations. For this reason, the study proposes replacing the risk-free rate with the historical return as the benchmark for selecting an optimal portfolio; in an inefficient market the historical return proves to be the more realistic benchmark.
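
A small sketch of the mean-ARCH idea described above (not the authors' procedure): compute an ARCH(1) conditional variance for a return series and use it, instead of the unconditional sample variance, as the risk term in a simple mean-risk comparison. The return series and ARCH parameters are hypothetical.

```python
# Sketch of the "mean-ARCH" idea: replace the unconditional sample variance
# in a Markowitz-style trade-off with an ARCH(1) conditional variance.
# The ARCH parameters below are assumed, not estimated from the paper's data.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=500)    # stand-in for index returns

omega, alpha = 5e-5, 0.3                        # assumed ARCH(1) parameters
h = np.empty_like(returns)
h[0] = returns.var()
for t in range(1, len(returns)):
    h[t] = omega + alpha * returns[t - 1] ** 2  # ARCH(1) recursion

mean_ret = returns.mean()
arch_risk = h[-1]                               # latest conditional variance
sample_risk = returns.var()

# Mean-risk score (higher is better); the historical mean return is used as
# the benchmark instead of a risk-free rate, as the abstract suggests.
print("mean/ARCH-risk  :", mean_ret / np.sqrt(arch_risk))
print("mean/sample-risk:", mean_ret / np.sqrt(sample_risk))
```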

존슨 시스템에 의한 비정규 공정능력의 평가 (Evaluation of Non-Normal Process Capability by Johnson System)

  • 김진수;김홍준
    • 대한안전경영과학회지
    • /
    • Vol. 3, No. 3
    • /
    • pp.175-190
    • /
    • 2001
  • We propose a new process capability index $C_{psk}$(WV) that applies the weighted variance control charting method to non-normally distributed processes. The main idea of the weighted variance method (WVM) is to divide a skewed or asymmetric distribution at its mean into two normal distributions that share the same mean but have different standard deviations. Using a distribution generated from the Johnson family of distributions as an example, we demonstrate how the weighted variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and the Wright methods. The example shows that the weighted variance-based indices are more consistent than the other two methods in terms of sensitivity to departure of the process mean/median from the target value for non-normal processes. A second comparison uses the percentage nonconforming computed with the Pearson, Johnson, and Burr systems; it shows little difference between the Pearson and Burr systems, while the Johnson system underestimates process capability relative to the other two.
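
A rough sketch of a weighted-variance capability calculation in the spirit of the abstract (not the proposed $C_{psk}$(WV), which also carries a skewness penalty). It uses one common form of the weighted-variance adjustment, $\sigma_U = \sigma\sqrt{2P_x}$ and $\sigma_L = \sigma\sqrt{2(1-P_x)}$ with $P_x = P(X \le \bar{x})$; the data-generating distribution and specification limits are hypothetical.

```python
# Weighted-variance (WV) style capability sketch: the skewed distribution is
# treated as two normals with a common mean but different standard deviations,
# one for each side of the mean.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.4, size=5_000)   # skewed process data

mu = x.mean()
sigma = x.std(ddof=1)
px = np.mean(x <= mu)                        # P(X <= mean); > 0.5 for right skew

sigma_u = sigma * np.sqrt(2.0 * px)          # std dev of the upper-side normal
sigma_l = sigma * np.sqrt(2.0 * (1.0 - px))  # std dev of the lower-side normal

USL, LSL = 7.0, 0.5                          # hypothetical specification limits
cpu = (USL - mu) / (3.0 * sigma_u)
cpl = (mu - LSL) / (3.0 * sigma_l)
print("WV-based Cpk:", min(cpu, cpl))
```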


비정규 공정능력 측도에 관한 연구 (A Study on a Measure for Non-Normal Process Capability)

  • 김홍준;김진수;조남호
    • 한국신뢰성학회:학술대회논문집
    • /
    • Korean Reliability Society 2001 Annual Conference
    • /
    • pp.311-319
    • /
    • 2001
  • All capability indices now in common use assume normally distributed data, and applying them to non-normal data yields inaccurate capability measurements. We therefore propose $C_{s}$, which extends the most useful index to date, the Pearn-Kotz-Johnson $C_{pmk}$, by not only accounting for the fact that the process mean may not lie midway between the specification limits and penalizing deviation of the mean from its target, but also incorporating a penalty for skewness. We further propose a new process capability index $C_{psk}$(WV) that applies the weighted variance control charting method to non-normally distributed processes. The main idea of the weighted variance method (WVM) is to divide a skewed or asymmetric distribution at its mean into two normal distributions that share the same mean but have different standard deviations. Using a distribution generated from the Johnson family of distributions as an example, we demonstrate how the weighted variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and the Wright methods. The example shows that the weighted variance-based indices are more consistent than the other two methods in terms of sensitivity to departure of the process mean/median from the target value for non-normal processes.


Stationary bootstrapping for structural break tests for a heterogeneous autoregressive model

  • Hwang, Eunju;Shin, Dong Wan
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 24, No. 4
    • /
    • pp.367-382
    • /
    • 2017
  • We consider an infinite-order long-memory heterogeneous autoregressive (HAR) model, motivated by the long-memory property of realized volatilities (RVs), as an extension of the finite-order HAR-RV model. We develop bootstrap tests for structural mean or variance changes in the infinite-order HAR model via stationary bootstrapping. A functional central limit theorem is proved for the stationary bootstrap sample, which enables us to develop stationary bootstrap cumulative sum (CUSUM) tests: a bootstrap test for a mean break and a bootstrap test for a variance break. Consistency of the bootstrap null distributions of the CUSUM tests is proved, and consistency of the bootstrap CUSUM tests is also established under alternative hypotheses of mean or variance changes. A Monte Carlo simulation shows that stationary bootstrapping improves the sizes of existing tests.
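
A compact sketch (not the authors' implementation) of how a stationary bootstrap CUSUM test for a mean break can be organized: resample with geometric block lengths, recompute the CUSUM statistic on each bootstrap series, and compare it with the observed value. The block parameter and the toy AR(1) data are assumptions.

```python
# Stationary-bootstrap CUSUM sketch: blocks of geometric length are pasted
# together (circularly) to preserve dependence, and the CUSUM statistic is
# recomputed on each bootstrap series.
import numpy as np

def cusum_stat(x):
    n = len(x)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / (x.std(ddof=1) * np.sqrt(n))

def stationary_bootstrap(x, p, rng):
    n = len(x)
    out, i = np.empty(n), rng.integers(n)
    for t in range(n):
        out[t] = x[i]
        # start a new block with probability p, otherwise continue circularly
        i = rng.integers(n) if rng.random() < p else (i + 1) % n
    return out

rng = np.random.default_rng(3)
x = np.empty(400)
x[0] = 0.0
for t in range(1, 400):                      # toy AR(1) data under the null
    x[t] = 0.5 * x[t - 1] + rng.normal()

obs = cusum_stat(x)
boot = np.array([cusum_stat(stationary_bootstrap(x, p=0.1, rng=rng))
                 for _ in range(499)])
print("bootstrap p-value:", np.mean(boot >= obs))
```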

평균-분산 최적화 모형을 이용한 로버스트 선박운항 일정계획 (A Robust Ship Scheduling Based on Mean-Variance Optimization Model)

  • 박나래;김시화
    • 한국경영과학회지
    • /
    • Vol. 41, No. 2
    • /
    • pp.129-139
    • /
    • 2016
  • This paper presents a robust ship scheduling model based on a quadratic programming formulation. Given a set of available carriers under control and a set of cargoes to be transported from origin to destination, a robust ship schedule can be modeled that minimizes a mean-variance objective function subject to a required level of profit. Computational experiments on relevant maritime transportation problems are performed with randomly generated configurations of tanker scheduling in the bulk trade. In the first stage, the optimal transportation problem of maximizing revenue is solved through the traditional set-packing model that includes all feasible schedules for each carrier. In the second stage, the robust ship scheduling problem is formulated as the quadratic program described above. A single-index model is used to calculate the variance-covariance matrix of the objective function efficiently. Significant results are reported, validating that the proposed model can be used in ship scheduling decision problems once robustness and the required level of profit are taken into account.
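
A small sketch of the single-index covariance construction mentioned above, which keeps the quadratic (mean-variance) objective tractable; the betas, variances, expected profits, and weights are hypothetical numbers, not taken from the paper.

```python
# Single-index covariance construction for a mean-variance objective:
# Cov = var(index) * beta beta' + diag(residual variances).
import numpy as np

beta = np.array([0.8, 1.1, 0.6, 1.3])     # schedule sensitivities to the index
resid_var = np.array([0.02, 0.03, 0.015, 0.04])
market_var = 0.05

cov = market_var * np.outer(beta, beta) + np.diag(resid_var)

mu = np.array([0.10, 0.14, 0.08, 0.16])   # expected profit of each schedule
w = np.full(4, 0.25)                      # equal weights on candidate schedules

expected_profit = mu @ w
variance = w @ cov @ w                    # quadratic (risk) term of the model
print(expected_profit, variance)
```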

통계계산에서의 갱신 알고리즘에 관한 연구 (Updating algorithms in statistical computations)

  • 전홍석
    • 응용통계연구
    • /
    • Vol. 5, No. 2
    • /
    • pp.283-292
    • /
    • 1992
  • With the rapid spread of personal computers, they are now widely used for statistical analysis of data. Because computer hardware improves by the day, memory capacity and processing speed are rarely an obstacle to analyzing reasonably large data sets. When data arrive sequentially, however, having to re-read the entire data set every time a statistic is recomputed is cumbersome and an obvious waste of memory. Updating algorithms are an attempt to solve this problem on the software side. This study surveys updating algorithms for several statistics and clarifies their properties, so that large amounts of data can be analyzed even on small and personal computers.
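
The abstract does not list the specific statistics it covers; a standard example of such an updating algorithm is Welford's one-pass recurrence for the running mean and variance, sketched below.

```python
# A standard example of an updating algorithm of the kind discussed above:
# Welford's one-pass recurrence updates the running mean and sum of squared
# deviations as each observation arrives, so the data never need re-reading.
# (The specific algorithms studied in the paper may differ.)
def update(count, mean, m2, x):
    """Fold one new observation x into (count, mean, m2)."""
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)          # running sum of squared deviations
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    count, mean, m2 = update(count, mean, m2, x)

variance = m2 / (count - 1)           # sample variance, no second pass needed
print(mean, variance)                 # mean 5.0, sample variance ~ 4.571
```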


심층신경망 기반의 음성인식을 위한 절충된 특징 정규화 방식 (Compromised feature normalization method for deep neural network based speech recognition)

  • 김민식;김형순
    • 말소리와 음성과학
    • /
    • Vol. 12, No. 3
    • /
    • pp.65-71
    • /
    • 2020
  • Feature normalization reduces the effect of environmental mismatch between training and test conditions by normalizing the statistical characteristics of speech feature parameters, and it has yielded substantial performance improvements in conventional Gaussian mixture model-hidden Markov model (GMM-HMM) based speech recognition systems. In deep neural network (DNN) based speech recognition systems, however, minimizing the effect of environmental mismatch does not necessarily lead to the best performance improvement. In this paper, we attribute this to the information loss caused by excessive feature normalization, and we examine whether a feature normalization scheme exists that maximizes recognition performance by moderately reducing the environmental mismatch while preserving the information useful for training the acoustic model. To this end, we introduce mean and exponentiated variance normalization (MEVN), a compromise between mean normalization (MN) and mean and variance normalization (MVN), and compare the performance of a DNN-based speech recognition system in noisy and reverberant environments as the degree of variance normalization varies. Experimental results show that, although the performance gain is not large, MEVN outperforms both MN and MVN depending on the degree of variance normalization.
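
A sketch of the compromise between MN and MVN described above. The parameterization, dividing by the standard deviation raised to an exponent gamma so that gamma = 0 gives MN and gamma = 1 gives MVN, is our reading of "exponentiated variance normalization", not code from the paper.

```python
# Compromise feature normalization: normalize the mean fully and the standard
# deviation only partially, controlled by the exponent gamma.
import numpy as np

def mevn(features, gamma=0.5, eps=1e-8):
    """Per-utterance normalization of a (frames x dims) feature matrix."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + eps
    return (features - mu) / sigma ** gamma

rng = np.random.default_rng(4)
feats = rng.normal(3.0, 2.0, size=(200, 13))   # stand-in for MFCC features

mn_out   = mevn(feats, gamma=0.0)   # mean normalization only
mvn_out  = mevn(feats, gamma=1.0)   # full mean and variance normalization
mevn_out = mevn(feats, gamma=0.5)   # the compromise studied in the paper
print(mevn_out.mean(), mevn_out.std())
```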