• Title/Summary/Keyword: relative squared error loss

Search Results: 7

Hierarchical Bayes Estimators of the Error Variance in Two-Way ANOVA Models

  • Chang, In Hong; Kim, Byung Hwee
    • Communications for Statistical Applications and Methods / v.9 no.2 / pp.315-324 / 2002
  • For estimating the error variance under the relative squared error loss in two-way analysis of variance models, we provide a class of hierarchical Bayes estimators and then derive a subclass whose members each dominate the best multiple of the error sum of squares, which is known to be minimax. We also identify a subclass of non-minimax hierarchical Bayes estimators.
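For readers scanning these abstracts, the relative squared error loss that recurs throughout these results can be sketched in a few lines; the function name and the scale-invariance check below are illustrative, not taken from any of the papers.

```python
def relative_squared_error_loss(sigma2, estimate):
    """L(sigma^2, d) = (d / sigma^2 - 1)^2: penalizes the ratio of the
    estimate to the true variance, not their absolute difference."""
    return (estimate / sigma2 - 1.0) ** 2

# The loss is scale-invariant: rescaling the truth and the estimate by the
# same factor leaves it unchanged, which is why it suits variance estimation.
print(relative_squared_error_loss(4.0, 5.0))    # 0.0625
print(relative_squared_error_loss(40.0, 50.0))  # 0.0625: same relative error
print(relative_squared_error_loss(4.0, 4.0))    # 0.0: exact estimate
```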

The Relationship between Expected Relative Loss and Cpm Using a Loss Function (손실함수에 의한 기대상대손실과 Cpm의 관련성)

  • 구본철; 고수철; 김종수
    • Journal of Korean Society of Industrial and Systems Engineering / v.20 no.41 / pp.213-220 / 1997
  • Process capability indices compare the actual performance of a manufacturing process to its desired performance. The relationship between the capability index Cpm and the expected squared error loss provides an intuitive interpretation of Cpm. By putting the loss in relative terms, a user needs only to specify the target and the distance from the target at which the product would have zero worth, or alternatively, the loss at the specification limits. Confidence limits for the expected relative loss are discussed, and a numerical illustration is given.
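The relationship the abstract describes can be sketched with the standard Taguchi definition of Cpm: when the target sits at mid-specification and the loss is scaled to equal 1 at the specification limits, the expected relative loss reduces to 1/(9·Cpm²). The numbers below are illustrative, not the paper's example.

```python
import math

def cpm(usl, lsl, mu, sigma, target):
    """Taguchi capability index: (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))."""
    tau = math.sqrt(sigma ** 2 + (mu - target) ** 2)
    return (usl - lsl) / (6.0 * tau)

def expected_relative_loss(usl, lsl, mu, sigma, target):
    """Expected squared error loss scaled so the loss equals 1 at the
    specification limits (product worth zero there), target at mid-spec."""
    d = (usl - lsl) / 2.0  # distance from target to either spec limit
    return (sigma ** 2 + (mu - target) ** 2) / d ** 2

# With the target at mid-specification, the two are linked by L = 1/(9 * Cpm^2).
usl, lsl, mu, sigma, target = 16.0, 4.0, 11.0, 1.5, 10.0
c = cpm(usl, lsl, mu, sigma, target)
L = expected_relative_loss(usl, lsl, mu, sigma, target)
print(abs(L - 1.0 / (9.0 * c ** 2)) < 1e-12)  # True
```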


Hierarchical Bayes Estimators of the Error Variance in Balanced Fixed-Effects Two-Way ANOVA Models

  • Kim, Byung-Hwee; Dong, Kyung-Hwa
    • Communications for Statistical Applications and Methods / v.6 no.2 / pp.487-500 / 1999
  • We propose a class of hierarchical Bayes estimators of the error variance under the relative squared error loss in balanced fixed-effects two-way analysis of variance models. We also provide analytic expressions for the risk improvement of the hierarchical Bayes estimators over multiples of the error sum of squares. Using these expressions, we identify a subclass of the hierarchical Bayes estimators whose members each dominate the best multiple of the error sum of squares, which is known to be minimax. Numerical values of the percentage risk improvement are given in some special cases.
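The "best multiple of the error sum of squares" benchmark can be checked numerically: if S ~ σ²·χ²_m, the multiple cS minimizing the relative squared error risk E[(cS/σ² − 1)²] is c = 1/(m+2), since E[X] = m and E[X²] = m(m+2) for X ~ χ²_m. A minimal Monte Carlo sketch (constants and helper names are illustrative):

```python
import random

def risk_of_multiple(c, m, sigma2=1.0, reps=50_000, seed=1):
    """Monte Carlo estimate of E[(c*S/sigma^2 - 1)^2] for S ~ sigma^2 * chi^2_m."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        s = sigma2 * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(m))
        total += (c * s / sigma2 - 1.0) ** 2
    return total / reps

m = 8
c_best = 1.0 / (m + 2)  # exact minimizer among multiples of S
# The best multiple S/(m+2) beats the unbiased choice S/m under this loss:
print(risk_of_multiple(c_best, m) < risk_of_multiple(1.0 / m, m))  # True
```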


ON THE ADMISSIBILITY OF HIERARCHICAL BAYES ESTIMATORS

  • Kim, Byung-Hwee; Chang, In-Hong
    • Journal of the Korean Statistical Society / v.35 no.3 / pp.317-329 / 2006
  • In the problem of estimating the error variance in the balanced fixed-effects one-way analysis of variance (ANOVA) model, Ghosh (1994) proposed hierarchical Bayes estimators and conjectured that all of them are admissible. In this paper we prove that this conjecture is true by representing the one-way ANOVA model in the distributional form of a multiparameter exponential family.

SOME POINT ESTIMATES FOR THE SHAPE PARAMETERS OF EXPONENTIATED-WEIBULL FAMILY

  • Singh, Umesh; Gupta, Pramod K.; Upadhyay, S.K.
    • Journal of the Korean Statistical Society / v.35 no.1 / pp.63-77 / 2006
  • The maximum product of spacings estimator is proposed in this paper as a competent alternative to the maximum likelihood estimator for the parameters of the exponentiated-Weibull distribution; it works even when the maximum likelihood estimator does not exist. In addition, a Bayes-type estimator known as the generalized maximum likelihood estimator is obtained for both shape parameters of this distribution. Although closed-form solutions for the proposed estimators do not exist, they can be obtained by appropriate numerical techniques. The relative performances of the estimators are compared on the basis of their relative risk efficiencies under symmetric and asymmetric losses. An example based on simulated data is given for illustration.
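A hedged sketch of the maximum product of spacings idea for the exponentiated-Weibull family (scale fixed at 1): choose the shape parameters that maximize the mean log-spacing of the fitted CDF at the order statistics. The crude grid search stands in for the paper's numerical techniques, and all names are illustrative.

```python
import math, random

def ew_cdf(x, alpha, theta):
    """Exponentiated-Weibull CDF F(x) = (1 - exp(-x**alpha))**theta, scale = 1."""
    return (1.0 - math.exp(-(x ** alpha))) ** theta

def mps_objective(data, alpha, theta):
    """Mean log-spacing: the maximum product of spacings criterion."""
    f = [0.0] + [ew_cdf(x, alpha, theta) for x in sorted(data)] + [1.0]
    spacings = [max(f[i + 1] - f[i], 1e-300) for i in range(len(f) - 1)]
    return sum(math.log(s) for s in spacings) / len(spacings)

def mps_estimate(data, grid):
    """Grid search over both shape parameters (a proper optimizer would do better)."""
    return max(((a, t) for a in grid for t in grid),
               key=lambda p: mps_objective(data, *p))

rng = random.Random(0)
# Simulate from the model by inversion: x = (-log(1 - u**(1/theta)))**(1/alpha).
alpha_true, theta_true = 2.0, 1.5
data = [(-math.log(1.0 - rng.random() ** (1.0 / theta_true))) ** (1.0 / alpha_true)
        for _ in range(300)]
grid = [0.5 + 0.25 * k for k in range(15)]  # 0.5, 0.75, ..., 4.0
print(mps_estimate(data, grid))
```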

Minimum risk point estimation of two-stage procedure for mean

  • Choi, Ki-Heon
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.887-894 / 2009
  • Two-stage minimum risk point estimation of the mean, the probability of success in a sequence of Bernoulli trials, is considered for the case where the loss is taken to be the symmetrized relative squared error of estimation plus a fixed cost per observation. First-order asymptotic expansions are obtained for large-sample properties of the two-stage procedure. A Monte Carlo simulation is carried out to obtain the expected sample size that minimizes the risk and to examine its finite-sample behavior.
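The trade-off behind minimum risk point estimation can be sketched generically: if the estimation risk decays like a/n and each observation costs c, the total risk R(n) = a/n + cn is minimized at n* = √(a/c). The constants below are hypothetical; the paper derives the exact first-order expansion for the symmetrized relative squared error loss.

```python
import math

def total_risk(n, a, c):
    """Estimation risk a/n plus sampling cost c per observation."""
    return a / n + c * n

def optimal_n(a, c):
    """Minimizer of a/n + c*n over continuous n: n* = sqrt(a/c)."""
    return math.sqrt(a / c)

a, c = 4.0, 0.01          # hypothetical loss constant and per-observation cost
n_star = optimal_n(a, c)  # sqrt(400) = 20.0
print(n_star)
# n* is no worse than any integer sample size up to 200:
print(total_risk(n_star, a, c) <= min(total_risk(n, a, c) for n in range(1, 200)))  # True
```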


Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song; Seheon Ko; Hyomin Lee
    • Korean Chemical Engineering Research / v.61 no.3 / pp.388-393 / 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical processes, prediction of particulate matter pollution levels, prediction of thermodynamic phase equilibria, and prediction of physical properties in transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Among frequently observed engineering data, many quantities are given on a log scale, such as the length of DNA or the concentration of analytes. In this study, for widely distributed log-scaled training data of virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scaled training data. Furthermore, we found that uniformly selected training data lead to the best prediction performance. The optimal loss functions and the training-data composition method studied in this work could be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
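The two loss functions the abstract singles out can be sketched with their standard definitions to show why they suit targets spanning several decades; the functions below are the textbook forms, not the paper's code, and the sample values are illustrative.

```python
import math

def mape(y_true, y_pred):
    """Mean absolute percentage error: relative, so every decade weighs equally."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def msle(y_true, y_pred):
    """Mean squared logarithmic error: compares log1p-transformed values."""
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Plain mean squared error, for contrast."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Targets spanning six decades, each off by the same factor of 1.1:
y_true = [1e-2, 1e0, 1e2, 1e4]
y_pred = [1.1 * t for t in y_true]
# MSE is dominated by the largest target, while the relative/log losses
# treat every decade alike -- the property the study exploits.
print(mape(y_true, y_pred))  # ~0.1: same relative error at every scale
print(mse(y_true, y_pred))   # dominated by the 1e4 term
```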