• Title/Summary/Keyword: Minimax estimator


Minimax Choice and Convex Combinations of Generalized Pickands Estimator of the Extreme Value Index

  • Yun, Seokhoon
    • Journal of the Korean Statistical Society / v.31 no.3 / pp.315-328 / 2002
  • As an extension of the well-known Pickands (1975) estimator for the extreme value index, Yun (2002) introduced a generalized Pickands estimator. This paper searches for a minimax estimator in the sense of minimizing the maximum asymptotic relative efficiency of the Pickands estimator with respect to the generalized one. To reduce the asymptotic variance of the resulting estimator, convex combinations of the minimax estimator are also considered and their asymptotic normality is established. Finally, the optimal combination is determined and proves to be superior to the generalized Pickands estimator.
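For reference, the classical Pickands (1975) estimator that this entry generalizes can be sketched from upper order statistics (a minimal illustration; the function name and interface are our own, not from the paper):

```python
import math

def pickands_estimator(sample, k):
    """Classical Pickands (1975) estimate of the extreme value index,
    built from the order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1);
    requires 4*k <= n."""
    n = len(sample)
    if 4 * k > n:
        raise ValueError("need 4*k <= n")
    x = sorted(sample)                   # ascending order statistics
    a = x[n - k] - x[n - 2 * k]          # X_(n-k+1) - X_(n-2k+1)
    b = x[n - 2 * k] - x[n - 4 * k]      # X_(n-2k+1) - X_(n-4k+1)
    return math.log(a / b) / math.log(2.0)
```

The generalized estimator of Yun (2002) varies the spacing pattern of these order statistics; the sketch above shows only the classical special case.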

ON THE MINIMAX VARIANCE ESTIMATORS OF SCALE IN TIME TO FAILURE MODELS

  • Lee, Jae-Won;Shevlyakov, Georgy-L.
    • Bulletin of the Korean Mathematical Society / v.39 no.1 / pp.23-31 / 2002
  • A scale parameter is the principal parameter to be estimated, since it corresponds to one of the main reliability characteristics, namely the average time to failure. To provide robustness of scale estimators to gross errors in the data, we apply the Huber minimax approach to time-to-failure models in statistical reliability theory. The minimax variance estimator of scale is obtained in the important particular case of the exponential distribution.
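The contrast the abstract draws can be illustrated with a toy comparison (a generic robust-versus-MLE sketch, not the paper's exact minimax variance estimator): for an exponential model the MLE of scale is the sample mean, which a single gross error can ruin, while a median-based estimate resists it.

```python
import math
import statistics

def mle_scale_exponential(times):
    """MLE of the exponential scale (average time to failure); not robust."""
    return sum(times) / len(times)

def robust_scale_exponential(times):
    """Median-based scale estimate: for Exp(theta) the median equals
    theta * ln 2, so median(times) / ln 2 is a consistent,
    outlier-resistant alternative."""
    return statistics.median(times) / math.log(2.0)
```

With one gross error in five observations, the mean-based estimate is pulled far above the bulk of the data while the median-based one is barely moved.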

A Study on Nonlinear Noise Removal for Images Corrupted with ${\alpha}$-Stable Random Noise

  • Hahn, Hee-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.93-99 / 2007
  • Robust nonlinear image denoising algorithms for the class of ${\alpha}$-stable distributions are introduced. The proposed amplitude-limited sample average filter (ALSAF) proves to be the maximum likelihood estimator under heavy-tailed Gaussian noise environments. The error norm for this estimator is equivalent to Huber's minimax norm, and it is optimal with respect to maximizing the efficacy under this noise environment. It is combined with the myriad filter to propose an amplitude-limited myriad filter (ALMF). The behavior and performance of the ALSAF and ALMF in ${\alpha}$-stable noise environments are illustrated and analyzed through simulation.
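A minimal sketch of the amplitude-limiting idea behind the ALSAF (the exact reference point and threshold rule in the paper may differ; here deviations from the window median are clipped before averaging):

```python
def alsaf(window, clip):
    """Amplitude-limited sample average: clip each sample's deviation
    from the window median at +/- clip, then average the clipped
    values. The clipping threshold is a tuning parameter tied to the
    corner of Huber's minimax norm."""
    xs = sorted(window)
    n = len(xs)
    med = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    clipped = [med + max(-clip, min(clip, x - med)) for x in xs]
    return sum(clipped) / n
```

The clipping step is what bounds the influence of impulsive (${\alpha}$-stable) outliers on the window average.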

Nonlinear Image Denoising Algorithm in the Presence of Heavy-Tailed Noise

  • Hahn, Hee-Il
    • Proceedings of the KIEE Conference / 2006.04a / pp.18-20 / 2006
  • Statistics for the differences between particular pixels and their neighbors are introduced and incorporated into a filter to remove additive Gaussian noise contaminating images. The derived denoising method corresponds to the maximum likelihood estimator for the heavy-tailed Gaussian distribution. The error norm corresponding to our estimator from robust statistics is equivalent to Huber's minimax norm. Our estimator is also optimal with respect to maximizing the efficacy under this noise environment.
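Huber's minimax norm mentioned here is quadratic for small residuals and linear beyond a threshold; a sketch of that error norm applied to neighbor differences (the `pixel_cost` helper is illustrative, not the paper's exact filter):

```python
def huber_rho(r, k):
    """Huber's minimax error norm: quadratic near zero, linear in the
    tails, with the transition at |r| = k."""
    a = abs(r)
    return 0.5 * r * r if a <= k else k * a - 0.5 * k * k

def pixel_cost(center, neighbors, k):
    """Cost of assigning value `center` to a pixel given its neighbor
    differences; minimizing this over `center` yields an M-estimate
    of the denoised pixel value."""
    return sum(huber_rho(center - v, k) for v in neighbors)
```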

Estimation of the Parameter of a Bernoulli Distribution Using a Balanced Loss Function

  • Farsipour, N.Sanjari;Asgharzadeh, A.
    • Communications for Statistical Applications and Methods / v.9 no.3 / pp.889-898 / 2002
  • In decision theoretic estimation, the loss function usually emphasizes precision of estimation. However, one may have interest in goodness of fit of the overall model as well as precision of estimation. From this viewpoint, Zellner (1994) proposed the balanced loss function, which takes account of both "goodness of fit" and "precision of estimation". This paper considers estimation of the parameter of a Bernoulli distribution using Zellner's (1994) balanced loss function. It is shown that the sample mean $\overline{X}$ is admissible. More general results, concerning the admissibility of estimators of the form $a\overline{X}+b$, are also presented. Finally, minimax estimators and some numerical results are given at the end of the paper.
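Zellner's (1994) balanced loss weights a goodness-of-fit term against a precision term; a direct transcription of that structure (the argument names are ours, with `omega` as the fit weight):

```python
def balanced_loss(delta, theta, data, omega):
    """Zellner's (1994) balanced loss for an estimate delta of theta:
    omega times the average squared residual of delta to the data
    (goodness of fit) plus (1 - omega) times the squared estimation
    error (precision)."""
    n = len(data)
    fit = sum((x - delta) ** 2 for x in data) / n
    precision = (delta - theta) ** 2
    return omega * fit + (1.0 - omega) * precision
```

Setting `omega = 0` recovers pure squared-error loss, the usual precision-only criterion.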

An Empirical Bayes Estimation of Multivariate Normal Mean Vector

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.15 no.2 / pp.97-106 / 1986
  • Assume that $X_1, X_2, \cdots, X_N$ are iid p-dimensional normal random vectors ($p \geq 3$) with unknown covariance matrix. The problem of estimating the multivariate normal mean vector in an empirical Bayes situation is considered. Empirical Bayes estimators, obtained by a Bayes treatment of the covariance matrix, are presented. It is shown that the estimators are minimax, each of which dominates the maximum likelihood estimator (MLE), when the loss is nonsingular quadratic loss. We also derive an approximate credibility region for the mean vector that takes advantage of the fact that the MLE is not the best estimator.
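The dominance phenomenon the abstract relies on is of James-Stein type; as background, a sketch of the classical positive-part shrinkage for a known unit covariance (the paper's estimators additionally handle the unknown covariance via a Bayes treatment, which this sketch omits):

```python
def james_stein(x_bar, n, p):
    """Positive-part James-Stein shrinkage of a p-dimensional sample
    mean (p >= 3), assuming unit covariance for illustration. Shrinks
    the MLE toward the origin and dominates it under quadratic loss."""
    if p < 3:
        raise ValueError("shrinkage dominance requires p >= 3")
    norm_sq = sum(v * v for v in x_bar)
    shrink = max(0.0, 1.0 - (p - 2) / (n * norm_sq))  # positive part
    return [shrink * v for v in x_bar]
```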

On Convex Combination of Local Constant Regression

  • Mun, Jung-Won;Kim, Choong-Rak
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.379-387 / 2006
  • Local polynomial regression is widely used because of good properties such as adaptation to various types of designs, the absence of boundary effects, and minimax efficiency. Choi and Hall (1998) proposed an estimator of the regression function using a convex combination idea. They showed that a convex combination of three local linear estimators produces an estimator which has the same order of bias as a local cubic smoother. In this paper we suggest another estimator of the regression function based on a convex combination of five local constant estimates. It turns out that this estimator has the same order of bias as a local cubic smoother.
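The local constant building block combined in this entry is the Nadaraya-Watson estimator; a minimal Gaussian-kernel version (bandwidth and kernel choice are illustrative, and the convex-combination step itself is not shown):

```python
import math

def local_constant(x0, xs, ys, h):
    """Nadaraya-Watson local constant fit at x0: a kernel-weighted
    average of the responses, with Gaussian kernel and bandwidth h."""
    weights = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

The paper's proposal takes a convex combination of five such fits so that leading bias terms cancel, matching the bias order of a local cubic smoother.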

An Additive Sparse Penalty for Variable Selection in High-Dimensional Linear Regression Model

  • Lee, Sangin
    • Communications for Statistical Applications and Methods / v.22 no.2 / pp.147-157 / 2015
  • We consider a sparse high-dimensional linear regression model. Penalized methods using LASSO or non-convex penalties have been widely used for variable selection and estimation in high-dimensional regression models. In penalized regression, the selection and prediction performances depend on which penalty function is used. For example, it is known that LASSO has a good prediction performance but tends to select more variables than necessary. In this paper, we propose an additive sparse penalty for variable selection using a combination of LASSO and minimax concave penalties (MCP). The proposed penalty is designed for good properties of both LASSO and MCP. We develop an efficient algorithm to compute the proposed estimator by combining a concave convex procedure and a coordinate descent algorithm. Numerical studies show that the proposed method has better selection and prediction performances compared to other penalized methods.
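The two penalties being combined can be written explicitly: MCP tapers the LASSO slope to zero and levels off at gamma * lambda^2 / 2, which is what reduces the bias on large coefficients (a direct sketch of the penalty functions themselves, not the paper's additive combination or its fitting algorithm):

```python
def mcp_penalty(beta, lam, gamma):
    """Minimax concave penalty (MCP): starts with slope lam like LASSO,
    tapers quadratically, and is constant for |beta| > gamma * lam."""
    b = abs(beta)
    if b <= gamma * lam:
        return lam * b - b * b / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def lasso_penalty(beta, lam):
    """LASSO (L1) penalty, for comparison: grows linearly forever,
    which biases large coefficients toward zero."""
    return lam * abs(beta)
```

For small `|beta|` the two penalties nearly agree, so MCP keeps LASSO-like sparsity near zero while leaving large coefficients essentially unpenalized.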