• Title/Summary/Keyword: Minimax estimator

12 search results, processing time 0.021 sec

Minimax Choice and Convex Combinations of Generalized Pickands Estimator of the Extreme Value Index

  • Yun, Seokhoon
    • Journal of the Korean Statistical Society / Vol. 31, No. 3 / pp.315-328 / 2002
  • As an extension of the well-known Pickands (1975) estimator for the extreme value index, Yun (2002) introduced a generalized Pickands estimator. This paper searches for a minimax estimator in the sense of minimizing the maximum asymptotic relative efficiency of the Pickands estimator with respect to the generalized one. To reduce the asymptotic variance of the resulting estimator, convex combinations of the minimax estimator are also considered and their asymptotic normality is established. Finally, the optimal combination is determined and proves to be superior to the generalized Pickands estimator.
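The classical Pickands (1975) estimator that this paper builds on can be sketched in a few lines; the generalized estimator of Yun (2002) and its minimax convex combinations are not reproduced here. A minimal NumPy sketch, with the sample size, threshold `k`, and Pareto test distribution chosen purely for illustration:

```python
import numpy as np

def pickands(sample, k):
    """Classical Pickands (1975) estimator of the extreme value index xi.

    Uses the order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1)
    (counted from the smallest); requires 4k <= n.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    if 4 * k > n:
        raise ValueError("need 4k <= n")
    num = x[n - k] - x[n - 2 * k]        # X_(n-k+1) - X_(n-2k+1)
    den = x[n - 2 * k] - x[n - 4 * k]    # X_(n-2k+1) - X_(n-4k+1)
    return np.log(num / den) / np.log(2.0)

# Pareto(1) sample: tail index alpha = 1, so the true xi is 1
rng = np.random.default_rng(0)
sample = 1.0 / rng.uniform(size=20000)
xi_hat = pickands(sample, k=500)
```

The estimate should land near the true value xi = 1 for this heavy-tailed sample, though with the substantial variance that motivates the convex-combination idea.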

ON THE MINIMAX VARIANCE ESTIMATORS OF SCALE IN TIME TO FAILURE MODELS

  • Lee, Jae-Won;Shevlyakov, Georgy-L.
    • Bulletin of the Korean Mathematical Society / Vol. 39, No. 1 / pp.23-31 / 2002
  • A scale parameter is the principal parameter to be estimated, since it corresponds to one of the main reliability characteristics, namely the average time to failure. To provide robustness of scale estimators to gross errors in the data, we apply the Huber minimax approach in time-to-failure models of statistical reliability theory. The minimax variance estimator of scale is obtained in the important particular case of the exponential distribution.
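The paper's minimax variance estimator itself is not reproduced here; the sketch below only illustrates the underlying problem, contrasting the non-robust MLE of the exponential scale (the sample mean) with a simple robust alternative (the sample median rescaled by ln 2, since the median of an Exp(theta) law is theta ln 2) under gross errors. The contamination level and values are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0                              # true scale: average time to failure
data = rng.exponential(theta, size=2000)
data[:40] = 1000.0                       # 2% gross errors in the data

mle = data.mean()                        # MLE of scale: ruined by the outliers
robust = np.median(data) / np.log(2.0)   # median of Exp(theta) is theta * ln 2
```

The mean is dragged far above theta = 2 by the 2% contamination, while the median-based estimate stays close, which is the kind of behavior the Huber minimax approach formalizes.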

A Study on Nonlinear Noise Removal for Images Corrupted with ${\alpha}$-Stable Random Noise

  • 한희일
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 44, No. 6 / pp.93-99 / 2007
  • This paper proposes an algorithm for improving the quality of images degraded by noise with an ${\alpha}$-stable probability distribution. The proposed amplitude-limited sample average filter is shown to be the maximum likelihood estimator (MLE) under heavy-tailed Gaussian noise. It is further shown that the error norm corresponding to this algorithm coincides with Huber's minimax norm, and that the filter is optimal in the sense of maximizing the efficacy under the noise environment described above. Combining this idea with the myriad filter, an amplitude-limited myriad filter is proposed and its performance is verified through experiments.
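The sample myriad at the heart of the myriad filter mentioned in this abstract is the location value minimizing sum_i log(K^2 + (x_i - beta)^2); the amplitude-limited variants proposed in the paper are not reproduced here. An illustrative sketch using a coarse grid search (practical filters use fixed-point iterations instead):

```python
import numpy as np

def sample_myriad(window, K):
    """Sample myriad with linearity parameter K:
    argmin over beta of sum_i log(K^2 + (x_i - beta)^2).
    Coarse grid search for illustration only."""
    betas = np.linspace(window.min(), window.max(), 2001)
    cost = np.log(K ** 2 + (window[None, :] - betas[:, None]) ** 2).sum(axis=1)
    return betas[np.argmin(cost)]

window = np.array([1.0, 1.1, 0.9, 1.05, 100.0])   # one gross outlier
beta_hat = sample_myriad(window, K=1.0)
```

Unlike the sample average (about 20.8 for this window), the myriad stays near the clean cluster around 1, which is why it suits impulsive, ${\alpha}$-stable noise.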

Nonlinear Image Denoising Algorithm in the Presence of Heavy-Tailed Noise

  • 한희일
    • The Korean Institute of Electrical Engineers: Conference Proceedings / Proceedings of the 2006 KIEE Symposium, Information and Control Section / pp.18-20 / 2006
  • Statistics for the differences between particular pixels and their neighbors are introduced and incorporated into a filter that removes additive Gaussian noise contaminating images. The derived denoising method corresponds to the maximum likelihood estimator for the heavy-tailed Gaussian distribution. In terms of robust statistics, the error norm corresponding to this estimator is equivalent to Huber's minimax norm. The estimator is also optimal in the sense of maximizing the efficacy under the above noise environment.

Estimation of the Parameter of a Bernoulli Distribution Using a Balanced Loss Function

  • Farsipour, N.Sanjari;Asgharzadeh, A.
    • Communications for Statistical Applications and Methods / Vol. 9, No. 3 / pp.889-898 / 2002
  • In decision-theoretic estimation, the loss function usually emphasizes precision of estimation. However, one may be interested in goodness of fit of the overall model as well as precision of estimation. From this viewpoint, Zellner (1994) proposed the balanced loss function, which takes account of both "goodness of fit" and "precision of estimation". This paper considers estimation of the parameter of a Bernoulli distribution using Zellner's (1994) balanced loss function. It is shown that the sample mean $\overline{X}$ is admissible. More general results concerning the admissibility of estimators of the form $a\overline{X}+b$ are also presented. Finally, minimax estimators and some numerical results are given at the end of the paper.
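The paper's minimax results under balanced loss are not reproduced here; for context, the classical minimax estimator of a Bernoulli p under ordinary squared-error loss (the Bayes estimator against a Beta(sqrt(n)/2, sqrt(n)/2) prior, whose risk is constant in p) is easy to sketch:

```python
import numpy as np

def minimax_bernoulli(x):
    """Minimax estimator of a Bernoulli p under squared-error loss:
    (sum x + sqrt(n)/2) / (n + sqrt(n)), the constant-risk Bayes
    estimator against a Beta(sqrt(n)/2, sqrt(n)/2) prior."""
    x = np.asarray(x)
    n = x.size
    return (x.sum() + np.sqrt(n) / 2.0) / (n + np.sqrt(n))

x = np.array([1, 0, 1, 1])   # n = 4: estimate (3 + 1) / (4 + 2) = 2/3
p_hat = minimax_bernoulli(x)
```

Note how the estimate is pulled from the sample mean 3/4 toward 1/2, the shrinkage that equalizes the risk across p.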

An Empirical Bayes Estimation of Multivariate Normal Mean Vector

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / Vol. 15, No. 2 / pp.97-106 / 1986
  • Assume that $X_1, X_2, \cdots, X_N$ are iid p-dimensional normal random vectors ($p \geq 3$) with unknown covariance matrix. The problem of estimating the multivariate normal mean vector in an empirical Bayes situation is considered. Empirical Bayes estimators, obtained by a Bayes treatment of the covariance matrix, are presented. It is shown that the estimators are minimax, each dominating the maximum likelihood estimator (MLE), when the loss is nonsingular quadratic loss. We also derive an approximate credibility region for the mean vector that takes advantage of the fact that the MLE is not the best estimator.
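The empirical Bayes estimators with unknown covariance derived in the paper are not reproduced here; the dominance phenomenon they extend is the classical James-Stein result, sketched below for a single p-dimensional observation (p >= 3) with identity covariance, shrinking toward zero:

```python
import numpy as np

def james_stein(x):
    """James-Stein estimator (1 - (p-2)/||x||^2) x for one observation
    x ~ N_p(mu, I), p >= 3; it dominates the MLE x under quadratic loss."""
    x = np.asarray(x, dtype=float)
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

x = np.array([3.0, 0.0, 4.0, 0.0, 0.0])   # p = 5, ||x||^2 = 25
shrunk = james_stein(x)                    # shrinkage factor 1 - 3/25 = 0.88
```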

On Convex Combination of Local Constant Regression

  • Mun, Jung-Won;Kim, Choong-Rak
    • Communications for Statistical Applications and Methods / Vol. 13, No. 2 / pp.379-387 / 2006
  • Local polynomial regression is widely used because of good properties such as adaptation to various types of designs, absence of boundary effects, and minimax efficiency. Choi and Hall (1998) proposed an estimator of the regression function using a convex combination idea. They showed that a convex combination of three local linear estimators produces an estimator with the same order of bias as a local cubic smoother. In this paper we suggest another estimator of the regression function based on a convex combination of five local constant estimates. It turns out that this estimator has the same order of bias as a local cubic smoother.
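The local constant (Nadaraya-Watson) estimate, the building block whose convex combinations the paper studies, is just a kernel-weighted average of the responses; the five-estimate combination itself is not reproduced here. A minimal sketch with a Gaussian kernel:

```python
import numpy as np

def local_constant(x0, x, y, h):
    """Nadaraya-Watson (local constant) regression estimate at x0:
    a kernel-weighted average of the responses, here with bandwidth h
    and a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

x = np.linspace(0.0, 1.0, 50)
y = np.full_like(x, 2.0)          # constant response
fit = local_constant(0.5, x, y, h=0.2)
```

For a constant response the weighted average reproduces that constant exactly, regardless of the bandwidth; bias only appears for curved regression functions, which is what the convex-combination construction reduces.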

An Additive Sparse Penalty for Variable Selection in High-Dimensional Linear Regression Model

  • Lee, Sangin
    • Communications for Statistical Applications and Methods / Vol. 22, No. 2 / pp.147-157 / 2015
  • We consider a sparse high-dimensional linear regression model. Penalized methods using LASSO or non-convex penalties have been widely used for variable selection and estimation in high-dimensional regression models. In penalized regression, the selection and prediction performances depend on which penalty function is used. For example, it is known that LASSO has good prediction performance but tends to select more variables than necessary. In this paper, we propose an additive sparse penalty for variable selection using a combination of the LASSO and minimax concave (MCP) penalties. The proposed penalty is designed to enjoy the good properties of both LASSO and MCP. We develop an efficient algorithm to compute the proposed estimator by combining a concave-convex procedure and a coordinate descent algorithm. Numerical studies show that the proposed method has better selection and prediction performance than other penalized methods.
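The additive combined penalty proposed in the paper is not reproduced here; the MCP ingredient (Zhang, 2010) and its univariate thresholding rule, the inner step of a coordinate descent solver for a standardized design, can be sketched as:

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    then constant gamma*lam^2/2 (large coefficients are not penalized further)."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

def mcp_threshold(z, lam, gamma):
    """Univariate MCP solution (standardized design, gamma > 1):
    rescaled soft-thresholding below gamma*lam, identity (no bias) above."""
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    return np.where(np.abs(z) <= gamma * lam, soft / (1.0 - 1.0 / gamma), z)
```

Small coefficients are shrunk like LASSO, while large ones pass through unshrunk, the bias reduction that MCP contributes to the combined penalty.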