• Title/Abstract/Keyword: asymptotically efficient estimates

Search results: 10 (processing time: 0.023 s)

A Modification of the W Test for Exponentiality

  • Kim, Nam-Hyun
    • Communications for Statistical Applications and Methods / Vol. 8, No. 1 / pp.159-171 / 2001
  • Shapiro and Wilk (1972) developed a test for exponentiality with origin and scale unknown. The procedure consists of comparing the generalized least squares estimate of scale with the estimate of scale given by the sample variance. However, the test statistic is inconsistent; that is, the power of the test does not approach 1 as the sample size increases. Hence we give a test based on the ratio of two asymptotically efficient estimates of scale. We also conducted a power study to compare the test procedures, using Monte Carlo samples from a wide range of alternatives. The suggested statistics are found to have higher power for alternatives with coefficient of variation greater than or equal to 1.

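The ratio idea in this abstract can be illustrated with a small Monte Carlo sketch. This is a toy version under assumed details, not the paper's exact modified statistic: it computes the Shapiro-Wilk (1972) form W = n(x̄ − x₍₁₎)² / ((n−1)Σ(xᵢ − x̄)²), a ratio of a spacing-based scale estimate to the sample standard deviation, and simulates its null distribution under exponentiality.

```python
import random

def w_exp(x):
    """Shapiro-Wilk (1972) W statistic for exponentiality with unknown
    origin and scale: W = n*(mean - min)^2 / ((n-1)*sum((x - mean)^2))."""
    n = len(x)
    xbar = sum(x) / n
    ss = sum((v - xbar) ** 2 for v in x)
    return n * (xbar - min(x)) ** 2 / ((n - 1) * ss)

# Monte Carlo null distribution under the exponential model
random.seed(1)
n, reps = 50, 2000
null_w = sorted(w_exp([random.expovariate(1.0) for _ in range(n)])
                for _ in range(reps))
lo, hi = null_w[int(0.025 * reps)], null_w[int(0.975 * reps)]
# reject exponentiality at level 0.05 when an observed W falls outside (lo, hi)
```

Under the null, W concentrates near 1/n; a power study like the one described would repeat the simulation with samples drawn from Weibull, lognormal, and other alternatives.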

Bayesian Confidence Intervals in Penalized Likelihood Regression

  • Kim, Young-Ju
    • Communications for Statistical Applications and Methods / Vol. 13, No. 1 / pp.141-150 / 2006
  • Penalized likelihood regression for exponential families was considered by Kim (2005) through smoothing parameter selection and asymptotically efficient low-dimensional approximations. We derive approximate Bayesian confidence intervals based on the Bayes model associated with lower-dimensional approximations to provide interval estimates in penalized likelihood regression, and conduct empirical studies to assess their properties.
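The Bayes-model connection can be sketched in a one-dimensional Gaussian analogue (an illustration with hypothetical data and an arbitrary penalty weight, not the paper's construction): the ridge penalty λβ² corresponds to a N(0, σ²/λ) prior on β, so the penalized fit comes with a posterior variance and hence an interval estimate.

```python
import math
import random

# Penalized likelihood  sum (y - b*x)^2 + lam*b^2  <=>  prior b ~ N(0, s2/lam),
# giving posterior N(sxy/(sxx + lam), s2/(sxx + lam)).
random.seed(3)
beta, s2, lam = 1.5, 0.25, 2.0          # hypothetical truth, noise var, penalty
xs = [random.gauss(0, 1) for _ in range(100)]
ys = [beta * x + random.gauss(0, math.sqrt(s2)) for x in xs]

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b_hat = sxy / (sxx + lam)                    # penalized likelihood estimate
sd = math.sqrt(s2 / (sxx + lam))             # posterior standard deviation
ci = (b_hat - 1.96 * sd, b_hat + 1.96 * sd)  # approximate Bayesian interval
```

In the paper's setting the same logic is carried through a low-dimensional approximation to a nonparametric penalized likelihood fit rather than a single coefficient.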

Efficient Score Estimation and Adaptive Rank and M-estimators from Left-Truncated and Right-Censored Data

  • Kim, Chul-Ki
    • Communications for Statistical Applications and Methods / Vol. 3, No. 3 / pp.113-123 / 1996
  • A data-dependent (adaptive) choice of asymptotically efficient score functions for rank estimators and M-estimators of regression parameters in a linear regression model with left-truncated and right-censored data is developed herein. The locally adaptive smoothing techniques of Muller and Wang (1990) and Uzunogullari and Wang (1992) provide good estimates of the hazard function h and its derivative h' from left-truncated and right-censored data. However, since we need to estimate h'/h for the asymptotically optimal choice of score functions, the naive estimator, which is just the ratio of the estimated h' and h, turns out to have a few drawbacks. An alternative method that overcomes these shortcomings and also speeds up the algorithms is developed. In particular, we use a subroutine of the PPR (Projection Pursuit Regression) method coded by Friedman and Stuetzle (1981) to find the nonparametric derivative of log(h) for the problem of estimating h'/h.

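The two routes to h'/h can be compared on a toy grid (hypothetical clean hazard values, not the authors' estimator): divide a differenced ĥ by ĥ, or difference log ĥ directly, which estimates (log h)' = h'/h in one step.

```python
import math

# Toy Weibull(shape=2) hazard h(t) = 2t on a grid, so the target h'/h = 1/t.
# (Real input would be kernel estimates of h from truncated/censored data.)
ts = [0.5 + 0.01 * i for i in range(201)]
h = [2.0 * t for t in ts]

inner = range(1, len(ts) - 1)
# naive route: central-difference h, then divide by h
naive = [(h[i + 1] - h[i - 1]) / (ts[i + 1] - ts[i - 1]) / h[i] for i in inner]
# direct route: central-difference log h, i.e. estimate (log h)' = h'/h
direct = [(math.log(h[i + 1]) - math.log(h[i - 1])) / (ts[i + 1] - ts[i - 1])
          for i in inner]
```

On clean values the two agree; with noisy kernel estimates the division step inflates error wherever ĥ is small, which is the drawback the abstract alludes to and why estimating the derivative of log(h) directly (here via PPR in the paper) is preferred.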

The Limit Distribution of a Modified W-Test Statistic for Exponentiality

  • Kim, Namhyun
    • Communications for Statistical Applications and Methods / Vol. 8, No. 2 / pp.473-481 / 2001
  • Shapiro and Wilk (1972) developed a test for exponentiality with origin and scale unknown. The procedure consists of comparing the generalized least squares estimate of scale with the estimate of scale given by the sample variance. However, the test statistic is inconsistent. Kim (2001) proposed a modified Shapiro-Wilk test statistic based on the ratio of two asymptotically efficient estimates of scale. In this paper, we study the asymptotic behavior of the statistic using the approximation of the quantile process by a sequence of Brownian bridges, and represent the limit null distribution as an integral of a Brownian bridge.

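Limit laws of this kind are easy to simulate. A sketch, using the Cramér-von Mises-type functional ∫₀¹B(t)²dt purely for illustration (the paper's actual Brownian-bridge integral differs):

```python
import math
import random

def bridge_sq_integral(m, rng):
    """One draw of  int_0^1 B(t)^2 dt  for a Brownian bridge
    B(t) = W(t) - t*W(1), discretized on an m-point grid."""
    dt = 1.0 / m
    w, path = 0.0, []
    for _ in range(m):
        w += rng.gauss(0.0, math.sqrt(dt))   # increment of Brownian motion W
        path.append(w)
    w1 = path[-1]                            # W(1), pinned out by the bridge
    return sum((path[i] - (i + 1) * dt * w1) ** 2 for i in range(m)) * dt

rng = random.Random(2)
draws = [bridge_sq_integral(500, rng) for _ in range(2000)]
mean = sum(draws) / len(draws)   # E int B^2 dt = int t(1-t) dt = 1/6
```

Simulating the appropriate functional this way gives approximate critical values for the limit null distribution.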

Quadratic inference functions in marginal models for longitudinal data with time-varying stochastic covariates

  • Cho, Gyo-Young;Dashnyam, Oyunchimeg
    • Journal of the Korean Data and Information Science Society / Vol. 24, No. 3 / pp.651-658 / 2013
  • For the marginal model and the generalized estimating equations (GEE) method there is an important full covariates conditional mean (FCCM) assumption, pointed out by Pepe and Anderson (1994). With longitudinal data and time-varying stochastic covariates, this assumption need not hold, and if it is violated, biased estimates of the regression coefficients may result. But if a diagonal working correlation matrix is used, the resulting estimates are (nearly) unbiased irrespective of whether the assumption is violated (Pan et al., 2000). The quadratic inference functions (QIF) method proposed by Qu et al. (2000) is based on the generalized method of moments (GMM) applied to the GEE. The QIF yields a substantial improvement in efficiency for the estimator of $\beta$ when the working correlation is misspecified, and equal efficiency to the GEE when the working correlation is correct (Qu et al., 2000). In this paper, we are interested in whether the QIF can improve on the GEE when the FCCM assumption is violated. We show that the QIF with an exchangeable or AR(1) working correlation matrix is neither consistent nor asymptotically normal in this case, and may be less efficient than the GEE with an independence working correlation. Our simulation studies verify this result.
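For intuition on the independence-working-correlation case, here is a minimal sketch with hypothetical simulated data: for a linear marginal model with a diagonal working correlation, the GEE Σᵢ Xᵢᵀ(yᵢ − Xᵢβ) = 0 reduces to pooled least squares over all subject-time pairs, which is why those estimates stay (nearly) unbiased.

```python
import random

# marginal linear model y_it = b0 + b1*x_it + e_it with within-subject
# correlation induced by a shared subject effect u_i
random.seed(7)
b0, b1 = 1.0, 2.0
xs, ys = [], []
for _ in range(200):          # 200 subjects, 4 repeated measures each
    u = random.gauss(0, 1)    # subject effect -> correlated repeated measures
    for _ in range(4):
        x = random.gauss(0, 1)
        xs.append(x)
        ys.append(b0 + b1 * x + u + random.gauss(0, 0.5))

# independence-working-correlation GEE = pooled least squares
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b1_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b0_hat = (sy - b1_hat * sx) / n
```

With non-diagonal working correlations (exchangeable, AR(1)), the estimating equations mix observations across time, which is where the FCCM assumption enters and where the paper shows the QIF can fail.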

Negative Exponential Disparity Based Robust Estimates of Ordered Means in Normal Models

  • Bhattacharya, Bhaskar;Sarkar, Sahadeb;Jeong, Dong-Bin
    • Communications for Statistical Applications and Methods / Vol. 7, No. 2 / pp.371-383 / 2000
  • Lindsay (1994) and Basu et al. (1997) show that another density-based distance, the negative exponential disparity (NED), is an excellent competitor to the Hellinger distance (HD) in generating an asymptotically fully efficient and robust estimator. Bhattacharya and Basu (1996) consider estimation of the locations of several normal populations when an order relation between them is known to be true. They empirically show that the robust HD-based weighted likelihood estimators compare favorably with the M-estimators based on Huber's $\psi$ function, the Gastwirth estimator, and the trimmed mean estimator. In this paper we investigate the performance of the weighted likelihood estimator based on the NED as a robust alternative to that based on the HD. The NED-based estimator is found to be quite competitive in the settings considered by Bhattacharya and Basu.

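A minimal sketch of minimum disparity estimation for a discrete model (toy Poisson data with one gross outlier; the discrete setting is chosen for simplicity, not the normal-means setting of the paper). With Pearson residual δ(x) = d(x)/f_θ(x) − 1, the sketch takes HD(d, f) = 2Σ(√d − √f)² and, as an assumption, the standardized form NED(d, f) = Σ(e^(−δ) − 1 + δ)f; both downweight the outlier cell, unlike the MLE.

```python
import math

def pois(x, t):
    return math.exp(-t) * t ** x / math.factorial(x)

# hypothetical Poisson-like counts with one gross outlier (15)
data = [1, 2, 2, 3, 1, 0, 2, 3, 2, 1, 2, 4, 2, 1, 3, 2, 2, 1, 3, 2, 15]
n = len(data)
support = range(0, 31)
d = {x: data.count(x) / n for x in support}   # empirical density

def hd(t):   # Hellinger distance
    return 2 * sum((math.sqrt(d[x]) - math.sqrt(pois(x, t))) ** 2
                   for x in support)

def ned(t):  # negative exponential disparity, G(delta) = e^{-delta} - 1 + delta
    total = 0.0
    for x in support:
        f = pois(x, t)
        delta = d[x] / f - 1.0
        total += (math.exp(-delta) - 1.0 + delta) * f
    return total

grid = [0.5 + 0.01 * k for k in range(551)]    # theta in [0.5, 6.0]
theta_hd = min(grid, key=hd)                   # minimum HD estimate
theta_ned = min(grid, key=ned)                 # minimum NED estimate
mle = sum(data) / n                            # sample mean, pulled by outlier
```

Both disparity estimates stay near the bulk of the data while the MLE is dragged toward the outlier, which is the robustness property the abstract describes.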

Penalizing the Negative Exponential Disparity in Discrete Models

  • Sarkar, Sahadeb;Song, Kijoung-Song;Jeong, Dong-Bin
    • Communications for Statistical Applications and Methods / Vol. 5, No. 2 / pp.517-529 / 1998
  • When the sample size is small, the robust minimum Hellinger distance (HD) estimator can have substantially poor relative efficiency at the true model. Similarly, approximating the exact null distributions of the ordinary Hellinger distance tests with the limiting chi-square distributions can be quite inappropriate in small samples. To overcome these problems, Harris and Basu (1994) and Basu et al. (1996) recommended using a modified HD called the penalized Hellinger distance (PHD). Lindsay (1994) and Basu et al. (1997) showed that another density-based distance, the negative exponential disparity (NED), is a major competitor to the Hellinger distance in producing an asymptotically fully efficient and robust estimator. In this paper we investigate the small-sample performance of the estimates and tests based on the NED and the penalized NED (PNED). Our results indicate that, in the settings considered here, the NED, unlike the HD, produces estimators that perform very well in small samples, and penalizing the NED does not help. In testing of hypotheses, however, the deviance test based on the PNED appears to achieve the best small-sample level compared to tests based on the NED, HD and PHD.


중도절단자료에 대한 수정된 SHAPIRO-WILK 지수 검정 (A Modification of the Shapiro-Wilk Test for Exponentiality Based on Censored Data)

  • 김남현
    • 응용통계연구 / Vol. 21, No. 2 / pp.265-273 / 2008
  • In this paper, the modified Shapiro and Wilk (1972) $W_E$ statistic for exponentiality proposed by Kim (2001a) is applied to censored data. The test statistic is modified using normalized spacings, in the same way that Samanta and Schwarz (1988) modified the $W_E$ statistic for censored data. As a result, under the null hypothesis the proposed statistic has the same distribution as in the uncensored case, with only the sample size changing. Comparing the power of the proposed statistic with that of Samanta and Schwarz (1988), the proposed statistic showed better power for alternatives with coefficient of variation greater than or equal to 1, in the censored case just as in the uncensored case.
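The normalized-spacings transformation used here can be sketched generically (with the origin taken as 0 for illustration; the exact indexing in the paper may differ): for the first r order statistics of n i.i.d. exponentials, the spacings D_i are again i.i.d. exponential with the same scale, so a complete-sample test can be applied to them.

```python
def normalized_spacings(x_sorted, n):
    """D_i = (n - i + 1) * (X_(i) - X_(i-1)), with X_(0) = 0.  For the first
    r order statistics of n i.i.d. exponentials (Type II censoring), the
    D_1, ..., D_r are i.i.d. exponential with the same scale parameter."""
    prev, spacings = 0.0, []
    for i, xi in enumerate(x_sorted, start=1):
        spacings.append((n - i + 1) * (xi - prev))
        prev = xi
    return spacings

# first 3 order statistics of a (here complete) sample of size n = 3
print(normalized_spacings([1.0, 2.0, 4.0], 3))   # -> [3.0, 2.0, 2.0]
```

For a complete sample the spacings sum back to the sample total, and for a censored sample the first r spacings form a complete exponential sample of size r, which is what lets the null distribution stay the same with only the sample size changing.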

전진 제 2종 중도절단자료에 대한 Shapiro-Wilk 형태의 지수검정 (The Shapiro-Wilk Type Test for Exponentiality Based on Progressively Type II Censored Data)

  • 김남현
    • 응용통계연구 / Vol. 23, No. 3 / pp.487-495 / 2010
  • In this paper, the Shapiro and Wilk (1972) statistic, which is widely used for testing exponentiality, and the statistic of Kim (2001a), which remedies its shortcomings, are applied to progressively Type II censored data from an exponential distribution with known location and unknown scale. To this end, each statistic is modified, following Stephens (1978), into a test statistic for the known-location case, and the data are transformed using normalized spacings. A simulation study comparing the powers shows that the statistic of Kim (2001a) gives better power than the Shapiro-Wilk statistic in almost all cases considered.

수학교과의 동형고사 문항에서 양호도 향상에 유효한 최적정답율 산정에 관한 연구 (Study on Estimating the Optimal Number-right Score in Two Equivalent Mathematics-test by Linear Score Equating)

  • 홍석강
    • 한국수학교육학회지시리즈A:수학교육 / Vol. 37, No. 1 / pp.1-13 / 1998
  • In this paper, we present an efficient way to enumerate the optimal number-right scores so as to adjust item difficulty and improve item discrimination. To estimate the optimal number-right scores in two equivalent mathematics tests by linear score equating, a measurement error model was applied to the true scores observed from a pair of equivalent tests assumed to measure the same trait. The model for the true scores, a bivariate model, is a simple regression model for inferring the optimal number-right scores, and we assume further that the two simple regression lines, of raw scores and of true scores, have independent error models. We enumerated the difference between the mean of $x^*$ and $\mu_x$, and between the mean of $y^*$ and $a + b\mu_x$, using estimates from the two-error-variable regression model. Furthermore, to distinguish them from the original score points, the estimated number-right scores $\hat{y}^*$, the regression estimates of the true scores, were shifted to center points composed of these differences by the parallel score-moving procedure described above. The resulting distribution (Figure 5) is asymptotically normal and serves as the distribution of the optimal number-right scores, from which the optimal proportion of number-right scores for each item can be decided. Assuming that the equivalence of two tests is closely connected to the unidimensionality of a student's ability, we also introduce a new definition of trait score to evaluate that ability on each item. There are limitations in obtaining the real true scores and in analyzing data from the bivariate error model; even so, this study indicates that the optimal number-right scores can be estimated easily by this enumeration procedure.
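The classical linear score equating underlying this abstract can be sketched as follows (generic form with hypothetical score vectors; the paper's measurement-error refinement is not reproduced): a form-X score x is mapped to the form-Y scale by y* = μ_Y + (σ_Y/σ_X)(x − μ_X).

```python
import statistics

def linear_equate(x_scores, y_scores):
    """Return a function mapping form-X raw scores onto the form-Y scale:
    y* = mean_Y + (sd_Y / sd_X) * (x - mean_X)  (classical linear equating)."""
    b = statistics.pstdev(y_scores) / statistics.pstdev(x_scores)
    a = statistics.mean(y_scores) - b * statistics.mean(x_scores)
    return lambda x: a + b * x

# hypothetical number-right scores on two equivalent forms
to_y = linear_equate([10, 12, 14, 16, 18], [13, 15, 17, 19, 21])
print(to_y(14.0))   # -> 17.0
```

The paper then treats both observed score sets as noisy versions of true scores and adjusts this line through a two-error-variable regression model.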
