• Title/Summary/Keyword: asymptotically efficient estimates

10 search results

A Modification of the W Test for Exponentiality

  • Kim, Nam-Hyun
    • Communications for Statistical Applications and Methods, v.8 no.1, pp.159-171, 2001
  • Shapiro and Wilk (1972) developed a test for exponentiality with origin and scale unknown. The procedure consists of comparing the generalized least squares estimate of scale with the estimate of scale given by the sample variance. However, the test statistic is inconsistent; that is, the power of the test will not approach 1 as the sample size increases. Hence we give a test based on the ratio of two asymptotically efficient estimates of scale. We also conducted a power study to compare the test procedures, using Monte Carlo samples from a wide range of alternatives. The suggested statistics are found to have higher power for alternatives with a coefficient of variation greater than or equal to 1.
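
The ratio-of-scale-estimates idea above can be sketched in a few lines. The two estimators and the Monte Carlo calibration below are illustrative choices under an exponential model, not Kim's exact statistic:

```python
import numpy as np

def scale_ratio_statistic(x):
    """Ratio of two scale estimates under an Exp(origin, scale) model.
    These estimators are illustrative, not Kim (2001)'s exact choices."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s1 = (x.mean() - x[0]) * n / (n - 1)  # scale from mean minus minimum
    s2 = x.std(ddof=1)                    # scale from the sample SD
    return s1 / s2

def monte_carlo_power(alt_sampler, n=50, alpha=0.05, n_rep=2000, seed=0):
    """Two-sided rejection rate of the ratio test against an alternative.
    The statistic is origin/scale invariant, so Exp(1) gives the null."""
    rng = np.random.default_rng(seed)
    null = np.array([scale_ratio_statistic(rng.exponential(size=n))
                     for _ in range(n_rep)])
    lo, hi = np.quantile(null, [alpha / 2, 1 - alpha / 2])
    stats = np.array([scale_ratio_statistic(alt_sampler(rng, n))
                      for _ in range(n_rep)])
    return float(np.mean((stats < lo) | (stats > hi)))
```

Under the exponential null both estimates target the same scale, so the ratio concentrates near 1; under a non-exponential alternative the two estimates diverge and the test rejects.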

Bayesian Confidence Intervals in Penalized Likelihood Regression

  • Kim Young-Ju
    • Communications for Statistical Applications and Methods, v.13 no.1, pp.141-150, 2006
  • Penalized likelihood regression for exponential families has been considered by Kim (2005) through smoothing parameter selection and asymptotically efficient low-dimensional approximations. We derive approximate Bayesian confidence intervals based on the Bayes model associated with lower-dimensional approximations to provide interval estimates in penalized likelihood regression, and conduct empirical studies to assess their properties.

Efficient Score Estimation and Adaptive Rank and M-estimators from Left-Truncated and Right-Censored Data

  • Chul-Ki Kim
    • Communications for Statistical Applications and Methods, v.3 no.3, pp.113-123, 1996
  • A data-dependent (adaptive) choice of asymptotically efficient score functions for rank estimators and M-estimators of regression parameters in a linear regression model with left-truncated and right-censored data is developed herein. The locally adaptive smoothing techniques of Muller and Wang (1990) and Uzunogullari and Wang (1992) provide good estimates of the hazard function h and its derivative h' from left-truncated and right-censored data. However, since we need to estimate h'/h for the asymptotically optimal choice of score functions, the naive estimator, which is just the ratio of the estimated h' and h, turns out to have a few drawbacks. An alternative method that overcomes these shortcomings and also speeds up the algorithms is developed. In particular, we use a subroutine of the PPR (Projection Pursuit Regression) method coded by Friedman and Stuetzle (1981) to find the nonparametric derivative of log(h) for the problem of estimating h'/h.
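
The "naive estimator" the abstract criticizes can be illustrated on complete (uncensored) data: estimate the density f and its derivative with a Gaussian kernel, form h = f/S with the empirical survival S, and take the ratio. This is only a toy stand-in for the truncated/censored estimators of Muller-Wang and Uzunogullari-Wang:

```python
import numpy as np

def kernel_hazard(data, grid, bw):
    """Gaussian-kernel estimates of the hazard h and its derivative h'
    for complete data: h = f/S with a kernel density f and the empirical
    survival S. A toy stand-in, not the paper's censored-data estimator."""
    data = np.asarray(data, dtype=float)[:, None]
    u = (grid[None, :] - data) / bw
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    f = k.sum(axis=0) / (len(data) * bw)             # density estimate
    fp = (-u * k).sum(axis=0) / (len(data) * bw**2)  # its derivative
    S = (data > grid[None, :]).mean(axis=0)          # empirical survival
    h = f / S
    hp = fp / S + f**2 / S**2   # h' = f'/S + f^2/S^2, since S' = -f
    return h, hp

rng = np.random.default_rng(0)
x = rng.exponential(size=4000)        # Exp(1): constant hazard, so h'/h = 0
grid = np.linspace(0.5, 1.5, 11)
h, hp = kernel_hazard(x, grid, bw=0.2)
score = hp / h                        # the naive ratio estimate of h'/h
```

For exponential data the true h'/h is identically zero, so the size of `score` on an interior grid gives a feel for the ratio estimator's noise.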

The Limit Distribution of a Modified W-Test Statistic for Exponentiality

  • Kim, Namhyun
    • Communications for Statistical Applications and Methods, v.8 no.2, pp.473-481, 2001
  • Shapiro and Wilk (1972) developed a test for exponentiality with origin and scale unknown. The procedure consists of comparing the generalized least squares estimate of scale with the estimate of scale given by the sample variance. However, the test statistic is inconsistent. Kim (2001) proposed a modified Shapiro-Wilk test statistic based on the ratio of two asymptotically efficient estimates of scale. In this paper, we study the asymptotic behavior of the statistic using the approximation of the quantile process by a sequence of Brownian bridges and represent the limit null distribution as an integral of a Brownian bridge.
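
A limit distribution expressed as an integral of a Brownian bridge is easy to approximate by simulation. The sketch below simulates bridge paths and the plain integral functional ∫₀¹ B(t) dt, which is N(0, 1/12); the paper's actual functional is a different integral of the bridge:

```python
import numpy as np

rng = np.random.default_rng(5)
m, steps = 2000, 1000
t = np.linspace(0.0, 1.0, steps + 1)
# Brownian motion via cumulative Gaussian increments, then pinned at t = 1
inc = rng.normal(scale=np.sqrt(1.0 / steps), size=(m, steps))
W = np.concatenate([np.zeros((m, 1)), np.cumsum(inc, axis=1)], axis=1)
B = W - t * W[:, -1:]      # Brownian bridge paths: B(t) = W(t) - t * W(1)
I = B.mean(axis=1)         # Riemann approximation of each path's integral
# The integral of a Brownian bridge is exactly N(0, 1/12)
```

Replacing `B.mean(axis=1)` with the statistic-specific functional turns this into a Monte Carlo table of critical values for the limit null distribution.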

Quadratic inference functions in marginal models for longitudinal data with time-varying stochastic covariates

  • Cho, Gyo-Young; Dashnyam, Oyunchimeg
    • Journal of the Korean Data and Information Science Society, v.24 no.3, pp.651-658, 2013
  • The marginal model and the generalized estimating equations (GEE) method rest on the full covariates conditional mean (FCCM) assumption pointed out by Pepe and Anderson (1994). With longitudinal data and time-varying stochastic covariates this assumption may not hold, and if it is violated, biased estimates of the regression coefficients may result. If a diagonal working correlation matrix is used, however, the resulting estimates are (nearly) unbiased whether or not the assumption is violated (Pan et al., 2000). The quadratic inference functions (QIF) method proposed by Qu et al. (2000) is based on the generalized method of moments (GMM) applied to GEE. The QIF yields a substantial improvement in efficiency for the estimator of β when the working correlation is misspecified, and equal efficiency to the GEE when the working correlation is correct (Qu et al., 2000). In this paper we are interested in whether the QIF can improve on the GEE method when the FCCM assumption is violated. We show that the QIF with an exchangeable or AR(1) working correlation matrix cannot be consistent and asymptotically normal in this case, and that it may be less efficient than the GEE with an independence working correlation. Our simulation studies verify these results.
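
The diagonal-working-correlation case cited from Pan et al. (2000) has a very concrete form: with an identity link, GEE under an independence working correlation solves Σᵢ Xᵢ'(yᵢ − Xᵢβ) = 0, i.e. pooled OLS over all subject-time observations. A minimal simulated sketch (simulation design is illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, T, beta = 500, 4, 1.5
b = rng.normal(size=(n_sub, 1))        # subject-level random effect
x = rng.normal(size=(n_sub, T))        # time-varying stochastic covariate
y = beta * x + b + rng.normal(size=(n_sub, T))

# With an identity link, GEE under a diagonal (independence) working
# correlation solves sum_i X_i'(y_i - X_i beta) = 0 -- pooled OLS.
X = x.reshape(-1, 1)
beta_hat = float(np.linalg.lstsq(X, y.reshape(-1), rcond=None)[0][0])
```

Within-subject correlation (the shared `b`) affects standard errors, not the point estimate's unbiasedness; exchangeable or AR(1) working correlations would reweight observations across time, which is where the FCCM issue enters.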

Negative Exponential Disparity Based Robust Estimates of Ordered Means in Normal Models

  • Bhattacharya, Bhaskar; Sarkar, Sahadeb; Jeong, Dong-Bin
    • Communications for Statistical Applications and Methods, v.7 no.2, pp.371-383, 2000
  • Lindsay (1994) and Basu et al. (1997) show that another density-based distance, the negative exponential disparity (NED), is an excellent competitor to the Hellinger distance (HD) in generating an asymptotically fully efficient and robust estimator. Bhattacharya and Basu (1996) consider estimation of the locations of several normal populations when an order relation between them is known to be true. They empirically show that the robust HD-based weighted likelihood estimators compare favorably with the M-estimators based on Huber's ψ function, the Gastwirth estimator, and the trimmed mean estimator. In this paper we investigate the performance of the weighted likelihood estimator based on the NED as a robust alternative to that based on the HD. The NED-based estimator is found to be quite competitive in the settings considered by Bhattacharya and Basu.

Penalizing the Negative Exponential Disparity in Discrete Models

  • Sahadeb Sarkar; Song, Kijoung-Song; Jeong, Dong-Bin
    • Communications for Statistical Applications and Methods, v.5 no.2, pp.517-529, 1998
  • When the sample size is small, the robust minimum Hellinger distance (HD) estimator can have substantially poor relative efficiency at the true model. Similarly, approximating the exact null distributions of the ordinary Hellinger distance tests with the limiting chi-square distributions can be quite inappropriate in small samples. To overcome these problems, Harris and Basu (1994) and Basu et al. (1996) recommended using a modified HD called the penalized Hellinger distance (PHD). Lindsay (1994) and Basu et al. (1997) showed that another density-based distance, the negative exponential disparity (NED), is a major competitor to the Hellinger distance in producing an asymptotically fully efficient and robust estimator. In this paper we investigate the small-sample performance of the estimates and tests based on the NED and the penalized NED (PNED). Our results indicate that, in the settings considered here, the NED, unlike the HD, produces estimators that perform very well in small samples, and penalizing the NED does not help. In testing of hypotheses, however, the deviance test based on the PNED appears to achieve the best small-sample level compared to tests based on the NED, HD and PHD.
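
Minimum-disparity estimation in a discrete model can be sketched directly. Using one standard form of the NED, G(δ) = exp(−δ) − 1 + δ on the Pearson residual δ = d/f − 1, the disparity grows only linearly in δ, so a few wild observations get bounded weight; the Poisson setup and the grid minimizer below are illustrative, not the papers' exact experiments:

```python
import numpy as np
from collections import Counter
from math import exp, factorial

def ned(theta, data):
    """Negative exponential disparity between the empirical frequencies d
    and Poisson(theta), with G(delta) = exp(-delta) - 1 + delta on the
    Pearson residual delta = d/f - 1 (one standard form of the NED)."""
    n = len(data)
    counts = Counter(data)
    total = 0.0
    for k in range(max(data) + 20):      # cover the bulk of the support
        f = exp(-theta) * theta**k / factorial(k)
        delta = counts.get(k, 0) / n / f - 1.0
        total += (exp(-delta) - 1.0 + delta) * f
    return total

rng = np.random.default_rng(2)
data = list(rng.poisson(2.0, size=200)) + [15, 16, 17]   # gross outliers
grid = np.linspace(0.5, 6.0, 1101)
mnede = float(grid[np.argmin([ned(t, data) for t in grid])])
mle = float(np.mean(data))   # the Poisson MLE, pulled up by the outliers
```

Because an outlier cell contributes roughly d − f to the NED regardless of θ, the minimizer stays near the clean-data value while the sample mean is dragged upward.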

A Modification of the Shapiro-Wilk Test for Exponentiality Based on Censored Data (중도절단자료에 대한 수정된 SHAPIRO-WILK 지수 검정)

  • Kim, Nam-Hyun
    • The Korean Journal of Applied Statistics, v.21 no.2, pp.265-273, 2008
  • Kim (2001a) presented a modification of the Shapiro and Wilk (1972) test for exponentiality based on the ratio of two asymptotically efficient estimates of scale. In this paper we modify this test statistic for censored samples. We use the normalized spacings of the sample data, which Samanta and Schwarz (1988) used to extend the Shapiro and Wilk (1972) statistic to censored data. As a result, the modified statistics have the same null distribution as in the uncensored case, with a corresponding reduction in sample size. Through a simulation study it is found that the proposed statistic has higher power than the Samanta and Schwarz (1988) statistic, especially for alternatives with a coefficient of variation greater than or equal to 1.
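
The normalized spacings behind this construction are simple to compute: with order statistics X_(1) ≤ … ≤ X_(n) and X_(0) the origin, D_i = (n − i + 1)(X_(i) − X_(i−1)). Under an exponential model the D_i are i.i.d. exponential with the same scale, which is exactly what lets a censored sample be treated like an uncensored one of reduced size. A minimal sketch:

```python
import numpy as np

def normalized_spacings(x, origin=0.0):
    """D_i = (n - i + 1) * (X_(i) - X_(i-1)), with X_(0) = origin.
    Under an exponential model the D_i are i.i.d. exponential with
    the same scale as the data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    gaps = np.diff(np.concatenate(([origin], x)))
    return (n - np.arange(n)) * gaps

rng = np.random.default_rng(3)
d = normalized_spacings(rng.exponential(2.0, size=100000))
```

For a Type II censored sample one simply computes the spacings of the r observed order statistics and applies the uncensored test to them.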

The Shapiro-Wilk Type Test for Exponentiality Based on Progressively Type II Censored Data (전진 제 2종 중도절단자료에 대한 Shapiro-Wilk 형태의 지수검정)

  • Kim, Nam-Hyun
    • The Korean Journal of Applied Statistics, v.23 no.3, pp.487-495, 2010
  • This paper develops a goodness-of-fit statistic to test whether a progressively Type II censored sample comes from an exponential distribution with known origin. The test is based on normalized spacings and Stephens' (1978) modification of the Shapiro and Wilk (1972) test for exponentiality, where the modification covers the case of a known origin. We apply the same modification to Kim's (2001a) statistic, which is based on the ratio of two asymptotically efficient estimates of scale. The simulation results show that Kim's (2001a) statistic has higher power than Stephens' modified Shapiro and Wilk statistic in almost all cases.

Study on Estimating the Optimal Number-right Score in Two Equivalent Mathematics-test by Linear Score Equating (수학교과의 동형고사 문항에서 양호도 향상에 유효한 최적정답율 산정에 관한 연구)

  • 홍석강
    • The Mathematical Education, v.37 no.1, pp.1-13, 1998
  • In this paper we present an efficient way to enumerate the optimal number-right scores so as to adjust item difficulty and improve item discrimination. To estimate the optimal number-right scores in two equivalent mathematics tests by linear score equating, a measurement error model was applied to the true scores observed from a pair of equivalent tests assumed to measure the same trait. The model specification for the true scores, represented by a bivariate model, is a simple regression model for inferring the optimal number-right scores, and we further assume that the two simple regression lines of raw scores and true scores have mutually independent error models. We enumerated the difference between the mean value of x* and μ_x, and the difference between the mean value of y* and a + bμ_x, by inference from the two-error-variable regression model. Furthermore, to distinguish them from the original score points, the estimated number-right scores ŷ*, the estimated regression values of the true scores on the same coordinates, were shifted to center points composed of those difference values by the parallel score-moving procedure described above. We obtained the asymptotically normal distribution in Figure 5, representing the distribution of the optimal number-right scores, so that the optimal proportion of number-right scores for each item can be decided. Also, under the assumption that the equivalence of two tests is closely connected to the unidimensionality of a student's ability, we introduce a new definition of trait score to evaluate that ability on each item. This study has limitations in obtaining the real true scores and in analyzing data under the bivariate error model. Even with these limitations, we believe this study indicates that the optimal number-right scores can be estimated easily by this enumeration procedure.
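
The linear score equating step underlying the abstract has a standard closed form: map a form-X score onto the form-Y scale so that the equated scores match Y's mean and standard deviation. The sketch below shows that textbook transform only, not the paper's full measurement-error model for true scores:

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Classical linear score equating: y*(x) = m_y + (s_y / s_x) * (x - m_x),
    so the equated form-X scores match form Y's mean and SD. This is the
    textbook transform, not the paper's error-in-variables machinery."""
    mx, sx = np.mean(x_scores), np.std(x_scores, ddof=1)
    my, sy = np.mean(y_scores), np.std(y_scores, ddof=1)
    return lambda x: my + (sy / sx) * (np.asarray(x, dtype=float) - mx)

rng = np.random.default_rng(4)
x = rng.normal(60, 10, size=400)     # raw scores on form X (illustrative)
y = rng.normal(55, 12, size=400)     # raw scores on the equivalent form Y
eq = linear_equate(x, y)(x)          # form-X scores on the form-Y scale
```

By construction the equated scores reproduce Y's first two moments exactly; the paper's contribution is handling the measurement error in the observed scores before applying such a transform.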
