• Title/Summary/Keyword: asymptotic bias


Improvement of Boundary Bias in Nonparametric Regression via Twicing Technique

  • Jo, Jae-Keun
    • Communications for Statistical Applications and Methods, v.4 no.2, pp.445-452, 1997
  • In this paper, the twicing technique for reducing asymptotic boundary bias in nonparametric regression is considered. Asymptotic mean squared errors of the nonparametric regression estimators obtained by twicing the Nadaraya-Watson and local linear smoothers are derived at the boundary region. The asymptotic biases of the resulting estimators are of order $h^2$ and $h^4$, respectively. (A minimal illustrative sketch of the twicing idea follows this entry.)

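The twicing idea summarized above can be illustrated briefly: fit a kernel smoother, smooth the residuals of that fit, and add the correction back. The sketch below is only a minimal illustration under assumed choices (Gaussian kernel, fixed bandwidth, and the hypothetical helper names `nw_smooth` and `nw_twicing`); it is not the authors' implementation and omits the boundary analysis that is the paper's actual subject.

```python
# Minimal sketch of twicing applied to a Nadaraya-Watson smoother.
# Assumptions: Gaussian kernel, fixed bandwidth h; not the authors' code.
import numpy as np

def nw_smooth(x_eval, x, y, h):
    """Nadaraya-Watson estimate of E[Y|X=x] at each point of x_eval."""
    u = (x_eval[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u**2)                 # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)       # normalize over the data points
    return w @ y

def nw_twicing(x_eval, x, y, h):
    """Twicing: smooth the data, then smooth the residuals and add them back."""
    fit_at_data = nw_smooth(x, x, y, h)     # first-pass fit at the design points
    residuals = y - fit_at_data             # what the first pass missed
    return nw_smooth(x_eval, x, y, h) + nw_smooth(x_eval, x, residuals, h)

# toy usage
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
grid = np.linspace(0, 1, 50)
plain = nw_smooth(grid, x, y, 0.1)
twiced = nw_twicing(grid, x, y, 0.1)
```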

On Copas' Local Likelihood Density Estimator

  • Kim, W.C.; Park, B.U.; Kim, Y.G.
    • Journal of the Korean Statistical Society, v.30 no.1, pp.77-87, 2001
  • Some asymptotic results on the local likelihood density estimator of Copas (1995) are derived when the locally parametric model has several parameters. It turns out that the estimator has the same asymptotic mean squared error as that of Hjort and Jones (1996).


On Estimating the Distributional Parameter and the Complete Sample Size from Incomplete Samples

  • Yeo, Sung-chil
    • Journal of the Korean Statistical Society, v.20 no.2, pp.118-138, 1991
  • Given a random sample of size $N$ (unknown) with density $f(x \mid \theta)$, suppose that only the $n$ observations which lie outside a region $R$ are recorded. On the basis of these $n$ observations, the Bayes estimators of $\theta$ and $N$ are considered, and their asymptotic expansions are developed to compare their second-order asymptotic properties with those of the maximum likelihood estimators and the Bayes modal estimators. Corrections to the bias and median bias of these estimators are made. An example is given to illustrate the results obtained. (The likelihood implied by this setup is written out after this entry.)

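For readers of the abstract above, the likelihood implied by its setup ($N$ i.i.d. draws from $f(x \mid \theta)$, with only the $n$ observations falling outside $R$ recorded) can be written out. The display below is our reconstruction under that reading, not a formula quoted from the paper.

```latex
% Hedged reconstruction of the likelihood for the region-truncated sample:
% the N - n unrecorded observations fell inside R; notation is ours.
L(\theta, N \mid x_1,\dots,x_n)
  = \binom{N}{n}\,
    \bigl[P_\theta(X \in R)\bigr]^{\,N-n}
    \prod_{i=1}^{n} f(x_i \mid \theta),
  \qquad x_i \notin R .
```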

Optimal designs for small Poisson regression experiments using second-order asymptotic

  • Mansour, S. Mehr; Niaparast, M.
    • Communications for Statistical Applications and Methods, v.26 no.6, pp.527-538, 2019
  • This paper considers the problem of obtaining an optimal design for the Poisson regression model when the sample size is small. The Poisson regression model is widely used for the analysis of count data, and asymptotic theory provides the basis for inference on its parameters. For small experiments, however, asymptotic approximations, such as unbiasedness, may not be valid. We therefore employ the second-order expansion of the bias of the maximum likelihood estimator (MLE) and derive the mean squared error (MSE) of the MLE to measure the quality of an estimator. We then define the DM-optimality criterion, which is based on a function of this MSE, and apply it to obtain locally optimal designs for small experiments. The effect of sample size on the obtained designs is shown, and locally DM-optimal designs are obtained for some special cases of the model. (A baseline design-criterion sketch follows this entry.)
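
As background to the abstract above, the sketch below evaluates the standard first-order information matrix and a plain local D-type criterion for a Poisson log-linear model; it does not implement the paper's DM criterion, which is based on a small-sample MSE including the second-order bias term. The model form, design points, weights, and local parameter guess are all illustrative assumptions.

```python
# Hedged sketch: first-order information matrix and a local D-type criterion
# for a Poisson log-linear model.  The paper's MSE-based DM criterion is NOT
# reproduced here; this is only the standard baseline it modifies.
import numpy as np

def poisson_information(design_points, weights, beta):
    """M(xi, beta) = sum_i w_i * lambda_i * x_i x_i^T with lambda_i = exp(x_i' beta)."""
    X = np.column_stack([np.ones(len(design_points)), design_points])  # intercept + slope
    lam = np.exp(X @ beta)
    return (X * (weights * lam)[:, None]).T @ X

def log_d_criterion(design_points, weights, beta):
    """log-determinant of the information matrix; larger is better for a D-type criterion."""
    sign, logdet = np.linalg.slogdet(poisson_information(design_points, weights, beta))
    return logdet if sign > 0 else -np.inf

# toy usage: two-point design with equal weights at a guessed local parameter value
beta0 = np.array([1.0, -1.0])   # hypothetical local guess for the parameters
print(log_d_criterion(np.array([0.0, 1.5]), np.array([0.5, 0.5]), beta0))
```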

On a Transformation Technique for Nonparametric Regression

  • Kim, Woochul;Park, Byeong U.
    • Journal of the Korean Statistical Society, v.25 no.2, pp.217-233, 1996
  • This paper gives a rigorous proof of an asymptotic result on the bias and variance of a transformation-based nonparametric regression estimator proposed by Park et al. (1995).


On the Bias of Bootstrap Model Selection Criteria

  • Kee-Won Lee; Songyong Sim
    • Journal of the Korean Statistical Society, v.25 no.2, pp.195-203, 1996
  • A bootstrap method is used to correct the apparent downward bias of a naive plug-in bootstrap model selection criterion, and the corrected criterion is shown to enjoy a high degree of accuracy. A comparison of the bootstrap method with the asymptotic method is made through an illustrative example. (A generic bootstrap bias-correction sketch follows this entry.)

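The bootstrap bias-correction recipe referred to above can be sketched generically: estimate the bias of a plug-in statistic as the average bootstrap replicate minus the original value, then subtract that estimate. The snippet below uses a deliberately biased plug-in variance as a stand-in target; the paper's actual model selection criterion is not reproduced.

```python
# Hedged sketch of generic bootstrap bias correction of a plug-in statistic.
import numpy as np

def bootstrap_bias_corrected(data, statistic, n_boot=2000, rng=None):
    rng = np.random.default_rng(rng)
    theta_hat = statistic(data)
    boot = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    bias_hat = boot.mean() - theta_hat      # bootstrap estimate of the bias
    return theta_hat - bias_hat             # bias-corrected value

# toy usage: the plug-in variance (divide by n) is downward biased
rng = np.random.default_rng(1)
x = rng.normal(size=30)
plug_in = lambda d: d.var()                 # biased estimator of the variance
print(plug_in(x), bootstrap_bias_corrected(x, plug_in))
```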

Test for Discontinuities in Nonparametric Regression

  • Park, Dong-Ryeon
    • Communications for Statistical Applications and Methods, v.15 no.5, pp.709-717, 2008
  • The difference of two one-sided kernel estimators is usually used to detect the location of discontinuity points of a regression function. A large absolute value of this statistic suggests a discontinuity, so the difference of the two one-sided kernel estimators can serve as a test statistic for the null hypothesis of a smooth regression function. The problem, however, is that only the asymptotic distribution of the test statistic under $H_0$ is known, and a test that relies solely on this asymptotic distribution to determine the critical points can hardly be expected to perform well. In this paper, we show that if the bias of the test statistic is adjusted properly, the asymptotic rules hold even in small-sample situations. (A minimal sketch of the test statistic follows this entry.)
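
A minimal version of the test statistic described above (the difference of two one-sided kernel estimators at a candidate point) is sketched below, assuming a Gaussian kernel and an arbitrary bandwidth; the bias adjustment that is the paper's contribution is not included.

```python
# Hedged sketch: difference of two one-sided Nadaraya-Watson estimators at x0.
import numpy as np

def one_sided_nw(x0, x, y, h, side):
    """Kernel average of y using only the data on one side of x0 ('left' or 'right')."""
    mask = x < x0 if side == "left" else x >= x0
    u = (x[mask] - x0) / h
    w = np.exp(-0.5 * u**2)                  # Gaussian kernel weights
    return np.sum(w * y[mask]) / np.sum(w)

def jump_statistic(x0, x, y, h):
    """Large |right - left| suggests a discontinuity of the regression function at x0."""
    return one_sided_nw(x0, x, y, h, "right") - one_sided_nw(x0, x, y, h, "left")

# toy usage: regression function with a jump of size 1 at x = 0.5
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(x) + (x > 0.5) + rng.normal(scale=0.1, size=x.size)
print(jump_statistic(0.5, x, y, h=0.05))
```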

Minimum Distance Estimation Based On The Kernels For U-Statistics

  • Park, Hyo-Il
    • Journal of the Korean Statistical Society, v.27 no.1, pp.113-132, 1998
  • In this paper, we consider minimum distance (M.D.) estimation based on kernels for U-statistics. We use a Cramér-von Mises type distance function which measures the discrepancy between the U-empirical distribution function (d.f.) and the modeled d.f. of the kernel. In the distance function, we allow various integrating measures, which can be finite, $\sigma$-finite or discrete. We then derive the asymptotic normality and study the qualitative robustness of the M.D. estimates. (An illustrative sketch follows this entry.)

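To make the construction above concrete, the sketch below uses one assumed configuration: the kernel $h(x_1, x_2) = (x_1 + x_2)/2$, a $N(\theta, 1)$ model (so the kernel value has a $N(\theta, 1/2)$ distribution function), and a discrete integrating measure placing unit mass on each observed kernel value. None of these choices comes from the paper.

```python
# Hedged sketch of minimum distance estimation based on a U-statistic kernel.
import numpy as np
from itertools import combinations
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def u_empirical_values(x):
    """All pairwise kernel values h(X_i, X_j) = (X_i + X_j)/2, i < j."""
    return np.array([(a + b) / 2.0 for a, b in combinations(x, 2)])

def cvm_distance(theta, kernel_vals):
    """Cramer-von Mises type distance between the U-empirical d.f. and the model d.f."""
    t = np.sort(kernel_vals)
    H_n = np.arange(1, t.size + 1) / t.size              # U-empirical d.f. at the t's
    G_theta = norm.cdf(t, loc=theta, scale=np.sqrt(0.5))  # modeled d.f. of the kernel
    return np.sum((H_n - G_theta) ** 2)                   # discrete integrating measure

# toy usage
rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.0, size=40)
kv = u_empirical_values(x)
fit = minimize_scalar(cvm_distance, args=(kv,), bounds=(-10, 10), method="bounded")
print(fit.x)    # minimum distance estimate of theta
```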

Interval Estimations for Reliability in Stress-Strength Model by Bootstrap Method

  • Lee, In-Suk; Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society, v.6 no.1, pp.73-83, 1995
  • We construct approximate bootstrap confidence intervals for the reliability $R$ when the distributions of strength and stress are both normal, proposing percentile, bias-corrected (BC), bias-corrected and accelerated (BCa), and percentile-t intervals for $R$. The accuracy of the proposed bootstrap confidence intervals is compared with that of the classical confidence interval based on the asymptotic normal distribution through Monte Carlo simulation. The results indicate that the bootstrap confidence intervals work better than the classical confidence interval; in particular, the BC and BCa intervals work well for small samples and/or large values of the true reliability. (A percentile-interval sketch follows this entry.)

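Of the intervals listed above, the plain percentile interval is the simplest to sketch; the version below assumes normal strength and stress samples and uses the plug-in $R = \Phi\big((\bar{x} - \bar{y})/\sqrt{s_x^2 + s_y^2}\big)$. The BC, BCa, and percentile-t intervals that the paper recommends require additional correction terms not shown here.

```python
# Hedged sketch: percentile bootstrap interval for R = P(stress < strength)
# under normal stress and strength; BC/BCa/percentile-t corrections omitted.
import numpy as np
from scipy.stats import norm

def reliability(strength, stress):
    """Plug-in estimate of R = P(Y < X) for normal X (strength) and Y (stress)."""
    d = strength.mean() - stress.mean()
    s = np.sqrt(strength.var(ddof=1) + stress.var(ddof=1))
    return norm.cdf(d / s)

def percentile_ci(strength, stress, alpha=0.05, n_boot=2000, rng=None):
    rng = np.random.default_rng(rng)
    reps = [reliability(rng.choice(strength, strength.size, replace=True),
                        rng.choice(stress, stress.size, replace=True))
            for _ in range(n_boot)]
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# toy usage
rng = np.random.default_rng(4)
x = rng.normal(5.0, 1.0, size=25)   # strength sample
y = rng.normal(3.5, 1.2, size=25)   # stress sample
print(reliability(x, y), percentile_ci(x, y))
```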

A Framework to Understand the Asymptotic Properties of Kriging and Splines

  • Furrer, Eva M.; Nychka, Douglas W.
    • Journal of the Korean Statistical Society, v.36 no.1, pp.57-76, 2007
  • Kriging is a nonparametric regression method used in geostatistics for estimating curves and surfaces from spatial data. It may come as a surprise that the Kriging estimator, normally derived as the best linear unbiased estimator, is also the solution of a particular variational problem. Thus, Kriging estimators can also be interpreted as generalized smoothing splines where the roughness penalty is determined by the covariance function of a spatial process. We build on the early work by Silverman (1982, 1984) and the analyses by Cox (1983, 1984), Messer (1991), Messer and Goldstein (1993) and others, and develop an equivalent kernel interpretation of geostatistical estimators. Given this connection, we show how a given covariance function influences the bias and variance of the Kriging estimate as well as the mean squared prediction error. Some specific asymptotic results are given in one dimension for Matérn covariances that have cubic smoothing splines as their limit. (A linear-smoother sketch of the Kriging predictor follows this entry.)
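
The linear-smoother form of the Kriging estimator discussed above can be made explicit with a small sketch: with covariance matrix $K$ and cross-covariance vector $k(x^*)$, the prediction is $k(x^*)^\top (K + \tau^2 I)^{-1} y$, and the rows of $k(x^*)^\top (K + \tau^2 I)^{-1}$ are the smoother weights that the equivalent kernel approximates. The Matérn ($\nu = 3/2$) covariance, range, nugget, and variance values below are arbitrary assumptions, not those of the paper.

```python
# Hedged sketch: simple Kriging with a Matern (nu = 3/2) covariance, written as
# an explicit linear smoother so the equivalent-kernel weights can be inspected.
import numpy as np

def matern32(d, rho=0.2, sigma2=1.0):
    """Matern covariance with smoothness 3/2, range rho, variance sigma2."""
    a = np.sqrt(3.0) * np.abs(d) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

def kriging_weights(x_eval, x, nugget=0.05):
    """Rows are the linear-smoother (equivalent kernel) weights at each x_eval."""
    K = matern32(x[:, None] - x[None, :]) + nugget * np.eye(x.size)
    k_star = matern32(x_eval[:, None] - x[None, :])
    return k_star @ np.linalg.inv(K)

# toy usage
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(4 * x) + rng.normal(scale=0.2, size=x.size)
W = kriging_weights(np.linspace(0, 1, 5), x)
y_hat = W @ y                  # Kriging estimate on the 5-point grid
print(y_hat)
```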