• Title/Summary/Keyword: Kernel estimator


A note on nonparametric density deconvolution by weighted kernel estimators

  • Lee, Sungho
    • Journal of the Korean Data and Information Science Society / v.25 no.4 / pp.951-959 / 2014
  • Recently, Hazelton and Turlach (2009) proposed a weighted kernel density estimator for the deconvolution problem. In the case of Gaussian kernels and Gaussian measurement error, they argued that the weighted kernel density estimator is competitive with the classical deconvolution kernel estimator. In this paper we consider weighted kernel density estimators when the sample observations are contaminated by double exponentially distributed errors. The performance of the weighted kernel density estimators is compared with that of the classical deconvolution kernel estimator and of a kernel density estimator based on the support vector regression method by means of a simulation study. The weighted density estimator with the Gaussian kernel shows numerical instability in the practical implementation of the optimization function. The weighted density estimates with the double exponential kernel show patterns very similar to those of the classical kernel density estimates in the simulations, but their shape is less satisfactory than that of the classical kernel density estimator with the Gaussian kernel.
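
As background for the comparison above, the classical deconvolution kernel estimator has a convenient closed form when the measurement error is double exponential (Laplace): since the Laplace(0, b) characteristic function is 1/(1 + b²t²), the effective kernel becomes K(u) - (b²/h²)K''(u). A minimal sketch with a Gaussian kernel K (the sample size, bandwidth, and error scale are illustrative choices, not values from the paper):

```python
import math
import random

def effective_kernel(u, b, h):
    # K(u) - (b^2/h^2) K''(u) for a standard Gaussian kernel K and
    # Laplace(0, b) measurement error (characteristic function 1/(1 + b^2 t^2)).
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return phi * (1.0 + (b * b) / (h * h) * (1.0 - u * u))

def deconv_kde(ys, x, h, b):
    # Classical deconvolution kernel density estimate of f_X at x from
    # contaminated observations Y_j = X_j + e_j, e_j ~ Laplace(0, b).
    return sum(effective_kernel((x - y) / h, b, h) for y in ys) / (len(ys) * h)

# Illustration: X ~ N(0, 1) observed with Laplace(0, 0.4) error.
random.seed(1)
b = 0.4
ys = [random.gauss(0.0, 1.0) + random.choice([-1.0, 1.0]) * random.expovariate(1.0 / b)
      for _ in range(2000)]
h = 0.45  # illustrative bandwidth
grid = [i / 10.0 for i in range(-40, 41)]
fhat = [deconv_kde(ys, x, h, b) for x in grid]
```

The effective kernel still integrates to one, so the estimate does as well, although it can dip below zero locally, a well-known feature of deconvolution kernel estimators.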

A Support Vector Method for the Deconvolution Problem

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods / v.17 no.3 / pp.451-457 / 2010
  • This paper considers the problem of nonparametric deconvolution density estimation when the sample observations are contaminated by double exponentially distributed errors. Three different deconvolution density estimators are introduced: a weighted kernel density estimator, a kernel density estimator based on the support vector regression method in an RKHS, and a classical kernel density estimator. The performance of these deconvolution density estimators is compared by means of a simulation study.

A note on SVM estimators in RKHS for the deconvolution problem

  • Lee, Sungho
    • Communications for Statistical Applications and Methods / v.23 no.1 / pp.71-83 / 2016
  • In this paper we discuss a deconvolution density estimator obtained using support vector machines (SVM) and Tikhonov's regularization method for solving ill-posed problems in a reproducing kernel Hilbert space (RKHS). A remarkable property of the SVM is that it leads to sparse solutions, but the support vector deconvolution density estimator does not preserve sparsity as well as we expected. Thus, in Section 3, we propose another support vector deconvolution estimator (method II) which leads to a very sparse solution. The performance of the deconvolution density estimators based on the support vector method is compared with that of the classical kernel deconvolution density estimator for the important cases of Gaussian and Laplacian measurement error by means of a simulation study. In the case of Gaussian error, the proposed support vector deconvolution estimator shows the same performance as the classical kernel deconvolution density estimator.

A Study on Kernel Type Discontinuity Point Estimations

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society / v.14 no.4 / pp.929-937 / 2003
  • Kernel-type estimators of a discontinuity point at an unknown location in a regression function or its derivatives have been developed. It is known that the discontinuity point estimator based on the Gasser-Müller regression estimator with a one-sided kernel function that has the value zero at the point 0 exhibits poor asymptotic behavior. Further, the asymptotic variance of the Gasser-Müller regression estimator in the random design case is 1.5 times larger than that in the corresponding fixed design case, while the two are identical for the local polynomial regression estimator. Even when the Gasser-Müller regression estimator is modified to use a one-sided kernel function with a non-zero value at the point 0, computer simulations show that this phenomenon also appears in discontinuity point estimation.
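
The basic idea behind one-sided kernel discontinuity point estimators can be sketched as follows: the jump location is estimated by the maximizer of the difference between right- and left-sided local means. This is a generic sketch with a one-sided triangular kernel that is non-zero at 0; it is not the Gasser-Müller construction, and the jump size, noise level, and bandwidth are invented for illustration:

```python
import random

def one_sided_mean(xs, ys, t, h, side):
    # Nadaraya-Watson mean over (t, t + h] (side = +1) or [t - h, t)
    # (side = -1) with a one-sided triangular kernel that is non-zero at 0.
    num = den = 0.0
    for x, y in zip(xs, ys):
        u = side * (x - t) / h
        if 0.0 < u <= 1.0:
            w = 1.0 - u
            num += w * y
            den += w
    return num / den if den > 0.0 else 0.0

def jump_location(xs, ys, h, grid):
    # Estimate the discontinuity point as the maximizer of the absolute
    # difference between right- and left-sided local means.
    return max(grid, key=lambda t: abs(one_sided_mean(xs, ys, t, h, +1)
                                       - one_sided_mean(xs, ys, t, h, -1)))

# Illustration: m(x) = 1{x > 0.5} plus noise.
random.seed(7)
xs = [random.random() for _ in range(400)]
ys = [(1.0 if x > 0.5 else 0.0) + random.gauss(0.0, 0.1) for x in xs]
grid = [0.1 + 0.01 * k for k in range(81)]  # candidate points in [0.1, 0.9]
tau_hat = jump_location(xs, ys, 0.1, grid)
```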


Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / v.17 no.1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, this method has been a popular topic in the field of robust estimation. In the process of defining the distance, a kernel density estimator has been widely used as the density estimator. In this article, however, we show that a combination of a kernel density estimator and an empirical density can result in a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.
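
A minimum Hellinger distance location estimate of the kind discussed above can be sketched by minimizing the Hellinger distance between a kernel density estimate and the model density over a grid of candidate locations. This is a plain KDE version, without the paper's empirical-density combination; the model N(theta, 1), sample size, and bandwidth are illustrative:

```python
import math
import random

def kde(xs, x, h):
    # Gaussian kernel density estimate at x.
    c = 1.0 / (len(xs) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

def hellinger_sq(xs, theta, h, grid, dx):
    # Squared Hellinger distance between the KDE and the N(theta, 1)
    # density, approximated on an integration grid.
    s = 0.0
    for g in grid:
        f = kde(xs, g, h)
        m = math.exp(-0.5 * (g - theta) ** 2) / math.sqrt(2.0 * math.pi)
        s += (math.sqrt(f) - math.sqrt(m)) ** 2 * dx
    return 0.5 * s

# Illustration: data from N(2, 1); grid search for the MHD location estimate.
random.seed(3)
xs = [random.gauss(2.0, 1.0) for _ in range(300)]
dx = 0.1
grid = [-2.0 + dx * k for k in range(81)]      # integration grid on [-2, 6]
thetas = [0.05 * k for k in range(81)]         # candidate locations in [0, 4]
theta_hat = min(thetas, key=lambda t: hellinger_sq(xs, t, 0.5, grid, dx))
```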


Small Sample Study of Kernel Hazard Ratio Estimator

  • Choi, Myong-Hui
    • Journal of the Korean Data and Information Science Society / v.5 no.2 / pp.59-74 / 1994
  • The hazard ratio may be useful as a descriptive measure for comparing the hazard experience of a treatment group with that of a control group. In this paper, we propose a kernel estimator of the hazard ratio with censored survival data. The uniform consistency and asymptotic normality of the proposed estimator are proved using a counting process approach. In order to assess the performance of the proposed estimator, we compare the kernel estimator with the Cox estimator and the generalized rank estimators of the hazard ratio in terms of MSE by Monte Carlo simulation.
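
A kernel hazard estimator of the type used above can be sketched, for the uncensored case, by smoothing the Nelson-Aalen increments. The paper's estimator handles censoring; here the rates, evaluation point, and bandwidth are invented, and quantile-spaced event times are used so the example is deterministic:

```python
import math

def kernel_hazard(times, t, h):
    # Kernel-smoothed Nelson-Aalen hazard estimate for uncensored data:
    # lambda_hat(t) = sum_i K((t - T_(i)) / h) / (h * Y_i), where Y_i is
    # the number at risk just before the i-th ordered event time.
    ts = sorted(times)
    n = len(ts)
    est = 0.0
    for i, ti in enumerate(ts):
        u = (t - ti) / h
        if abs(u) <= 1.0:
            k = 0.75 * (1.0 - u * u)  # Epanechnikov kernel
            est += k / (h * (n - i))
    return est

# Deterministic illustration: quantile-spaced exponential event times with
# rates 2 (treatment) and 1 (control); the true hazard ratio is 2.
n = 500
t1 = [-math.log(1.0 - (i + 0.5) / n) / 2.0 for i in range(n)]
t2 = [-math.log(1.0 - (i + 0.5) / n) / 1.0 for i in range(n)]
ratio = kernel_hazard(t1, 0.5, 0.2) / kernel_hazard(t2, 0.5, 0.2)
```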


A Note on Deconvolution Estimators when Measurement Errors are Normal

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods / v.19 no.4 / pp.517-526 / 2012
  • In this paper a support vector method is proposed for use when the sample observations are contaminated by a normally distributed measurement error. The performance of deconvolution density estimators based on the support vector method is explored and compared with that of kernel density estimators by means of a simulation study. An interesting result was that, for the estimation of a kurtotic density, the support vector deconvolution estimator with a Gaussian kernel showed better performance than the classical deconvolution kernel estimator.

A study on bandwidth selection based on ASE for nonparametric density estimators

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society / v.29 no.3 / pp.307-313 / 2000
  • Suppose we have a set of data X1, ···, Xn and employ a kernel density estimator to estimate the marginal density of X. In this article the bandwidth selection problem for the kernel density estimator is examined closely. In particular, the Kullback-Leibler method (a bandwidth selection method based on average squared error (ASE)) is considered.
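
The Kullback-Leibler bandwidth selector mentioned above is usually implemented as likelihood cross-validation: choose the bandwidth that maximizes the leave-one-out log-likelihood of the kernel estimate. A minimal sketch with a Gaussian kernel (the data, bandwidth grid, and seed are illustrative):

```python
import math
import random

def loo_loglik(xs, h):
    # Leave-one-out log-likelihood of the Gaussian-kernel density estimate;
    # maximizing it over h is the Kullback-Leibler (likelihood) CV rule.
    n = len(xs)
    c = 1.0 / ((n - 1) * h * math.sqrt(2.0 * math.pi))
    total = 0.0
    for i, xi in enumerate(xs):
        s = sum(math.exp(-0.5 * ((xi - xj) / h) ** 2)
                for j, xj in enumerate(xs) if j != i)
        total += math.log(max(c * s, 1e-300))  # guard against log(0)
    return total

random.seed(11)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
hs = [0.05 * k for k in range(2, 21)]  # candidate bandwidths 0.10 .. 1.00
best_h = max(hs, key=lambda h: loo_loglik(xs, h))
```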


On Bias Reduction in Kernel Density Estimation

  • Kim, Choongrak; Park, Byeong-Uk; Kim, Woochul
    • Proceedings of the Korean Statistical Society Conference / 2000.11a / pp.65-73 / 2000
  • The kernel estimator is very popular in nonparametric density estimation. In this paper we propose an estimator which reduces the bias to the fourth power of the bandwidth, while the variance of the estimator increases by at most a moderate constant factor. The estimator is fully nonparametric in the sense that it is a convex combination of three kernel estimators, and it has good numerical properties.
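
The paper's convex combination of three kernel estimators is not reproduced here, but the same fourth-order bias idea can be illustrated with the standard two-bandwidth extrapolation: combining estimates at bandwidths h and ch with weights c²/(c²-1) and -1/(c²-1) cancels the O(h²) bias term, leaving O(h⁴) bias. Note this combination has a negative weight, unlike the convex one proposed in the paper:

```python
import math
import random

def kde(xs, x, h):
    # Gaussian kernel density estimate at x.
    c = 1.0 / (len(xs) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

def kde_extrap(xs, x, h, c=2.0):
    # Richardson-type extrapolation: the O(h^2) bias terms of the two
    # estimates cancel, leaving O(h^4) bias; the weights sum to one.
    return (c * c * kde(xs, x, h) - kde(xs, x, c * h)) / (c * c - 1.0)

random.seed(5)
xs = [random.gauss(0.0, 1.0) for _ in range(500)]
grid = [i / 10.0 for i in range(-40, 41)]
fhat = [kde_extrap(xs, x, 0.5) for x in grid]
```

Because the weights sum to one and each component integrates to one, the combined estimate still integrates to one, though like other bias-reduced estimators it can be slightly negative in the tails.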


A Local Linear Kernel Estimator for Sparse Multinomial Data

  • Baek, Jangsun
    • Journal of the Korean Statistical Society / v.27 no.4 / pp.515-529 / 1998
  • Burman (1987) and Hall and Titterington (1987) studied kernel smoothing for sparse multinomial data in detail. Both of their estimators of the cell probabilities are sparse-asymptotically consistent under some restrictive conditions on the true cell probabilities. Dong and Simonoff (1994) adopted boundary kernels to relax the restrictive conditions. We propose a local linear kernel estimator, popular in nonparametric regression, to estimate the cell probabilities. No boundary adjustment is necessary for this estimator since it adapts automatically to estimation at the boundaries. It is shown that our estimator attains the optimal rate of convergence in mean sum of squared error under sparseness. Some simulation results and a real data application are presented to illustrate the performance of the estimator.
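
A local linear estimator for cell probabilities of the kind proposed above can be sketched by fitting, at each cell, a kernel-weighted least-squares line to the observed proportions and taking its intercept. This is an illustrative version with an Epanechnikov kernel over cell midpoints, not the paper's exact formulation:

```python
def local_linear_probs(counts, h):
    # Local linear smoothing of the observed cell proportions over cell
    # midpoints; the intercept of a kernel-weighted least-squares line at
    # each cell is the estimate, and no boundary correction is needed.
    n = float(sum(counts))
    k = len(counts)
    p = [c / n for c in counts]
    xs = [(j + 0.5) / k for j in range(k)]  # cell midpoints on [0, 1]
    est = []
    for i, x in enumerate(xs):
        w = [max(0.0, 0.75 * (1.0 - ((xj - x) / h) ** 2)) for xj in xs]
        s0 = sum(w)
        s1 = sum(wj * (xj - x) for wj, xj in zip(w, xs))
        s2 = sum(wj * (xj - x) ** 2 for wj, xj in zip(w, xs))
        den = s0 * s2 - s1 * s1
        num = sum(wj * (s2 - (xj - x) * s1) * pj
                  for wj, xj, pj in zip(w, xs, p))
        est.append(num / den if den > 1e-12 else p[i])
    return est

# Illustration: smoothed probabilities for a sparse-looking count vector.
probs = local_linear_probs([0, 2, 5, 9, 12, 9, 5, 2, 0, 1], 0.25)
```

A useful sanity check of the local linear weights is that they reproduce constants exactly: a uniform count vector yields exactly uniform estimated probabilities, including at the boundary cells.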
