Title/Summary/Keyword: empirical and kernel estimation


Minimum Distance Estimation Based On The Kernels For U-Statistics

  • Park, Hyo-Il
    • Journal of the Korean Statistical Society / v.27 no.1 / pp.113-132 / 1998
  • In this paper, we consider minimum distance (M.D.) estimation based on kernels for U-statistics. We use a Cramér-von Mises type distance function which measures the discrepancy between the U-empirical distribution function (d.f.) and the modeled d.f. of the kernel. In the distance function, we allow various integrating measures, which can be finite, $\sigma$-finite or discrete. We then derive the asymptotic normality and study the qualitative robustness of the M.D. estimates.
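
As a rough illustration of the approach described above, here is a minimal sketch (my own construction, not the paper's procedure) that estimates a location parameter by minimizing a Cramér-von Mises type distance between the U-empirical d.f. of the Walsh-average kernel h(x, y) = (x + y)/2 and its model d.f. under a N(theta, 1) assumption, with a discrete integrating measure placed on the observed kernel values.

```python
# A minimal sketch (not the paper's implementation): minimum Cramer-von Mises
# distance estimation of a location parameter, using the Walsh-average kernel
# h(x, y) = (x + y) / 2 of the one-sample U-statistic and a discrete
# integrating measure on the observed kernel values.
import numpy as np
from itertools import combinations
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=60)        # assumed N(theta, 1) sample

# Walsh averages: values of the kernel over all pairs i < j.
walsh = np.array([(xi + xj) / 2.0 for xi, xj in combinations(x, 2)])
walsh.sort()

# U-empirical d.f. of the kernel, evaluated at the observed kernel values.
n_pairs = walsh.size
H_n = np.arange(1, n_pairs + 1) / n_pairs

def cvm_distance(theta):
    # Under N(theta, 1), (X1 + X2)/2 ~ N(theta, 1/2): the modeled d.f. of the kernel.
    H_model = norm.cdf(walsh, loc=theta, scale=np.sqrt(0.5))
    return np.mean((H_n - H_model) ** 2)            # discrete integrating measure

fit = minimize_scalar(cvm_distance, bounds=(x.min(), x.max()), method="bounded")
print("M.D. estimate of theta:", fit.x)
```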


Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / v.17 no.1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, this method has been a popular topic in the field of robust estimation. In the process of defining the distance, a kernel density estimator has been widely used as the density estimator. In this article, however, we show that a combination of a kernel density estimator and an empirical density can result in a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.
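
For orientation, the sketch below is a minimal illustration of plain minimum Hellinger distance estimation of a location parameter with a kernel density estimator, assuming a N(theta, 1) model; the paper's bias-reducing combination with an empirical density is not reproduced.

```python
# A minimal sketch (assumptions: N(theta, 1) model, Gaussian KDE as the density
# estimator): minimum Hellinger distance estimation of a location parameter.
# The paper's bias-reducing mixture with an empirical density is not shown.
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)

kde = gaussian_kde(x)                               # nonparametric density estimate
grid = np.linspace(x.min() - 3, x.max() + 3, 2001)
dx = grid[1] - grid[0]
f_hat = kde(grid)

def hellinger_sq(theta):
    # H^2(f_hat, f_theta) = 1 - integral of sqrt(f_hat * f_theta) dx
    f_model = norm.pdf(grid, loc=theta, scale=1.0)
    return 1.0 - np.sum(np.sqrt(f_hat * f_model)) * dx

fit = minimize_scalar(hellinger_sq, bounds=(x.min(), x.max()), method="bounded")
print("minimum Hellinger distance estimate:", fit.x)
```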


Identification of the associations between genes and quantitative traits using entropy-based kernel density estimation

  • Yee, Jaeyong;Park, Taesung;Park, Mira
    • Genomics & Informatics / v.20 no.2 / pp.17.1-17.11 / 2022
  • Genetic associations have been quantified using a number of statistical measures. Entropy-based mutual information may be one of the more direct ways of estimating the association, in the sense that it does not depend on the parametrization. For this purpose, both the entropy and the conditional entropy of the phenotype distribution should be obtained. Quantitative traits, however, do not usually allow an exact evaluation of entropy. Estimating the entropy requires a probability density function, which can be approximated by kernel density estimation. We investigated the proper sequence of procedures for combining kernel density estimation with entropy estimation from a probability density function in order to calculate mutual information. Genotypes and their interactions were constructed to set the conditions for conditional entropy. Extensive simulation data created using three types of generating functions were analyzed using two different kernels, two types of multifactor dimensionality reduction, and another probability density approximation method called m-spacing. The statistical power in terms of correct detection rates was compared. Using kernels was found to be most useful when the trait distributions were more complex than simple normal or gamma distributions. A full-scale genomic dataset was explored to identify associations using the 2-h oral glucose tolerance test results and γ-glutamyl transpeptidase levels as phenotypes. Clearly distinguishable single-nucleotide polymorphisms (SNPs) and interacting SNP pairs associated with these phenotypes were found and listed with empirical p-values.
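
To make the entropy-based procedure concrete, here is a minimal sketch (my construction with a hypothetical single-SNP genotype variable, not the paper's pipeline) of estimating I(Y; G) = H(Y) - sum_g P(g) H(Y | G = g) with Gaussian kernel density estimates.

```python
# A minimal sketch (not the paper's pipeline): estimating the mutual
# information I(Y; G) between a quantitative trait Y and a genotype G by
# combining kernel density estimation with entropy estimation,
# I(Y; G) = H(Y) - sum_g P(g) * H(Y | G = g).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
g = rng.integers(0, 3, size=600)                    # genotype coded 0/1/2
y = rng.normal(loc=0.4 * g, scale=1.0)              # trait shifted by genotype

grid = np.linspace(y.min() - 3, y.max() + 3, 2001)
dx = grid[1] - grid[0]

def entropy_kde(sample):
    # Differential entropy H = -integral of f log f dx, with f from a Gaussian KDE.
    f = gaussian_kde(sample)(grid)
    f = np.clip(f, 1e-300, None)                    # avoid log(0)
    return -np.sum(f * np.log(f)) * dx

h_y = entropy_kde(y)
h_y_given_g = sum((g == k).mean() * entropy_kde(y[g == k]) for k in np.unique(g))
print("estimated mutual information:", h_y - h_y_given_g)
```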

On the Equality of Two Distributions Based on Nonparametric Kernel Density Estimator

  • Kim, Dae-Hak;Oh, Kwang-Sik
    • Journal of the Korean Data and Information Science Society / v.14 no.2 / pp.247-255 / 2003
  • Hypothesis testing for the equality of two distributions was considered. Nonparametric kernel density estimates were used to test the equality of the distributions, with a cross-validatory choice of bandwidth in the kernel density estimation. The sampling distribution of the considered test statistic was obtained by a resampling method, the bootstrap. Small-sample Monte Carlo simulations were conducted, and the empirical power of the considered tests was compared for a variety of distributions.
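
A minimal sketch of the general idea, not the paper's procedure: an integrated-squared-difference statistic between two kernel density estimates, with a bootstrap null distribution obtained from the pooled sample. The cross-validatory bandwidth choice of the paper is replaced here by scipy's default rule.

```python
# A minimal sketch (not the paper's code): testing H0: F = G with a kernel
# density based statistic, T = integral of (f_hat - g_hat)^2 dx, and a
# bootstrap null distribution obtained by resampling from the pooled sample.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=80)
y = rng.normal(0.5, 1.0, size=80)

grid = np.linspace(-5, 6, 1101)
dx = grid[1] - grid[0]

def isd(a, b):
    # Integrated squared difference of the two kernel density estimates.
    return np.sum((gaussian_kde(a)(grid) - gaussian_kde(b)(grid)) ** 2) * dx

t_obs = isd(x, y)
pooled = np.concatenate([x, y])
t_boot = []
for _ in range(499):                                # bootstrap replications
    resample = rng.choice(pooled, size=pooled.size, replace=True)
    t_boot.append(isd(resample[:x.size], resample[x.size:]))

p_value = (np.sum(np.array(t_boot) >= t_obs) + 1) / (len(t_boot) + 1)
print("bootstrap p-value:", p_value)
```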


Empirical variogram for achieving the best valid variogram

  • Mahdi, Esam;Abuzaid, Ali H.;Atta, Abdu M.A.
    • Communications for Statistical Applications and Methods / v.27 no.5 / pp.547-568 / 2020
  • Modeling the statistical autocorrelations in spatial data is often achieved through the estimation of variograms, where the selection of an appropriate valid variogram model, especially for small samples, is crucial for achieving precise spatial prediction results from kriging interpolations. To estimate such a variogram, one traditionally starts by computing the empirical variogram (the traditional Matheron, robust Cressie-Hawkins, or kernel-based nonparametric approaches). In this article, we conduct numerical studies comparing the performance of these empirical variograms. In most situations, the nonparametric variable nearest-neighbor (VNN) empirical variogram showed better performance than its competitors (Matheron, Cressie-Hawkins, and Nadaraya-Watson). The analysis of the spatial groundwater dataset used in this article suggests that the wave variogram model, with a hole-effect structure, fitted to the empirical VNN variogram is the most appropriate choice. This selected variogram is used with the ordinary kriging model to produce a predicted pollution map of the nitrate concentrations in the groundwater dataset.
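
For reference, the classical Matheron empirical variogram can be computed as in the sketch below (synthetic data; the robust Cressie-Hawkins and kernel-based VNN variants compared in the paper are not reproduced).

```python
# A minimal sketch of the classical Matheron empirical variogram,
# gamma_hat(h) = (1 / (2 |N(h)|)) * sum over pairs in lag bin h of (z_i - z_j)^2,
# for 2-D spatial data.
import numpy as np

rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(150, 2))         # spatial locations
z = np.sin(coords[:, 0] / 20.0) + rng.normal(0, 0.3, size=150)  # observed field

# All pairwise distances and squared differences (each pair counted once).
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
sq = (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)
dist, sq = dist[iu], sq[iu]

bins = np.linspace(0, 60, 13)                       # lag bins up to distance 60
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dist >= lo) & (dist < hi)
    if mask.any():
        gamma = 0.5 * sq[mask].mean()
        print(f"lag [{lo:5.1f}, {hi:5.1f}): gamma_hat = {gamma:.3f}")
```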

Optimal bandwidth in nonparametric classification between two univariate densities

  • Hall, Peter;Kang, Kee-Hoon
    • Proceedings of the Korean Statistical Society Conference / 2002.05a / pp.1-5 / 2002
  • We consider the problem of optimal bandwidth choice for nonparametric classification based on kernel density estimators, where the problem of interest is distinguishing between two univariate distributions. When the densities intersect at a single point, the optimal bandwidth choice depends on the curvatures of the densities at that point. The problems of empirical bandwidth selection and of classifying data in the tails of a distribution are also addressed.
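
The sketch below illustrates the classification rule and a crude empirical bandwidth selection on synthetic data; it is my construction under assumed settings, not the authors' method (which ties the optimal bandwidth to the density curvatures at the crossing point).

```python
# A minimal sketch (assumption: a shared bandwidth factor for both groups):
# kernel-density classification between two univariate samples, with an
# empirical bandwidth chosen by minimizing the training misclassification rate
# over a grid. Note that with scipy's gaussian_kde a scalar bw_method acts as
# a multiplicative bandwidth factor, not the bandwidth itself.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x1 = rng.normal(0.0, 1.0, size=120)                 # population 1
x2 = rng.normal(1.5, 1.2, size=120)                 # population 2

def error_rate(factor):
    f1 = gaussian_kde(x1, bw_method=factor)
    f2 = gaussian_kde(x2, bw_method=factor)
    # Classify to population 1 wherever f1_hat >= f2_hat.
    err1 = np.mean(f1(x1) < f2(x1))                 # pop-1 points misclassified
    err2 = np.mean(f2(x2) <= f1(x2))                # pop-2 points misclassified
    return 0.5 * (err1 + err2)

factors = np.linspace(0.1, 2.0, 39)
errors = [error_rate(h) for h in factors]
best = factors[int(np.argmin(errors))]
print("selected bandwidth factor:", best, "training error:", min(errors))
```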


Estimation of P(X > Y) when X and Y are dependent random variables using different bivariate sampling schemes

  • Samawi, Hani M.;Helu, Amal;Rochani, Haresh D.;Yin, Jingjing;Linder, Daniel
    • Communications for Statistical Applications and Methods / v.23 no.5 / pp.385-397 / 2016
  • Stress-strength models have been intensively investigated in the literature with regard to estimating the reliability ${\theta}$ = P(X > Y) using parametric and nonparametric approaches under different sampling schemes when X and Y are independent random variables. In this paper, we consider the problem of estimating ${\theta}$ when (X, Y) are dependent random variables with a bivariate underlying distribution. The empirical and kernel estimates of ${\theta}$ = P(X > Y), based on bivariate ranked set sampling (BVRSS), are considered when (X, Y) are paired dependent continuous random variables. The estimators obtained are compared to their counterparts based on bivariate simple random sampling (BVSRS) via the bias and mean square error (MSE). We demonstrate that the suggested estimators based on BVRSS are more efficient than those based on BVSRS. A simulation study is conducted to gain insight into the performance of the proposed estimators, and a real data example is provided to illustrate the process.
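
As a simple illustration under bivariate simple random sampling only (the BVRSS scheme of the paper is not reproduced), the sketch below computes the empirical estimate of theta = P(X > Y) and a kernel-smoothed version based on the differences D = X - Y.

```python
# A minimal sketch (simple random sampling of pairs only): empirical and
# kernel-smoothed estimates of theta = P(X > Y) for dependent (X, Y).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
cov = [[1.0, 0.6], [0.6, 1.0]]                      # correlated components
xy = rng.multivariate_normal([0.5, 0.0], cov, size=300)
x, y = xy[:, 0], xy[:, 1]

# Empirical estimate: proportion of pairs with X > Y.
theta_emp = np.mean(x > y)

# Kernel estimate: smooth the indicator with a Gaussian kernel c.d.f. applied
# to the differences D = X - Y, i.e. the mean of Phi((x_i - y_i) / h).
d = x - y
h = 1.06 * d.std(ddof=1) * d.size ** (-1 / 5)       # rule-of-thumb bandwidth
theta_ker = np.mean(norm.cdf(d / h))
print("empirical:", theta_emp, "kernel:", theta_ker)
```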

Efficiency of Aggregate Data in Non-linear Regression

  • Huh, Jib
    • Communications for Statistical Applications and Methods / v.8 no.2 / pp.327-336 / 2001
  • This work concerns estimating a non-linear regression function using aggregate data. In much empirical research, data are aggregated for various reasons before statistical analysis. In a traditional parametric approach, a linear estimation of the non-linear function with aggregate data can result in unstable parameter estimators; a more serious consequence is the bias in the estimation of the non-linear function. The approach we employ is kernel regression smoothing. We describe the conditions under which aggregate data can be used to estimate the regression function efficiently. Numerical examples illustrate our findings.
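
For context, the sketch below applies Nadaraya-Watson kernel regression smoothing to aggregate (group-mean) data from a synthetic non-linear relationship; the paper's efficiency conditions are not addressed.

```python
# A minimal sketch (my construction): Nadaraya-Watson kernel regression
# smoothing applied to aggregate data, here simple group means of a non-linear
# relationship.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, size=500))
y = np.sin(x) + rng.normal(0, 0.3, size=500)        # non-linear regression

# Aggregate the individual data into 25 group means before analysis.
groups = np.array_split(np.arange(500), 25)
xg = np.array([x[idx].mean() for idx in groups])
yg = np.array([y[idx].mean() for idx in groups])

def nw(x0, xs, ys, h):
    # Nadaraya-Watson estimator: kernel-weighted average of the responses.
    w = norm.pdf((x0 - xs) / h)
    return np.sum(w * ys) / np.sum(w)

grid = np.linspace(0.5, 9.5, 10)
for x0 in grid:
    print(f"m_hat({x0:.1f}) = {nw(x0, xg, yg, h=0.8):6.3f}   true sin = {np.sin(x0):6.3f}")
```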


A Study on Properties of Crude Oil Based Derivative Linked Security (유가 연계 파생결합증권의 특성에 대한 연구)

  • Sohn, Kyoung-Woo;Chung, Ji-Yeong
    • Asia-Pacific Journal of Business / v.11 no.3 / pp.243-260 / 2020
  • Purpose - This paper aims to investigate the properties of crude oil based derivative linked securities (DLS), focusing on the step-down type, for a comprehensive understanding of their risk. Design/methodology/approach - Kernel estimation is conducted to characterize the statistical features of the oil price process. We simulate oil price paths based on the kernel estimation results and derive the probabilities of hitting the barrier and of early redemption. Findings - The amount of issuance of crude oil based DLS is relatively low when base prices are below $40, while it is high when base prices are around $60 or $100. This is not consistent with the kernel estimation results, which show that oil futures prices tend to revert toward $46.14 and that the mean-reverting speed is faster when the oil price is lower. The analysis based on simulated oil price paths reveals that the probability of early redemption is below 50% for DLS with high base prices, and the ratio of the probability of early redemption to the probability of hitting the barrier is remarkably low compared to the case of DLS with low base prices, as the chance of early redemption is deferred. Research implications or Originality - The empirical results imply that the level of the base price is a crucial risk factor for DLS; thus, introducing a time-varying knock-in barrier, which is similar to adjusting the base price, merits consideration to enhance protection for DLS investors.
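
The sketch below is an illustrative Monte Carlo in the spirit of the design: a mean-reverting price process simulated around the $46.14 level quoted in the abstract, with assumed reversion speed, volatility, knock-in barrier (50% of base) and first early-redemption trigger (90% of base at month 6); none of these settings other than the $46.14 level come from the paper itself.

```python
# A minimal, illustrative sketch: simulating mean-reverting oil price paths and
# estimating the probability of hitting a knock-in barrier versus early
# redemption for a step-down DLS. All parameters except the long-run level
# 46.14 (taken from the abstract) are assumed values.
import numpy as np

rng = np.random.default_rng(8)
mu, kappa, sigma = 46.14, 1.2, 12.0                 # level / speed / vol (assumed)
dt, months = 1.0 / 12.0, 36
n_paths = 20_000

def simulate(s0):
    s = np.full(n_paths, float(s0))
    path_min = s.copy()
    s_at_6m = None
    for m in range(1, months + 1):
        # Euler step of a mean-reverting (OU-type) price process.
        s = s + kappa * (mu - s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        path_min = np.minimum(path_min, s)
        if m == 6:
            s_at_6m = s.copy()
    return s_at_6m, path_min

for base in (40.0, 60.0, 100.0):                    # base-price levels from the abstract
    s6, lows = simulate(base)
    p_redeem = np.mean(s6 >= 0.90 * base)           # first early-redemption chance (assumed trigger)
    p_knockin = np.mean(lows <= 0.50 * base)        # ever hit the knock-in barrier (assumed level)
    print(f"base {base:6.1f}: P(redeem@6m) = {p_redeem:.3f}, P(knock-in) = {p_knockin:.3f}")
```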

Probability Density Function of the Tidal Residuals in the Korean Coast (한반도 연안 조위편차의 확률밀도함수)

  • Cho, Hong-Yeon;Kang, Ju-Whan
    • Journal of Korean Society of Coastal and Ocean Engineers / v.24 no.1 / pp.1-9 / 2012
  • Tidal residuals are becoming an important factor for coastal safety and defense under the influence of climate change. Together with variations in typhoon intensity, they are among the most important factors in determining the reference sea level used to check the safety and performance of coastal structures. The probability density function (pdf) of tidal residuals along the Korean coast has non-ignorable skewness and high kurtosis, which strongly restricts the use of a normal pdf as an approximation to the pdf of tidal residuals. In this study, a pdf of the tidal residuals estimated with a kernel function is suggested as a more reliable and accurate alternative to the normal pdf. The suggested pdf shows good agreement with the empirical cumulative distribution function and histogram, and it also gives more accurate estimates of the extreme values than the results based on the normal-pdf assumption.
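
As a minimal illustration with synthetic skewed data standing in for tidal residuals, the sketch below compares a Gaussian-kernel pdf estimate with a fitted normal pdf on an upper-tail exceedance probability, where the two assumptions diverge.

```python
# A minimal sketch (synthetic skewed data standing in for tidal residuals): a
# Gaussian-kernel pdf estimate compared with a fitted normal pdf, evaluated on
# an upper-tail exceedance probability.
import numpy as np
from scipy.stats import gaussian_kde, norm, skewnorm

rng = np.random.default_rng(9)
resid = skewnorm.rvs(a=4, loc=-20, scale=30, size=2000, random_state=rng)  # cm

kde = gaussian_kde(resid)                           # kernel-based pdf
mu, sd = norm.fit(resid)                            # normal-pdf assumption

grid = np.linspace(resid.min() - 20, resid.max() + 20, 4001)
dx = grid[1] - grid[0]
level = np.quantile(resid, 0.99)                    # an "extreme" threshold

p_kde = np.sum(kde(grid)[grid > level]) * dx        # tail area under the kernel pdf
p_norm = 1.0 - norm.cdf(level, loc=mu, scale=sd)    # tail area under the normal pdf
print(f"P(residual > {level:.1f} cm): kernel = {p_kde:.4f}, normal = {p_norm:.4f}")
```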