• Title/Summary/Keyword: statistical error

Search Results: 1,760

Error Rate for the Limiting Poisson-power Function Distribution

  • Joo-Hwan Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.3 no.1
    • /
    • pp.243-255
    • /
    • 1996
  • The number of neutron signals from a neutral particle beam (NPB) at the detector, in the absence of any errors, obeys a Poisson distribution. Under the two assumptions that the NPB scattering distribution and the aiming errors each follow a circular Gaussian distribution, the exact probability distribution of the signals becomes a Poisson-power function distribution. In this paper, we show that the error rate in simple hypothesis testing for the limiting Poisson-power function distribution is not zero. That is, the limit of $\alpha+\beta$ is zero as the Poisson parameter $\kappa \rightarrow \infty$, but this limit is not zero (i.e., $\rho_\ell > 0$) for the Poisson-power function distribution. We also give optimal decision algorithms for a specified error rate. (See the illustrative sketch below.)

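The following is a minimal numerical sketch, not a reproduction of the paper's result: it only illustrates the plain-Poisson benchmark that the abstract contrasts against, namely that the total error rate $\alpha+\beta$ of a simple threshold test vanishes as the Poisson parameter grows. The signal levels `k0`, `k1` and the midpoint threshold are hypothetical choices.

```python
# Plain-Poisson benchmark for a simple hypothesis test H0: mean=k0 vs H1: mean=k1.
# As both means grow, alpha + beta of the threshold test shrinks toward zero;
# the Poisson-power function case studied in the paper is the contrasting
# situation where this limit stays positive.
import numpy as np
from scipy.stats import poisson

def total_error_rate(k0, k1):
    """alpha + beta for the test 'reject H0 if X > c' with a midpoint threshold."""
    c = (k0 + k1) / 2.0                      # illustrative threshold choice
    alpha = poisson.sf(c, k0)                # P(X > c | H0)
    beta = poisson.cdf(c, k1)                # P(X <= c | H1)
    return alpha + beta

for scale in [1, 5, 25, 125]:
    k0, k1 = 10 * scale, 15 * scale          # hypothetical signal levels, growing together
    print(f"k0={k0:6d}  k1={k1:6d}  alpha+beta={total_error_rate(k0, k1):.3e}")
```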

Partially linear multivariate regression in the presence of measurement error

  • Yalaz, Secil;Tez, Mujgan
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.5
    • /
    • pp.511-521
    • /
    • 2020
  • In this paper, a partially linear multivariate model with measurement error in the explanatory variable of the nonparametric part and an m-dimensional response variable is considered. Using the uniform consistency results obtained for the estimator of the nonparametric part, we derive an estimator of the parametric part. The dependence of the convergence rates on the error distributions is examined, and the proposed estimator is shown to be asymptotically normal. In the main results, both ordinary smooth and super smooth error distributions are considered. Moreover, the derived estimators are applied to the economic behavior of consumers. Our method, which handles the contaminated data, is found to be more effective than the semiparametric method that ignores measurement errors. (See the illustrative sketch below.)
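
As a toy illustration of why measurement error in a covariate matters (this is not the authors' partially linear estimator), the simulation below shows the classical attenuation of a naive least-squares slope when the regressor is observed with additive noise; the sample size, true slope, and error variances are made up for the example.

```python
# Attenuation bias from classical measurement error in a single regressor.
import numpy as np

rng = np.random.default_rng(0)
n, true_beta = 5000, 2.0
x = rng.normal(size=n)                       # true covariate (unobserved in practice)
y = true_beta * x + rng.normal(size=n)       # response
w = x + rng.normal(scale=1.0, size=n)        # observed covariate with measurement error

naive = np.polyfit(w, y, 1)[0]               # slope from regressing y on the noisy w
attenuation = np.var(x) / (np.var(x) + 1.0)  # reliability ratio (error variance 1)
print(f"naive slope ~ {naive:.2f}, predicted attenuated slope ~ {true_beta * attenuation:.2f}")
```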

Testing for a Unit Root in an ARIMA(p,1,q) Signal Observed with Measurement Error

  • Lee, Jong-Hyup;Shin, Dong-Wan
    • Journal of the Korean Statistical Society
    • /
    • v.24 no.2
    • /
    • pp.481-493
    • /
    • 1995
  • An ARIMA signal observed with measurement error is shown to have another ARIMA representation with nonlinear restrictions on the parameters. For this model, the restricted Newton-Raphson estimator (RNRE) of the unit root is shown to have the same limiting distribution as the ordinary least squares estimator of the unit root in an AR(1) model tabulated by Dickey and Fuller (1979). The RNRE of the parameters of the ARIMA(p,1,q) process and unit root tests based on the RNRE are developed. (See the illustrative sketch below.)

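The sketch below is a rough illustration, not the paper's restricted Newton-Raphson estimator: it simulates a random-walk (ARIMA(0,1,0)) signal observed with additive measurement error and computes an ordinary Dickey-Fuller t-statistic by regressing the first difference on the lagged level. The noise scale and series length are arbitrary.

```python
# Ordinary Dickey-Fuller regression on a unit-root signal observed with noise.
import numpy as np

rng = np.random.default_rng(1)
n = 500
signal = np.cumsum(rng.normal(size=n))                 # random walk: unit-root signal
observed = signal + rng.normal(scale=0.5, size=n)      # additive measurement error

dy = np.diff(observed)                                 # delta y_t
y_lag = observed[:-1]                                  # y_{t-1}
X = np.column_stack([np.ones_like(y_lag), y_lag])      # constant + lagged level
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ coef
sigma2 = resid @ resid / (len(dy) - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"Dickey-Fuller t-statistic: {coef[1] / se:.2f} "
      f"(compare to DF critical values, e.g. about -2.86 at the 5% level)")
```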

Estimating the Number of Clusters using Hotelling's $T^2$

  • Choi, Kyung-Mee
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.2
    • /
    • pp.305-312
    • /
    • 2005
  • In cluster analysis, Hotelling's $T^2$ can be used to estimate the unknown number of clusters based on the idea of a multiple comparison procedure. In particular, its threshold is obtained according to the probability of committing a Type I error. Examples are used to compare Hotelling's $T^2$ with other classical location test statistics such as the sum-of-squared-error criterion and Wilks' $\Lambda$. Hierarchical clustering is used to reveal the underlying structure of the data. Related criteria are also reviewed in terms of both the between-cluster variance and the within-cluster variance. (See the illustrative sketch below.)
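
A compact illustration of the main ingredient, using the standard two-sample form of Hotelling's $T^2$ (the paper's full procedure for choosing the number of clusters is not reproduced): compare two candidate clusters and take the threshold from a chosen Type I error level via the F distribution. The cluster sizes, dimension, and mean shift are hypothetical.

```python
# Two-sample Hotelling's T^2 between two candidate clusters, with an F threshold.
import numpy as np
from scipy.stats import f

def hotelling_t2(a, b):
    n1, n2, p = len(a), len(b), a.shape[1]
    diff = a.mean(axis=0) - b.mean(axis=0)
    # pooled covariance matrix
    S = ((n1 - 1) * np.cov(a, rowvar=False) + (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2      # T^2 -> F(p, n1+n2-p-1)
    return t2, f_stat, p, n1 + n2 - p - 1

rng = np.random.default_rng(2)
a = rng.normal(loc=0.0, size=(30, 2))        # first cluster
b = rng.normal(loc=1.0, size=(40, 2))        # hypothetical second cluster, shifted mean
t2, f_stat, df1, df2 = hotelling_t2(a, b)
alpha = 0.05                                  # chosen Type I error level
print(f"T^2={t2:.2f}, F={f_stat:.2f}, threshold={f.ppf(1 - alpha, df1, df2):.2f}")
```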

Accuracy Measures of Empirical Bayes Estimator for Mean Rates

  • Jeong, Kwang-Mo
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.6
    • /
    • pp.845-852
    • /
    • 2010
  • Count outcomes commonly occur in disease mapping of mortality or disease rates. A Poisson distribution is usually assumed as a model for disease rates, in conjunction with a gamma prior. A small area typically refers to a small geographical area or demographic group for which very little information is available from sample surveys. In this situation, model-based estimation is very popular, in which auxiliary variables from various administrative sources are used. The empirical Bayes estimator under the Poisson-gamma model is considered together with its accuracy measures. An accuracy measure based on bootstrap samples adjusts for the underestimation incurred when the posterior variance is used as an estimator of the true mean squared error. We explain the suggested method using a practical dataset of hitters in baseball games. We also perform a Monte Carlo study to compare the accuracy measures of the mean squared error. (See the illustrative sketch below.)
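
The following compact sketch shows the Poisson-gamma empirical Bayes shrinkage idea with method-of-moments hyperparameters; the bootstrap accuracy measure proposed in the paper is not reproduced. The area counts and exposures are simulated, not the baseball data used in the paper.

```python
# Poisson-gamma empirical Bayes: shrink each crude small-area rate toward the overall mean.
import numpy as np

rng = np.random.default_rng(3)
true_rates = rng.gamma(shape=4.0, scale=0.5, size=20)     # hypothetical area-level rates
exposure = rng.integers(20, 200, size=20)                 # exposures (person-years, at-bats, ...)
counts = rng.poisson(true_rates * exposure)

crude = counts / exposure
m = crude.mean()
v = crude.var(ddof=1)
# method-of-moments gamma(a, b) prior: mean a/b = m, variance a/b^2 = v
b = m / v
a = m * b

# posterior mean for each area: (a + count) / (b + exposure),
# a compromise between the crude rate and the prior mean
eb = (a + counts) / (b + exposure)
print(np.round(np.column_stack([crude, eb]), 3))          # crude vs shrunken estimates
```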

Selection of a Predictive Coverage Growth Function

  • Park, Joong-Yang;Lee, Gye-Min
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.6
    • /
    • pp.909-916
    • /
    • 2010
  • A trend in software reliability engineering is to take the coverage growth behavior during testing into account. A coverage growth function that represents this behavior is an essential component of software reliability models. When multiple competing coverage growth functions are available, a criterion is needed to select the best one. This paper proposes a selection criterion based on the prediction error. The conditional coverage growth function is introduced for predicting future coverage growth. The sum of squared prediction errors is then defined and used to select the best coverage growth function. (See the illustrative sketch below.)
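
A schematic sketch of the selection idea under assumed candidate curves (the exponential and hyperbolic forms are illustrative choices, not necessarily the paper's): fit each candidate on an early portion of the coverage data, predict the remainder, and pick the candidate with the smaller sum of squared prediction errors.

```python
# Select a coverage growth function by out-of-sample sum of squared prediction errors.
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(t, a, b):
    return a * (1.0 - np.exp(-b * t))        # exponential coverage growth form

def hyp_growth(t, a, b):
    return a * t / (b + t)                   # hyperbolic coverage growth form

t = np.arange(1, 41, dtype=float)
coverage = 0.95 * (1 - np.exp(-0.12 * t)) + np.random.default_rng(4).normal(scale=0.01, size=t.size)

fit_t, pred_t = t[:25], t[25:]               # fit on early tests, predict the rest
best = None
for name, fn in [("exponential", exp_growth), ("hyperbolic", hyp_growth)]:
    params, _ = curve_fit(fn, fit_t, coverage[:25], p0=[1.0, 0.1], maxfev=10000)
    sspe = np.sum((coverage[25:] - fn(pred_t, *params)) ** 2)   # sum of squared prediction errors
    print(f"{name:12s} SSPE = {sspe:.5f}")
    if best is None or sspe < best[1]:
        best = (name, sspe)
print("selected:", best[0])
```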

Linear regression under log-concave and Gaussian scale mixture errors: comparative study

  • Kim, Sunyul;Seo, Byungtae
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.6
    • /
    • pp.633-645
    • /
    • 2018
  • Gaussian error distributions are a common choice in traditional regression models fitted by the maximum likelihood (ML) method. However, this distributional assumption is often questionable, especially when the error distribution is skewed or has heavy tails. In both cases, the ML method under normality can break down or lose efficiency. In this paper, we consider log-concave and Gaussian scale mixture distributions for the errors. For log-concave errors, we propose a smoothed maximum likelihood estimator for stable and faster computation. Based on this, we perform comparative simulation studies of the performance of the coefficient estimates under normal, Gaussian scale mixture, and log-concave errors. In addition, we present real data analyses using the stack loss plant data and the Korean Labor and Income Panel data. (See the illustrative sketch below.)
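
As a small sketch of the Gaussian scale mixture error family compared in the paper (the smoothed log-concave MLE itself is not reproduced), the EM fit below estimates a linear regression whose errors follow a two-component contaminated normal; the contamination rate and scales are invented for the example.

```python
# EM for linear regression with two-component Gaussian scale mixture (contaminated normal) errors.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
outlier = rng.random(n) < 0.1                                  # 10% heavy-tailed contamination
e = np.where(outlier, rng.normal(scale=5.0, size=n), rng.normal(scale=1.0, size=n))
y = X @ beta_true + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]                    # start from OLS
s1, s2, pi = 1.0, 5.0, 0.9
for _ in range(200):                                           # EM iterations
    r = y - X @ beta
    d1 = pi * norm.pdf(r, scale=s1)
    d2 = (1 - pi) * norm.pdf(r, scale=s2)
    g = d1 / (d1 + d2)                                         # responsibility of the "clean" component
    w = g / s1**2 + (1 - g) / s2**2                            # precision weights for weighted LS
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    r = y - X @ beta
    pi = g.mean()
    s1 = np.sqrt(np.sum(g * r**2) / np.sum(g))
    s2 = np.sqrt(np.sum((1 - g) * r**2) / np.sum(1 - g))

print("OLS slope:", np.linalg.lstsq(X, y, rcond=None)[0][1])
print("scale-mixture ML slope:", beta[1], " mixing weight:", round(pi, 2))
```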

Penalized maximum likelihood estimation with symmetric log-concave errors and LASSO penalty

  • Park, Seo-Young;Kim, Sunyul;Seo, Byungtae
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.6
    • /
    • pp.641-653
    • /
    • 2022
  • Penalized least squares methods are important tools for simultaneously selecting variables and estimating parameters in linear regression. Penalized maximum likelihood can be used for the same purpose, assuming that the error distribution falls within a certain parametric family of distributions. However, the use of a particular parametric family can suffer from a misspecification problem that undermines estimation accuracy. To give the error distribution sufficient flexibility, we propose using a symmetric log-concave error distribution with the LASSO penalty. A feasible algorithm to estimate both the nonparametric and parametric components of the proposed model is provided. Numerical studies are also presented, showing that the proposed method produces more efficient estimators than some existing methods while achieving similar variable selection performance. (See the illustrative sketch below.)
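
For orientation, the sketch below implements plain LASSO by coordinate descent with the ordinary squared-error loss; the paper's proposal keeps the same $L_1$ penalty but replaces the squared loss with a symmetric log-concave log-likelihood, which is not reproduced here. The data dimensions and penalty level are arbitrary.

```python
# LASSO by coordinate descent with soft thresholding (standardized predictors, centered response).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]             # partial residual excluding feature j
            beta[j] = soft_threshold(X[:, j] @ r_j / n, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)                                 # standardize columns
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                               # sparse true coefficients
y = X @ beta_true + rng.normal(size=n)
y = y - y.mean()                                               # center the response (no intercept)
print(np.round(lasso_cd(X, y, lam=0.1), 2))
```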

A Study on Methods of Quality Check for Digital Basemaps using Statistical Methods for the Quality Control (통계적 품질관리기법을 도입한 수치지도의 검수방법에 관한 연구)

  • 김병국;서현덕
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.17 no.1
    • /
    • pp.79-86
    • /
    • 1999
  • In this study, we investigated methods of quality checking for digital basemaps and proposed more effective ones. We applied statistical quality control techniques to the quality check of digital basemaps. We proposed two-stage complete sampling and two-stage cluster sampling methods to improve the current statistical quality-check method (one-stage complete sampling). Using simulated data for all delivered digital basemaps, we estimated the error rate and the number of omitted objects, together with their variances, and constructed confidence intervals for both quantities. (See the illustrative sketch below.)

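A schematic sketch in the spirit of the proposed two-stage quality check (not the paper's exact estimators): sample map sheets as clusters, inspect a subsample of objects in each, estimate the overall error rate, and attach a normal-approximation confidence interval from the between-sheet variability. All sheet and object counts are hypothetical.

```python
# Two-stage cluster-sampling estimate of a digital-basemap error rate with a 95% CI.
import numpy as np

rng = np.random.default_rng(7)
n_sheets, sampled_sheets, inspected = 200, 20, 50

true_sheet_rates = rng.beta(2, 60, size=n_sheets)          # unknown per-sheet error rates
chosen = rng.choice(n_sheets, size=sampled_sheets, replace=False)
sheet_rate_hat = np.array([
    rng.binomial(inspected, true_sheet_rates[s]) / inspected for s in chosen
])                                                          # stage 2: inspect objects per sheet

p_hat = sheet_rate_hat.mean()                               # estimated overall error rate
se = sheet_rate_hat.std(ddof=1) / np.sqrt(sampled_sheets)   # between-sheet SE (fpc ignored)
print(f"estimated error rate: {p_hat:.3%}  95% CI: "
      f"({p_hat - 1.96 * se:.3%}, {p_hat + 1.96 * se:.3%})")
```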

Statistical Methods in Non-Inferiority Trials - A Focus on US FDA Guidelines -

  • Kang, Seung-Ho;Wang, So-Young
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.4
    • /
    • pp.575-587
    • /
    • 2012
  • The effect of a new treatment is ideally proven through a comparison of the new treatment with placebo; however, the number of non-inferiority trials tends to grow in proportion to the number of active controls. In a non-inferiority trial, a new treatment is approved by showing that it is not inferior to an active control; however, both additional assumptions and historical trials are needed to show, through the comparison of the new treatment with the active control in the non-inferiority trial, that the new treatment is more efficacious than a putative placebo. There are two different methods of using the historical data: the frequentist principle method and the meta-analytic method. This paper discusses these statistical methods and the different Type I error rates obtained under each. (See the illustrative sketch below.)
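
A bare-bones calculation for two proportions, offered only as a generic illustration of a non-inferiority decision rule (it is not taken from the FDA guidance or the paper's comparison of methods): conclude non-inferiority when the lower confidence bound for the difference (new minus active control) lies above $-\delta$. The margin and counts are hypothetical.

```python
# Generic non-inferiority check for two proportions via a Wald lower confidence bound.
import numpy as np
from scipy.stats import norm

def noninferiority_lower_bound(x_new, n_new, x_ctrl, n_ctrl, alpha=0.025):
    p_new, p_ctrl = x_new / n_new, x_ctrl / n_ctrl
    se = np.sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
    return (p_new - p_ctrl) - norm.ppf(1 - alpha) * se

margin = 0.10                                               # hypothetical non-inferiority margin
lb = noninferiority_lower_bound(x_new=168, n_new=200, x_ctrl=170, n_ctrl=200)
print(f"lower bound {lb:.3f}; non-inferior at margin {margin}: {lb > -margin}")
```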