• Title/Summary/Keyword: Statistical efficiency

On Convex Combination of Local Constant Regression

  • Mun, Jung-Won; Kim, Choong-Rak
    • Communications for Statistical Applications and Methods, v.13 no.2, pp.379-387, 2006
  • Local polynomial regression is widely used because of good properties such as adaptation to various types of designs, the absence of boundary effects, and minimax efficiency. Choi and Hall (1998) proposed an estimator of the regression function using a convex combination idea; they showed that a convex combination of three local linear estimators produces an estimator with the same order of bias as a local cubic smoother. In this paper we suggest another estimator of the regression function based on a convex combination of five local constant estimates. It turns out that this estimator also has the same order of bias as a local cubic smoother. (An illustrative sketch of the convex-combination idea follows below.)
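
A minimal Python sketch of the convex-combination idea, assuming a Gaussian kernel and equal (purely hypothetical) weights over five bandwidths; the bias-cancelling weights derived in the paper are not reproduced here.

```python
import numpy as np

def local_constant(x0, x, y, h):
    """Nadaraya-Watson (local constant) estimate of m(x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def convex_combination(x0, x, y, bandwidths, weights):
    """Convex combination of local constant estimates computed at several bandwidths;
    the weights must be nonnegative and sum to one (equal weights are illustrative)."""
    estimates = np.array([local_constant(x0, x, y, h) for h in bandwidths])
    return np.dot(weights, estimates)

# toy usage on simulated data
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
print(convex_combination(0.5, x, y,
                         bandwidths=[0.05, 0.08, 0.10, 0.12, 0.15],
                         weights=[0.2] * 5))
```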

Partially Balanced Resolution IV' Designs in a 2^m-Factorial

  • Paik, U.B.
    • Journal of the Korean Statistical Society, v.11 no.1, pp.1-11, 1982
  • Srivastava and Anderson (1970) illustrated a method of obtaining Balanced (but not orthogonal) Resolution $IV^*$ designs starting from a BIB design. The incidence matrix of a BIB design with parameters $(v, b, r, k, \lambda)$ is utilized to obtain Balanced Resolution $IV^*$ designs with m factors and n = 2b runs, where $m \leq v$. In this paper, the same idea is extended to PBIB designs to obtain Partially Balanced Resolution $IV^*$ designs. In the designs obtained here the variances are balanced and the covariances are partially balanced with respect to the main effects; a proof of this property of Partially Balanced Resolution $IV^*$ designs is given. The efficiency of Partially Balanced Resolution $IV^*$ designs is also considered, and examples of such designs are included. (The incidence-matrix construction is sketched below.)
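
A hedged Python sketch of the incidence-matrix construction referred to above: the b columns of an incidence matrix and their complements are taken as the n = 2b runs of a two-level design. The Fano-plane BIB design used as input is an illustrative choice; the paper's PBIB-based designs and their balance properties are not reproduced.

```python
import numpy as np

def foldover_design(N):
    """Given a v x b incidence matrix N (rows = factors, columns = blocks), use its
    b columns and their complements as the 2b runs of a two-level design coded +1/-1,
    in the spirit of the Srivastava-Anderson construction described above."""
    runs = N.T                                 # b runs, one per block, entries in {0, 1}
    design01 = np.vstack([runs, 1 - runs])     # add the complemented (fold-over) runs
    return 2 * design01 - 1                    # recode 0/1 as -1/+1

# incidence matrix of the BIB design with v = b = 7, r = k = 3, lambda = 1 (Fano plane)
N = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 1, 0, 1],
              [1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [1, 0, 1, 0, 0, 0, 1]])
D = foldover_design(N)
print(D.shape)   # (14, 7): n = 2b = 14 runs for m = v = 7 factors
```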

A NOTE ON PROTECTION OF PRIVACY IN RANDOMIZED RESPONSE DEVICES

  • Saha, Amitava
    • Journal of the Korean Statistical Society, v.34 no.4, pp.297-309, 2005
  • We consider the 'efficiency versus privacy-protection' problem for several well-known randomized response (RR) devices used to estimate the proportion of people bearing a stigmatizing characteristic in a community. The RR literature on respondent privacy protection discusses only response-specific jeopardy measures. We propose a measure of jeopardy that is independent of the RR offered by the interviewee and recommend using it as a technical characteristic of the RR device. To ensure better cooperation from interviewees, this new measure, which depends only on the design parameters of the RR device, may be disclosed to respondents before they produce their randomized responses. (Warner's classical device and its response-specific measures are sketched below for contrast.)
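
For background, the sketch below implements Warner's classical RR device and the response-specific revealing probabilities P(A | yes) and P(A | no), i.e., the kind of response-specific jeopardy measure discussed above; it is not the design-based measure the note proposes.

```python
import numpy as np

def warner_estimate(yes_prop, p):
    """Unbiased estimate of the sensitive proportion pi under Warner's RR device,
    where each respondent answers about A with probability p and about the complement
    of A with probability 1 - p (requires p != 0.5)."""
    return (yes_prop - (1 - p)) / (2 * p - 1)

def response_specific_jeopardy(pi, p):
    """Classical response-specific revealing probabilities P(A | yes) and P(A | no)."""
    p_yes = p * pi + (1 - p) * (1 - pi)
    return p * pi / p_yes, (1 - p) * pi / (1 - p_yes)

# toy usage: n = 500 respondents, true pi = 0.3, design parameter p = 0.7
rng = np.random.default_rng(1)
pi_true, p, n = 0.3, 0.7, 500
in_A = rng.random(n) < pi_true              # true (hidden) group membership
ask_about_A = rng.random(n) < p             # outcome of the randomization device
yes = np.where(ask_about_A, in_A, ~in_A)    # truthful randomized responses
print(warner_estimate(yes.mean(), p), response_specific_jeopardy(pi_true, p))
```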

A Comparison Study of the Test for Right Censored and Grouped Data

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods, v.22 no.4, pp.313-320, 2015
  • In this research, we compare the efficiency of two test procedures proposed by Prentice and Gloeckler (1978) and Park and Hong (2009) for grouped data with possibly right-censored observations. Both test statistics were derived using the likelihood ratio principle, but under different semi-parametric models. We review the two statistics and their asymptotic normality, and obtain empirical powers through a simulation study. The simulation study considers two types of models: the location-translation model and the scale model. We discuss some interesting features related to grouped data and obtain null distribution functions with a re-sampling method. Finally, we indicate topics for future research. (A simulation skeleton in this spirit is sketched below.)
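
A simulation skeleton, under assumed data-generating models, showing how empirical power can be estimated for grouped two-sample data with right censoring; the naive chi-square test on grouped counts is only a placeholder, not the Prentice-Gloeckler or Park-Hong statistic.

```python
import numpy as np
from scipy import stats

def grouped_censored_sample(n, shift=0.0, scale=1.0, cut_points=(1, 2, 3),
                            cens_time=3.5, rng=None):
    """Grouped survival data: latent times are shifted/scaled exponentials, each
    observation is recorded only by the interval (defined by cut_points) it falls in,
    and times beyond cens_time are right censored into the last bin."""
    rng = rng or np.random.default_rng()
    t = shift + scale * rng.exponential(1.0, n)
    return np.digitize(np.minimum(t, cens_time), cut_points)

def empirical_power(test, n_sim=2000, alpha=0.05, **alt):
    """Empirical power of `test` (a function returning a p-value) under an alternative."""
    rng = np.random.default_rng(2)
    rejections = 0
    for _ in range(n_sim):
        x = grouped_censored_sample(100, rng=rng)          # null-model sample
        y = grouped_censored_sample(100, rng=rng, **alt)   # alternative-model sample
        rejections += test(x, y) < alpha
    return rejections / n_sim

def naive_chi2_test(x, y):
    """Placeholder test: chi-square on the 2 x (number of bins) table of grouped counts."""
    k = max(x.max(), y.max()) + 1
    table = np.vstack([np.bincount(x, minlength=k), np.bincount(y, minlength=k)])
    return stats.chi2_contingency(table)[1]

print(empirical_power(naive_chi2_test, shift=0.5))   # location-translation alternative
print(empirical_power(naive_chi2_test, scale=1.5))   # scale alternative
```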

Test procedures for the mean and variance simultaneously under normality

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods, v.23 no.6, pp.563-574, 2016
  • In this study, we propose several simultaneous tests to detect differences between means and variances in the two-sample problem when the underlying distribution is normal. We apply the likelihood ratio principle and propose a likelihood ratio test. We then consider a union-intersection test after identifying the likelihood statistic as a product of two individual likelihood statistics that test the individual sub-null hypotheses. Noting that the union-intersection test can be viewed as a simultaneous test with a combining function, we also propose simultaneous tests with combining functions that combine individual tests for each sub-null hypothesis. We apply the permutation principle to obtain the null distributions. We then provide an example to illustrate the proposed procedure and compare the efficiency of the proposed tests through a simulation study. We discuss some interesting features of the simultaneous tests as concluding remarks. Finally, we show that the likelihood ratio statistic can be expressed as a product of two individual likelihood ratio statistics. (A permutation-based combining sketch is given below.)
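
A sketch of one way a permutation-based simultaneous test with a combining function can be set up: permutation p-values for a mean sub-statistic and a variance sub-statistic are combined with Fisher's function. The simple sub-statistics are stand-ins for the paper's likelihood-based statistics.

```python
import numpy as np

def partial_stats(x, y):
    """Sub-statistics for the two sub-null hypotheses: absolute mean difference
    and absolute log variance ratio."""
    return np.array([abs(x.mean() - y.mean()),
                     abs(np.log(x.var(ddof=1) / y.var(ddof=1)))])

def simultaneous_permutation_test(x, y, n_perm=2000, seed=0):
    """Permutation p-values of the two sub-statistics are combined with Fisher's
    function -2 * sum(log p); the final p-value compares the observed combined
    statistic with its permutation distribution."""
    rng = np.random.default_rng(seed)
    pooled, n = np.concatenate([x, y]), len(x)
    obs = partial_stats(x, y)
    perm = np.empty((n_perm, 2))
    for b in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm[b] = partial_stats(pooled[idx[:n]], pooled[idx[n:]])
    # partial permutation p-values for the observed and each permuted data set
    p_obs = (1 + (perm >= obs).sum(axis=0)) / (n_perm + 1)
    t_obs = -2 * np.log(p_obs).sum()
    p_perm = (1 + (perm[:, None, :] >= perm[None, :, :]).sum(axis=0)) / (n_perm + 1)
    t_perm = -2 * np.log(p_perm).sum(axis=1)
    return (1 + (t_perm >= t_obs).sum()) / (n_perm + 1)

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.5, 2.0, 30)
print(simultaneous_permutation_test(x, y))
```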

Linear regression under log-concave and Gaussian scale mixture errors: comparative study

  • Kim, Sunyul; Seo, Byungtae
    • Communications for Statistical Applications and Methods, v.25 no.6, pp.633-645, 2018
  • Gaussian error distributions are a common choice in traditional regression models fitted by the maximum likelihood (ML) method. However, this distributional assumption is often questionable, especially when the error distribution is skewed or has heavy tails. In either case, the ML method under normality can break down or lose efficiency. In this paper, we consider log-concave and Gaussian scale mixture distributions as error distributions. For log-concave errors, we propose a smoothed maximum likelihood estimator for stable and faster computation. Based on this, we perform comparative simulation studies to assess the performance of the coefficient estimates under normal, Gaussian scale mixture, and log-concave errors. In addition, we consider real data analyses using the stack loss plant data and the Korean Labor and Income Panel data. (A small comparative sketch follows below.)
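
As one concrete Gaussian scale mixture, the sketch below fits a linear model with Student-t errors by maximum likelihood and compares the coefficients with ordinary least squares; the smoothed log-concave MLE proposed in the paper is not reproduced.

```python
import numpy as np
from scipy import optimize, stats

def t_regression(X, y, df=3.0):
    """ML fit of a linear model with Student-t errors (a Gaussian scale mixture),
    one heavy-tailed alternative to the Gaussian likelihood."""
    X1 = np.column_stack([np.ones(len(y)), X])

    def negloglik(theta):
        beta, log_sigma = theta[:-1], theta[-1]
        resid = y - X1 @ beta
        return -stats.t.logpdf(resid, df, scale=np.exp(log_sigma)).sum()

    start = np.append(np.linalg.lstsq(X1, y, rcond=None)[0], 0.0)  # OLS starting values
    return optimize.minimize(negloglik, start, method="BFGS").x[:-1]

# toy data with heavy-tailed errors
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + stats.t.rvs(df=3, size=200, random_state=rng)
print("t-ML coefficients:", t_regression(X, y))
print("OLS coefficients :",
      np.linalg.lstsq(np.column_stack([np.ones(200), X]), y, rcond=None)[0])
```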

Fixed-accuracy confidence interval estimation of P(X > c) for a two-parameter gamma population

  • Zhuang, Yan; Hu, Jun; Zou, Yixuan
    • Communications for Statistical Applications and Methods, v.27 no.6, pp.625-639, 2020
  • The gamma distribution is a flexible right-skewed distribution widely used in many areas, and it is of great interest in survival and reliability analysis to estimate the probability of a random variable exceeding a specified value. This study therefore develops a fixed-accuracy confidence interval for P(X > c) when X follows a gamma distribution, Γ(α, β), and c is a preassigned positive constant, through: 1) a purely sequential procedure with known shape parameter α and unknown rate parameter β; and 2) a nonparametric purely sequential procedure with both shape and rate parameters unknown. Both procedures enjoy appealing asymptotic first-order efficiency and asymptotic consistency properties. Extensive simulations validate the theoretical findings. Three real-life data examples from health studies and a steel manufacturing study are discussed to illustrate the practical applicability of both procedures. (A simplified sequential sketch follows below.)
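
A simplified sketch in the spirit of procedure 1): with the shape known, the rate is re-estimated by maximum likelihood after each new observation and sampling stops once an accuracy criterion is met. The stopping rule used here (a delta-method bound on the standard error of log p-hat against log d) is an assumed illustration, not the paper's exact boundary.

```python
import numpy as np
from scipy import stats

def sequential_ci_for_tail_prob(sampler, alpha_shape, c, d=1.2, conf=0.95, n0=10):
    """Purely sequential sketch: with gamma shape alpha_shape known, keep sampling and
    re-estimating the rate beta by ML (beta_hat = shape / xbar) until the delta-method
    standard error of log p_hat is small enough for the interval [p_hat/d, p_hat*d].
    The stopping criterion is an illustrative assumption, not the paper's rule."""
    z = stats.norm.ppf((1 + conf) / 2)
    xs = list(sampler(n0))
    while True:
        n, xbar = len(xs), np.mean(xs)
        beta_hat = alpha_shape / xbar
        p_hat = stats.gamma.sf(c, alpha_shape, scale=1 / beta_hat)
        # numerical derivative of log P(X > c) with respect to beta
        eps = 1e-6 * beta_hat
        dlogp = (np.log(stats.gamma.sf(c, alpha_shape, scale=1 / (beta_hat + eps)))
                 - np.log(p_hat)) / eps
        var_logp = dlogp ** 2 * beta_hat ** 2 / (alpha_shape * n)  # Var(beta_hat) ~ beta^2/(n*alpha)
        if z * np.sqrt(var_logp) <= np.log(d):
            return n, (p_hat / d, p_hat * d)
        xs.extend(sampler(1))

rng = np.random.default_rng(5)
alpha_true, beta_true, c = 2.0, 0.5, 6.0
draw = lambda k: rng.gamma(alpha_true, 1 / beta_true, k)
print(sequential_ci_for_tail_prob(draw, alpha_true, c))
```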

Variational Bayesian inference for binary image restoration using Ising model

  • Jang, Moonsoo; Chung, Younshik
    • Communications for Statistical Applications and Methods, v.29 no.1, pp.27-40, 2022
  • This paper focuses on removing noise from binary images using a variational Bayesian method with the Ising model. The observation and the latent variable are the degraded image and the original image, respectively. The posterior distribution is built using a Markov random field with the Ising model, so estimating the posterior distribution amounts to reconstructing the degraded image. MCMC and variational Bayesian inference are two methods for estimating the posterior distribution; for computational efficiency, we adopt the variational technique, in which the image is restored by iterating the resulting recursive updates. Since the model has three parameters, restoration is implemented using the VECM algorithm to find appropriate parameter values at the current state. Finally, restoration results with the maximum peak signal-to-noise ratio (PSNR) and evidence lower bound (ELBO) are shown. (A minimal mean-field update is sketched below.)
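
A minimal mean-field (variational) update for this setting, assuming ±1 pixels, an Ising prior with a fixed coupling, and a Gaussian likelihood with a fixed noise variance; the paper's VECM step for estimating these parameters is not reproduced.

```python
import numpy as np

def mean_field_ising_denoise(y, coupling=1.0, noise_var=0.5, n_iter=30):
    """Mean-field variational approximation for binary (+1/-1) image restoration:
    each q(x_i) has mean mu_i, updated as tanh(coupling * sum of neighbour means
    + y_i / noise_var) under a Gaussian likelihood y_i ~ N(x_i, noise_var)."""
    mu = np.array(y, dtype=float)
    for _ in range(n_iter):
        nbr = (np.roll(mu, 1, 0) + np.roll(mu, -1, 0) +
               np.roll(mu, 1, 1) + np.roll(mu, -1, 1))   # 4-neighbour sums (toroidal)
        mu = np.tanh(coupling * nbr + y / noise_var)
    return np.sign(mu)                                   # restored +1/-1 image

# toy usage: noisy version of a half-white / half-black image
rng = np.random.default_rng(6)
clean = np.ones((40, 40))
clean[:, 20:] = -1
noisy = clean + rng.normal(0, np.sqrt(0.5), clean.shape)
restored = mean_field_ising_denoise(noisy)
print("pixels recovered correctly:", np.mean(restored == clean))
```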

Two tests using more assumptions but lower power

  • Sang Kyu Lee; Hyoung-Moon Kim
    • Communications for Statistical Applications and Methods, v.30 no.1, pp.109-117, 2023
  • Intuitively, a test with more assumptions should have greater power than a test with fewer assumptions. Such examples abound when comparing nonparametric tests with their parametric counterparts: in general, nonparametric tests are less efficient in terms of asymptotic relative efficiency (ARE) than the corresponding parametric tests (Daniel, 1990). However, this is not always true. To test equal means with independent normal samples, the usual test uses the t-distribution with the pooled estimator of the common variance. Adding the assumption of equal sample sizes, we may derive another test. In this setting, two tests using more assumptions are examined for the univariate and multivariate cases. For these examples, it is found that the power function of the test with more assumptions is less than or equal to that of the test with fewer assumptions. This finding can be used as an expository example in master's-level mathematical statistics courses. (A small power simulation illustrating the univariate case follows below.)
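
The simulation below illustrates the phenomenon with one plausible reading of the univariate example: the pooled two-sample t test versus a test that additionally exploits the equal-sample-size assumption by differencing paired-off observations. The second test is an assumed stand-in, since the abstract does not spell out its exact form.

```python
import numpy as np
from scipy import stats

def power_simulation(n=15, delta=1.0, sigma=1.0, n_sim=10000, alpha=0.05, seed=7):
    """Empirical power of two tests for equal means with independent normal samples:
    (i) the usual pooled-variance two-sample t test (2n - 2 df) and (ii) a test that
    additionally uses equal sample sizes by differencing, D_i = X_i - Y_i, followed by
    a one-sample t test (n - 1 df). Test (ii) is an illustrative guess at a test
    'with more assumptions'."""
    rng = np.random.default_rng(seed)
    reject_pooled = reject_diff = 0
    for _ in range(n_sim):
        x = rng.normal(delta, sigma, n)
        y = rng.normal(0.0, sigma, n)
        reject_pooled += stats.ttest_ind(x, y).pvalue < alpha
        reject_diff += stats.ttest_1samp(x - y, 0.0).pvalue < alpha
    return reject_pooled / n_sim, reject_diff / n_sim

print(power_simulation())   # the differencing test typically shows (slightly) lower power
```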

A Bayesian joint model for continuous and zero-inflated count data in developmental toxicity studies

  • Hwang, Beom Seuk
    • Communications for Statistical Applications and Methods, v.29 no.2, pp.239-250, 2022
  • In many applications, we frequently encounter multiple correlated outcomes measured on the same subject. Joint modeling of such outcomes can improve the efficiency of inference compared to modeling them independently. For instance, in developmental toxicity studies, fetal weight and the number of malformed pups are measured on pregnant dams exposed to different levels of a toxic substance, and the association between these outcomes should be taken into account in the model. The number of malformations may have many zeros and should be analyzed via zero-inflated count models. Motivated by such applications, we propose a Bayesian joint modeling framework for continuous and count outcomes with excess zeros. In our model, a zero-inflated Poisson (ZIP) regression model describes the count data, and subject-specific random effects account for the correlation across the two outcomes. We implement a Bayesian approach using an MCMC procedure with data augmentation and adaptive rejection sampling. We apply the proposed model to dose-response analysis in a developmental toxicity study to estimate the benchmark dose for risk assessment. (A generative sketch of the model structure follows below.)
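
A generative sketch of the structure of such a joint model: a dam-level random effect shared by a Gaussian fetal-weight outcome and a zero-inflated Poisson malformation count. All coefficients are hypothetical illustration values, and the Bayesian fitting step (MCMC with data augmentation) is not shown.

```python
import numpy as np

def simulate_joint_outcomes(n_dams=50, seed=8):
    """Generative sketch of a joint model for developmental toxicity data: a dam-level
    random effect b_i is shared by litter-average fetal weight (continuous, Gaussian)
    and the number of malformed pups (zero-inflated Poisson), inducing correlation
    between the two outcomes. All coefficients are hypothetical illustration values."""
    rng = np.random.default_rng(seed)
    dose = rng.choice([0.0, 0.5, 1.0], n_dams)           # exposure level of each dam
    b = rng.normal(0.0, 0.3, n_dams)                     # shared dam-level random effect
    weight = 1.0 - 0.2 * dose + b + rng.normal(0, 0.1, n_dams)     # continuous outcome
    p_zero = 1 / (1 + np.exp(-(1.0 - 1.5 * dose + b)))   # excess-zero probability (logit)
    lam = np.exp(0.5 + 1.5 * dose - b)                   # Poisson mean (log link)
    malformed = np.where(rng.random(n_dams) < p_zero, 0, rng.poisson(lam))
    return dose, weight, malformed

dose, weight, malformed = simulate_joint_outcomes()
print(np.corrcoef(weight, malformed)[0, 1])              # outcomes are correlated by design
```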