• Title/Summary/Keyword: selection bias


On the Bias of Bootstrap Model Selection Criteria

  • Kee-Won Lee;Songyong Sim
    • Journal of the Korean Statistical Society
    • /
    • v.25 no.2
    • /
    • pp.195-203
    • /
    • 1996
  • A bootstrap method is used to correct the apparent downward bias of a naive plug-in bootstrap model selection criterion; the corrected criterion is shown to enjoy a high degree of accuracy. A comparison of the bootstrap method with the asymptotic method is made through an illustrative example.
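
To make the idea concrete, below is a minimal sketch of the classic bootstrap optimism correction for an apparent (plug-in) error criterion, assuming case resampling, a linear model, and squared error loss; it illustrates the general technique, not the authors' exact criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    # Least-squares coefficients for a linear model
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(X, y, beta):
    return np.mean((y - X @ beta) ** 2)

def bootstrap_corrected_criterion(X, y, n_boot=500):
    """Correct the downward bias of the naive plug-in criterion
    (the apparent error) by adding the average bootstrap optimism."""
    beta = fit_ols(X, y)
    apparent = mse(X, y, beta)           # naive plug-in criterion
    optimism = 0.0
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)      # resample cases with replacement
        b = fit_ols(X[idx], y[idx])
        # optimism: error on the full sample minus the bootstrap apparent error
        optimism += mse(X, y, b) - mse(X[idx], y[idx], b)
    return apparent + optimism / n_boot  # bias-corrected criterion
```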


Regression Trees with Unbiased Variable Selection

  • 김진흠;김민호
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.3
    • /
    • pp.459-473
    • /
    • 2004
  • It is well known that the exhaustive search algorithm suggested by Breiman et al. (1984) tends to select a variable with relatively many possible splits as the splitting variable. We propose an algorithm that overcomes this variable selection bias and use it to construct unbiased regression trees. The proposed algorithm proceeds in two steps: selecting a split variable, and then determining a binary split rule based on that variable. Simulation studies were performed to compare the proposed algorithm with the CART (Classification and Regression Trees) algorithm of Breiman et al. (1984) in terms of the degree of variable selection bias, variable selection power, and MSE (mean squared error). We also illustrate the proposed algorithm with real data sets.
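
The two-step structure described in the abstract can be sketched as follows: choose the split variable by a p-value-based association test (so that variables with many distinct values gain no advantage), then search split points only within the chosen variable. This sketch assumes continuous predictors, and the correlation test in step 1 is an illustrative stand-in; the paper's actual step-1 statistic may differ.

```python
import numpy as np
from scipy import stats

def two_step_split(X, y):
    """Step 1: pick the split variable by association-test p-values,
    so variables with many possible splits are not favored.
    Step 2: exhaustive search of binary split points on that variable only."""
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(X.shape[1])]
    j_star = int(np.argmin(pvals))        # most significant variable
    xs = np.unique(X[:, j_star])
    best_cut, best_sse = None, np.inf
    for c in (xs[:-1] + xs[1:]) / 2:      # midpoints between adjacent values
        left, right = y[X[:, j_star] <= c], y[X[:, j_star] > c]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_cut, best_sse = c, sse
    return j_star, best_cut
```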

A study on bias effect of LASSO regression for model selection criteria

  • Yu, Donghyeon
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.4
    • /
    • pp.643-656
    • /
    • 2016
  • High dimensional data, in which the number of variables exceeds the number of samples, are frequently encountered in various fields. Variable selection is usually necessary in this setting to estimate the regression coefficients and avoid overfitting. Penalized regression models perform variable selection and coefficient estimation simultaneously, which makes them popular for high dimensional data. However, a penalized regression model still requires choosing a tuning parameter, and hence an optimal model, based on a model selection criterion. This study deals with the bias effect of LASSO regression on model selection criteria. We numerically describe the bias effect on the model selection criteria and apply the proposed correction to the identification of biomarkers for lung cancer based on gene expression data.
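
One common way to keep the LASSO shrinkage bias out of an information criterion is to refit unpenalized coefficients on the selected support before computing the criterion. The sketch below follows that idea for BIC over a grid of tuning parameters; it is a generic illustration under that assumption, not necessarily the paper's exact correction.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def select_lambda_bic(X, y, lambdas):
    """For each tuning parameter, fit the LASSO, refit OLS on the
    selected support (to remove shrinkage bias), then score with BIC."""
    n = len(y)
    scores = []
    for lam in lambdas:
        support = Lasso(alpha=lam).fit(X, y).coef_ != 0
        k = int(support.sum())
        if k == 0:
            rss = ((y - y.mean()) ** 2).sum()   # intercept-only model
        else:
            ols = LinearRegression().fit(X[:, support], y)
            rss = ((y - ols.predict(X[:, support])) ** 2).sum()
        scores.append(n * np.log(rss / n) + np.log(n) * k)
    return lambdas[int(np.argmin(scores))]      # lambda minimizing BIC
```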

Selection of Data-adaptive Polynomial Order in Local Polynomial Nonparametric Regression

  • Jo, Jae-Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.4 no.1
    • /
    • pp.177-183
    • /
    • 1997
  • A data-adaptive order selection procedure is proposed for local polynomial nonparametric regression. For each candidate polynomial order, the bias and variance are estimated, and the order with the smallest estimated mean squared error is selected locally at each location point. To estimate the mean squared error, the empirical bias estimate of Ruppert (1995) and the local polynomial variance estimate of Ruppert, Wand, Holst and Hössjer (1995) are used. Since the proposed method does not require fitting a polynomial model of order higher than the given order, it is simpler than the order selection method proposed by Fan and Gijbels (1995b).
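
The building block behind such a procedure is the local polynomial fit itself: a kernel-weighted least squares fit of order p centered at the target point, whose intercept estimates the regression function there. Below is a minimal sketch with a Gaussian kernel (an illustrative assumption); the bias and variance estimates used for the order selection in the paper are not reproduced here.

```python
import numpy as np

def local_poly_fit(x, y, x0, p, h):
    """Order-p local polynomial estimate of m(x0) with bandwidth h:
    weighted least squares on centered powers of (x - x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    Xd = np.vander(x - x0, p + 1, increasing=True)   # [1, (x-x0), ..., (x-x0)^p]
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta[0]                                   # intercept = fitted m(x0)
```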


The Use of Propensity Score Matching for Evaluation of the Effects of Nursing Interventions

  • Lee, Suk-Jeong;Yoo, Ji-Soo;Shin, Mi-Kyung;Park, Chang-Gi;Lee, Hyun-Chul;Choi, Eun-Jin
    • Journal of Korean Academy of Nursing
    • /
    • v.37 no.3
    • /
    • pp.414-421
    • /
    • 2007
  • Background: Nursing intervention studies often suffer from selection bias introduced by the failure of random assignment. Evaluation under selection bias can under- or over-estimate an intervention's effects. Propensity score matching (PSM) can reduce selection bias by matching subjects with similar propensity scores (PS). The PS is defined as the conditional probability of being treated given an individual's covariates, and it can be used to balance the covariates of the two groups. Purpose: This study was done to assess the usefulness of PSM as an alternative method for evaluating nursing interventions. Method: An intervention study in which the two groups differed on some baseline individual characteristics was used for this demonstration. The result of a t-test with PSM was compared with that of a t-test without matching. Results: The level of HbA1c at 12 months after baseline differed between the two groups depending on whether matching was used. Conclusion: This study demonstrated the effect of quasi-random assignment. Evaluation using PSM can reduce the impact of selection bias on the estimated effect of a nursing intervention, and its wider use is needed to analyze nursing research more objectively.
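
A minimal PSM workflow along the lines of the abstract: estimate propensity scores with a logistic regression, greedily match each treated subject to the nearest untreated subject within a caliper, and run a t-test on the matched samples. The caliper value and the greedy 1:1 scheme are illustrative assumptions; a real analysis would also check covariate balance after matching.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

def psm_ttest(X, treated, outcome, caliper=0.05):
    """Nearest-neighbor 1:1 matching on the estimated propensity score,
    followed by a t-test on the matched outcomes."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    pairs, used = [], set()
    for i in t_idx:
        d = np.abs(ps[c_idx] - ps[i])
        j = int(c_idx[np.argmin(d)])
        if d.min() <= caliper and j not in used:  # greedy match within caliper
            pairs.append((i, j))
            used.add(j)
    ti = [i for i, _ in pairs]
    ci = [j for _, j in pairs]
    return stats.ttest_ind(outcome[ti], outcome[ci])
```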

Effects of preselection of genotyped animals on reliability and bias of genomic prediction in dairy cattle

  • Togashi, Kenji;Adachi, Kazunori;Kurogi, Kazuhito;Yasumori, Takanori;Tokunaka, Kouichi;Ogino, Atsushi;Miyazaki, Yoshiyuki;Watanabe, Toshio;Takahashi, Tsutomu;Moribe, Kimihiro
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.2
    • /
    • pp.159-169
    • /
    • 2019
  • Objective: Models for genomic selection assume that the reference population is an unselected population. In practice, however, genotyped individuals, such as progeny-tested bulls, are highly selected, and the reference population is created after preselection. In dairy cattle, the intensity of selection is higher in males than in females, suggesting that cows can be added to the reference population with less bias and less loss of accuracy. The objective was to develop formulas applicable to any genomic prediction study or practice in which preselected animals form the reference population. Methods: We developed formulas for calculating the reliability and bias of genomically enhanced breeding values (GEBV) in a reference population whose members were preselected on estimated breeding values. Based on these formulas, a deterministic simulation was conducted varying heritability, preselection percentage, and reference population size. Results: The number of bulls equivalent to one cow with respect to the reliability of GEBV was expressed through a simple formula for a reference population of preselected animals. The bull population was vastly superior to the cow population in the reliability of GEBV for low-heritability traits, but this superiority decreased as heritability increased. Bias was greater for bulls than for cows. The bias and the reduction in reliability of GEBV due to preselection were alleviated by expanding the reference population. Conclusion: A cow reference population is easier to expand than a bull population, and expanding it alleviates the bias and the loss of reliability in the GEBV of bulls, which are more highly preselected than cows.

Sensitivity analysis in Bayesian nonignorable selection model for binary responses

  • Choi, Seong Mi;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.1
    • /
    • pp.187-194
    • /
    • 2014
  • We consider a Bayesian nonignorable selection model to accommodate selection bias. Markov chain Monte Carlo methods are known to be very useful for fitting nonignorable selection models. However, sensitivity to the prior assumptions on the parameters of the selection mechanism is a potential problem. To quantify this sensitivity, the deviance information criterion and the conditional predictive ordinate are used to compare the goodness-of-fit under two different prior specifications. It turns out that the 'MLE' prior gives a better fit than the 'uniform' prior in terms of both goodness-of-fit measures.
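
The two goodness-of-fit measures named in the abstract have simple generic estimators from MCMC output, sketched below; they are not tied to the paper's specific selection model. DIC adds an effective-parameter penalty to the mean deviance, and the CPO for an observation is the harmonic mean of its likelihood over posterior draws.

```python
import numpy as np

def dic(deviance_draws, deviance_at_post_mean):
    """DIC = mean deviance + pD, with pD = mean(D) - D(posterior mean);
    smaller values indicate a better fit."""
    d_bar = np.mean(deviance_draws)
    p_d = d_bar - deviance_at_post_mean
    return d_bar + p_d            # equivalently 2*d_bar - D(theta_bar)

def cpo(likelihood_draws):
    """Conditional predictive ordinate for one observation:
    the harmonic mean of its likelihood over posterior draws."""
    return 1.0 / np.mean(1.0 / np.asarray(likelihood_draws))
```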

A Study on Split Variable Selection Using Transformation of Variables in Decision Trees

  • Chung, Sung-S.;Lee, Ki-H.;Lee, Seung-S.
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.2
    • /
    • pp.195-205
    • /
    • 2005
  • In decision tree analysis, the C4.5 and CART algorithms suffer from computational complexity and bias in variable selection, problems the QUEST algorithm solves by separating the variable selection step from the split point selection step. When input variables are continuous, QUEST uses the ANOVA F-test under the assumptions of normality and homogeneity of variances. In this paper, we investigate the influence of violating the normality assumption and the effect of transforming variables in the QUEST algorithm. In the simulation study, we obtained the empirical power and the empirical bias of variable selection after transformation of variables having various types of underlying distributions.
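
QUEST's first step can be sketched as below: compute an ANOVA F-test p-value for each continuous predictor across the class groups (optionally after transforming the predictor, as studied in the paper) and pick the most significant variable. This sketch covers continuous predictors only; the full QUEST algorithm also handles categorical predictors with contingency-table tests.

```python
import numpy as np
from scipy import stats

def quest_select_variable(X, classes, transform=None):
    """QUEST-style variable selection: smallest ANOVA F-test p-value
    across class groups, optionally after transforming each predictor."""
    pvals = []
    for j in range(X.shape[1]):
        xj = transform(X[:, j]) if transform is not None else X[:, j]
        groups = [xj[classes == g] for g in np.unique(classes)]
        pvals.append(stats.f_oneway(*groups).pvalue)
    return int(np.argmin(pvals))
```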


A Study on Selection of Split Variable in Constructing Classification Tree

  • 정성석;김순영;임한필
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.2
    • /
    • pp.347-357
    • /
    • 2004
  • Selecting the split variable well is very important in constructing a classification tree, and the efficiency of a classification tree algorithm can be evaluated by its variable selection bias and its variable selection power. C4.5 shows a large variable selection bias because variables with many distinct values are favored in variable selection, and QUEST has low variable selection power when a continuous predictor variable deviates from the normal distribution. In this paper, we propose the SRT algorithm, which overcomes these drawbacks of C4.5 and QUEST. Simulations were performed to compare SRT with C4.5 and QUEST. As a result, SRT is characterized by low variable selection bias and robust variable selection power.
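
The degree of variable selection bias can be measured with a null simulation: when no predictor is related to the response, an unbiased selector should pick each variable about equally often, while an exhaustive-search selector favors the predictor with the most distinct values. The sketch below, with illustrative sample sizes and variable types, demonstrates this for a Breiman-style exhaustive search.

```python
import numpy as np

rng = np.random.default_rng(1)

def exhaustive_select(X, y):
    """Pick the variable whose best binary split most reduces SSE
    (Breiman-style exhaustive search)."""
    best_j, best_sse = 0, np.inf
    for j in range(X.shape[1]):
        for c in np.unique(X[:, j])[:-1]:
            m = X[:, j] <= c
            sse = y[m].var() * m.sum() + y[~m].var() * (~m).sum()
            if sse < best_sse:
                best_j, best_sse = j, sse
    return best_j

def selection_frequencies(n=100, n_rep=500):
    """Under the null, unbiased selection gives frequencies near 1/3 each;
    exhaustive search over-selects the many-valued continuous column."""
    counts = np.zeros(3)
    for _ in range(n_rep):
        X = np.column_stack([rng.integers(0, 2, n),   # binary: 1 split
                             rng.integers(0, 5, n),   # 5 levels: 4 splits
                             rng.normal(size=n)])     # continuous: ~n-1 splits
        y = rng.normal(size=n)    # response independent of every predictor
        counts[exhaustive_select(X, y)] += 1
    return counts / n_rep
```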

A Study on Bias Effect on Model Selection Criteria in Graphical Lasso

  • Choi, Young-Geun;Jeong, Seyoung;Yu, Donghyeon
    • Quantitative Bio-Science
    • /
    • v.37 no.2
    • /
    • pp.133-141
    • /
    • 2018
  • Graphical lasso is one of the most popular methods for estimating a sparse precision matrix, the inverse of a covariance matrix. The objective function of the graphical lasso imposes an $\ell_1$-penalty on the (vectorized) precision matrix, where a tuning parameter controls the strength of the penalization. The selection of the tuning parameter is practically and theoretically important since the performance of the estimator depends on an appropriate choice. While information criteria (e.g. AIC, BIC, or extended BIC) have been widely used, they require an asymptotically unbiased estimator to select the optimal tuning parameter, so the bias of the $\ell_1$-regularized estimate in the graphical lasso may lead to suboptimal tuning. In this paper, we propose a two-stage bias-correction procedure for the graphical lasso, in which the first stage runs the usual graphical lasso and the second stage reruns the procedure with the additional constraint that zero estimates from the first stage remain zero. Our simulation and real data example show that the proposed bias correction improves both edge recovery and estimation error compared to the single-stage graphical lasso.
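
A sketch of the two-stage idea: stage 1 runs the ordinary graphical lasso; stage 2 refits the precision matrix by maximizing the Gaussian log-likelihood with the stage-1 zeros held at zero, here via a simple projected gradient ascent. The optimizer choice and step-size scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def two_stage_glasso(X, alpha=0.1, n_iter=200, lr=0.01):
    """Stage 1: graphical lasso. Stage 2: likelihood refit restricted
    to the stage-1 support, removing the l1 shrinkage bias."""
    S = np.cov(X, rowvar=False)
    theta = GraphicalLasso(alpha=alpha).fit(X).precision_
    support = theta != 0                    # stage-1 edge set (zeros stay zero)
    for _ in range(n_iter):
        grad = np.linalg.inv(theta) - S     # gradient of logdet(T) - tr(S T)
        step = theta + lr * grad
        step[~support] = 0.0                # project onto the support
        step = (step + step.T) / 2          # keep symmetric
        try:
            np.linalg.cholesky(step)        # accept only positive definite
            theta = step
        except np.linalg.LinAlgError:
            lr /= 2                         # backtrack on the step size
    return theta
```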