• Title/Abstract/Keyword: Selection Methods


Performance Comparison of Classification Methods with the Combinations of the Imputation and Gene Selection Methods

  • Kim, Dong-Uk;Nam, Jin-Hyun;Hong, Kyung-Ha
    • 응용통계연구 / Vol. 24, No. 6 / pp. 1103-1113 / 2011
  • Gene expression data are obtained through many experimental stages, and errors produced during the process may cause missing values. Because of the 'small n, large p' nature of the data, genes must be selected before statistical analyses such as classification. For these reasons, imputation and gene selection are important in microarray data analysis. In the literature, imputation, gene selection, and classification analysis have been studied separately, yet in practice they form a sequential process. From this perspective, we compare the performance of classification methods after imputation and gene selection methods are applied to microarray data. Numerical simulations are carried out to evaluate the classification methods under various combinations of imputation and gene selection methods.
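
As a loose illustration of the sequential pipeline this abstract describes (impute, then select genes, then classify), the sketch below uses mean imputation and a two-sample t-like score; these specific choices are assumptions for illustration, not the combinations the paper actually compares.

```python
import math

def mean_impute(X):
    """Replace None entries (missing values) with the column (gene) mean."""
    cols = list(zip(*X))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in X]

def t_scores(X, y):
    """Absolute two-sample t-like score per gene for binary labels 0/1."""
    scores = []
    for j in range(len(X[0])):
        a = [row[j] for row, lab in zip(X, y) if lab == 0]
        b = [row[j] for row, lab in zip(X, y) if lab == 1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / max(len(a) - 1, 1)
        vb = sum((v - mb) ** 2 for v in b) / max(len(b) - 1, 1)
        scores.append(abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12))
    return scores

def select_genes(X, y, k):
    """Impute first, then return indices of the k highest-scoring genes."""
    X = mean_impute(X)
    s = t_scores(X, y)
    return sorted(range(len(s)), key=lambda j: -s[j])[:k]
```

The selected indices would then feed any downstream classifier, mirroring the imputation → selection → classification sequence the paper evaluates.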

Ensemble Gene Selection Method Based on Multiple Tree Models

  • Mingzhu Lou
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp. 652-662 / 2023
  • Identifying highly discriminating genes is a critical step in tumor recognition tasks based on microarray gene expression profile data and machine learning. Gene selection based on tree models has been the subject of several studies. However, these methods rely on a single-tree model, which is often not robust to ultra-high-dimensional microarray datasets, resulting in the loss of useful information and unsatisfactory classification accuracy. Motivated by the limitations of single-tree-based gene selection, in this study, ensemble gene selection methods based on multiple tree models were studied to improve the classification performance of tumor identification. Specifically, we selected the three most representative tree models: ID3, random forest, and gradient boosting decision tree. Each tree model selects the top-n genes from the microarray dataset based on its intrinsic mechanism. Subsequently, three ensemble gene selection methods, namely multiple-tree model intersection, multiple-tree model union, and multiple-tree model cross-union, were investigated. Experimental results on five benchmark public microarray gene expression datasets show that the multiple-tree model union is significantly superior to gene selection based on a single tree model and to other competitive gene selection methods in classification accuracy.
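
A toy sketch of the set-combination idea: take the top-n genes from each tree model's own ranking and combine the sets. "Cross-union" is interpreted here as the union of pairwise intersections; that reading, and the rankings below, are assumptions for illustration rather than the paper's definitions.

```python
def ensemble_select(rankings, n):
    """rankings: dict model_name -> gene ids ordered by that model's importance."""
    tops = {m: set(r[:n]) for m, r in rankings.items()}
    names = list(tops)
    inter = set.intersection(*tops.values())   # genes every model agrees on
    union = set.union(*tops.values())          # genes any model picks
    cross = set()                              # genes at least two models share
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            cross |= tops[names[i]] & tops[names[j]]
    return inter, cross, union

# Hypothetical per-model rankings (ID3, random forest, GBDT stand-ins):
rankings = {
    "id3":  ["g1", "g2", "g3", "g4"],
    "rf":   ["g2", "g1", "g5", "g6"],
    "gbdt": ["g2", "g5", "g7", "g1"],
}
inter, cross, union = ensemble_select(rankings, n=3)
```

By construction the three combinations are nested (intersection ⊆ cross-union ⊆ union), which is why they trade off robustness against the risk of discarding useful genes.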

Simulation Optimization with Statistical Selection Method

  • Kim, Ju-Mi
    • Management Science and Financial Engineering / Vol. 13, No. 1 / pp. 1-24 / 2007
  • I propose new combined randomized methods for global optimization problems. These methods are based on the Nested Partitions (NP) method, a useful simulation optimization method that guarantees a globally optimal solution but has several shortcomings. To overcome these shortcomings, I employed various statistical selection methods and combined them with the NP method. I first explain the NP method and statistical selection methods, then present a detailed description of the proposed combined methods and show the results of an application. I also show how these combined methods can be applied when the computing budget is limited.
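
A drastically simplified one-dimensional sketch of the Nested Partitions idea: repeatedly split the promising region and keep the half whose sampled evaluations look best. The naive "pick the smaller minimum" comparison below is precisely the step the paper replaces with proper statistical selection procedures, and the full NP method's backtracking to a surrounding region is omitted.

```python
import random

def nested_partitions_1d(f, lo, hi, depth=12, samples=30, seed=7):
    """Minimize f on [lo, hi] by repeatedly halving the promising interval."""
    rng = random.Random(seed)
    for _ in range(depth):
        mid = (lo + hi) / 2
        # Sample each subregion and compare the best observed values.
        left_best = min(f(rng.uniform(lo, mid)) for _ in range(samples))
        right_best = min(f(rng.uniform(mid, hi)) for _ in range(samples))
        lo, hi = (lo, mid) if left_best <= right_best else (mid, hi)
    return (lo + hi) / 2

x_star = nested_partitions_1d(lambda x: (x - 2.0) ** 2, 0.0, 10.0)
```

With noisy evaluations, choosing the better subregion becomes a selection-of-the-best problem, which is where the statistical selection procedures come in.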

Selection probability of multivariate regularization to identify pleiotropic variants in genetic association studies

  • Kim, Kipoong;Sun, Hokeun
    • Communications for Statistical Applications and Methods / Vol. 27, No. 5 / pp. 535-546 / 2020
  • In genetic association studies, pleiotropy is a phenomenon in which a variant or a genetic region affects multiple traits or diseases. Many studies have identified cross-phenotype genetic associations. However, most statistical approaches for detecting pleiotropy are based on individual tests, where a single variant's association with multiple traits is tested one at a time. These approaches fail to account for relations among correlated variants. Recently, multivariate regularization methods have been proposed to detect pleiotropy in the analysis of high-dimensional genomic data. However, they suffer from the problem of tuning parameter selection, which often results in either too many false positives or too few true positives. In this article, we applied selection probability to multivariate regularization methods in order to identify pleiotropic variants associated with multiple phenotypes. Selection probability was applied to the individual elastic-net, unified elastic-net, and multi-response elastic-net regularization methods. In simulation studies, the selection performance of the three multivariate regularization methods was evaluated as the total number of phenotypes, the number of phenotypes associated with a variant, and the correlations among phenotypes varied. We also applied the regularization methods to a wild bean dataset consisting of 169,028 variants and 17 phenotypes.
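
The selection-probability idea can be sketched generically: refit a sparse selector on many random subsamples and record how often each variable is selected, so that the tuning threshold acts on a stable frequency rather than on a single fit. The thresholded-correlation selector below is a toy stand-in for the elastic-net fits in the paper.

```python
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((z - mb) ** 2 for z in b) ** 0.5
    return num / (da * db + 1e-12)

def corr_selector(X, y, thresh=0.6):
    """Toy stand-in for a regularized fit: keep variables with |corr| > thresh."""
    return [j for j in range(len(X[0]))
            if abs(pearson([row[j] for row in X], y)) > thresh]

def selection_probability(X, y, selector, n_sub=200, frac=0.5, seed=1):
    """Fraction of random subsamples on which each variable is selected."""
    rng = random.Random(seed)
    counts = [0] * len(X[0])
    for _ in range(n_sub):
        idx = rng.sample(range(len(X)), int(frac * len(X)))
        for j in selector([X[i] for i in idx], [y[i] for i in idx]):
            counts[j] += 1
    return [c / n_sub for c in counts]
```

Variables whose selection probability exceeds a cutoff are reported; a truly associated variable should be selected on almost every subsample, while a noise variable is selected only occasionally.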

A Study on Unbiased Methods in Constructing Classification Trees

  • Lee, Yoon-Mo;Song, Moon Sup
    • Communications for Statistical Applications and Methods / Vol. 9, No. 3 / pp. 809-824 / 2002
  • We propose two methods that separate the variable selection step from the split-point selection step. We call these two algorithms the CHITES method and the F&CHITES method. They adopt some of the best characteristics of CART, CHAID, and QUEST. In the first step, the variable most significant for predicting the target class values is selected. In the second step, an exhaustive search is applied to find the splitting point based on the variable selected in the first step. We compared the proposed methods with CART and QUEST in terms of variable selection bias and power, error rates, and training times. The proposed methods are not only unbiased in the null case, but also powerful for selecting the correct variables in non-null cases.
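
The two-step structure can be sketched as follows: an F-like between/within variance ratio stands in for the significance tests used to pick the variable, followed by an exhaustive search over candidate midpoints on that variable. The scoring details here are illustrative assumptions, not the paper's exact statistics.

```python
def f_ratio(col, y):
    """Between-group over within-group variation of one variable."""
    groups = {}
    for v, lab in zip(col, y):
        groups.setdefault(lab, []).append(v)
    grand = sum(col) / len(col)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups.values())
    return between / (within + 1e-12)

def majority_errors(labels):
    """Misclassifications when predicting the majority label."""
    return len(labels) - max(labels.count(l) for l in set(labels))

def two_step_split(X, y):
    # Step 1: pick the most significant variable.
    var = max(range(len(X[0])), key=lambda j: f_ratio([row[j] for row in X], y))
    # Step 2: exhaustive search over candidate midpoints on that variable.
    col = [row[var] for row in X]
    cuts = sorted(set(col))
    best = None
    for a, b in zip(cuts, cuts[1:]):
        c = (a + b) / 2
        err = (majority_errors([l for v, l in zip(col, y) if v <= c]) +
               majority_errors([l for v, l in zip(col, y) if v > c]))
        if best is None or err < best[0]:
            best = (err, var, c)
    return best[1], best[2]
```

Because the splitting variable is fixed by a test before any split points are examined, the exhaustive search cannot bias selection toward variables with many distinct values, which is the unbiasedness property the abstract highlights.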

Evaluation of Attribute Selection Methods and Prior Discretization in Supervised Learning

  • Cha, Woon Ock;Huh, Moon Yul
    • Communications for Statistical Applications and Methods / Vol. 10, No. 3 / pp. 879-894 / 2003
  • We evaluated the efficiency of applying attribute selection methods and prior discretization to supervised learning, modelled by C4.5 and Naive Bayes. Three databases were obtained from the UCI data archive, each consisting of continuous attributes except for one decision attribute. Four methods were used for attribute selection: MDI, ReliefF, Gain Ratio, and a consistency-based method. MDI and ReliefF can be used for both continuous and discrete attributes, but the other two methods can be used only for discrete attributes. Discretization was performed using the Fayyad and Irani method. To investigate the effect of noise in the database, noise was introduced into the data sets at levels of 10% or 20%, and both the noisy and noise-free data were processed through the steps of attribute selection, discretization, and classification. The results of this study indicate that classifying the data based on selected attributes yields higher accuracy than classifying the full data set, and that prior discretization does not lower the accuracy.
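
The entropy-based idea behind the Fayyad and Irani discretization can be sketched as a single supervised binary cut; the recursive application to each side and the MDL stopping criterion of the full method are omitted here.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Cut point minimizing the weighted class entropy of the two sides."""
    pairs = sorted(zip(values, labels))
    best = None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no cut between identical values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if best is None or w < best[0]:
            best = (w, cut)
    return best[1]
```

Applying such cuts first is what makes attribute selection methods restricted to discrete attributes (Gain Ratio, the consistency-based method) usable on continuous data.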

Two variations of cross-distance selection algorithm in hybrid sufficient dimension reduction

  • Jae Keun Yoo
    • Communications for Statistical Applications and Methods / Vol. 30, No. 2 / pp. 179-189 / 2023
  • Hybrid sufficient dimension reduction (SDR) methods, which use a weighted mean of the kernel matrices of two different SDR methods (Ye and Weiss, 2003), require heavy computation and are time-consuming due to bootstrapping. To avoid this, Park et al. (2022) recently developed the so-called cross-distance selection (CDS) algorithm. In this paper, two variations of the original CDS algorithm are proposed, depending on how fully and equally covk-SAVE is treated in the selection procedure. In one variation, called the larger CDS algorithm, covk-SAVE is utilized equally and fairly alongside the other two candidates, SIR-SAVE and covk-DR, but a random selection is necessary for the final choice. In the other, called the smaller CDS algorithm, SIR-SAVE and covk-DR are utilized while covk-SAVE is ruled out completely. Numerical studies confirm that the original CDS algorithm is better than, or competes quite well with, the two proposed variations. A real data example is presented to compare and interpret the decisions made by the three CDS algorithms in practice.

자질 선정 기준과 가중치 할당 방식간의 관계를 고려한 문서 자동분류의 개선에 대한 연구 (An Empirical Study on Improving the Performance of Text Categorization Considering the Relationships between Feature Selection Criteria and Weighting Methods)

  • 이재윤
    • 한국문헌정보학회지 / Vol. 39, No. 2 / pp. 123-146 / 2005
  • This study explores a way to improve the performance of a kNN classifier by adopting a consistent strategy for feature selection and feature weighting in automatic text categorization. In automatic text categorization, the feature selection method and the feature weighting method, together with the classification algorithm, are major factors that determine classification performance. Previous studies have adopted opposing strategies for these two choices. In this study, evaluating feature selection results by classification performance relative to index-file storage space and execution time produced results different from earlier work. Selecting features by criteria that favor low-frequency features, such as mutual information, or even by inverse document frequency, turned out to be desirable for both the effectiveness and the efficiency of the kNN classifier. By consistently using a low-frequency-favoring measure for both feature selection and feature weighting, the processing speed of the kNN classifier was improved roughly three- to five-fold without any loss in classification performance.
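
A minimal sketch of the "consistent strategy": the same low-frequency-favoring measure (idf here) is used both to select features and to weight them for a cosine-similarity kNN classifier. The toy documents, vocabulary size, and k are invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

def idf(docs):
    """Inverse document frequency for each term over tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return {t: math.log(n / df[t]) for t in df}

def vectorize(doc, weights, vocab):
    tf = Counter(doc)
    return [tf[t] * weights.get(t, 0.0) for t in vocab]

def knn_predict(train_docs, labels, query, k=3, n_feat=10):
    w = idf(train_docs)
    vocab = sorted(w, key=w.get, reverse=True)[:n_feat]  # select by idf too
    tv = [vectorize(d, w, vocab) for d in train_docs]
    qv = vectorize(query, w, vocab)

    def cos(a, b):
        num = sum(x * z for x, z in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(z * z for z in b) ** 0.5)
        return num / (den + 1e-12)

    nearest = sorted(range(len(tv)), key=lambda i: -cos(tv[i], qv))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

Using one measure for both steps keeps the index small (fewer features) and the distance computations cheap, which is where the reported speedup comes from.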

조건부 상호정보를 이용한 분류분석에서의 변수선택 (Efficient variable selection method using conditional mutual information)

  • 안치경;김동욱
    • Journal of the Korean Data and Information Science Society / Vol. 25, No. 5 / pp. 1079-1094 / 2014
  • Variable selection using mutual information is attractive because it detects not only linear but also nonlinear associations between the response and the explanatory variables, and it also accounts for associations among the explanatory variables. However, mutual information is hard to estimate for high-dimensional data, so further research is needed. Cai et al. (2009) addressed this problem with forward selection and pruning based on conditional mutual information, and showed that, on high-dimensional data such as microarray data, an SVM built from variables selected by the conditional-mutual-information-based method classifies better than SVMs built from variables selected by SVM-RFE or conventional filtering methods. However, the Parzen window method used to estimate the conditional mutual information has the drawback that selection time grows rapidly as the number of variables increases. In this paper, we assume a multivariate normal distribution for the explanatory variables when computing conditional mutual information, which shortens the computation time for variable selection while improving its performance. Since assuming multivariate normality can be a strong restriction, we also propose a conditional-mutual-information-based variable selection method that relaxes it using an Edgeworth approximation. Empirical analyses show that the proposed methods outperform existing conditional-mutual-information-based variable selection in both computation speed and classification performance.
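
Under joint normality, the mutual information between two variables reduces to a closed form in the correlation, I(X;Y) = -½ log(1 - ρ²), which is the kind of simplification the normality assumption buys. The sketch below ranks variables by this quantity; the conditional version and the Edgeworth correction in the paper are not reproduced.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((z - mb) ** 2 for z in b) ** 0.5
    return num / (da * db + 1e-12)

def gaussian_mi(a, b):
    """I(X;Y) = -1/2 * log(1 - rho^2) under joint normality."""
    r = pearson(a, b)
    return -0.5 * math.log(max(1.0 - r * r, 1e-12))

def rank_variables(X, y):
    """Order variables by their Gaussian mutual information with y."""
    s = [gaussian_mi([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(range(len(s)), key=lambda j: -s[j])
```

No density estimation is needed, only correlations, which is why the Gaussian route is so much faster than Parzen-window estimation when the number of variables is large.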

Effects of selection index coefficients that ignore reliability on economic weights and selection responses during practical selection

  • Togashi, Kenji;Adachi, Kazunori;Yasumori, Takanori;Kurogi, Kazuhito;Nozaki, Takayoshi;Onogi, Akio;Atagi, Yamato;Takahashi, Tsutomu
    • Asian-Australasian Journal of Animal Sciences / Vol. 31, No. 1 / pp. 19-25 / 2018
  • Objective: In practical breeding, selection is often performed by ignoring the accuracy of evaluations and applying economic weights directly to the selection index coefficients of genetically standardized traits. The denominator of a standardized component trait of the estimated genetic evaluations in practical selection varies with its reliability. Whereas theoretical methods for calculating the selection index coefficients of genetically standardized traits account for this variation, practical selection ignores reliability and assumes that it equals unity for each trait. The purpose of this study was to clarify the effects of ignoring the accuracy of the standardized component traits in the selection criteria on selection responses and on economic weights in retrospect. Methods: Theoretical methods were presented that account for the reliability of estimated genetic evaluations in a selection index composed of genetically standardized traits. Results: Selection responses and economic weights in retrospect resulting from practical selection were greater than those resulting from theoretical selection accounting for reliability when the accuracy of the estimated breeding value (EBV) or genomically enhanced breeding value (GEBV) was lower than those of the other traits in the index, but the opposite occurred when the accuracy of the EBV or GEBV was greater than those of the other traits. This trend was more conspicuous for traits with low economic weights than for those with high weights. Conclusion: Failure of the practical index to account for reliability yielded economic weights in retrospect that differed from those obtained with the theoretical index. Our results indicate that practical indices that ignore reliability delay genetic improvement. Therefore, selection practices need to account for reliability, especially when the reliabilities of the traits included in the index vary widely.
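
As a hypothetical numeric illustration of the distinction (all values invented): a trait's EBV standardized by the true standard deviation of the evaluation, r·σg (theoretical), versus by σg alone (practical, reliability treated as unity), where r denotes accuracy, the square root of reliability.

```python
def standardized_ebv(ebv, sigma_g, reliability, ignore_reliability):
    """Standardize an EBV; the practical index pretends reliability = 1."""
    accuracy = 1.0 if ignore_reliability else reliability ** 0.5
    return ebv / (accuracy * sigma_g)

# Two traits with equal EBV and genetic SD but different reliabilities:
low_rel = standardized_ebv(1.0, 2.0, 0.36, ignore_reliability=False)
high_rel = standardized_ebv(1.0, 2.0, 0.81, ignore_reliability=False)
practical = standardized_ebv(1.0, 2.0, 0.36, ignore_reliability=True)
# The practical index gives both traits the same standardized value, while the
# theoretical index rescales the low-reliability trait's EBV by a larger factor.
```

The gap between the practical and theoretical standardizations is what distorts the economic weights in retrospect when reliabilities differ across traits.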