• Title/Summary/Keyword: Best subset selection

Search results: 29

Ensemble variable selection using genetic algorithm

  • Seogyoung, Lee;Martin Seunghwan, Yang;Jongkyeong, Kang;Seung Jun, Shin
    • Communications for Statistical Applications and Methods, v.29 no.6, pp.629-640, 2022
  • Variable selection is one of the most crucial tasks in supervised learning, such as regression and classification. Best subset selection is straightforward and optimal but not practically applicable unless the number of predictors is small. In this article, we propose directly solving the best subset selection via the genetic algorithm (GA), a popular stochastic optimization algorithm based on the principle of Darwinian evolution. To further improve variable selection performance, we propose running multiple GAs to solve the best subset selection and then synthesizing the results, which we call ensemble GA (EGA). The EGA significantly improves variable selection performance. In addition, the proposed method is essentially best subset selection and hence applicable to a variety of models with different selection criteria. We compare the proposed EGA to existing variable selection methods under various models, including linear regression, Poisson regression, and Cox regression for survival data. Both simulation and real data analysis demonstrate the promising performance of the proposed method.
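
The abstract gives no implementation details, so the following is only a minimal sketch of how several GA runs over binary variable-inclusion chromosomes might be ensembled by majority vote, assuming a linear model scored by BIC; the function names (`bic_score`, `run_ga`, `ensemble_ga`) and all parameter defaults are illustrative, not the authors' method.

```python
import numpy as np

def bic_score(X, y, mask):
    """Negative BIC of an OLS fit on the selected columns (higher is better)."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    n = len(y)
    return -(n * np.log(rss / n) + mask.sum() * np.log(n))

def run_ga(X, y, pop=50, gens=100, p_mut=0.05, rng=None):
    """One GA run: each binary chromosome marks which predictors are kept."""
    if rng is None:
        rng = np.random.default_rng()
    p = X.shape[1]
    popn = rng.random((pop, p)) < 0.5
    for _ in range(gens):
        fit = np.array([bic_score(X, y, ind) for ind in popn])
        # Tournament selection of parents
        idx = [max(rng.choice(pop, 2, replace=False), key=lambda i: fit[i])
               for _ in range(pop)]
        parents = popn[idx]
        # One-point crossover on consecutive pairs
        children = parents.copy()
        for k in range(pop // 2):
            c = rng.integers(1, p)
            children[2 * k, c:], children[2 * k + 1, c:] = (
                parents[2 * k + 1, c:].copy(), parents[2 * k, c:].copy())
        # Bit-flip mutation
        children ^= rng.random(children.shape) < p_mut
        popn = children
    fit = np.array([bic_score(X, y, ind) for ind in popn])
    return popn[fit.argmax()]

def ensemble_ga(X, y, runs=10, threshold=0.5, seed=0):
    """EGA-style synthesis: keep a variable if most GA runs select it."""
    rng = np.random.default_rng(seed)
    picks = np.array([run_ga(X, y, rng=rng) for _ in range(runs)])
    return picks.mean(axis=0) >= threshold
```

The majority-vote threshold trades sparsity against recall of truly relevant predictors; the paper's actual synthesis rule and selection criteria may differ.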

Variable Selection Based on Mutual Information

  • Huh, Moon-Y.;Choi, Byong-Su
    • Communications for Statistical Applications and Methods, v.16 no.1, pp.143-155, 2009
  • A best subset selection procedure based on the mutual information (MI) between a set of explanatory variables and a dependent class variable is suggested. The derivation of the multivariate MI is based on normal mixtures, and several types of normal mixtures are proposed. A best subset selection algorithm is also proposed. Four real data sets are employed to demonstrate the efficiency of the proposals.
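
As a rough illustration only: the paper estimates multivariate MI with normal mixtures, but a single Gaussian per class already gives a closed-form entropy, so the sketch below scores I(X_S; Y) under that cruder assumption and searches subsets exhaustively; the function names and the size cap are hypothetical.

```python
import numpy as np
from itertools import combinations

def gaussian_entropy(X):
    """Differential entropy (nats) of a multivariate Gaussian fitted to X."""
    d = X.shape[1]
    cov = np.cov(X, rowvar=False).reshape(d, d)
    _, logdet = np.linalg.slogdet(cov + 1e-8 * np.eye(d))
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def subset_mi(X, y, subset):
    """I(X_S; Y) under a single-Gaussian-per-class approximation."""
    Xs = X[:, list(subset)]
    h_cond = 0.0
    for c in np.unique(y):
        Xc = Xs[y == c]
        h_cond += (len(Xc) / len(y)) * gaussian_entropy(Xc)
    return gaussian_entropy(Xs) - h_cond

def best_subset_by_mi(X, y, max_size=3):
    """Score all subsets up to max_size and return the one with the largest MI.
    Note: estimated MI tends to grow with subset size, so in practice the size
    is capped or a penalty is added."""
    p = X.shape[1]
    best, best_mi = None, -np.inf
    for k in range(1, max_size + 1):
        for subset in combinations(range(p), k):
            mi = subset_mi(X, y, subset)
            if mi > best_mi:
                best, best_mi = subset, mi
    return best, best_mi
```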

Improvement of Classification Accuracy on Success and Failure Factors in Software Reuse using Feature Selection (특징 선택을 이용한 소프트웨어 재사용의 성공 및 실패 요인 분류 정확도 향상)

  • Kim, Young-Ok;Kwon, Ki-Tae
    • KIPS Transactions on Software and Data Engineering, v.2 no.4, pp.219-226, 2013
  • Feature selection is one of the important issues in the fields of machine learning and pattern recognition. It is the technique of finding, from the source data, the subset of features that gives the best classification performance, i.e., the subset most closely related to the purpose of the classification. In this paper, we experimented with selecting the best feature subset to improve classification accuracy when classifying success and failure factors in software reuse, and we compared the results with existing studies. As a result, we found that the feature subset selected in this study showed better classification accuracy.

A Diagnostic Feature Subset Selection of Breast Tumor Based on Neighborhood Rough Set Model (Neighborhood 러프집합 모델을 활용한 유방 종양의 진단적 특징 선택)

  • Son, Chang-Sik;Choi, Rock-Hyun;Kang, Won-Seok;Lee, Jong-Ha
    • Journal of Korea Society of Industrial Information Systems, v.21 no.6, pp.13-21, 2016
  • Feature selection is one of the important issues in the fields of data mining and machine learning. It is the technique of finding, from the source data, a subset of features that provides the best classification performance. We propose a feature subset selection method using the neighborhood rough set model based on information granularity. To demonstrate the effectiveness of the proposed method, it was applied to select the useful features associated with breast tumor diagnosis from 298 shape features extracted from 5,252 breast ultrasound images, comprising 2,745 benign and 2,507 malignant cases. Experimental results showed that 19 diagnostic features were strong predictors of breast cancer diagnosis, with an average classification accuracy of 97.6%.
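
A minimal sketch of neighborhood rough set style selection, assuming features are min-max scaled to [0, 1] and using a greedy forward search driven by the neighborhood dependency degree; the threshold `delta`, the helper names, and the stopping rule are illustrative, and the paper's information-granularity formulation may differ.

```python
import numpy as np

def dependency(X, y, feats, delta=0.15):
    """Fraction of samples whose delta-neighborhood (on the chosen features)
    is pure in class label, i.e. the neighborhood positive region.
    X: features scaled to [0, 1]; y: 1-D numpy array of class labels."""
    Xs = X[:, feats]
    dist = np.sqrt(((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1))
    consistent = 0
    for i in range(len(y)):
        neighbors = y[dist[i] <= delta]
        if np.all(neighbors == y[i]):
            consistent += 1
    return consistent / len(y)

def neighborhood_reduct(X, y, delta=0.15, tol=1e-6):
    """Greedy forward selection: add the feature that most increases dependency."""
    remaining = list(range(X.shape[1]))
    selected, current = [], 0.0
    while remaining:
        gains = [(dependency(X, y, selected + [f], delta), f) for f in remaining]
        best_dep, best_f = max(gains)
        if best_dep - current <= tol:
            break
        selected.append(best_f)
        remaining.remove(best_f)
        current = best_dep
    return selected
```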

Microblog User Geolocation by Extracting Local Words Based on Word Clustering and Wrapper Feature Selection

  • Tian, Hechan;Liu, Fenlin;Luo, Xiangyang;Zhang, Fan;Qiao, Yaqiong
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.10, pp.3972-3988, 2020
  • Existing methods rely on statistical features to extract local words for microblog user geolocation, so many non-local words remain among the extracted words, which lowers geolocation accuracy. Considering both the statistical and the semantic features of local words, this paper proposes a microblog user geolocation method that extracts local words based on word clustering and wrapper feature selection. First, ordinary words without positional indications are filtered out based on statistical features. Second, a word clustering algorithm based on word vectors is proposed: the remaining, semantically similar words are clustered together according to the distance between their word vectors. Next, a wrapper feature selection algorithm based on sequential backward subset search is proposed, and the cluster subset with the best geolocation effect is selected; the words in the selected cluster subset are extracted as local words. Finally, a naive Bayes classifier is trained on the local words to geolocate the microblog user. The proposed method is validated on two different types of microblog data, Twitter and Weibo. The results show that it outperforms two typical existing methods based on statistical features in terms of accuracy, precision, recall, and F1-score.
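
The sketch below illustrates only the wrapper stage described above: a sequential backward search over pre-computed word clusters, scored by the cross-validated accuracy of a naive Bayes geolocation classifier (the classifier the abstract names). The clustering step is assumed to have been done already (e.g., k-means on word vectors), and all names, the count matrix, and the CV setup are placeholders rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def backward_cluster_selection(X_counts, y_region, clusters):
    """Sequential backward search over word clusters: repeatedly drop the
    cluster whose removal does not hurt cross-validated accuracy.

    X_counts: document-term count matrix (numpy array).
    y_region: region labels per user/document.
    clusters: list of lists of column indices, one list per word cluster.
    """
    def score(active):
        cols = [c for k in active for c in clusters[k]]
        if not cols:
            return 0.0
        return cross_val_score(MultinomialNB(), X_counts[:, cols], y_region,
                               cv=3).mean()

    active = list(range(len(clusters)))
    best = score(active)
    improved = True
    while improved and len(active) > 1:
        improved = False
        for k in list(active):
            trial = [c for c in active if c != k]
            s = score(trial)
            if s >= best:  # dropping cluster k does not reduce accuracy
                best, active, improved = s, trial, True
                break
    return active, best
```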

Development and implementation of statistical prediction procedure for field penetration index using ridge regression with best subset selection (최상부분집합이 고려된 능형회귀를 적용한 현장관입지수에 대한 통계적 예측기법 개발 및 적용)

  • Lee, Hang-Lo;Song, Ki-Il;Kim, Kyoung Yul
    • Journal of Korean Tunnelling and Underground Space Association, v.19 no.6, pp.857-870, 2017
  • The use of shield TBMs is gradually increasing with the urbanization of social infrastructure. Reliable estimation of the advance rate is essential for accurately estimating the construction period and cost, which requires a prediction model of the advance rate that reasonably accounts for ground properties. Based on a database collected in the field, a statistical prediction procedure for the field penetration index (FPI) was modularized in this study to calculate the penetration rate of a shield TBM. FPI was selected as the output parameter, and the module includes a procedure for eliminating abnormal data, preprocessing of the dataset, and ridge regression with best subset selection. The module was finally validated using a field dataset.
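
A hedged sketch of the core idea only: an exhaustive subset search where each candidate subset of ground-property predictors is fitted by ridge regression and scored by cross-validated error, keeping the subset with the lowest error. The penalty `alpha`, the scoring, and the function name are assumptions, not the paper's exact module.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def ridge_best_subset(X, y, alpha=1.0, max_size=None):
    """Best subset search with a ridge fit on each candidate subset.
    Exhaustive enumeration is only feasible for a modest number of predictors."""
    p = X.shape[1]
    max_size = max_size or p
    best_subset, best_mse = None, np.inf
    for k in range(1, max_size + 1):
        for subset in combinations(range(p), k):
            mse = -cross_val_score(Ridge(alpha=alpha), X[:, list(subset)], y,
                                   scoring="neg_mean_squared_error", cv=5).mean()
            if mse < best_mse:
                best_subset, best_mse = subset, mse
    return best_subset, best_mse
```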

A Hybrid Feature Selection Method using Univariate Analysis and LVF Algorithm (단변량 분석과 LVF 알고리즘을 결합한 하이브리드 속성선정 방법)

  • Lee, Jae-Sik;Jeong, Mi-Kyoung
    • Journal of Intelligence and Information Systems, v.14 no.4, pp.179-200, 2008
  • We develop a feature selection method that can improve both the efficiency and the effectiveness of a classification technique. In this research, we employ case-based reasoning as the classification technique. This research integrates two existing feature selection methods, univariate analysis and the LVF algorithm. First, we sift predictive features from the whole set of features using univariate analysis. Then, we generate all possible subsets of these predictive features and measure the inconsistency rate of each subset using the LVF algorithm. Finally, the subset having the lowest inconsistency rate is selected as the best subset of features. We measure the performance of our feature selection method using data obtained from the UCI Machine Learning Repository and compare it with that of existing methods. The number of selected features and the accuracy of our feature selection method are satisfactory, showing that improvements in both efficiency and effectiveness are achieved.
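
A minimal sketch of the two-stage idea described above, assuming discretized features for the inconsistency count and using an F-statistic filter as a stand-in for the paper's univariate analysis; all names and the candidate count are illustrative.

```python
import numpy as np
from itertools import combinations
from collections import Counter, defaultdict
from sklearn.feature_selection import f_classif

def inconsistency_rate(X_disc, y, subset):
    """LVF-style inconsistency: for every identical feature pattern, count the
    instances that do not belong to that pattern's majority class."""
    groups = defaultdict(list)
    for row, label in zip(X_disc[:, list(subset)], y):
        groups[tuple(row)].append(label)
    inconsistent = sum(len(labels) - Counter(labels).most_common(1)[0][1]
                       for labels in groups.values())
    return inconsistent / len(y)

def hybrid_select(X_disc, y, n_candidates=8):
    """Univariate filter followed by exhaustive inconsistency-rate search."""
    # Step 1: keep the n_candidates features with the highest F-statistic.
    f_stat, _ = f_classif(X_disc, y)
    candidates = np.argsort(f_stat)[::-1][:n_candidates]
    # Step 2: enumerate all non-empty subsets; since subset size increases
    # and ties are broken by the strict inequality, the smallest subset
    # attaining the minimum inconsistency rate wins.
    best_subset, best_rate = None, np.inf
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            rate = inconsistency_rate(X_disc, y, subset)
            if rate < best_rate:
                best_subset, best_rate = subset, rate
    return best_subset, best_rate
```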

Selection Conditional on Associated Measurements

  • Yeo, Woon-Bang
    • Journal of the Korean Statistical Society, v.12 no.2, pp.110-114, 1983
  • In this paper, a random subset selection procedure for the choice of the k best objects out of n primary measurements $Y_t$ is considered when only the associated measurements $X_t$ are available. In contrast to Yeo and David (1992), where only the ranks of the X's are needed, the present procedure uses the observed X-values. The approach is illustrated numerically when X and Y are bivariate normal and the standard deviation of X is known.
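
Purely as an illustration of selection through an associated measurement, not the paper's exact rule: the Monte Carlo sketch below retains every object whose X-value is within a constant d of the k-th largest X and estimates how often the retained set captures the k objects with the largest unobserved Y-values under a bivariate normal model; all names and defaults are hypothetical.

```python
import numpy as np

def conditional_selection_mc(n=10, k=2, rho=0.8, d=0.5, reps=20000, seed=2):
    """Estimate P(all k best-Y objects are retained) and the mean subset size
    when objects are retained by their observed X-values alone."""
    rng = np.random.default_rng(seed)
    hits, sizes = 0, 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
        cutoff = np.sort(x)[-k]          # k-th largest X-value
        keep = x >= cutoff - d           # retain objects with large X
        best_y = np.argsort(y)[-k:]      # indices of the k largest Y-values
        hits += keep[best_y].all()
        sizes += keep.sum()
    return hits / reps, sizes / reps
```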

Operating characteristics of a subset selection procedure for selecting the best normal population with common unknown variance (최고의 정규 모집단을 뽑기 위한 부분집합선택절차론의 운용특성에 관한 연구)

  • ;Shanti S. Gupta
    • The Korean Journal of Applied Statistics, v.3 no.1, pp.59-78, 1990
  • The subset selection approach introduced by Gupta plays an important role in multiple decision procedures. For the normal means problem with a common unknown variance, some operating characteristics of the selection procedure have been investigated via Monte Carlo simulation. Some properties, including the efficiency of the selection procedure, are also examined when the data are contaminated.
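
A hedged Monte Carlo sketch of Gupta-type subset selection for normal means with a common unknown variance: population i is retained when its sample mean is within d * s / sqrt(n) of the largest sample mean. The constant d is hand-picked here, whereas in the literature it is chosen from tables so that the probability of a correct selection meets a prescribed P*; all names are illustrative.

```python
import numpy as np

def gupta_subset_mc(means, sigma=1.0, n=10, d=2.0, reps=5000, seed=1):
    """Estimate the probability of a correct selection (the best population is
    retained) and the expected subset size under Gupta's rule:
    retain i whenever xbar_i >= max_j xbar_j - d * s_pooled / sqrt(n)."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    k, best = len(means), means.argmax()
    correct, sizes = 0, 0
    for _ in range(reps):
        samples = rng.normal(means[:, None], sigma, size=(k, n))
        xbar = samples.mean(axis=1)
        s_pooled = np.sqrt(samples.var(axis=1, ddof=1).mean())  # common variance estimate
        keep = xbar >= xbar.max() - d * s_pooled / np.sqrt(n)
        correct += keep[best]
        sizes += keep.sum()
    return correct / reps, sizes / reps

# Example: three normal populations; the rule should almost always retain the best one.
# pcs, avg_size = gupta_subset_mc([0.0, 0.0, 0.5], sigma=1.0, n=20, d=2.5)
```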

Prediction model of hypercholesterolemia using body fat mass based on machine learning (머신러닝 기반 체지방 측정정보를 이용한 고콜레스테롤혈증 예측모델)

  • Lee, Bum Ju
    • The Journal of the Convergence on Culture Technology, v.5 no.4, pp.413-420, 2019
  • The purpose of the present study is to develop a model for predicting hypercholesterolemia using an integrated set of body fat mass variables based on machine learning techniques, going beyond studying the association between body fat mass and hypercholesterolemia. For this study, a total of six models were created using two variable subset selection methods and machine learning algorithms, based on Korea National Health and Nutrition Examination Survey (KNHANES) data. Among the various body fat mass variables, we found that trunk fat mass was the best variable for predicting hypercholesterolemia. Furthermore, we obtained an area under the receiver operating characteristic curve of 0.739 and a Matthews correlation coefficient of 0.36 in the model using correlation-based feature subset selection and the naive Bayes algorithm. Our findings are expected to serve as important information for disease prediction in large-scale screening and public health research.
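
As a sketch of the pipeline the abstract names (correlation-based feature subset selection followed by naive Bayes), the code below uses the standard CFS merit with a greedy forward search and reports cross-validated AUC; the merit formula is the usual CFS one, but the search strategy, data handling, and all names are assumptions rather than the study's exact procedure.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def cfs_merit(X, y, subset):
    """CFS merit: reward high feature-class correlation, penalize redundancy."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y):
    """Greedy forward search that adds features while the CFS merit improves."""
    remaining = list(range(X.shape[1]))
    selected, best = [], -np.inf
    while remaining:
        scored = [(cfs_merit(X, y, selected + [f]), f) for f in remaining]
        merit, f = max(scored)
        if merit <= best:
            break
        selected.append(f)
        remaining.remove(f)
        best = merit
    return selected

def evaluate(X, y):
    """Fit naive Bayes on the CFS-selected subset, report cross-validated AUC."""
    subset = cfs_forward(X, y)
    auc = cross_val_score(GaussianNB(), X[:, subset], y,
                          scoring="roc_auc", cv=5).mean()
    return subset, auc
```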