• Title/Summary/Keyword: Subset selection


Ensemble variable selection using genetic algorithm

  • Seogyoung, Lee;Martin Seunghwan, Yang;Jongkyeong, Kang;Seung Jun, Shin
    • Communications for Statistical Applications and Methods / Vol.29 No.6 / pp.629-640 / 2022
  • Variable selection is one of the most crucial tasks in supervised learning, such as regression and classification. Best subset selection is straightforward and optimal but not practically applicable unless the number of predictors is small. In this article, we propose solving best subset selection directly via the genetic algorithm (GA), a popular stochastic optimization algorithm based on the principle of Darwinian evolution. To further improve variable selection performance, we propose running multiple GAs to solve the best subset selection problem and then synthesizing the results, which we call the ensemble GA (EGA). The EGA significantly improves variable selection performance. In addition, the proposed method is essentially best subset selection and is hence applicable to a variety of models with different selection criteria. We compare the proposed EGA to existing variable selection methods under various models, including linear regression, Poisson regression, and Cox regression for survival data. Both simulation and real data analyses demonstrate the promising performance of the proposed method.
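As a rough illustration of the ensemble idea, the sketch below runs a small GA over binary inclusion vectors several times and aggregates the runs into per-predictor selection frequencies. This is a minimal sketch, not the authors' implementation: the fitness here is BIC for an ordinary least squares fit, and all function names and GA settings (population size, mutation rate, number of runs) are assumptions.

```python
import numpy as np

def bic_fitness(mask, X, y):
    """BIC of the OLS fit on the selected columns (lower is better)."""
    n = len(y)
    if mask.sum() == 0:
        return np.inf
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    rss = float(((y - X[:, mask] @ beta) ** 2).sum())
    return n * np.log(rss / n) + int(mask.sum()) * np.log(n)

def run_ga(X, y, pop=30, gens=30, mut=0.05, rng=None):
    """One GA run over binary inclusion vectors; returns the best mask found."""
    rng = np.random.default_rng() if rng is None else rng
    p = X.shape[1]
    popu = rng.random((pop, p)) < 0.5              # random initial population
    for _ in range(gens):
        scores = np.array([bic_fitness(m, X, y) for m in popu])
        popu = popu[np.argsort(scores)]            # sort by fitness
        elite = popu[: pop // 2]                   # keep the better half
        children = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            mix = rng.random(p) < 0.5              # uniform crossover
            child = np.where(mix, a, b) ^ (rng.random(p) < mut)  # plus mutation
            children.append(child)
        popu = np.vstack([elite, children])
    scores = np.array([bic_fitness(m, X, y) for m in popu])
    return popu[np.argmin(scores)]

def ensemble_ga(X, y, runs=5, seed=0):
    """Run the GA several times and return per-predictor selection frequencies."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(X.shape[1])
    for _ in range(runs):
        votes += run_ga(X, y, rng=rng)
    return votes / runs
```

Predictors whose frequency is close to 1 are selected in nearly every run, which is the kind of ensemble vote the abstract describes.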

Self-adaptive and Bidirectional Dynamic Subset Selection Algorithm for Digital Image Correlation

  • Zhang, Wenzhuo;Zhou, Rong;Zou, Yuanwen
    • Journal of Information Processing Systems / Vol.13 No.2 / pp.305-320 / 2017
  • The selection of subset size is of great importance to the accuracy of digital image correlation (DIC). In traditional DIC, a constant subset size is used for computing the entire image, which overlooks the differences among the local speckle patterns of the image. Besides, it is very laborious to find the optimal global subset size for a speckle image. In this paper, a self-adaptive and bidirectional dynamic subset selection (SBDSS) algorithm is proposed to make the subset sizes vary according to their local speckle patterns, which ensures that every subset size is suitable and optimal. The sum of subset intensity variation (η) is defined as the assessment criterion to quantify the subset information. Both the threshold and the initial guess of the subset size in the SBDSS algorithm are self-adaptive to different images. To analyze the performance of the proposed algorithm, both numerical and laboratory experiments were performed. In the numerical experiments, images with different speckle distributions, deformations, and noise levels were processed by both traditional DIC and the proposed algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy than traditional DIC. Laboratory experiments performed on a substrate also demonstrate that the proposed algorithm is effective in selecting an appropriate subset size for each point.
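The assessment criterion η (sum of subset intensity variation) can be illustrated with a toy version. Everything below is an assumption for illustration only: η is computed here as the sum of absolute deviations from the subset's mean intensity, and the selection loop simply grows the window until η reaches a threshold; the actual SBDSS algorithm is bidirectional and self-adaptive in ways this sketch does not reproduce.

```python
import numpy as np

def subset_eta(image, cy, cx, half):
    """Sum of intensity variation inside the (2*half+1)^2 subset around (cy, cx)."""
    win = image[cy - half: cy + half + 1, cx - half: cx + half + 1]
    return float(np.abs(win - win.mean()).sum())

def adaptive_subset_size(image, cy, cx, eta_threshold, start=2, max_half=15):
    """Grow the subset until it carries enough intensity variation (toy criterion)."""
    for half in range(start, max_half + 1):
        if subset_eta(image, cy, cx, half) >= eta_threshold:
            return 2 * half + 1        # subset size in pixels
    return 2 * max_half + 1            # fall back to the largest allowed subset
```

On a richly speckled region the loop stops early with a small subset; on a flat region it keeps growing, which mirrors the motivation for per-point subset sizes.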

Feature Subset Selection Algorithm Based on Entropy

  • 홍석미;안종일;정태충
    • 전자공학회논문지CI / Vol.41 No.2 / pp.87-94 / 2004
  • Feature subset selection can be used as a pre-processing step for learning algorithms. When the collected data are irrelevant to the problem or contain redundant information, removing them before the learning model is built improves learning performance, reduces the search space, and saves storage. In this paper, we propose a new feature selection algorithm that uses an entropy-based heuristic function to extract feature subsets and to evaluate the performance of the extracted subsets; the ACS (ant colony system) algorithm is used as the search method. As a result, by reducing the dimensionality of the features used for learning, both the size of the learning model and unnecessary computation time were reduced.
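The entropy-based evaluation at the heart of such a heuristic can be sketched as follows; the ACS search itself is omitted, and the function names are assumptions, not the paper's implementation. A subset that drives the conditional entropy of the class toward zero explains the class well.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (nats) of a sequence of discrete labels."""
    n = len(labels)
    return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

def conditional_entropy(features, labels):
    """H(class | feature subset): lower means the subset explains the class better."""
    n = len(labels)
    groups = {}
    for row, lab in zip(map(tuple, features), labels):
        groups.setdefault(row, []).append(lab)   # group labels by feature pattern
    return sum(len(g) / n * entropy(g) for g in groups.values())
```

A search procedure (ACS in the paper) would then explore subsets, preferring those with low conditional entropy and few features.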

Variable Selection Based on Mutual Information

  • Huh, Moon-Y.;Choi, Byong-Su
    • Communications for Statistical Applications and Methods / Vol.16 No.1 / pp.143-155 / 2009
  • A best subset selection procedure based on the mutual information (MI) between a set of explanatory variables and a dependent class variable is suggested. The derivation of the multivariate MI is based on normal mixtures, and several types of normal mixtures are proposed. A best subset selection algorithm is also proposed. Four real data sets are employed to demonstrate the efficiency of the proposals.
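A plug-in MI estimator for discrete data gives the flavor of MI-based variable ranking, though the paper derives the multivariate MI from normal mixtures rather than the simple empirical estimate used here; all names below are assumptions.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (nats) between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def rank_by_mi(X, y):
    """Rank columns of X by their MI with the class variable y (highest first)."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1], scores
```

A best-subset search would then evaluate MI between candidate subsets and the class rather than one variable at a time; the ranking above is only the marginal screening step.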

Subset Selection in the Poisson Models: A Normal Predictors Case

  • 박종선
    • 응용통계연구 / Vol.11 No.2 / pp.247-255 / 1998
  • We consider the problem of selecting explanatory variables in the Poisson model, one of the generalized linear models. For the case where the explanatory variables are random variables following a normal distribution, we present a method that selects the subset of explanatory variables needed in the model through the conditional distribution of the response variable.


A Robust Subset Selection Procedure for Location Parameter Based on Hodges-Lehmann Estimators

  • Lee, Kang Sup
    • 품질경영학회지 / Vol.19 No.1 / pp.51-64 / 1991
  • This paper deals with a robust subset selection procedure based on Hodges-Lehmann estimators of location parameters. An improved formula for the estimated standard error of the Hodges-Lehmann estimators is considered. The degrees of freedom of the studentized Hodges-Lehmann estimators are also investigated, and it is suggested to use 0.8n instead of n-1. The proposed procedure is compared with other subset selection procedures and is shown to have good efficiency for heavy-tailed distributions.
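The Hodges-Lehmann location estimator underlying such procedures is the median of all pairwise Walsh averages. A minimal sketch follows; the paper's standard-error formula and the 0.8n degrees-of-freedom suggestion are not reproduced here.

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """One-sample Hodges-Lehmann estimator: median of all Walsh averages (x_i + x_j)/2, i <= j."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
    return float(np.median(walsh))
```

Unlike the sample mean, the estimator is barely moved by a single outlier, which is why it suits subset selection under heavy-tailed distributions.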


Feature Selection Method by Information Theory and Particle Swarm Optimization

  • 조재훈;이대종;송창규;전명근
    • 한국지능시스템학회논문지 / Vol.19 No.2 / pp.191-196 / 2009
  • In this paper, we propose a feature selection method based on BPSO (Binary Particle Swarm Optimization) and mutual information. The proposed method consists of two stages: selecting a candidate feature subset using mutual information, and selecting the optimal feature subset using BPSO. In the candidate-selection stage, the mutual information of each feature is evaluated independently, and a preset number of top-ranked features is chosen as the candidate subset. In the optimal-selection stage, BPSO searches the candidate subset for the optimal feature subset. The objective of the BPSO is a multi-objective function combining the classifier's accuracy and the number of selected features. Gene-expression data were used to evaluate the proposed method, and the experimental results show that it outperforms existing methods.
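The BPSO stage can be sketched with the standard sigmoid-transfer binary PSO. This is a generic sketch, not the paper's implementation: instead of a classifier-accuracy term, the toy objective below rewards per-feature relevance scores and penalizes subset size, and all names and parameter values are assumptions.

```python
import numpy as np

def bpso(objective, dim, particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over 0/1 vectors with binary particle swarm optimization."""
    rng = np.random.default_rng(seed)
    pos = (rng.random((particles, dim)) < 0.5).astype(int)
    vel = rng.uniform(-1, 1, (particles, dim))
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = float(pbest_val.min())
    for _ in range(iters):
        r1 = rng.random((particles, dim))
        r2 = rng.random((particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))            # sigmoid transfer function
        pos = (rng.random((particles, dim)) < prob).astype(int)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val                    # update personal bests
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if vals.min() < g_val:                       # update global best
            g_val = float(vals.min())
            g = pos[np.argmin(vals)].copy()
    return g, g_val

# Toy multi-objective: reward per-feature relevance, penalize subset size.
scores = np.array([0.5, 0.1, 0.9, 0.05, 0.3])
objective = lambda m: -(scores * m).sum() + 0.2 * m.sum()
best_mask, best_val = bpso(objective, dim=5, seed=3)
```

In the paper's setting, `objective` would be replaced by the multi-objective function over classifier accuracy and feature count, evaluated on the MI-filtered candidate features.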

ModifiedFAST: A New Optimal Feature Subset Selection Algorithm

  • Nagpal, Arpita;Gaur, Deepti
    • Journal of information and communication convergence engineering / Vol.13 No.2 / pp.113-122 / 2015
  • Feature subset selection is a pre-processing step in learning algorithms. In this paper, we propose an efficient algorithm, ModifiedFAST, for feature subset selection. This algorithm is suitable for text datasets and uses the concept of information gain to remove irrelevant and redundant features. A new optimal value of the threshold for symmetric uncertainty, used to identify relevant features, is found. The thresholds used by previous feature selection algorithms such as FAST, Relief, and CFS were not optimal. It is shown that the threshold value greatly affects the percentage of selected features and the classification accuracy. A new unified performance metric that combines accuracy and the number of features selected is proposed and applied in the proposed algorithm. Experiments show that the percentage of features selected by the proposed algorithm was lower than that obtained using existing algorithms on most of the datasets. The effectiveness of our algorithm at the optimal threshold was statistically validated against the other algorithms.
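FAST-style filters score each feature by its symmetric uncertainty with the class, SU(X, C) = 2·I(X; C) / (H(X) + H(C)), and keep features above a threshold. The sketch below shows the SU computation and a threshold filter; the threshold value is an arbitrary placeholder, not the optimal value the paper derives.

```python
import numpy as np
from collections import Counter

def entropy(v):
    """Shannon entropy (bits) of a sequence of discrete values."""
    n = len(v)
    return -sum((c / n) * np.log2(c / n) for c in Counter(v).values())

def symmetric_uncertainty(x, c):
    """SU(X, C) = 2 * I(X; C) / (H(X) + H(C)), normalized to [0, 1]."""
    hx, hc = entropy(x), entropy(c)
    hxc = entropy(list(zip(x, c)))     # joint entropy H(X, C)
    info_gain = hx + hc - hxc          # I(X; C)
    denom = hx + hc
    return 2 * info_gain / denom if denom > 0 else 0.0

def filter_relevant(feature_columns, c, threshold=0.1):
    """Keep indices of features whose SU with the class exceeds the threshold."""
    return [j for j, col in enumerate(feature_columns)
            if symmetric_uncertainty(col, c) > threshold]
```

FAST additionally clusters the surviving features and keeps one representative per cluster to remove redundancy; that step is omitted here.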

A Bayesian Variable Selection Method for Binary Response Probit Regression

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / Vol.28 No.2 / pp.167-182 / 1999
  • This article is concerned with the selection of subsets of predictor variables to be included in building a binary response probit regression model. Based on a Bayesian approach, it proposes and develops a procedure that uses probabilistic considerations for selecting promising subsets. The procedure reformulates the probit regression setup as a hierarchical normal mixture model by introducing a set of hyperparameters that are used to identify subset choices. The posterior probability of each subset of predictor variables is obtained through the Gibbs sampler, which samples indirectly from the multinomial posterior distribution on the set of possible subset choices. Thus, the most promising subset of predictors can be identified as the one with the highest posterior probability. To highlight the merit of this procedure, a couple of illustrative numerical examples are given.
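The paper's Gibbs sampler is not reproduced here; instead, for a small number of predictors, the same idea of scoring every subset by a posterior probability can be illustrated by enumerating subsets and weighting them by exp(-BIC/2), shown below for a linear model. The BIC approximation, the linear (rather than probit) likelihood, and all names are assumptions, not the author's method.

```python
import numpy as np
from itertools import combinations

def bic(X, y, subset):
    """BIC of the OLS fit with intercept on the listed predictor columns."""
    n = len(y)
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = ((y - Z @ beta) ** 2).sum()
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

def posterior_over_subsets(X, y):
    """Approximate posterior probability of each predictor subset via exp(-BIC/2)."""
    p = X.shape[1]
    subsets = [s for r in range(p + 1) for s in combinations(range(p), r)]
    logw = np.array([-0.5 * bic(X, y, s) for s in subsets])
    w = np.exp(logw - logw.max())      # stabilize before normalizing
    w /= w.sum()
    return dict(zip(subsets, w))
```

As in the paper, the most promising subset is then the one with the highest posterior probability; the Gibbs sampler makes the same computation feasible when enumerating all subsets is not.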


Subset selection in multiple linear regression: An improved Tabu search

  • Bae, Jaegug;Kim, Jung-Tae;Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology / Vol.40 No.2 / pp.138-145 / 2016
  • This paper proposes an improved tabu search method for subset selection in multiple linear regression models. Variable selection is a vital combinatorial optimization problem in multivariate statistics: selecting the optimal subset of variables is necessary in order to reliably construct a multiple linear regression model. Its applications range widely from machine learning, time-series prediction, and multi-class classification to noise detection. Since the problem is NP-complete, it becomes harder to find the optimal solution as the number of variables increases. Two typical metaheuristic methods have been developed to tackle the problem: the tabu search algorithm and a hybrid genetic and simulated annealing algorithm. However, both have shortcomings: the tabu search method requires a large amount of computing time, and the hybrid algorithm produces a less accurate solution. To overcome these shortcomings, we propose an improved tabu search algorithm that reduces the neighborhood moves and adopts an effective move search strategy. To evaluate the performance of the proposed method, comparative studies are performed on small data sets from the literature and on large simulated data sets. Computational results show that the proposed method outperforms the two metaheuristic methods in terms of computing time and solution quality.
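A baseline tabu search for this problem can be sketched as follows: the neighborhood is all single-bit flips of the inclusion mask, a short tabu list blocks recently flipped variables, and an aspiration criterion admits a tabu move that beats the best solution found. This is the plain method the paper improves upon, not the paper's reduced-neighborhood algorithm; the BIC objective and all names are assumptions.

```python
import numpy as np
from collections import deque

def bic(mask, X, y):
    """BIC of the OLS fit with intercept on the columns flagged in `mask`."""
    n = len(y)
    cols = np.flatnonzero(mask)
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = ((y - Z @ beta) ** 2).sum()
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

def tabu_search(X, y, iters=100, tenure=5, seed=0):
    """Single-bit-flip tabu search over inclusion masks, minimizing BIC."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    cur = (rng.random(p) < 0.5).astype(int)
    cur_val = bic(cur, X, y)
    best, best_val = cur.copy(), cur_val
    tabu = deque(maxlen=tenure)                # recently flipped variable indices
    for _ in range(iters):
        vals = []
        for j in range(p):                     # evaluate the one-flip neighborhood
            nb = cur.copy()
            nb[j] ^= 1
            vals.append(bic(nb, X, y))
        for j in np.argsort(vals):             # best admissible move first
            if j not in tabu or vals[j] < best_val:   # aspiration criterion
                cur[j] ^= 1
                cur_val = vals[j]
                tabu.append(int(j))
                break
        if cur_val < best_val:
            best, best_val = cur.copy(), cur_val
    return best, best_val
```

Evaluating all p flips per iteration is exactly the cost the paper's reduced-move strategy targets: shrinking the candidate move set cuts the per-iteration regression fits.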