• Title/Summary/Keyword: Classifier Ensemble Selection (분류기 앙상블 선택)

Optimal Classifier Ensemble for Lymphoma Cancer Using Genetic Algorithm (유전자 알고리즘을 이용한 림프종 암의 최적 분류기 앙상블)

  • 박찬호;조성배
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.356-358
    • /
    • 2003
  • Advances in DNA microarray technology have made it possible to measure the expression levels of thousands of genes at once. A system that classifies such data effectively can predict whether a new sample is normal or diseased. Many feature selection methods and classification techniques are available for building such a system, but no single feature selection method or classifier performs best in every situation. An ensemble of feature-classifier pairs can provide stable, improved performance; however, when many feature selection methods and classifiers are available, the number of possible ensembles grows so large that evaluating every combination is practically infeasible. To address this, this paper proposes a method that uses a genetic algorithm to find the optimal ensemble without computing every ensemble result. Applied to a lymphoma cancer dataset, the method efficiently found optimal ensembles whose combined classification result reached 100%.

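The combinatorial search this abstract describes can be sketched with a small genetic algorithm: a binary chromosome marks which pool members join the ensemble, and fitness is majority-vote accuracy on a validation set. The pool below is simulated prediction data, not the paper's feature-classifier pairs, and the GA settings are illustrative assumptions.

```python
import random

random.seed(0)

# Simulated pool: each "feature-classifier pair" is reduced to its vector
# of predictions on a small validation set (toy data, not the paper's).
N_SAMPLES, POOL = 20, 8
truth = [random.randint(0, 1) for _ in range(N_SAMPLES)]
pool = [[t if random.random() < acc else 1 - t for t in truth]
        for acc in [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]]

def fitness(mask):
    """Majority-vote accuracy of the ensemble encoded by a binary mask."""
    members = [p for p, bit in zip(pool, mask) if bit]
    if not members:
        return 0.0
    votes = [1 if sum(col) * 2 > len(members) else 0 for col in zip(*members)]
    return sum(v == t for v, t in zip(votes, truth)) / N_SAMPLES

def evolve(pop_size=20, gens=30):
    """Tiny GA: elitist selection, one-point crossover, one-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(POOL)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite, kids = pop[:pop_size // 2], []
        while len(elite) + len(kids) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, POOL)
            kid = a[:cut] + b[cut:]
            kid[random.randrange(POOL)] ^= 1   # flip one gene (mutation)
            kids.append(kid)
        pop = elite + kids
    return max(pop, key=fitness)

best = evolve()   # good ensemble found without enumerating all 2**8 subsets
```

The point of the sketch is the cost profile: the GA evaluates a few hundred candidate ensembles instead of all 2^POOL combinations, which is what makes the search feasible when the pool is large.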

Comparison of ensemble pruning methods using Lasso-bagging and WAVE-bagging (분류 앙상블 모형에서 Lasso-bagging과 WAVE-bagging 가지치기 방법의 성능비교)

  • Kwak, Seungwoo;Kim, Hyunjoong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.6
    • /
    • pp.1371-1383
    • /
    • 2014
  • Classification ensemble is a technique that combines diverse classifiers to enhance classification accuracy. An ensemble method is known to succeed when the classifiers participating in the ensemble are accurate and diverse. In practice, however, an ensemble often includes less accurate and similar classifiers alongside accurate and diverse ones. Ensemble pruning methods construct an ensemble by choosing only accurate and diverse classifiers. In this article, we propose an ensemble pruning method called WAVE-bagging and compare its results with those of the existing pruning method called Lasso-bagging. We show that WAVE-bagging performs better than Lasso-bagging through an extensive empirical comparison using 26 real datasets.
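The Lasso side of this comparison can be sketched as follows: fit an L1-penalized regression of the true labels on the base classifiers' prediction vectors, then keep only members that receive nonzero weight. This is a minimal coordinate-descent sketch on simulated predictions, with an assumed penalty value, not the authors' implementation.

```python
import random

random.seed(1)

# Toy setup: +/-1 predictions of 6 hypothetical base classifiers on a
# 30-sample validation set (rows of X are classifiers), with true labels y.
n, m = 30, 6
y = [random.choice([-1, 1]) for _ in range(n)]
X = [[t if random.random() < acc else -t for t in y]
     for acc in [0.55, 0.6, 0.7, 0.8, 0.85, 0.9]]

def lasso_weights(X, y, lam=0.5, iters=200):
    """Coordinate-descent Lasso: minimize 0.5*||y - Xw||^2 + lam*||w||_1."""
    m = len(X)
    w = [0.0] * m
    for _ in range(iters):
        for j in range(m):
            # partial residual leaving classifier j out
            r = [y[i] - sum(w[k] * X[k][i] for k in range(m) if k != j)
                 for i in range(len(y))]
            rho = sum(X[j][i] * r[i] for i in range(len(y)))
            z = sum(x * x for x in X[j])
            # soft-thresholding update
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

w = lasso_weights(X, y)
pruned = [j for j, wj in enumerate(w) if abs(wj) > 1e-9]  # members kept
```

The L1 penalty drives redundant or inaccurate members exactly to zero weight, which is what turns a weighting scheme into a pruning scheme.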

Coarse-to-fine Classifier Ensemble Selection using Clustering and Genetic Algorithms (군집화와 유전 알고리즘을 이용한 거친-섬세한 분류기 앙상블 선택)

  • Kim, Young-Won;Oh, Il-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.9
    • /
    • pp.857-868
    • /
    • 2007
  • A good classifier ensemble should have high complementarity among its classifiers in order to produce a high recognition rate, and its size should be small for efficiency. This paper proposes a classifier ensemble selection algorithm with coarse-to-fine stages. For the algorithm to succeed, the original classifier pool should be sufficiently diverse, so this paper produces a large classifier pool by combining several different classification algorithms and many feature subsets. The aim of the coarse selection is to reduce the size of the classifier pool with little sacrifice of recognition performance. The fine selection then finds a near-optimal ensemble using genetic algorithms. A hybrid genetic algorithm with improved search capability is also proposed. Experiments on worldwide handwritten numeral databases showed that the proposed algorithm is superior to conventional ones.
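The two-stage idea can be sketched as: coarsely cluster classifiers by output disagreement and keep one representative per cluster, then search only the reduced pool. Everything below is simulated, and the fine stage uses exhaustive search for brevity where the paper uses a genetic algorithm.

```python
import random
from itertools import combinations

random.seed(2)

n = 40
truth = [random.randint(0, 1) for _ in range(n)]

# Three hypothetical "prototype" classifiers plus noisy near-duplicates of
# each, so the pool contains genuinely similar members to cluster away.
protos = [[t if random.random() < acc else 1 - t for t in truth]
          for acc in [0.7, 0.8, 0.9]]
pool = [[b if random.random() < 0.95 else 1 - b for b in p]
        for p in protos for _ in range(3)]

def disagreement(a, b):
    return sum(x != y for x, y in zip(a, b)) / n

# Coarse stage: greedy clustering on output disagreement; keep one
# representative per cluster to shrink the pool cheaply.
reps, THRESH = [], 0.2
for idx, preds in enumerate(pool):
    if all(disagreement(preds, pool[r]) > THRESH for r in reps):
        reps.append(idx)

def vote_acc(members):
    votes = [1 if sum(pool[m][i] for m in members) * 2 > len(members) else 0
             for i in range(n)]
    return sum(v == t for v, t in zip(votes, truth)) / n

# Fine stage: the reduced pool is small enough to search exhaustively here;
# the paper applies a (hybrid) genetic algorithm at this stage instead.
best = max((c for k in range(1, len(reps) + 1)
            for c in combinations(reps, k)), key=vote_acc)
```

The coarse stage sacrifices little, because near-duplicate classifiers add almost no complementarity to a majority vote; the fine search then operates over a combinatorially much smaller space.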

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of KNN base classifiers and the feature subsets selected for base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of base classifiers were investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
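The quantities the paper optimizes, each base classifier's k and feature subset, can be sketched as a chromosome decoding plus an ensemble fitness function; a GA would then evolve such candidates. The data, encoding, and parameter choices below are illustrative assumptions, not the paper's 24 financial ratios or GA settings.

```python
import random

random.seed(3)

# Toy data standing in for the financial-ratio dataset (the paper uses 24
# ratios; 6 synthetic features here). Labels mostly depend on feature 0.
N_FEAT, N_TRAIN, N_VAL = 6, 60, 20

def make(n):
    X = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(n)]
    y = [1 if x[0] + 0.3 * random.gauss(0, 1) > 0 else 0 for x in X]
    return X, y

Xtr, ytr = make(N_TRAIN)
Xva, yva = make(N_VAL)

def knn_predict(k, feats, x):
    """Plain KNN restricted to the feature subspace `feats`."""
    order = sorted(range(N_TRAIN),
                   key=lambda i: sum((Xtr[i][f] - x[f]) ** 2 for f in feats))
    return 1 if sum(ytr[i] for i in order[:k]) * 2 > k else 0

def decode(chrom):
    """A chromosome carries, per base classifier, a k value and a binary
    feature mask -- the two quantities the paper optimizes jointly."""
    k, mask = chrom
    feats = [f for f, bit in enumerate(mask) if bit] or [0]  # never empty
    return k, feats

def ensemble_accuracy(chroms):
    """GA fitness: majority-vote accuracy of the decoded KNN ensemble."""
    correct = 0
    for x, y in zip(Xva, yva):
        votes = sum(knn_predict(*decode(c), x) for c in chroms)
        correct += int((1 if votes * 2 > len(chroms) else 0) == y)
    return correct / N_VAL

# One random candidate ensemble: 5 base classifiers, each with its own k
# (odd, to avoid voting ties) and feature mask; a GA evolves many of these.
cand = [(random.choice([1, 3, 5, 7]),
         [random.randint(0, 1) for _ in range(N_FEAT)]) for _ in range(5)]
acc = ensemble_accuracy(cand)
```

Because KNN is highly sensitive to its feature space (as the abstract notes), the per-member feature masks are what diversify the ensemble; optimizing k alongside them is the paper's addition over the plain random subspace method.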

Hybrid Genetic Algorithm for Classifier Ensemble Selection (분류기 앙상블 선택을 위한 혼합 유전 알고리즘)

  • Kim, Young-Won;Oh, Il-Seok
    • The KIPS Transactions:PartB
    • /
    • v.14B no.5
    • /
    • pp.369-376
    • /
    • 2007
  • This paper proposes a hybrid genetic algorithm (HGA) for classifier ensemble selection. HGA adds a local search operation to the genetic algorithm to improve fine-tuning within local areas of the search space. To show the superiority of HGA, this paper applies both the hybrid and the simple genetic algorithm (SGA) to the classifier ensemble selection problem, and proposes two local search operations for the hybrid genetic algorithm: SSO (Sequential Search Operation) and CSO (Combinational Search Operation). Experimental results show that HGA has better search capability than SGA, and that CSO, which considers the correlation among classifiers, outperforms SSO.
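The sequential local search can be sketched as a one-bit-flip hill climb applied to a chromosome after the usual GA operators (a combinational operation would additionally try flipping groups of genes together). The fitness function below is a toy stand-in for ensemble accuracy, not the paper's.

```python
import random

random.seed(5)

# Toy fitness over binary ensemble masks: value of included members minus a
# size penalty (a real HGA would use validation accuracy of the ensemble).
POOL = 10
weights = [random.random() for _ in range(POOL)]

def fit(mask):
    return sum(w for w, b in zip(weights, mask) if b) - 0.3 * sum(mask)

def sso(mask):
    """Sequential search operation: flip one gene at a time and keep any
    improvement, repeating until no single flip helps (a local optimum)."""
    mask = mask[:]
    improved = True
    while improved:
        improved = False
        for i in range(POOL):
            cand = mask[:]
            cand[i] ^= 1
            if fit(cand) > fit(mask):
                mask, improved = cand, True
    return mask

start = [random.randint(0, 1) for _ in range(POOL)]
refined = sso(start)
```

On this separable toy fitness a single-bit hill climb already reaches the optimum; the abstract's point is that on real ensemble fitness, where members interact, pairwise (combinational) moves capture correlations that one-bit moves miss.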

Multiple Optimal Classifiers based on Speciated Evolution for Classifying DNA Microarray Data (DNA 마이크로어레이 데이터의 분류를 위한 종분화 진화 기반의 최적 다중 분류기)

  • 박찬호;조성배
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.724-726
    • /
    • 2004
  • Advances in DNA microarray technology have enabled early cancer detection and prognosis prediction, and much related research is under way. In classifying microarray data, selecting relevant genes is essential; a gene selection method is paired with a classifier to form a feature-classifier pair. Various feature-classifier pairs have been used to classify microarray data, but limitations of the algorithms and defects in the data have made it difficult to find an optimal one. Ensemble classifiers have therefore been used to obtain higher classification performance, and genetic algorithms have been applied to find the optimal ensemble. This paper extends that approach by using speciation to generate diverse optimal ensembles. Applying leave-one-out cross-validation to a lymphoma cancer dataset confirmed that the proposed method explores diverse optimal solutions.


Searching for Optimal Ensemble of Feature-classifier Pairs in Gene Expression Profile using Genetic Algorithm (유전알고리즘을 이용한 유전자발현 데이타상의 특징-분류기쌍 최적 앙상블 탐색)

  • 박찬호;조성배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.525-536
    • /
    • 2004
  • A gene expression profile is numerical data of gene expression levels from an organism, measured on a microarray. Each specific tissue generally shows different expression levels in related genes, so diseases can be classified from gene expression profiles. Because not all genes are related to a disease, the related genes must be selected (feature selection), and the selected genes must then be classified properly. This paper proposes a GA-based method for searching for an optimal ensemble of feature-classifier pairs, composed from seven feature selection methods based on correlation, similarity, and information theory, and six representative classifiers. In experiments with leave-one-out cross-validation on two cancer-related gene expression profiles, the Lymphoma and Colon datasets, we found ensembles far superior to all individual feature-classifier pairs.

Review on Genetic Algorithms for Pattern Recognition (패턴 인식을 위한 유전 알고리즘의 개관)

  • Oh, Il-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.1
    • /
    • pp.58-64
    • /
    • 2007
  • In the pattern recognition field, there are many optimization problems with exponential search spaces. To solve them, sequential search algorithms seeking sub-optimal solutions have been used, but these algorithms have the limitation of stopping at local optima. Recently, many studies have attempted to solve such problems using genetic algorithms. This paper explains the huge search spaces of typical problems such as feature selection, classifier ensemble selection, neural network pruning, and clustering, and reviews the genetic algorithms for solving them. Additionally, we present several subjects worth noting as future research.

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with topics associated with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers in order to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and searches by maintaining a population of solutions from which better solutions are created, rather than by making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation. The solutions, coded as strings, are evaluated by the fitness function.
The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset that is used as input data for the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150, with crossover and mutation rates of 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function of the GA. The SVM model is trained on the training dataset using the selected instance subset, and its prediction accuracy over the test dataset is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as input data for the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real dataset from Korean companies. The research data contain 1832 externally non-audited firms that filed for bankruptcy (916 cases) or non-bankruptcy (916 cases). Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The whole dataset was separated into three subsets: training, test, and validation. The proposed model was compared with several comparative models, including the simple individual SVM model, the simple bagging model, and the instance selection based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
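The two phases can be sketched end to end. A nearest-centroid classifier stands in for the paper's SVM so the sketch needs no external libraries, and the data, population size, and generation count are toy values rather than the paper's settings.

```python
import random

random.seed(4)

# Toy two-class data; nearest-centroid plays the role of the paper's SVM.
def make(n):
    X, y = [], []
    for _ in range(n):
        lab = random.randint(0, 1)
        X.append([random.gauss(2 * lab, 1.2), random.gauss(-2 * lab, 1.2)])
        y.append(lab)
    return X, y

Xtr, ytr = make(80)
Xte, yte = make(40)

def train_centroid(X, y):
    cents = {}
    for lab in (0, 1):
        pts = [x for x, yy in zip(X, y) if yy == lab]
        cents[lab] = ([sum(c) / len(pts) for c in zip(*pts)]
                      if pts else [0.0, 0.0])
    return cents

def predict(cents, x):
    return min((0, 1), key=lambda lab: sum((a - b) ** 2
                                           for a, b in zip(x, cents[lab])))

def accuracy(model, X, y):
    return sum(predict(model, x) == yy for x, yy in zip(X, y)) / len(y)

# Phase 1: GA-based instance selection. A binary mask over training
# instances is the chromosome; accuracy on a held-out set is the fitness
# (mirroring the paper's use of test-set accuracy to avoid overfitting).
def fitness(mask):
    sel = [i for i, b in enumerate(mask) if b]
    if not sel:
        return 0.0
    model = train_centroid([Xtr[i] for i in sel], [ytr[i] for i in sel])
    return accuracy(model, Xte, yte)

pop = [[random.randint(0, 1) for _ in range(80)] for _ in range(20)]
for _ in range(15):
    pop.sort(key=fitness, reverse=True)
    keep, kids = pop[:10], []
    for _ in range(10):
        a, b = random.sample(keep, 2)
        cut = random.randrange(1, 80)
        kid = a[:cut] + b[cut:]
        kid[random.randrange(80)] ^= 1        # one-bit mutation
        kids.append(kid)
    pop = keep + kids
best_idx = [i for i, b in enumerate(max(pop, key=fitness)) if b]

# Phase 2: bagging over the GA-selected instances with majority voting.
models = []
for _ in range(11):
    boot = [random.choice(best_idx) for _ in best_idx]   # bootstrap sample
    models.append(train_centroid([Xtr[i] for i in boot],
                                 [ytr[i] for i in boot]))

def bag_predict(x):
    return 1 if sum(predict(m, x) for m in models) * 2 > len(models) else 0

bag_acc = sum(bag_predict(x) == yy for x, yy in zip(Xte, yte)) / len(yte)
```

The design point the paper argues is visible in the structure: bootstrap samples in phase 2 are drawn only from the GA-filtered instances, so harmful or irrelevant instances never reach any base classifier.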

Speaker Identification on Various Environments Using an Ensemble of Kernel Principal Component Analysis (커널 주성분 분석의 앙상블을 이용한 다양한 환경에서의 화자 식별)

  • Yang, Il-Ho;Kim, Min-Seok;So, Byung-Min;Kim, Myung-Jae;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.3
    • /
    • pp.188-196
    • /
    • 2012
  • In this paper, we propose a new approach to speaker identification that uses an ensemble of multiple classifiers (speaker identifiers), with KPCA (kernel principal component analysis) enhancing the features for each classifier. To reduce processing time and memory requirements, we randomly select a limited number of samples to use as the estimation set for each KPCA basis. Experimental results show that the proposed approach gives higher identification accuracy than GKPCA (greedy kernel principal component analysis).