• Title/Summary/Keyword: combining multiple classifiers


Selecting Classifiers using Mutual Information between Classifiers (인식기 간의 상호정보를 이용한 인식기 선택)

  • Kang, Hee-Joong
    • Journal of KIISE:Computing Practices and Letters / v.14 no.3 / pp.326-330 / 2008
  • Research on combining multiple classifiers in pattern recognition has mainly focused on how to combine classifiers, but it has recently shifted toward how to select classifiers from a classifier pool. In practice, the performance of a multiple classifier system depends on the selected classifiers as well as on the combination method. It is therefore necessary to select a classifier set that performs well, and approaches based on information theory have been tried for this selection. In this paper, a candidate classifier set is formed by selecting classifiers on the basis of the mutual information between them, and this candidate is compared experimentally with classifier sets chosen by other selection methods.
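
The selection idea in this abstract can be sketched as follows: estimate the mutual information between each pair of classifiers' outputs on a validation set, then prefer sets whose members share little information (low redundancy). The toy decisions and the exhaustive search below are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two label sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_diverse(outputs, k):
    """Pick the k classifiers whose total pairwise MI (redundancy) is lowest."""
    return min(combinations(range(len(outputs)), k),
               key=lambda s: sum(mutual_information(outputs[i], outputs[j])
                                 for i, j in combinations(s, 2)))

# Hypothetical decisions of four classifiers on the same five samples;
# classifiers 0 and 1 are identical and thus maximally redundant.
outs = [[0, 1, 1, 0, 1],
        [0, 1, 1, 0, 1],
        [1, 1, 0, 0, 1],
        [0, 0, 1, 1, 1]]
print(select_diverse(outs, 2))
```

An exhaustive search over all subsets is only feasible for small pools; for larger pools a greedy or heuristic search would replace the `min` over `combinations`.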

Combining Multiple Classifiers using Product Approximation based on Third-order Dependency (3차 의존관계에 기반한 곱 근사를 이용한 다수 인식기의 결합)

  • Kang, Hee-Joong
    • Journal of KIISE:Software and Applications / v.31 no.5 / pp.577-585 / 2004
  • Storing and estimating the high-order probability distribution of classifiers and class labels is exponentially complex and unmanageable without an assumption or approximation, so we rely on an approximation scheme based on dependency relations. In this paper, as an extension of the second-order dependency-based approximation, the probability distribution is optimally approximated by third-order dependencies. The proposed third-order dependency-based approximation is applied to the combination of multiple classifiers recognizing handwritten numerals from Concordia University and the University of California, Irvine, and its usefulness is demonstrated through experiments.
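
The product approximation the abstract refers to can be illustrated with a fixed third-order factorization: the joint probability of the class and the classifier decisions is written as a product of factors that each involve at most three variables. The toy data and the fixed dependency structure below are assumptions for illustration; the paper selects the structure optimally.

```python
from collections import Counter

# Toy training tuples: (true class, d1, d2, d3), the decisions of three classifiers.
samples = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 0, 0),
           (1, 1, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)]
joint = Counter(samples)
n = len(samples)
IDX = {'c': 0, 'd1': 1, 'd2': 2, 'd3': 3}

def p(**fixed):
    """Empirical probability that the named variables take the given values."""
    return sum(cnt for t, cnt in joint.items()
               if all(t[IDX[k]] == v for k, v in fixed.items())) / n

def approx_joint(c, d1, d2, d3, eps=1e-9):
    # P(c, d1, d2, d3) ~ P(c) P(d1|c) P(d2|c, d1) P(d3|c, d1):
    # every factor involves at most three variables (third-order dependency).
    return (p(c=c)
            * p(c=c, d1=d1) / (p(c=c) + eps)
            * p(c=c, d1=d1, d2=d2) / (p(c=c, d1=d1) + eps)
            * p(c=c, d1=d1, d3=d3) / (p(c=c, d1=d1) + eps))

def combine(d1, d2, d3):
    """Combined decision: the class maximizing the approximated joint."""
    return max((0, 1), key=lambda c: approx_joint(c, d1, d2, d3))

print(combine(1, 1, 0), combine(0, 0, 1))
```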

Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS (GAVaPS를 이용한 다수 K-Nearest Neighbor classifier들의 Feature 선택)

  • Lee, Hee-Sung;Lee, Jae-Hun;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.871-875 / 2008
  • This paper deals with feature selection for multiple k-nearest neighbor (k-NN) classifiers using the Genetic Algorithm with Varying Population Size (GAVaPS). Because we use multiple k-NN classifiers, their feature selection problem is very hard and has a large search space. To solve this problem, we employ GAVaPS, which outperforms the simple genetic algorithm (SGA). Further, we propose an efficient method for combining multiple k-NN classifiers using GAVaPS. Experiments are performed to demonstrate the efficiency of the proposed method.
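
As a rough sketch of GA-based feature selection for a k-NN classifier: chromosomes are binary feature masks and fitness is leave-one-out k-NN accuracy. For self-containment this uses a fixed-size SGA loop and toy data (both assumptions); GAVaPS additionally assigns each individual a lifetime so the population size varies across generations.

```python
import random

random.seed(0)

# Toy data: features 0 and 1 are informative, 2 and 3 are noise.
X = [[0.1, 0.2, 0.9, 0.5], [0.2, 0.1, 0.1, 0.9],
     [0.9, 0.8, 0.5, 0.1], [0.8, 0.9, 0.3, 0.7],
     [0.15, 0.25, 0.6, 0.2], [0.85, 0.75, 0.2, 0.8]]
y = [0, 0, 1, 1, 0, 1]

def knn_loo_accuracy(mask, k=1):
    """Fitness: leave-one-out k-NN accuracy using only the masked features."""
    feats = [i for i, b in enumerate(mask) if b]
    if not feats:
        return 0.0
    hits = 0
    for i in range(len(X)):
        dists = sorted((sum((X[i][f] - X[j][f]) ** 2 for f in feats), y[j])
                       for j in range(len(X)) if j != i)
        votes = [lbl for _, lbl in dists[:k]]
        hits += max(set(votes), key=votes.count) == y[i]
    return hits / len(X)

def evolve(n_feats=4, pop_size=10, gens=20):
    """Plain fixed-size GA over binary feature masks (GAVaPS would let the
    population size vary via per-individual lifetimes)."""
    pop = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=knn_loo_accuracy, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_feats)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # bit-flip mutation
                child[random.randrange(n_feats)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=knn_loo_accuracy)

best = evolve()
print(best, knn_loo_accuracy(best))
```

With features 0 and 1 informative and 2 and 3 pure noise, the search should settle on a mask that keeps at least one informative feature.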

Integrating Multiple Classifiers in a GA-based Inductive Learning Environment (유전 알고리즘 기반 귀납적 학습 환경에서 분류기의 통합)

  • Kim, Yeong-Joon
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.3 / pp.614-621 / 2006
  • We have implemented a multiclassifier learning approach in a GA-based inductive learning environment that learns classification rules similar to those used in PROSPECTOR. In this approach, a classification system is constructed from several classifiers obtained by running a GA-based learning system several times, in order to improve the overall performance of the classification system. To implement the approach, we need a decision-making scheme that can draw a decision from multiple classifiers. In this paper, we introduce two such schemes: one combines the posterior odds that classifiers assign to each class, and the other is a voting scheme based on the rankings that classifiers assign to each class. We also present empirical results that evaluate the effect of the multiclassifier learning approach on the GA-based inductive learning environment.
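
The two decision-making schemes can be sketched directly. Under an independence assumption, posterior odds from the classifiers can be fused naive-Bayes style; the Borda count sums rank scores that each classifier assigns to the classes. The numbers below are hypothetical.

```python
from math import prod

def combine_odds(odds, prior_odds):
    """Naive-Bayes-style fusion: each classifier contributes its likelihood
    ratio (posterior odds / prior odds) to the combined posterior odds."""
    return prior_odds * prod(o / prior_odds for o in odds)

def borda_combine(rankings):
    """Borda count over per-classifier class rankings (best class first)."""
    scores = {}
    for ranking in rankings:
        m = len(ranking)
        for pos, cls in enumerate(ranking):
            scores[cls] = scores.get(cls, 0) + (m - 1 - pos)
    return max(scores, key=scores.get)

print(combine_odds([3.0, 2.0], 1.0))         # -> 6.0
print(borda_combine([['A', 'B', 'C'],
                     ['B', 'A', 'C'],
                     ['A', 'C', 'B']]))      # -> 'A'
```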

Combining Multiple Classifiers for Automatic Classification of Email Documents (전자우편 문서의 자동분류를 위한 다중 분류기 결합)

  • Lee, Jae-Haeng;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications / v.29 no.3 / pp.192-201 / 2002
  • Automated text classification is considered an important method for managing and processing the huge number of digital documents that are widespread and continuously increasing. Recently, text classification has been addressed with machine learning technologies such as k-nearest neighbor, decision trees, support vector machines and neural networks. However, few investigations have studied text classification on real problems rather than on well-organized text corpora, so its practical usefulness has not been demonstrated. This paper proposes and analyzes text classification methods for a real application, the email document classification task. First, we propose a method of combining multiple neural networks that improves performance through combination with the maximum rule and neural-network combiners. Second, we present another strategy of combining multiple machine learning classifiers, in which voting, Borda count and neural networks improve the overall classification performance. Experimental results show the usefulness of the proposed methods for a real application domain, yielding precision rates of more than 90%.

A Genetic Algorithm-based Classifier Ensemble Optimization for Activity Recognition in Smart Homes

  • Fatima, Iram;Fahim, Muhammad;Lee, Young-Koo;Lee, Sungyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2853-2873 / 2013
  • Over the last few years, one of the most common purposes of smart homes has been to provide human-centric services in the domain of u-healthcare by analyzing inhabitants' daily living. Currently, a major challenge in activity recognition is the reliability of each classifier's predictions, which differs according to smart home characteristics: smart homes vary in terms of performed activities, deployed sensors, environment settings, and inhabitants' characteristics. No single classifier always performs better than all the others in every possible situation. This observation motivates combining multiple classifiers to take advantage of their complementary performance for high accuracy. Therefore, in this paper, a method for activity recognition is proposed that optimizes the output of multiple classifiers with a Genetic Algorithm (GA). Our proposed method combines the measurement-level outputs of different classifiers for each activity class to make up the ensemble. For evaluation, experiments are performed on three real datasets from the CASAS smart home project. The results show that our method systematically outperforms single classifiers and traditional multiclass models, with a significant improvement in the F-measures of recognized activities from 0.82 to 0.90 compared to existing methods.
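
Measurement-level combination as described above can be sketched as a weighted sum of each classifier's per-class scores; the weight vector is what the paper's GA would optimize, but it is fixed here for illustration:

```python
def fuse(prob_outputs, weights):
    """Weighted sum of measurement-level (per-class probability) outputs;
    returns the index of the winning class."""
    n_classes = len(prob_outputs[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_outputs))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# Two hypothetical classifiers scoring three activity classes:
print(fuse([[0.5, 0.3, 0.2], [0.1, 0.7, 0.2]], [0.3, 0.7]))   # -> 1
```

A GA would evaluate candidate weight vectors by the resulting recognition accuracy on validation data and evolve them accordingly.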

Fuzzy Behavior Knowledge Space for Integration of Multiple Classifiers (다중 분류기 통합을 위한 퍼지 행위지식 공간)

  • 김봉근;최형일
    • Korean Journal of Cognitive Science / v.6 no.2 / pp.27-45 / 1995
  • In this paper, we suggest the "Fuzzy Behavior Knowledge Space (FBKS)" and explain how to utilize the FBKS when aggregating the decisions of individual classifiers. The "Behavior Knowledge Space (BKS)" is known to be the best method in the setting where each classifier offers only one class label as its decision. However, the BKS does not consider the measurement values attached to class labels, nor does it allow the heuristic knowledge of human experts to be embedded when combining multiple decisions. The FBKS eliminates these drawbacks of the BKS by adopting fuzzy concepts. Our method applies to classification results that contain both class labels and associated measurement values. Experimental results confirm that the FBKS could be a very promising tool in pattern recognition.
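
The crisp BKS that the FBKS extends is essentially a lookup table indexed by the tuple of individual classifier decisions; each cell stores how often each true class produced that tuple on validation data. The data here are hypothetical, and the fuzzy extension would additionally weight the counts by the classifiers' measurement values.

```python
from collections import defaultdict, Counter

# Build the BKS table: cell key = tuple of classifier decisions,
# cell value = counts of the true classes observed with that key.
table = defaultdict(Counter)
validation = [((0, 0), 0), ((0, 0), 0), ((0, 1), 1), ((0, 1), 1),
              ((0, 1), 0), ((1, 1), 1), ((1, 0), 0)]
for decisions, true_cls in validation:
    table[decisions][true_cls] += 1

def bks_classify(decisions):
    cell = table[decisions]
    if not cell:
        return None              # empty cell: reject (or fall back)
    return cell.most_common(1)[0][0]

print(bks_classify((0, 1)))      # -> 1 (that cell saw class 1 twice, class 0 once)
```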

Handwritten Numeral Recognition Using Karhunen-Loeve Transform Based Subspace Classifier and Combined Multiple Novelty Classifiers (Karhunen-Loeve 변환 기반의 부분공간 인식기와 결합된 다중 노벨티 인식기를 이용한 필기체 숫자 인식)

  • 임길택;진성일
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.6 / pp.88-98 / 1998
  • The subspace classifier is a popular pattern recognition method based on the Karhunen-Loeve transform. It describes a high-dimensional pattern using a reduced-dimensional subspace. Because of the information lost through dimensionality reduction, however, a subspace classifier sometimes shows unsatisfactory recognition performance on patterns whose principal components are very similar to each other. In this paper, we propose the use of multiple novelty neural network classifiers, constructed on novelty vectors, to exploit the minor components that are usually ignored, and we present a method of improving recognition performance by combining them with the subspace classifier. We develop the proposed classifier on a handwritten numeral database and analyze its properties. The proposed classifier shows better recognition performance than other classifiers, though it requires more weight links.
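
A minimal sketch of the subspace side of the method: each class gets a Karhunen-Loeve subspace spanned by leading eigenvectors of its scatter matrix, and a pattern is assigned to the class whose subspace captures the largest squared projection. One basis vector per class, power iteration, and the toy data are simplifying assumptions; the paper's novelty classifiers would then model the residual (novelty) vectors that this projection discards.

```python
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def leading_eigvec(M, iters=200):
    """Power iteration for the dominant eigenvector of a symmetric matrix."""
    v = [1.0] * len(M)
    for _ in range(iters):
        w = matvec(M, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def scatter(samples):
    d = len(samples[0])
    return [[sum(s[i] * s[j] for s in samples) for j in range(d)]
            for i in range(d)]

def train(classes):
    """One-dimensional KL subspace (dominant eigenvector) per class."""
    return {c: leading_eigvec(scatter(s)) for c, s in classes.items()}

def classify(bases, x):
    # Subspace rule: the class whose axis captures the most energy of x,
    # i.e. the largest squared projection (u_c . x)^2.
    return max(bases, key=lambda c: sum(u * xi for u, xi in zip(bases[c], x)) ** 2)

classes = {'0': [[1.0, 0.1], [0.9, 0.0], [1.1, -0.1]],
           '1': [[0.1, 1.0], [0.0, 0.9], [-0.1, 1.1]]}
bases = train(classes)
print(classify(bases, [0.95, 0.05]))   # -> '0'
```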

Developing an Ensemble Classifier for Bankruptcy Prediction (부도 예측을 위한 앙상블 분류기 개발)

  • Min, Sung-Hwan
    • Journal of Korea Society of Industrial Information Systems / v.17 no.7 / pp.139-148 / 2012
  • An ensemble of classifiers employs a set of individually trained classifiers and combines their predictions. It has been found that in most cases ensembles produce more accurate predictions than the base classifiers. Combining outputs from multiple classifiers, known as ensemble learning, is one of the standard and most important techniques for improving classification accuracy in machine learning. An ensemble of classifiers is efficient only if the individual classifiers make decisions that are as diverse as possible. Bagging is the most popular ensemble learning method for generating a diverse set of classifiers; diversity in bagging is obtained by using different training sets, with the training data subsets randomly drawn with replacement from the entire training dataset. The random subspace method is an ensemble construction technique using different attribute subsets; the training dataset is also modified as in bagging, but the modification is performed in the feature space. Bagging and random subspace are well-known and popular ensemble algorithms, yet few studies have dealt with integrating bagging and random subspace using SVM classifiers, although there is great potential for useful applications in this area. The focus of this paper is to propose methods for improving SVM performance using a hybrid ensemble strategy for bankruptcy prediction. This paper applies the proposed ensemble model to the bankruptcy prediction problem using a real dataset from Korean companies.
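
The hybrid strategy can be sketched as follows: each ensemble member gets a bootstrap sample of the rows (bagging) and a random subset of the columns (random subspace). For self-containment a 1-NN base learner stands in for the paper's SVMs, and the data are toy assumptions.

```python
import random

random.seed(1)

X = [[0.1, 0.2, 0.5], [0.2, 0.1, 0.4], [0.9, 0.8, 0.6],
     [0.8, 0.9, 0.5], [0.15, 0.1, 0.45], [0.85, 0.95, 0.55]]
y = [0, 0, 1, 1, 0, 1]

def make_member(X, y, n_feats=2):
    """One ensemble member: bootstrap the rows (bagging) and draw a random
    feature subset (random subspace); a 1-NN stands in for the SVM here."""
    rows = [random.randrange(len(X)) for _ in X]       # sample with replacement
    feats = random.sample(range(len(X[0])), n_feats)   # random feature subset
    train = [([X[i][f] for f in feats], y[i]) for i in rows]
    def predict(x):
        xs = [x[f] for f in feats]
        return min(train, key=lambda t: sum((a - b) ** 2
                                            for a, b in zip(t[0], xs)))[1]
    return predict

ensemble = [make_member(X, y) for _ in range(11)]

def vote(x):
    """Majority vote over the ensemble members' predictions."""
    preds = [m(x) for m in ensemble]
    return max(set(preds), key=preds.count)

print(vote([0.12, 0.15, 0.48]), vote([0.9, 0.9, 0.55]))
```

Swapping the base learner for an SVM changes only `make_member`; the bagging and random subspace mechanics stay the same.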

Classification of Multi-temporal SAR Data by Using Data Transform Based Features and Multiple Classifiers (자료변환 기반 특징과 다중 분류자를 이용한 다중시기 SAR자료의 분류)

  • Yoo, Hee Young;Park, No-Wook;Hong, Sukyoung;Lee, Kyungdo;Kim, Yeseul
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.205-214 / 2015
  • In this study, a novel land-cover classification framework for multi-temporal SAR data is presented that combines multiple features extracted through data transforms with multiple classifiers. First, data transforms using principal component analysis (PCA) and the 3D wavelet transform are applied to the multi-temporal SAR dataset to extract new features that differ from the original dataset. Then, three different classifiers, including the maximum likelihood classifier (MLC), a neural network (NN) and a support vector machine (SVM), are applied to three different datasets (the data transform based features and the original backscattering coefficients), generating diverse preliminary classification results. These results are combined via a majority voting rule to generate a final classification result. In an experiment with a multi-temporal ENVISAT ASAR dataset, every preliminary classification result showed very different classification accuracy according to the feature and classifier used. The final classification result, combining nine preliminary classification results, showed the best classification accuracy because each preliminary result provided complementary information on land covers. The improvement in classification accuracy was mainly attributed to the diversity obtained by combining not only different features based on data transforms but also different classifiers. Therefore, the presented land-cover classification framework could be effectively applied to the classification of multi-temporal SAR data and extended to multi-sensor remote sensing data fusion.
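
The final combination step described above is a plain majority vote over the nine preliminary labels (three feature sets times three classifiers) at each pixel; the labels below are hypothetical:

```python
from collections import Counter

def majority_vote(labels):
    """Majority rule over the preliminary per-pixel class labels."""
    return Counter(labels).most_common(1)[0][0]

# Nine preliminary results (3 feature sets x 3 classifiers) for one pixel:
prelim = ['crop', 'crop', 'water', 'crop', 'urban',
          'crop', 'water', 'crop', 'crop']
print(majority_vote(prelim))   # -> 'crop'
```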