• Title/Summary/Keyword: Multiple Classifiers

Selecting Classifiers using Mutual Information between Classifiers (인식기 간의 상호정보를 이용한 인식기 선택)

  • Kang, Hee-Joong
    • Journal of KIISE: Computing Practices and Letters / v.14 no.3 / pp.326-330 / 2008
  • Research on combining multiple classifiers in pattern recognition has mainly focused on how to combine classifiers, but it has recently shifted toward how to select classifiers from a classifier pool. In practice, the performance of a multiple classifier system depends on the selected classifiers as well as on the combination method, so it is necessary to select a classifier set that performs well, and information-theoretic approaches have been tried for this selection. In this paper, a candidate classifier set is built by selecting classifiers on the basis of the mutual information between classifiers, and in experiments this candidate set is compared with classifier sets chosen by other selection methods.
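
A minimal sketch of mutual-information-based classifier selection, assuming each classifier's validation-set predictions are already available as label arrays; the greedy growth rule below (keep average pairwise mutual information low, i.e. keep the members diverse) and the names `predictions` and `select_low_mi_subset` are illustrative assumptions, not the paper's exact procedure.

```python
import itertools

import numpy as np
from sklearn.metrics import mutual_info_score

def select_low_mi_subset(predictions, k):
    """predictions: dict name -> 1-D array of validation-set predicted labels."""
    names = list(predictions)
    # Pairwise mutual information between classifier outputs.
    mi = {frozenset(p): mutual_info_score(predictions[p[0]], predictions[p[1]])
          for p in itertools.combinations(names, 2)}
    # Seed with the least mutually informative (most diverse) pair.
    selected = list(min(itertools.combinations(names, 2),
                        key=lambda p: mi[frozenset(p)]))
    while len(selected) < k:
        rest = [n for n in names if n not in selected]
        # Add the classifier sharing the least information with the current set.
        selected.append(min(rest, key=lambda n: np.mean(
            [mi[frozenset((n, s))] for s in selected])))
    return selected
```

Here `predictions` could map, say, each trained member's name to `member.predict(X_val)`.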

CREATING MULTIPLE CLASSIFIERS FOR THE CLASSIFICATION OF HYPERSPECTRAL DATA; FEATURE SELECTION OR FEATURE EXTRACTION

  • Maghsoudi, Yasser;Rahimzadegan, Majid;Zoej, M.J.Valadan
    • Proceedings of the KSRS Conference / 2007.10a / pp.6-10 / 2007
  • Classification of hyperspectral images is challenging. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space; in other words, to obtain statistically reliable classification results, the number of necessary training samples increases exponentially with the number of spectral bands. In many situations, however, acquiring a large number of training samples for these high-dimensional datasets is not easy. This problem can be mitigated by using multiple classifiers. In this paper we compare the effectiveness of two approaches for creating multiple classifiers: feature selection and feature extraction. Both methods generate multiple feature subsets by running the feature selection or feature extraction algorithm several times, each time to discriminate one of the classes from the rest. A maximum likelihood classifier is applied to each of the obtained feature subsets, and a combination scheme is finally used to combine the outputs of the individual classifiers. Experimental results show the effectiveness of the feature extraction algorithm for generating multiple classifiers.
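
A minimal sketch of the class-wise feature-subset idea, assuming a labelled band matrix X (samples x spectral bands) and labels y; scikit-learn's SelectKBest stands in for the feature selection step, QuadraticDiscriminantAnalysis stands in for the maximum likelihood (Gaussian) classifier, probability averaging stands in for the combination scheme, and n_bands plus both function names are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif

def fit_classwise_ensemble(X, y, n_bands=10):
    """One member per class: pick bands that separate that class from the rest,
    then train a Gaussian (maximum-likelihood-style) classifier on those bands."""
    classes = np.unique(y)
    members = []
    for c in classes:
        selector = SelectKBest(f_classif, k=n_bands).fit(X, (y == c).astype(int))
        clf = QuadraticDiscriminantAnalysis().fit(selector.transform(X), y)
        members.append((selector, clf))
    return classes, members

def predict_ensemble(classes, members, X):
    # Combination scheme: average the members' class-probability outputs.
    proba = np.mean([clf.predict_proba(sel.transform(X)) for sel, clf in members],
                    axis=0)
    return classes[np.argmax(proba, axis=1)]
```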

Construction of Multiple Classifier Systems based on a Classifiers Pool (인식기 풀 기반의 다수 인식기 시스템 구축방법)

  • Kang, Hee-Joong
    • Journal of KIISE: Software and Applications / v.29 no.8 / pp.595-603 / 2002
  • Only a few studies have been conducted on how to select multiple classifiers from a pool of available classifiers so as to achieve good classification performance. Thus, the selection problem of which classifiers to select, and how many, remains an important research issue. In this paper, provided that the number of selected classifiers is constrained in advance, a variety of selection criteria are proposed and applied to the construction of multiple classifier systems, and these selection criteria are then evaluated by the performance of the constructed multiple classifier systems. All possible sets of classifiers are examined by the selection criteria, and some of these sets are selected as candidate multiple classifier systems. The candidates were evaluated in experiments recognizing unconstrained handwritten numerals obtained from Concordia University and the UCI Machine Learning Repository. Among the selection criteria, the candidates chosen by the information-theoretic selection criteria based on conditional entropy showed more promising results than those chosen by the other criteria.
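
A minimal sketch of a conditional-entropy selection criterion, assuming the true validation labels and each pooled classifier's validation predictions are at hand: H(Y | C1..Ck) is estimated from empirical counts, all subsets of the pre-constrained size k are enumerated, and the subset with the lowest conditional entropy is kept. This illustrates the criterion only, not the paper's full evaluation pipeline; the helper names are assumptions.

```python
import itertools
from collections import Counter

import numpy as np

def conditional_entropy(y, outputs):
    """Empirical H(Y | C1..Ck): y is the true-label array, outputs is a list of
    classifier-output arrays aligned with y."""
    combos = list(zip(*outputs))                 # joint output tuple per sample
    joint = Counter(zip(combos, y))              # counts of (outputs, true label)
    marginal = Counter(combos)                   # counts of the outputs alone
    n = len(y)
    h = 0.0
    for (combo, _), n_cy in joint.items():
        h -= (n_cy / n) * np.log2(n_cy / marginal[combo])
    return h

def best_subset(y, predictions, k):
    """predictions: dict name -> validation prediction array; returns the size-k
    subset that minimizes the estimated conditional entropy of the true label."""
    return min(itertools.combinations(predictions, k),
               key=lambda subset: conditional_entropy(
                   y, [predictions[name] for name in subset]))
```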

Appliance identification algorithm using multiple classifier system (다중 분류 시스템을 이용한 가전기기 식별 알고리즘)

  • Park, Yong-Soon;Chung, Tae-Yun;Park, Sung-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.10 no.4 / pp.213-219 / 2015
  • A real-time energy monitoring system is a demand-response system reported to be effective in saving up to 12% of energy. It is commonly composed of smart plugs, which sense how much electrical power is consumed, and an IHD (In-Home Display device), which displays power consumption patterns. Even though the monitoring system is effective, users must manually match which smart plug is connected to which appliance. To make this matching automatic, the monitoring system needs an appliance identification algorithm, and related work has been done under the name of NILM (Non-Intrusive Load Monitoring). This paper proposes an algorithm that utilizes multiple classifiers to improve the accuracy of appliance identification. The algorithm estimates each classifier's performance, that is, how reliable its output is when it produces a result, and uses this reliability to choose the final result among the candidates produced by the individual classifiers. The proposed algorithm achieves 4.5% higher accuracy than the single best classifier and 2.9% higher accuracy than another multiple-classifier method, the so-called CDM (Committee Decision Mechanism).
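
A minimal sketch of reliability-weighted selection among candidate results, assuming already-trained base classifiers with scikit-learn-style predict and a held-out validation split; reliability is taken here as class-conditional precision, an assumed stand-in for the paper's reliability measure, and all function names are hypothetical.

```python
import numpy as np

def class_precision(y_true, y_pred, label):
    """How often the classifier is right when it outputs this label."""
    picked = (y_pred == label)
    return float((y_true[picked] == label).mean()) if picked.any() else 0.0

def fit_reliability(classifiers, X_val, y_val):
    """reliability[i][label]: trustworthiness of classifier i when it says label."""
    labels = np.unique(y_val)
    reliability = []
    for clf in classifiers:
        pred = clf.predict(X_val)
        reliability.append({lab: class_precision(y_val, pred, lab) for lab in labels})
    return reliability

def predict_by_reliability(classifiers, reliability, X):
    """Among the candidate results, keep the one proposed by the classifier that
    is most reliable for that particular label."""
    preds = [clf.predict(X) for clf in classifiers]
    final = []
    for j in range(len(X)):
        candidates = [(reliability[i].get(preds[i][j], 0.0), preds[i][j])
                      for i in range(len(classifiers))]
        final.append(max(candidates, key=lambda t: t[0])[1])
    return np.array(final)
```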

A Multi-Level Integrator with Programming Based Boosting for Person Authentication Using Different Biometrics

  • Kundu, Sumana;Sarker, Goutam
    • Journal of Information Processing Systems / v.14 no.5 / pp.1114-1135 / 2018
  • A multiple classification system based on a new boosting technique is proposed, utilizing different biometric traits, namely color face, iris and eye images along with fingerprints of the right and left hands, handwriting, palm-print, gait (silhouettes) and wrist-vein, for person authentication. The images of the different biometric traits were taken from standard databases such as FEI, UTIRIS, CASIA, IAM and CIE. The system comprises three different super-classifiers that individually perform person identification. The individual classifiers corresponding to each super-classifier identify different biometric features, and their conclusions are integrated within their respective super-classifiers. The decisions of the individual super-classifiers are then integrated through a mega-super-classifier to reach the final conclusion using programming based boosting. The mega-super-classifier system using different super-classifiers in a compact form is more reliable than a single classifier or even a single super-classifier system. The system has been evaluated with accuracy, precision, recall and F-score metrics through the holdout method and confusion matrices for each of the single classifiers, the super-classifiers and finally the mega-super-classifier, and the resulting performance figures are appreciable. The learning and recognition times are also fairly reasonable, making the system efficient and effective.
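
A minimal architectural sketch of the hierarchy, assuming trait-specific classifiers that already expose predict; accuracy-weighted voting is used here as a simple, assumed stand-in for the paper's programming based boosting, and the class name WeightedVoter is hypothetical.

```python
import numpy as np

class WeightedVoter:
    """Combines member predictors by accuracy-weighted voting; the same pattern
    is reused at two levels (classifiers -> super-classifier -> mega-super-classifier)."""

    def __init__(self, members):
        self.members = members
        self.weights = None

    def fit_weights(self, X_list, y):
        # X_list holds one feature matrix per member, since every member may look
        # at a different biometric trait of the same persons.
        self.weights = [float(np.mean(m.predict(X) == y))
                        for m, X in zip(self.members, X_list)]
        return self

    def predict(self, X_list):
        n = len(X_list[0])
        scores = [dict() for _ in range(n)]      # identity -> accumulated weight
        for w, m, X in zip(self.weights, self.members, X_list):
            for i, label in enumerate(m.predict(X)):
                scores[i][label] = scores[i].get(label, 0.0) + w
        return np.array([max(s, key=s.get) for s in scores])
```

In this sketch each super-classifier would be a WeightedVoter over the classifiers of its biometric traits, and the mega-super-classifier would repeat the same weighted integration one level up over the super-classifiers' decisions.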

Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS (GAVaPS를 이용한 다수 K-Nearest Neighbor classifier들의 Feature 선택)

  • Lee, Hee-Sung;Lee, Jae-Hun;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.871-875 / 2008
  • This paper deals with feature selection for multiple k-nearest neighbor (k-NN) classifiers using the Genetic Algorithm with Varying Population Size (GAVaPS). Because multiple k-NN classifiers are used, their feature selection problem is very hard and has a large search space. To solve this problem, we employ GAVaPS, which outperforms the simple genetic algorithm (SGA). Further, we propose an efficient combining method for multiple k-NN classifiers using GAVaPS. Experiments are performed to demonstrate the efficiency of the proposed method.
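
A minimal sketch of feature-mask selection with a varying-population genetic algorithm, assuming a k-NN fitness measured on a held-out split (X_tr/y_tr, X_val/y_val are assumed names); per-individual lifetimes follow the GAVaPS idea that fitter individuals survive more generations, so the population size varies, but the operators and parameters below are illustrative choices rather than the paper's settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X_tr, y_tr, X_val, y_val, k=3):
    """Hold-out accuracy of a k-NN classifier restricted to the masked features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, mask], y_tr)
    return clf.score(X_val[:, mask], y_val)

def gavaps_feature_selection(X_tr, y_tr, X_val, y_val, n_init=20, generations=30,
                             reproduction=0.4, min_life=1, max_life=7, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X_tr.shape[1]

    def lifetime(f, fmin, fmax):
        # Fitter masks get longer lifetimes, so the population size varies over time.
        if fmax == fmin:
            return (min_life + max_life) // 2
        return int(round(min_life + (max_life - min_life) * (f - fmin) / (fmax - fmin)))

    pop = [rng.random(n_feat) < 0.5 for _ in range(n_init)]      # random binary masks
    fit = [fitness(m, X_tr, y_tr, X_val, y_val) for m in pop]
    age = [0] * n_init
    life = [lifetime(f, min(fit), max(fit)) for f in fit]
    best_mask, best_fit = pop[int(np.argmax(fit))], max(fit)

    for _ in range(generations):
        for _ in range(max(2, int(reproduction * len(pop)))):    # reproduction step
            a, b = rng.choice(len(pop), size=2, replace=False)
            cut = int(rng.integers(1, n_feat))
            child = np.concatenate([pop[a][:cut], pop[b][cut:]])  # one-point crossover
            child ^= rng.random(n_feat) < 0.01                    # bit-flip mutation
            f = fitness(child, X_tr, y_tr, X_val, y_val)
            pop.append(child); fit.append(f); age.append(0)
            life.append(lifetime(f, min(fit), max(fit)))
            if f > best_fit:
                best_mask, best_fit = child, f
        # Ageing and death: individuals past their lifetime leave the population
        # (newborns always survive this step because min_life >= 1).
        keep = [i for i in range(len(pop)) if age[i] + 1 <= life[i]]
        pop = [pop[i] for i in keep]
        fit = [fit[i] for i in keep]
        age = [age[i] + 1 for i in keep]
        life = [life[i] for i in keep]
    return best_mask
```

Running this once per ensemble member (for instance with different seeds or different k) would yield one feature subset per k-NN classifier; the paper's own combining method is not reproduced here.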

Multiple Classifier System for Activity Recognition

  • Han, Yong-Koo;Lee, Sung-Young;Lee, Young-Koo;Lee, Jae-Won
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2007.11a / pp.439-443 / 2007
  • Activity recognition has become a hot topic in context-aware computing. In activity recognition, machine learning techniques have been widely applied to learn activity models from labeled activity samples. Most existing work uses only one learning method and focuses on how to utilize the labeled samples effectively by refining that method; not much attention has been paid to using multiple classifiers to boost learning performance. In this paper, we use two methods to generate multiple classifiers. In the first method, the basic learning algorithm for each classifier is the same, while the training data differs (ASTD). In the second method, the basic learning algorithms differ, while the training data is the same (ADTS). Experimental results indicate that ADTS can effectively improve activity recognition performance, while ASTD does not achieve any improvement. We believe that the classifiers in ADTS are more diverse than those in ASTD.
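
A minimal sketch contrasting the two strategies with scikit-learn stand-ins: bootstrap resampling plays the role of "same algorithm, different training data" (ASTD) and a heterogeneous voting ensemble plays the role of "different algorithms, same training data" (ADTS); the base learners and the data split are assumptions.

```python
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def build_astd(n_members=10):
    # Same base learner for every member, each trained on a different bootstrap
    # resample of the training data.
    return BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_members)

def build_adts():
    # Different base learners, all trained on the same training data.
    return VotingClassifier(estimators=[('tree', DecisionTreeClassifier()),
                                        ('nb', GaussianNB()),
                                        ('knn', KNeighborsClassifier())],
                            voting='hard')

# Usage with an assumed split:
# for name, model in [('ASTD', build_astd()), ('ADTS', build_adts())]:
#     print(name, model.fit(X_train, y_train).score(X_test, y_test))
```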

A Multiple Classifier System based on Dynamic Classifier Selection having Local Property (지역적 특성을 갖는 동적 선택 방법에 기반한 다중 인식기 시스템)

  • Song, Hye-Jeong;Kim, Baek-Sub
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.339-346 / 2003
  • This paper proposes a multiple classifier system composed of a massive number of micro classifiers, each trained on a local set of training patterns: the k nearest training patterns of a given training pattern form the local region used to train one micro classifier, so each training pattern is associated with one or more micro classifiers. Two types of micro classifiers are adopted in this paper: an SVM with a linear kernel and an SVM with an RBF kernel. Classification is performed by selecting the best micro classifier among those in the vicinity of the incoming test pattern. To measure the goodness of each micro classifier, the weighted sum of correctly classified training patterns in the vicinity of the test pattern is used. Experiments have been carried out on the Elena database. Results show that the proposed method gives better classification accuracy than conventional classifiers such as SVM and k-NN, as well as conventional classifier combination/selection schemes.
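
A minimal sketch of locally trained micro classifiers with dynamic selection, assuming numeric feature vectors; only the RBF-kernel micro classifier is shown, and the neighbourhood sizes k_train and k_test, the SVM settings, the inverse-distance weighting and the class name LocalMicroClassifiers are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

class LocalMicroClassifiers:
    def __init__(self, k_train=30, k_test=30):
        self.k_train, self.k_test = k_train, k_test   # must not exceed the training size

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        self.nn_ = NearestNeighbors(n_neighbors=self.k_train).fit(self.X_)
        self.micro_ = []
        for i in range(len(self.X_)):
            # Each training pattern seeds one micro classifier trained on its
            # k_train nearest training patterns (its local region).
            idx = self.nn_.kneighbors(self.X_[i:i + 1], return_distance=False)[0]
            if len(np.unique(self.y_[idx])) < 2:
                self.micro_.append(None)              # locally pure region, no SVM needed
            else:
                self.micro_.append(SVC(kernel='rbf').fit(self.X_[idx], self.y_[idx]))
        return self

    def predict(self, X):
        out = []
        for x in np.asarray(X):
            dist, idx = self.nn_.kneighbors([x], n_neighbors=self.k_test)
            idx, weights = idx[0], 1.0 / (dist[0] + 1e-9)
            best_score, best_label = -1.0, self.y_[idx[0]]   # fallback: nearest label
            for j in idx:
                clf = self.micro_[j]
                if clf is None:
                    continue
                # Goodness of this micro classifier: distance-weighted count of
                # nearby training patterns it classifies correctly.
                score = np.sum(weights * (clf.predict(self.X_[idx]) == self.y_[idx]))
                if score > best_score:
                    best_score, best_label = score, clf.predict([x])[0]
            out.append(best_label)
        return np.array(out)
```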

Combining Multiple Classifiers using Product Approximation based on Third-order Dependency (3차 의존관계에 기반한 곱 근사를 이용한 다수 인식기의 결합)

  • Kang, Hee-Joong
    • Journal of KIISE: Software and Applications / v.31 no.5 / pp.577-585 / 2004
  • Storing and estimating the high-order probability distribution of classifiers and class labels is exponentially complex and unmanageable without an assumption or an approximation, so we rely on an approximation scheme based on dependency. In this paper, as an extension of the earlier study on second-order dependency-based approximation, the probability distribution is optimally approximated by third-order dependency. The proposed third-order dependency-based approximation is applied to the combination of multiple classifiers recognizing handwritten numerals from Concordia University and the University of California, Irvine, and its usefulness is demonstrated experimentally.
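
For reference, the general shape of a third-order product approximation can be written as below; this is a hedged restatement in standard notation, with the dependency structure left abstract, not the paper's specific optimal-approximation procedure.

```latex
% Let (x_{i_1},\dots,x_{i_{K+1}}) be a permutation of the K classifier outputs
% c_1,\dots,c_K and the class label y. A third-order product approximation lets
% every factor condition on at most two previously placed variables:
P(c_1,\dots,c_K,y) \;\approx\;
  P(x_{i_1})\, P(x_{i_2}\mid x_{i_1})
  \prod_{k=3}^{K+1} P\bigl(x_{i_k}\mid x_{j(k)},\,x_{l(k)}\bigr),
  \qquad j(k),\,l(k) < k.
% The combined decision then assigns a test pattern to the class maximizing the
% approximated posterior:
\hat{y} \;=\; \arg\max_{y} P(y \mid c_1,\dots,c_K)
        \;=\; \arg\max_{y} P(c_1,\dots,c_K,y).
```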

A credit scoring model of a capital company's customers using genetic algorithm based integration of multiple classifiers (유전자알고리즘 기반 복수 분류모형 통합에 의한 캐피탈고객의 신용 스코어링 모형)

  • Kim Kap-Sik
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.279-286 / 2005
  • The objective of this study is to suggest a credit scoring model for a capital company's customers by integrating multiple classifiers using a genetic algorithm. For this purpose, an integrated model is derived in two phases. In the first phase, three types of classifiers - MLP (Multi-Layered Perceptron), RBF (Radial Basis Function) and linear models - are trained, with three classifiers of each type, so that we have nine classifiers in total. In the second phase, a genetic algorithm is applied twice to integrate the classifiers: three group-level models are first derived, one from each group, and a final model is then derived from these three. As a result, the suggested model shows superior accuracy to any single classifier.
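
A minimal sketch of the two-phase integration with assumed stand-ins: scikit-learn's MLPClassifier, an RBF-kernel SVC in place of an RBF network, and LogisticRegression as the linear model; the genetic step is simplified to an evolutionary search over non-negative combination weights scored by validation accuracy, so the hyperparameters and helper names here are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def ga_weights(probas, y_idx, pop=30, gens=40, seed=0):
    """Evolve non-negative weights so the weighted average of the probability
    matrices in `probas` maximizes accuracy against integer class indices y_idx."""
    rng = np.random.default_rng(seed)

    def acc(w):
        mix = sum(wi * p for wi, p in zip(w, probas))
        return np.mean(np.argmax(mix, axis=1) == y_idx)

    population = rng.random((pop, len(probas)))
    for _ in range(gens):
        scores = np.array([acc(w) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]           # keep the fitter half
        children = parents[rng.integers(len(parents), size=pop - len(parents))] \
            + rng.normal(0.0, 0.1, (pop - len(parents), len(probas)))  # mutate copies
        population = np.vstack([parents, np.clip(children, 0.0, None)])
    return max(population, key=acc)

def build_capital_scorer(X_tr, y_tr, X_val, y_val):
    classes = np.unique(y_tr)
    y_idx = np.searchsorted(classes, y_val)       # map labels to proba column indices
    groups = {
        'mlp':    [MLPClassifier(hidden_layer_sizes=(h,), max_iter=500) for h in (8, 16, 32)],
        'rbf':    [SVC(kernel='rbf', C=c, probability=True) for c in (0.1, 1.0, 10.0)],
        'linear': [LogisticRegression(C=c, max_iter=1000) for c in (0.1, 1.0, 10.0)],
    }
    group_probas, weights = [], {}
    for name, clfs in groups.items():             # phase 1: train the nine classifiers
        probas = [clf.fit(X_tr, y_tr).predict_proba(X_val) for clf in clfs]
        weights[name] = ga_weights(probas, y_idx)                 # GA pass 1: within group
        group_probas.append(sum(w * p for w, p in zip(weights[name], probas)))
    weights['final'] = ga_weights(group_probas, y_idx)            # GA pass 2: across groups
    return groups, weights
```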
