• Title/Abstract/Keyword: Binary Classifier

131 search results (processing time: 0.02 sec)

THE PERFORMANCE OF THE BINARY TREE CLASSIFIER AND DATA CHARACTERISTICS

  • Park, Jeong-sun
    • Management Science and Financial Engineering / Vol. 3, No. 1 / pp.39-56 / 1997
  • This paper applies the binary tree classifier and discriminant analysis to predicting failures of banks and insurance companies. In this study, discriminant analysis is generally better than the binary tree classifier at classifying bank defaults, while the binary tree is generally better than discriminant analysis at classifying insurance company defaults. This can be explained by the fact that the performance of a classifier depends on the characteristics of the data: if the data are dispersed in a way that suits the classifier, it will perform well; otherwise, it may perform poorly. The two data sets (bank and insurance) are analyzed to explain why the binary tree performs better on the insurance data and worse on the bank data, and why discriminant analysis performs better on the bank data and worse on the insurance data.

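The kind of comparison reported above can be outlined in a few lines of scikit-learn; the sketch below contrasts a binary decision tree with linear discriminant analysis under cross-validation on synthetic data (the paper's bank and insurance data sets are not available here, so make_classification stands in for them).

```python
# Minimal comparison of a binary decision tree and linear discriminant
# analysis under 10-fold cross-validation. Synthetic data stand in for the
# paper's bank and insurance default data sets (illustration only).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)

for name, clf in [("binary tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("discriminant analysis", LinearDiscriminantAnalysis())]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")
```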

An Improvement of AdaBoost using Boundary Classifier

  • 이원주;천민규;현창호;박민용
    • 한국지능시스템학회논문지 / Vol. 23, No. 2 / pp.166-171 / 2013
  • The method proposed in this paper improves the performance of the boosting algorithm in machine learning. The proposed Boundary AdaBoost algorithm makes up for the weak points of a normal binary classifier using threshold-boundary concepts: the new boundary is located near the threshold of the binary classifier, so the algorithm improves classification in the regions where a normal binary classifier is weak. The final classifier with the optimal boundary can therefore decrease error rates by classifying with more reasonable features. Finally, this paper derives the new algorithm's optimal solution and demonstrates how classifier accuracy can be improved using the proposed Boundary AdaBoost in a pedestrian-detection simulation experiment with 10-fold cross validation.
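
As a point of reference for the description above, the sketch below implements plain discrete AdaBoost with one-dimensional threshold (stump) weak classifiers in NumPy; the proposed Boundary AdaBoost additionally places a boundary near each stump's threshold, a rule the abstract only outlines, so that step is noted in comments rather than implemented.

```python
# Plain discrete AdaBoost with threshold (stump) weak classifiers, for
# reference. Boundary AdaBoost would additionally treat samples that fall
# close to each stump's threshold specially; that boundary rule is only
# sketched in the abstract and is not implemented here. Labels y are in {-1, +1}.
import numpy as np

def fit_stump(X, y, w):
    """Return (weighted error, feature, threshold, sign) of the best stump."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] > thr, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, n_rounds=10):
    w = np.full(len(y), 1.0 / len(y))           # uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(X[:, j] > thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in ensemble)
    return np.sign(score)
```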

DESIGN OF A BINARY DECISION TREE FOR RECOGNITION OF THE DEFECT PATTERNS OF COLD MILL STRIP USING GENETIC ALGORITHM

  • Lee, Byung-Jin;Kyoung Lyou;Park, Gwi-Tae;Kim, Kyoung-Min
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1998년도 The Third Asian Fuzzy Systems Symposium / pp.208-212 / 1998
  • This paper suggests a method to recognize the various defect patterns of cold mill strip using a binary decision tree constructed automatically by a genetic algorithm. When classifying complex patterns with high similarity, such as the defect patterns of cold mill strip, the selection of an optimal feature set and recognizer structure is important for a high recognition rate. In this paper, a genetic algorithm is used to select a subset of suitable features at each node of the binary decision tree. The feature subset of maximum fitness is chosen and the patterns are classified into two classes by a linear decision function. This process is repeated at each node until all the patterns are classified into individual classes; in this way, the binary decision tree classifier is constructed automatically. After constructing the binary decision tree, the final recognizer is obtained by neural-network learning using a set of standard patterns at each node. The binary decision tree classifier is applied to recognition of the defect patterns of cold mill strip, and experimental results are given to show the usefulness of the proposed scheme.

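The paper above (and its journal version listed next) builds the tree by running a genetic algorithm at every node to pick a feature subset for a linear decision function. The sketch below shows that per-node step only, under assumptions of my own: masks as chromosomes, tournament selection, uniform crossover, bit-flip mutation, and cross-validated LDA accuracy as the fitness (the authors' exact GA operators, fitness function, and linear decision function are not given in the abstracts).

```python
# Sketch of the per-node step: a genetic algorithm searches for the feature
# subset whose linear decision function best separates the node's patterns
# into two groups. LDA stands in for the paper's linear decision function;
# the GA operators and fitness are assumptions, not the authors' settings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    return cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y, cv=3).mean()

def ga_feature_subset(X, y, pop_size=20, generations=20, p_mut=0.1):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5              # chromosomes = feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        new_pop = [pop[scores.argmax()].copy()]             # elitism
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)           # tournament selection
            a = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if scores[i] >= scores[j] else pop[j]
            child = np.where(rng.random(n_feat) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(n_feat) < p_mut               # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]

# illustrative use on synthetic two-class node data
X = np.vstack([rng.normal(0, 1, (60, 12)), rng.normal(1, 1, (60, 12))])
y = np.array([0] * 60 + [1] * 60)
print("selected features:", np.flatnonzero(ga_feature_subset(X, y)))
```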

유전 알고리듬을 이용한 이진 트리 분류기의 설계와 냉연 흠 분류에의 적용 (Design of a binary decision tree using genetic algorithm for recognition of the defect patterns of cold mill strip)

  • 김경민;이병진;류경;박귀태
    • 제어로봇시스템학회논문지 / Vol. 6, No. 1 / pp.98-103 / 2000
  • This paper suggests a method to recognize the various defect patterns of a cold mill strip using a binary decision tree automatically constructed by a genetic algorithm (GA). In classifying complex patterns with high similarity, such as the defect patterns of a cold mill strip, the selection of an optimal feature set and an appropriate recognizer is important to achieve a high recognition rate. In this paper, a GA is used to select a subset of suitable features at each node in the binary decision tree. The feature subset with maximum fitness is chosen and the patterns are classified into two classes using a linear decision function. This process is repeated at each node until all the patterns are classified into individual classes. In this way, the classifier using the binary decision tree is constructed automatically. After constructing the binary decision tree, the final recognizer is obtained by having a neural network learn sets of standard patterns at each node. The classifier using the binary decision tree is applied to the recognition of defect patterns of a cold mill strip, and experimental results are given to demonstrate the usefulness of the proposed scheme.


Modifying linearly non-separable support vector machine binary classifier to account for the centroid mean vector

  • Mubarak Al-Shukeili;Ronald Wesonga
    • Communications for Statistical Applications and Methods / Vol. 30, No. 3 / pp.245-258 / 2023
  • This study proposes a modification to the objective function of the support vector machine for the linearly non-separable case of a binary classifier yi ∈ {-1, 1}. The modification takes into account the position of each data item xi relative to its corresponding class centroid. The resulting optimization function involves the centroid mean vector and the spread of the data, besides the support vectors, which should be minimized by the choice of hyperplane β. Theoretical assumptions have been tested to derive an optimal separating hyperplane that yields the minimal misclassification rate. The proposed method has been evaluated in simulation studies and on real-life COVID-19 patient hospitalization outcome data. Results show that the proposed method performs better than the classical linear SVM classifier as the sample size increases and is preferred in the presence of correlations among predictors as well as in the presence of extreme values.
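
For context, the classical soft-margin primal that the authors start from, written with the hyperplane parameter β used in the abstract, is shown below; the centroid-based term the paper adds to this objective is not reproduced here.

```latex
% Classical soft-margin SVM primal for the linearly non-separable binary case,
% y_i in {-1, 1}; the paper modifies this objective with a term involving the
% class centroid mean vectors (not reproduced here).
\min_{\beta,\,\beta_0,\,\xi}\ \tfrac{1}{2}\lVert\beta\rVert^{2} + C\sum_{i=1}^{n}\xi_i
\quad\text{subject to}\quad y_i\left(x_i^{\top}\beta + \beta_0\right) \ge 1-\xi_i,\qquad \xi_i \ge 0.
```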

A Framework for Semantic Interpretation of Noun Compounds Using Tratz Model and Binary Features

  • Zaeri, Ahmad;Nematbakhsh, Mohammad Ali
    • ETRI Journal / Vol. 34, No. 5 / pp.743-752 / 2012
  • Semantic interpretation of the relationship between noun compound (NC) elements has been a challenging issue due to the lack of contextual information, the unbounded number of combinations, and the absence of a universally accepted categorization system. Current models require a huge corpus to extract contextual information, which limits their usage in many situations. In this paper, a new semantic relation interpreter for NCs based on lightweight binary features is proposed; some of the binary features used are novel, and the interpreter also uses a new feature selection method. By developing these new features and techniques, the proposed method removes the need for any huge corpus. Implementing this method in a modular, plugin-based framework and training it on the largest and most current fine-grained data set shows that its accuracy is better than that of previously reported methods that rely on large corpora. This improvement in accuracy and efficiency is achieved not only by improving older features with techniques such as semantic scattering and sense collocation, but also by using various novel features and a maximum entropy classifier. It is also shown that the accuracy of the maximum entropy classifier is higher than that of other classifiers, such as a support vector machine, a naïve Bayes classifier, and a decision tree.
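
The classifier at the core of the approach above is a maximum entropy model over binary features. The sketch below trains such a model with scikit-learn's multinomial logistic regression on placeholder data; the binary features, relation labels, and data set sizes are assumptions for illustration, not the paper's feature set or the Tratz data.

```python
# Sketch: a maximum entropy classifier (multinomial logistic regression) over
# binary feature vectors. The binary features and the eight relation labels
# are random placeholders, not the paper's feature set or the Tratz data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50))       # 1000 noun compounds x 50 binary features
y = rng.integers(0, 8, size=1000)             # 8 hypothetical semantic relations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
maxent = LogisticRegression(max_iter=1000)    # softmax regression = max entropy model
maxent.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, maxent.predict(X_te)))
```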

A Multi-Class Classifier of Modified Convolution Neural Network by Dynamic Hyperplane of Support Vector Machine

  • Nur Suhailayani Suhaimi;Zalinda Othman;Mohd Ridzwan Yaakub
    • International Journal of Computer Science & Network Security / Vol. 23, No. 11 / pp.21-31 / 2023
  • In this paper, we focus on the problem of evaluating multi-class classification accuracy and simulating multiple classifier performance metrics. Multi-class classifiers for sentiment analysis involve many challenges, whereas previous research narrowed to the binary classification model since it provides higher accuracy when dealing with text data. We therefore take inspiration from the non-linear support vector machine and modify the algorithm by embedding dynamic hyperplanes representing multiple class labels. We then analyze the performance of multi-class classifiers using macro-accuracy, micro-accuracy, and several other metrics to justify the significance of our algorithmic enhancement. Furthermore, we hybridize an Enhanced Convolution Neural Network (ECNN) with a Dynamic Support Vector Machine (DSVM) to demonstrate the effectiveness and efficiency of the classifier on multi-class text data. We performed experiments on three hybrid classifiers: ECNN with binary SVM (ECNN-BSVM), ECNN with linear multi-class SVM (ECNN-MCSVM), and our proposed algorithm (ECNN-DSVM). Comparative experiments on the hybrid algorithms yielded 85.12% for single-metric accuracy and 86.95% for multiple metrics on average, while our modified ECNN-DSVM classifier reached 98.29% micro-accuracy with an f-score of up to 98%. As a future direction of this research, we aim at hyperplane optimization analysis.
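
The macro- and micro-averaged metrics referred to above can be computed directly with scikit-learn; the sketch below uses made-up predictions for illustration and does not reconstruct the ECNN-DSVM model itself.

```python
# Macro- vs micro-averaged metrics for a multi-class classifier. The label
# vectors are made up for illustration; the ECNN-DSVM model is not rebuilt here.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]       # hypothetical sentiment classes
y_pred = [0, 1, 2, 1, 1, 0, 2, 2, 0, 2]

print("accuracy       :", accuracy_score(y_true, y_pred))
print("micro f-score  :", f1_score(y_true, y_pred, average="micro"))
print("macro f-score  :", f1_score(y_true, y_pred, average="macro"))
print("macro precision:", precision_score(y_true, y_pred, average="macro"))
print("macro recall   :", recall_score(y_true, y_pred, average="macro"))
```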

특징 추출 알고리즘과 Adaboost를 이용한 이진분류기 (Binary classification by the combination of Adaboost and feature extraction methods)

  • 함승록;곽노준
    • 전자공학회논문지CI / Vol. 49, No. 4 / pp.42-53 / 2012
  • In pattern recognition and machine learning, classification is the most basic type of problem to be solved. The AdaBoost algorithm is a refinement of the boosting idea for practical data analysis: it builds a strong two-class classifier as a weighted combination of several weak classifiers obtained over repeated rounds. Principal component analysis (PCA) and linear discriminant analysis (LDA) are methods widely used for feature extraction and for dimensionality reduction, mapping high-dimensional feature vectors to low-dimensional ones. In this paper, we propose efficient Boosted-PCA and Boosted-LDA algorithms that use features extracted by PCA and LDA as the weak classifiers of AdaBoost, so that feature extraction and classification are performed simultaneously and the recognition rate is improved. In the final section, classification experiments with the proposed algorithms are carried out on two-class data from the UCI data sets and on the male and female images of the FRGC data. The experimental results show that the proposed Boosted-PCA and Boosted-LDA algorithms achieve higher recognition rates than existing feature extraction algorithms combined with nearest-neighbor and SVM classifiers.
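
A rough scikit-learn approximation of the Boosted-PCA idea described above is sketched below: project the data onto principal components and let AdaBoost combine depth-1 threshold (stump) weak classifiers over those projections. This is an approximation under assumed settings (a breast-cancer data set as a stand-in two-class problem, 10 components, 50 boosting rounds), not the authors' exact weak-classifier construction; the Boosted-LDA variant would be analogous with LDA in place of PCA.

```python
# Rough approximation of Boosted-PCA: PCA projection followed by AdaBoost,
# whose default weak learner is a depth-1 decision stump (a single threshold
# on one projected feature). Data set, component count, and round count are
# illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)       # stand-in two-class data set
boosted_pca = make_pipeline(
    PCA(n_components=10),
    AdaBoostClassifier(n_estimators=50),          # stumps over PCA projections
)
print("CV accuracy: %.3f" % cross_val_score(boosted_pca, X, y, cv=5).mean())
```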

A Novel Posterior Probability Estimation Method for Multi-label Naive Bayes Classification

  • Kim, Hae-Cheon;Lee, Jaesung
    • 한국컴퓨터정보학회논문지 / Vol. 23, No. 6 / pp.1-7 / 2018
  • Multi-label classification finds the multiple labels associated with an input pattern. It can be achieved by extending conventional single-label classification; common extension techniques are known as binary relevance, label powerset, and classifier chains. However, most extended multi-label naive Bayes classifiers have not been able to estimate posterior probabilities accurately because they do not reflect label dependency, and the remaining extended multi-label naive Bayes classifiers are unstable in estimating posterior probabilities depending on the label selection order. To estimate posterior probabilities well, we propose a new posterior probability estimation method that efficiently reflects the relationships among all labels. The proposed method reflects the correlation between labels, and we have confirmed through experiments that the extended multi-label naive Bayes classifier using the proposed method has higher accuracy than existing multi-label naive Bayes classifiers.
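
The binary relevance and classifier chain extensions mentioned above can be set up around a naive Bayes base classifier with scikit-learn as sketched below; the data are synthetic placeholders, and the paper's proposed posterior-probability estimation method itself is not reproduced.

```python
# Sketch: two standard multi-label extensions around a naive Bayes base
# classifier -- Binary Relevance (one independent classifier per label) and a
# Classifier Chain (each classifier also sees the preceding labels). Synthetic
# data stand in; the paper's posterior-estimation method is not reproduced.
from sklearn.datasets import make_multilabel_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, Y = make_multilabel_classification(n_samples=500, n_classes=5, n_labels=3,
                                      random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

binary_relevance = MultiOutputClassifier(GaussianNB()).fit(X_tr, Y_tr)
chain = ClassifierChain(GaussianNB(), order="random", random_state=0).fit(X_tr, Y_tr)

# subset accuracy: fraction of samples whose full label set is predicted exactly
print("binary relevance:", accuracy_score(Y_te, binary_relevance.predict(X_te)))
print("classifier chain:", accuracy_score(Y_te, chain.predict(X_te)))
```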

라벨 스무딩을 활용한 치은염 이진 분류기 캘리브레이션 (Calibration for Gingivitis Binary Classifier via Epoch-wise Decaying Label-Smoothing)

  • 이상현
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021년도 추계학술대회 / pp.594-596 / 2021
  • Future healthcare systems will rely heavily on ill-labeled data due to the scarcity of experts trained well enough to label the data. Given such contamination of the dataset, it is not desirable for the neural network to become overconfident on the dataset; rather, it is preferable to give its predictions some margin. In this paper, we propose a novel epoch-wise decaying label-smoothing function to alleviate model over-confidence, and it outperforms a neural network trained with conventional cross entropy by 6.0%.

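The abstract above does not give the exact decay schedule, so the sketch below assumes an exponential per-epoch decay of the smoothing factor and relies on PyTorch's built-in label_smoothing argument of CrossEntropyLoss; the model, optimizer, and data loader are placeholders.

```python
# Epoch-wise decaying label smoothing (sketch). The smoothing factor eps
# decays exponentially each epoch -- an assumed schedule, since the abstract
# does not specify one. model, optimizer, and loader are placeholders.
import torch.nn as nn

def train(model, loader, optimizer, epochs=30, eps0=0.2, decay=0.9):
    for epoch in range(epochs):
        eps = eps0 * (decay ** epoch)                      # decayed smoothing factor
        criterion = nn.CrossEntropyLoss(label_smoothing=eps)
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)        # smoothed 2-class targets
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: eps = {eps:.4f}")
```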