• Title/Summary/Keyword: class-imbalanced data (클래스 불균형 데이터)


Adversarial Training Method for Handling Class Imbalance Problems in Dialog Datasets (대화 데이터셋의 클래스 불균형 문제 보정을 위한 적대적 학습 기법)

  • Cho, Su-Phil; Choi, Yong Suk
    • Annual Conference on Human and Language Technology / 2019.10a / pp.434-439 / 2019
  • In deep-learning-based classification models, class imbalance in the data severely degrades classification performance on minority classes. This paper proposes an adversarial training technique as a way to mitigate this class imbalance problem. To verify whether the adversarial training technique improves performance, we defined four deep-learning-based classification models and compared their classification performance. Experimental results show that, when training on a dialog dataset, applying the adversarial training technique substantially improves classification performance on the minority classes while maintaining performance on the majority classes.

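The abstract above does not spell out the adversarial scheme, so the following is only a generic illustration of one common form of adversarial training for text classifiers: FGSM-style perturbation of the input embeddings combined with class-weighted cross-entropy. The function name and epsilon value are hypothetical, and this is not necessarily the authors' method.

```python
import torch
import torch.nn as nn

def adversarial_step(model, embeddings, labels, class_weights, epsilon=1e-2):
    """Return a combined loss over clean and FGSM-perturbed embeddings."""
    loss_fn = nn.CrossEntropyLoss(weight=class_weights)  # minority classes weighted up

    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), labels)

    # Gradient of the loss w.r.t. the embeddings gives the perturbation direction.
    (grad,) = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    perturbed = (embeddings + epsilon * grad.sign()).detach()

    adv_loss = loss_fn(model(perturbed), labels)
    return clean_loss + adv_loss   # caller calls .backward() and steps the optimizer
```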

Machine Learning Based Intrusion Detection Systems for Class Imbalanced Datasets (클래스 불균형 데이터에 적합한 기계 학습 기반 침입 탐지 시스템)

  • Cheong, Yun-Gyung; Park, Kinam; Kim, Hyunjoo; Kim, Jonghyun; Hyun, Sangwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.6 / pp.1385-1395 / 2017
  • This paper aims to develop an IDS (Intrusion Detection System) that takes class-imbalanced datasets into account. To this end, we first built a set of training datasets from the Kyoto 2006+ dataset, in which the amounts of normal data and abnormal (intrusion) data are not balanced. We then ran a number of tests to evaluate the effectiveness of machine learning techniques for detecting intrusions. Our evaluation results demonstrate that the Random Forest algorithm achieved the best performance.
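A minimal sketch of the evaluated setting, assuming a pre-processed tabular file; the file name and column names below are placeholders rather than the Kyoto 2006+ schema.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import pandas as pd

df = pd.read_csv("kyoto_sample.csv")            # hypothetical pre-processed file
X, y = df.drop(columns=["label"]), df["label"]  # label: normal vs. intrusion

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(
    n_estimators=200,
    class_weight="balanced",   # counteract the skewed normal/intrusion ratio
    random_state=0)
clf.fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))
```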

Classification of Imbalanced Data Using Multilayer Perceptrons (다층퍼셉트론에 의한 불균형 데이터의 학습 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.141-148 / 2009
  • Recently there have been many research efforts focused on imbalanced data classification problems, since they are pervasive but hard to solve. Approaches to imbalanced data problems can be categorized into data-level approaches using re-sampling, algorithmic-level approaches using cost functions, and ensembles of basic classifiers for performance improvement. As an algorithmic-level approach, this paper proposes to use multilayer perceptrons with higher-order error functions. The error functions intensify the training of minority-class patterns and weaken the training of majority-class patterns. Mammography and thyroid datasets are used to verify the superiority of the proposed method over other methods such as the mean-squared error, two-phase, and threshold-moving methods.
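As a rough illustration of the idea, assuming one-hot float targets and a boolean minority mask, the sketch below raises the per-sample error to a higher power for minority-class samples; the exponent values are placeholders, not the ones derived in the paper.

```python
import torch

def higher_order_error(outputs, targets, minority_mask,
                       minority_power=4.0, majority_power=2.0):
    """outputs, targets: (batch, n_classes) float tensors; minority_mask: (batch,) bool."""
    err = torch.abs(outputs - targets)
    p_min = torch.full_like(err[:, 0], minority_power)
    p_maj = torch.full_like(err[:, 0], majority_power)
    power = torch.where(minority_mask, p_min, p_maj).unsqueeze(1)
    # A higher exponent for minority samples makes their gradients dominate
    # training, which is the intuition behind the higher-order error function.
    return (err ** power).sum(dim=1).mean()
```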

Methods For Resolving Challenges In Multi-class Korean Sentiment Analysis (다중클래스 한국어 감성분석에서 클래스 불균형과 손실 스파이크 문제 해결을 위한 기법)

  • Park, Jeiyoon; Yang, Kisu; Park, Yewon; Lee, Moongi; Lee, Sangwon; Lim, Sooyeon; Cho, Jaehoon; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.507-511 / 2020
  • In open-domain dialog, analyzing a speaker's subjective emotional information, such as attitudes and dispositions expressed in text, can be used both to elicit richer responses from users and to provide them. However, most existing work on Korean sentiment analysis handles only two-class classification (positive and negative), which makes it difficult to analyze a real speaker's emotional information accurately. Moreover, in a recently released multi-class Korean dialog sentiment analysis dataset, the neutral class accounts for half of the entire dataset while some classes have too few examples to use; in other words, the class imbalance makes the dataset very difficult to handle. In this paper, we discuss techniques for efficiently classifying sessions in Korean dialogs with seven classes. Despite the severe class imbalance, we achieved a micro F1 score of 76.56.

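The abstract names its techniques only in general terms, so the sketch below is a generic illustration of two common remedies for the problems it mentions (inverse-frequency class weights against the imbalance, gradient clipping against loss spikes); it is not a description of the authors' method, and all names are placeholders.

```python
import torch
import torch.nn as nn
from collections import Counter

def make_class_weights(labels, num_classes=7):
    """Inverse-frequency weights, normalized so their mean is 1."""
    counts = Counter(labels)
    weights = torch.tensor([1.0 / counts[c] for c in range(num_classes)])
    return weights * num_classes / weights.sum()

def train_step(model, optimizer, batch_x, batch_y, class_weights, max_norm=1.0):
    loss_fn = nn.CrossEntropyLoss(weight=class_weights)
    optimizer.zero_grad()
    loss = loss_fn(model(batch_x), batch_y)
    loss.backward()
    # Clip gradients to dampen occasional loss spikes during training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```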

Support Vector Machine Algorithm for Imbalanced Data Learning (불균형 데이터 학습을 위한 지지벡터기계 알고리즘)

  • Kim, Kwang-Seong; Hwang, Doo-Sung
    • Journal of the Korea Society of Computer and Information / v.15 no.7 / pp.11-17 / 2010
  • This paper proposes an improved SMO algorithm that solves a quadratic optimization problem for class-imbalanced learning. The SMO algorithm is appropriate for solving the optimization problem of a support vector machine that assigns different regularization values to the two classes, and the proposed SMO learning algorithm iterates the learning steps to find the current optimal solutions of only two Lagrange variables selected per class. The proposed algorithm is tested on UCI benchmark problems and compared with the experimental results of the standard SMO algorithm using the g-mean measure, which accounts for class-imbalanced distributions when assessing generalization performance. In comparison to the SMO algorithm, the proposed algorithm is effective in improving the prediction rate on the minority-class data and can shorten the training time.
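The paper modifies SMO itself, which is not reproduced here; as an off-the-shelf approximation of the same idea (a different regularization value per class), scikit-learn lets the penalty C be scaled per class through class_weight, and the g-mean used in the paper can be computed from per-class recalls. The synthetic data and the weight of 9.0 are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import recall_score

# Synthetic 9:1 imbalanced problem standing in for the UCI benchmarks.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight scales the regularization value per class (C_i = C * w_i),
# mirroring the idea of assigning different regularization to the two classes.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 9.0}).fit(X_tr, y_tr)

# g-mean: geometric mean of per-class recalls, the measure used in the paper.
recalls = recall_score(y_te, clf.predict(X_te), average=None)
print("g-mean:", np.sqrt(recalls.prod()))
```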

A Deep Learning Based Over-Sampling Scheme for Imbalanced Data Classification (불균형 데이터 분류를 위한 딥러닝 기반 오버샘플링 기법)

  • Son, Min Jae; Jung, Seung Won; Hwang, Een Jun
    • KIPS Transactions on Software and Data Engineering / v.8 no.7 / pp.311-316 / 2019
  • The classification problem is to predict the class to which an input sample belongs. One of the most popular ways to do this is to train a machine learning algorithm on the given dataset. In this case, the dataset should have a well-balanced class distribution for the best performance; when the class distribution is imbalanced, classification performance can be very poor. To overcome this problem, we propose an over-sampling scheme that balances the number of samples by using Conditional Generative Adversarial Networks (CGAN). CGAN is a generative model developed from Generative Adversarial Networks (GAN), which can learn data characteristics and generate data similar to real data. Therefore, CGAN can generate data for a class that has only a small number of samples, so that the problem induced by an imbalanced class distribution can be mitigated and classification performance can be improved. Experiments using actual collected data show that the over-sampling technique using CGAN is effective and superior to existing over-sampling techniques.
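A compact sketch of the CGAN idea described above: both generator and discriminator are conditioned on the class label, so the trained generator can be asked for additional minority-class samples. Layer sizes and the helper function are illustrative, not taken from the paper, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim, num_classes, data_dim):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 64), nn.ReLU(),
            nn.Linear(64, data_dim))

    def forward(self, z, labels):
        # Condition the generator on the class label via a label embedding.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self, num_classes, data_dim):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(data_dim + num_classes, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x, labels):
        return self.net(torch.cat([x, self.label_emb(labels)], dim=1))

def oversample_minority(generator, minority_label, n_samples, noise_dim):
    """After adversarial training, draw synthetic samples for the minority class."""
    z = torch.randn(n_samples, noise_dim)
    labels = torch.full((n_samples,), minority_label, dtype=torch.long)
    with torch.no_grad():
        return generator(z, labels)
```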

Improved Focused Sampling for Class Imbalance Problem (클래스 불균형 문제를 해결하기 위한 개선된 집중 샘플링)

  • Kim, Man-Sun; Yang, Hyung-Jeong; Kim, Soo-Hyung; Cheah, Wooi Ping
    • The KIPS Transactions: Part B / v.14B no.4 / pp.287-294 / 2007
  • Many classification algorithms for real-world data suffer from the class imbalance problem. To solve this problem, various methods have been proposed, such as altering the training balance and designing better sampling strategies. Previous methods, however, do not adequately account for the distribution of the input data and its constraints. In this paper, we propose a focused sampling method that improves on these previous methods. The key step is to select a useful subset from the whole training set. To obtain this subset, the proposed method divides the data into regions according to scores computed from the distribution of a SOM (Self-Organizing Map) over the input data. The scores are sorted in ascending order; they represent the distribution of the input data, which in turn may reflect the characteristics of the whole data. A new training dataset is obtained by eliminating less useful samples located in the region between an upper bound and a lower bound. The proposed method gives better, or at least similar, classification accuracy compared to previous approaches. It also offers several additional benefits: it reduces the class imbalance ratio, reduces the size of the training set, and prevents over-fitting. The proposed method was tested with a kNN classifier; an experiment on the ecoli dataset shows that it achieves up to 2.27 times higher precision than the other methods.
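A hedged sketch of the sampling idea, assuming the third-party minisom package (which the paper may not have used): each sample is scored by its distance to its best-matching SOM unit, and samples whose score falls between two bounds are dropped. The scoring rule and percentile bounds here are placeholders for the paper's more elaborate procedure.

```python
import numpy as np
from minisom import MiniSom   # pip install minisom

def focused_sample(X, y, lower_pct=40, upper_pct=60, som_shape=(10, 10)):
    som = MiniSom(som_shape[0], som_shape[1], X.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X, 1000)

    # Score each sample by distance to the weight vector of its winning unit.
    weights = som.get_weights()
    scores = np.array([np.linalg.norm(x - weights[som.winner(x)]) for x in X])

    lo, hi = np.percentile(scores, [lower_pct, upper_pct])
    keep = (scores < lo) | (scores > hi)   # drop the middle band as "less useful"
    return X[keep], y[keep]
```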

Class Imbalance Resolution Method and Classification Algorithm Suggesting Based on Dataset Type Segmentation (데이터셋 유형 분류를 통한 클래스 불균형 해소 방법 및 분류 알고리즘 추천)

  • Kim, Jeonghun; Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.23-43 / 2022
  • In order to apply AI (Artificial Intelligence) in various industries, interest in algorithm selection is increasing. Algorithm selection is largely determined by the experience of the data scientist; for an inexperienced data scientist, an algorithm can instead be selected through meta-learning based on dataset characteristics. However, since this selection process is a black box, it has not been possible to know on what basis an existing algorithm recommendation was derived. Accordingly, this study uses k-means cluster analysis to classify datasets into types according to their characteristics and to explore suitable classification algorithms and class-imbalance resolution methods for each type. As a result, four types were derived, and an appropriate class-imbalance resolution method and classification algorithm are recommended according to the dataset type.
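A minimal sketch of the recommendation flow, assuming a small set of hand-picked meta-features; the meta-features and the type-to-recipe lookup table are hypothetical, not the ones derived in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def meta_features(X, y):
    """Describe a dataset by size, dimensionality, class count, and imbalance ratio."""
    classes, counts = np.unique(y, return_counts=True)
    return [X.shape[0], X.shape[1], len(classes), counts.max() / counts.min()]

def fit_type_model(datasets, n_types=4):
    """datasets: list of (X, y) pairs used to learn the dataset typology."""
    feats = np.array([meta_features(X, y) for X, y in datasets])
    scaler = StandardScaler().fit(feats)
    km = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit(
        scaler.transform(feats))
    return scaler, km

RECOMMENDATION = {  # hypothetical mapping from dataset type to a recipe
    0: ("SMOTE", "RandomForest"),
    1: ("random undersampling", "LogisticRegression"),
    2: ("class weights", "GradientBoosting"),
    3: ("no resampling", "SVM"),
}

def recommend(scaler, km, X, y):
    dataset_type = km.predict(scaler.transform([meta_features(X, y)]))[0]
    return RECOMMENDATION[dataset_type]
```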

Processing Method of Unbalanced Data for a Fault Detection System Based on Motor Gear Sound (모터 동작음 기반 불량 검출 시스템을 위한 불균형 데이터 처리 방안 연구)

  • Lee, Younghwa; Choi, Geonyoung; Park, Gooman
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1305-1307 / 2022
  • Defects in automotive parts can degrade the performance of the whole system and cause human and material losses, so defect detection on the production line is very important. Accordingly, deep-learning-based fault diagnosis systems are being studied in various forms to achieve accurate and consistent defect detection. In manufacturing environments, however, abnormal samples occur far less frequently than normal samples. This leads to a class imbalance problem in the training data, which in turn affects the performance of the classification model that discriminates faults. In this study, we propose a method for handling data imbalance in the design of a defect detection system that identifies faulty motors from their operating sound. The operating sounds of automotive side-mirror motors were used as the training and test dataset, and a comparative analysis of the label-distribution-aware margin (LDAM) loss, which reflects the number of samples per class when computing the loss, together with Inception, ResNet, and DenseNet network models, demonstrated the potential for handling the imbalanced data.

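A sketch of the LDAM loss mentioned above, following the commonly published formulation (a per-class margin proportional to n_j^(-1/4), subtracted from the true-class logit before cross-entropy); the hyperparameters are common defaults, not necessarily this study's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDAMLoss(nn.Module):
    def __init__(self, samples_per_class, max_margin=0.5, scale=30.0):
        super().__init__()
        counts = torch.tensor(samples_per_class, dtype=torch.float)
        margins = 1.0 / counts.pow(0.25)          # m_j proportional to n_j^(-1/4)
        self.margins = margins * (max_margin / margins.max())
        self.scale = scale

    def forward(self, logits, targets):
        # Subtract the class margin from the logit of each sample's true class.
        margin = self.margins.to(logits.device)[targets]          # (batch,)
        adjusted = logits.clone()
        adjusted[torch.arange(logits.size(0)), targets] -= margin
        return F.cross_entropy(self.scale * adjusted, targets)
```

For a two-class setting with, say, 9,500 normal and 500 faulty clips (hypothetical counts), the loss would be constructed as `LDAMLoss(samples_per_class=[9500, 500])`.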

Learning Behavior Analysis of Bayesian Algorithm Under Class Imbalance Problems (클래스 불균형 문제에서 베이지안 알고리즘의 학습 행위 분석)

  • Hwang, Doo-Sung
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.179-186 / 2008
  • In this paper we analyze the behavior of the Bayesian algorithm when learning class imbalance problems and compare performance evaluation methods. The learning performance of the Bayesian algorithm is evaluated on class imbalance problems generated by varying the prior data distribution, the imbalance ratio, and the discrimination complexity. The experimental results are reported as the AUC (Area Under the Curve) values of both ROC (Receiver Operating Characteristic) and PR (Precision-Recall) evaluation measures and are compared according to the imbalance ratio and the discrimination complexity. In the comparison and analysis, the Bayesian algorithm suffers from the imbalance ratio, consistent with previously reported research, and the data overlap caused by discrimination complexity is another factor that hampers learning performance. As the discrimination complexity and class imbalance ratio increase, the AUC of the PR measure varies much more than the AUC of the ROC measure, whereas the two measures behave similarly when the discrimination complexity and class imbalance ratio are low. The experimental results show that the AUC of the PR measure is more appropriate for evaluating learning on class imbalance problems and, furthermore, is beneficial for designing an optimal learning model that takes misclassification cost into account.
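A small sketch of the comparison the paper makes, using a naive Bayes classifier on a synthetic imbalanced problem and scoring it with both ROC AUC and PR AUC (average precision); the data-generation parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, average_precision_score

# Skewed, partially overlapping classes stand in for the generated problems.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           class_sep=0.8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = GaussianNB().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))
print("PR  AUC:", average_precision_score(y_te, scores))  # more sensitive to skew
```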