• Title/Summary/Keyword: 데이터 불균형 문제 (Data Imbalance Problem)


Active Learning for Prediction of Potential Customers (잠재 고객 예측을 위한 능동 학습 기법)

  • 박상욱;장병탁
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.10b
    • /
    • pp.96-98
    • /
    • 2000
  • In this paper, we use an active data selection technique to approach efficiently the problem of predicting which potential customers are likely to make a purchase, after learning from data on buyers and non-buyers in a commerce environment. The experimental data, obtained from the CoIL Challenge 2000, are highly imbalanced because they contain far more information on non-buyers than on buyers, so training on all of the data at once yields poor performance. For this kind of real-world problem with an imbalanced distribution, we show that active learning with an RBF-based neural network improves prediction accuracy over conventional batch learning.

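The abstract above describes pool-based active learning, where the model repeatedly queries the unlabeled prospects it is least certain about. Below is a minimal sketch of that selection loop, using scikit-learn's RBF-kernel SVM as a hypothetical stand-in for the paper's RBF-based network; the uncertainty criterion, batch size, and round count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_pool, y_pool, seed_idx, n_rounds=10, batch=20):
    """Pool-based active learning with uncertainty sampling (illustrative sketch)."""
    labeled = list(seed_idx)                      # indices whose labels we already "paid" for
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for _ in range(n_rounds):
        # RBF-kernel SVM as a stand-in for the paper's RBF-based neural network
        clf = SVC(kernel="rbf", probability=True, class_weight="balanced")
        clf.fit(X_pool[labeled], y_pool[labeled])

        # Query the samples whose predicted purchase probability is closest to 0.5
        proba = clf.predict_proba(X_pool[unlabeled])[:, 1]
        uncertainty = np.abs(proba - 0.5)
        query = np.argsort(uncertainty)[:batch]

        picked = [unlabeled[i] for i in query]
        labeled.extend(picked)
        unlabeled = [i for i in unlabeled if i not in picked]

    return clf, labeled
```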

Comparison of Loss Function for Multi-Class Classification of Collision Events in Imbalanced Black-Box Video Data (불균형 블랙박스 동영상 데이터에서 충돌 상황의 다중 분류를 위한 손실 함수 비교)

  • Euisang Lee;Seokmin Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.49-54
    • /
    • 2024
  • Data imbalance is a common issue encountered in classification problems, stemming from a significant disparity in the number of samples between classes within the dataset. Such data imbalance typically leads to problems in classification models, including overfitting, underfitting, and misinterpretation of performance metrics. Methods to address this issue include resampling, augmentation, regularization techniques, and adjustment of loss functions. In this paper, we focus on loss function adjustment, particularly comparing the performance of various configurations of loss functions (Cross Entropy, Balanced Cross Entropy, two settings of Focal Loss: 𝛼 = 1 and 𝛼 = Balanced, and Asymmetric Loss) on multi-class black-box video data with imbalance issues. The comparison is conducted using the I3D and R3D_18 models.
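
Since the comparison above hinges on how these losses differ, here is a minimal multi-class focal loss sketch in PyTorch. The per-class α weights derived from inverse class frequency are a common convention and an assumption here (one way to realize the "𝛼 = Balanced" setting), not necessarily the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

class FocalLoss(torch.nn.Module):
    """Multi-class focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha  # None, or a 1-D tensor of per-class weights

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
        pt = log_pt.exp()
        loss = -((1.0 - pt) ** self.gamma) * log_pt
        if self.alpha is not None:                 # "alpha = Balanced"-style weighting
            loss = self.alpha.to(logits.device)[targets] * loss
        return loss.mean()

# Example: weights inversely proportional to class frequency (an assumed convention)
class_counts = torch.tensor([900.0, 60.0, 40.0])   # hypothetical imbalance
alpha = 1.0 / class_counts
alpha = alpha / alpha.sum() * len(class_counts)
criterion = FocalLoss(gamma=2.0, alpha=alpha)
```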

The Optimization of Ensembles for Bankruptcy Prediction (기업부도 예측 앙상블 모형의 최적화)

  • Myoung Jong Kim;Woo Seob Yun
    • Information Systems Review
    • /
    • v.24 no.1
    • /
    • pp.39-57
    • /
    • 2022
  • This paper proposes the GMOPTBoost algorithm to improve the performance of the AdaBoost algorithm for bankruptcy prediction, in which the class imbalance problem is inherent. The AdaBoost algorithm has the advantage of providing a robust learning opportunity for misclassified samples. However, it is limited in addressing the class imbalance problem because the concept of arithmetic mean accuracy is embedded in the algorithm. GMOPTBoost optimizes the geometric mean accuracy and can effectively solve the class imbalance problem by applying Gaussian gradient descent. The experimental datasets are constructed in the following two phases. First, five class-imbalanced datasets are constructed to verify the effect of the class imbalance problem on the performance of the prediction model and the performance improvement achieved by GMOPTBoost. Second, class-balanced data are constructed through data sampling techniques to verify the performance improvement achieved by GMOPTBoost. The main results of 30 repetitions of cross-validation analysis are as follows. First, the class imbalance problem degrades the performance of ensembles. Second, GMOPTBoost contributes to performance improvements of AdaBoost ensembles trained on imbalanced datasets. Third, data sampling techniques have a positive impact on performance improvement. Finally, GMOPTBoost contributes to significant performance improvements of AdaBoost ensembles trained on balanced datasets.
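
To make the arithmetic-mean versus geometric-mean distinction concrete, the sketch below compares the two accuracy notions on a hypothetical imbalanced confusion matrix. It illustrates the metric GMOPTBoost optimizes, not the GMOPTBoost algorithm itself; the counts are invented for illustration.

```python
import numpy as np

def class_accuracies(conf):
    """Per-class recall from a confusion matrix (rows = true, cols = predicted)."""
    return np.diag(conf) / conf.sum(axis=1)

# Hypothetical 95:5 imbalanced test set: the classifier nails the majority class
# but finds only 20% of the minority (insolvent) class.
conf = np.array([[940, 10],    # 950 solvent firms
                 [ 40, 10]])   # 50 insolvent firms

acc = np.diag(conf).sum() / conf.sum()    # arithmetic accuracy, looks fine
recalls = class_accuracies(conf)
gmean = np.sqrt(recalls.prod())           # geometric mean accuracy

print(f"accuracy = {acc:.3f}, per-class recall = {recalls.round(3)}, G-mean = {gmean:.3f}")
# accuracy ≈ 0.95 while G-mean ≈ 0.44, exposing the majority-class bias
```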

Oversampling-Based Ensemble Learning Methods for Imbalanced Data (불균형 데이터 처리를 위한 과표본화 기반 앙상블 학습 기법)

  • Kim, Kyung-Min;Jang, Ha-Young;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.10
    • /
    • pp.549-554
    • /
    • 2014
  • Handwritten character recognition data is usually imbalanced because it is collected from natural language sentences written by different writers. Imbalanced data can have a seriously negative effect on the performance of most machine learning algorithms. However, this problem is typically ignored in handwritten character recognition, because most of its difficulties are considered to stem from the high variance in the data set and the similar shapes of different characters. We propose oversampling-based ensemble learning methods to solve the imbalanced data problem in handwritten character recognition and to improve recognition accuracy. We also show empirically that the proposed methods improve the recognition accuracy of minority classes as well as overall recognition accuracy.
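
As a rough illustration of the idea above, the sketch below gives each ensemble member its own randomly oversampled view of the training data and combines the members by majority vote. The base learner, ensemble size, and voting rule are arbitrary assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_oversample(X, y, rng):
    """Duplicate minority-class samples until every class matches the largest one."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        c_idx = np.where(y == c)[0]
        idx.append(rng.choice(c_idx, size=target, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def fit_oversampled_ensemble(X, y, n_members=10, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        Xr, yr = random_oversample(X, y, rng)   # each member sees its own resample
        members.append(DecisionTreeClassifier().fit(Xr, yr))
    return members

def predict_majority(members, X):
    """Majority vote per sample (assumes non-negative integer class labels)."""
    votes = np.stack([m.predict(X) for m in members]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```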

Ensemble Learning for Solving Data Imbalance in Bankruptcy Prediction (기업부실 예측 데이터의 불균형 문제 해결을 위한 앙상블 학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.3
    • /
    • pp.1-15
    • /
    • 2009
  • In a classification problem, data imbalance occurs when the number of instances in one class greatly exceeds the number of instances in the other class. Such data sets often lead to a classifier with a skewed decision boundary that defaults to the majority class, and thus to reduced classification accuracy. This paper proposes Geometric Mean-based Boosting (GM-Boost) to resolve the problem of data imbalance. Since GM-Boost introduces the notion of the geometric mean, it can carry out learning that considers both the majority and minority classes and reinforces learning on misclassified data. An empirical study on bankruptcy prediction for Korean companies shows that GM-Boost achieves higher classification accuracy than previous methods used for imbalanced data, including under-sampling, over-sampling, and AdaBoost, as well as robust learning performance regardless of the degree of data imbalance.

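A minimal sketch of the kind of baseline comparison this abstract mentions: standard AdaBoost on the raw imbalanced data versus the same model on under- and over-sampled data, all scored with the geometric mean. This is a hedged illustration of the evaluation setup only, not an implementation of GM-Boost; the resampling and estimator settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score

def g_mean(y_true, y_pred):
    """Geometric mean of per-class recalls."""
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.sqrt(np.prod(recalls)))

def resample(X, y, mode, rng):
    """Naive random under-/over-sampling to a 1:1 class ratio (illustrative only)."""
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    minority, majority = (idx1, idx0) if len(idx1) < len(idx0) else (idx0, idx1)
    if mode == "under":
        majority = rng.choice(majority, size=len(minority), replace=False)
    elif mode == "over":
        minority = rng.choice(minority, size=len(majority), replace=True)
    idx = np.concatenate([minority, majority])
    return X[idx], y[idx]

def compare(X_train, y_train, X_test, y_test, seed=0):
    rng = np.random.default_rng(seed)
    results = {}
    for mode in ("none", "under", "over"):
        Xr, yr = (X_train, y_train) if mode == "none" else resample(X_train, y_train, mode, rng)
        clf = AdaBoostClassifier(n_estimators=100, random_state=seed).fit(Xr, yr)
        results[mode] = g_mean(y_test, clf.predict(X_test))
    return results   # G-mean per setting, e.g. {'none': ..., 'under': ..., 'over': ...}
```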

Improved Focused Sampling for Class Imbalance Problem (클래스 불균형 문제를 해결하기 위한 개선된 집중 샘플링)

  • Kim, Man-Sun;Yang, Hyung-Jeong;Kim, Soo-Hyung;Cheah, Wooi Ping
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.287-294
    • /
    • 2007
  • Many classification algorithms for real-world data suffer from a class imbalance problem. Various methods have been proposed to solve this problem, such as altering the training balance and designing better sampling strategies. However, previous methods do not adequately reflect the distribution of the input data and its constraints. In this paper, we propose a focused sampling method that is superior to previous methods. To solve the problem, a useful subset must be selected from the full training set. To obtain this subset, the proposed method divides the data into regions according to scores computed from the distribution of a SOM over the input data. The scores, sorted in ascending order, represent the distribution of the input data, which may in turn represent the characteristics of the whole data set. A new training set is obtained by eliminating less useful data located in the region between an upper bound and a lower bound. The proposed method gives better, or at least similar, classification accuracy compared to previous approaches. It also offers several benefits: a reduced class imbalance ratio, smaller training sets, and prevention of over-fitting. The proposed method has been tested with a kNN classifier. Experimental results on the ecoli data set show that it achieves up to 2.27 times higher precision than the other methods.
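
The score-and-bound idea above can be sketched as follows. For brevity, the SOM-based score is replaced by a hypothetical distance-to-class-centroid score, so this is only a simplified stand-in for the paper's method; the quantile bounds are also assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def focused_sample(X, y, lower=0.1, upper=0.9):
    """Keep only samples whose score falls outside the [lower, upper] quantile band.

    Stand-in score: distance to the sample's own class centroid (NOT the paper's
    SOM-based score, which would require fitting a self-organizing map first).
    """
    scores = np.empty(len(X))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        scores[idx] = np.linalg.norm(X[idx] - centroid, axis=1)

    lo, hi = np.quantile(scores, [lower, upper])
    keep = (scores < lo) | (scores > hi)      # drop the band between the two bounds
    return X[keep], y[keep]

# Usage: train a kNN classifier on the reduced training set
# (X_train, y_train assumed to be NumPy arrays)
# Xf, yf = focused_sample(X_train, y_train)
# knn = KNeighborsClassifier(n_neighbors=5).fit(Xf, yf)
```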

Classification of Imbalanced Data Using Multilayer Perceptrons (다층퍼셉트론에 의한 불균형 데이터의 학습 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.141-148
    • /
    • 2009
  • Recently there have been many research efforts focused on imbalanced data classification problems, since they are pervasive but hard to solve. Approaches to imbalanced data problems can be categorized into data-level approaches using re-sampling, algorithm-level approaches using cost functions, and ensembles of base classifiers for performance improvement. As an algorithm-level approach, this paper proposes using multilayer perceptrons with higher-order error functions. The error functions intensify the training of minority class patterns and weaken the training of majority class patterns. Mammography and thyroid data sets are used to verify the superiority of the proposed method over other methods such as mean-squared error, two-phase, and threshold-moving methods.
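
To illustrate the algorithm-level idea, here is a hedged PyTorch sketch of an n-th order error function applied to an MLP's outputs; the exponent value and the usage snippet are illustrative assumptions, not the paper's exact error functions.

```python
import torch

def nth_order_error(outputs, targets, n=4):
    """n-th order error function: mean of |t - o|^n over all outputs.

    Compared with the squared error (n = 2), a larger n lets patterns with large
    output errors (often minority-class patterns) dominate the weight updates.
    The exponent here is an illustrative assumption.
    """
    return ((targets - outputs).abs() ** n).mean()

# Usage sketch with a small MLP (shapes are hypothetical):
# mlp = torch.nn.Sequential(torch.nn.Linear(6, 16), torch.nn.Sigmoid(),
#                           torch.nn.Linear(16, 1), torch.nn.Sigmoid())
# loss = nth_order_error(mlp(x_batch), y_batch.float(), n=4)
# loss.backward()
```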

Processing Method of Unbalanced Data for a Fault Detection System Based Motor Gear Sound (모터 동작음 기반 불량 검출 시스템을 위한 불균형 데이터 처리 방안 연구)

  • Lee, Younghwa;Choi, Geonyoung;Park, Gooman
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1305-1307
    • /
    • 2022
  • Defects in automotive parts can degrade the performance of the whole system and cause human and material losses, so defect detection on the production line is very important. Accordingly, deep-learning-based fault diagnosis systems have been widely studied for accurate and consistent defect detection. In manufacturing settings, however, abnormal samples occur far less frequently than normal samples. This leads to a class imbalance problem in the training data, which affects the performance of the classification model that identifies faults. This study therefore proposes a method for resolving data imbalance in the design of a defect detection system that identifies defective motors from their operating sound. Operating sounds of automotive side-mirror motors were used as the training and test data set, and a comparative analysis of the label-distribution-aware margin (LDAM) loss, which reflects the number of samples per class in the training set when computing the loss, together with the Inception, ResNet, and DenseNet neural network models, demonstrated the potential of this approach for handling imbalanced data.

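Since this abstract centers on the LDAM loss, here is a compact PyTorch sketch of its standard formulation (Cao et al., 2019), where each class receives a margin proportional to n_j^(-1/4). The scaling constant and the per-class counts below are placeholders, and this is not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

class LDAMLoss(torch.nn.Module):
    """Label-distribution-aware margin loss: margin_j proportional to 1 / n_j^(1/4)."""
    def __init__(self, class_counts, max_margin=0.5, scale=30.0):
        super().__init__()
        margins = 1.0 / torch.tensor(class_counts, dtype=torch.float).pow(0.25)
        self.margins = margins * (max_margin / margins.max())  # largest margin = max_margin
        self.scale = scale

    def forward(self, logits, targets):
        # Subtract the per-class margin from the logit of each sample's true class
        margin = self.margins.to(logits.device)[targets]
        one_hot = F.one_hot(targets, num_classes=logits.size(1)).bool()
        adjusted = torch.where(one_hot, logits - margin.unsqueeze(1), logits)
        return F.cross_entropy(self.scale * adjusted, targets)

# Hypothetical per-class sample counts for an imbalanced training set
criterion = LDAMLoss(class_counts=[1800, 120, 80])
```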

A Deep Learning Based Over-Sampling Scheme for Imbalanced Data Classification (불균형 데이터 분류를 위한 딥러닝 기반 오버샘플링 기법)

  • Son, Min Jae;Jung, Seung Won;Hwang, Een Jun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.7
    • /
    • pp.311-316
    • /
    • 2019
  • A classification problem is to predict the class to which an input instance belongs. One of the most popular ways to do this is to train a machine learning model on the given dataset. In this case, the dataset should have a well-balanced class distribution for the best performance. However, when the dataset has an imbalanced class distribution, the classification performance can be very poor. To overcome this problem, we propose an over-sampling scheme that balances the number of data points by using Conditional Generative Adversarial Networks (CGAN). CGAN is a generative model developed from Generative Adversarial Networks (GAN), which can learn data characteristics and generate data similar to real data. Therefore, CGAN can generate data for a class that has only a small number of samples, so that the problem induced by the imbalanced class distribution can be mitigated and classification performance can be improved. Experiments using actually collected data show that the over-sampling technique using CGAN is effective and superior to existing over-sampling techniques.
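
A minimal PyTorch sketch of the CGAN pieces this abstract relies on: a generator and a discriminator both conditioned on a one-hot class label, so that after training the generator can be asked specifically for minority-class samples. Network sizes, feature dimensionality, and the sampling call are illustrative assumptions; the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, N_FEATURES, Z_DIM = 2, 10, 16   # hypothetical tabular setting

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES))
    def forward(self, z, labels):
        onehot = F.one_hot(labels, N_CLASSES).float()
        return self.net(torch.cat([z, onehot], dim=1))   # synthetic feature vector

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_CLASSES, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1))
    def forward(self, x, labels):
        onehot = F.one_hot(labels, N_CLASSES).float()
        return self.net(torch.cat([x, onehot], dim=1))   # real/fake logit

# After adversarial training (omitted here), oversample the minority class (label 1):
G = Generator()
z = torch.randn(500, Z_DIM)
minority_labels = torch.ones(500, dtype=torch.long)
synthetic_minority = G(z, minority_labels)   # 500 synthetic minority-class rows
```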

Application of Random Over Sampling Examples(ROSE) for an Effective Bankruptcy Prediction Model (효과적인 기업부도 예측모형을 위한 ROSE 표본추출기법의 적용)

  • Ahn, Cheolhwi;Ahn, Hyunchul
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.525-535
    • /
    • 2018
  • If the frequency of a particular class is excessively higher than the frequency of other classes in a classification problem, a data imbalance problem occurs, which distorts machine learning. Corporate bankruptcy prediction often suffers from data imbalance problems since the ratio of insolvent companies is generally very low, whereas the ratio of solvent companies is very high. To mitigate these problems, a proper sampling technique must be applied. Until now, oversampling techniques, which adjust the class distribution of a data set by sampling the minority class with replacement, have been popular. However, they carry a risk of overfitting. Against this background, this study applies the ROSE (Random Over Sampling Examples) technique, proposed by Menardi and Torelli in 2014, to effective corporate bankruptcy prediction. The ROSE technique creates new training samples by synthesizing them from the existing training data, so it leads to better prediction accuracy of the classifiers while avoiding the risk of overfitting. Specifically, our study proposes to combine the ROSE method with SVM (support vector machine), which is known as one of the best binary classifiers. We applied the proposed method to a real-world bankruptcy prediction case at a major Korean bank and compared its performance with other sampling techniques. Experimental results showed that ROSE contributed to improving the prediction accuracy of SVM in bankruptcy prediction compared to other techniques, with statistical significance. These results suggest that ROSE can be a good alternative for resolving data imbalance in prediction problems in the social sciences beyond bankruptcy prediction.
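
A hedged sketch of the ROSE idea described above: each synthetic example is drawn from a Gaussian kernel centred on a randomly chosen observation of the class being generated, with a Silverman-type per-feature bandwidth. The bandwidth constant, shrink factor, and SVM settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def rose_sample(X, y, target_class, n_new, shrink=1.0, seed=0):
    """ROSE-style synthetic sampling: draw from a Gaussian kernel around random
    seed observations of `target_class` (a simplified, illustrative version)."""
    rng = np.random.default_rng(seed)
    Xc = X[y == target_class]
    n, d = Xc.shape
    # Silverman-type per-feature bandwidth (one common choice, assumed here)
    h = shrink * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4)) * Xc.std(axis=0)
    seeds = Xc[rng.integers(0, n, size=n_new)]
    return seeds + rng.normal(scale=h, size=(n_new, d))

# Usage sketch: rebalance the training set, then fit an RBF-kernel SVM
# (X_train, y_train assumed to be NumPy arrays with the minority class labelled 1)
# n_needed = (y_train == 0).sum() - (y_train == 1).sum()
# X_syn = rose_sample(X_train, y_train, target_class=1, n_new=n_needed)
# X_bal = np.vstack([X_train, X_syn])
# y_bal = np.concatenate([y_train, np.ones(len(X_syn))])
# clf = SVC(kernel="rbf").fit(X_bal, y_bal)
```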