Boosting the Performance of the Predictive Model on the Imbalanced Dataset Using SVM Based Bagging and Out-of-Distribution Detection

Improving the Performance of a Predictive Model on an Imbalanced Manufacturing-Process Dataset Using SVM-Based Bagging and OoD Detection

  • Received : 2021.11.23
  • Accepted : 2022.04.07
  • Published : 2022.11.30

Abstract

Datasets from a manufacturing process have two distinctive characteristics: severe class imbalance and a large number of Out-of-Distribution (OoD) samples. Well-known strategies such as oversampling the minority class and down-sampling the majority class are used to handle class imbalance, and SMOTE has recently become a common choice for this purpose. OoD samples, however, have been studied only with neural networks; OoD detection has rarely been applied to predictive models built on conventional machine learning algorithms such as SVM, Random Forest, and KNN. Conventional machine learning algorithms are known to outperform neural networks on such data, because neural networks are vulnerable to over-fitting and require much larger datasets than conventional algorithms do. We therefore propose a new approach that applies Out-of-Distribution detection based on the SVM algorithm. In addition, a bagging technique is adopted to improve the precision of the model.
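The abstract combines SMOTE-style resampling with an SVM-based bagging ensemble. A minimal sketch of that combination follows, using scikit-learn and imbalanced-learn; the synthetic dataset, hyperparameters, and ensemble size are illustrative assumptions, not the authors' settings.

```python
# Sketch: SMOTE oversampling + bagged SVM ensemble on an imbalanced dataset.
# Requires scikit-learn >= 1.2 (older versions use base_estimator= instead of estimator=).
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for an imbalanced manufacturing dataset (~1% positive class).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# SMOTE rebalances only the training folds; bagging trains several SVMs on
# bootstrap resamples and aggregates their votes to improve precision.
model = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("bagged_svm", BaggingClassifier(
        estimator=SVC(kernel="rbf", C=1.0, gamma="scale"),
        n_estimators=10, max_samples=0.8, random_state=0)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), digits=3))
```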

Datasets generated by manufacturing processes have two major characteristics: severe imbalance of the target class and the continual occurrence of Out-of-Distribution (OoD) samples. Class imbalance can be addressed with SMOTE and various sampling strategies. OoD detection, however, has so far been studied only in the domain of artificial neural networks. Neural networks, to which OoD detection can be applied, do not perform satisfactorily on manufacturing-process datasets, because those datasets are far smaller and noisier than the image and text datasets neural networks typically handle. The overfitting problem of neural networks is also cited as a cause of their degraded performance on manufacturing data. We therefore attempt a combination that has not been tried before: OoD detection based on the SVM algorithm. In addition, a bagging algorithm is incorporated into the modeling to improve the precision of the predictive model.
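The core contribution is attaching OoD detection to an SVM-based predictive model. The exact OoD criterion is not specified in the abstract, so the sketch below only assumes one plausible SVM-based screen: a One-Class SVM fit on the training distribution that rejects far-from-distribution samples before the classifier predicts.

```python
# Sketch: SVM-based OoD screening in front of an SVM classifier (illustrative only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 8))            # in-distribution training data
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_new = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)),    # in-distribution samples
                   rng.normal(6.0, 1.0, size=(5, 8))])   # shifted (OoD) samples

# One-Class SVM models the training distribution; SVC is the predictive model.
ood_detector = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
ood_detector.fit(X_train)
classifier = make_pipeline(StandardScaler(), SVC(gamma="scale")).fit(X_train, y_train)

# +1 = consistent with the training distribution, -1 = flagged as OoD and rejected.
is_in_dist = ood_detector.predict(X_new) == 1
predictions = np.full(len(X_new), -1)                    # -1 marks "rejected as OoD"
predictions[is_in_dist] = classifier.predict(X_new[is_in_dist])
print(predictions)
```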

