
Software Quality Prediction based on Defect Severity

  • Hong, Euy-Seok (School of Information Technology, Sungshin Women's University)
  • Received : 2015.04.04
  • Reviewed : 2015.05.07
  • Published : 2015.05.30

Abstract

Most software fault prediction studies have focused on binary classification models that predict whether an input entity is faulty. However, because not all faults have the same severity, a model that classifies the fault-proneness of an input entity into several severity categories is far more useful. In this paper, we propose severity-based fault prediction models that take traditional size and complexity metrics as input. The models are ternary classifiers, trained with four widely used machine learning algorithms. For empirical evaluation, we use two NASA public data sets and measure performance with accuracy. The results show that the backpropagation neural network model performs best on both data sets, achieving accuracies of about 81% and 88%, respectively.
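The setup described above, a ternary classifier over size and complexity metrics trained with a backpropagation network, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn is available, and the feature names, synthetic data, and severity thresholds are hypothetical stand-ins for the NASA MDP data sets and the paper's actual severity labeling.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
# Synthetic stand-ins for traditional metrics, e.g. LOC, cyclomatic
# complexity, Halstead volume (hypothetical features, not the MDP data).
X = rng.uniform(0, 1, size=(n, 3))
# Toy labeling rule: larger, more complex modules drift toward higher
# severity. Thresholds 0.8 and 1.4 are arbitrary illustration values.
score = X @ np.array([0.5, 1.0, 0.8]) + rng.normal(0, 0.15, n)
y = np.digitize(score, [0.8, 1.4])  # 0 = fault-free, 1 = low, 2 = high severity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# A backpropagation-trained network, the best performer in the paper's
# experiments; one small hidden layer is enough for this sketch.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

# Accuracy is the evaluation measure used in the paper.
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"ternary accuracy: {acc:.2f}")
```

The same three-class framing works with any of the four learning algorithms the paper compares; only the estimator class changes, while the metric inputs, severity labels, and accuracy evaluation stay the same.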

References

  1. IEEE Standard Classification for Software Anomalies, IEEE Std. 1044-2009.
  2. C. Catal, "Software fault prediction: A literature review and current trends," Expert Systems with Applications, Vol.38, No.4, pp.4626-4636, 2011. https://doi.org/10.1016/j.eswa.2010.10.024
  3. R. Malhotra, "A systematic review of machine learning techniques for software fault prediction," Applied Soft Computing, Vol.27, pp.504-518, 2015. https://doi.org/10.1016/j.asoc.2014.11.023
  4. D. Radjenovic, M. Hericko, R. Torkar, and A. Zivkovic, "Software fault prediction metrics: A systematic literature review," Information and Software Technology, Vol.55, pp.1397-1418, 2013. https://doi.org/10.1016/j.infsof.2013.02.009
  5. D. E. Harter, C. F. Kemerer, and S. A. Slaughter, "Does Software Process Improvement Reduce the Severity of Defects? A Longitudinal Field Study," IEEE Trans. Software Eng., Vol.38, No.4, pp. 810-827, 2012. https://doi.org/10.1109/TSE.2011.63
  6. R. Shatnawi and W. Li, "The effectiveness of software metrics in identifying error-prone classes in post-release software evolution process," Journal of Systems and Software, Vol.81, No.11, pp.1868-1882, 2008. https://doi.org/10.1016/j.jss.2007.12.794
  7. Y. Zhou and H. Leung, "Empirical analysis of object-oriented design metrics for predicting high and low severity faults," IEEE Trans. Software Eng., Vol.32, No.10, pp.771-789, 2006. https://doi.org/10.1109/TSE.2006.102
  8. Y. Singh, A. Kaur and R. Malhotra, "Empirical validation of object-oriented metrics for predicting fault proneness models," Software Quality Journal, Vol.18, pp.3-35, 2010. https://doi.org/10.1007/s11219-009-9079-6
  9. T. Menzies and A. Marcus, "Automated Severity Assessment of Software Defect Reports," Proc. of ICSM'2008, pp.346-355.
  10. E. S. Hong, "A Metrics Set for Measuring Software Module Severity," Journal of The Korea Society of Computer and Information, Vol.20, No.1, pp.197-206, 2015. https://doi.org/10.9708/jksci.2015.20.1.197
  11. P. S. Bishnu and V. Bhattacherjee, "Software fault prediction using quad tree-based k-means clustering algorithm," IEEE Trans. Knowledge and Data Eng., Vol.24, No.6, pp.1146-1150, 2012. https://doi.org/10.1109/TKDE.2011.163
  12. E. S. Hong and M. K. Park, "Unsupervised learning model for fault prediction using representative clustering algorithms," KIPS Trans. Software and Data Engineering, Vol.3, No.2, pp.57-64, 2014. https://doi.org/10.3745/KTSDE.2014.3.2.57
  13. S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object Oriented Design," IEEE Trans. Software Eng., Vol.20, No.6, pp.476-493, 1994. https://doi.org/10.1109/32.295895
  14. E. S. Hong, "Ambiguity Analysis of Defectiveness in NASA MDP data sets," Journal of the Korea Society of IT Services, Vol.12, No.2, pp.361-371, 2013. https://doi.org/10.9716/KITS.2013.12.2.361
  15. T. Dietterich, "Approximate statistical tests for comparing supervised classification learning algorithms," Neural Computation, Vol.10, pp.1895-1924, 1998. https://doi.org/10.1162/089976698300017197