http://dx.doi.org/10.5351/KJAS.2013.26.6.977

Standard Criterion of VUS for ROC Surface  

Hong, C.S. (Department of Statistics, Sungkyunkwan University)
Jung, E.S. (Department of Statistics, Sungkyunkwan University)
Jung, D.G. (Department of Statistics, Sungkyunkwan University)
Publication Information
The Korean Journal of Applied Statistics, Vol. 26, No. 6, 2013, pp. 977-985
Abstract
In the real world, many classification problems involve more than two categories. In this work, we consider the ROC surface, a graphical evaluation method for classification models with three categories, and the VUS (volume under the ROC surface), its summary measure. The standard criterion of the AUC for the probability of default under Basel II is extended to the VUS for the ROC surface, and a standardized criterion of the VUS for three-category classification models is proposed. The ranges of the AUC, the Kolmogorov-Smirnov (K-S) statistic, and the mean difference statistic corresponding to the VUS values in each class of the standard criterion are obtained. A standard criterion of the VUS for the ROC surface can then be established by exploring the relationships among these statistics.
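For readers unfamiliar with the notation, the quantities compared above can be sketched as follows; the definitions are the standard ones and are not quoted from the paper. For two ordered classes with classification scores X_1 and X_2, and for three ordered classes with scores X_1, X_2, X_3,

  \mathrm{AUC} = P(X_1 < X_2), \qquad \mathrm{VUS} = P(X_1 < X_2 < X_3),

with the empirical VUS given by the trivariate Mann-Whitney form

  \widehat{\mathrm{VUS}} = \frac{1}{n_1 n_2 n_3} \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3} I\{X_{1i} < X_{2j} < X_{3k}\},

so that a non-informative classifier attains VUS = 1/6, compared with AUC = 1/2 in the two-class case.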
Keywords
AUC; classification; default; FPR; risk; threshold; TPR; validation; VUS;