• Title/Summary/Keyword: Statistics Classification

876 search results

Receiver Operating Characteristic (ROC) Curves Using Neural Network in Classification

  • Lee, Jea-Young; Lee, Yong-Won
    • Journal of the Korean Data and Information Science Society / v.15 no.4 / pp.911-920 / 2004
  • We construct receiver operating characteristic (ROC) curves using neural networks with a logistic function. The models arise from classifying normal (nondiseased) and abnormal (diseased) groups in medical research. A few goodness-of-fit test statistics based on normality curves are discussed, and the performance of the logistic-function neural networks is evaluated.

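
As a generic illustration of the ROC construction these comparisons rest on (plain Python on synthetic scores; the logistic-function neural network itself is not reproduced here), an empirical ROC curve and its area can be computed as:

```python
import random

def roc_points(diseased, nondiseased):
    """Sweep thresholds over the observed scores, highest first, and
    collect (FPR, TPR) pairs for the rule 'positive if score >= t'."""
    thresholds = sorted(set(diseased) | set(nondiseased), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in diseased) / len(diseased)
        fpr = sum(s >= t for s in nondiseased) / len(nondiseased)
        pts.append((fpr, tpr))
    return pts

def auc(pts):
    """Area under the empirical ROC curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

random.seed(0)
diseased = [random.gauss(1.0, 1.0) for _ in range(200)]      # higher scores
nondiseased = [random.gauss(0.0, 1.0) for _ in range(200)]
pts = roc_points(diseased, nondiseased)
print(round(auc(pts), 3))
```

With a one-standard-deviation separation between the groups, the area lands near its theoretical value of about 0.76.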

Cutpoint Selection via Penalization in Credit Scoring (신용평점화에서 벌점화를 이용한 절단값 선택)

  • Jin, Seul-Ki; Kim, Kwang-Rae; Park, Chang-Yi
    • The Korean Journal of Applied Statistics / v.25 no.2 / pp.261-267 / 2012
  • In constructing a credit scorecard, each characteristic variable is divided into a few attributes and weights are then assigned to those attributes, in a process called coarse classification. While partitioning a characteristic variable into attributes, one should determine appropriate cutpoints for the partition. In this paper, we propose a cutpoint selection method via penalization. In addition, we compare the performance of the proposed method with that of the classification spline machine (Koo et al., 2009) on both simulated and real credit data.
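
The paper's penalized criterion is spline-based; as a rough, generic sketch of the idea, cutpoints can instead be chosen by trading bin-wise fit against a penalty proportional to the number of cutpoints (the toy criterion and all names below are illustrative assumptions, not the proposed method):

```python
import itertools, math, random

def binned_nll(x, y, cuts):
    """Negative log-likelihood of a piecewise-constant event-rate model."""
    bins = [[] for _ in range(len(cuts) + 1)]
    for xi, yi in zip(x, y):
        bins[sum(xi > c for c in cuts)].append(yi)
    nll = 0.0
    for b in bins:
        if not b:
            continue
        p = min(max(sum(b) / len(b), 1e-6), 1 - 1e-6)
        nll -= sum(yi * math.log(p) + (1 - yi) * math.log(1 - p) for yi in b)
    return nll

def select_cutpoints(x, y, candidates, lam):
    """Exhaustive penalized search: NLL + lam * (number of cutpoints)."""
    best, best_score = (), float("inf")
    for k in range(len(candidates) + 1):
        for cuts in itertools.combinations(candidates, k):
            score = binned_nll(x, y, cuts) + lam * k
            if score < best_score:
                best, best_score = cuts, score
    return best

random.seed(1)
x = [random.uniform(0, 10) for _ in range(500)]
y = [int(random.random() < (0.1 if xi < 5 else 0.6)) for xi in x]  # rate jump at 5
sel = select_cutpoints(x, y, candidates=[2, 4, 5, 6, 8], lam=5.0)
print(sel)
```

The penalty discourages spurious cutpoints: the large fit improvement at the true jump survives it, while candidate cuts in flat regions do not.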

Tree size determination for classification ensemble

  • Choi, Sung Hoon; Kim, Hyunjoong
    • Journal of the Korean Data and Information Science Society / v.27 no.1 / pp.255-264 / 2016
  • Classification is predictive modeling for a categorical target variable. Classification ensemble methods, which predict with better accuracy by combining multiple classifiers, have become a powerful machine learning and data mining paradigm; well-known methodologies include boosting, bagging, and random forest. In this article, we assume that decision trees are used as the classifiers in the ensemble and hypothesize that tree size affects classification accuracy. To study how tree size influences accuracy, we performed experiments on twenty-eight data sets and compared the performance of the ensemble algorithms bagging, double-bagging, boosting, and random forest at different tree sizes.
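
The effect of tree size on ensemble accuracy can be illustrated with a toy bagging ensemble of depth-limited trees (a minimal sketch on synthetic XOR-style data, not the authors' experimental setup): depth-1 stumps cannot represent the interaction, deeper trees can.

```python
import random

def grow(data, depth):
    """Tiny CART-style tree: split on the (feature, threshold) pair that
    minimizes misclassification; recurse until the depth budget runs out."""
    ys = [y for _, y in data]
    majority = max(set(ys), key=ys.count)
    if depth == 0 or len(set(ys)) == 1:
        return majority
    best = None
    for j in range(len(data[0][0])):
        for t in sorted({x[j] for x, _ in data}):
            left = [y for x, y in data if x[j] <= t]
            right = [y for x, y in data if x[j] > t]
            if not left or not right:
                continue
            err = (min(left.count(0), left.count(1))
                   + min(right.count(0), right.count(1)))
            if best is None or err < best[0]:
                best = (err, j, t)
    if best is None:
        return majority
    _, j, t = best
    return (j, t,
            grow([d for d in data if d[0][j] <= t], depth - 1),
            grow([d for d in data if d[0][j] > t], depth - 1))

def predict(tree, x):
    while isinstance(tree, tuple):
        j, t, lo, hi = tree
        tree = lo if x[j] <= t else hi
    return tree

def bagging(train, n_trees, depth):
    """Majority vote over trees grown on bootstrap resamples."""
    trees = [grow([random.choice(train) for _ in train], depth)
             for _ in range(n_trees)]
    return lambda x: round(sum(predict(t, x) for t in trees) / n_trees)

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(120)]
data = [((a, b), int((a > 0.5) != (b > 0.5))) for a, b in pts]  # XOR pattern
train, test = data[:80], data[80:]
results = {}
for depth in (1, 3):
    clf = bagging(train, n_trees=15, depth=depth)
    results[depth] = sum(clf(x) == y for x, y in test) / len(test)
    print(depth, round(results[depth], 2))
```

On this interaction-driven target, the depth-3 ensemble clearly beats the stump ensemble, mirroring the paper's point that tree size matters for ensemble accuracy.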

Naive Bayes classifiers boosted by sufficient dimension reduction: applications to top-k classification

  • Yang, Su Hyeong; Shin, Seung Jun; Sung, Wooseok; Lee, Choon Won
    • Communications for Statistical Applications and Methods / v.29 no.5 / pp.603-614 / 2022
  • The naive Bayes classifier is one of the most straightforward classification tools and directly estimates the class probability. However, because it relies on the independence assumption for the predictors, which is rarely satisfied in real-world problems, its application is limited in practice. In this article, we propose employing sufficient dimension reduction (SDR) to substantially improve the performance of the naive Bayes classifier, which often deteriorates when the number of predictors is not restrictively small. This is not surprising, as SDR reduces the predictor dimension without sacrificing classification information, and predictors in the reduced space are constructed to be uncorrelated; SDR therefore leads the naive Bayes classifier to no longer be naive. We applied the proposed classifier after SDR to build a recommendation system for eyewear frames based on customers' face shapes, demonstrating its utility in the top-k classification problem.
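
A minimal sketch of the idea, with the two-class SIR/Fisher direction standing in for a full SDR estimator (an illustrative assumption, not the authors' method): project strongly correlated predictors onto one direction, then apply Gaussian naive Bayes in the reduced space, where the independence assumption is harmless.

```python
import math, random

def sir_direction(X, y):
    """Two-class SDR stand-in: pooled-covariance-whitened mean difference (p = 2)."""
    groups = [[x for x, yi in zip(X, y) if yi == c] for c in (0, 1)]
    mu = [tuple(sum(col) / len(g) for col in zip(*g)) for g in groups]
    cov = [[0.0, 0.0], [0.0, 0.0]]
    for g, m in zip(groups, mu):
        for x in g:
            for i in range(2):
                for j in range(2):
                    cov[i][j] += (x[i] - m[i]) * (x[j] - m[j]) / len(X)
    d = (mu[1][0] - mu[0][0], mu[1][1] - mu[0][1])
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return ((cov[1][1] * d[0] - cov[0][1] * d[1]) / det,
            (-cov[1][0] * d[0] + cov[0][0] * d[1]) / det)

def gnb_fit(xs, ys):
    """Gaussian naive Bayes on a single (projected) predictor."""
    out = {}
    for c in set(ys):
        v = [x for x, yy in zip(xs, ys) if yy == c]
        m = sum(v) / len(v)
        s2 = sum((xi - m) ** 2 for xi in v) / len(v)
        out[c] = (m, s2, len(v) / len(xs))
    return out

def gnb_predict(model, x):
    return max(model, key=lambda c: math.log(model[c][2])
               - 0.5 * math.log(model[c][1])
               - (x - model[c][0]) ** 2 / (2 * model[c][1]))

random.seed(3)
X, y = [], []
for i in range(400):
    c = i % 2
    z = random.gauss(float(c), 1.0)                  # shared latent signal
    X.append((z + random.gauss(0, 0.3), -z + random.gauss(0, 0.3)))
    y.append(c)
beta = sir_direction(X, y)
proj = [beta[0] * a + beta[1] * b for a, b in X]
model = gnb_fit(proj, y)
acc = sum(gnb_predict(model, p) == c for p, c in zip(proj, y)) / len(y)
print(round(acc, 2))
```

The two raw predictors are nearly perfectly (negatively) correlated, so naive Bayes on them would be misspecified; the one-dimensional projection removes that dependence before the naive model is fit.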

Comparison of Variable Importance Measures in Tree-based Classification (나무구조의 분류분석에서 변수 중요도에 대한 고찰)

  • Kim, Na-Young; Lee, Eun-Kyung
    • The Korean Journal of Applied Statistics / v.27 no.5 / pp.717-729 / 2014
  • The projection pursuit classification tree uses, at each node, the 1-dimensional projection that best separates the classes. The projection coefficients contain information distinguishing the two groups of classes from each other and can be used to calculate an importance measure for each variable in the classification. This paper reviews variable importance measures, which are attracting increasing interest as data sizes grow. We compared the performance of the projection pursuit classification tree with those of the classification and regression tree (CART) and random forest. The projection pursuit classification tree is found to produce better performance in most cases, particularly with highly correlated variables, and its importance measure performs slightly better than that of random forest.
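
Permutation importance, one common importance measure in this family, can be sketched generically (a 1-nearest-neighbour classifier on synthetic data stands in for the tree methods compared in the paper): shuffle one feature and record how much accuracy drops.

```python
import random

def nn_predict(train, x):
    """1-nearest-neighbour prediction by squared Euclidean distance."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def accuracy(train, test):
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

def permutation_importance(train, test, j, rng):
    """Drop in accuracy after shuffling feature j across the test set."""
    base = accuracy(train, test)
    col = [x[j] for x, _ in test]
    rng.shuffle(col)
    permuted = [(tuple(c if k == j else v for k, v in enumerate(x)), y)
                for (x, y), c in zip(test, col)]
    return base - accuracy(train, permuted)

rng = random.Random(4)
def sample(n):
    """Feature 0 carries the class signal; feature 1 is pure noise."""
    out = []
    for _ in range(n):
        c = rng.randrange(2)
        out.append(((rng.gauss(2.0 * c, 1.0), rng.gauss(0.0, 1.0)), c))
    return out

train, test = sample(150), sample(100)
imp = [permutation_importance(train, test, j, rng) for j in (0, 1)]
print([round(v, 2) for v in imp])
```

Shuffling the informative feature costs substantial accuracy; shuffling the noise feature costs essentially nothing, which is exactly the signal an importance measure should recover.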

VUS and HUM Represented with Mann-Whitney Statistic

  • Hong, Chong Sun; Cho, Min Ho
    • Communications for Statistical Applications and Methods / v.22 no.3 / pp.223-232 / 2015
  • The area under the ROC curve (AUC), the volume under the ROC surface (VUS), and the hypervolume under the ROC manifold (HUM) are defined and interpreted as probabilities that measure the discriminant power of classification models. AUC, VUS, and HUM are expressed with summation and integration notation for discrete and continuous random variables, respectively. The AUC for two discrete random samples can be represented as the nonparametric Mann-Whitney statistic. In this work, we define conditional Mann-Whitney statistics to compare more than two discrete random samples and propose that VUS and HUM be represented as functions of these conditional Mann-Whitney statistics. Three and four discrete random samples with some tied values are generated, and values of VUS and HUM are obtained using the proposed statistics. These values are identical to those obtained by definition; therefore, both VUS and HUM can be represented with the conditional Mann-Whitney statistics proposed in this paper.
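
The two-sample identity the paper builds on, AUC as the normalized Mann-Whitney statistic with ties counted at half weight, is short to verify numerically (the conditional statistics for three or more samples are not reproduced here):

```python
import random

def auc_mann_whitney(pos, neg):
    """AUC = P(X > Y) + 0.5 * P(X = Y) over all (positive, negative) pairs:
    the Mann-Whitney statistic divided by the number of pairs."""
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in pos for y in neg)
    return u / (len(pos) * len(neg))

print(auc_mann_whitney([1, 2], [1, 0]))   # the tied (1, 1) pair counts 1/2 -> 0.875

random.seed(5)
pos = [random.gauss(1, 1) for _ in range(150)]
neg = [random.gauss(0, 1) for _ in range(150)]
print(round(auc_mann_whitney(pos, neg), 3))
```

The same pairwise-comparison value is what the trapezoidal area under the empirical ROC curve computes, which is why the two representations agree for discrete samples.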

Alternative accuracy for multiple ROC analysis

  • Hong, Chong Sun; Wu, Zhi Qiang
    • Journal of the Korean Data and Information Science Society / v.25 no.6 / pp.1521-1530 / 2014
  • ROC analysis is considered for multiple-class diagnosis. Many criteria exist for finding optimal thresholds and measuring the accuracy of diagnostic tests in k-dimensional ROC analysis. In this paper, we propose a diagnostic accuracy measure called the correct classification simple rate, which is defined as the summation of the true rates for each classification distribution and expressed as a function of the summation of sequential true rates for two consecutive distributions. This measure does not weight accuracy across categories by category prevalence and is comparable across populations for multiple-class diagnosis. It is found that this accuracy measure not only has a relationship with the Kolmogorov-Smirnov statistic but can also be represented as a linear function of some optimal threshold criteria. With these facts, the suggested measure can be applied to tests comparing multiple distributions.
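
The stated connection with the Kolmogorov-Smirnov statistic can be illustrated in the two-class case (the paper's measure generalizes this to k distributions): for the rule "classify as positive when the score exceeds t", the best achievable TNR(t) + TPR(t) - 1 over thresholds equals the maximal gap between the two empirical CDFs.

```python
import random

def true_rate_gap(neg, pos):
    """F_neg(t) - F_pos(t) on the pooled score grid; this equals
    TNR(t) + TPR(t) - 1 for the rule 'positive when score > t'."""
    grid = sorted(set(neg) | set(pos))
    ecdf = lambda s, t: sum(v <= t for v in s) / len(s)
    return [(t, ecdf(neg, t) - ecdf(pos, t)) for t in grid]

random.seed(6)
neg = [random.gauss(0, 1) for _ in range(300)]
pos = [random.gauss(1.2, 1) for _ in range(300)]
gaps = true_rate_gap(neg, pos)
ks = max(abs(g) for _, g in gaps)                    # Kolmogorov-Smirnov statistic
best_t, best_gap = max(gaps, key=lambda tg: tg[1])   # Youden-type optimal threshold
print(round(best_gap, 3), round(ks, 3))
```

When the positive scores stochastically dominate the negatives, the signed maximum and the KS statistic coincide, and the maximizing threshold is the familiar Youden-index optimum.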

Prediction of extreme PM2.5 concentrations via extreme quantile regression

  • Lee, SangHyuk; Park, Seoncheol; Lim, Yaeji
    • Communications for Statistical Applications and Methods / v.29 no.3 / pp.319-331 / 2022
  • In this paper, we develop a new statistical model to forecast the PM2.5 level in Seoul, South Korea. The proposed model is based on the extreme quantile regression model with a lasso penalty. Various meteorological and air pollution variables are considered as predictors in the regression model, and the lasso quantile regression performs variable selection and alleviates the multicollinearity problem. The final prediction model is obtained by combining various extreme lasso quantile regression estimators, and we construct a binary classifier based on the model. Prediction performance is evaluated using standard performance measures for binary classification tests. We observe that the proposed method works better than the other classification methods considered and predicts 'very bad' PM2.5 levels well.
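
Lasso-penalized quantile regression at an extreme level can be sketched with plain subgradient descent on the check (pinball) loss (a toy stand-in with illustrative parameters; the paper combines several extreme lasso quantile estimators fitted with proper solvers):

```python
import random

def lasso_quantile_fit(X, y, tau, lam, iters=1000, lr=0.05):
    """Subgradient descent on sum_i rho_tau(y_i - b0 - x_i'b)/n + lam*||b||_1,
    where rho_tau is the check loss of quantile regression."""
    n, p = len(y), len(X[0])
    b0, b = 0.0, [0.0] * p
    for it in range(iters):
        g0, g = 0.0, [0.0] * p
        for xi, yi in zip(X, y):
            r = yi - b0 - sum(bj * xj for bj, xj in zip(b, xi))
            d = -tau if r > 0 else 1 - tau       # d rho_tau / d prediction
            g0 += d / n
            for j in range(p):
                g[j] += d * xi[j] / n
        step = lr / (1 + 0.01 * it)              # decaying step size
        b0 -= step * g0
        for j in range(p):
            sign = (b[j] > 0) - (b[j] < 0)       # L1 subgradient
            b[j] -= step * (g[j] + lam * sign)
    return b0, b

random.seed(7)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
y = [2.0 * x0 + random.gauss(0, 1) for x0, _ in X]   # second predictor is noise
b0, b = lasso_quantile_fit(X, y, tau=0.95, lam=0.05)
print(round(b0, 2), round(b[0], 2), round(b[1], 2))
```

At tau = 0.95 the fitted intercept tracks the upper tail of the error distribution, the informative coefficient stays near its true value, and the L1 penalty shrinks the irrelevant one toward zero, which is the variable-selection behaviour the abstract describes.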

Motion classification using distributional features of 3D skeleton data

  • Woohyun Kim; Daeun Kim; Kyoung Shin Park; Sungim Lee
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.551-560 / 2023
  • Recently, there has been significant research into the recognition of human activities using three-dimensional sequential skeleton data captured by the Kinect depth sensor, much of it employing deep learning models. This study introduces a novel feature selection method for such data and analyzes it using machine learning models. Due to the high-dimensional nature of the original Kinect data, effective feature extraction methods are required to address the classification challenge. We propose using the first four moments as predictors to represent the distribution of each joint sequence and evaluate their effectiveness on two datasets: the exergame dataset, consisting of three activities, and the MSR daily activity dataset, composed of ten activities. The results show that the accuracy of our approach outperforms existing methods on average across different classifiers.
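
The four distributional features are direct to compute; a minimal sketch on synthetic stand-ins for a single joint-coordinate sequence (not the Kinect data) shows how they compress a whole trajectory into four numbers:

```python
import math, random

def four_moments(seq):
    """First four distributional features of one joint-coordinate sequence:
    mean, variance, skewness, and (non-excess) kurtosis."""
    n = len(seq)
    m = sum(seq) / n
    var = sum((v - m) ** 2 for v in seq) / n
    sd = math.sqrt(var)
    skew = sum((v - m) ** 3 for v in seq) / (n * sd ** 3)
    kurt = sum((v - m) ** 4 for v in seq) / (n * var ** 2)
    return (m, var, skew, kurt)

random.seed(8)
# toy stand-ins for one joint coordinate: a small calm motion vs a large jittery one
calm = [0.3 * math.sin(t / 20) + random.gauss(0, 0.02) for t in range(200)]
active = [math.sin(t / 2) + random.gauss(0, 0.2) for t in range(200)]
print([round(v, 2) for v in four_moments(calm)])
print([round(v, 2) for v in four_moments(active)])
```

Per-joint moment vectors like these, concatenated across joints and coordinates, form the fixed-length predictor matrix that any standard classifier can consume, regardless of the original sequence length.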

Genetic classification of various familial relationships using the stacking ensemble machine learning approaches

  • Su Jin Jeong; Hyo-Jung Lee; Soong Deok Lee; Ji Eun Park; Jae Won Lee
    • Communications for Statistical Applications and Methods / v.31 no.3 / pp.279-289 / 2024
  • Familial searching is a useful technique in forensic investigation. Using genetic information, it is possible to identify individuals, determine familial relationships, and obtain racial/ethnic information. The total number of shared alleles (TNSA) and likelihood ratio (LR) methods have traditionally been used, and novel data-mining classification methods have recently been applied as well. However, it is difficult to apply these methods to identify familial relationships beyond the third degree (e.g., uncle-nephew and first cousins). We therefore propose applying a stacking ensemble machine learning algorithm to improve the accuracy of familial relationship identification. In real data analysis, we obtain superior relationship identification results when applying meta-classifiers with a stacking algorithm rather than traditional TNSA or LR methods and data-mining techniques.
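
Stacking itself can be sketched generically (threshold-stump base learners and a table-lookup meta-classifier on synthetic data; the paper's base and meta classifiers for genetic features are not reproduced here): base learners are fit on training folds, their out-of-fold predictions become the meta-classifier's inputs.

```python
import random

def stump_fit(data, j):
    """Base learner: a one-feature threshold rule with learned polarity."""
    best = (-1.0, 0.0, False)
    for t in sorted({x[j] for x, _ in data}):
        acc = sum((x[j] > t) == bool(y) for x, y in data) / len(data)
        for flip, a in ((False, acc), (True, 1 - acc)):
            if a > best[0]:
                best = (a, t, flip)
    _, t, flip = best
    return lambda x, j=j, t=t, flip=flip: int((x[j] > t) != flip)

def stack_fit(data, k=5):
    """Stacking: out-of-fold base predictions feed a table-lookup meta-classifier."""
    folds = [data[i::k] for i in range(k)]
    meta_rows = []
    for i in range(k):
        train = [d for f in range(k) if f != i for d in folds[f]]
        base = [stump_fit(train, j) for j in (0, 1)]
        meta_rows += [((base[0](x), base[1](x)), y) for x, y in folds[i]]
    table = {}
    for key, label in meta_rows:
        table.setdefault(key, []).append(label)
    table = {key: round(sum(v) / len(v)) for key, v in table.items()}
    base = [stump_fit(data, j) for j in (0, 1)]        # refit bases on all data
    return lambda x: table.get((base[0](x), base[1](x)), 0)

random.seed(9)
def draw():
    """Class 1 only when both features are high; no single stump captures this."""
    if random.random() < 0.5:
        return ((random.uniform(0.5, 1), random.uniform(0.5, 1)), 1)
    while True:
        x = (random.random(), random.random())
        if not (x[0] > 0.5 and x[1] > 0.5):
            return (x, 0)

data = [draw() for _ in range(300)]
train, test = data[:200], data[200:]
clf = stack_fit(train)
acc = sum(clf(x) == y for x, y in test) / len(test)
print(round(acc, 2))
```

Each stump alone tops out well below the stacked classifier here, because only the meta-level sees the conjunction of both base predictions; using out-of-fold predictions to train the meta-level is what keeps the stack from overfitting to the base learners' training-set behaviour.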