• Title/Summary/Keyword: Misclassification Error

Search results: 37

Chi-squared Tests for Homogeneity based on Complex Sample Survey Data Subject to Misclassification Error

  • Heo, Sunyeong
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.3
    • /
    • pp.853-864
    • /
    • 2002
  • In the analysis of categorical data subject to misclassification errors, the observed cell proportions are adjusted by misclassification probabilities and the variance estimates are adjusted accordingly. In this case, it is important to determine the extent to which the misclassification probabilities are homogeneous within a population. This paper considers methods to evaluate the power of chi-squared tests for homogeneity with complex survey data subject to misclassification errors. Two cases are considered: adjustment with homogeneous misclassification probabilities, and adjustment with heterogeneous misclassification probabilities. To estimate the misclassification probabilities, a logistic regression method is considered.
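
The adjustment described above can be sketched as follows. This is a minimal illustration, not the paper's estimator: it assumes a known, homogeneous misclassification matrix M with M[i, j] = P(classified as j | true category i), and it omits the complex-survey variance corrections the paper is actually concerned with. All counts are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical observed counts for two subpopulations (rows) over 3 categories.
observed = np.array([[120.0, 60.0, 20.0],
                     [100.0, 70.0, 30.0]])

# Hypothetical misclassification matrix, assumed homogeneous across groups:
# M[i, j] = P(classified as j | true category i).
M = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

# Observed proportions relate to true ones via p_obs = p_true @ M,
# so a moment-type adjusted estimate is p_true_hat = p_obs @ inv(M).
p_obs = observed / observed.sum(axis=1, keepdims=True)
p_adj = p_obs @ np.linalg.inv(M)
p_adj = np.clip(p_adj, 0, None)
p_adj /= p_adj.sum(axis=1, keepdims=True)

# Naive chi-squared test for homogeneity on the adjusted counts; with complex
# sample survey data the variances would also need design-based corrections,
# which this sketch omits.
adjusted_counts = p_adj * observed.sum(axis=1, keepdims=True)
stat, pval, dof, _ = chi2_contingency(adjusted_counts)
print(f"X^2 = {stat:.3f}, df = {dof}, p = {pval:.3f}")
```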

Cost-Sensitive Case Based Reasoning using Genetic Algorithm: Application to Diagnose for Diabetes

  • Park Yoon-Joo;Kim Byung-Chun
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2006.06a
    • /
    • pp.327-335
    • /
    • 2006
  • Case Based Reasoning (CBR) has come to be considered an appropriate technique for diagnosis, prognosis, and prescription in medicine. However, conventional CBR has a limitation in that it cannot incorporate asymmetric misclassification cost. It assumes that the costs of type I and type II errors are the same, so it cannot be modified according to the error cost of each type. This problem is a major disincentive to applying conventional CBR to many real-world cases that have different costs associated with different types of error; medical diagnosis is an important example. In this paper we suggest a new knowledge extraction technique called Cost-Sensitive Case Based Reasoning (CSCBR) that can incorporate unequal misclassification costs. The main idea involves a dynamic adaptation of the optimal classification boundary point and the number of neighbors that minimize the total misclassification cost according to the error costs. Our technique uses a genetic algorithm (GA) to find these two feature vectors of CSCBR. We apply this new method to diabetes datasets and compare the results with those of the cost-sensitive methods C5.0 and CART. The results of this paper show that the proposed technique outperforms the other methods and overcomes the limitation of conventional CBR.
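
A minimal sketch of the cost-sensitive idea behind CSCBR, assuming the case base is queried with a k-nearest-neighbour retrieval: the number of neighbours k and the classification boundary (vote threshold) t are chosen to minimise the total misclassification cost. The paper searches these with a genetic algorithm; the grid search, the synthetic data, and the cost values below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = "diabetic"

COST_FN = 5.0  # assumed cost of a false negative (type II error)
COST_FP = 1.0  # assumed cost of a false positive (type I error)

def knn_scores(X_train, y_train, X_test, k):
    """Fraction of the k nearest training cases that are positive."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return y_train[nn].mean(axis=1)

def total_cost(y_true, y_pred):
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return COST_FN * fn + COST_FP * fp

# Simple holdout split; the boundary threshold t replaces the usual 0.5 vote.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
best = min(
    ((k, t) for k in (3, 5, 7, 9, 11) for t in np.linspace(0.1, 0.9, 17)),
    key=lambda kt: total_cost(
        y_te, (knn_scores(X_tr, y_tr, X_te, kt[0]) >= kt[1]).astype(int)),
)
print("chosen (k, threshold):", best)
```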


Evaluating Predictive Ability of Classification Models with Ordered Multiple Categories

  • Oong-Hyun Sung
    • Communications for Statistical Applications and Methods
    • /
    • v.6 no.2
    • /
    • pp.383-395
    • /
    • 1999
  • This study is concerned with evaluating the predictive ability of classification models with ordered multiple categories. If the categories can be ordered or ranked, the spread of misclassification should be considered when evaluating the performance of the classification models, using a loss rate, since the apparent error rate cannot measure the spread of misclassification. Since the loss rate is known to underestimate the true loss rate, the bootstrap method was used to estimate the true loss rate. Thus this study suggests a method to evaluate the predictive power of classification models using the loss rate and the bootstrap estimate of the true loss rate.
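
A minimal sketch of the distinction the abstract draws between the apparent error rate and a loss rate that accounts for the spread of misclassification over ordered categories. The |true - predicted| loss, the toy labels, and the naive bootstrap below are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 3, 1])
pred = np.array([0, 1, 1, 3, 2, 1, 3, 2, 0, 1])

# Apparent error rate ignores how far off a misclassification is.
apparent_error = np.mean(true != pred)

# Loss rate: average loss where misclassifying by two ordered categories
# costs twice as much as missing by one (|i - j| loss).
loss_rate = np.abs(true - pred).mean()

# Naive bootstrap of the loss rate's sampling distribution (the paper uses
# the bootstrap to correct the optimism of the apparent loss rate).
rng = np.random.default_rng(1)
boot = [np.abs(true[idx] - pred[idx]).mean()
        for idx in (rng.integers(0, len(true), len(true)) for _ in range(2000))]
print(f"apparent error = {apparent_error:.2f}, loss rate = {loss_rate:.2f}, "
      f"bootstrap SE = {np.std(boot):.3f}")
```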


Estimating Prediction Errors in Binary Classification Problem: Cross-Validation versus Bootstrap

  • Kim Ji-Hyun;Cha Eun-Song
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.1
    • /
    • pp.151-165
    • /
    • 2006
  • It is important to estimate the true misclassification rate of a given classifier when an independent set of test data is not available. Cross-validation and the bootstrap are two possible approaches in this case. In the related literature, bootstrap estimators of the true misclassification rate were asserted to perform better for small samples than cross-validation estimators. We compare the two estimators empirically when the classification rule is so adaptive to the training data that its apparent misclassification rate is close to zero. We confirm that bootstrap estimators perform better for small samples because of their small variance, and we find that their bias tends to be significant even for moderate to large samples, in which case cross-validation estimators perform better with less computation.
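
A minimal sketch of the comparison described above, for a 1-NN rule whose apparent (resubstitution) error is exactly zero: a 10-fold cross-validation estimate versus a leave-one-out bootstrap estimate of the misclassification rate. The synthetic data and the particular bootstrap variant are assumptions; the paper's own estimators may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
X = rng.normal(size=(n, 2)) + np.repeat([[0, 0], [1.5, 1.5]], n // 2, axis=0)
y = np.repeat([0, 1], n // 2)

def one_nn_error(X_tr, y_tr, X_te, y_te):
    """Misclassification rate of the 1-nearest-neighbour rule on test cases."""
    d = np.linalg.norm(X_te[:, None] - X_tr[None, :], axis=2)
    return np.mean(y_tr[d.argmin(axis=1)] != y_te)

# 10-fold cross-validation estimate.
folds = np.array_split(rng.permutation(n), 10)
cv_est = np.mean([one_nn_error(np.delete(X, f, 0), np.delete(y, f), X[f], y[f])
                  for f in folds])

# Leave-one-out bootstrap estimate: each case is predicted only from
# resamples that exclude it.
B, errs = 200, []
for _ in range(B):
    idx = rng.integers(0, n, n)
    out = np.setdiff1d(np.arange(n), idx)
    if out.size:
        errs.append(one_nn_error(X[idx], y[idx], X[out], y[out]))
print(f"CV estimate = {cv_est:.3f}, leave-one-out bootstrap = {np.mean(errs):.3f}")
```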

Empirical Bayesian Misclassification Analysis on Categorical Data (범주형 자료에서 경험적 베이지안 오분류 분석)

  • 임한승;홍종선;서문섭
    • The Korean Journal of Applied Statistics
    • /
    • v.14 no.1
    • /
    • pp.39-57
    • /
    • 2001
  • Categorical data sometimes contain misclassification errors. If such data are analyzed, the estimated cell probabilities could be biased and the standard Pearson X² tests may have inflated true type I error rates. On the other hand, if we treat well-classified data as misclassified, we might spend a great deal of cost and time on adjusting for misclassification. It is therefore a necessary and important step to ask whether categorical data are misclassified before analyzing them. In this paper we consider the case in which the data are misclassified in one of the two variables of a two-dimensional contingency table and the marginal sums of the well-classified variable are fixed. We explore how to partition the marginal sums into the cells via the concepts of Bound and Collapse of Sebastiani and Ramoni (1997). The double sampling scheme (Tenenbein, 1970) is used to obtain information about the misclassification. We propose test statistics to address the misclassification problem and examine the behavior of the statistics through simulation studies.
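
A minimal sketch of the double sampling idea (Tenenbein, 1970) referenced in the abstract: a small subsample is classified by both the error-prone and the true device, misclassification probabilities are estimated from it, and the large fallible-only sample is adjusted accordingly. All counts are hypothetical, and the Bound and Collapse partitioning itself is not shown.

```python
import numpy as np

# Subsample cross-classified by the true (rows) and fallible (columns) devices.
sub = np.array([[45.0, 5.0],
                [8.0, 42.0]])

# Estimated misclassification matrix: M[i, j] = P(fallible = j | true = i).
M = sub / sub.sum(axis=1, keepdims=True)

# Large main sample classified by the fallible device only.
fallible_counts = np.array([520.0, 480.0])
p_obs = fallible_counts / fallible_counts.sum()

# Adjusted estimate of the true category proportions, using p_obs = p_true @ M.
p_true_hat = np.linalg.solve(M.T, p_obs)
print("estimated true proportions:", np.round(p_true_hat, 3))
```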


Data-Adaptive ECOC for Multicategory Classification

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.1
    • /
    • pp.25-36
    • /
    • 2008
  • Error Correcting Output Codes (ECOC) can improve generalization performance when applied to multicategory classification problems. In this study we propose a new criterion for selecting the hyperparameters included in the ECOC scheme. Instead of the margins of the data, we propose to use the probability of misclassification error, since this makes the criterion simple. Using this criterion we obtain an upper bound on the leave-one-out error of the OVA (one-vs-all) method. Our experiments on real and synthetic data indicate that the bound leads to good estimates of the parameters.
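
A minimal sketch of OVA decoding viewed as a special case of ECOC: the code matrix is the identity with -1 elsewhere, one binary learner is fit per column, and a point is assigned to the class whose code word is closest to the vector of learner outputs. The linear least-squares learners and the synthetic data are illustrative assumptions; the paper's hyperparameter-selection criterion is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n = 3, 90
X = rng.normal(size=(n, 2)) + np.repeat([[0, 0], [3, 0], [0, 3]], n // K, axis=0)
y = np.repeat(np.arange(K), n // K)

code = 2 * np.eye(K) - 1                       # OVA code matrix, entries in {-1, +1}
Xb = np.hstack([X, np.ones((n, 1))])           # add an intercept column

# One linear binary learner per code-matrix column (plain least squares).
W = np.column_stack([np.linalg.lstsq(Xb, code[y, j], rcond=None)[0]
                     for j in range(K)])

# Decode: pick the class whose code word is nearest to the vector of outputs.
scores = Xb @ W                                # shape (n, K)
pred = np.argmin(np.linalg.norm(scores[:, None, :] - code[None, :, :], axis=2),
                 axis=1)
print("training misclassification rate:", np.mean(pred != y))
```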


Bootstrap confidence intervals for classification error rate in circular models when a block of observations is missing

  • Chung, Hie-Choon;Han, Chien-Pai
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.4
    • /
    • pp.757-764
    • /
    • 2009
  • In discriminant analysis, we consider a special pattern which contains a block of missing observations. We assume that the two populations are equally likely and that the costs of misclassification are equal. In this situation, we consider bootstrap confidence intervals for the error rate in circular models, both when the covariance matrices are equal and when they are not.
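
A minimal sketch of a percentile bootstrap confidence interval for the error rate of a linear discriminant rule under equal priors and equal misclassification costs. The bivariate normal data are synthetic, and the missing-block pattern and circular covariance models of the paper are not modelled here.

```python
import numpy as np

rng = np.random.default_rng(4)
n1 = n2 = 40
X1 = rng.multivariate_normal([0, 0], np.eye(2), n1)
X2 = rng.multivariate_normal([1.5, 1.0], np.eye(2), n2)

def lda_error(X1, X2):
    """Apparent error rate of the linear discriminant rule (equal priors/costs)."""
    m1, m2 = X1.mean(0), X2.mean(0)
    Sp = ((len(X1) - 1) * np.cov(X1.T) + (len(X2) - 1) * np.cov(X2.T)) \
         / (len(X1) + len(X2) - 2)
    w = np.linalg.solve(Sp, m1 - m2)
    c = w @ (m1 + m2) / 2
    err1 = np.mean(X1 @ w < c)                 # population 1 misclassified
    err2 = np.mean(X2 @ w >= c)                # population 2 misclassified
    return 0.5 * (err1 + err2)

# Percentile bootstrap: resample each training sample and recompute the error rate.
boot = [lda_error(X1[rng.integers(0, n1, n1)], X2[rng.integers(0, n2, n2)])
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"apparent error = {lda_error(X1, X2):.3f}, "
      f"95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```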


Bootstrap Confidence Intervals of Classification Error Rate for a Block of Missing Observations

  • Chung, Hie-Choon
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.4
    • /
    • pp.675-686
    • /
    • 2009
  • In this paper, it is assumed that there are two distinct populations which are multivariate normal with equal covariance matrices. We also assume that the two populations are equally likely and that the costs of misclassification are equal. The classification rule depends on whether or not the training samples include missing values. We consider bootstrap confidence intervals for the classification error rate when a block of observations is missing.
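
A minimal sketch of the point that the classification rule depends on whether a block of observations is missing: a full-variable discriminant rule for complete cases and a reduced rule, built from only the observed variables, for cases with the block missing. The data, the missing pattern, and both rules are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
X1 = rng.multivariate_normal([0, 0, 0], np.eye(3), 50)   # population 1
X2 = rng.multivariate_normal([1, 1, 1], np.eye(3), 50)   # population 2

def lda_rule(X1, X2):
    """Return a classifier h(x) -> 1 or 2, assuming equal priors and costs."""
    m1, m2 = X1.mean(0), X2.mean(0)
    Sp = (np.cov(X1.T) + np.cov(X2.T)) / 2    # pooled covariance (equal n)
    w = np.linalg.solve(Sp, m1 - m2)
    c = w @ (m1 + m2) / 2
    return lambda x: 1 if x @ w >= c else 2

full_rule = lda_rule(X1, X2)                      # uses all three variables
reduced_rule = lda_rule(X1[:, :2], X2[:, :2])     # third variable missing as a block

x_complete = np.array([0.2, -0.1, 0.4])
x_missing = np.array([0.9, 1.2])                  # third component unobserved
print(full_rule(x_complete), reduced_rule(x_missing))
```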

Hyperparameter Selection for APC-ECOC

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.4
    • /
    • pp.1219-1231
    • /
    • 2008
  • The main objective of this paper is to develop a leave-one-out (LOO) bound for all pairwise comparison error correcting output codes (APC-ECOC). To avoid using classifiers whose corresponding target values are 0 in APC-ECOC, and to avoid requiring pilot estimates, we develop a bound based on the mean misclassification probability (MMP). It can be used to tune kernel hyperparameters. Our empirical experiments using the kernel mean squared estimate (KMSE) as the binary classifier indicate that the bound leads to good estimates of the kernel hyperparameters.
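
A minimal sketch of the all-pairwise-comparison (APC) code matrix that APC-ECOC is built on: each column compares one pair of classes, with entries +1/-1 for the pair and 0 for classes the column's binary classifier never sees, which is the complication the MMP-based bound above is designed to avoid. The voting decoder and the hypothetical binary outputs are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def apc_code_matrix(K):
    """Code matrix with one column per class pair; rows are class code words."""
    pairs = list(combinations(range(K), 2))
    code = np.zeros((K, len(pairs)), dtype=int)
    for j, (a, b) in enumerate(pairs):
        code[a, j], code[b, j] = 1, -1
    return code, pairs

code, pairs = apc_code_matrix(4)
print(pairs)      # [(0, 1), (0, 2), ...] - one binary task per pair
print(code)       # entries in {-1, 0, +1}; 0 means the class is unused in that task

# Decoding by pairwise voting: given binary outputs f_j(x) in {-1, +1}, class k
# collects a vote from column j whenever sign(f_j) matches code[k, j] != 0.
f = np.array([1, -1, 1, 1, -1, 1])          # hypothetical outputs for one point x
votes = ((code != 0) & (np.sign(code) == np.sign(f))).sum(axis=1)
print("predicted class:", int(votes.argmax()))
```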
